OpenAI to make Sora available to the public in 2024

Last month, OpenAI introduced its text-to-video tool Sora, which can produce realistic 1080p videos. It is currently available only to a handful of filmmakers and creators, who are testing it to uncover vulnerabilities so they can be fixed before a public release. In an interview with the Wall Street Journal, OpenAI CTO Mira Murati said the company plans to launch Sora to the public in 2024, possibly within “a few months.”

Unlike the AI-generated video of Will Smith eating spaghetti from just a year ago, Sora produces “hyper-realistic” footage (aside from occasional strange hands and fingers), and OpenAI’s CTO says the tool won’t be made publicly available until the company is sure it is safe. Videos created with Sora will also be watermarked, as is already the case with many text-to-image tools.

When asked how the tool might affect the work of creators in the future, Murati said the goal is for the text-to-video model to serve as a tool that helps creators in their work, not one that replaces them.

“I see it as a tool to expand creativity, and we want people in the film industry, creators everywhere, to be part of informing how we develop it further and also how we implement it. And also, you know, the economics of using this model, when people provide data and so on.”

Speaking of data, when asked what kind of data was used to train Sora, Murati declined to elaborate, saying only that it was “publicly available and licensed data.” That could include videos from YouTube, Facebook, Instagram, and similar platforms.

Notably, Sora does not generate audio for its clips, though OpenAI reportedly plans to add audio in the future, possibly not in Sora itself but in an improved successor under a different name. As with DALL-E, the model will be a paid service.

