OpenAI introduces Sora, its text-to-video AI model

OpenAI is launching a new video-generation model, and it’s called Sora. The AI company says Sora “can create realistic and imaginative scenes from text instructions.” The text-to-video model allows users to create photorealistic videos up to a minute long — all based on prompts they’ve written.

Sora is capable of creating “complex scenes with multiple characters, specific types of motion, and accurate details of the subject and background,” according to OpenAI’s introductory blog post. The company also notes that the model can understand how objects “exist in the physical world,” as well as “accurately interpret props and generate compelling characters that express vibrant emotions.”

The model can also generate a video from a still image, as well as fill in missing frames in an existing video or extend it. The Sora demos included in OpenAI’s blog post range from an aerial scene of California during the gold rush to a video that looks as if it were shot from the inside of a Tokyo train. Many have telltale signs of AI, like a suspiciously moving floor in a video of a museum, and OpenAI acknowledges that the model “may struggle with accurately simulating the physics of a complex scene,” but the results are overall pretty impressive.

A couple of years ago, text-to-image generators like Midjourney were at the forefront of models’ ability to turn words into images. But recently, AI-generated video has been improving at a remarkable pace: companies like Runway and Pika have shown impressive text-to-video models of their own, and Google’s Lumiere is poised to be one of OpenAI’s primary competitors in this space. Like Sora, Lumiere gives users text-to-video tools and also lets them create videos from a still image.

Sora is currently only available to “red teamers” who are assessing the model for potential harms and risks. OpenAI is also offering access to some visual artists, designers, and filmmakers to get feedback. It notes that the existing model might not accurately simulate the physics of a complex scene and may not properly interpret certain instances of cause and effect.
