New AI Video Generator, Gen-3 Alpha from Runway, Competes with OpenAI's Sora

Runway introduces Gen-3 Alpha video-generating AI, challenging OpenAI's Sora

Runway has introduced the new Gen-3 Alpha model that will power its various services, capable of generating high-fidelity videos from prompts.

An example of a video generated by Gen-3 Alpha. (Image Credit: Runway).

New Delhi: Runway was among the first movers in the space of generative AI models for producing videos, but has been eclipsed by OpenAI's Sora and Kuaishou's Kling AI models, among others. Now, Runway intends to return as a contender with the release of the new and improved Gen-3 Alpha model, capable of producing photorealistic videos up to 10 seconds in length. By comparison, Sora can generate videos up to 60 seconds long, while Kling can generate videos that run for 120 seconds.

Runway has indicated that Gen-3 Alpha will be only the first in a series of models purpose-trained on new infrastructure set up specifically for training large-scale multimodal models. According to Runway, Gen-3 Alpha is a significant improvement over Gen-2 in fidelity, consistency and motion, and is a step towards realising General World Models, which are computational models capable of simulating, understanding, and predicting a wide variety of phenomena.

Gen-3 Alpha has been trained jointly on videos and images, and will power Runway's Text to Video, Image to Video and Text to Image tools, with existing control modes such as Motion Brush, Advanced Camera Controls and Director Mode, as well as brand-new tools that will provide more granular control over the structure, style and motion of generated videos.






Gen-3 Alpha can create photorealistic humans. (Image Credit: Runway).

Photorealistic Humans

Runway has claimed that Gen-3 Alpha excels at generating expressive human characters, and can produce a wide range of realistic actions, gestures and emotions. These capabilities are squarely aimed at creators, with the model able to interpret a wide range of styles and cinematic terminology. The model also allows for fine-grained temporal control, letting users prompt precisely for transitions and key-framed elements within a scene.

Tech developed in partnership with industry

Runway has indicated that it has collaborated closely with leading entertainment and media organisations to create custom versions of Gen-3. These custom versions allow for more stylistically controlled and consistent characters, with a focus on specific artistic and narrative requirements.