
Runway’s Gen-3 Video-Generating AI Improves Controls


The quality of AI-generated video is rising fast in the race among vendors.

Runway, a company building generative AI tools for film and image creators, released Gen-3 Alpha on Monday. The new model generates video clips from text descriptions and still images. Runway says it offers “major” improvements in generation speed and fidelity over its previous flagship video model, Gen-2, along with fine-grained controls over a video’s structure, style, and motion.

Gen-3 will roll out to Runway subscribers, including enterprise customers and members of the company’s creative partners program, in the coming days.

“Gen-3 Alpha excels at generating expressive human characters with a wide range of actions, gestures and emotions,” Runway claims on its blog. “It was designed to interpret a wide range of styles and cinematic terminology [and enable] imaginative transitions and precise scene key-framing.”

Perhaps Gen-3 Alpha’s biggest limitation is that its footage maxes out at 10 seconds. But Runway co-founder Anastasis Germanidis says Gen-3 is merely the first, and smallest, of several next-generation video models trained on upgraded infrastructure.

“The model can struggle with complex character and object interactions, and generations don’t always follow the laws of physics precisely,” Germanidis told TechCrunch this morning. “This initial rollout will support 5- and 10-second high-resolution generations, with noticeably faster generation times than Gen-2. A 5-second clip takes 45 seconds to generate, and a 10-second clip takes 90 seconds.”

Gen-3 Alpha, like all video-generating models, was trained on a vast number of videos and images so that it could “learn” their patterns and generate new clips. Where did the training data come from? Runway wouldn’t say. Few generative AI vendors volunteer such information, partly because they see training data as a competitive advantage and so keep it confidential.

“Our in-house research team oversees all of our training and uses curated, internal data sets to train our models,” Germanidis said. He left it at that.

Training data details are also a potential source of IP lawsuits if a vendor trained on public data, including copyrighted data scraped from the web, which is another disincentive to reveal much. In several cases winding through the courts, plaintiffs are challenging vendors’ fair use defenses, arguing that generative AI tools replicate artists’ styles without permission and let users generate works resembling the originals without compensating the artists.

Runway says it consulted artists while developing the model, addressing copyright concerns to some degree. (Which artists? Unclear.) That echoes what Germanidis told me in a fireside at TechCrunch Disrupt in 2023:

“We’re working closely with artists to figure out what the best approaches are to address this,” he said. “We are exploring data partnerships to expand and develop new models.”

Runway also plans to release Gen-3 with a new set of safeguards, including a moderation system to block attempts to generate videos from copyrighted images and content that violates its terms of service. Also in the works is a provenance system, compatible with the C2PA standard backed by Microsoft, Adobe, OpenAI, and others, to identify videos as coming from Gen-3.

“Our new and improved in-house visual and text moderation system employs automatic oversight to filter out inappropriate or harmful content,” Germanidis said. “C2PA authentication verifies the provenance and authenticity of the media created with all Gen-3 models. As model capabilities and the ability to generate high-fidelity content improve, we will continue to invest significantly on our alignment and safety efforts.”

Runway has also been working with “leading entertainment and media organizations” to create custom versions of Gen-3 that allow for more “stylistically controlled” and consistent characters, targeting “specific artistic and narrative requirements.” The company says: “This means that the characters, backgrounds, and elements generated can maintain a coherent appearance and behavior across various scenes.”

Getting video-generating models to produce consistent output that matches a creator’s artistic intent remains a major unsolved challenge. As my colleague Devin Coldewey recently noted, generative models struggle even with simple consistency problems in filmmaking, like keeping a character’s outfit the same color, because each shot is generated independently of the others. Sometimes not even workarounds do the trick, leaving editors to fix shots by hand.

Runway has raised $236.5 million from investors including Google (which supplies it with cloud compute credits), Nvidia, Amplify Partners, Felicis, and Coatue. The company has courted the creative industry closely as its investments in generative AI have grown. It operates Runway Studios, an entertainment division that produces films for enterprise clients, and sponsors the AI Film Festival, one of the first events dedicated to films made wholly or in part with AI.

But competition is rising.

Just last week, generative AI startup Luma launched Dream Machine, a video generator that went viral for animating memes. A few months ago, Adobe announced it is developing a video-generating model of its own, trained on content from its Adobe Stock library.

Then there is OpenAI’s Sora, which remains tightly gated but has been seeded with marketing agencies and indie and Hollywood film directors. (OpenAI CTO Mira Murati attended the 2024 Cannes Film Festival.) This year’s Tribeca Festival, which also partners with Runway to curate AI-made films, showcased short films created with Sora by directors given early access.

Google, too, has put its video-generating model Veo in the hands of select creators, including Donald Glover (AKA Childish Gambino) and his creative agency Gilga, as it works to bring Veo to products like YouTube Shorts.

Partnerships aside, generative AI video tools threaten to upend the film and TV industry as we know it.

Tyler Perry reportedly paused an $800 million expansion of his production studio after seeing what Sora could do. Joe Russo, director of “Avengers: Endgame,” has predicted that within a year, AI will be able to create a full-fledged movie.

A 2024 study commissioned by the Animation Guild, a union representing Hollywood animators and cartoonists, found that 75% of film production companies that have adopted AI have cut, consolidated, or eliminated jobs after doing so. The study also estimates that by 2026, generative AI will disrupt more than 100,000 U.S. entertainment jobs.

It will take seriously strong labor protections to ensure that video-generating tools don’t follow in the footsteps of other generative AI tech and sharply reduce the demand for creative work.