
Viggle Makes Controllable AI Characters For Memes And Ideas

Category: AI

Even if you’re not familiar with Viggle AI, you’ve probably seen the popular memes it powered. Countless clips of rapper Lil Yachty bouncing onstage at a summer music festival have been remixed with the Canadian AI startup’s tools. In one video, Lil Yachty is replaced by Joaquin Phoenix’s Joker; in another, Jesus appears to be hyping up the crowd. Users remixed the clip endlessly, and an AI startup was fueling the memes. Viggle’s CEO, meanwhile, says YouTube videos power its AI models.

According to the company’s press release, Viggle trained JST-1, a 3D-video foundation model, to have a “genuine understanding of physics.” The key distinction between Viggle and other AI video models, according to CEO Hang Chu, is that Viggle lets users specify the motions they want their characters to make. Chu says that while other AI video models frequently produce irrational character motions that defy physics, Viggle’s models do not.

In an interview, Chu said, “We are essentially building a new type of graphics engine, but purely with neural networks. The model itself is quite different from existing video generators, which are mainly pixel-based and lack a thorough understanding of the structure and behavior of physics. Our model is designed with this understanding, so it is far superior in terms of controllability and generation efficiency.”

For example, to make the Joker-as-Lil-Yachty video, a user simply submits the original clip of Lil Yachty dancing on stage along with an image of the Joker to mimic that movement. Alternatively, users can supply character photos together with text prompts that include animation instructions. A third option lets users create animated characters from scratch using only text prompts.

However, Chu says only a small percentage of Viggle’s users are meme creators; the model has been more widely adopted as a visualization tool for creatives. The videos are far from flawless: they are choppy and the characters’ faces are expressionless. Even so, Chu says the tool has worked well for animators, filmmakers, and video game designers bringing their ideas to life. For now, Viggle’s models can only create characters, though Chu aims to enable more complex videos in the future.

Currently, a limited, free version of Viggle’s AI model is available on Discord and its web app. The company also offers a $9.99 subscription for more capacity and grants select creators special access through a creator program. Chu says Viggle is in discussions with video game and film studios about licensing the technology, while independent animators and content creators are already adopting it.

Viggle announced on Monday that it has raised a $19 million Series A round led by Andreessen Horowitz, with participation from Two Small Fish. The company says the funding will allow Viggle to scale, accelerate product development, and grow its team. Viggle told TechCrunch that it trains and runs its AI models in partnership with cloud providers such as Google Cloud. These Google Cloud partnerships typically include access to GPU and TPU clusters, but they do not typically grant access to YouTube videos for AI model training.

Training Information

During the TechCrunch interview, Chu was asked what data Viggle used to train its AI video models.

Chu responded, “So far we’ve been relying on data that has been publicly available,” echoing the comments made by OpenAI’s CTO Mira Murati regarding Sora’s training data.

Chu gave a direct “Yes” when asked if YouTube videos were included in Viggle’s training dataset.

That could pose a problem. YouTube CEO Neal Mohan told Bloomberg in April that using YouTube videos to train an AI text-to-video generator would be a “clear violation” of the platform’s terms of service. His remarks concerned the possibility that OpenAI had trained Sora on YouTube footage.

Google, which owns YouTube, may strike agreements with certain content creators to use their videos as training data for Google DeepMind’s Gemini, as Mohan noted. Nevertheless, Mohan said that YouTube’s terms of service prohibit downloading videos from the site without the owner’s permission.

After TechCrunch’s interview with the CEO, a Viggle representative walked back Chu’s statement via email, saying that Chu “spoke too soon in regards to if Viggle uses YouTube data as training. In fact, Hang/Viggle is not able to share the details of their training data.”

When TechCrunch pointed out that Chu’s earlier remarks were on the record and pressed the Viggle representative for a definitive answer, they acknowledged that the AI startup sources some of its training data from YouTube videos:

“Viggle creates AI content by utilizing a range of open sources, such as YouTube. Our training data has undergone rigorous curation and improvement, guaranteeing adherence to all terms of service over the whole procedure. We place a high value on upholding our solid relationships with websites such as YouTube and are dedicated to abiding by their agreements by refraining from large-scale downloads and any other activity that may include downloading videos without authorization.”

Representatives for Google and YouTube did not respond to requests for comment.

By using YouTube as training data, the company joins others operating in a legal gray area. Several AI model makers, including OpenAI, Nvidia, Apple, and Anthropic, have reportedly used YouTube video transcriptions or clips as training data. It may be Silicon Valley’s worst-kept secret: everyone is probably doing it. What is rare is saying so out loud.