The next natural step in the generative AI era is generative video, but that’s far more complex than generating pictures or text. Meta’s Movie Gen AI aims to be a comprehensive interface for all your video-generation needs, but it’s still not publicly available.
Meta has announced Movie Gen, a new set of generative AI models that can create and edit videos and audio using text prompts. The models are said to outperform comparable models in the industry and represent a significant advancement in AI-powered media creation. Movie Gen will let you generate videos up to 16 seconds long from text prompts, as well as generate videos featuring a specific person based on their image and a text prompt. You can also edit existing videos using text prompts, adding, removing, or replacing elements with precision. It can even create audio up to 45 seconds long that is synchronized with the generated video, including ambient sound, sound effects, and background music.
Video generation is something Meta has been working on for a long while, and those efforts are now closer to a finished product. Meta claims that Movie Gen achieves state-of-the-art results in several areas, including video quality, personalization, editing accuracy, and audio-video alignment, and it attributes these achievements to technical innovations in model architecture, training data, and evaluation methods. The company aims to eventually collaborate with filmmakers and creators to ensure that Movie Gen becomes a valuable tool for creative expression. Potential applications include creating animated videos for social media, generating personalized greetings, and editing videos with simple text commands.
Meta isn’t giving us a real timeline for when we might expect to try this out. AI video generation in general is still at an early stage, and current models aren’t as advanced as today’s image and text models. Movie Gen could change that if it releases soon.
Source: Meta