OpenAI may be dialing back its efforts in the video generation market with the shutdown of its Sora app, but ByteDance on Thursday confirmed that its new audio and video model, Dreamina Seedance 2.0, is now rolling out in its editing platform, CapCut.
ByteDance says the model allows creators to draft, edit, and sync video and audio content by using prompts, images, or reference videos.
The phased rollout will begin with CapCut users in Brazil, Indonesia, Malaysia, Mexico, the Philippines, Thailand, and Vietnam, with more markets added over time.
The news of the launch in CapCut follows a recent report that the model’s global rollout would be paused while ByteDance worked to address intellectual property concerns, after the model drew criticism from Hollywood over alleged copyright infringement. That likely explains the limited number of markets where the model is currently available within CapCut.
In China, the model is available to users of ByteDance’s Jianying app.
The video generation model works without reference images, generating a scene even when the creator describes it in only a few words, ByteDance says in its announcement. The model is also good at rendering realistic textures, movement, and lighting across a range of visual perspectives and angles, which the company notes could be used to edit, enhance, or correct creators’ own footage.
Another use case would be allowing creators to test potential ideas based on early concepts or sketches before filming the real video.
In addition, Dreamina Seedance 2.0 can be used for a wide range of content, including cooking recipes, fitness tutorials, business or product overviews, and videos with motion or action-focused content, where AI video models have historically faced challenges, the company explains.
At launch, the model supports clips of up to 15 seconds long across six aspect ratios.