
Luma Labs plans to win AI video with Dream Machine.

Luma Labs, the folks behind last year's cool NeRF tools, have joined the video generation race. They're releasing a new video generation model called Dream Machine.

What is going on here?

Luma Labs releases Dream Machine - a text-to-video generation model.

What does this mean?

Dream Machine enters an increasingly crowded field of text-to-video announcements. To name a few: Sora from OpenAI, Veo from Google DeepMind, and Kling from Kuaishou (China). Where those models are still just sneak peeks, Dream Machine is ready to use now. Luma Labs joins RunwayML and Pika Labs, which already have good-quality video generation models available to the public.

Currently, Dream Machine can create realistic videos from text prompts and images, generating 120 frames in about 2 minutes. It understands motion and can create action-packed shots with smooth movement, cinematography, and drama.
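As a quick back-of-the-envelope on that speed claim (the playback frame rate below is an assumption, not something the post or Luma states):

```python
# Back-of-the-envelope: what does "120 frames in 2 minutes" buy you?
GEN_FRAMES = 120       # frames generated per run (from the post)
GEN_SECONDS = 2 * 60   # wall-clock generation time (from the post)
PLAYBACK_FPS = 24      # ASSUMED standard playback rate, not confirmed by Luma

gen_rate = GEN_FRAMES / GEN_SECONDS       # frames generated per second
clip_seconds = GEN_FRAMES / PLAYBACK_FPS  # length of the resulting clip

print(f"Generation rate: {gen_rate:.0f} frame/s")
print(f"Clip length at {PLAYBACK_FPS} fps: {clip_seconds:.0f} s")
```

In other words, roughly one generated frame per second of wall-clock time, which at a 24 fps playback rate works out to a ~5-second clip per 2-minute run.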

The crown jewel for video generation is consistent characters and accurate physics, which Dream Machine does amazingly well. But don’t take my word for it—see the samples or try it out.

Why should I care?

Luma Labs excelled at building tools for creating NeRFs—in simple words, simulating the world in 3D. That capability matters for high-quality video generation, because a model needs some grasp of physics to get motion and direction right.

So, while it came out of nowhere, this isn’t a weird pivot. And if cracked teams like Luma are focusing on video generation, we can imagine the Midjourney moment for video generation coming very soon.
