Daily Digest: Movies with AI, coming soon.
PLUS: State of AI in production
Daily Digest #436
Want to get in front of 100k AI enthusiasts? Work with us here
Hello folks, today we have all the AI models needed to create a movie. Some are out now, some will be soon. Let’s get into it:
PICKS
Runway just announced Gen-3 Alpha, their latest and greatest video generation model. And boy, does it look slick! Say hello to a new era of high-quality, ultra-controllable AI videos. One catch: it’s not public yet.🍿Our Summary (also below)
Google Deepmind generates audio for silent videos. Google Deepmind's video-to-audio (V2A) tech can create rich, synchronized audio for AI-generated videos using just the video pixels and a text prompt for additional guidance. Think it’s time for AI video to move on from awkward mime acts.🍿Our Summary (also below)
Sound Effects by ElevenLabs - Imagine a sound and bring it to life. The text-to-sound capability is available in their API now. They also made a cool video → sound effect app. Check the docs or try the app.
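For the curious, here’s a rough sketch of what a text-to-sound API call could look like. The endpoint path, payload fields, and env-var name are our assumptions for illustration, not taken from ElevenLabs’ docs, so check those before wiring anything up:

```python
import json
import os

# Hypothetical endpoint and payload shape -- verify against ElevenLabs' docs.
API_URL = "https://api.elevenlabs.io/v1/sound-generation"

def build_sound_request(prompt, duration_seconds=None):
    """Assemble the URL, headers, and JSON body for a text-to-sound request."""
    body = {"text": prompt}
    if duration_seconds is not None:
        body["duration_seconds"] = duration_seconds
    return {
        "url": API_URL,
        "headers": {
            "xi-api-key": os.environ.get("ELEVENLABS_API_KEY", ""),
            "Content-Type": "application/json",
        },
        "json": body,
    }

if __name__ == "__main__":
    req = build_sound_request("glass shattering on a stone floor", duration_seconds=3.0)
    print(json.dumps(req["json"]))
    # To actually send it (needs a real API key):
    # import requests
    # resp = requests.post(req["url"], headers=req["headers"], json=req["json"])
    # open("effect.mp3", "wb").write(resp.content)
```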
TikTok is rolling out some shiny new AI tools to help brands and creators go global with their ad game. It is expanding its Symphony AI ad suite with two main additions: custom avatars and automatic dubbing.🍿Our Summary
from our sponsor
Connect your AI to the Web
Brave is the fastest growing search engine since Bing…and now it's available to you, with the Brave Search API. You can efficiently build groundbreaking AI apps backed by the shared knowledge of the Web:
Billions of indexed pages
Independent; no big tech biases
Up to 77% cheaper than Bing (developer-first pricing)
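If you want a feel for the shape of a web-search call, here’s a minimal stdlib-only sketch. The endpoint and `X-Subscription-Token` header follow Brave’s public docs, but treat the details (and the `BRAVE_API_KEY` env var) as illustrative assumptions:

```python
import os
import urllib.parse
import urllib.request

def brave_search_url(query):
    """Build the Brave Search web-search URL for a query string."""
    return ("https://api.search.brave.com/res/v1/web/search?"
            + urllib.parse.urlencode({"q": query}))

def fetch(query):
    """Send the request with the subscription token (requires an API key)."""
    req = urllib.request.Request(
        brave_search_url(query),
        headers={
            "X-Subscription-Token": os.environ["BRAVE_API_KEY"],
            "Accept": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read()
```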
TOP TOOLS
Spiral by Every - Automate 80% of repeat writing, thinking, or creative tasks.
BaseHub Templates - Production-ready websites for your next big project.
Newra - Build AI-powered chatbots driven by your enterprise data.
Zenes - Revamp the QA process with your AI partner.
BuilderKit - Ship your AI SaaS in days.
Pizi - Turn your photos into product pages.
Summit - The AI life coach for your biggest goals.
Multi-step tool use is now available in the Cohere API.
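Multi-step tool use means the model can chain several tool calls, feeding each result back in before answering. Here’s a hedged sketch of that loop: the tool-schema shape mirrors Cohere’s documented format, but the `web_search` tool and the dict-based call format are made up for illustration, and the real SDK call is left commented since it needs an API key:

```python
# Illustrative multi-step tool-use loop in the style of the Cohere chat API.
# The `web_search` tool is a stand-in, not a real integration.

def web_search(query):
    """Fake search tool returning canned results."""
    return [{"title": "result for " + query, "snippet": "..."}]

TOOLS = {"web_search": web_search}

tool_schema = [
    {
        "name": "web_search",
        "description": "Search the web for a query.",
        "parameter_definitions": {
            "query": {"type": "str", "description": "Search terms", "required": True},
        },
    }
]

def run_tool_calls(tool_calls):
    """Execute each tool call the model requested and collect the outputs."""
    results = []
    for call in tool_calls:
        fn = TOOLS[call["name"]]
        results.append({"call": call, "outputs": fn(**call["parameters"])})
    return results

# With the real SDK the loop would look roughly like (requires CO_API_KEY):
# import cohere
# co = cohere.Client()
# resp = co.chat(message="Who won the 2022 World Cup?", tools=tool_schema)
# while resp.tool_calls:  # keep going until the model stops requesting tools
#     ...  # run the tools, pass results back via tool_results, re-call co.chat
```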
NEWS
The state of AI in production by Retool.
Vinod Khosla on what to build in AI.
AI agents and the RaaS revolution.
AI and the why now of data DAOs.
What policymakers need to know about AI.
Models find ways to "game" the system to obtain rewards - New research from Anthropic.
Adobe upgrades Acrobat AI chatbot to add multi-document analysis and image generation.
Hona raised $9.5M Series A to reduce the communication load of attorneys.
Finbourne taps $70M for tech that turns financial data dust into AI gold.
Softbank offers a one-year subscription to the premium version of Perplexity in Japan.
Unclassifieds - short, sponsored links
Job Boardly - Launch a no-code niche job board instantly. Built for SEO & monetization. Get $50 off lifetime access deal. Use promo code: BEN at checkout.
QUICK BITES
Runway just announced Gen-3 Alpha, their latest and greatest video generation model. And boy, does it look slick! Say hello to a new era of high-quality, ultra-controllable AI videos.
What is going on here?
Runway's been hard at work training Gen-3 Alpha on a brand-new setup designed for large-scale multimodal training. The result? A major level-up in fidelity, consistency, and motion compared to their previous Gen-2 model.
What does this mean?
Runway's Gen-3 Alpha is trained on both videos and images for maximum versatility. Gen-3 Alpha will power all of its existing video and image tools with boosted performance. More features to provide fine-grained control over elements like structure, style, and movement are in the pipeline.
Runway claims Gen-3 Alpha has seamless transitions & keyframing, excels at generating expressive, realistic humans, and understands a wide range of artistic styles and cinematic lingo (hello, storytelling potential!).
Here are a few prompts they’re showcasing the results for:
Subtle reflections of a woman on the window of a train moving at hyper-speed in a Japanese city.
An astronaut running through an alley in Rio de Janeiro.
An empty warehouse dynamically transformed by flora that explodes from the ground.
Close-up shot of a living flame wisp darting through a bustling fantasy market at night.
An FPV shot zooming through a tunnel into a vibrant underwater space.
Plus, Runway is collaborating with top media players to create custom versions tailored to their specific needs. You can reach out to Runway if you want one.
Why should I care?
The model seems to understand our world, just like OpenAI’s Sora, and the quality is great too, something Luma Labs’ Dream Machine has lacked a bit.
But right now, all we can do is get excited. Gen-3 Alpha is not out yet. Runway ships fast, but any wait is minus points. For now, we can just record Runway’s entry in the logbook for next-gen video AI.
When it does launch, Runway will release the model with beefed-up safety measures, including a new moderation system and provenance standards. That I’m okay with. Video AI is getting to the point where people can be fooled into thinking it’s real.
QUICK BITES
Google Deepmind just dropped some hot new research on how they're making those silent AI-generated videos actually watchable. Think it’s time for AI video to move on from awkward mime acts.
What is going on here?
Google Deepmind's video-to-audio (V2A) tech can create rich, synchronized audio for AI-generated videos using just the video pixels and a text prompt for guidance.
What does this mean?
This V2A is like giving AI videos their voice (literally). It can generate stuff like:
Dramatic scores and sound effects that match the on-screen action
Realistic ambient noise to set the scene
Dialogue (still a bit wonky on the lip-syncing, but they're working on it)
You can even mix it with Google Deepmind's souped-up Veo video generator for the full AI-powered cinematic experience. Or use it to resurrect vintage silent films with immersive audio.
The best part? You can generate endless soundtrack options for a single video and use text prompts to nudge it in the direction you want. Handy for perfecting that chase scene score.
Why should I care?
Because the future of AI entertainment is looking pretty darn immersive. V2A is a major leap toward ultra-realistic AI-generated movies, shows, games - you name it.
Google’s not releasing this tech into the wild just yet. It still has some kinks to iron out and rigorous safety checks to pass (to avoid another fiasco). Let’s hope we can hear the popcorn pop soon!
Ben’s Bites Insights
We have 2 databases that are updated daily, which you can access by sharing Ben’s Bites using the link below:
All 10k+ links we’ve covered, easily filterable (1 referral)
6k+ AI company funding rounds from Jan 2022, including investors, amounts, stage etc (3 referrals)