Ben's Bites

OpenAI shares the status of its content credentials efforts.

Generative AI is here in a big way, and it's getting harder to tell what's real and what's made by machines. OpenAI's latest update covers how it's helping people figure out where online images, video, and audio come from.

What’s going on here?

OpenAI is tackling content authenticity with new tools and an industry-wide push for standards.

What does that mean?

They're joining C2PA, the widely adopted standard for certifying where content comes from, and attaching provenance metadata to everything created by their tools. Images generated by DALL·E 3 (and soon video from Sora) carry this metadata baked into the file for easy verification. Watermarking for audio is in the works to help verify sound clips generated by their tools. They're also backing a new $2M fund to spread AI literacy.
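To make that concrete: C2PA provenance manifests travel inside the media file itself. In JPEGs, for instance, they sit in APP11 marker segments as JUMBF boxes. Here's a rough sketch of scanning a JPEG's marker segments for APP11 data, run against a tiny synthetic file (the scanner follows the general JPEG/C2PA layout, not any OpenAI tooling, and the demo payload is made up):

```python
import struct

def find_app11_segments(data: bytes):
    """Scan a JPEG byte stream for APP11 (0xFFEB) segments,
    where C2PA/JUMBF provenance manifests are typically stored."""
    segments = []
    i = 2  # skip the SOI marker (0xFFD8)
    while i + 4 <= len(data):
        if data[i] != 0xFF:
            break  # lost marker sync; entropy-coded data starts here
        marker = data[i + 1]
        if marker == 0xD9:  # EOI: end of image
            break
        if 0xD0 <= marker <= 0xD7:  # standalone RST markers have no length
            i += 2
            continue
        # Segment length is big-endian and includes its own two bytes
        (length,) = struct.unpack(">H", data[i + 2:i + 4])
        payload = data[i + 4:i + 2 + length]
        if marker == 0xEB:  # APP11
            segments.append(payload)
        i += 2 + length
    return segments

# Build a tiny synthetic JPEG with one APP11 segment (payload is fake demo data)
payload = b"JP\x00\x01" + b"jumb-demo"
app11 = b"\xff\xeb" + struct.pack(">H", len(payload) + 2) + payload
fake_jpeg = b"\xff\xd8" + app11 + b"\xff\xd9"

found = find_app11_segments(fake_jpeg)
print(len(found), found[0][:2])  # → 1 b'JP'
```

The catch, which the C2PA spec itself acknowledges, is that this metadata is ordinary file data: stripping or failing to copy those segments (as many social platforms do on upload) silently removes the provenance trail, which is why OpenAI pairs it with other signals like classifiers and watermarks.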

They've also got a classifier that sniffs out DALL·E 3 images with high accuracy but falls flat on images from other AI models. It's now open to a first group of researchers to stress-test it and help improve it across multiple models.

Why should I care?

Imagine a world where you can't trust what you see or hear online. These tools help us combat deepfakes and other manipulated content. We need this to navigate a future where AI-generated content becomes the norm, not the exception.
