• Ben's Bites

Meta is trying to label AI content on its platforms.

Meta's stamping "Imagined with AI" labels on AI-generated images across Facebook, Instagram, and Threads. The goal? Making sure we know when we're looking at something a machine cooked up versus a human.

What’s going on here?

Meta's rolling out labels for AI-generated images, aiming for transparency.

What does this mean?

The folks at Meta are not new to the AI game, but now they're pushing for transparency by labelling AI-crafted pics. They've been stamping their own AI creations with "Imagined with AI" tags, and now they're looking to expand this labelling across the board. They're in talks with industry pals to create a common language for AI content. Meta says whether you're scrolling through dog pics or deep dives into distant galaxies, if it's AI-generated, you should know.

Meta’s also stepping up its game with invisible watermarks and metadata to make this process slicker, not just on its platforms but across the web. But much of this can be stripped by fairly simple means: taking a screenshot, for instance, discards the metadata entirely in many cases.
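To see why metadata-based labels are so easy to lose, here's a minimal sketch using the Pillow imaging library. It attaches a provenance label as a PNG text chunk, then simulates a "screenshot" by re-rendering just the pixels into a fresh file. The `AI-Label` key is hypothetical for illustration; Meta's actual scheme (IPTC metadata plus invisible watermarks) is more involved, but plain metadata behaves like this.

```python
# Sketch: why metadata-based AI labels are fragile.
# Assumes the Pillow library is installed; the "AI-Label" key is
# hypothetical, not Meta's actual tag.
import io

from PIL import Image
from PIL.PngImagePlugin import PngInfo

# 1. Create an image and attach a provenance label as PNG metadata.
img = Image.new("RGB", (64, 64), "blue")
meta = PngInfo()
meta.add_text("AI-Label", "Imagined with AI")
buf = io.BytesIO()
img.save(buf, format="PNG", pnginfo=meta)

# 2. Reload it: the label survives a faithful copy of the file.
labeled = Image.open(io.BytesIO(buf.getvalue()))
print(labeled.text.get("AI-Label"))  # Imagined with AI

# 3. "Screenshot" it: redraw only the pixels into a brand-new image.
screenshot = Image.new("RGB", labeled.size)
screenshot.paste(labeled)
buf2 = io.BytesIO()
screenshot.save(buf2, format="PNG")  # no pnginfo passed along

# 4. The re-rendered copy carries no label at all.
relabeled = Image.open(io.BytesIO(buf2.getvalue()))
print(relabeled.text.get("AI-Label"))  # None
```

Invisible watermarks embedded in the pixels themselves are meant to survive this kind of re-rendering, which is why Meta is pairing the two approaches.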

For audio and video, they’re building a feature for self-labelling AI content, and if Meta finds out you haven’t followed through, it’s gonna release the algorithm dogs on you (the bad ones).

Why should I care?

Imagine scrolling and not knowing if what you see is someone's beach photo or AI's dream vacation. These are early days for the spread of AI-generated content, so any indication, even a removable one, helps filter AI content.

Spotting AI-generated stuff by eye is very hard, so some human caution is needed too. Look for signals beyond the content itself: is this info likely to be true? Does the person sharing it often talk gibberish and fall for fake news?
