OpenAI isn't ready to play "spot the AI image"

An AI detector tool for DALL-E 3 exists, but OpenAI isn't ready to show it to the world. The tool is far more accurate than the company's previous attempts, but it still hasn't met OpenAI's high bar for reliability.

What's going on here?

OpenAI is debating when to release a tool that can detect if an image was made by their AI art generator DALL-E 3.

What does this mean?

OpenAI's image classifier is 99% accurate on unmodified DALL-E 3 images. It remains over 95% accurate even if the image is cropped, resized, compressed or overlaid with minor text or cutouts.
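
OpenAI hasn't released the tool or described its interface, but if you want a feel for what those robustness numbers mean, here's a minimal, hypothetical sketch in Python. Everything in it (the `detect_ai_image` stand-in especially) is invented for illustration, not a real OpenAI API:

```python
# Hypothetical sketch only: detect_ai_image is a made-up stand-in for the
# unreleased classifier. The perturbations mirror the edits mentioned above:
# cropping, resizing, and JPEG compression.
from io import BytesIO

from PIL import Image  # pip install Pillow


def detect_ai_image(img: Image.Image) -> bool:
    """Placeholder: pretend this returns True when the image looks AI-made."""
    return True


def perturb(img: Image.Image, kind: str) -> Image.Image:
    """Apply one of the edits the detector is supposed to survive."""
    if kind == "crop":  # cut 10% off each edge
        w, h = img.size
        return img.crop((w // 10, h // 10, w * 9 // 10, h * 9 // 10))
    if kind == "resize":  # downscale to half size
        return img.resize((max(1, img.width // 2), max(1, img.height // 2)))
    if kind == "jpeg":  # round-trip through lossy JPEG compression
        buf = BytesIO()
        img.convert("RGB").save(buf, format="JPEG", quality=60)
        buf.seek(0)
        return Image.open(buf)
    return img


def robustness(known_ai_images: list[Image.Image], kind: str) -> float:
    """Fraction of known-AI images still flagged after a given edit."""
    hits = sum(detect_ai_image(perturb(img, kind)) for img in known_ai_images)
    return hits / len(known_ai_images)
```

A "95% accurate even after edits" claim would correspond to `robustness(...)` staying above 0.95 for each kind of edit.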

For comparison, the accuracy of OpenAI's earlier text classifier hovered around 30%. A couple of months after release, OpenAI took the text classifier offline.

Now, OpenAI is unsure what accuracy threshold is sufficient for releasing the tool publicly. Their current focus is DALL-E 3-generated images only, but even then, the more a human edits an image, the harder it becomes to catch.

Why should I care?

On one hand, the tool could help detect harmful deepfakes and AI art theft. On the other, if the accuracy isn't near perfect, it risks unfairly labeling human-made art as AI-generated.

OpenAI wants to avoid a repeat of the controversy around their previous AI text classifier, which was criticized for low accuracy. They're being cautious and soliciting input from affected communities like artists.

On top of that, there's a core philosophical question: at what point does an image created with AI but subsequently edited stop being AI-generated? No easy answers to that one.
