
Anthropic's throwing cash at third-party AI evaluations.

Anthropic wants to pay people to build better ways to test its AI models. They're basically saying "Hey nerds, our AI keeps acing all the tests we throw at it, so we need some real brain-busters now!"

What's going on here?

Anthropic announced a new initiative to fund third-party evaluations of advanced AI capabilities and risks.

What does this mean?

Frontier AI models are outgrowing the old evaluation methods faster than teenagers outgrow their shoes. Creating and running new evals is expensive, especially as more GPT-4-class models like Gemini 1.5 Pro and the Claude 3 and 3.5 series keep arriving.

To solve this, Anthropic is opening up its wallet to the wider AI community, hoping fresh eyes (i.e. new evals from third parties) can judge these models better.

Anthropic is looking at three main categories: AI safety level assessments, advanced capability metrics, and tools for building evals.

  • For safety, they want tests for stuff like AI hacking skills, the ability to design bioweapons, and how autonomous models can get.

  • On the capability side, they're after evals for cutting-edge science, multilingual skills, and societal impacts.

  • They also want infrastructure to make it easier for experts to whip up good evals without needing coding chops.

Anthropic is also sharing a wishlist and inviting proposals through an application form.

Why should I care?

Anthropic's trying to stay ahead of the curve because when your creation starts acing tests faster than you can write them, it's time to bring in the reinforcements.

If you're an AI whiz or domain expert, there's cash on the table. And it's not just Anthropic; other big AI labs are also sweating about evals (OpenAI famously gives early access to eval contributors).
