OpenAI calls for experts to join its red teaming network

OpenAI announced an open call for experts to join the OpenAI Red Teaming Network, which will help assess the risks of AI models before deployment. This expands its safety efforts beyond internal testing to include continuous input from external researchers and civil society.

What's going on here?

OpenAI is formalizing ongoing collaborations with outside experts into a network for iterative red teaming of AI systems.

What does this mean?

Rather than relying on one-off engagements before launches, OpenAI will maintain a network of trusted experts across diverse domains who can weigh in on potential harms throughout development. Members will sign NDAs, and the time commitment can be as little as 5-10 hours in a given year. OpenAI will select experts for red teaming new models case by case, based on the expertise each model calls for. This complements its other safety initiatives, such as the Researcher Access Program.

Why should I care?

This network enables broader, continuous input on AI risks from diverse experts rather than relying solely on internal testing. As AI grows more capable and influential, oversight mechanisms like this could help align it with human values. While details are sparse, the open call signals a meaningful commitment to cooperative safety efforts.
