OpenAI makes a new safety committee and hints at new models.
What is going on here?
OpenAI just announced a new Safety and Security Committee led by Sam Altman, Bret Taylor, and other board members. The move comes hot on the heels of some top scientists leaving, including the leaders of the "superalignment" team focused on tackling long-term AI risks.
What does this mean?
The committee, which includes the CEO and other board members, will spend the next 90 days reviewing and strengthening the safety measures OpenAI already has in place. The timing is notable: OpenAI has just started training its next big AI model, which it expects to be a major step up in what AI can do.
Besides the board members, the committee also includes some of OpenAI's top experts in tech and policy, like Aleksander Madry, Lilian Weng, John Schulman, Matt Knight, and Jakub Pachocki. They'll also be getting advice from former government officials like Rob Joyce and John Carlin, who know a thing or two about cybersecurity and national security.
Why should I care?
OpenAI is a leader in AI, so its choices about safety could set the tone for the rest of the field. But some former employees have publicly criticized the company, and losing key members of its safety team has raised doubts about how committed it really is to safety.
I want more from the committee's findings than “iterative deployment is the solution.” Let’s see how OpenAI plans to address these concerns and keep safety a priority as its tech advances.