
OpenAI faces safety questions as the Superalignment team disbands.

There's some drama at OpenAI, again. Safety researchers are leaving and questioning the company, while uncommon equity practices are inviting criticism. On top of that, it is pausing one of its AI voices soon after demoing a real-time voice assistant.

What is going on here?

OpenAI is again in the tangles of controversy regarding AI models' safety.

What does this mean?

It all kicked off when Ilya Sutskever, their former chief scientist, and Jan Leike, the head of the superalignment team, left the company. A few other team members followed suit.

Jan spoke out about his resignation, saying it felt like an uphill battle doing safety research at OpenAI. He was also unhappy with the company's shift towards building "shiny products" instead of focusing on their mission—making sure AGI is safe.

Meanwhile, Kelsey Piper at Vox released a piece on OpenAI's offboarding agreement. It prohibits ex-employees from speaking ill of OpenAI. If they do, they risk losing all their vested equity.

Greg Brockman, the president of OpenAI, responded to Jan's comments by explaining that the company's current safety strategy involves gradually releasing their products and systems into the real world to gather feedback and make improvements.

As per a Wired report, what's left of the Superalignment team is being absorbed into other groups at OpenAI, with co-founder John Schulman overseeing research on the long-term risks of AGI.

Sam Altman, the CEO, also addressed concerns about a clause in their exit agreements that could potentially take away employees' shares. He admitted that the clause was a mistake, stating it had never been used and inviting any former employees who might be affected to contact him directly.

Why should I care?

Well, OpenAI presents itself as the go-to place for building safe artificial general intelligence (AGI). These incidents damage that image—not just among regular users, but also among researchers and those who regulate AI technology.

Although the vibes feel like "OpenAI has lost the mandate of heaven," such drama is not unexpected. OpenAI is the hottest company in the race to AGI, so it's naturally going to attract attention—both positive and negative.

Even as this drama dies down, OpenAI is facing another challenge. It has paused the use of the Sky voice in ChatGPT, likely because it sounds too similar to Scarlett Johansson's voice.
