Ilya Sutskever's quest for Safe Superintelligence
The "Where's Ilya?" mystery is solved. OpenAI co-founder Ilya Sutskever ended months of speculation by launching a new company called Safe Superintelligence Inc. This pure research outfit aims to develop "safe superintelligence"—an ultra-powerful AI system that won't harm humanity.
What is going on here?
OpenAI's co-founder Ilya Sutskever is starting a new pure-play AI research company called Safe Superintelligence. And yep, it's exactly what it sounds like.
What does this mean?
AGI (artificial general intelligence) is old news. Sutskever wants to build superintelligence that's safe and beneficial to humanity. No near-term commercial products, no distractions—just hardcore research to crack the code on "safe superintelligence."
SSI is likely swimming in investor interest, but Sutskever is mum on backers and fundraising. He has, however, locked in two co-founders: Daniel Gross (tech investor, ex-Apple AI) and Daniel Levy (OpenAI vet).
A few key deets:
- Offices in Palo Alto and Tel Aviv. Lean team with a singular focus.
- Safety baked in through engineering breakthroughs. Think nuclear-safety-style treatment, not just content moderation.
- LLMs will play a role, but the end goal is a crazy powerful, general-purpose AI.
Bloomberg’s got the scoop if you want to read more.
Why should I care?
Sutskever is an AI legend. His involvement in OpenAI was key to its meteoric rise, and he'd been MIA for months amid the drama there.
Now he's back with a new mission that cuts to the heart of the AGI race: how do we create an AI system radically smarter than humans...without dooming humanity in the process?
TBD if "safe superintelligence" is even possible. But this team is one to watch for sure.