
OpenAI stops state-actors from abusing AI.

Hey, some wild stuff here. Know those state-sponsored groups that doomers keep freaking out about? Well, some of them were using OpenAI’s AI models for shady cyber stuff and guess what? They got the axe.

What's going on here?

OpenAI busted five state-backed groups misusing their AI services for their cyber ops. The good news is, their results were pretty limited as far as advanced cyberattacks go.

What does this mean?

OpenAI caught groups from China, Iran, North Korea, and Russia messing around with AI to find open-source information, research targets, translate documents, find coding loopholes, and spin up some quick code snippets. OpenAI and Microsoft shut them down and used Microsoft’s Threat Intelligence to track their activities.

OpenAI says that their models actually aren't super useful for complex cyberattacks (yet). But separate research doing the rounds on Twitter showed that GPT-4-based agents can hack websites.

Why should I care?

AI tools are sweet, but – no surprise – bad actors will try to misuse them. State-backed hackers are serious business. OpenAI’s approach of flagging sus activities and disabling access works for now, but critical web infra should brace itself for more attacks aided by AI.

And no, doomers, the solution isn’t to pause AI. It’s to have more vigilant checks and to stress-test security systems. You don’t stop running if the road is thorny, you get good shoes.
