
AI panic campaigns aim to influence public policy

Nirit Weiss-Blatt of AI Panic News has written two “quite long” pieces on AI safety groups and their public (and not-so-public) agendas: The AI Panic Campaign, Part 1 and Part 2.

Weiss-Blatt claims these groups test narratives and target messaging to manipulate public opinion, aiming to push the fringe idea of AI extinction risk into the mainstream. Their messaging is tailored for maximum impact on politicians, with the goal of inflating AI's perceived danger and creating urgency for severe legal restrictions such as surveillance and the criminalization of AI research.

Disclaimer: This summary is based on Weiss-Blatt’s POV, and some of our own biases may have crept in as well. Please do your own reading before forming an opinion.

What’s going on here?

AI safety organizations conduct "message testing" to spread fear about AI risk and lobby for regulation. Their goal is to restrict AI progress through an "AI panic campaign."

What does this mean?

AI safety groups like Campaign for AI Safety and Existential Risk Observatory test different narratives through surveys to see which ones best convince people that AI will cause human extinction. They tailor messages based on demographics like age, gender and politics to maximize impact on specific audiences.

For example, they found "dangerous AI" resonates more with Republicans while "superintelligent AI" works better for Democrats. According to Weiss-Blatt, their goal is to create a sense of urgency and imminent threat to get support for an AI moratorium or other restrictive policies.

After inflating the threat in the media, they submit policy proposals to governments demanding action, such as banning the training of AI systems above a certain compute threshold. Their next target is influencing the upcoming UK AI Safety Summit.

Why should I care?

If true, the influence of small fringe groups on public policy is concerning. The debate around AI needs to remain grounded in science, not science fiction. Claims of AI existential risk are now common in mainstream media. Keep in mind that these claims could be driven by a lobbying agenda (and the same holds true for overhyped AI coverage).
