Daily Digest: False AGI Risk

PLUS: Margaret Atwood reads AI

Daily Digest #263

Hello folks, here’s what we have today:

PICKS
  1. OpenAI's new values are AGI-aligned. OpenAI recently changed the core values listed on its website, putting a new emphasis on developing artificial general intelligence (AGI). 🍿Our Summary (also below)

  2. AI panic campaigns aim to influence public policy - In The AI Panic Campaign Part 1 and Part 2, Nirit Weiss-Blatt of AI Panic News claims AI safety groups test narratives and target messaging to manipulate public opinion, aiming to push the fringe idea of AI extinction risk into the mainstream. 🍿Our Summary (also below)

  3. What every app that adds AI looks like. Funny read to start your week with 😅

from our sponsor

Turn word-of-mouth into an MRR machine with Rewardful's affiliate tracking tool.

Loved by AI companies for its simplicity, perfect for SaaS using Stripe. Join the likes of Copy.ai, PDF.ai, Speechify & drive more MRR.

QUICK BITES

OpenAI recently changed the core values listed on its website, putting a new emphasis on developing artificial general intelligence (AGI).

What is going on here?

OpenAI is shifting its focus more towards creating advanced, human-like AI.

What does this mean?

Previously, OpenAI listed six core values for its employees: Audacious, Thoughtful, Unpretentious, Impact-driven, Collaborative, and Growth-oriented. The website now lists five values, with “AGI focus” at the top. As the site states, anything that doesn’t help with AGI development is out of scope. The other four values are:

  • Intense and scrappy

  • Scale

  • Make something people love

  • Team spirit

Why should I care?

While OpenAI has said for years that it wants to create AGI, the specifics remain unclear. In 2018, it described AGI as “highly autonomous systems that outperform humans at most economically valuable work.” Sam Altman has occasionally said that LLMs are just the start of the company’s journey towards AGI.

In recent months, OpenAI has gone multimodal, and rumours suggest it plans to build consumer AI hardware. At the same time, it’s reportedly seeking a $90B valuation alongside significant revenue growth. Taken together with the change in core values, it’s a safe bet that the company is advancing to the next stage of its path to AGI (maybe something non-LLM-y).

One thing I can’t help but notice: the previous set of values resembles the vocabulary of a research lab, whereas the new set reads like startup vocabulary.

QUICK BITES

Nirit Weiss-Blatt of AI Panic News has written two long pieces on AI safety groups and their public (and not-so-public) agendas: The AI Panic Campaign, Part 1 and Part 2.

Weiss-Blatt claims these groups test narratives and target messaging to manipulate public opinion, aiming to push the fringe idea of AI extinction risk into the mainstream. The messaging is tailored for maximum impact on politicians; the goal is to increase AI’s perceived danger and create urgency for severe legal restrictions, such as surveillance and the criminalization of AI research.

Disclaimer: This summary is based on Weiss-Blatt’s POV, and some of our own biases may have crept in as well. Please do your own reading before forming an opinion.

What is going on here?

AI safety organizations conduct "message testing" to spread fear about AI risk and lobby for regulation. Their goal is to restrict AI progress through an "AI panic campaign."

What does this mean?

AI safety groups like Campaign for AI Safety and Existential Risk Observatory test different narratives through surveys to see which ones best convince people that AI will cause human extinction. They tailor messages based on demographics like age, gender and politics to maximize impact on specific audiences.

For example, they found "dangerous AI" resonates more with Republicans while "superintelligent AI" works better for Democrats. According to Weiss-Blatt, their goal is to create a sense of urgency and imminent threat to get support for an AI moratorium or other restrictive policies.

After inflating the threat in the media, they submit policy proposals to governments demanding action (like banning the training of AI systems above a certain compute level). Their next target is influencing the upcoming UK AI Safety Summit.

Why should I care?

If true, the influence of small fringe groups on public policy is concerning. The debate around AI needs to remain grounded in science, not science fiction. Either way, claims of AI existential risk are now common in mainstream media. Keep in mind that these claims could be driven by lobbyist agendas (and the same holds true for overhyped AI coverage).

Ben’s Bites Insights

We have two databases, updated daily, which you can access by sharing Ben’s Bites using the link below:

  • All 10k+ links we’ve covered, easily filterable (1 referral)

  • 6k+ AI company funding rounds from Jan 2022, including investors, amounts, stage, etc. (3 referrals)
