Day 1 of UK's AI safety summit. Here's what you need to know:

Day 1 of the UK's AI safety summit wrapped up yesterday. The next summit will be hosted by South Korea in 6 months, and then by France a year from now. Here's everything you need to know:

Areas of focus for different countries:

Highlights from the speakers' remarks, with the "Oh! We should do something" fluff cut out.

United States:

  • Get companies to make voluntary commitments and adhere to reporting requirements.

  • Launching an AI safety institute of its own under the Department of Commerce.

China:

  • Equal rights for every country to develop and use AI.

  • Global cooperation to share AI knowledge with the public on open-source terms.

EU:

  • Innovation - Opening up the EU's supercomputers to train models free of charge (for EU startups only, I guess).

  • Guardrails - Finalizing and agreeing on the EU AI Act by the end of the year, with the chapter on generative AI to be released by December 6.

  • Governance - The G7's voluntary code of conduct for AI, with new signatories to be announced.

India:

  • AI as an enabler of mass digital adoption.

  • We can't afford to let regulation fall behind innovation.

UAE:

  • AI that's global-first, with support for multiple languages as the starting point.

  • Govern AI's use cases based on evidence, not the underlying technology.

Nigeria:

  • The socio-economic impact of AI is the biggest challenge to address.

  • Still sees AI as a beneficial force in education and healthcare.

Korea:

  • Freedom, Fairness, Safety, Innovation and Solidarity are the 5 principles for aligned AI development.

  • Focus on protecting the privacy of citizens.

Takeaways from roundtable discussions

These are some of the takeaways shared by the roundtable moderators. I’ve combined takeaways from multiple discussions that address the same points.

On responsibility and accountability:

  • Developers have an inherent responsibility not to share harmful models. Additionally, the burden of safety should fall on vendors.

On loss of control:

  • Today's AI systems don't pose a loss-of-control risk yet. These systems need prompting by humans and are limited in the actions they can take in the real world. Humans are more likely to hand over control than systems are to take it.

On pausing AI development:

  • Responsible developers and companies are exercising caution regardless of any rules. The ones who don't care won't stop just because you say so.

  • Even with long-term regulation, the incentives need to be designed so that players actually stick to the rules.

On risks and open source:

  • Open source increases the surface area for unpredictable failures.

  • AI could empower bad actors globally: bioweapons, cyber weapons, infoweapons like deepfakes, etc.

  • That said, open source is the antidote if we want inclusive AI.

Integration of AI in society:

  • Global inequality in access to AI models and in representation within them.

  • The privacy and IP rights of the creators whose work powers AI models.

  • Better technical evaluations that reflect the societal impact of AI.

  • Inclusion of young citizens in AI committees (not just experts).

On international collaboration:

  • Start by agreeing on the right values, with flexibility for countries to keep their own POV. Implement at the local and national level.

  • Publish an open list of research questions. Find the most important ones quickly.

  • Nations want to build their own capacity alongside a pool of shared resources.

  • An AI safety network linking each country's AI safety institute, with collaboration between the institutes to share learnings.

  • UN’s POV: Getting AI to work for the rest of the world, not just developed nations.

Miscellaneous:

  • The action items should be growing global technical expertise and continuously testing and auditing AI.

  • Regulation examples: product safety laws, liability laws, and AI sandboxing.

  • Look at AI through its multiple facets rather than as a single field: the domains AI operates in, and the approaches to releasing it into the world.

  • Healthcare and education are the most cited examples of “AI for Good”.

And lastly, I didn't catch the name, but kudos to the presenter who sneaked in the “size does matter” comment.
