Stopping Innovation is how companies are trying to get ahead in AI
Ben Thompson (Stratechery) discusses how big tech companies and AI researchers are lobbying the government to heavily regulate AI development, likely to lock in their market positions.
What's going on here?
Large tech companies and AI labs are urging the government to regulate AI development in the name of safety. However, their calls for regulation align closely with their business interests, suggesting an ulterior motive: stifling competition.
What does this mean?
The big tech companies and AI researchers warning about AI risks tend to be the current leaders in the field. OpenAI and Anthropic, smaller labs with popular models, lead the charge, and Google DeepMind and Microsoft have also jumped on board. Meanwhile, Apple, Amazon and Meta have been quiet.
The prominent voices calling for AI regulation are the companies benefiting most from the current hype around models like ChatGPT, while the laggards have few representatives lobbying for restrictions. This suggests regulation could lock in the position of today's winners against up-and-coming rivals.
Why should I care?
These dynamics highlight the need for skepticism around AI policy. While real risks exist, progress requires striking a balance between safety and innovation. Tech giants may push precautionary restrictions that serve their own interests rather than the public's, and their warnings echo past episodes in which market leaders proclaimed risks to justify limiting competition.
We should be wary of incumbents writing rules that entrench their own positions. Regulation could easily shift from protecting consumers to protecting concentrated market power. Before restricting new advances, policymakers should honestly distinguish real dangers from mere threats to today's leaders. Prioritizing safety is crucial, but it must be balanced against enabling better futures through continued AI progress.