
Top AI models offer little transparency, even open source ones

Commercial foundation models are becoming less transparent, according to researchers at Stanford's Center for Research on Foundation Models. Together with folks from MIT and Princeton, they created a new index called the Foundation Model Transparency Index (FMTI) to measure companies' transparency levels across 100 indicators.

What’s going on here?

FMTI finds the top 10 major foundation model companies lacking in transparency.

What does this mean?

The group evaluated 10 major companies on 100 indicators covering how models are built, how they work, and how they're used downstream. The highest score was 54 out of 100 (Meta's Llama 2), showing much room for improvement across the board. Many critical details, like training data sources, labor practices, and model usage stats, weren't disclosed by any company.
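As a rough illustration (not the researchers' actual methodology, which weighs indicators across build, model, and downstream categories), you can think of the index as giving a company one point per indicator it satisfies. The sketch below assumes simple yes/no indicators; the indicator names are placeholders taken from the examples above:

```python
# Illustrative sketch only, assuming each of the 100 FMTI indicators
# is a simple yes/no check and a company's score is the count satisfied.

indicators_met = {
    "training data sources disclosed": False,
    "labor practices disclosed": False,
    "model usage statistics disclosed": False,
    # ...97 more indicators covering how the model is built,
    # how it works, and how it's used downstream
}

def fmti_score(indicators: dict[str, bool]) -> int:
    """Aggregate transparency score: one point per satisfied indicator."""
    return sum(indicators.values())

print(f"Score: {fmti_score(indicators_met)} / {len(indicators_met)}")
```

Under this simplified view, a score of 54 just means 54 of the 100 checks passed.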

The FMTI methodology and indicators are designed to avoid conflicts between transparency and other values like privacy and security.

Why should I care?

As foundation models spread across sectors, transparency is crucial for properly regulating these powerful systems and ensuring they are built and used responsibly. This lack of transparency makes it hard for businesses, academics, regulators and the public to understand these increasingly influential technologies.

Without basic details of how models work, issues like bias, privacy violations, and other harms can't even be identified, let alone addressed. Nine of the 10 companies have committed to managing AI risks, and the researchers hope the index will help them follow through. They also want to inform policymakers considering regulation around foundation models.
