Artificial Intelligence Companies Aren’t Very Transparent, Report Finds


Researchers from Stanford University have published a report ranking how transparent artificial intelligence (AI) models are and found them wanting.

The findings from Stanford Human-Centered Artificial Intelligence (HAI) were compiled into its Foundation Model Transparency Index and gave 10 leading AI models a score out of 100.

Among the models that Stanford HAI looked at were Stability AI’s image generator Stable Diffusion, Meta’s Llama 2, and OpenAI’s ChatGPT.

Meta’s Llama 2, a generative text model, scored highest with 54 out of 100. However, the researchers note that even this score is not close to “adequate transparency,” which, they say, reveals a “fundamental lack of transparency in the AI industry.”

Stable Diffusion received an overall score of 47 out of 100, placing it fourth on the list.

Breaking down the data, Stable Diffusion received a perfect 100 in the “Model Access” category, presumably because its training dataset, LAION-5B, is publicly accessible; websites like Have I Been Trained allow photographers and other creatives to see whether their images were included in the set. Spoiler alert: if your images have ever been online, they probably are in there.

However, Stable Diffusion received just 14 out of 100 in the “Impact” category, which looks at the impact a model has on its users and the policies that govern its use.

But Stable Diffusion was not alone in scoring low on “Impact”: The Verge notes that none of the models’ creators disclose any information about the technology’s impact on society, including where users can direct complaints about privacy, copyright, or bias.

Rishi Bommasani, society lead at the Stanford Center for Research on Foundation Models and one of the researchers behind the index, wants it to serve as a benchmark for governments and companies.

“What we’re trying to achieve with the index is to make models more transparent and disaggregate that very amorphous concept into more concrete matters that can be measured,” Bommasani tells The Verge.

The EU’s proposed AI Act is still in the works and could force AI companies to be more open about how their models are built.


Image credits: Header photo licensed via Depositphotos.
