The European Union (EU) plans to introduce new legislation in 2023 that will force the companies behind AI image generators to be more open about how their models are built.
The EU artificial intelligence (AI) act is the first law on AI by a major regulator anywhere in the world. Lawmakers in Europe are working on rules for image-producing AI generative models, such as DALL-E, Stable Diffusion, and Midjourney.
As noted by MIT Technology Review, the models that AI companies use are fiercely guarded. Midjourney founder David Holz admitted to using “hundreds of millions” of images scraped from the internet without obtaining permission.
While Holz’s admission outraged many, the EU AI Act will force companies to shed yet more light on the inner workings of their AI models. The full details of the Act are yet to be finalized, but MIT Technology Review reports that companies wanting to sell or use AI products in the EU will have to comply or face fines of up to 6% of their total worldwide annual turnover. The Act is expected to become law this spring.
The Act is causing consternation among company owners and venture capitalists alike after it became clear the legislation will cover many more companies than first expected.
Science Business reports growing concerns that the EU Act will have a significant impact on start-ups. In a survey presented to the European AI Forum in December, 73% of respondents said they expect the Act to reduce the competitiveness of AI firms based in Europe.
“Regulating these technologies is tricky, because there are two different sets of problems associated with generative models, and those have very different policy solutions,” Alex Engler, an AI governance researcher at the Brookings Institution, tells MIT Technology Review.
“One is the dissemination of harmful AI-generated content, such as hate speech and nonconsensual pornography, and the other is the prospect of biased outcomes when companies integrate these AI models into hiring processes or use them to review legal documents.”
Engler believes that generative models should restrict what they can produce and monitor their outputs so the technology is not abused.
Image credits: Header photo licensed via Depositphotos.