A new law that means all artificial intelligence (AI) generated photos and videos must be labeled is being considered in the U.K.
The Times of London reports that U.K. prime minister Rishi Sunak is currently considering the legislation in a bid to regulate the fast-emerging technology and combat the threat of deepfakes.
As part of the plans being considered by Sunak, any pictures and videos that are made by AI will have to be labeled clearly.
According to The Times of London, Sunak is working on national guidelines for the AI industry that he will present before a global safety summit to be hosted in the U.K. in the autumn.
However, the prime minister hopes that the planned laws will form a template for legislation that could be implemented across the world.
The U.K. government has also begun work on a British AI safety agency that will assess the most powerful models to prevent them from deviating from their intended goals.
Sunak previously said that he was concerned about warnings of the threat posed to humanity by the most powerful AI systems.
“[AI is] going to reshape every aspect of our lives,” Sunak reportedly told Sky News.
“And whilst that will bring many opportunities and benefits, it also poses risks, not just existential, but also risks such as misuse of the technology. And that’s why guardrails are important and regulation is important.”
The Threat of Deepfakes
Deepfakes continue to cause serious concern. In May, an AI-generated photo showing a fake explosion near the Pentagon in Washington, D.C. went viral, even causing the markets to briefly dip.
A series of photorealistic AI images that show Donald Trump being arrested also went viral online and underlined the technology’s dangers.
Earlier this month, the European Union (EU) also called on tech companies that generate AI content to label it, and it will require social media companies to do so under the forthcoming Digital Services Act.
Google has also pledged to label AI-generated images to help users better understand the origins of a photograph.