With next year’s presidential elections taking place in an unprecedented age of artificial intelligence, Facebook and Instagram have announced a new requirement that political ads generated by AI must be disclosed.
Labels indicating that a political ad was generated by AI will appear on the content. The rules come into effect on January 1 and will apply worldwide, covering any advertisement about a social issue, election, or political candidate.
The regulations are designed so that potentially misleading images, videos, or audio are properly labeled. The rise of generative AI tools means realistic content can now be created by just about anyone, giving bad actors the opportunity to spread misinformation.
“In the New Year, advertisers who run ads about social issues, elections, and politics with Meta will have to disclose if image or sound has been created or altered digitally, including with AI, to show real people doing or saying things they haven’t done or said,” Nick Clegg, Meta’s president of global affairs, said in a Threads post on Wednesday.
Standard alterations to images, such as cropping and color correction, don’t need to be disclosed, but using AI to fabricate depictions of real events is off-limits.
Microsoft has also unveiled its own initiative to combat the spread of fake news in 2024’s crunch election year: a tool that allows political campaigners to insert a digital watermark into their ads.
AP reports that time is quickly running out for lawmakers in the United States to pass regulations on AI before next year’s election. Officials in Europe are ahead of their American peers in creating a legal framework for AI.
Earlier this year, the Republican party made an entirely AI-generated video imagining what the United States would be like if Biden were elected to a second term in office; unsurprisingly, it depicted the scenario in apocalyptic terms. That ad, however, was labeled as AI-generated.
But the fear is that if political campaigns do not disclose AI-generated material, such as when Ron DeSantis attacked Trump with an ad containing fake images, people may not know that what they’re looking at is fake.
“It’s gotten to be a very difficult job for the casual observer to figure out: What do I believe here?” Vince Lynch, an AI developer and CEO of the AI company IV.AI, tells AP. “The companies need to take responsibility.”
Image credits: Header photo licensed via Depositphotos.