Meta Will Add ‘Made With AI’ Labels on Images and Videos Next Month


Meta will add a “Made with AI” label to images and videos detected to be AI-generated across Facebook, Instagram, and Threads starting next month.

In a blog post on Friday, Meta revealed that it had updated its AI-generated content policy. Beginning in May, the company will apply a “Made with AI” label to content on its platforms.

According to Meta’s post, the new policy applies across Instagram, Facebook, and Threads, and the company will begin labeling more video, audio, and image content as AI-generated from next month onward.

Labels will be applied to content either when users disclose the use of AI tools or when Meta detects “industry standard AI image indicators.” However, the company did not offer any further details about its detection system for AI-generated images.
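Meta has not published its detection pipeline, but the “industry standard” indicators it refers to are generally understood to include markers such as IPTC’s Digital Source Type field and C2PA Content Credentials, which generative tools can embed in a file’s metadata. The sketch below is illustrative only, not Meta’s implementation; the function name and example file path are placeholders, and a real system would verify signed C2PA manifests rather than scan raw bytes.

```python
# Illustrative sketch only: Meta has not disclosed its detection system.
# Tools that follow the IPTC/C2PA standards can embed a "digital source
# type" marker in a file's XMP metadata; the value
# "trainedAlgorithmicMedia" denotes AI-generated media.
# looks_ai_generated() and the example path are hypothetical names.

AI_MARKER = b"trainedAlgorithmicMedia"  # IPTC Digital Source Type value


def looks_ai_generated(path: str) -> bool:
    """Crude check: scan the raw file bytes for the IPTC marker.

    Real detectors parse and cryptographically verify C2PA manifests and
    may also look for invisible watermarks; a byte search is only meant
    to show the idea of an 'industry standard AI image indicator'.
    """
    with open(path, "rb") as f:
        return AI_MARKER in f.read()


if __name__ == "__main__":
    print(looks_ai_generated("example.jpg"))  # hypothetical input file
```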

‘This Technology Is Quickly Evolving’

Following recommendations and feedback from Meta’s Oversight Board, the company will also update the manipulated media policy that it created in 2020.

The old policy only prohibited videos that were created or altered by AI tools to make a person appear to say something they didn’t say. However, Meta acknowledged that this existing policy is “too narrow” and doesn’t cover the wide range of AI-generated content that has recently flooded the internet.

“In the last four years, and particularly in the last year, people have developed other kinds of realistic AI-generated content like audio and photos, and this technology is quickly evolving,” Meta writes in the blog post.

“As the Board noted, it’s equally important to address manipulation that shows a person doing something they didn’t do.”

Meta says the “Made with AI” labels will provide transparency and additional context for AI-generated content. The company argues that labels and added context are a better way to address manipulated media while avoiding the risk of unnecessarily restricting freedom of speech.

Meta first announced that it had been working with industry partners on common technical standards for identifying AI content, including video and audio, back in February.

Meta previously noted that it was crucial to provide more transparency on AI-generated images at this particular time, given that the 2024 U.S. election race is already well underway and deepfake images of politicians have already been circulated online.

Image credits: All photos by Meta.
