Meta’s Oversight Board to Investigate Two Cases of Explicit AI Images

Meta’s semi-independent policy body, the Oversight Board, will investigate two instances of sexually explicit AI-generated images of women being shared on Facebook and Instagram.

Both cases involve explicit AI images of female public figures who have not been named “to avoid gender-based harassment.” However, the two cases concern different markets: one is in India and the other is in the United States.

First Case

An explicit image of a famous Indian woman was shared to an Instagram account that exclusively posts AI-generated pictures of Indian women; the majority of users who engaged with the image are based in India.

A user reported the image to Meta as pornography, but the report was automatically closed because it was not reviewed within 48 hours. The same user then appealed Meta’s decision, but that appeal was also automatically closed, meaning the offending image stayed live on Instagram.

The user then took their case to the Oversight Board, which decided to investigate. Once the Board accepted the case, Meta acknowledged it had left the content up “in error” and removed the post for violating its Bullying and Harassment Community Standard.

Second Case

The second case pertains to an AI-generated image of an American public figure that was posted to a Facebook group for AI pictures.

The image showed the nude woman with a man groping her breast, and her name was included in the caption. Most of the users who reacted to the image were based in the United States.

A different user had already posted the image, which led to the case being escalated to Meta’s policy subject matter experts. They removed the content as a violation of the Bullying and Harassment policy, specifically its rule against “derogatory sexualized Photoshop or drawings.”

The AI image was then added to Meta’s Media Matching Service Bank, which automatically finds and removes images that have already been identified by a human reviewer as breaking Facebook’s Community Standards. The image was removed, but the user who posted it appealed to Meta and then subsequently to the Oversight Board.

Request for Public Comments

The Board says that it has chosen these cases to “assess whether Meta’s policies and its enforcement practices are effective at addressing explicit AI-generated imagery.”

The Board has requested public comments about the harms of deepfake pornography, particularly how it affects women, as well as comments on Meta’s automated systems that close appeals within 48 hours if no review has taken place. Contributors can also advise on broader strategies for how the company can tackle deepfake pornography posted on its platforms.

Anonymous public comments can be shared here.


Image credits: Header photo licensed via Depositphotos.