Two Separate Reports Scrutinize Facebook’s Policies and Algorithm

One new report found that for the past six months Facebook had been accidentally elevating harmful content instead of suppressing it. A second found that its internal policies may have resulted in the underreporting of photos of child abuse.

Facebook May Underreport Child Abuse

A leaked document seen by the New York Times suggests that Facebook has likely been underreporting images of potential child sexual abuse. In contrast to how competitors Apple, Snap, and TikTok handle such reports, Facebook had been instructing its moderators to “err on the side of an adult” when assessing images. The Times reports that moderators had taken issue with this stance, but Facebook executives defended it.

The main issue at hand is how Facebook content moderators should report images when the people in them are not immediately recognizable as children. Suspected child abuse imagery is reported to the National Center for Missing and Exploited Children (NCMEC), which reviews the reports and refers likely violations to law enforcement. Images that are judged to feature adults may be removed under Facebook’s rules, but they are not reported to any outside organization or authority.

But the Times notes that there is no reliable way to determine age from a photograph. Facebook moderators rely on an old method based on the “progressive phases of puberty,” a methodology that was not designed to determine age. This, combined with Facebook’s policy of assuming a photo depicts an adult when age is not immediately obvious, has led moderators to believe that many photos of abused children are going unreported.

Facebook reports more child sexual abuse material to NCMEC than any other company and argues that its policy is designed to protect users’ privacy and avoid false reporting, which it says could create legal liability for the company. But as mentioned, Facebook’s main competitors all take the opposite approach to reporting.

Bug Led to Increased Views of Harmful Content

Facebook’s troubles this week do not end there. According to The Verge, Facebook engineers identified a massive ranking failure in the company’s algorithm that was mistakenly exposing as many as half of all News Feed views to “integrity risks” over the last six months.

In short, the internal report obtained by The Verge shows that instead of suppressing posts from repeat misinformation offenders, the algorithm was giving those posts increased distribution, resulting in spikes of up to 30% in views of that content globally. Engineers were initially unable to find the cause of the issue, which died down before ramping back up again in early March. Only then were they able to isolate and resolve the ranking failure.

It should be noted that there does not appear to be any malicious intent behind the ranking issue, but Sahar Massachi, a former member of Facebook’s Civic Integrity team, tells The Verge that it is a sign that greater transparency into the algorithms used by platforms like Facebook is needed.

“In a large complex system like this, bugs are inevitable and understandable,” Massachi says.

“But what happens when a powerful social platform has one of these accidental faults? How would we even know? We need real transparency to build a sustainable system of accountability, so we can help them catch these problems quickly.”


Image credits: Header photo licensed via Depositphotos.
