Meta is Expanding Its Child Protection Efforts in Response to Whistleblower Testimony


Meta on Friday announced a list of new efforts to keep kids safe on its platforms, coming shortly after a whistleblower testified to Congress about the company’s failure to protect children.

Meta laid out several steps it is taking to make children safer on its social media apps like Facebook and Instagram, including changing the content it suggests to younger users, targeting “suspicious adults,” and working with online child safety experts. This all comes after a former Meta employee told Congress less than a month ago that the company has not done enough to keep minors safe. And earlier this week, Meta sued the FTC, the latest move in a legal battle in which the FTC claims Meta has failed to make good on the privacy promises it made in a 2020 settlement.

“We take recent allegations about the effectiveness of our work very seriously, and we created a task force to review existing policies; examine technology and enforcement systems we have in place; and make changes that strengthen our protections for young people, ban predators, and remove the networks they use to connect with one another,” Meta’s statement reads, alluding to the recent testimony made against it.

Reels and Instagram Explore, both of which feed and suggest content to users, will expand their protections. The social media giant says these tools are already designed to avoid suggesting upsetting or rule-breaking content, but a central list of restricted terms shared across Facebook and Instagram will grow to include misspellings and spelling variations, and will be used to better understand how language is used within predatory networks. This sounds like an attempt to combat the increasing use of “algospeak,” in which people use euphemisms to avoid having their content blocked, taken down, or flagged. Of course, by its very nature, algospeak is ever evolving, constantly updating to get ahead of the latest filters. It is used not only to spread inappropriate content but also to have conversations about serious topics like suicide and assault. Meta said it will also flag these terms for content reviewers so they can better understand the subtext.
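To make the idea concrete, here is a minimal, purely hypothetical sketch of how a restricted-term list might be expanded with character-substitution variants to catch simple algospeak before matching content against it. Meta has not published how its list actually works; the substitution table, seed terms, and function names below are all illustrative assumptions.

```python
# Hypothetical illustration only -- not Meta's actual system. A toy sketch of how a
# restricted-term list could be expanded to catch simple "algospeak" variants
# (misspellings, character substitutions) before matching content against it.
import itertools
import re

# Common character substitutions seen in evasive spellings (illustrative, not exhaustive).
SUBSTITUTIONS = {
    "a": ["a", "@", "4"],
    "e": ["e", "3"],
    "i": ["i", "1", "!"],
    "o": ["o", "0"],
    "s": ["s", "$", "5"],
}

def expand_term(term: str) -> set[str]:
    """Generate spelling variants of a restricted term via character substitution."""
    choices = [SUBSTITUTIONS.get(ch, [ch]) for ch in term.lower()]
    return {"".join(combo) for combo in itertools.product(*choices)}

def build_blocklist(terms: list[str]) -> set[str]:
    """Expand every seed term into its variants and merge them into one flat set."""
    blocklist: set[str] = set()
    for term in terms:
        blocklist |= expand_term(term)
    return blocklist

def flag_text(text: str, blocklist: set[str]) -> list[str]:
    """Return any blocklisted variants found in a piece of text."""
    words = re.findall(r"\S+", text.lower())
    return [w for w in words if w in blocklist]

if __name__ == "__main__":
    blocklist = build_blocklist(["example"])  # seed terms would come from policy teams
    print(flag_text("this post mentions 3x@mpl3 and exampl3", blocklist))
```

A real system would of course go far beyond static substitution tables, since algospeak evolves specifically to escape filters of this kind, which is presumably why Meta also routes flagged terms to human reviewers.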

Further, Meta says it will target “suspicious adults” using 60 signals to flag possible predators. These signals, the company explains, include things like “if a teen blocks or reports an adult, or if someone repeatedly searches for terms that may suggest suspicious behavior.” Meta says it already uses this technology to limit such adults’ interactions with young users, including finding and following their accounts, but the changes mean flagged adults will have a harder time finding and interacting with each other, too: they won’t be able to find each other on the Discover page or see each other’s comments on public posts. Groups, Pages, and Profiles flagged for things like a high percentage of these “suspicious adults,” or whose members greatly overlap with members of another Group removed for child safety policy violations, won’t be recommended to others. Meta explains it will also use specialists in law enforcement and child safety to help identify and remove these groups. These efforts, the company promises, will then be used to better train its tech to proactively identify and remove such groups in the future. The company adds that it has already removed thousands of profiles and groups fitting these descriptions.
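As a rough illustration of what signal-based flagging can look like, here is a minimal hypothetical sketch in which weighted behavioral signals are combined into a single score that restricts an account once it crosses a threshold. Meta has not disclosed its actual signals, weights, or logic; every name, weight, and cutoff below is an invented assumption for explanation only.

```python
# Hypothetical illustration only -- Meta has not published its actual signals or logic.
# A minimal sketch of combining weighted behavioral signals into a suspicion score.
from dataclasses import dataclass

# Illustrative signal weights; the article says Meta uses some 60 such signals.
SIGNAL_WEIGHTS = {
    "blocked_by_teen": 3.0,
    "reported_by_teen": 5.0,
    "suspicious_search_terms": 2.0,
    "mass_follow_requests_to_minors": 4.0,
}
RESTRICTION_THRESHOLD = 6.0  # arbitrary cutoff for this sketch

@dataclass
class AccountActivity:
    account_id: str
    signal_counts: dict[str, int]  # how many times each signal has fired

def suspicion_score(activity: AccountActivity) -> float:
    """Sum each observed signal, weighted by how strong an indicator it is."""
    return sum(
        SIGNAL_WEIGHTS.get(signal, 0.0) * count
        for signal, count in activity.signal_counts.items()
    )

def should_restrict(activity: AccountActivity) -> bool:
    """Flag the account for restrictions (e.g. hidden from recommendations) if its score is high."""
    return suspicion_score(activity) >= RESTRICTION_THRESHOLD

if __name__ == "__main__":
    account = AccountActivity(
        account_id="user_123",
        signal_counts={"blocked_by_teen": 1, "suspicious_search_terms": 2},
    )
    print(suspicion_score(account), should_restrict(account))  # 7.0 True
```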

Reporting and enforcement will also be strengthened under the new measures. Meta notes that it joined the group Lantern, which shares “a variety of signals about accounts and behaviors that violate their child safety policies” across tech companies. Reports will be prioritized for content reviewers, and gaps in the system, such as an error that would close user reports, have been addressed.

Throughout its statement, Meta highlights actions taken to remove users, groups, and content that violate its policies or are flagged for possible predatory behavior. More than 250,000 devices have been blocked on Instagram for violations of its child safety policies since the beginning of August, 16,000 Groups have been reviewed and removed for child safety policy violations since July 1, and automated deletions of Instagram Lives containing adult nudity and sexual activity have increased fivefold since a new enforcement effort was launched in September, among other similar statistics Meta boasted in its post.

While these figures do highlight attempts to clean up Meta’s social media sites, they are also staggering in revealing just how much potentially harmful content is already present. One must also wonder how much more has been missed.


Image credits: Header photo licensed via Depositphotos.
