Leading AI Companies Promise to Protect Children From Dangers of AI

Three images demonstrating AI's ability to modify faces: a young woman with dark hair, a similar young woman altered to resemble Audrey Hepburn, and the same altered image with added youthful features.

Several major players in the artificial intelligence field pledged to protect children online, marking another chapter in the progression of AI safety.

The pledge, made in collaboration with child safety organization Thorn, responsible tech non-profit All Tech Is Human, and companies Amazon, Anthropic, Civitai, Google, Meta, Metaphysic, Microsoft, Mistral AI, OpenAI, and Stability AI, commits to a set of principles concerning the training, deployment, and maintenance of AI products. Notably, the harms these companies have pledged to fight are already occurring.

“In the same way that the internet accelerated offline and online sexual harms against children, misuse of generative AI presents a profound threat to child safety — with implications across child victimization, victim identification, abuse proliferation and more,” Thorn says in its release announcing the collaborative pledge. “This misuse, and its associated downstream harm, is already occurring — within our very own communities.”

Earlier this year, two teens from Miami were arrested for creating explicit AI-generated images of their classmates, who were also minors. Similar incidents have occurred in New Jersey, Seattle, and Los Angeles. Sexually explicit deepfake images of actor Jenna Ortega, depicting her at age 16 (she is now 21), were used in blurred ads on social media promoting an AI product that creates deepfake nudes.

“Yet, we find ourselves in a rare moment — a window of opportunity — to still go down the right path with generative AI and ensure children are protected as the technology is built,” Thorn continues, before laying out principles that “guard against the creation and spread of AI-generated child sexual abuse material (AIG-CSAM).”

These guidelines include:

- responsibly sourcing training datasets
- safeguarding datasets and generative AI products from child sexual abuse material (CSAM)
- continuously testing models’ capabilities to create abusive content
- developing with misuse in mind
- responsibly hosting models
- taking ownership of product safety
- preventing AI products from making misuse more accessible over time
- investing in research to continue safeguarding against abuse as practices evolve
- continuously fighting CSAM on platforms

“The collective commitments by these AI leaders should be a call to action to the rest of the industry,” Thorn says. “We urge all companies developing, deploying, maintaining, and using generative AI technologies and products to commit to adopting these Safety by Design principles and demonstrate their dedication to preventing the creation and spread of CSAM, AIG-CSAM, and other acts of child sexual abuse and exploitation.

“In doing so, together we’ll forge a safer internet and brighter future for kids, even as generative AI shifts the digital landscape all around us.”


Image credits: Thorn
