Deepfake Faces Could Protect People’s Privacy in Social Media Photos


Social media users could use deepfake faces generated by artificial intelligence (AI) to hide their identities in other people’s photos.

Researchers at Binghamton University and Intel have used AI deepfake techniques to subtly alter a person’s appearance in photos posted on social media so that only their friends and permitted contacts can see their original faces.

According to Metaphysic, the researchers have proposed various “face-access” models that replace “unapproved” faces in social media photographs with “quantitatively dissimilar” AI-generated deepfake faces.

The deepfake faces retain the gender, age, pose, and basic disposition of the people in the original photographs.

But while the subtly altered deepfake faces are not wholly dissimilar to the people in the original photograph, AI facial recognition technology cannot identify the individuals in the new images.

The subtly altered deepfake faces are therefore not accurate enough to serve as usable training data for further deepfakes, latent diffusion models, or other image synthesis systems.
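
To illustrate the underlying idea, a face recognizer typically compares embedding vectors and declares a match only when their similarity passes a threshold; a replacement face that sits far from the original in embedding space fails that check. The sketch below is a simplified illustration, not the researchers’ pipeline: the embeddings are random stand-ins and the threshold is an assumed value.

```python
import numpy as np

# Hypothetical precomputed face embeddings (e.g. 512-D vectors from a
# face-recognition network); random values here are stand-ins for real ones.
rng = np.random.default_rng(0)
original_face = rng.normal(size=512)
deepfake_face = rng.normal(size=512)

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Assumed verification threshold: a recognizer only declares a "match" when
# similarity exceeds it. A replacement face is chosen so this check fails.
MATCH_THRESHOLD = 0.4

similarity = cosine_similarity(original_face, deepfake_face)
print(f"similarity = {similarity:.3f}")
print("identified as same person" if similarity > MATCH_THRESHOLD
      else "not identified (anonymized)")
```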

My Face My Choice

In a recently published paper, the researchers outlined their proposed deepfake system called “My Face My Choice (MFMC).”

MFMC alters a photograph when an individual uploads it to social media, so that “outsiders” see every face in the image only as a broadly representative deepfake.

The transformations are generated by ArcFace, a 2022 project led by Imperial College London.

ArcFace optimizes the deepfake so that the new face has approximate visual parity with the “overwritten” face, without allowing real features to be copied over to the amended photo.
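
A rough sketch of how such a selection could work, assuming a pool of pre-generated synthetic faces with precomputed identity embeddings: filter the pool to donors matching the original face’s coarse attributes, then pick the donor least similar to it in identity-embedding space. The donor pool, attributes, and embeddings below are made-up placeholders, not the researchers’ actual data or code.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical donor pool of synthetic faces: an identity embedding plus
# coarse attributes (gender, age bracket, head pose) for each entry.
donors = [
    {"embedding": rng.normal(size=512),
     "gender": rng.choice(["female", "male"]),
     "age": rng.choice(["young", "adult", "senior"]),
     "pose": rng.choice(["frontal", "left", "right"])}
    for _ in range(200)
]

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def pick_replacement(face: dict, pool: list) -> dict:
    """Return the donor that matches the face's coarse attributes but is
    least similar to it in identity-embedding space."""
    candidates = [d for d in pool
                  if all(d[k] == face[k] for k in ("gender", "age", "pose"))] or pool
    return min(candidates,
               key=lambda d: cosine_similarity(face["embedding"], d["embedding"]))

original = {"embedding": rng.normal(size=512),
            "gender": "female", "age": "adult", "pose": "frontal"}
replacement = pick_replacement(original, donors)
print(cosine_similarity(original["embedding"], replacement["embedding"]))
```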

With MFMC, only the submitter can choose to reveal their own real face in the image. Friends and anyone else in the photo who want their real faces shown have to ask the submitter to “unlock” them.

The system also allows identities tagged by the user to be revealed, with the tag acting as a default “unlock” of the face. If the tag is removed, the face reverts to a deepfake.

Depending on how deeply entrenched a user is in the friend network represented in a photograph, they may see all or none of the “real” faces from the original image.
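
In code terms, these rules amount to a per-face access check applied at viewing time. The toy model below is a deliberately simplified sketch for illustration; the class names, fields, and unlock rules are assumptions, not the MFMC implementation. A face is shown as-is only if it is tagged or the uploader has unlocked it for that viewer; otherwise the viewer gets the deepfake stand-in.

```python
from dataclasses import dataclass, field

@dataclass
class Face:
    person: str                       # who appears in this face region
    tagged: bool = False              # a tag acts as a default "unlock"
    unlocked_for: set = field(default_factory=set)  # viewers the uploader has approved

@dataclass
class Photo:
    uploader: str
    faces: list

def visible_face(face: Face, viewer: str, photo: Photo) -> str:
    """Decide what a given viewer sees for one face region."""
    if face.person == photo.uploader:
        # Assumption for this sketch: the submitter chooses to reveal their own face.
        return f"real face of {face.person}"
    if face.tagged or viewer in face.unlocked_for:
        return f"real face of {face.person}"      # tag or explicit unlock by the uploader
    return f"deepfake stand-in for {face.person}"  # everyone else sees a synthetic face

photo = Photo(uploader="alice",
              faces=[Face("alice"),
                     Face("bob", tagged=True),
                     Face("carol", unlocked_for={"dave"})])

for viewer in ("dave", "erin"):
    print(viewer, "->", [visible_face(f, viewer, photo) for f in photo.faces])
```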

The Choice to Appear in a Photo

The researchers say the system is designed to be adopted by existing social media platforms, rather than to constitute the basis of a new social media network.

While social media platforms such as Facebook and Instagram let users decide whether they are tagged in photos, there is no way to stop other users from sharing those photos.

Moreover, with the rise of AI facial recognition technology, it is becoming easier to identify the individuals in photographs whether or not they are tagged.

“In current social platforms… we all appear in hundreds of photos voluntarily or involuntarily,” the researchers write in the paper entitled My Face My Choice: Privacy Enhancing Deepfakes for Social Media Anonymization.

“We believe that the access rights should be designed per face, where everyone has freedom over which photos they appear [in].”


Image credits: Photos sourced from My Face My Choice: Privacy Enhancing Deepfakes for Social Media Anonymization


Image credits: Header photo licensed via Depositphotos.
