AI Image Generator DALL-E Now Allows Users to Edit Human Faces


OpenAI, the company behind AI image generator DALL-E, has announced that it will allow users to edit photos that contain human faces.

Previously, users were not allowed to produce images with realistic faces for fear of misuse such as creating deepfakes.

But now the company says it has made “improvements in our safety system” and is ready to support human faces after receiving feedback from users demanding the feature.

“Many of you have told us that you miss using DALL-E to dream up outfits and hairstyles on yourselves and edit the backgrounds of family photos,” OpenAI says in a statement.

“A reconstructive surgeon told us that he’d been using DALL-E to help his patients visualize results. And filmmakers have told us that they want to be able to edit images of scenes with people to help speed up their creative processes.”

OpenAI has positioned itself as a more brand-friendly AI image generator than competitors such as Midjourney or Stable Diffusion, with Stable Diffusion taking the opposite approach by including next to no safety filters.

“We made our filters more robust at rejecting attempts to generate sexual, political, and violent content — while also working to reduce false flags — and built new detection and response techniques to stop misuse,” OpenAI adds.

Users of DALL-E, which is still invitation-only, can upload their own photos and either edit them or ask the technology to generate variations of them.

It is worth noting that the DALL-E content policy still prevents users from uploading images of people who have not given their consent, or images that the user does not have the rights to use. However, it is unclear how DALL-E will enforce this.

Nascent Industry

OpenAI is backed by Microsoft, as well as large venture capital firms. In these early days of text-to-image generators, DALL-E stands out as a more mainstream model by considering legal and ethical issues more carefully than Stable Diffusion.

The powerful technology could potentially allow lay people to create convincing deepfakes, which are anathema to a functioning, trustworthy society.

Image credits: Header image generated by DALL-E.