
This ‘DeepFaceDrawing’ AI Turns Simple Sketches Into Portrait Photos


Researchers at the Chinese Academy of Sciences have created a deep learning algorithm that can turn rudimentary “freehand sketches” into hyper-realistic photographs that are nearly indistinguishable from real-life portraits.

The technology is described in a paper released earlier this month and will be shown off at this year’s (online-only) SIGGRAPH conference in July. While it’s not the first implementation of so-called “sketch-to-image translation,” the results are far superior to previous attempts.

They achieved this by treating each facial feature locally first, and then the face as a whole, essentially assigning a probability to each feature. That way you don’t need a professional sketch to generate a realistic-looking image, but the better the sketch, the more accurate the results become. What’s more, the software works in near-real-time, as you can see from the demo video below:
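To give a rough flavor of the local-feature idea, here is a minimal, hypothetical sketch in Python: a crude sketch’s feature vector for one facial component (say, an eye) is pulled toward a blend of its nearest neighbors among learned samples of realistic features, so implausible strokes get “corrected” toward the data. The function name, dimensions, and random sample matrix are illustrative stand-ins, not the researchers’ actual code.

```python
import numpy as np

def project_to_manifold(feature, samples, k=3):
    """Replace `feature` with a distance-weighted blend of its k nearest
    rows in `samples`, nudging a rough sketch toward plausible data."""
    dists = np.linalg.norm(samples - feature, axis=1)
    nearest = np.argsort(dists)[:k]          # indices of the k closest samples
    weights = 1.0 / (dists[nearest] + 1e-8)  # closer samples weigh more
    weights /= weights.sum()                 # normalize to a convex blend
    return weights @ samples[nearest]

# Illustrative usage: 100 stand-in "eye" feature vectors of dimension 16.
rng = np.random.default_rng(0)
eye_samples = rng.normal(size=(100, 16))
rough_eye = rng.normal(size=16)              # feature from a crude sketch
refined_eye = project_to_manifold(rough_eye, eye_samples)
```

In the real system each component (eyes, nose, mouth, face outline) would get its own learned feature space and a decoder network to fuse the refined components into a photo; the snippet above only illustrates the refinement step.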

This isn’t the first time researchers have created an AI that turns drawings into “photos,” nor is it the first time AI has been used to generate photo-realistic portraits of people who don’t actually exist, but we’ve never seen these kinds of results from such incredibly basic input.

But don’t take our word for it: this nightmare-inducing comparison pits the DeepFaceDrawing AI (bottom row) against other applications released thus far. Fair warning, the more rudimentary the AI, the more horrifying the results:

The researchers admit that the system requires further refining before it’s ready for prime time. For instance, the paper mentions that the AI “currently does not provide any control of color or texture in synthesized results,” which raises questions about how the AI would handle variation in race, for example. It’s also mostly limited to straight-on head shots because of a lack of training data.

All of this, says the paper, will be addressed in future iterations as more parameters and a wider variety of training data are applied.

To dive deeper into the technology and understand what exactly is going on here, check out the full research paper at this link. Notably, the researchers don’t mention any potential uses for this technology, but it doesn’t take much imagination to see how law enforcement could use it to turn rudimentary suspect sketches into full-blown portraits.

(via Engadget)
