Google Used a 64-Camera, 331-Light Array to Train Its Portrait Lighting AI

Portrait Light, an AI-based lighting feature that Google launched for new Pixel phones in September 2020, allows the photographer to change lighting direction and intensity in post. Google has published a blog post revealing how the company developed the technology.

In the Pixel Camera on Pixel 4, Pixel 4a, Pixel 4a (5G), and Pixel 5, Portrait Light is automatically applied post-capture to images in the default mode and to Night Sight photos that include people. In Portrait Mode photographs, Google says the feature provides more dramatic lighting to accompany the shallow depth-of-field effect already applied, resulting in what the company calls a “studio-quality” look.

Google says it believes lighting is a personal choice, so the developers wanted to let the photographer manually reposition the applied light and adjust its brightness within Google Photos to match their own preference.

Though Portrait Light debuted on the Pixel 4 and Pixel 5, it has since been brought to older Pixel phones, dating back to the Pixel 2, through a software update.

For the AI model to understand how changing the light direction affects a human face, Google needed millions of portraits lit from different directions under a wide range of lighting conditions.

“Portrait Light models a repositionable light source that can be added into the scene, with the initial lighting direction and intensity automatically selected to complement the existing lighting in the photo,” the developers of the technology explain in a blog post. “We accomplish this by leveraging novel machine learning models, each trained using a diverse dataset of photographs captured in the Light Stage computational illumination system.”

Google developed two models: automatic directional light placement and synthetic post-capture relighting. The first model looks at a given portrait and places a synthetic directional light in the scene, consistent with how a photographer would have positioned an off-camera light source in the real world. The second model then adds that synthetic light for the chosen lighting direction and portrait in a way that looks realistic and natural.
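Google's post does not spell out the network internals, but the division of labor between the two models can be sketched with plain image math. Below is a minimal Python illustration of the two-stage idea, assuming a toy heuristic in place of the learned light-placement model and a simple Lambertian shading term over estimated surface normals in place of the learned relighting model; the function names, the heuristic, and the shading shortcut are all assumptions for illustration, not Google's implementation.

```python
import numpy as np

def place_key_light(portrait_luma: np.ndarray) -> np.ndarray:
    """Stage 1 (illustrative stand-in): choose a directional light.

    Google uses a learned model for this; here we simply aim the synthetic
    key light from the side of the face that is already brighter, roughly
    mimicking a photographer reinforcing the existing lighting.
    """
    h, w = portrait_luma.shape
    left_mean = portrait_luma[:, : w // 2].mean()
    right_mean = portrait_luma[:, w // 2 :].mean()
    x = 0.5 if right_mean >= left_mean else -0.5   # light from the brighter side
    direction = np.array([x, 0.3, 1.0])            # slightly above, in front
    return direction / np.linalg.norm(direction)

def add_directional_light(image: np.ndarray,
                          normals: np.ndarray,
                          light_dir: np.ndarray,
                          intensity: float = 0.4) -> np.ndarray:
    """Stage 2 (illustrative stand-in): add a synthetic directional light.

    Google's second model is learned; this sketch instead uses a Lambertian
    term (surface normal dot light direction) over estimated per-pixel
    normals to brighten the lit side of the face. `image` is assumed to be
    a float array in [0, 1] of shape (H, W, 3); `normals` is (H, W, 3).
    """
    # n_dot_l is (H, W); clamp to ignore surfaces facing away from the light.
    n_dot_l = np.clip(np.einsum("hwc,c->hw", normals, light_dir), 0.0, 1.0)
    relit = image + intensity * n_dot_l[..., None] * image
    return np.clip(relit, 0.0, 1.0)
```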

To train these two models, Google took advantage of its Light Stage computational illumination system:

By placing a subject in the Light Stage and photographing them with each of 64 differently positioned cameras while firing the rig's 331 individually programmable LED light sources one at a time, the developers captured the data needed to simulate the different lighting conditions the AI would need to understand.
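The per-subject image count implied by those figures is simple arithmetic, and the capture can be laid out as a one-light-at-a-time (OLAT) grid. The sketch below assumes, for illustration, that every camera records the subject under every individual LED; the variable names are placeholders.

```python
from itertools import product

NUM_CAMERAS = 64   # differently positioned cameras in the Light Stage
NUM_LIGHTS = 331   # individually programmable LED light sources

# One-light-at-a-time (OLAT) capture plan for a single subject pose:
# each camera records the subject while each LED fires on its own.
capture_plan = [
    {"camera": cam, "light": led}
    for cam, led in product(range(NUM_CAMERAS), range(NUM_LIGHTS))
]

print(len(capture_plan))  # 64 * 331 = 21,184 images per subject pose
```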

In the image below, Google explains how the example images captured on the Light Stage, each illuminated by a single light source at a time, can be added together to form the appearance of the subject in any lighting environment:
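Because light adds linearly, those one-light-at-a-time captures act like a basis: scaling each image by how bright, and what color, its LED would be in a target environment and then summing reproduces the subject under that environment. Below is a hedged NumPy sketch of that weighted sum; the array shapes, variable names, and LED indices are assumptions for illustration.

```python
import numpy as np

def relight_from_olat(olat_images: np.ndarray,
                      light_weights: np.ndarray) -> np.ndarray:
    """Recombine one-light-at-a-time captures into a new lighting environment.

    olat_images:   (331, H, W, 3) array, one image per Light Stage LED.
    light_weights: (331, 3) array, RGB intensity of each LED in the target
                   environment (e.g. sampled from an HDR environment map).

    Since light is additive, the weighted sum of the OLAT images gives the
    subject's appearance under the combined lighting.
    """
    # Weight every OLAT frame by its light's color/intensity, then sum over lights.
    return np.einsum("nhwc,nc->hwc", olat_images, light_weights)

# Example: a warm key light from one LED plus a dim blue fill from another.
# (LED indices 12 and 200 are placeholders; any mixture of the 331 lights works.)
weights = np.zeros((331, 3), dtype=np.float32)
weights[12] = [1.0, 0.9, 0.8]
weights[200] = [0.1, 0.1, 0.3]
# relit = relight_from_olat(olat_images, weights)
```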

Google says it sees Portrait Light as the first of many steps it plans to take toward making post-capture lighting controls in mobile cameras more powerful using AI and machine learning.

(via Engadget)