Scientists Create Incredible Tool That Realistically Relights Photos in 3D
Researchers at the Computational Photography Lab at Simon Fraser University in British Columbia, Canada, have developed a new software tool that can relight any photograph with precise, physically based controls, similar to what software like Blender can do with computer-generated graphics.
The research project, described in the new paper “Physically Controllable Relighting of Photographs,” brings the kind of precise lighting control to photo editing that is typically reserved for computer graphics artists using tools like Blender and Unreal Engine. While AI-based tools that can relight photos already exist, those rely on trained neural networks that attempt to guess how light might interact with a scene, based on the similar scenes in their training data.
This new work also uses a neural network, but not throughout the entire relighting process. Instead, the tool builds a 3D map of the 2D image, enabling a customized, controllable light map specific to that exact photograph.
When a photograph is fed into the new software, the system estimates a 3D version of the scene stripped of any lighting. This part of the method builds upon prior research at Simon Fraser University by Chris Careaga, a Ph.D. student and lead author of the new paper. Last year, Careaga and colleagues developed AI that can understand the light in photographs.
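The lighting-free scene estimate rests on the classic intrinsic-decomposition idea from the lab's earlier work: a photo is modeled, per pixel, as surface color (albedo) multiplied by received light (shading). The toy sketch below illustrates only that multiplicative model; the shading map is assumed given here, whereas the actual research estimates it with a neural network, and none of the array values are from the paper.

```python
import numpy as np

def recover_albedo(image, shading, eps=1e-6):
    """Invert the intrinsic model I = A * S per pixel to get
    the lighting-free albedo A. eps guards against division by zero."""
    return image / np.maximum(shading, eps)

# Toy 2x2 grayscale example: a uniform gray surface (albedo 0.5)
# photographed under uneven lighting.
shading = np.array([[1.0, 0.8],
                    [0.6, 0.4]])   # brighter at top-left, darker at bottom-right
image = 0.5 * shading              # what the camera would record
albedo = recover_albedo(image, shading)
# albedo comes back uniform (0.5 everywhere): the lighting is removed
```

Once the lighting is factored out this way, the remaining albedo and estimated geometry can be lit from scratch, which is what makes the relighting controllable rather than guessed.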
“After creating the 3D scene, users can place virtual light sources into it, much like they would in a real photo studio or 3D modeling software,” says Careaga. “We then interactively simulate the light sources defined by the user with well-established techniques from computer graphics.”
At this point, a rough but physically accurate preview shows the scene under its new lighting, giving users the feedback they need to dial in their desired setup. A neural network the researchers developed then finalizes the photorealistic relit image.
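The "well-established techniques from computer graphics" behind such a preview can be illustrated with the standard Lambertian point-light model: each pixel's brightness depends on its albedo, the angle between its surface normal and the light, and inverse-square falloff with distance. This is a minimal sketch of that general rendering idea, not the paper's actual renderer; all arrays and the light placement are made-up stand-ins.

```python
import numpy as np

def relight_preview(albedo, normals, positions, light_pos, intensity=1.0):
    """Diffuse shading for one point light:
    out = albedo * intensity * max(n . l, 0) / r^2  (Lambert + falloff)."""
    to_light = light_pos - positions                     # surface-to-light vectors
    r2 = np.sum(to_light ** 2, axis=-1, keepdims=True)   # squared distance
    l_dir = to_light / np.sqrt(r2)                       # unit light direction
    ndotl = np.clip(np.sum(normals * l_dir, axis=-1, keepdims=True), 0.0, None)
    return albedo * intensity * ndotl / r2

# Toy 1x2-pixel scene: a flat surface facing +z, with a "studio" light
# placed directly above the first pixel.
albedo = np.full((1, 2, 3), 0.8)                         # uniform surface color
normals = np.tile([0.0, 0.0, 1.0], (1, 2, 1))            # all facing up
positions = np.array([[[0.0, 0.0, 0.0],
                       [1.0, 0.0, 0.0]]])                # per-pixel 3D points
out = relight_preview(albedo, normals, positions,
                      light_pos=np.array([0.0, 0.0, 2.0]))
# The pixel under the light renders brighter than its off-axis neighbor.
```

Because the light is an explicit object in the estimated 3D scene, moving it or changing its intensity re-renders predictably, which is the "physical control" the tool offers before the neural network polishes the result.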
While the research, presented last week at SIGGRAPH in Vancouver, currently works only on still images, the team wants to extend the relighting technology to video, which would help filmmakers and VFX artists avoid costly and sometimes impossible reshoots.
“As this technology continues to develop, it could save independent filmmakers and content creators a significant amount of time and money,” says Dr. Yağız Aksoy, the leader of Simon Fraser’s Computational Photography Lab. “Instead of buying expensive lighting gear or reshooting scenes, they can make realistic lighting changes after the fact, without having to filter their creative vision through a generative AI model.”
PetaPixel previously covered the Computational Photography Lab’s groundbreaking research in 2021 when scientists at Simon Fraser University taught AI cameras how to more accurately see depth in photos, research that continues to inform ongoing work, including the new relighting methods.
Image credits: Simon Fraser University’s Computational Photography Lab. The research paper discussed, “Physically Controllable Relighting of Photographs,” is written by Chris Careaga and Yağız Aksoy.