A research team from the University of Maryland has developed an artificial intelligence (AI)-powered method to reconstruct complex scenes and objects in 3D using only the reflections in a person’s eye.
“The reflective nature of the human eye is an underappreciated source of information about what the world around us looks like. By imaging the eyes of a moving person, we can collect multiple views of a scene outside the camera’s direct line-of-sight through the reflections in the eyes,” explain the researchers.
Reconstructing a 3D scene from eye reflections is challenging for two primary reasons. First, the researchers explain, it is hard to accurately estimate a person’s eye pose, making it difficult to reconstruct a scene from the reflection. Second, the iris texture of the human eye interacts with surface reflections in complex ways: the underlying texture of the eye can significantly alter the appearance of the reflections.
“The cornea geometry is approximately the same across all healthy adults. Because of this fact, if we count the pixel size of a person’s cornea in the image, we can compute exactly where their eyes are. Using this insight, we train the radiance field on the eye reflections by shooting rays from the camera, and reflecting them off the approximated eye geometry. To remove the iris from showing up in the reconstruction, we perform texture decomposition by simultaneously training a 2D texture map that learns the iris texture,” the researchers write.
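The two geometric steps the researchers describe — locating the eye from its apparent pixel size, and bouncing camera rays off the approximated eye surface — can be sketched with a pinhole-camera depth estimate and a sphere-reflection intersection. This is a simplified illustration, not the paper’s implementation: the constants (cornea size, focal length) and the spherical eye model are assumptions for the sketch.

```python
import numpy as np

# Assumed constants for illustration (not the paper's exact values):
CORNEA_DIAMETER_MM = 11.5   # roughly constant across healthy adults
FOCAL_LENGTH_PX = 2800.0    # example camera focal length in pixels

def estimate_eye_depth(cornea_width_px):
    """Pinhole-camera depth estimate: depth = focal * real_size / pixel_size."""
    return FOCAL_LENGTH_PX * CORNEA_DIAMETER_MM / cornea_width_px

def reflect_ray_off_sphere(ray_origin, ray_dir, center, radius):
    """Intersect a camera ray with a sphere approximating the eye surface
    and return the hit point plus the mirror-reflected direction."""
    ray_dir = ray_dir / np.linalg.norm(ray_dir)
    oc = ray_origin - center
    b = 2.0 * np.dot(oc, ray_dir)
    c = np.dot(oc, oc) - radius ** 2
    disc = b * b - 4.0 * c
    if disc < 0:
        return None  # the ray misses the eye entirely
    t = (-b - np.sqrt(disc)) / 2.0       # nearest intersection
    hit = ray_origin + t * ray_dir
    normal = (hit - center) / radius     # outward surface normal
    reflected = ray_dir - 2.0 * np.dot(ray_dir, normal) * normal
    return hit, reflected
```

The reflected rays, rather than the camera rays themselves, are what get traced into the radiance field, which is how the scene outside the camera’s line of sight becomes recoverable.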
As Gizmodo reports, the new research, conducted by Hadi Alzayer, Kevin Zhang, Brandon Feng, Christopher Metzler, and Jia-Bin Huang, relies upon previous research on neural radiance field (NeRF) technology. NeRF can create novel views of a 3D scene using 2D data inputs.
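At the core of NeRF is a volume-rendering rule that composites color samples along each camera ray, weighting each sample by its density and by how much of the ray survives to reach it. A minimal NumPy sketch of that quadrature rule, using the standard NeRF formulation rather than anything specific to this paper:

```python
import numpy as np

def volume_render(densities, colors, deltas):
    """Composite per-sample colors along one ray (NeRF quadrature):
    alpha_i = 1 - exp(-sigma_i * delta_i); weight_i = T_i * alpha_i,
    where T_i is the transmittance accumulated before sample i."""
    alphas = 1.0 - np.exp(-densities * deltas)
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    weights = trans * alphas
    return (weights[:, None] * colors).sum(axis=0)
```

Training adjusts the densities and colors so that rays rendered this way reproduce the input 2D images, after which the same field can be rendered from novel viewpoints.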
In the case of reconstructing a 3D scene using eye reflections, the researchers had to overcome numerous challenges, including finding ways to compensate for iris texture and to estimate cornea poses. To do so, the team estimated the eye’s texture and devised a way to map the distorted reflection on the curved cornea back into a conventional, undistorted perspective. In doing so, the researchers also drew on extensive prior research on human eye geometry.
In real-world experimentation, the team successfully reconstructed a room in 3D using eye reflections. While the results are not especially high resolution, they are fascinating.
The team explains that standard NeRF techniques are insufficient because of noise inherent in cornea localization, complex iris textures, and the low-resolution nature of small reflections. To overcome these obstacles, the team introduced novel cornea pose optimization and iris texture decomposition methods during its training process. The method also allows for improved scene reconstruction when a person moves their head, which is unique, as other NeRF methods utilize a moving camera rather than a moving subject. When subjects turn their heads from side to side, the additional data in subsequent image captures improves the results.
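The texture decomposition idea — learning a 2D iris texture alongside the scene so the iris does not contaminate the reconstruction — can be illustrated with a toy least-squares fit. This is a deliberately simplified stand-in for the paper’s method (which trains the texture map jointly with the radiance field): here the observed eye pixel is modeled as a fixed “reflected scene” term plus a per-texel iris texture, and only the texture is optimized by gradient descent with a small L2 penalty.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data (all values hypothetical): observed eye pixels are modeled as
# reflected scene radiance plus an unknown iris texture to be recovered.
scene = rng.uniform(size=(8, 8, 3))            # stand-in for the rendered reflection
true_texture = 0.2 * rng.uniform(size=(8, 8, 3))
observed = scene + true_texture                # captured eye-region pixels

texture = np.zeros_like(true_texture)          # learnable 2D texture map
lr, weight_decay = 0.5, 1e-3
for _ in range(200):
    residual = (scene + texture) - observed           # photometric error
    grad = 2.0 * residual + weight_decay * texture    # L2-regularized gradient
    texture -= lr * grad
```

Because the iris pattern is explained by the texture map, the radiance field is free to account only for the reflected scene, which is the part that matters for reconstruction.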
“With this work, we hope to inspire future explorations that leverage unexpected, accidental visual signals to reveal information about the world around us, broadening the horizons of 3D scene reconstruction,” the researchers conclude.
Image credits: “Seeing the World through Your Eyes” by Hadi Alzayer, Kevin Zhang, Brandon Feng, Christopher Metzler, and Jia-Bin Huang / University of Maryland, College Park