PetaPixel

Researchers Reconstruct Highly Accurate 3D Scenes Using High-Res Photos

3D modeling for movies and video games is often done using lasers. The modeler scans whatever they are trying to reconstruct with a laser and then spends a good bit of time cleaning up the results in post. In contrast, a new method developed by the folks at Disney Research Zurich promises to generate much more accurate results by replacing the lasers with photos.

The paper the researchers published detailing the technique is called Scene Reconstruction from High Spatio-Angular Resolution Light Fields. In case you don’t speak that level of tech, it basically means generating 3D scenes by analyzing the light in high-res 2D imagery on a pixel-by-pixel basis (instead of analyzing patches of pixels). According to the paper, this method captures the real world “in unparalleled detail.”


First, the modeler takes many high-resolution 2D photographs of the scene from different angles (in the video above, this appears to have been done using time-lapse photography), after which algorithms analyze the light at each pixel and generate very accurate depth information.
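The core idea behind turning many photos into per-pixel depth can be sketched with a toy plane-sweep in NumPy: for each pixel, test a range of candidate depths (disparities) and keep the one where all the views agree on color. To be clear, this is an illustration of the general principle, not the paper's actual algorithm; the function name, the synthetic scene, and the wrap-around shifting are all assumptions made for the sketch.

```python
import numpy as np

def estimate_disparity(views, positions, disparities):
    """Toy per-pixel plane sweep over a 1D camera array.

    For each candidate disparity, shift every view back toward the
    center view and score photo-consistency as the variance across
    views; each pixel keeps the disparity with the lowest variance.
    """
    h, w = views[0].shape
    center = positions[len(positions) // 2]
    best_cost = np.full((h, w), np.inf)
    best_disp = np.zeros((h, w))
    for d in disparities:
        warped = []
        for img, x in zip(views, positions):
            # A point at disparity d appears shifted by baseline * d,
            # so undo that shift (np.roll wraps at the borders).
            shift = int(round((x - center) * d))
            warped.append(np.roll(img, -shift, axis=1))
        cost = np.stack(warped).var(axis=0)  # low variance = views agree
        better = cost < best_cost
        best_cost[better] = cost[better]
        best_disp[better] = d
    return best_disp

# Synthetic check: five views of a flat scene whose true disparity is 2.
rng = np.random.default_rng(0)
base = rng.random((8, 32))
positions = [0, 1, 2, 3, 4]
views = [np.roll(base, (x - 2) * 2, axis=1) for x in positions]
disp = estimate_disparity(views, positions, disparities=[0, 1, 2, 3])
```

Real scenes are far harder than this flat synthetic one (occlusions, textureless regions, non-integer shifts), which is where the per-pixel, high-resolution analysis described in the paper comes in.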

Although building 3D reconstructions from 2D photos isn't by any means new, what the researchers have managed to do is apply the technique to the kinds of cluttered, complex environments that typically cause the most problems. And what's more, they've designed the software to run on a "standard graphics processing unit," with no need for specialized or expensive hardware.

Even though the software does a decent job creating a 3D representation from a single photo, using many photos lets it reconstruct the scene from every angle, so that moving around it in a virtual world leaves no blind spots.


For the video at the top, each scene was photographed 100 times using a 21-megapixel DSLR. The resulting scenes are extremely accurate compared to the 1 to 2 megapixel reconstructions typically achieved with other methods.

According to Phys.org, the researchers are currently presenting their findings at ACM SIGGRAPH in Anaheim, CA. To find out more about the methods used and read up on all the interesting (but very complicated) research behind this new approach, check out the full research paper.

(via Polygon)