Researchers Reconstruct Highly-Accurate 3D Scenes Using High-Res Photos

3D modeling for movies and video games is often done using lasers. The modeler scans the object they are trying to reconstruct with a laser and then spends a good bit of time cleaning up the results in post. In contrast, a new method developed by the folks at Disney Research Zurich promises to generate much more accurate results by replacing the lasers with photos.

The paper the researchers published detailing the technique is called Scene Reconstruction from High Spatio-Angular Resolution Light Fields. In case you don’t speak that level of tech, it basically means generating 3D scenes by analyzing the light in high-res 2D imagery on a pixel-by-pixel basis (instead of analyzing patches of pixels). According to the paper, this method captures the real world “in unparalleled detail.”


First, the modeler takes many high-resolution 2D photographs of the scene from different angles — in the video above, it looks like it was done using time-lapse photography — after which the researchers’ algorithms analyze the light and generate very accurate depth information.
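The per-pixel depth step can be pictured with a common light-field trick: when the camera moves along a straight line, each scene point traces a line across the image stack whose slope depends on its depth. The sketch below tests candidate disparities by shearing the stack and picking, per pixel, the disparity where all views agree. This is a minimal illustration of the general idea, not the paper’s actual algorithm; the function name, the variance cost, and all parameters are assumptions.

```python
import numpy as np

def depth_from_light_field(stack, disparities):
    """Estimate per-pixel disparity for the center view of a light field.

    stack:       (V, H, W) grayscale views captured at equal spacing
                 along a linear path.
    disparities: 1D array of candidate disparities (pixels per view step).
    Returns:     (H, W) array of the best disparity per pixel.
    """
    V, H, W = stack.shape
    center = V // 2
    cost = np.empty((len(disparities), H, W))
    xs = np.arange(W)

    for di, d in enumerate(disparities):
        # Shear each view by its offset from the center view, then
        # measure color variance across views: the correct disparity
        # makes every view sample the same scene point.
        samples = np.empty((V, H, W))
        for v in range(V):
            shift = d * (v - center)
            src = np.clip(np.round(xs + shift).astype(int), 0, W - 1)
            samples[v] = stack[v][:, src]
        cost[di] = samples.var(axis=0)

    return disparities[np.argmin(cost, axis=0)]
```

Testing only a handful of disparities keeps the sketch short; the appeal of working with high-resolution imagery, as the article notes, is that this agreement test can run on individual pixels rather than patches.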

Although building 3D reconstructions from 2D photos isn’t by any means new, what the researchers have managed to do is apply the technique to the kinds of cluttered, complex environments that typically cause the most problems. And what’s more, they’ve designed the software to work on a “standard graphics processing unit” — no need for expensive specialized hardware.

Even though the software does a decent job creating a 3D representation from a single photo, using many photos lets the researchers reconstruct the scene from every angle, so that moving around it in a virtual world leaves no blind spots.
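Covering a scene from every angle boils down to lifting each view’s depth map into a shared 3D space and merging the resulting point clouds. The sketch below back-projects one depth map through a pinhole camera model; the function name, intrinsics, and pose handling are illustrative assumptions, not the paper’s pipeline.

```python
import numpy as np

def depth_map_to_points(depth, fx, fy, cx, cy, R, t):
    """Back-project a per-pixel depth map into world-space 3D points.

    depth:          (H, W) depth in scene units.
    fx, fy, cx, cy: pinhole intrinsics (focal lengths, principal point).
    R (3x3), t (3): camera-to-world rotation and translation.
    Returns:        (H*W, 3) array of world-space points.
    """
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    # Invert the pinhole projection: pixel -> ray -> point at given depth.
    x = (u - cx) / fx * depth
    y = (v - cy) / fy * depth
    pts_cam = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
    return pts_cam @ R.T + t
```

Fusing many views is then just concatenating the clouds, e.g. `np.vstack([depth_map_to_points(d, ...) for d in depth_maps])`, which is why more photos mean fewer blind spots.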


For the video at the top, each scene was photographed 100 times using a 21-megapixel DSLR. The resulting scenes are extremely accurate compared to the 1 to 2 megapixel reconstructions typically achieved with other methods.

The researchers are currently presenting their findings at ACM SIGGRAPH in Anaheim, CA. To find out more about the methods used and read up on all the interesting (but very complicated) research behind this new approach, check out the full research paper.

(via Polygon)

  • Das

“3D modeling for movies and video games is often done using lasers.” Really?

  • Peng Tuck Kwok

    Yes. Clay/resin models – scan them using the equipment and you get all the points from the models. This is faster than getting your 3D artist to construct every triangle to come up with the model.

  • Stephan Mantler

The source imagery is “a dense set of photographs captured along a linear path”. The analogy to time-lapse is only a coincidence; in reality, the variation in shading (i.e. from clouds passing over) is undesirable. Having all images perfectly aligned allows for some additional optimizations and more efficient searches of possibly correlated pixels from different images, and it also removes errors introduced by having to estimate the entire camera pose from the source images in the first place.

    Also note that reflections are – and continue to be – a huge issue, and that consequently all those sample images are conveniently non-reflective, isotropic surfaces.

  • Friv

How well this works will depend on the photos being high resolution and beautiful

  • Elrano

There is also LIDAR (Light Detection and Ranging), which allows a VFX artist to reconstruct a real set inside a virtual one with almost 100% accuracy.

  • lms

Look at the AutoCAD 360 iPhone app.

  • lms

    i meant
Autodesk® 123D® Catch for iOS

  • nikonian

Not always, but often… And it is not just “lasers” but a tool called a 3D scanner.