We always get a laugh when news organizations or governments try to pass off bad Photoshop jobs as real images, but with the way graphics technology is advancing, bad Photoshop jobs may soon become a thing of the past. Here’s a fascinating demo of technology that can quickly and realistically insert fake 3D objects into photographs — lighting, shading and all. Aside from a few annotations provided by the user (e.g. where the light sources are), the software doesn’t need to know anything about the image. Mind-blowing stuff…
Artificial lens flare is an important part of making certain computer generated scenes look realistic, but up to this point creating realistic lens flare has required a good deal of processing power. Now, researchers have come up with a way to simulate lens flare quickly and accurately, taking into account a large number of physical factors that cause the phenomenon:
The underlying model covers many components that are important for realism, such as imperfections, chromatic and geometric lens aberrations, and antireflective lens coatings.
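The researchers’ physically-based model is far beyond the scope of a quick sketch, but the basic geometry of flare "ghosts" is easy to illustrate: inter-reflections inside the lens produce a chain of spots that lie roughly along the line from the light source through the image center. Here’s a toy, sprite-based approximation (not the paper’s method — the ghost count, spacing, and falloff below are made-up parameters):

```python
def flare_ghosts(light_xy, n_ghosts=5, spacing=0.45):
    """Toy flare model: place ghost sprites along the line from the light
    source through the image center (0.5, 0.5), the axis that real
    inter-reflection ghosts roughly follow. Coordinates are normalized to
    [0, 1]; returns a list of (x, y, brightness) tuples."""
    lx, ly = light_xy
    dx, dy = 0.5 - lx, 0.5 - ly          # direction toward (and past) the center
    ghosts = []
    for i in range(1, n_ghosts + 1):
        t = i * spacing                   # how far along the flare axis this ghost sits
        x, y = lx + 2 * t * dx, ly + 2 * t * dy
        brightness = 1.0 / (i * i)        # crude falloff with ghost index (an assumption)
        ghosts.append((x, y, brightness))
    return ghosts
```

A real renderer would draw textured sprites at these positions; the paper instead traces rays through an actual lens prescription to get aberrations and coating effects, which is why its results look so much better than sprite hacks like this one.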
The video above discusses how the technology works, and also touches on the science behind lens flares. The method is patent-pending, and will be presented later this year at SIGGRAPH 2011.
Physically-Based Real-Time Lens Flare Rendering [Max-Planck-Institut Informatik]
You might have seen examples of Photoshop’s Content-Aware Scaling feature in action, but do you know what goes on behind the scenes to make it work so magically? This presentation from the SIGGRAPH 2007 conference sheds some light on the technical mojo that allows you to manipulate the size and shape of photos in crazy ways.
The same concept is found in the Liquid Scale app we featured a while back.
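The technique behind the feature is seam carving: repeatedly find the connected path of least "important" pixels (lowest gradient energy) from top to bottom and remove it, shrinking the image by one column while leaving salient content mostly untouched. A minimal sketch of that core idea (gradient energy plus dynamic programming; real implementations add refinements):

```python
import numpy as np

def energy_map(img):
    """Simple gradient-magnitude energy: high on edges/detail, low in smooth areas."""
    gray = img.mean(axis=2) if img.ndim == 3 else img.astype(float)
    gx = np.abs(np.diff(gray, axis=1, append=gray[:, -1:]))
    gy = np.abs(np.diff(gray, axis=0, append=gray[-1:, :]))
    return gx + gy

def find_vertical_seam(energy):
    """Dynamic programming: cumulative minimum-energy connected path, top to bottom."""
    h, w = energy.shape
    cost = energy.copy()
    for i in range(1, h):
        left = np.roll(cost[i - 1], 1); left[0] = np.inf
        right = np.roll(cost[i - 1], -1); right[-1] = np.inf
        cost[i] += np.minimum(np.minimum(left, cost[i - 1]), right)
    seam = np.empty(h, dtype=int)
    seam[-1] = int(np.argmin(cost[-1]))
    for i in range(h - 2, -1, -1):       # backtrack, staying within +/-1 column
        j = seam[i + 1]
        lo, hi = max(j - 1, 0), min(j + 2, w)
        seam[i] = lo + int(np.argmin(cost[i, lo:hi]))
    return seam

def remove_vertical_seam(img, seam):
    """Delete one pixel per row along the seam, narrowing the image by one column."""
    h, w = img.shape[:2]
    mask = np.ones((h, w), dtype=bool)
    mask[np.arange(h), seam] = False
    if img.ndim == 3:
        return img[mask].reshape(h, w - 1, img.shape[2])
    return img[mask].reshape(h, w - 1)
```

Repeating `find_vertical_seam` + `remove_vertical_seam` shrinks width seam by seam; running the same thing on the transpose shrinks height, and inserting (rather than deleting) seams grows the image.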
At SIGGRAPH 2010 in Los Angeles last month, Microsoft researchers showed off some new technology that improves existing digital blur reduction techniques by outfitting a camera with motion detecting sensors.
The team built an off-the-shelf hardware attachment consisting of a three-axis accelerometer, three gyroscopes, and a Bluetooth radio, and mounted it on a Canon 1Ds Mark III camera. The researchers then developed a software algorithm that uses the motion information captured during the exposure to do “dense, per-pixel spatially-varying image deblurring”.
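The general idea is that the sensors record how the camera moved during the exposure, which tells you the blur kernel (point spread function) directly instead of having to estimate it from the blurry image. A heavily simplified sketch of that pipeline — a single global motion kernel plus a Wiener filter, whereas Microsoft’s method solves a much harder per-pixel, spatially-varying problem:

```python
import numpy as np

def motion_psf(shifts, size=15):
    """Rasterize a recorded motion path (pixel displacements sampled over the
    exposure, e.g. integrated from gyro data) into a blur kernel."""
    psf = np.zeros((size, size))
    c = size // 2
    for dx, dy in shifts:
        x, y = int(round(c + dx)), int(round(c + dy))
        if 0 <= x < size and 0 <= y < size:
            psf[y, x] += 1.0
    return psf / psf.sum()

def wiener_deblur(img, psf, k=0.01):
    """Frequency-domain Wiener filter: invert the known blur while damping
    frequencies where the kernel has little energy (k acts as a noise term)."""
    H = np.fft.fft2(psf, s=img.shape)
    G = np.fft.fft2(img)
    F = np.conj(H) / (np.abs(H) ** 2 + k) * G
    return np.real(np.fft.ifft2(F))
```

Because the kernel comes from measured motion rather than blind estimation, even this naive deconvolution recovers a lot of detail; the per-pixel variant handles the fact that camera rotation blurs different parts of the frame differently.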