Posts Tagged ‘siggraph’

Researchers Develop Method for Getting High-Quality Photos from Crappy Lenses


There are many reasons high-quality lenses cost as much as they do (and in some cases that is quite a lot), and one of them is that high-end lenses use many specially designed elements that are precisely positioned to counteract aberrations and distortions.

But what if you could correct for all of that in post? Automatically? With just the click of a button? You could theoretically use a crappy lens and generate high-end results. Well, that’s what researchers at the University of British Columbia are working on, and so far their results are very promising. Read more…
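The general idea in this kind of work is deconvolution: if you know exactly how the lens smears a point of light (its point spread function, or PSF), you can computationally reverse much of that smearing. Below is a minimal, illustrative Wiener-deconvolution sketch in Python with NumPy. It is not the UBC team's method; it assumes a single uniform, already-measured PSF, whereas real lens aberrations vary across the frame and between color channels.

```python
# Minimal Wiener deconvolution sketch, assuming one known, uniform PSF.
# Real aberration correction is spatially varying and per-channel.
import numpy as np

def wiener_deconvolve(blurred, psf, noise_to_signal=0.01):
    """Deconvolve a single-channel float image with a known PSF."""
    # Pad the PSF to the image size and shift it so it is centered at the origin.
    psf_padded = np.zeros_like(blurred, dtype=float)
    kh, kw = psf.shape
    psf_padded[:kh, :kw] = psf
    psf_padded = np.roll(psf_padded, (-(kh // 2), -(kw // 2)), axis=(0, 1))

    H = np.fft.fft2(psf_padded)   # frequency response of the blur
    G = np.fft.fft2(blurred)      # spectrum of the blurred image
    # Wiener filter: invert the blur, but damp frequencies where it is weak.
    W = np.conj(H) / (np.abs(H) ** 2 + noise_to_signal)
    return np.real(np.fft.ifft2(W * G))

# Usage sketch: sharp = wiener_deconvolve(gray_image, measured_psf)
```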

This Crazy Software Extracts 3D Objects from Photos with a Few Clicks

If you want to have your mind blown today, the video above might do it. It's a demonstration of 3D object extraction and manipulation software that made its debut at SIGGRAPH 2013, and it may just offer a glimpse into the future of photo manipulation. Read more…

Software Lets You Tweak The Lighting of Your Scene After the Fact Until It’s Perfect


Researchers at Cornell University, working in conjunction with Adobe, recently developed a new piece of software that could someday help amateurs and professionals alike exert more control over exactly how certain static scenes are lit. Read more…

Mind-Blowing Research Into Inserting Artificial Objects into Photographs

We always get a laugh when news organizations or governments try to pass off bad Photoshop jobs as real images, but with the way graphics technology is advancing, bad Photoshop jobs may soon become a thing of the past. Here's a fascinating demo of technology that can quickly and realistically insert fake 3D objects into photographs, lighting, shading and all. Aside from a few annotations provided by the user (e.g. where the light sources are), the software doesn't need to know anything about the images. Mind-blowing stuff…

Rendering Synthetic Objects into Legacy Photographs (via PhotoWeeklyOnline)

Researchers Come Up With Quick Way of Generating Realistic Lens Flare

Artificial lens flare is an important part of making certain computer-generated scenes look realistic, but until now, creating realistic lens flare has required a good deal of processing power. Now, researchers have come up with a way to simulate lens flare quickly and accurately, taking into account a large number of the physical factors that cause the phenomenon:

The underlying model covers many components that are important for realism, such as imperfections, chromatic and geometric lens aberrations, and antireflective lens coatings.

The video above discusses how the technology works, and also touches on the science behind lens flares. The method is patent-pending, and will be presented later this year at SIGGRAPH 2011.

Physically-Based Real-Time Lens Flare Rendering [Max-Planck-Institut Informatik]
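The paper's model simulates how light interacts with the actual lens elements and their coatings, which is far more involved than anything that fits in a few lines. For context only, here's the much simpler screen-space approximation long used in real-time graphics, where flare "ghosts" are placed along the line from the light source through the image center and blended additively. The function and parameters below are purely illustrative and are not from the paper.

```python
# Crude screen-space lens flare sketch: NOT the physically based SIGGRAPH 2011
# method, just the classic real-time approximation where ghost blobs are
# scattered along the light-to-center axis and composited additively.
import numpy as np

def add_ghosts(image, light_xy, num_ghosts=6, base_radius=20.0, intensity=0.3):
    """Additively composite simple circular ghosts onto an RGB float image."""
    h, w, _ = image.shape
    center = np.array([w / 2.0, h / 2.0])
    light = np.array(light_xy, dtype=float)
    yy, xx = np.mgrid[0:h, 0:w]
    out = image.copy()
    for i in range(1, num_ghosts + 1):
        t = i / float(num_ghosts)
        # Ghosts sit on the light->center axis, mirrored past the center.
        pos = light + (center - light) * (1.0 + t)
        radius = base_radius * (0.5 + t)
        dist2 = (xx - pos[0]) ** 2 + (yy - pos[1]) ** 2
        falloff = np.exp(-dist2 / (2.0 * radius ** 2))   # soft Gaussian blob
        # Slight per-ghost tint to loosely mimic coating color shifts.
        tint = np.array([1.0, 0.8 + 0.2 * t, 0.6 + 0.4 * t])
        out += (intensity / i) * falloff[..., None] * tint
    return np.clip(out, 0.0, 1.0)

# Usage sketch: flared = add_ghosts(np.zeros((480, 640, 3)), light_xy=(100, 80))
```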

Explanation of Content Aware Resizing

You might have seen examples of Photoshop’s Content Aware Scaling feature in action, but do you know what goes on behind the scenes that allows it to magically work? This presentation from the SIGGRAPH 2007 conference sheds some light on the technical mojo that allows you to manipulate the size and shape of photos in crazy ways.

The same concept is found in the Liquid Scale app we featured a while back.
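For the curious, the technique presented at SIGGRAPH 2007 is seam carving: instead of scaling the whole frame uniformly, the image is resized by repeatedly finding and removing the lowest-"energy" connected path of pixels, so visually important regions are left alone. Here is a toy grayscale sketch of that idea in Python with NumPy; it is a bare-bones illustration, not Adobe's implementation.

```python
# Toy seam carving: remove one lowest-energy vertical seam from a grayscale image.
import numpy as np

def remove_one_seam(gray):
    """Remove the lowest-energy vertical seam from a 2-D grayscale array."""
    h, w = gray.shape
    # Energy: gradient magnitude via simple finite differences.
    gy, gx = np.gradient(gray.astype(float))
    energy = np.abs(gx) + np.abs(gy)

    # Dynamic programming: cumulative cost of the cheapest seam ending at each pixel.
    cost = energy.copy()
    for i in range(1, h):
        up_left = np.roll(cost[i - 1], 1)
        up_left[0] = np.inf
        up_right = np.roll(cost[i - 1], -1)
        up_right[-1] = np.inf
        cost[i] += np.minimum(np.minimum(up_left, cost[i - 1]), up_right)

    # Backtrack the cheapest seam from the bottom row up.
    seam = np.zeros(h, dtype=int)
    seam[-1] = int(np.argmin(cost[-1]))
    for i in range(h - 2, -1, -1):
        j = seam[i + 1]
        lo, hi = max(j - 1, 0), min(j + 2, w)
        seam[i] = lo + int(np.argmin(cost[i, lo:hi]))

    # Drop the seam pixel from every row, shrinking the width by one.
    keep = np.ones((h, w), dtype=bool)
    keep[np.arange(h), seam] = False
    return gray[keep].reshape(h, w - 1)
```

Repeating this removal (or its transpose for horizontal seams) is what lets the image shrink in one dimension while the subject keeps its proportions.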

MIOPS: Smartphone Controllable High Speed Camera Trigger

MIOPS is a new smartphone-controlled camera trigger that combines all of the features photographers want in a high-speed camera trigger into one convenient device.

Read more…

Microsoft Researchers Use Motion Sensors to Combat Camera Blur

At SIGGRAPH 2010 in Los Angeles last month, Microsoft researchers showed off some new technology that improves existing digital blur reduction techniques by outfitting a camera with motion-detecting sensors.

The team created an off-the-shelf hardware attachment consisting of a three-axis accelerometer, three gyroscopes, and a Bluetooth radio, attaching the setup to a Canon 1Ds Mark III camera. The researchers then created a software algorithm to use the motion information captured during the exposure to do “dense, per-pixel spatially-varying image deblurring”.
Read more…
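The broad idea is that angular velocity recorded by the gyroscopes during the exposure can be integrated into a pixel-space blur trajectory and rasterized into a point spread function, which a standard non-blind deblurring step can then use. The sketch below illustrates only that kernel-building step; the sample rate, focal length, and single uniform (non-spatially-varying) kernel are simplifying assumptions, and this is not Microsoft's actual algorithm.

```python
# Sketch: turn gyro readings taken during an exposure into a blur kernel (PSF).
# Assumes a uniform kernel; the real method estimates spatially varying blur.
import numpy as np

def psf_from_gyro(gyro_samples, dt, focal_length_px, kernel_size=31):
    """Rasterize angular-velocity samples (rad/s per axis) into a 2-D PSF."""
    # Integrate angular velocity over time to get the camera's angular path.
    angles = np.cumsum(np.asarray(gyro_samples, dtype=float) * dt, axis=0)
    # Small-angle approximation: image shift (pixels) ~ focal length * angle.
    shifts = angles[:, :2] * focal_length_px   # use pitch/yaw, ignore roll

    psf = np.zeros((kernel_size, kernel_size))
    center = kernel_size // 2
    for dx, dy in shifts:
        x = int(round(center + dx))
        y = int(round(center + dy))
        if 0 <= x < kernel_size and 0 <= y < kernel_size:
            psf[y, x] += 1.0                    # accumulate time spent at this offset
    total = psf.sum()
    return psf / total if total > 0 else psf

# Usage sketch: psf = psf_from_gyro(samples, dt=1 / 200.0, focal_length_px=3000)
# The resulting PSF would then feed a non-blind deconvolution of the blurred frame.
```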