Posts Tagged ‘tech’

Turning the Eye into a Camera Sensor

What if, in the future, the human eye itself could be turned into a camera, simply by reading and recording the data it sends to the brain? As crazy as it sounds, researchers have already accomplished this at a very basic level:

In 1999, researchers led by Yang Dan at the University of California, Berkeley decoded neuronal firings to reproduce images seen by cats. The team used an array of electrodes embedded in the thalamus (which integrates all of the brain’s sensory input) of sharp-eyed cats. Researchers targeted 177 brain cells in the thalamus’ lateral geniculate nucleus, the area that decodes signals from the retina. The cats were shown eight short movies, and their neuron firings were recorded. Using mathematical filters, the researchers decoded the signals to generate movies of what the cats saw, reconstructing recognizable scenes and moving objects. [#]

Basically, the scientists were able to tap into the brain of a cat and display what the cat was seeing on a computer screen. Something similar was accomplished with humans a few years ago, and scientists believe that in the future we may even be able to “photograph” human dreams!
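For the curious, the decoding step is conceptually simple: treat reconstruction as a linear mapping from firing rates back to pixel intensities. Here’s a toy sketch of that idea in Python. The data, the cell responses, and the ridge regression below are all stand-ins for illustration, not the study’s actual filters:

```python
# Toy linear decoder in the spirit of the Berkeley study: given recorded
# firing rates from many cells, learn a linear mapping back to pixel
# intensities. All data here is synthetic, invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
n_frames, n_cells, n_pixels = 1000, 177, 16 * 16

# Pretend each cell responds linearly to the stimulus, plus noise.
stimulus = rng.random((n_frames, n_pixels))            # frames the cat "saw"
receptive_fields = rng.standard_normal((n_pixels, n_cells))
responses = stimulus @ receptive_fields \
    + 0.1 * rng.standard_normal((n_frames, n_cells))

# Fit decoding filters by ridge regression: responses -> stimulus.
lam = 1e-3
W = np.linalg.solve(responses.T @ responses + lam * np.eye(n_cells),
                    responses.T @ stimulus)

# Reconstruct the frames from the neural responses alone.
reconstruction = responses @ W
print("per-pixel correlation:",
      np.corrcoef(reconstruction.ravel(), stimulus.ravel())[0, 1])
```

With realistic neural data the fit is far noisier, which is why the cat reconstructions look blurry rather than photographic.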

Brain-computer interface (via Reddit)

Future Nikon DSLRs Might Respond to Photographers’ Emotions

You might soon be able to control Nikon DSLRs using only your emotions. A recently published patent shows that the company is looking into building biometric sensors into its cameras, allowing the camera to automatically change settings and trigger the shutter based on things like heart rate and blood pressure. For example, at a sporting event, the sensors could trigger the shutter when something significant happens and the photographer’s reflexes are too slow. The camera could also choose a faster shutter speed to reduce blurring if the user is nervous.
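To make the idea concrete, here’s a hypothetical sketch of the kind of logic such a camera might run. Every name in it (the camera object, the thresholds) is invented, since the patent only describes the concept:

```python
# Hypothetical sketch of the patent's idea: poll biometric sensors and let
# spikes in heart rate fire the shutter or shorten the exposure. Nothing
# here comes from Nikon; the camera API and thresholds are made up.
BASELINE_BPM = 70

def on_sensor_tick(camera, heart_rate_bpm):
    excitement = heart_rate_bpm / BASELINE_BPM
    if excitement > 1.5:
        camera.trigger_shutter()      # photographer reacted: shoot now
    if excitement > 1.2:
        # A nervous user tends to shake, so trade ISO for a faster shutter.
        camera.set_shutter_speed(min(camera.shutter_speed, 1 / 500))
        camera.set_iso(camera.iso * 2)
```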

(via Egami)

Lytro Is Developing a Camera That May Change Photography as We Know It

A company called Lytro has just launched with $50 million in funding and, unlike Color, it has technology that is pretty mind-blowing. It’s designing a camera that may be the next giant leap in the evolution of photography — a consumer camera that shoots photos that can be refocused at any time. Instead of capturing a single plane of light as traditional cameras do, Lytro’s light-field camera will use a special sensor to capture the color, intensity, and vector direction of the rays of light (data that’s lost with traditional cameras).

[...] the camera captures all the information it possibly can about the field of light in front of it. You then get a digital photo that is adjustable in an almost infinite number of ways. You can focus anywhere in the picture, change the light levels — and presuming you’re using a device with a 3-D ready screen — even create a picture you can tilt and shift in three dimensions. [#]
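How does refocusing after the fact work? The standard trick from the light-field literature is “shift-and-add”: treat the light field as a grid of sub-aperture views, shift each view in proportion to its position on the lens, and average. Here’s a toy sketch with made-up data; decoding an actual Lytro file would be another matter entirely:

```python
# Shift-and-add synthetic refocusing, the standard light-field technique.
# The 4D array here is invented test data for illustration.
import numpy as np

def refocus(light_field, slope):
    """light_field: (U, V, H, W) stack of sub-aperture images.
    slope picks the focal plane: pixels of shift per unit aperture offset."""
    U, V, H, W = light_field.shape
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            du = int(round((u - U // 2) * slope))
            dv = int(round((v - V // 2) * slope))
            out += np.roll(light_field[u, v], shift=(du, dv), axis=(0, 1))
    return out / (U * V)

lf = np.random.random((9, 9, 64, 64))   # stand-in light field
near = refocus(lf, slope=1.0)           # focus on a near plane
far = refocus(lf, slope=-0.5)           # focus on a farther plane
```

Objects on the chosen plane line up across all the shifted views and stay sharp, while everything else averages into blur, which is exactly the refocusing effect in the sample photos.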

Try clicking the sample photograph above. You’ll find that you can choose exactly where the focus point in the photo is as you’re viewing it! The company plans to unveil its camera sometime this year, with a price somewhere between $1 and $10,000…
Check out more sample photos here

iPhone 5 Rumored to Pack an 8MP Sensor Made by Sony

If you think the 5-megapixel sensor found on the iPhone 4 is good, wait till you see the camera found on the next iPhone — it’s reportedly going to be an 8-megapixel sensor made by Sony. The Street wrote back in 2010 that the next version of the iPhone, arriving in 2011, would pack an 8-megapixel Sony sensor rather than the 5-megapixel OmniVision one found in the current phone, and Sony CEO Howard Stringer seems to have confirmed it today in an interview with the Wall Street Journal.

Canon Rumored to be Working with Apple

The blogosphere is abuzz today over a rumor that Canon and Apple may be planning to collaborate on an upcoming project. Craig over at Canon Rumors started it yesterday when he wrote,

I’ve received a few pieces of information about an upcoming collaboration between Apple and Canon. What that collaboration is hasn’t been spelled out to me. It could be with the upcoming Final Cut Pro 8, or maybe something more.

The story was soon picked up by blogs and magazines, with everyone guessing at what the “secret project” might be (if there even is one). Hopefully it has to do with Aperture or something photography-related, though the next version of Final Cut Pro is a likely candidate as well.


Image credit: Canon Laptop by Frank Kehren

High-resolution 3D Models of Cities Created from Aerial Photographs

If you thought Google Earth was cool, check out the work being done by the Swedish company C3 Technologies. Using only photos shot from planes, they can automatically create high-resolution 3D models of entire cities that can then be explored. The above video shows a beautiful fly-by of New York City.

All of the C3 products are based on high-resolution photography captured with carefully calibrated cameras. For every picture, the positions and angles of the cameras are calculated with extremely high precision, using an advanced navigation system. This is what enables C3 to give each pixel its geographical position with very high accuracy. [#]
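The georeferencing step they describe boils down to ray casting: a precisely calibrated camera turns every pixel into a ray in space, and intersecting that ray with the ground assigns the pixel a position. Here’s a minimal sketch with invented numbers and a flat-ground assumption (real systems refine this across many overlapping photos):

```python
# Minimal georeferencing sketch: back-project a pixel through a calibrated
# camera and intersect the ray with the ground plane z = 0. All calibration
# numbers are invented for illustration.
import numpy as np

K = np.array([[2000.0, 0.0, 960.0],    # focal length / principal point (px)
              [0.0, 2000.0, 540.0],
              [0.0, 0.0, 1.0]])
R_wc = np.diag([1.0, -1.0, -1.0])      # camera-to-world: looking straight down
cam_pos = np.array([0.0, 0.0, 500.0])  # plane flying 500 m above the ground

def pixel_to_ground(px, py):
    ray_cam = np.linalg.inv(K) @ np.array([px, py, 1.0])  # back-project pixel
    ray_world = R_wc @ ray_cam
    t = -cam_pos[2] / ray_world[2]      # where the ray meets z = 0
    return cam_pos + t * ray_world      # ground coordinates of that pixel

print(pixel_to_ground(960, 540))   # image center: directly below the camera
print(pixel_to_ground(0, 0))       # a corner pixel lands off to one side
```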

They can also apply the technology to turn panoramic photographs captured at street-level into 3D models of the scene that the user can navigate through freely. Hopefully this kind of thing makes its way to products like Google Maps soon. It would also be awesome for creating maps in video games!

(via Photoxels)

3D Webcam Capture Demo at 60 FPS

Kyle McDonald is a programmer working on building open source utilities for realtime 3D scanning using structured light, a technique that requires only a projector and a cheap camera.
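The core idea behind structured light is simple enough to sketch: project a sequence of known stripe patterns, figure out which projector column each camera pixel saw, then triangulate. The toy version below uses plain binary codes for brevity (real implementations, including phase-shifting ones like Kyle’s, are more robust) and invented calibration numbers:

```python
# Minimal structured-light sketch: decode binary stripe patterns into
# projector columns, then triangulate depth from camera/projector disparity.
# Calibration values below are invented for illustration.
import numpy as np

def decode_columns(captures, thresh=0.5):
    """captures: (N, H, W) camera images of N binary stripe patterns,
    coarsest stripe first. Returns the projector column seen by each pixel."""
    cols = np.zeros(captures.shape[1:], dtype=int)
    for img in captures:
        bit = (img > thresh).astype(int)
        cols = (cols << 1) | bit        # accumulate bits, MSB first
    return cols

def depth_from_disparity(cam_x, proj_col, baseline=0.3, focal=1400.0):
    # Standard triangulation: depth is inversely proportional to disparity.
    disparity = np.abs(cam_x - proj_col).clip(min=1)
    return baseline * focal / disparity
```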

Here’s a demo of what 3D capture looks like using a PS3 webcam:

If you’d like to get involved in the project, head on over to the structured-light Google code page to check out the code (no pun intended).

Do you think this might be the future of photography and video?

(via Make)

CIA Takes Interest in Lens Startup

LensVector, a Silicon Valley startup working on novel lens technology, has received its latest round of funding from In-Q-Tel, a not-for-profit venture firm that invests for the sole purpose of boosting US intelligence capability by providing the CIA with state-of-the-art information technology.

So what’s LensVector developing that the CIA would want? Lenses that focus electronically with no moving parts.

Here’s a diagram by LensVector showing how their tiny autofocus lenses work compared to traditional technology:

Rather than using mechanical parts to focus a lens, LensVector uses an electric field to reorient liquid crystal molecules into a desired refractive-index profile, which bends incoming light to a focal point just as shaped glass would.
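For a rough sense of the optics: if the applied voltage sets up a parabolic refractive-index profile across the liquid crystal cell, the element behaves like a thin lens with focal length r²/(2·d·Δn). Plugging in some plausible guessed numbers (these are not LensVector’s specs):

```python
# Back-of-the-envelope focal length for a liquid crystal lens with a
# parabolic index profile: f = r^2 / (2 * d * delta_n).
# All values are plausible guesses, not LensVector's actual specs.
aperture_radius = 1.5e-3   # m, a ~3 mm cell-phone-sized aperture
cell_thickness = 50e-6     # m, thickness of the liquid crystal layer
delta_n = 0.2              # index change between relaxed and aligned states

f = aperture_radius**2 / (2 * cell_thickness * delta_n)
print(f"focal length: {f * 100:.1f} cm")   # ~11 cm with these numbers
```

Varying the voltage varies Δn, which sweeps the focal length, and that is the whole autofocus mechanism: no motors, no moving glass.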

Given the CIA’s interest in this technology, it must be working pretty well. Hopefully we’ll see this introduced to consumer cameras that need it (i.e. cell phones) soon.

A fun fact: another startup that received In-Q-Tel funding was Keyhole, Inc., the geospatial data visualization company that was acquired by Google in 2004. Their flagship product, Earth Viewer, was turned into Google Earth.

(via CNET)

PicTreat Provides Instant Face Retouching

PicTreat is a free online application that allows you to quickly and easily retouch portraits using patent-pending face detection and correction technology.

By “correction”, they mean the application can make your skin “smooth and shiny”, remove “irritating skin flaws”, fix red-eye, and correct color balance.
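Curious how the individual fixes might work? Red-eye removal is the most mechanical of them. One common approach, sketched below (PicTreat hasn’t published its method, and the eye region here would come from a face detector in a real pipeline), is to find pixels where red overwhelms green and blue and tone the red down:

```python
# A common red-eye fix, not necessarily PicTreat's: inside a detected eye
# region, replace the red channel of strongly red pixels with the average
# of green and blue. The eye box is assumed to come from a face detector.
import numpy as np

def fix_red_eye(img, eye_box):
    """img: (H, W, 3) float RGB in [0, 1]; eye_box: (y0, y1, x0, x1).
    Edits the image in place and returns it."""
    y0, y1, x0, x1 = eye_box
    patch = img[y0:y1, x0:x1]
    r, g, b = patch[..., 0], patch[..., 1], patch[..., 2]
    red_mask = r > 1.5 * (g + b) / 2          # strongly red pixels only
    patch[..., 0] = np.where(red_mask, (g + b) / 2, r)
    return img
```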

While we would prefer not to promote our culture’s obsession with outward appearance, we wanted to examine the technology behind this application.

Here’s an example of a before and after displayed on the front page:

To test exactly what the application does to a portrait, I decided to use the portrait of President Obama that I referred to recently. However, the app apparently couldn’t find any “blemishes”, and returned a nearly identical image — albeit with mildly smoother skin.

Thus, I decided to test how the service retouches a photograph by manually altering one first. Using Photoshop, I added some red-eye, put some spots on his face, and gave the photo a green tint. Here are the original, altered, and PicTreated images:

The app successfully corrected the artificial red-eye, restored the color to almost what it was originally, and left the random spots I added alone (which it should, lest it remove things like birthmarks).
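The color restoration is also easy to approximate with a classic baseline: the gray-world assumption, which scales each channel so the image averages out to neutral gray. Again, this is just a stand-in for whatever PicTreat actually does:

```python
# Gray-world white balance, a classic way to undo a color cast like my
# artificial green tint: scale each channel so its mean matches the
# overall mean. A stand-in for PicTreat's unpublished method.
import numpy as np

def gray_world(img):
    """img: (H, W, 3) float RGB in [0, 1]. Returns a color-balanced copy."""
    means = img.reshape(-1, 3).mean(axis=0)
    gain = means.mean() / means        # boost weak channels, cut strong ones
    return np.clip(img * gain, 0.0, 1.0)
```

On my green-tinted test image, the green channel’s mean is inflated, so its gain comes out below 1 and the cast is pulled back toward neutral.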

In spite of the interesting technology behind PicTreat, many may find the app offensive because it intentionally removes things like freckles (a taboo among photo editors) and uses the slogan “everybody’s perfect”.

What are your thoughts on this kind of service?


Image credit: Obama portrait by the Obama-Biden Transition Project

Sneak Peek At Photoshop’s Mind-Boggling Content Aware Fill

Adobe is working on a new feature for Photoshop called “Content Aware Fill”, and posted a mind-boggling demonstration of it on YouTube. The description states:

One of the biggest requests we get of Photoshop is to make adding, removing, moving or repairing items faster and more seamless. From retouching to completely reimagining an image, here’s an early glimpse of what could happen in the future when you press the delete key.

Basically, it allows you to alter or create reality in photographs as easily as selecting an area and running the feature. Gone will be the days when photojournalists are caught with embarrassing patterns created by improperly using the stamp tool. The demonstration is so amazing that many commenters are saying it’s fake, going so far as to say it looks… “photoshopped”?
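Adobe’s fill is built on far more sophisticated patch-based synthesis research, but you can get a feel for the workflow today with the classical inpainting that ships in OpenCV. This sketch just diffuses surrounding pixels into a hole rather than inventing texture, so don’t expect Adobe-level results:

```python
# Mark a region and let the computer fill it in: OpenCV's classical
# inpainting gives the same workflow as Content Aware Fill, though with
# much simpler results. The file name and region are placeholders.
import cv2
import numpy as np

img = cv2.imread("photo.jpg")                 # any test image
mask = np.zeros(img.shape[:2], dtype=np.uint8)
mask[200:260, 300:380] = 255                  # the region to erase

# Telea's method diffuses surrounding pixels into the hole; Adobe's
# patch-based approach instead copies plausible texture from elsewhere.
result = cv2.inpaint(img, mask, 5, cv2.INPAINT_TELEA)
cv2.imwrite("filled.jpg", result)
```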

What do you think of this feature and the sneak peek? Is it too good to be true, or will it change the way we think about photography forever?

(via PopPhoto)