As technology improves, features that were once limited to expensive professional models often become available to the masses, but will this ever be true for full-frame sensors? Nikon’s Senior VP David Lee was recently asked this question in an interview with TWICE, and here’s what he said:
I think that there are definitely two different approaches here. What we’re seeing is that sensor performance continues to improve, but obviously there’s really a need for bulk because with a full-size sensor there’s a real low-light performance benefit, high speed performance, framing rates, and so on and so forth. So, I think you’ll definitely continue to see the higher-end pro consumer continue to have that large format. It’s definitely needed in the D3 and D700. You’ll see that technology continue to improve and grow, but the DX sensor form factor is also important. The compactness of the D3100 and D5100 is very popular. I don’t think one approach will ever overtake the other because of the overall image capabilities and the light performance capabilities.
Seems like he either misunderstood the question, or decided to beat around the bush. It’s an interesting question though — will any of the big manufacturers shake up the industry by being the first to put a full-frame sensor in a consumer-level camera? The sensors have already jumped from pro-level cameras to prosumer-level ones starting in 2005 with the Canon 5D, so it seems like the next logical step will be the consumer level. A sub-$1000 full-frame camera. Now that’s a thought.
Last week it came to light that Amazon founder Jeff Bezos had filed a patent for having airbags built into cell phones to protect them if they’re ever accidentally dropped. Rather than having a NASA-style airbag that completely envelops the phone, micro air jets orient the device so that it lands on a tiny airbag that pops out of the bottom. Wouldn’t it be interesting if this kind of thing became common on digital cameras in the future? The idea is pretty farfetched, but some people I know would definitely benefit from camera airbags.
There’s a good chance the digital photos you’ve stored on hard drives and DVDs won’t outlive you, but what if there were a disc that could last forever? M-Disc, short for Millennial Disc, is a new type of disc that doesn’t suffer from natural decay and degradation like existing disc technologies, allowing you to store data safely for somewhere between “1000 years” and “forever”.
Existing disc technologies write data using an organic dye layer that begins to experience “data rot” immediately after it’s written, causing the disc to become unreadable after a certain amount of time. The M-Disc, on the other hand, actually carves your data into “rock-like materials” that are known to last for centuries, meaning there’s no data rot. Apparently NASA uses the discs to store data. Hopefully it becomes available and affordable soon…
For his project “Back from the Future”, photographer Sander Koot asked his subjects to find old photos of themselves that brought back good memories. He then made portraits of those people reliving those happy moments. Read more…
Facial recognition features are appearing in everything from cameras to photo-sharing sites, but have you thought about the security and privacy concerns this technology introduces? Fast Company has published a piece on how mobile apps in the future may be able to quickly look up your identity, your personal information, and perhaps even your social security number!
[CMU researchers] used three relatively simple technologies to create their face recognition system: An off-the-shelf face recognizer, cloud computing processing, and personal data available through the public feed at social networking sites such as Facebook [...] Combining the data gathered from the face recognizer hardware with clever search algorithms that were processed on a cloud-computing platform, the team has performed three powerful experiments: They were able to “unmask” people on a popular dating site where it’s common to protect real identities using pseudonyms, and they ID’d students walking in public on campus by grabbing their profile photos from Facebook.
Most impressively the research algorithm tried to predict personal interests and even to deduce the social security number of CMU students based solely on an image of their face–by interrogating deeper into information that’s freely available online.
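The basic pipeline the researchers describe, matching a face against embeddings computed from public profile photos, can be sketched in a few lines. Everything below is hypothetical: the names, the 128-dimensional "embeddings", and the `identify` helper are stand-ins for whatever an off-the-shelf recognizer and a scraped profile database would actually provide.

```python
import numpy as np

# Hypothetical database: feature vectors ("embeddings") that a face
# recognizer might produce from public profile photos, keyed by profile.
rng = np.random.default_rng(0)
profiles = {name: rng.normal(size=128)
            for name in ["alice_profile", "bob_profile", "carol_profile"]}

def identify(query_embedding, profiles, threshold=0.8):
    """Return the best-matching profile name by cosine similarity,
    or None if no match clears the threshold."""
    best_name, best_score = None, threshold
    q = query_embedding / np.linalg.norm(query_embedding)
    for name, emb in profiles.items():
        score = float(q @ (emb / np.linalg.norm(emb)))
        if score > best_score:
            best_name, best_score = name, score
    return best_name

# A photo of "bob" taken in public: its embedding sits close to his
# profile photo's embedding, plus some noise from pose and lighting.
query = profiles["bob_profile"] + rng.normal(scale=0.1, size=128)
print(identify(query, profiles))  # matches "bob_profile"
```

The unsettling part of the CMU work is not this matching step, which is routine, but that the reference database is simply the public web.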
Having a camera that shoots 5000 frames per second is enough to capture slow motion footage of a bullet flying through the air, but scientists at the Science and Technology Facilities Council have now announced a camera that shoots a staggering 4.5 million frames per second. Rather than bullets, the camera is designed to capture 3D images of individual molecules using powerful x-ray flashes that last one hundred million billionth of a second. The £3 million camera will land in scientists’ hands in 2015.
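To put those frame rates in perspective, the time between frames is just the reciprocal of the rate:

```python
# Frame interval = 1 / frame rate. Comparing the 5000 fps bullet camera
# mentioned above with the new 4.5 million fps x-ray camera:
bullet_cam_fps = 5_000
xray_cam_fps = 4_500_000

bullet_interval = 1 / bullet_cam_fps   # 200 microseconds per frame
xray_interval = 1 / xray_cam_fps       # ~222 nanoseconds per frame

print(f"{bullet_interval * 1e6:.0f} us vs {xray_interval * 1e9:.0f} ns")
print(f"speedup: {xray_cam_fps / bullet_cam_fps:.0f}x")  # 900x
```

In other words, the STFC camera grabs a frame every ~222 nanoseconds, 900 times faster than the camera that freezes bullets.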
Facial recognition technology has become ubiquitous in recent years, being found in everything from the latest compact camera to websites like Facebook. The same may soon be said about location recognition. Through a new project called “Finder”, the US government intelligence research agency IARPA is looking into how to quickly and automatically identify where a photograph was taken without any geotag data. The goal is to use only the identifying features found in the background of scenes to determine the location — kinda like facial recognition except for landscapes.
What if in the future, the human eye itself could be turned into a camera by simply reading and recording the data that it sends to the brain? As crazy as it sounds, researchers have already accomplished this at a very basic level:
In 1999, researchers led by Yang Dan at University of California, Berkeley decoded neuronal firings to reproduce images seen by cats. The team used an array of electrodes embedded in the thalamus (which integrates all of the brain’s sensory input) of sharp-eyed cats. Researchers targeted 177 brain cells in the thalamus lateral geniculate nucleus area, which decodes signals from the retina. The cats were shown eight short movies, and their neuron firings were recorded. Using mathematical filters, the researchers decoded the signals to generate movies of what the cats saw and were able to reconstruct recognizable scenes and moving objects. [#]
Basically, the scientists were able to tap into the brain of a cat and display what the cat was seeing on a computer screen. Something similar was accomplished with humans a few years ago, and scientists believe that in the future we may even be able to “photograph” human dreams!
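The “mathematical filters” in the cat study were linear decoders: a mapping, fit from recorded data, from neural responses back to the stimulus. A toy version with simulated neurons might look like this (the neuron model, dimensions, and noise level here are all invented for illustration, not taken from the study):

```python
import numpy as np

rng = np.random.default_rng(1)
n_pixels, n_neurons, n_frames = 16, 40, 500

# Each simulated "neuron" responds as a fixed linear filter over the
# stimulus (a tiny 16-pixel frame), plus noise -- a crude stand-in
# for recorded thalamic firing rates.
filters = rng.normal(size=(n_neurons, n_pixels))
stimuli = rng.normal(size=(n_frames, n_pixels))  # frames shown
responses = stimuli @ filters.T + 0.1 * rng.normal(size=(n_frames, n_neurons))

# Fit a linear decoder: a matrix W mapping responses back to pixels,
# found by least squares over the recorded frames.
W, *_ = np.linalg.lstsq(responses, stimuli, rcond=None)

# Decode a new, unseen frame from its simulated neural response.
new_frame = rng.normal(size=n_pixels)
reconstruction = (new_frame @ filters.T) @ W

corr = np.corrcoef(new_frame, reconstruction)[0, 1]
print(f"correlation between frame and reconstruction: {corr:.2f}")
```

With enough neurons covering the visual field, the decoded frames correlate closely with what was actually shown, which is essentially how the recognizable movies of the cats’ vision were produced.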
The WVIL concept camera that made the rounds on the Internet featured a lens that could operate separately from the camera body, but Or Leviteh’s MMI camera is even simpler: it’s a small screen-less camera that uses a smartphone as its “camera body”.
MMI enables you to see what the camera sees on your [smartphone] screen, to adjust the settings as needed, and to see the results without getting up and even to upload the pictures online. From the application you can control all settings: white balance, focus, picture burst, timer and even tilt the camera lens, all without having to reach the camera.
Separating the lens and sensor components of a camera from its LCD screen and controls seems to be a pretty popular idea as of late (Nikon even showed off a similar concept camera recently).
A compact camera probably isn’t the first thing someone would grab when looking to make a photo with an extremely shallow depth of field, since the small aperture and small sensor limit it in this regard. That might soon change: a recently published patent application by Samsung shows that the company is looking into achieving shallow depth of field with compact cameras by using a second lens to create a depth map for each photo. Read more…
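The patent itself isn’t detailed here, but the general idea of depth-map-driven bokeh (blur each pixel in proportion to its depth distance from the focal plane) can be sketched. Everything below, including the `fake_shallow_dof` helper and the synthetic scene, is a hypothetical illustration rather than Samsung’s actual method:

```python
import numpy as np

def fake_shallow_dof(image, depth, focus_depth, max_radius=3):
    """Blur each pixel with a box filter whose radius grows with the
    pixel's depth distance from the chosen focal plane."""
    h, w = image.shape
    out = np.empty((h, w), dtype=float)
    max_dist = np.abs(depth - focus_depth).max() or 1.0
    for y in range(h):
        for x in range(w):
            # radius 0 (sharp) at the focal plane, max_radius far from it
            r = int(round(max_radius * abs(depth[y, x] - focus_depth) / max_dist))
            y0, y1 = max(0, y - r), min(h, y + r + 1)
            x0, x1 = max(0, x - r), min(w, x + r + 1)
            out[y, x] = image[y0:y1, x0:x1].mean()
    return out

# Synthetic scene: noisy texture; left half near (depth 1), right half far (depth 5).
rng = np.random.default_rng(2)
image = rng.random((32, 32))
depth = np.ones((32, 32))
depth[:, 16:] = 5.0

result = fake_shallow_dof(image, depth, focus_depth=1.0)
# The in-focus near half is untouched; the far half comes out smoothed.
```

A real implementation would use a proper lens-shaped blur kernel and handle occlusion edges, but the point is that once you have a per-pixel depth map, shallow depth of field becomes a post-processing step rather than an optics problem.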