The Camera Versus the Human Eye

This article started after I followed an online discussion about whether a 35mm or a 50mm lens on a full frame camera gives the equivalent field of view to normal human vision. This particular discussion immediately delved into the optical physics of the eye as a camera and lens — an understandable comparison since the eye consists of a front element (the cornea), an aperture ring (the iris and pupil), a lens, and a sensor (the retina).

Despite all the impressive mathematics thrown back and forth regarding the optical physics of the eyeball, the discussion didn’t quite seem to make sense logically, so I did a lot of reading of my own on the topic.

There won’t be any direct benefit from this article that will let you run out and take better photographs, but you might find it interesting. You may also find it incredibly boring, so I’ll give you my conclusion first, in the form of two quotes from Garry Winogrand:

A photograph is the illusion of a literal description of how the camera ‘saw’ a piece of time and space.

Photography is not about the thing photographed. It is about how that thing looks photographed.

Basically in doing all this research about how the human eye is like a camera, what I really learned is how human vision is not like a photograph. In a way, it explained to me why I so often find a photograph much more beautiful and interesting than I found the actual scene itself.

The Eye as a Camera System

Superficially, it’s pretty logical to compare the eye to a camera. We can measure the front-to-back length of the eye (about 25mm from the cornea to the retina) and the diameter of the pupil (2mm contracted, 7 to 8mm dilated), and calculate lens-like numbers from those measurements.

You’ll find some different numbers quoted for the focal length of the eye, though. Some are from physical measurements of the anatomic structures of the eye, others from optometric calculations, some take into account that the lens of the eye and eye size itself change with the contractions of various muscles.

To summarize, though: one commonly quoted focal length of the eye is 17mm (calculated from the optometric diopter value). The more commonly accepted value, however, is 22mm to 24mm (calculated from the physical refraction in the eye). In certain situations, the focal length may actually be longer.

Since we know the approximate focal length and the diameter of the pupil, it’s relatively easy to calculate the aperture (f-stop) of the eye. Given a 17mm focal length and an 8mm pupil, the eyeball should function as an f/2.1 lens. If we use the 24mm focal length and 8mm pupil, it should be f/3.0. A number of studies in astronomy have actually measured the f-stop of the human eye, and the measured number comes out to f/3.2 to f/3.5 (Middleton, 1958).
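
If you want to check that arithmetic yourself, the f-number is just focal length divided by the aperture (pupil) diameter. A minimal sketch in Python, using the figures quoted above:

    # f-number = focal length / pupil (aperture) diameter
    def f_number(focal_length_mm, pupil_diameter_mm):
        return focal_length_mm / pupil_diameter_mm

    for focal in (17, 24):
        print(f"{focal}mm focal length, 8mm pupil: f/{f_number(focal, 8):.1f}")
    # prints f/2.1 and f/3.0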

At this point, both of you who read this far probably have wondered “If the focal length of the eye is 17 or 24mm, why is everyone arguing about whether 35mm or 50mm lenses are the same field of view as the human eye?”

The reason is that the measured focal length of the eye isn’t what determines the angle of view of human vision. I’ll get into this in more detail below, but the main point is that only part of the retina processes the main image we see. (The area of main vision is called the cone of visual attention; the rest of what we see is “peripheral vision”.)

Studies have measured the cone of visual attention and found it to be about 55 degrees wide. On a 35mm full-frame camera, a 43mm lens provides an angle of view of roughly 55 degrees, so that focal length gives very nearly the same angle of view that we humans have. Damn if that isn’t halfway between 35mm and 50mm. So the original argument is settled: the actual ‘normal’ lens on a 35mm SLR is neither 35mm nor 50mm, it’s halfway in between.
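
You can verify the angle-of-view numbers with the standard thin-lens formula: the diagonal angle of view is 2·arctan(d / 2f), where d is the sensor diagonal (about 43.3mm for full frame). A quick sketch (the infinity-focus formula gives a 43mm lens about 53 degrees, a couple of degrees shy of the 55-degree figure, which is close enough for this argument):

    import math

    FULL_FRAME_DIAGONAL_MM = 43.27  # sqrt(36**2 + 24**2)

    def diagonal_aov_degrees(focal_length_mm):
        # Diagonal angle of view for a lens focused at infinity
        return math.degrees(2 * math.atan(FULL_FRAME_DIAGONAL_MM / (2 * focal_length_mm)))

    for focal in (35, 43, 50):
        print(f"{focal}mm: {diagonal_aov_degrees(focal):.0f} degrees")
    # 35mm -> 63, 43mm -> 53, 50mm -> 47 degrees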

The Eye is Not a Camera System

Having gotten the answer to the original discussion, I could have left things alone and walked away with yet another bit of fairly useless trivia filed away to amaze my online friends with. But NOOoooo. When I have a bunch of work that needs doing, I find I’ll almost always choose to spend another couple of hours reading more articles about human vision.

You may have noticed the above section left out some of the eye-to-camera analogies, because once you get past the simple measurements of aperture and lens, the rest of the comparisons don’t fit so well.

Consider the eye’s sensor, the retina. The retina is roughly the same size (about 32mm in diameter) as a full-frame camera sensor (which measures 36mm x 24mm, about 43mm across the diagonal). After that, though, almost everything is different.

The retina of a human eye

The first difference between the retina and your camera’s sensor is rather obvious: the retina is curved along the back surface of the eyeball, not flat like the silicon sensor in a camera. The curvature has an obvious advantage: the edges of the retina are about the same distance from the lens as the center. On a flat sensor the edges are farther from the lens than the center is. Advantage retina: it should have better ‘corner sharpness’.
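
A rough way to see the size of that advantage, using the approximate dimensions above (this is simple geometry, not an optical model of the eye):

    import math

    lens_to_retina_mm = 24   # approximate distance from lens to the retina's center
    edge_offset_mm = 16      # half of the retina's roughly 32mm diameter

    # If the retina were a flat plane, its edge would sit farther from the lens:
    flat_edge_mm = math.hypot(lens_to_retina_mm, edge_offset_mm)
    extra = flat_edge_mm / lens_to_retina_mm - 1
    print(f"center: {lens_to_retina_mm}mm, edge if flat: {flat_edge_mm:.1f}mm ({extra:.0%} farther)")
    # ~28.8mm, about 20% farther; the curved retina keeps its edge at roughly 24mm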

The human eye also has a lot more pixels than your camera: about 130 million of them (you 24-megapixel camera owners feeling humble now?). However, only about 6 million of the eye’s pixels are cones (which see color); the remaining 124 million see only black and white. But advantage retina again. Big time.

But if we look further the differences become even more pronounced…

On a camera sensor, each pixel is set out in a regular grid pattern; every square millimeter of the sensor has exactly the same number and pattern of pixels. On the retina there’s a small central area, about 6mm across (the macula), that contains the densest concentration of photoreceptors in the eye. The central portion of the macula (the fovea) is densely packed with only cone (color-sensing) cells. The rest of the macula around this central ‘color only’ area contains both rods and cones.

The macula contains about 150,000 ‘pixels’ in each square millimeter (compare that to 24,000,000 pixels spread over a 36mm x 24mm sensor in a 5D Mark II or D3x) and provides our ‘central vision’ (the 55-degree cone of visual attention mentioned above). The upshot is that the central part of our visual field has far more resolving ability than even the best camera.
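
To put numbers on that density gap (using the round figures above):

    sensor_pixels = 24_000_000
    sensor_area_mm2 = 36 * 24                  # full-frame sensor: 864 mm^2
    sensor_density = sensor_pixels / sensor_area_mm2

    macula_density = 150_000                   # 'pixels' per mm^2, from the figure above

    print(f"sensor: {sensor_density:,.0f} pixels per mm^2")    # about 27,800
    print(f"macula: {macula_density:,} receptors per mm^2")
    print(f"the macula is about {macula_density / sensor_density:.0f}x denser")  # ~5x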

The rest of the retina has far fewer ‘pixels’, most of which are black and white sensing only. It provides what we usually consider ‘peripheral vision’, the things we see “in the corner of our eye”. This part senses moving objects very well, but doesn’t provide enough resolution to read a book, for example.

The total field of view (the area in which we can see movement) of the human eye is 160 degrees, but outside of the cone of visual attention we can’t really recognize detail, only broad shapes and movement.

The advantages of the human eye compared to the camera get reduced a bit as we leave the retina and travel back toward the brain. The camera sends every pixel’s data from the sensor to a computer chip for processing into an image. The eye has 130 million sensors in the retina, but the optic nerve that carries those sensors’ signals to the brain has only 1.2 million fibers, so less than 10% of the retina’s data is passed on to the brain at any given instant. (Partly this is because the chemical light sensors in the retina take a while to ‘recharge’ after being stimulated. Partly because the brain couldn’t process that much information anyway.)
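
Running the raw numbers makes the bottleneck vivid: 130 million receptors share about 1.2 million output fibers. A crude back-of-the-envelope sketch (real fibers carry pre-processed, multiplexed signals, so the simple ratio overstates the loss):

    receptors = 130_000_000     # rods + cones in the retina
    nerve_fibers = 1_200_000    # axons in the optic nerve

    print(f"receptors per nerve fiber: {receptors / nerve_fibers:.0f}")          # ~108
    print(f"fibers as a fraction of receptors: {nerve_fibers / receptors:.1%}")  # ~0.9%
    # Each fiber carries an already-processed, multiplexed signal rather than one
    # receptor's raw output, so this is compression, not simple discarding.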

And of course the brain processes the signals a lot differently than a camera does. Unlike the intermittent shutter clicks of a camera, the eye sends the brain a constant video feed, which is processed into what we see. A subconscious part of the brain (the lateral geniculate nucleus, if you must know) compares the signals from both eyes, assembles the most important parts into 3-D images, and sends them on to the conscious part of the brain for image recognition and further processing.

The subconscious brain also sends signals to the eye, moving the eyeball slightly in a scanning pattern so that the sharp vision of the macula moves across an object of interest. Over a few split seconds the eye actually sends multiple images, and the brain processes them into a more complete and detailed image.

The subconscious brain also rejects a lot of the incoming bandwidth, sending only a small fraction of its data on to the conscious brain. You can control this to some extent: for example, right now your conscious brain is telling the lateral geniculate nucleus “send me information from the central vision only, focus on those typed words in the center of the field of vision, move from left to right so I can read them”. Stop reading for a second and without moving your eyes try to see what’s in your peripheral field of view. A second ago you didn’t “see” that object to the right or left of the computer monitor because the peripheral vision wasn’t getting passed on to the conscious brain.

If you concentrate, even without moving your eyes, you can at least tell the object is there. If you want to see it clearly, though, you’ll have to send another brain signal to the eye, shifting the cone of visual attention over to that object. Notice also that you can’t both read the text and see the peripheral objects — the brain can’t process that much data.

The brain isn’t done when the image has reached the conscious part (called the visual cortex). This area connects strongly with the memory portions of the brain, allowing you to ‘recognize’ objects in the image. We’ve all experienced that moment when we see something, but don’t recognize what it is for a second or two. After we’ve recognized it, we wonder why in the world it wasn’t obvious immediately. It’s because it took the brain a split second to access the memory files for image recognition. (If you haven’t experienced this yet, just wait a few years. You will.)

In reality (and this is very obvious) human vision is video, not photography. Even when staring at a photograph, the brain is taking multiple ‘snapshots’ as it moves the center of focus over the picture, stacking and assembling them into the final image we perceive. Look at a photograph for a few minutes and you’ll realize that subconsciously your eye has drifted over the picture, getting an overview of the image, focusing in on details here and there and, after a few seconds, realizing some things about it that weren’t obvious at first glance.

So What’s the Point?

Well, I have some observations, although they’re far removed from “which lens has the field of view most similar to human vision?”. This information got me thinking about what makes me so fascinated by some photographs, and not so much by others. I don’t know that any of these observations are true, but they’re interesting thoughts (to me at least). All of them are based on one fact: when I really like a photograph, I spend a minute or two looking at it, letting my human vision scan it, grabbing the detail from it or perhaps wondering about the detail that’s not visible.

Photographs taken at a ‘normal’ angle of view (35mm to 50mm) seem to retain their appeal whatever their size. Even web-sized images shot at this focal length keep the essence of the shot. The shot below (taken at 35mm) has a lot more detail when seen in a large image, but the essence is obvious even when small. Perhaps the brain’s processing is more comfortable recognizing an image it sees at its normal field of view. Perhaps it’s because we photographers tend to subconsciously emphasize composition and subjects in a ‘normal’ angle-of-view photograph.

The photo above demonstrates something else I’ve always wondered about: does our fascination and love for black and white photography occur because it’s one of the few ways the dense cone (color only) receptors in our macula are forced to send a grayscale image to our brain?

Perhaps our brain likes looking at just tone and texture, without color data clogging up that narrow bandwidth between eyeball and brain.

Like ‘normal-angle’ shots, telephoto and macro shots often look great in small prints or web-sized JPGs. I have an 8 × 10 of an elephant’s eye and a similar-sized macro print of a spider on my office wall that even from across the room look great. (At least they look great to me, but you’ll notice that they’re hanging in my office. I’ve hung them in a couple of other places in the house and have been tactfully told that “they really don’t go with the living room furniture”, so maybe they don’t look so great to everyone.)

There’s no great composition or other factors to make those photos attractive to me, but I find them fascinating anyway. Perhaps because even at a small size, my human vision can see details in the photograph that I never could see looking at an elephant or spider with the ‘naked eye’.

On the other hand, when I get a good wide-angle or scenic shot, I hardly even bother to post a web-sized graphic or make a small print (and I’m not going to start for this article). I want it printed BIG. I think perhaps so that my human vision can scan through the image, picking out the little details that are completely lost when it’s downsized. And every time I do make a big print, even of a scene I’ve been to a dozen times, I notice things in the photograph I never saw when I was there in person.

Perhaps the ‘video’ my brain is making while scanning the print provides much more detail, and I find it more pleasing than the composition the photo offers when it’s printed small (or than what I saw when I was actually at the scene).

And perhaps the subconscious ‘scanning’ that my vision makes across a photograph accounts for why things like the ‘rule of thirds’ and selective focus pull my eye to certain parts of the photograph. Maybe we photographers simply figured out how the brain processes images and took advantage of it through practical experience, without knowing all the science involved.

But I guess my only real conclusion is this: a photograph is NOT exactly what my eye and brain saw at the scene. When I get a good shot, it’s something different and something better, like what Winogrand said in the two quotes above, and in this quote too:

You see something happening and you bang away at it. Either you get what you saw or you get something else — and whichever is better you print.


About the author: Roger Cicala is the founder of LensRentals. This article was originally published here.


Image credits: my eye up close by machinecodeblue, Nikh’s eye through camera’s eye from my eyes for your eyes :-) by slalit, Schematic of the Human Eye by entirelysubjective, My left eye retina by Richard Masoner / Cyclelicious, Chromatic aberration (sort of) by moppet65535


  • Charlie

    Pretty interesting ! I love stuff like this.

  • Nicolette Wells

    Looks like both Charlie & I loved this article. Excellent research, thank you!

  • Benicio Murray

    I don’t blame them for not wanting the elephant in the lounge!

  • Khoi

    Thank you for doing such great research for us. It’s very interesting.

  • John-Jo Ritson

    Nice article. Informative.

  • Mike Philippens™

    What about dynamic range and light sensitivity? Doesn’t the human eye have a better dynamic range than a sensor? And what’s our ISO value? ;)

  • Beni

    Very deep and insightful. Thanks

  • Jonathan Maniago

    “So the original argument is ended, the actual ‘normal’ lens on a 35mm SLR is neither 35mm nor 50mm, it’s halfway in between.”

    I often wish that lens and camera companies would simply describe their lenses using the field of view, since it’s far more intuitive than focal lengths, especially when considering different sensor sizes. I suppose that marketing is one of the primary factors for this standard, because a “200mm advantage” of a 600mm over a 400mm seems more impressive than a 2° difference between 4° and 6°.

  • nate parker

    Great article! Science talk and camera talk = more of that!

  • Yeshen Venema

    Excellent article. Thanks for doing the research. I especially like the idea that black and white images hold their special appeal because the eye and brain are less concerned with processing colour information and can therefore focus on the tonal and compositional elements.

    The whole eye vs. camera ‘field of vision’ thing is misunderstood partly because camera companies do not tell you their sensors are cropped; they see it as a negative. Actually, if you are creative in your use of lenses, it can even have its advantages.

  • Mr Gubrz

    wooo my highly uncertain testing proved correct… when i was back in college and someone brought this up, i just put my camera up to my eye, left both eyes open, and zoomed till things matched up.. i think i guestimated the range as 46… so i wasnt far off!

  • Mr Gubrz

    omg are we cmos or ccd?!?!?!?!

  • mdarnton

    Isn’t it really more about perception than mechanics? For instance, when I lived in a rural area, my go-to lens was 28mm. Now I live in a big city, and everything is closer, and I find myself using a 24mm for the same stuff, as my “normal” lens. That’s what *I* pay attention to. For a long time, when I was really young and felt more detached, it was a 50.

  • Bert Happel

    As an Optometrist and an advanced amateur photographer I found the article very well done. The illustrations added to the article by PetaPixel (as compared to the original source) improved the education value of the article IMO.

  • qwerty

    This article presented one odd question in my mind. Why do modern camera sensors remain flat? Obviously the cost factor would be a hurdle, but could a camera system be developed to capture perfectly sharp images edge to edge by emulating the curve of the eye? Hopefully I am not overlooking anything too obvious with this premise.

  • amy

    Nice article. I agree with the writer that human eyes are much better than any camera if we know the ‘manual’. Likewise, the human brain is much better than any kind of computer if we know how to train and practise…

    We don’t see as much as the camera because we don’t take time to look… we use our cameras to catch the whole scene and then observe later through pictures… so our eyes become our camera’s servant instead of the other way around.

    Anyway, it’s just my opinion. I have a camera, but I chose to train my eyes first…

  • Carin Basson

    Thanks for the informative article :)

  • eraserhead12

    The main difference between a machine and the human eye is perception. What we *actually* see is very different from what we think we see, whereas a photograph simply records. You’d never buy a camera with tunnel vision and blind spots :P.

  • Kyoshi Becker

    Foveon…

  • Daniel Austin Hoherd

    I’ve thought the same thing, but really, sensor size variability just makes it hard no matter what. With a different system, the mm conversion to the “normal” 35mm sensor would be gone, but then we’d have a new degree conversion to the “normal” whatever we choose. The simplest thing I can come up with would be a grid to tell us what angle of view we’d get on what sensor, and that’s something we can already come up with, and we don’t have to rely on the camera manufacturers to do it.
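
For what it’s worth, the grid Daniel describes takes only a few lines to generate. A rough sketch, assuming typical published sensor dimensions (the exact figures vary slightly by manufacturer):

    import math

    # Typical published sensor dimensions in mm (name, width, height)
    sensors = [("Full frame", 36.0, 24.0),
               ("APS-C",      23.6, 15.7),
               ("Micro 4/3",  17.3, 13.0)]

    print("focal".ljust(8) + "".join(name.ljust(12) for name, _, _ in sensors))
    for focal in (24, 35, 43, 50, 85, 200):
        row = f"{focal}mm".ljust(8)
        for _, width, height in sensors:
            diagonal = math.hypot(width, height)
            aov = math.degrees(2 * math.atan(diagonal / (2 * focal)))
            row += f"{aov:.0f} deg".ljust(12)
        print(row)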

  • Daniel Austin Hoherd

    +1 for your support of illustrations. That is something so many photography articles lack, and yet it helps us understand the concepts immensely better.

    +1 for using “advanced amateur” in place of “intermediate” ;-)

  • Daniel Austin Hoherd

    Right?? I love the science talk! I want to understand the physics so I can find the unexplored areas and push the bounds!

  • bradley macinnis

    Hi Qwerty: A brief answer is that the camera sensor has to be manufactured on a substrate with suitable semiconductor properties; this generally turns out to be silicon of high purity. The light-sensing elements are essentially a matrix of photodiodes. Diodes are electronic devices that are based on and require semiconductor properties. The elements that make up the working sensor are diffused into the structure of the silicon wafer. The silicon crystal, when manufactured, has a cylindrical shape, and this in turn has to be sliced with a diamond cutter into round flat disks, in the same way that other semiconductors are manufactured (for example, the microprocessor in your computer). Several additional processes later, the disks are cut into rectangles and mounted in a chip holder. I’ve never heard of any process that produces silicon in customized shapes, such as a surface of revolution like that of the retina, and I expect that if it were possible, the cost would be astronomical! Hope this helps.

  • Howie

    I’ve read (a long time ago) that the diagonal of the film/sensor size indicates the ‘standard’ lens for a camera so for a 35mm film (or full frame DSLR) it’s the diagonal of a 24mm x 36mm rectangle which works out as 43.26mm (and for APS-C cameras between 27mm and 29mm depending on the crop factor).

  • PeterTorsk

    My view of a scene has the feel of a 28mm focal length for 35mm film, so I have tried to maintain this in choosing digital point-and-shoot cameras. A 4x zoom allows me to see parts of this scene with the detail that my eye perceives. That is why I always carried a 100mm lens in addition to the 28mm on my single-lens-reflex 35mm camera in the past.
    The so-called 50mm “standard” lens fulfilled neither requirement, and a 35mm “wide-angle” did not either. The latter has been the widest equivalent of many digital point-and-shoot cameras over the years!

  • Stephan Haggerty

    Great piece. I hope we will see more substantial work like this on Petapixel.

  • Helsinki Phil

    Extremely interesting article. Puts me in mind of the Douglas Adams quote about the whooshing sound. I’m a simple fella, me.

  • kendon

    Focal length is the only physical attribute that describes the lens. That won’t change any time soon, and it is a good thing.

  • IrishAIrWolf

    I was trying to explain how a camera works to a friend some 10-plus years ago and used the eye as an example. But we forget (at least film-wise) that the camera also sees different colors in things depending on the Kelvin temperature of the light, which the eye either ignores or automatically translates to what we are used to seeing. Not to mention that the focusing system of the eye is so good that we don’t see the backgrounds of close-ups as fuzzy like you do with a camera, unless it is VERY close up.

  • Per

    I have not read this article in its entirety. I think there is a lot of misunderstanding about what a normal perspective is and what focal length gives that perspective. The perspective depends on the viewing distance and the size of the picture. Say the viewing distance is 25cm; then a 35mm lens (on 35mm “full frame”) will give a “normal” perspective when the picture is about 200x250mm. I think this is the reason the 35mm prime was so popular in “the old days”, as most prints were near that format. At the same viewing distance, a bigger print makes a wider-angle lens give a more natural perspective, and conversely a smaller print suits a longer lens. At greater viewing distances you need longer focal lengths to get a “natural” perspective. I think this is uncomplicated and doesn’t need a lot of complicated explanations of human vision to understand. It’s all about proportions and viewing distance!
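
Per’s figures do check out under the standard rule that a print shows ‘natural’ perspective when the viewing distance equals the taking focal length multiplied by the enlargement factor. A quick sketch with his numbers:

    import math

    focal_mm = 35
    sensor_diagonal = math.hypot(36, 24)     # full frame: ~43.3mm
    print_diagonal = math.hypot(250, 200)    # a 200x250mm print: ~320mm

    enlargement = print_diagonal / sensor_diagonal        # ~7.4x
    natural_distance = focal_mm * enlargement
    print(f"natural viewing distance: {natural_distance:.0f}mm")  # ~259mm, close to 25cm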

  • Nobo Griffin

    D600 owners would disagree! Doh, ducking now!

  • jaxsaxoutback

    simple . . . the human eye was created, the camera was an “invention!!” Actually there really isn’t a comparison at all, scientifically nor physically…. blah, blah, blah….

  • handiworkofChrist

    nope. . . it surely will not. . . CREATION isn’t manufactured by MAN, not yesterday, today nor tomorrow!! Mutation and comparisons, YIKES!!! Thankfully I know where “my” eyes came from, as did Albert Einstein . . . read God vs. Science for an awesome reference point, folks!!

  • Trent Kozun

    Fake

  • Trent Kozun

    This is so fake. Please. Go home. You are drunk.

  • Trent Kozun

    Hi I’m future prime minister of Canada. You will lose your job.

  • eeskaatt

    I always think of my field of vision as being like an optical viewfinder with frame lines/guides (think of Leica and Fuji X-series OVFs). The camera sees the peripheral area (outside the frame lines in the OVF) but only records the scene inside the frame lines (the part of the scene your eye is focused on).