Answering the Question ‘Do I Really Need a Full Frame Camera’ — Luminous Landscape

It has been a rule of thumb (some even believe a law of nature) since the dawn of the digital camera era at the turn of the century that, for the highest possible image quality, one needed to use the largest possible sensor…

But then a funny thing started to happen about three or four years ago. Many photographers started to find that they were getting truly excellent image quality from APS-C and Micro Four Thirds cameras. Once pixel counts reached about 16 megapixels, resolution stopped being something to pursue for its own sake.

Of course, higher-resolution sensors offering 24MP and 36MP allow one to produce larger prints, and to crop severely if necessary. But for most photographers, somewhere in the 16–20MP range was sufficient at a practical level.

Shared on Dec 18, 2013
  • furcafe

    Doesn’t seem to be any different from when improvements in film technology enabled first medium format & then 35mm to be taken seriously.

  • ZeGerman

    Don’t cameras with larger format sensors have a larger dynamic range? There was no mention of that in this writeup.

  • harumph

    I’m not sure if there’s a cause and effect there, but yeah, the highest dynamic ranges all belong to full frame sensors (D600/610, SLT-A99, D800, D4, DF…in descending order). Does that mean it’s not possible for a crop sensor to rate that high, though?

  • Johnny

    The article goes on about the high production costs of full frame sensors. Anyone who knows anything about mass production of electronic components knows that no small part costs more than about 50 cents at the factory level. The price of a camera of any sensor size is driven by three key factors: the cost of development, the number of units likely to be sold (higher price = fewer sold), and whether the product is being marketed as ‘high end’. My guess would be that an APS-C sensor costs the factory about 10 cents to make, and a full frame sensor about 15 cents.

  • Stephan Zielinski

    Sadly, this ignores the fact that there’s a minimum useful size to sensels, below which one merely gets very well documented blurs. Once your optics are good enough that you have a diffraction-limited system, the only way to capture more information about a scene is to use a larger sensor.

    This is one reason why the “megapixel wars” were as pointless as they were. Once all your point light sources become Airy disks, increasing the number of sensels on the same area of sensor just allows you to skip the anti-aliasing step in your post-processing.

  • Stephan Zielinski

    Don’t mistake the marginal price of a component for its true price. It may not cost much to gronk out an additional component once the research and development is done and the manufacturing line is built and debugged, but those required preliminary steps cost an arm and a leg, and it’s even more for very large microelectronic components like sensors. (Wikipedia estimates a semiconductor fabrication plant will cost you about $1 billion to $9.3 billion.)
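
The diffraction point above can be put in rough numbers. The diameter of the Airy disk’s first minimum is about 2.44 × wavelength × f-number, and once that disk spans several pixels, packing more pixels onto the same sensor area stops resolving extra detail. A minimal sketch, using illustrative assumptions (green light at ~550 nm, f/8, and a generic 24MP APS-C sensor ~23.5 mm wide) rather than any figures from the article:

```python
# Sketch of the diffraction-limit argument: compare the Airy disk size
# produced by the lens with the pixel pitch of the sensor. All numbers
# below are illustrative assumptions, not from the article.

def airy_disk_diameter_um(wavelength_nm: float, f_number: float) -> float:
    """Diameter of the Airy disk's first minimum: d = 2.44 * lambda * N."""
    return 2.44 * wavelength_nm * 1e-3 * f_number  # nm -> um

def pixel_pitch_um(sensor_width_mm: float, pixels_across: int) -> float:
    """Approximate pixel pitch, assuming square pixels."""
    return sensor_width_mm * 1000.0 / pixels_across

# Green light (~550 nm) at f/8, on a ~23.5 mm-wide, 6000-pixel-across sensor:
airy = airy_disk_diameter_um(550, 8)   # ~10.7 um
pitch = pixel_pitch_um(23.5, 6000)     # ~3.9 um
print(f"Airy disk ~{airy:.1f} um vs pixel pitch ~{pitch:.1f} um "
      f"(~{airy / pitch:.1f} pixels across the disk)")
```

At these assumed settings the blur disk already covers a couple of pixels, which is the sense in which finer pixels on the same sensor area stop buying real resolution.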