11 Dumb Things Camera Companies Are Still Doing

As much as we talk about the lack of true innovation in the camera market, particularly when it comes to integration with the Internet and social media, every day I keep encountering cameras that have the same “hey this is the way it used to be” design philosophies underlying them.

I call it “lazy engineering.” Someone up top in management keeps screaming “cut costs,” and one of the ways you can cut costs is to not redesign something you’ve already designed. Just go with the solutions you’ve been using. Just use the parts you’ve been using. Buy cheaper, not-leading-edge parts.

But the real problem with lazy engineering is that user problems just stay unsolved. Solved user problems sell cameras. Unsolved ones don’t.

What’s the first thing most of us do after buying a new camera? Accessorize it properly. So let’s start there.

#1. Lack of Arca Swiss Mounting

The 1/4″ tripod socket at the bottom of the camera has been there since our ancestors branched off into a new species millennia ago. Okay, maybe not quite that long. Ever since the Bible says the world started? Nope, still a bit too long. I can say this: the first camera I ever saw as a child, some 60 years ago now, had a 1/4″ tripod socket on it.

When was the last time I used a 1/4″ tripod socket to mount to a tripod? Over twenty years ago. Did the camera companies notice? Nope. Someone in accounting just keeps ordering 1/4″ tripod sockets and sends them to the factory.

But we don’t use those things for mounting to a tripod because it takes too much time to mount and unmount a camera, plus screwing in a bolt doesn’t exactly secure the position of the camera on the tripod (it eventually rotates). The camera makers all used to put a rubber layer on the bottom of cameras to help with the latter problem, but the rubber layer would just get torn and need repair, so now companies like Nikon use a plastic body surface with some indents in it (which doesn’t keep the camera from slipping and rotating in the socket).

Because camera bodies have largely transitioned to plastics, there’s also the issue of how the companies mount the tripod socket securely so it doesn’t break, get misaligned, or come out. Lots of screws and frame positioning get involved in that, and I’ve still seen plenty of tripod sockets break on people trying to actually use them.

The pros long ago moved to Arca Swiss style plates on their cameras, and the enthusiasts followed. First, there’s the near instant on/off ability that they provide. But more important, a well-designed Arca Swiss plate doesn’t ever swivel on the camera body. Most importantly, well-designed Arca Swiss plates provide a near solid metal-to-metal bond between our camera and support system. That’s important because if the bond isn’t 100% solid, you introduce a vibration point in your support system. Trust me on this, I’ve seen so many camera-on-tripod-via-tripod-socket connections that create a vibration point that if I had a grain of sand for each one, I’d own a beach.

A couple of camera companies have sort of caught on to this. Fujifilm has made extension grips with Arca Swiss dove-tailing, and Olympus has been putting the same on the tripod mounts of their long lenses. The most recent Tamron announcement (100-400mm) has a tripod collar with Arca Swiss dove-tailing. Bravo, guys. Now do it for everything!

User solution: Buy RRS or Kirk plates for everything, at huge extra cost. Why didn’t we upgrade our cameras every generation? Because it wasn’t just the cost of the camera that we had to figure into our calculations. New batteries, new vertical grip, new plates: lots of extra expenses.

#2. Tripod Collars Braced Only At or Near the Lens Mount

Related to the Arca Swiss plates are tripod collars for long lenses. Oh dear, don’t get me started.

Again, the whole point of having a support system is to eliminate all vibrations from the system. Nikon got notorious for tripod collars that introduced vibrations. The whole idea of making a removable collar, putting it right at or near the mount, then extending it down while making a 90-degree turn in some skinny metal just invites issues.

Simple test: put your telephoto lens on your strongest tripod with the tripod not fully extended (i.e. make it as steady as possible). Now tap hard on the top front of the lens hood. Nothing should move. Not even one tiny bit. Straight out of the factory, a lot of tripod collars violate that little standard and introduce significant vibration with just this simple test.

The RRS Long Lens Support and some of the Kirk replacement collars do the right thing: they add in a second support position forward on the lens. The lens then sits more in a cradle than balanced off the front of the lens mount, and thus distributes weight properly. And yes, this makes for real and meaningful differences in use. Enough so that, had the camera makers actually been paying attention to the customers buying their top gear, we’d have had this type of collar already coming in the box with the lens out of the factory.

But camera makers don’t really pay all that much attention to users. Moreover, they all have the really bad habit of discounting the serious user: “hey, we don’t need to add that change because it costs money and that might lose us some of our low-end customers.” Gee, what happened to getting users hooked on your quality at a low level so that they’d become loyal customers forever?

This omission falls into the “what we do is good enough” category that’s driving the camera industry in a race to the bottom. No, what you do is not good enough. Your serious user base discovered that long ago and they’re still waiting for you to figure that out.

Let me put it differently: any lens that sells for $2,000 or more and fails on the tripod collar test—and most do—means that the camera makers don’t care. Further, not offering said accessories as options means that the camera makers can’t see. Almost no one I know who owns an exotic actually uses the supplied handles/feet, and they all lament the collars. True of some $2,500 lenses, and true of $12,000 lenses. And has been for years, maybe decades. Talk about dumb. How they could ignore this for so long just shows how disconnected the camera makers are from their user base.

User solution: Again, third-party solutions, though they tend to apply only to a limited set of lenses.

#3. Optical Remotes

Let’s move on. Television seems to have pioneered the widespread adoption of the consumer optical (IR) remote. Okay, I get that. So many of those remotes have been made, and the cost of making them driven so far down, that it’s almost a no-brainer to add a little receiver and a simple one-button remote. But remember: we’re facing the TV and the TV is facing us when we use that remote.

So we got pretty much every consumer camera (and a lot of professional ones) with an IR remote receiver installed (and some include the transmitter, though most camera companies are so cheap and disconnected from solving user problems they don’t). Lately, though, some companies have taken to going backward here. Nikon, for instance, took the rear IR receiver off the D7500, leaving only the front one.

What that means is that Nikon thinks that D7500 users only use the optical remote for taking selfies. I’ll bet, however, that more of them are standing behind tripods and trying to figure out why their wireless remote isn’t triggering the camera. Awkward hand-over-camera ensues.

But you’re going to see me rail on optical remotes for a different reason: they’re old-school technology. Indeed, as we progress through this article, you’re going to see case after case where the camera companies just buy cheap, older technology that doesn’t keep up with where a modern technical device should be. Optical remotes are just one example.

Bluetooth fixes the “transmitter must be pointed directly at receiver” problem. It fixes the “someone else triggered my camera” problem. It fixes a lot of things, with the minor complication of having to be set up (paired). But not only does using this newer technology fix a lot of the problems of the old technology, it enables new things. I could now trigger my camera from any Bluetooth-enabled device (assuming useful apps).

I’d love to mount my camera in my vehicle and trigger it from the steering wheel. Put it behind a goal or on a bar above me and trigger it from my iPhone. Trigger my camera on a tripod no matter where I’m standing or pointing. Trigger it in the studio from my Mac. Trigger it on my drone. Heck, trigger the shutter release anywhere, anytime, from any device that’s using modern technology. Heck, maybe I want to set up a bullet time situation and have multiple cameras triggered a small fraction of a second apart (or simultaneously; but I like the slightly staggered approach visually). Oh wait, multiple cameras, that’s a thing? ;~)

What I’ve been writing about for a long time is that new technologies enable solving new user problems. The camera companies think that the only user problem you want solved is that you want to take a picture. They don’t look at how you want to take that picture, what you want to do with it, or the way that picture was enabled by or interacts with your other devices.

Bluetooth triggering should be a thing built into every camera, and accessible to any Bluetooth app on any device. Simple as that.
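To make the idea concrete, here’s a hypothetical sketch in Python (using the bleak Bluetooth LE library) of what a generic, cross-brand shutter trigger could look like if cameras exposed a standard “shutter” characteristic. The service and characteristic UUIDs, the device name, and the one-byte payload are all invented for illustration; no such standard exists today, which is exactly the problem.

```python
# Hypothetical sketch: a generic Bluetooth LE shutter release, assuming cameras
# exposed a standard GATT "shutter" characteristic. The UUIDs and payload below
# are invented for illustration; no cross-brand standard like this exists.
import asyncio
from bleak import BleakClient, BleakScanner

SHUTTER_CHAR_UUID = "0000aaab-0000-1000-8000-00805f9b34fb"   # hypothetical

async def trigger_camera(name_prefix: str = "MyCamera") -> None:
    # Find the first advertising device whose name looks like our camera.
    device = await BleakScanner.find_device_by_filter(
        lambda d, ad: (d.name or "").startswith(name_prefix)
    )
    if device is None:
        raise RuntimeError("No camera found advertising over Bluetooth LE")

    # Connect (this is where pairing/bonding would normally be handled).
    async with BleakClient(device) as client:
        # Write a single byte meaning "fire the shutter once" (hypothetical payload).
        await client.write_gatt_char(SHUTTER_CHAR_UUID, b"\x01")

if __name__ == "__main__":
    asyncio.run(trigger_camera())
```

That’s the whole point: any phone, laptop, or drone controller that speaks Bluetooth could run something like this, no line of sight required.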

User solution: eBay and Amazon for a cheap, Chinese, radio-controlled remote set.

#4. Slow Wi-Fi

Old parts? Did I mention old parts? Of course I did. I’m about to do it again (I’m looking at you Nikon).

Wi-Fi isn’t just one thing. There are an incredible number of subtleties to it: 2.4GHz versus 5GHz. 802.11b, 802.11g, 802.11n, 802.11ac. Overlapping channels versus non-overlapping. Ad-hoc versus Infrastructure versus Direct. MIMO. Single antenna versus multi-antenna.

What all those things enable is a device that isn’t limited to a single type of use. Yet that’s exactly how the camera makers have tended to make them, probably because it’s not a simple task to make a state-of-the-art Wi-Fi hardware/software system that is completely configurable to user need.

Nikon even went out of its way to say that the primary design goal behind SnapBridge was to make setup as simple and one-time as possible. They then went on to include all the more complicated setup items in the menu systems of said cameras, making what the customer sees when they have a problem even more complicated than it needs to be.

But more to the point, Nikon is using old parts in their cameras. Very old parts. You have to look deep into the manuals to see it: 802.11b/g, 2.4GHz only. And the software really only supports ad-hoc mode.

This is another problem the camera industry keeps having: lowest common denominator (LCD). LCD is what happens when you chase large consumer volumes and have to drive pricing down. It’s always a race to the bottom. Well, I’m here to tell you that Nikon hit bottom and went splat. Not that they’re alone in that, but for someone promoting the heck out of SnapBridge as being the solution to sharing images, well, it isn’t, and the parts built into the camera and the software that supports those parts are the reason.

Oh, and the smartphones Nikon wants to talk to? Pretty much all of them support 802.11ac, dual-band operation, and MIMO. So a 1300Mbps theoretical maximum versus the Nikon cameras’ 54Mbps. No wonder Nikon is sending 2MP images via SnapBridge.
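Some quick back-of-envelope math shows why the part choice matters. The file sizes and the 50% real-world efficiency factor below are my rough assumptions, not measured figures:

```python
# Back-of-envelope: time to move one image over the camera's Wi-Fi link.
# File sizes and the ~50% real-world efficiency factor are rough assumptions.
def transfer_seconds(file_mb: float, link_mbps: float, efficiency: float = 0.5) -> float:
    # megabytes -> megabits, divided by the effective link rate
    return (file_mb * 8) / (link_mbps * efficiency)

for label, mb in [("2MP SnapBridge JPEG", 1.0),
                  ("Full-size 20MP JPEG", 10.0),
                  ("20MP raw file", 25.0)]:
    print(f"{label:20s}  54Mbps: {transfer_seconds(mb, 54):5.1f}s   "
          f"1300Mbps: {transfer_seconds(mb, 1300):4.2f}s")
```

Under those assumptions a raw file takes seven-plus seconds per image on the old part and well under half a second on the new one. That’s the difference between “usable” and “turned off.”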

User solution: They turn off the Wi-Fi (Airplane Mode) because it uses battery, and they use antiquated methods to get images where they want them. Or better still, they just buy a smartphone with a better camera and make do.

#5. Slow Card Writes

Old parts? Did I mention old parts? ;~)

As I get older I feel like I repeat myself a lot. But in this case it’s the camera companies that are repeating themselves. There’s only one reason not to include state-of-the-art UHS-II, CFast, or XQD slots in your camera (and if there are two slots, two matching ones): you want to save costs. You’re essentially saying to customers, “a few extra frames in the buffer be damned, you’ve got a big enough buffer already.” Oh, and “see, you won’t have to buy new cards.”

That may be true of the lower-end users. Or it might not. I think we’d have to go out and see them in action to verify. They may just think that 10 frames in the buffer is state of the art and live with that. It’s probably true that they actually are using non-state-of-the-art cards. And in the field, I keep encountering folk who are stifling the performance of their cameras by sticking in a generic brand card they bought eight years ago on sale for a couple of bucks.

Still, you see companies making bad decisions here. The D7500 has only one card slot, so you’d think that Nikon would have gone for state-of-the-art with that slot. Nope. It’s UHS-I, which puts a top end on the buffer performance of that camera that it shouldn’t have. Moreover, it sets those users up for future failure: they’re going to decide there’s no need to buy a state-of-the-art card, so they’ll buy a generic UHS-I one. When they update cameras in the future, they’re going to find that card is their new bottleneck.
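Here’s a rough illustration of what that one slot decision costs in the field. The sustained write speeds and raw file size are assumptions for the sake of the math, not measurements from any specific camera:

```python
# Rough illustration of how the card-slot standard caps burst shooting.
# Sustained write speeds and file size below are assumptions, not measurements.
RAW_MB = 25.0        # assumed size of one lossless-compressed raw file
BURST = 50           # frames fired in one long burst

def seconds_to_clear(frames: int, write_mb_per_s: float) -> float:
    # time for the buffer to flush the whole burst to the card
    return frames * RAW_MB / write_mb_per_s

print(f"UHS-I  (~90 MB/s sustained):  {seconds_to_clear(BURST, 90):.1f}s to clear")
print(f"UHS-II (~250 MB/s sustained): {seconds_to_clear(BURST, 250):.1f}s to clear")
```

Roughly fourteen seconds of waiting versus five, every time you lean on the shutter release. One part choice.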

I mention the D7500 for a reason. I can see not worrying too much about performance in the low-end consumer cameras (though wouldn’t it be something uniquely marketable if they did the opposite?). Those customers aren’t likely to understand the tech or want to try to optimize. But the D7500? That’s right at the core of Nikon’s biggest enthusiast camera base. What’s Nikon telling them? Sorry, state-of-the-art isn’t for you. One small change and we could have made this camera better for sequence shooting. Nikon didn’t bother. Heck, it even took out the second card slot while leaving the first card slot crippled relative to current standards.

Enjoy the beans, those of you counting in the finance department. Hopefully that type of decision doesn’t render the company you work for an also-ran.

I singled out Nikon here, but they’re not the only guilty party. Not by a long shot.

Sony A9? Supposedly the fastest horse in the race, if you’re to believe Sony marketing. Just don’t use the second slot then, because it’s only UHS-I. Seriously? On a $4,500 camera? You really needed to save those extra few pennies, Sony? (Sony’s marketing department twists this: “Lower card slot supporting UHS-II…is available for fast transfer speed.” Oh, so it’s a benefit, then?)

User solution: None.

#6. Slow Serial Connections

Looked at your latest laptop? Well, you’re probably seeing USB 3.1 ports. Looked at your latest camera? You very well may be still seeing USB 2.0 ports.

This is much like the Wi-Fi specs: not typically state-of-the-art. Okay, let’s go back and ping Sony for a moment. Their Wi-Fi on the A9? It’s 2.4GHz only (no 5GHz 802.11ac), which limits the top-end transfer speed possible. See, even when camera companies do look a bit more progressive, they aren’t typically anywhere near state-of-the-art.

And nowhere do we find that more often than in the USB connection.

RANT ON

Okay, we hit it with Wi-Fi and now we’re hitting it again with USB: the speeds at which you can transfer data off your camera are limited by the camera companies’ choices. Uh, the 21st century happened, guys. And with it came a change in the way images move from one place (camera) to another (social media, which is also a change from where they used to go): it all happens in the Internet world, which is half wired for speed and half wireless, and which prefers speed.

But cameras aren’t optimal in moving images at all. Camera companies still want you to do it the old sneakernet way. Taking the card out of your camera and bringing it to your computer is not a heck of a lot different than taking the film out of the camera and taking it to your one-hour lab. Okay, so now we have the virtual lab in our home, but still, when all is said and done the camera companies simply haven’t noticed that things changed.

People shoot more images today than ever, and the vast majority of them are moved from camera (smartphone) to social media virtually immediately, with little user interaction, and very quickly via wireless communications. As for the smaller and smaller minority still using dedicated cameras: the camera companies are reluctant to put a single part anywhere in the camera that would help you be competitive with that smartphone user. Not Wi-Fi, not USB, not anything.

/RANT OFF

Okay, so now USB.

USB 2.0 gives you 480Mbps. USB 3.0 takes you up to 5Gbps. USB 3.1 doubles that (as will USB 3.2 yet again). In other words, the camera makers who insist on using USB 2.0 parts are penalizing you. You could move your images at least 10x faster if they’d just use even a reasonably recent part (USB 3.0 was ratified in November 2008; I guess we can still call that recent).
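To put numbers on it, here’s a rough estimate of how long offloading a full card takes at each generation’s ceiling. The card size and the 60% real-world efficiency factor are assumptions:

```python
# Rough estimate: time to offload a full card at each USB generation's rate.
# Real-world throughput is lower than the spec; the 60% factor is an assumption.
CARD_GB = 64  # assumed card to offload

def offload_minutes(link_gbps: float, efficiency: float = 0.6) -> float:
    # gigabytes -> gigabits, divided by effective throughput, converted to minutes
    return (CARD_GB * 8) / (link_gbps * efficiency) / 60

for name, gbps in [("USB 2.0 (0.48 Gbps)", 0.48),
                   ("USB 3.0 (5 Gbps)   ", 5.0),
                   ("USB 3.1 (10 Gbps)  ", 10.0)]:
    print(f"{name}: {offload_minutes(gbps):5.1f} minutes")
```

Under those assumptions: roughly half an hour over USB 2.0 versus about three minutes over USB 3.0. That’s the penalty for a part choice the customer never sees on the spec sheet.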

Okay, so sure, you can take your fast card out of your slow card slot and slow camera and put it into a fast USB reader connected to your fast computer. Apparently the camera companies never heard of tethering. Or wireless image sharing.

In the tech world, lack of performance-minded, problem-solving solutions generally means that you get fewer and fewer takers for your products. Sound familiar, camera industry? Yeah, you did that.

User solution: Fast card readers, but that’s not really a solution. I’ve watched a few studio shooters simply move to cameras that “tether fast.” But that list is still pretty small, and there might not be a camera in the brand you prefer that accommodates your needs, so you switch brands.

#7. No Qi-Type Power Option

Photo by IKEA

Cameras and batteries have a long history of user abuse. In the beginning, there was the ever-changing camera battery specification: new cameras meant new batteries. Then came the ever-tightening third-party lockouts, where a third signal line was used to communicate “brand authenticity” to the camera. Even that doesn’t seem to work all that well, as the EN-EL15/15a/15b saga suggests (my D500 reports different battery capabilities with each of the official batteries in this line, despite the same basic battery spec).

Next up was the “we don’t supply a charger” thing, though often you’d get a USB-type charger that would connect to the camera and charge your batteries at tortoise speed.

Technically, the batteries we use are pretty much all made from the same small set of cell sizes, from only a couple of makers, with only a couple of optional specifications; only a few cell types and sizes are actually produced for Li-Ion. So while Canon, Fujifilm, Nikon, Olympus, and Sony batteries may look different on the outside and respond differently at the pinouts, inside they’re pretty much all two-cell packs from the same factories. In other words, just as there are only a few sizes and configurations of replaceable alkaline batteries (AA, AAA, C, D, and so on), there are only a few sizes and configurations of what’s glued together inside camera batteries in a proprietary package.

First response: yeck! (that would be the sound of a cat ejecting a fur ball)

But the real issue lying underneath all the battery machinations is this: what’s optimal for the user?

Well, that would be a Qi-like wireless charging solution. You know, the one that Apple just endorsed and that IKEA has been building into nightstands for over a year now. Heaven forbid, IKEA is moving technology forward faster than the consumer electronics camera companies?

Yep. And that’s not just an insult, but it’s a condemnation of those boys in Tokyo designing your state-of-the-art cameras (and yes, it’s almost all boys).

Seriously, there are two things I want in battery charging for my cameras: (1) a wireless set-my-camera-down-and-it-charges solution; and (2) USB chargers that take two or more batteries. Tokyo hasn’t given us #1, but China has provided plenty of generic #2s.

Why USB charging? Because you can plug it into AC if you have to, but you can also then transfer charge from a big portable battery (like the Omnicharge ones I use) to not just your camera batteries but to pretty much any other device (that charges by USB) that you’ve bought. Next time a hurricane wipes out your AC, you’re going to want some big batteries and a solar panel to charge them.

I’ll repeat: users want problems solved. The trick is to find the problem before the user does. Consistently, the camera makers are actually behind the user discovering the problem. In many cases, the camera makers never actually notice that the users discovered and are complaining about a problem that’s easily solvable. Go figure.

User solution: eBay and Amazon for USB battery chargers for a number of mainstream batteries. Dual battery chargers (like the Watson from B&H) for heavy users.

#8. Vulnerable Cable Connections

A picture is worth a thousand words they say, so first a picture, then my thousand words:

What you might not notice in that mess at the side of Nikon’s supposed best-ever digital camera is Nikon’s new HDMI/USB Cable Clip, which is supplied with the D850. It’s there to protect against a tripped-over cable interrupting communications (and possibly damaging the camera; undue pressure on the cable connections has proven in the past to break cameras).

Elegant, isn’t it? Makes holding the camera so easy, too!

Connectors on cameras are located where it’s convenient for the camera designers to put them. No space on that side? Well, put one of the connectors on the other side (e.g. the EOS M5). Users never actually hold their cameras at the sides, do they?

Yes Georgia, they do.

The funny thing is that the terrible wireless capabilities of these cameras make it more likely that you’re going to plug a cable in. Talk about one problem compounding another. If I want to tether in the studio, it’s not going to be by wireless (that image above is exactly Nikon’s intended solution for me in the studio; not!).

If you’re starting to understand why I call a lot of camera engineering these days dumbass lazy, well, you might be getting the point by now. One lack of problem solution leads to new problems caused by terrible other solutions.

Basically, I have to conclude one (or both) of two things: (1) the camera companies are inherently lazy in engineering solutions; or (2) they never actually use their products (or even watch others using them). Sad.

User solution: None that are elegant. The video guys buy rods and cheese plates and rig out their gear so much that it’s nearly impossible to see (or hold) the camera inside. The still studio folk tend to suffer with what we get.

#9. No Focus Info in Viewfinder

Okay, let’s move on to another topic: lack of useful information. It took the camera makers a few years to figure out how to put aperture, shutter speed, and exposure information in the viewfinder. Apparently they thought they were done when they managed that in the 1970s.

Sure, they’ve moved to newer technologies to display those things in the latest DSLR viewfinders. And they’ve moved some icons from the top-of-body LCD into the viewfinder.

Conspicuously missing in all that is useful focus information.

Yes, I know that modern low-dispersion elements in lenses mean that perfectly exact focus information isn’t possible, due to changes that happen at different temperatures. But we don’t get any focus information at all. Indeed, sometimes even the actual focus point used isn’t displayed, because, well, that would take a bit more horsepower in the supplemental chips and, you know, cut costs.

Do I need to know that I’m focused at exactly four feet, three and a half inches? Not really. If things require me to be that precise, I’m in a controlled situation with the camera on a tripod and lots of measuring and calculating tools handy. Would it be nice to know that I’m focused at four feet and that at my current aperture everything from two feet to seven feet should have some level of acceptable focus? You bet it would.
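The camera already knows every input such a display needs: focal length, aperture, and focus distance. Here’s a minimal sketch of the standard hyperfocal/depth-of-field approximation that kind of readout could use; the 0.03mm circle of confusion is an assumed full-frame value, and real lenses will deviate from the thin-lens model:

```python
# Minimal sketch of the depth-of-field numbers a viewfinder could show,
# using the standard hyperfocal approximation. The circle of confusion is an
# assumption (0.03mm is a commonly used full-frame value).
def dof_limits(focal_mm: float, f_number: float, focus_m: float,
               coc_mm: float = 0.03) -> tuple[float, float]:
    f = focal_mm / 1000.0                        # focal length in meters
    c = coc_mm / 1000.0                          # circle of confusion in meters
    hyperfocal = f * f / (f_number * c) + f
    near = hyperfocal * focus_m / (hyperfocal + (focus_m - f))
    far = (hyperfocal * focus_m / (hyperfocal - (focus_m - f))
           if focus_m < hyperfocal else float("inf"))
    return near, far

# Example: 50mm lens at f/8, focused at 1.3m (about four feet)
near, far = dof_limits(50, 8, 1.3)
print(f"Acceptable focus roughly from {near:.2f}m to {far:.2f}m")
```

A rough range like that, updated live next to the shutter speed, would answer the question most people are actually asking: what’s going to be sharp?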

Along the way, we’ve gotten a few cameras that do this. But most don’t. That’s a shame, because “getting subjects in focus” is one of the primary things that many struggle with using their cameras.

User solution: Estimation and app-based calculators.

#10. No Raw Tools

The ability to shoot raw must be there for a reason, right? And what reason would that be? Perhaps so that serious users can get the very most out of their systems that is possible?

Right up at the top of the list of “getting the most” is making sure your exposure is optimal. I won’t get into the esoterics of Expose To The Right (ETTR), but in a general sense, you want the very brightest detail you retain to be right at about the saturation point of the sensor. That’s where you maximize the dynamic range between your brightest point and the darkest detail captured.

Okay, how do we do ETTR in a camera shooting raw?

A lot of folk look at the histograms that the cameras supply. Only problem? That histogram comes from demosaiced, white-balanced, and tonally conditioned JPEG data. And that’s if you’re looking at the histogram for an image already taken. For histograms in real time, as some of the mirrorless cameras provide, it’s worse: the histogram has all the faults I just mentioned, plus it’s derived from a video stream that samples only a subset of the sensor and doesn’t match the dynamics of what the actual shot will attain.

Okay, maybe we should look at the blinkies (highlights display). Again, that’s derived from the JPEG data, but now we have the additional problem that most camera companies don’t even tell us when those blinkies are triggered (hint: on Nikon cameras, it seems to be at an 8-bit value of 248).

Why can’t we have raw histograms and blinkies? Because no one in Tokyo sat down and designed such a thing in the imaging ASICs. It would have to be in the imaging ASIC, because given the megapixel counts we have these days the counting would go on forever if it wasn’t done with a dedicated algorithm-in-hardware. But personally, I’d be willing to wait a moment—only a moment mind you—for a true raw histogram or highlights display to be generated on playback.
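For the curious, here’s roughly what such a raw-level check amounts to, sketched after the fact on a computer with the rawpy and numpy libraries (the same per-channel counting an imaging ASIC could do in-body, and that would also feed a true raw histogram). The file path and the “within 2% of saturation” margin are arbitrary choices for illustration:

```python
# Post-hoc sketch of raw "blinkies": count photosites near sensor saturation,
# per Bayer channel, straight from the undemosaiced raw data.
import numpy as np
import rawpy

with rawpy.imread("shot.NEF") as raw:           # illustrative path to any raw file
    data = raw.raw_image_visible                # the raw Bayer mosaic (pre-demosaic)
    colors = raw.raw_colors_visible             # per-photosite channel index (0=R, 1=G, 2=B, 3=G2)
    white = raw.white_level                     # sensor saturation value from the file's metadata

    for idx, name in enumerate(["R", "G", "B", "G2"]):
        channel = data[colors == idx]
        if channel.size == 0:
            continue
        clipped = np.count_nonzero(channel >= 0.98 * white) / channel.size
        print(f"{name}: {clipped:.2%} of photosites at or near saturation")
```

That’s it. No demosaic, no white balance, no tone curve: just the actual captured values against the actual saturation point. That’s the number ETTR shooters want, and it’s the one the cameras won’t show us.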

Unfortunately, it doesn’t stop there. Nikon’s latest metering system has a tendency to preserve highlight latitude. Only we don’t know how much latitude they’re actually trying to preserve. Years ago, it was common for me to start shooting with my exposure compensation at -0.3EV as a general rule on my Nikon DSLRs. With the D5, D850, and D500, that tends to be +0.3EV now, although I’m still evaluating this: the meter also tends to be a bit more variable between high-contrast and low-contrast situations. Also, with matrix metering, there’s that 16.3 EV cutoff that comes into play.

Still, there’s clear highlight latitude going on in the metering system, which is the antithesis of ETTR. Moreover, your raw converter may be playing games with you, too. Adobe applies unseen exposure compensations that come into play (0.5EV in the case of the Nikon D5).

No, we’re still not done.

While sensors are generally linear, they’re definitely not perfectly linear. So where does that highlight non-linearity begin, and how big is it? Silence in Tokyo. While Bayer is defined as red, green, and blue, what are the actual filtrations being used in the sensor? Silence in Tokyo. Is there white-balance pre-conditioning going on in the raw data, and if so, how does that change the data? Silence in Tokyo. At what point does actual shadow detail disappear into noise in the 14-bit values? Silence in Tokyo. Where are gains changed or other sensor-based strategies used to control noise? Silence in Tokyo. Okay, maybe not complete silence: some will tell you in their marketing documents that they use a two-gain strategy, but give no details of what that actually means or how it might impact your data.

I’m just getting started here, and yet I think I’ll stop here. Because if we got ETTR, raw histograms and highlights, definable highlight triggers, and actual spectral information out of the Japanese camera companies, we’d all be happy for a very long while.

User solution: UniWB, which is just one of the many things we have to reverse engineer to get useful raw information. RawDigger as a post-analysis tool that we can use to analyze controlled situation tests with. Experimentation.

#11. Minimal or Incomplete Help Systems

It’s not help if it doesn’t help. Might as well put a big bright yellow Help button on the camera and display the message “Did you try turning it on and off” when the user presses it.

Yes, I know we have very small displays that we’re dealing with here. The handiest camera to me as I write this is a Sony A9. Close examination says that the textual engine they’re using in the menus produces something like 40 characters on 8 lines. That’s a bit more than two tweets. (Aside: hmm, interesting project. Can I tweet a menu help system for a Nikon DSLR? One tweet per menu item, one tweet per choice within the menu item. Yes, I think I can. I think I might even try that. What do you think? Should I?)

So we don’t have a lot of real estate to deal with before we trigger scrolling, which I think we want to avoid, and we have to deal with translating into 70 or so languages, certainly at least 35 to say we’re a global product.

But you know what I think the real problem here is? Wait for it…wait for it…wait for it…

Costs.

The Japanese camera companies probably wouldn’t even pay what we’d call minimum wage for help system text to be generated. They’d off-site that to the lowest bidder, or they’d just add it to some salaryman’s list of things to get done ASAP, and he’d spend minimal time on it.

Another hidden cost lurks in the background, though. Let’s say that we have 100 menu items with an average of 5 choices each, to be documented in 35 languages, and each help screen is 320 characters (40 characters on 8 lines). That’s potentially 5.6 million characters, call it 5MB of text. While memory is relatively cheap, we need this memory to be flash memory so that firmware upgrades can fix errors, and flash memory is more expensive and will be limited in any camera design. Almost all the camera companies probably have fixed size targets that all the firmware and help have to fit into. As you start adding more features, you lose help space, which is why you see some Nikon menu features without help: someone prioritized which ones should have it in order to fit.
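For what it’s worth, that back-of-envelope math checks out:

```python
# Quick check of the help-text budget estimated above.
menu_items = 100           # menu entries needing a help screen
choices_per_item = 5       # average selectable choices per entry
languages = 35             # minimum to plausibly call the product "global"
chars_per_screen = 40 * 8  # roughly 40 characters on 8 lines

total_chars = menu_items * choices_per_item * languages * chars_per_screen
print(f"{total_chars:,} characters")   # 5,600,000 -> roughly 5-6MB as stored text
```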

Yes, I know that there are ways to compress text, potentially in very dramatic ways if we limit the vocabulary, but that increases engineering costs.

And since we have cameras that don’t talk very well to smartphones, we can’t even have a smartphone app that would provide real-time help as you scroll through the menus on your camera (that would solve the space issue, but doesn’t mitigate the other cost issues).

User solution: Don’t use features you don’t understand. Lots of sleuthing through Web sites, YouTube videos, books, etc. Ask a friend. Call technical support.

Gotta Stop Somewhere

I’m not nearly done. I’ve barely touched the surface in how the camera companies aren’t hearing users and aren’t delivering state-of-the-art. But I have to stop somewhere, or I’d just spend the rest of my life typing additions to this article.

I think my point is made.

The camera companies say they want more sales, more profit. But they’re not even close to providing a fully satisfactory experience for their existing customers, so where do they think these additional sales and profits will come from? Just plodding along with what they’re doing? That’s always a scenario that has a bad ending.


About the author: Thom Hogan is a photographer and author of over three dozen books that combined have sold over a million copies worldwide. The opinions expressed in this article are solely those of the author. You can find more of his work and words on his website. This article was also published here.
