The NVIDIA RTX 4090 is Amazing, and Photographers Should NOT Buy It

When NVIDIA reached out to ask if we wanted to try the new RTX 4090 GPU, we almost said “no.” Most of our readers are photographers, and most photo editing simply doesn’t require a top-tier GPU. But we did say “yes” because we wanted to answer a different, more relevant question: do photo editors need a GPU at all? And if so, how much do you need to spend to get top-notch photo editing performance?

To that end, we put the shiny new RTX 4090 Founders Edition graphics card that NVIDIA sent over up against three of its little siblings from the previous generation: the RTX 3070, RTX 3080, and RTX 3090 Founders Edition, all new-in-box before this test, and all running inside the exact same PC.

First, we put these cards through a slew of high-powered synthetic benchmarks that test things like 3D rendering and ray tracing in order to highlight the kinds of massive performance gains you can expect if you work in high-end visual effects or 3D design. Then, we tested these same cards in our usual Photoshop, Lightroom Classic, and Capture One Pro benchmarks to show why most photographers are better off saving some money and buying a last-gen GPU.

Oh right, spoiler alert: Most photographers are better off saving some money and buying a last-gen GPU.

But that definitely doesn’t mean you should skip the GPU entirely, nor does it mean that you won’t see any performance gain by upgrading your graphics card to the latest and greatest. It’s just that, in our testing using Lightroom Classic, Capture One Pro, and Photoshop, you can get 90% of the performance for less than one third the price of an RTX 4090.

Let’s dive in.

Our Testing Rig

All of the tests below were performed in my personal editing rig, which I built about six months ago. It consists of:

  • An MSI MEG Z590 ACE Motherboard
  • An Intel Core i9-11900K CPU, water-cooled using a MSI CoreLiquid 240r v2 AIO
  • 64GB of Corsair Vengeance Pro RGB DDR4-3600 CL18 RAM
  • A 1TB Corsair MP600 Pro PCIe 4.0 M.2 NVMe SSD
  • An EVGA 1000 P3 1000W PSU (80 PLUS Platinum)

Notably, this isn’t the kind of bleeding-edge rig you’ll see in some excellent high-end gaming reviews from people like LTT, Gamers Nexus, or JayzTwoCents. We simply don’t have access to that kind of gear. But it’s probably closer to the kind of PC many of our readers are actually using if they built an editing rig within the last two or three years, and it gives us a reasonably powerful CPU that won’t immediately bottleneck every one of our tests.

Speaking of which…

A Note on CPU Bottlenecks

Unless you’re a gamer, the term “CPU bottleneck” might be new to you, but it’s important that you understand what it means. That’s because CPU bottlenecks are at the core of why some people can justify spending $1,600+ on a GPU and other people may as well be lighting their money on fire.

A CPU bottleneck is exactly what it sounds like: it’s when your CPU is the limiting factor in a given computational task. Whether we’re talking about exporting photos, rendering a Premiere Pro project, or using AI to upscale a big batch of RAW files, there comes a point where adding more GPU horsepower will do absolutely nothing for performance, because your current GPU is already spending most of its time sitting around idle, waiting for your CPU to finish its part of the job.

The concept is particularly relevant for gaming, where upgrading your graphics card doesn’t actually improve your frames per second (fps) in a given game at lower resolutions, because the limiting factor is your CPU. Crank up the resolution to 4K and suddenly the weaker graphics cards will fall behind, but before that, a more powerful GPU won’t help. As you’ll see shortly, this same concept is why many, if not most, photo editing tasks don’t need the latest and greatest high-end GPU.
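If you want to see the math behind that intuition, here’s a quick toy model in Python. The per-photo timings are made up for illustration, not measurements from our rig: the GPU stage gets faster with each hypothetical upgrade, but once it outpaces the CPU stage, total batch time stops improving.

```python
# Toy model of a CPU bottleneck -- the per-photo timings are illustrative, not measured.
CPU_SECONDS_PER_PHOTO = 2.0  # the CPU's fixed share of the work for each photo

def batch_seconds(num_photos: int, gpu_seconds_per_photo: float) -> float:
    """When the CPU and GPU stages overlap, the slower stage sets the pace."""
    return num_photos * max(CPU_SECONDS_PER_PHOTO, gpu_seconds_per_photo)

for gpu_s in (4.0, 2.0, 1.0, 0.5):  # each step down represents a "faster GPU"
    print(f"{gpu_s:.1f}s of GPU work per photo -> {batch_seconds(100, gpu_s):.0f}s for 100 photos")

# 4.0s of GPU work per photo -> 400s for 100 photos
# 2.0s of GPU work per photo -> 200s for 100 photos
# 1.0s of GPU work per photo -> 200s for 100 photos  <- the GPU now waits on the CPU
# 0.5s of GPU work per photo -> 200s for 100 photos  <- a faster GPU changes nothing
```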

But first, let’s see how our four graphics cards compare in high-end, graphics-intensive tasks that aren’t limited by the other components in our rig.

3D Rendering Benchmarks

When it comes to rendering ultra-high resolution video files or calculating exactly how the light is bouncing off every inch of a 3D animated frame using ray tracing, a powerful GPU can make a huge difference. In this respect, NVIDIA has made a massive leap with the RTX 4090.

By moving to a whole new architecture built on TSMC’s 4nm process, increasing the size of the card, and beefing up the cooler, NVIDIA was able to boost the base clock by over 35% and pack in over 16,000 CUDA cores, 512 fourth-gen Tensor cores, and 128 third-gen RT cores, all while consuming the same 450W TDP as the last-gen RTX 3090 Ti.

If that all sounds like gibberish, the upshot is that this card should lay waste to both the RTX 3090 and the RTX 3090 Ti in GPU-bound tasks without requiring a bigger power supply or overheating in the process. This is exactly what we see in all of our 3D rendering benchmarks.

V-Ray (v5.0.20)

In V-Ray, the RTX 4090 doubles our 3090’s score in both the CUDA and ray tracing benchmarks, rendering over 4,200 “vpaths” and over 5,500 “vrays” in a one-minute run:

This kind of leap in performance is incredibly rare these days, but NVIDIA has pulled it off. And this isn’t some fluke, either; it’s a pattern that plays out over and over again in every “creator” benchmark we ran.

Blender (v3.3.0)

In Blender, the RTX 4090 more than doubles the RTX 3090’s performance in the Monster scene, and nearly doubles its performance in both Junkshop and the older Classroom scene:

OctaneBench (v2020.1.5)

Finally, OctaneBench tells the same story all over again. In all four rendered scenes, the RTX 4090 comes close to doubling what the already beefy and power-hungry RTX 3090 can do, while the 3090 posts only modest improvements over its little siblings, the RTX 3080 and RTX 3070.

Again, when it comes to high-end rendering performance in benchmarks that are specifically tuned to rely exclusively on the GPU, the RTX 4090 represents a doubling of performance generation over generation. That’s… incredible. It’s not often we get to say that this generation of *fill in the blank* is 2x or 100% faster than the previous model without adding a bunch of asterisks. Unfortunately, this is where I have to switch gears and tell you why, as a photo editor, you won’t see anywhere close to this level of performance uplift in your favorite photo editing apps.

Photo Editing Benchmarks

As mentioned earlier, most photo editing tasks are CPU bottlenecked. And yes, that includes tasks that are “GPU accelerated.” It’s not all bad news: Photoshop leans on the GPU to accelerate or outright perform several important tasks like Smart Sharpen and most of the Blur Gallery effects, Capture One Pro 22 uses the GPU to significantly accelerate exports, and, as of a few months ago, Lightroom Classic added GPU acceleration to its exports as well.

There’s also a growing number of AI-powered photo editing tools like ON1 Resize that use the GPU to speed up processing, and Adobe Sensei-powered actions like Sky Replacement and Super Resolution rely heavily on GPU acceleration as well.

But how much do these things really speed up your workflow? And where is the price-to-performance sweet spot if you’re looking to buy your first GPU? Fortunately for your wallet, the sweet spot for the most time-consuming photo editing tasks is on the low end.

Adobe Lightroom Classic

In Lightroom Classic, import performance is 100% based on your CPU and RAM—the GPU does nothing—so we’re going to skip that benchmark. But when it comes to exports, the latest versions of Lightroom Classic now use the GPU to accelerate that process significantly. Using 110 Sony a7R IV and 150 PhaseOne XF RAW files, we applied a custom preset and then exported each batch of files as 100% quality JPEGs and 16-bit TIFFs in turn.

Long story short: any GPU is a huge improvement over using the CPU for export, but there is definitely a point of diminishing returns as the GPU gets more and more powerful:

This is one of the few benchmarks we ran where spending more money on an RTX 3080 may actually be worthwhile if you’re exporting thousands of JPEGs every week. Lightroom seems to rely heavily on the GPU’s fast onboard memory (VRAM) when GPU-accelerated export is enabled, so a more expensive GPU with more VRAM makes a significant difference. That’s probably why we see no difference between the RTX 3090 and the RTX 4090: both have 24GB of VRAM.

Capture One Pro 22

The story in Capture One Pro 22 is much the same. Imports do not use the GPU at all, so we’ve skipped that benchmark again, but exports are significantly GPU accelerated and, in this case, they hardly rely on the GPU’s VRAM at all.

The first thing to notice about our results is that Capture One Pro and Lightroom Classic are much closer in export performance now that Lightroom also has GPU accelerated export. The next thing to notice is that the difference between CPU-only and any GPU is even larger than in Lightroom. And the last thing to notice is that upgrading to a high-end GPU has basically no impact on JPEG exports, and only a moderate impact on TIFF exports.

The JPEG results in particular represent a classic CPU bottleneck. The RTX 3070 is already sitting around waiting for the CPU to catch up, so there is no difference between any of the GPUs in that export. The TIFF results are a little better, getting slightly faster with each upgrade, but it’s nothing like the massive performance leap we saw in the 3D rendering benchmarks.

Adobe Photoshop

Unsurprisingly, our Photoshop benchmarks show more of the same. We ran the usual benchmark: Puget Systems’ PugetBench v0.8, which we still use because it includes a PhotoMerge test that was removed in later versions. In this case, we don’t actually care about PhotoMerge; all we care about is the “GPU” category score. Every other score was within the margin of error from one GPU to the next, and only the GPU category score reliably increased as we upgraded from using the integrated GPU, to the RTX 3070, the RTX 3080, the RTX 3090, and finally, the RTX 4090.

Just like in our other tests, there’s a big leap from the iGPU to the discrete GPUs, but it’s frankly shocking how little performance an extra $1,000+ will buy you when it comes to GPU-accelerated tasks.

These findings extend to our qualitative experience. Using Photoshop features like Sky Replacement or applying Super Resolution through Camera RAW on individual RAW files is definitely faster on the beefier GPUs, but you won’t notice the speed increase when you’re editing one photo at a time. The RTX 3070 might take 2 seconds to AI upscale your 100MP photo, while the RTX 4090 takes just 1 second. That’s still a massive speed improvement in percentage terms, but it’s on a time scale that’s totally irrelevant to your workflow.
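Sticking with those same illustrative per-photo times (round numbers for the sake of argument, not measured benchmark results), the savings only start to matter once you multiply them across a big batch:

```python
# Hypothetical per-photo Super Resolution times from the example above (not measured results).
seconds_per_photo = {"RTX 3070": 2.0, "RTX 4090": 1.0}

for batch in (1, 10, 500):
    saved = batch * (seconds_per_photo["RTX 3070"] - seconds_per_photo["RTX 4090"])
    print(f"{batch} photo(s): the RTX 4090 saves about {saved / 60:.1f} minutes")

# 1 photo(s): the RTX 4090 saves about 0.0 minutes
# 10 photo(s): the RTX 4090 saves about 0.2 minutes
# 500 photo(s): the RTX 4090 saves about 8.3 minutes
```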

Based on our experience, any modern GPU will deliver a buttery smooth experience and great performance in Photoshop. No need to shell out for top-shelf.

AI Resizing

In our final test, we ran two different AI-powered resizing algorithms that both use GPU acceleration: ON1 Resize and Adobe’s Super Resolution. Both rely on the GPU to do the actual upscaling and on the CPU to export the result.

Unfortunately, we couldn’t test these two on a one-to-one basis because ON1’s algorithm takes much longer per photo than Adobe’s despite generating slightly worse results (to my eye). So we used ON1 Resize to upscale five full-sized Sony a7R IV RAW files, and we used Adobe Super Resolution (through Lightroom Classic) to upscale the full batch of 110 Sony a7R IV RAW files we use for testing.

Adobe Super Resolution will double the resolution of your RAW file and export a DNG by default, so we chose matching settings in ON1 Resize and ran both tests independently.

I should note that we wanted to run these tests using the iGPU as well, but the processing is just way too slow. Upscaling a single photo using the Intel UHD integrated graphics on our Core i9-11900K took 34 minutes in ON1 Resize and two-and-a-half minutes in Lightroom Classic. That translates into almost 3 hours for the full ON1 test and over 4 hours in Lightroom Classic. However, the GPU accelerated results are still illuminating:

As you can see, there’s a steady climb in performance for the ON1 Resize test, and a much more meager climb for Adobe Super Resolution. The former still has room to improve, but the latter is pretty much CPU bottlenecked from the beginning: the actual AI upscaling only takes a few seconds per photo, and the rest of the time is spent waiting for the CPU to export the DNG, add it to the library, and create a 1:1 preview.
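As a side note, the multi-hour iGPU estimates above are just straight extrapolations of the measured per-photo times across each test’s batch size, which you can sanity-check in a couple of lines:

```python
# Extrapolating the per-photo iGPU times to each full test batch.
on1_hours = 5 * 34 / 60      # 5 photos at roughly 34 minutes each
lr_hours = 110 * 2.5 / 60    # 110 photos at roughly 2.5 minutes each

print(f"ON1 Resize on the iGPU:       ~{on1_hours:.1f} hours")   # ~2.8 hours
print(f"Super Resolution on the iGPU: ~{lr_hours:.1f} hours")    # ~4.6 hours
```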

You’ll definitely notice a performance improvement if you use Adobe Super Resolution to upscale large batches of photos. But based on these results, there’s very little reason to upgrade beyond the RTX 3080.

Conclusion

This article isn’t meant to be a full, comprehensive review of the RTX 4090, nor is it meant to be a comprehensive comparison against all of the major competitors on the market. We would have loved to have an AMD GPU in the lineup or some 20-series NVIDIA GPUs for that matter, but that wasn’t really the point. The point of this article was to confirm, once and for all, that photographers don’t need a high-powered GPU to get the best possible photo editing performance.

As more and more photo editing apps tout the fact that they’re “GPU accelerated,” it’s tempting to think that a more powerful GPU is always better, but when it comes to photo editing, that’s not the case in the vast majority of applications. The latest generation of GPUs is aimed squarely at gamers, animators, and 3D designers who are either rendering hundreds of frames per second or one insanely complex three-dimensional scene. Day-to-day photo editing tasks are child’s play to a modern GPU.

Where a GPU is useful is for AI-accelerated batch editing or massive GPU-accelerated exports, and for these tasks, pretty much any modern GPU will do, whether you spend $500 or $5,000.
