James Lorentson Photography: Nature Photography Workshops & Fine Art Prints


Dynamic Range & Visual Perception

DISCLAIMER: this is a technical (geeky) article that gets into the weeds. I think the material is interesting and useful to think about. You may not. If you want to skip to the technique to add depth in post, stay tuned for next week’s post.


Dynamic Range. More, Please?

In the quest for more of everything—more megapixels, more image stabilization, more advanced features (e.g., focus stacking)—more dynamic range is at the top of the list for most landscape photographers. And for good reason: we shoot in crazy contrasty light, and more dynamic range lets us capture more shadow and highlight detail. Camera manufacturers are hard at work battling it out, and many photographers choose one camera over another for its superior dynamic range.

There are numerous sites that exhaustively test and compare camera models by dynamic range. Here, DPReview compares dynamic range between the latest DSLR cameras (2017).

In a previous article, I discussed a number of ways that our perception of a scene differs from a photograph, including Contrast & Dynamic Range.

Since then, I learned some surprising new information. Today, I’ll share it with you, and lay the groundwork for a powerful tool I use in Photoshop to add depth to my images.

We 'See' More Than Our Cameras Do

Dynamic Range of Human Visual Perception vs Modern Camera Sensors. Some smart people dive into the science in greater detail if you're into that sort of thing.

As you can see in the image above, the newest camera sensors are capable of capturing 14-15 stops of light. Pretty impressive, but still no match for the evolutionary marvel that is our visual perception, which can ‘see’ an astonishing 30 stops!

But while the human eye is capable of seeing about 30 stops of light in total, called the Static Contrast Range, it only sees about 10 stops at any given time. Like a movie camera, our perception continually records these 10-stop images as our eyes scan the scene. These separate images are combined into one complete mental image of the scene.
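To put those stop counts in perspective: a stop is a doubling of light, so N stops of dynamic range correspond to a 2^N:1 contrast ratio between the brightest and darkest detail. A quick sketch (the specific stop values are the figures quoted above):

```python
# A stop is a doubling of light, so dynamic range in stops
# maps to a contrast ratio of 2**stops : 1.
def stops_to_ratio(stops):
    return 2 ** stops

print(f"Eye at a single glance (10 stops): {stops_to_ratio(10):,}:1")
print(f"Modern camera sensor (14 stops):   {stops_to_ratio(14):,}:1")
print(f"Full visual perception (30 stops): {stops_to_ratio(30):,}:1")
```

Those 30 stops work out to a contrast ratio of over a billion to one, which is why the gap between perception and sensor is bigger than "30 vs. 14" makes it sound.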

How our eyes and mind work together to perceive a scene. We only “see” 10 stops at a time. Our minds put these separate images together into one complete mental image.

For example, let’s say you are out shooting Joshua Trees against a sunrise. As you scan the scene, from the deep shadows in the foreground trees through the midground out to the bright sky, your eyes dedicate 10 stops to each part, and then your mind combines them into one complete scene. All without you even thinking about it. Pretty amazing.

Condensed Dynamic Range

That 30-stop scene is what we remember, not the 14 stops of the camera or smartphone screen. This is one of the reasons photographs don't do justice to our memories.

You might be thinking, "Don't techniques like HDR and luminosity blending overcome these dynamic range limitations?" Well, sorta. They DO allow us to avoid clipped shadows and blown-out highlights.

But the 30 stops that you perceived are now condensed down to what your screen can display: around 10 for standard monitors and close to 14 for newer HDR monitors.

So even with multiple exposures, the highlights and shadows that were spread out over 30 stops in your mind are squished towards the middle in your image, reducing depth and compressing subtle tonal gradations.
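That squishing can be sketched with a toy example (this is just an illustration of the compression, not the Photoshop technique discussed later): if 30 stops of scene brightness are mapped linearly, in stop terms, onto a 10-stop display, every tonal difference shrinks to a third of its original size.

```python
# Toy model: linearly remap a 30-stop scene range onto a 10-stop display
# range, working in stop (log2) space. The ranges are assumptions for
# illustration, matching the figures in the text.
def compress_stops(scene_stops, scene_range=30, display_range=10):
    return scene_stops * (display_range / scene_range)

# Two tones that were 3 stops apart in the scene...
scene_diff = 18 - 15
display_diff = compress_stops(18) - compress_stops(15)
print(f"{scene_diff} stops in the scene -> {display_diff:.1f} stop on screen")
```

Real tone mapping is far more sophisticated than a linear squeeze, but the underlying trade-off is the same: subtle gradations that were distinct in your mental image end up crowded together on screen.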

Add Back Depth

Knowing this, and keeping it in mind when editing, you can process your images with more intent. And with careful editing, you can add back depth to create a more visceral representation of what it felt like being there. I have a neat Photoshop technique for doing that, and in the next post, I’m going to share it with you. Please subscribe to the mailing list so you won’t miss it!

