Repost: HDR, the zone system, and dynamic range

[image: _8042033 copy]
My eyes, my eyes! I had to work quite hard to make this, as a) I don’t own any of those filter programs, and b) I don’t do this kind of hyper-toned, overlapping HDR. The actual, final version of this image is at the end of the article.

Note: I’m reposting this article as a refresher before I talk about something a little harder to define in the next one.

HDR/ High Dynamic Range photography is perhaps one of the greatest blessings and curses of the digital age of imaging. On one hand, we have retina-searing rubbish put out by people who for some odd reason celebrate the unnaturalness of the images, encouraged by the companies whose filters make doing this kind of thing far too easy – and on the other hand, there are a lot of HDR images out there that you probably wouldn’t have pegged as being anything other than natural. There is, of course, a way to do it right, and a way to do it wrong. I use HDR techniques in almost all of my images – I live in the tropics, remember, and noon contrast can exceed 16 stops from deep shadows to extreme highlights – we simply have no choice if we want to produce a natural-looking scene.

I originally wanted to explain my recent digital B&W conversion epiphany, but I realized that I couldn’t do it without explaining the whole concept of dynamic range and briefly touching on the zone system first. All photography centres around light, and how that light is represented in a way that captures the scene to represent and convey the artistic intentions of the photographer. Taking the quality and quantity of light in the scene as a given, it always then becomes a question of how you map input luminance to output luminance, especially when one exceeds the other.

We also need to understand something about how human vision works: our eyes are nonlinear, partially because our brains are extremely sophisticated processing devices, and partially because of the way we see: the eyeball scans rapidly many hundreds of times a second, building up a detailed picture that’s then composited together by the brain. This is known as persistence of vision, and is the reason why cinema at 25fps still appears to be mostly smooth motion even though we are seeing discrete frames – our brains fill in the blanks. While all this is happening, the eye is also dynamically adjusting the iris and the signal processing to maximise dynamic range: the upshot is that whilst we almost never have to deal with blown highlights – no matter how bright a scene is, we can almost always make out some luminance gradient – we don’t see so well into the shadows; seeing pure black with almost no detail is normal. This is a matter of perception, not just optics.

A photograph is a static scene: we view it and the brain doesn’t get any additional information from scanning it again with a larger iris or while collecting more light. We therefore need to ensure that the limited tonal range contained within a static image – be it backlit and transmissive as on a screen, or reflective in a print – represents the actual scene in such a way that the observer’s brain can reconstruct the relative tonal relationships. I put heavy emphasis on ‘relative’ here; again, because our eyes scan the image and the brain uses persistence of vision to reconstruct the whole (see these two articles – part one, part two – on psychology and how we view images for more information) – the absolute difference doesn’t matter; only the relative difference. So long as the image maintains an overall semblance of separation, and the right relative separation to adjacent areas, then all is well – and the image appears natural.

This is a good thing, because even if our cameras can capture 16 stops of dynamic range, none of our output media – digital or print – can display it. We therefore need to find a way of allocating input to capture, and capture to output. The final stage isn’t so much of an issue, as the nature of the technology tends to take care of it for us – the extremes of the range will be compressed, but they will never overlap. It’s the input-to-capture portion that one must be extremely careful with. Of course, all that follows applies only to scenes where you are not in control of the light; if you’re using a controlled lighting setup in studio and have to use HDR to control your dynamic range, you are an embarrassment as a photographer.

[diagram: explaining DR and HDR – D2D]

Here’s what normally happens: input (top) goes to output (bottom). The gray wedge represents the tonal/ luminance scale; it’s grossly oversimplified, as there’s really one for each colour channel, but this is purely for purposes of explanation. In the process, there’s some tonal clipping – the areas represented by the red triangles are usually lost, compressed to either extreme of the tonal scale; i.e. everything below a certain luminance level goes to pure black only, and everything above goes to pure white only. The same process generally applies to the digital-file (or film-negative) to final-output stage, except that the digital file is on top, and your output medium of choice is at the bottom.
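If numbers make this clearer, here’s a minimal sketch – in Python, with made-up stop values – of a 16-stop scene being squeezed into a hypothetical 8-stop output medium; everything outside the window is the red-triangle loss from the diagram:

```python
import numpy as np

# Illustrative only: a scene spanning 16 stops, and an output medium
# assumed to manage ~8 stops. Values are relative luminances in stops (EV)
# above an arbitrary black point.
scene_ev = np.array([0.0, 2.0, 5.0, 8.0, 11.0, 14.0, 16.0])

OUTPUT_STOPS = 8.0                          # assumed output dynamic range
window_low = (16.0 - OUTPUT_STOPS) / 2.0    # below this: pure black only
window_high = window_low + OUTPUT_STOPS     # above this: pure white only

# The red triangles in the diagram: tones outside the window are lost.
output_ev = np.clip(scene_ev, window_low, window_high) - window_low

for s, o in zip(scene_ev, output_ev):
    state = "clipped" if (s < window_low or s > window_high) else "kept"
    print(f"scene {s:5.1f} EV -> output {o:4.1f} EV ({state})")
```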

Assuming we extend the recorded dynamic range of the scene by bracketing and compositing or using grad ND filters or some other process – we are still going to be left with more input dynamic range than output dynamic range. We need some way of allocating the extra information, preferably in such a way that it a) doesn’t look unnatural and b) is useful – i.e. opens up the shadows slightly, or tames the highlights for a smoother rolloff. What is typically referred to as ‘HDR’ is this allocation process.

[diagram: explaining DR and HDR – bad HDR]

The tone mapping in your typical HDR image has undergone a process that looks like this. The Roman numerals are zones; a zone is basically a luminance band/ range. The problem here is that the allocation results in overlaps: input zones 0-IV are output as zones 0-VII, but input zones VII-X are output as IV-X. Thus output zones IV-VII become an ambiguous soup: we have highlights that are darker than shadows/ midtones, and shadows that are brighter than midtones/ highlights. This simply looks unnatural – it’s also what I’ve done with the first image in this post. In case your retinas are still intact, here it is again:

[image: _8042033 copy]

You’ll notice that there’s something not quite right with the naming convention of the zones: I suspect the root cause of most bad-looking HDR is that whoever is using or writing the software makes the mistake of thinking there are as many output zones as there are input zones: there simply aren’t. (This is why we have to do HDR in the first place: we cannot accurately capture or represent the full input tonal range; perhaps our monitors don’t go perfectly black, or bright enough; our sensors are limited; or paper can’t get any brighter than zero ink density.) On top of that, most of the tone mapping is performed on the full RGB channels instead of luminance only – but a hue shouldn’t change just because it gets darker. This results in hue/ colour shifts and the rather strange palette you see above. To understand why this happens, we need to consider how the software works: you put in a number of images taken at different exposures; for each given pixel address, the program calculates an average value for each RGB channel based on the input files. It may then run those through a curve to bias the input/output mapping. This is, of course, a purely mathematical approach – it has to be, as every situation is different, and there is no such thing as an ‘ideal exposure’ – it varies based on artistic intent – and it simply cannot beat personal, perceptual adjustment on a case-by-case basis.
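To make that concrete, here’s a simplified sketch of the general approach – not any particular program’s actual algorithm; the hat-shaped weighting and the curve handling are assumptions purely for illustration. It also shows why running the curve per RGB channel shifts hue, whereas deriving a single gain from luminance does not:

```python
import numpy as np

def naive_hdr_merge(exposures, evs):
    """Simplified bracketed-exposure merge: for each pixel, average the
    exposure-normalised RGB values, trusting mid-range pixels the most.
    exposures: list of float32 (H, W, 3) arrays in [0, 1]; evs: EV offsets."""
    acc = np.zeros_like(exposures[0])
    wsum = np.zeros(exposures[0].shape[:2] + (1,))
    for img, ev in zip(exposures, evs):
        # Hat weighting: pixels near the clipping extremes count for little.
        w = 1.0 - np.abs(img.mean(axis=2, keepdims=True) - 0.5) * 2.0
        acc += w * img / (2.0 ** ev)   # normalise each frame to a common exposure
        wsum += w
    return acc / np.maximum(wsum, 1e-6)

def tonemap_per_channel(hdr, curve):
    """What most tools do: curve each RGB channel independently.
    This changes the R:G:B ratios, i.e. shifts hue."""
    return curve(hdr)

def tonemap_luminance(hdr, curve):
    """Curve the luminance only and apply the same gain to all three
    channels: the R:G:B ratios, and therefore hue, are preserved."""
    lum = np.maximum(hdr @ np.array([0.2126, 0.7152, 0.0722]), 1e-6)  # Rec.709
    gain = curve(lum) / lum
    return hdr * gain[..., None]

# Hypothetical usage: merge a -2/0/+2 bracket, then bias with a gamma curve.
# out = tonemap_luminance(naive_hdr_merge(frames, [-2, 0, 2]), lambda x: x ** 0.6)
```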

Now, here’s how to do it right:

[diagram: explaining DR and HDR – wide HDR]

First, remember that everything is relative: there are fewer output zones than there are input zones. Golden rule: there should be no tonal overlaps. The input tonal band is always wider than the output band; even within the bands we must make sure there are no tonal overlaps. This way, the relative brightness transitions across a scene and between the subjects in the scene are preserved.
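If it helps to see this numerically: a zone allocation is just a mapping from input zones to output zones, and ‘no overlaps’ simply means the mapping must be strictly increasing – a brighter input zone can never come out darker than a dimmer one. A minimal sketch, with hypothetical allocations:

```python
# Hypothetical zone allocations (input zone -> output zone), for illustration.
bad_allocation  = {0: 0, 2: 3, 4: 7, 7: 4, 10: 10}    # the overlapping soup above
good_allocation = {0: 0, 2: 2, 4: 3.5, 7: 5.5, 10: 8} # compressed, but ordered

def has_overlaps(allocation):
    """True if any brighter input zone maps to an equal or darker output zone."""
    outs = [allocation[zone] for zone in sorted(allocation)]
    return any(later <= earlier for earlier, later in zip(outs, outs[1:]))

print(has_overlaps(bad_allocation))   # True  -> looks unnatural
print(has_overlaps(good_allocation))  # False -> compressed but natural
```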

Second, and perhaps more importantly, HDR gives us the ability to choose the bias of the tonal allocation: whether we have priority given to the shadow areas, or the highlights, is up to us; this can still be done without overlaps:

[diagram: explaining DR and HDR – low key HDR]
Shadow bias; highlights are compressed into fewer output zones, shadows are not

[diagram: explaining DR and HDR – high key HDR]
Highlight bias – the opposite of the above
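Both biases are just different strictly-increasing curves applied to normalised luminance. A simple gamma curve is one convenient choice – by no means the only one, and the exponents below are purely illustrative:

```python
import numpy as np

def allocate(lum, bias):
    """Monotonic tone allocation on normalised luminance in [0, 1].
    bias < 1 spends more output zones on the shadows (highlights compressed);
    bias > 1 does the opposite and protects the highlights."""
    return np.power(lum, bias)

zones = np.linspace(0.0, 1.0, 11)        # input zones 0..X, normalised
shadow_bias    = allocate(zones, 0.5)    # roughly the 'low key' diagram above
highlight_bias = allocate(zones, 2.0)    # roughly the 'high key' diagram above

# Both allocations are strictly increasing, so neither produces overlaps;
# they just spend the limited output zones on different ends of the scale.
print(np.round(shadow_bias, 2))
print(np.round(highlight_bias, 2))
```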

This method allows us to maintain the smoothness of the transitions, especially at the shadow and highlight borders – areas where our eyes are particularly sensitive, and where digital becomes binary: an area is either white and detail-less (i.e. no contrast, because it’s fully desaturated) or it isn’t. A good HDR starting point should appear very flat: i.e. little apparent output contrast between adjacent areas in the image – because this gives you the flexibility to put the contrast where you want it later on, i.e. the ability to do the above allocation by means of a curve or local adjustments like dodging and burning.
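One simple way of putting that contrast back globally – again a sketch, with illustrative numbers – is a strictly-increasing S-curve around a midtone pivot, which adds separation where you want it without introducing any overlaps:

```python
import numpy as np

def s_curve(lum, strength=4.0, pivot=0.5):
    """Gentle sigmoid around a pivot: adds midtone contrast to a flat HDR
    intermediate while staying strictly increasing, so no overlaps appear."""
    y = 1.0 / (1.0 + np.exp(-strength * (lum - pivot)))
    # Rescale so that 0 maps exactly to 0 and 1 maps exactly to 1.
    y0 = 1.0 / (1.0 + np.exp(strength * pivot))
    y1 = 1.0 / (1.0 + np.exp(-strength * (1.0 - pivot)))
    return (y - y0) / (y1 - y0)

flat = np.linspace(0.05, 0.95, 9)   # a flat, low-contrast starting point
print(np.round(s_curve(flat), 3))   # contrast lands where the pivot puts it
```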

[image: _8042033 copy]

This is the same image, also run through an HDR process, but I think you’ll agree that it looks far more natural: firstly, there are no tonal overlaps. Secondly, colour remains natural and accurate to the scene (assuming a calibrated monitor). Thirdly, if you look closely, you’ll see that there are no large clipped areas in the image – only small ones, which allow the viewer’s eyes to calibrate automatically and get a feel for the overall scene brightness. Finally, pay close attention to the deep shadow and extreme highlight transitions: they’re smooth and natural. But here’s the thing: if I hadn’t told you, would you have known it wasn’t a single exposure? In my mind, the benchmark of good processing – HDR or not – is that the first thing you should notice is the subject, not the post-capture adjustments. MT

__________________

Be inspired to take your photography further: Masterclass Chicago (27 Sep-2 Oct) and Masterclass Tokyo (9-14 Nov) now open for booking!

__________________

Ultraprints from this series are available on request here

__________________

Visit the Teaching Store to up your photographic game – including workshop and Photoshop Workflow videos and the customized Email School of Photography; or go mobile with the Photography Compendium for iPad. You can also get your gear from B&H and Amazon. Prices are the same as normal, however a small portion of your purchase value is referred back to me. Thanks!

Don’t forget to like us on Facebook and join the reader Flickr group!


Images and content copyright Ming Thein | mingthein.com 2012 onwards. All rights reserved

Comments

  1. Awesome article! I often use this technique (http://petapixel.com/2015/02/21/a-practical-guide-to-creating-superresolution-photos-with-photoshop/) when shooting handheld with my Coolpix A. It’s awkward to say the least, but it allows me to produce some very nice landscape prints without carrying a tripod, and it helps with moiré and chromatic aberration, which have caused me problems with this camera. I have wondered about combining this and an HDR technique, as I live in an area that can exceed the dynamic range of the sensor. Any thoughts on how that would be done? Maybe bursts at three different exposures?
    Obviously this isn’t the way you should do these sorts of shots, I know, but 10 ounces is literally about all I want to carry on many of my adventures, particularly when putting in dozens of miles over mountains in a day.

  2. Hi Ming,

    Is your HDR method covered in any of your tutorial videos?

  3. There’s that famous Ansel Adams quote, “There is nothing worse than a sharp image of a fuzzy concept.” And I think the use of automatic HDR programs (just like the use of automatic scene modes) exemplifies that saying … unless you’re after HDR barf colors, then I guess you’d be on-concept.

    When I think about it, it floors me that we’d let a computer program pick, pixel by pixel, which luminance value we’d want from a series of exposures. Talk about the blind leading the blind: does the computer even know what you’re trying to say or show with the photo?

    Using careful selections from amongst the various exposures in Photoshop, and then blending them in so that the picture reflects the idea in your head is really the only way to do it. Of course, to do that, you need an idea in the first place, which as I’m slowly discovering is by far the toughest thing in photography.

    • I don’t even think the computer knows what’s in the photo or has any concept of all of the other ideas and expectations associated with those subjects. And that is the problem not just with HDR but filters and auto-processing etc…

      As for the idea: 😉

  4. I’ve never been a fan of HDR, but this post of yours has given me reason to rethink my perhaps dinosaur-type attitude to the technique.

    • I think the same goes for a lot of people because of the expectation that HDR is that retina-searing tone mapping made so popular by a certain photographer – it doesn’t have to be, of course.

      • Ming, I have mixed feelings about this thought in general. While my *personal* tastes agree with yours – that HDR should be subtle and result in a natural image – this is purely my own opinion. If we look at photography as art, which I believe it is, then there is no right or wrong way to do anything. If you disagree with harsh processing and ‘retina-searing’ results, then that is fine, but I don’t think it’s fair to say it’s wrong. Similarly, one could say paintings should be photo-realistic, that abstract paintings of cubes and swirls are wrong, and that anyone who paints in that style should just stop.

        • I agree on art/photography being entirely subjective. So let me rephrase: ‘should be’ is perhaps too dictatorial – how about ‘it doesn’t have to be retina-searing’.

  5. Ming, good post. More here…

  6. So unlike the standard HDR method of starting with a high dynamic range, contrasty output and reducing global contrast, you start with a flat image containing all the data and manually add contrast only where you want.

    I’ve written a tool that works in the “normal” direction, which doesn’t need much intervention and doesn’t have much artifacting, but I can see the appeal of the full control your method gives you.

    On the other hand, they encourage different overall aesthetics: the “add local contrast” method will never get quite as strong a pop look to it, and the “reduce global contrast” method will not as easily get the isolation and emphasis on one subject.

    • I go the opposite direction because it most closely matches ‘normal’ workflow, which in turn results in consistent-looking images. You can get more pop, I just prefer not to have it.

  7. Martin Fritter says:

    This is so totally useful and clear that I just made a small donation.

  8. A bit off topic, but I remember reading that the HDR technique is not unique to the digital era; that it was implemented in the late ’30s or early ’40s to photograph atomic bomb testing.

  9. An interesting article, although on a small point, persistence of vision is no longer thought to be the mechanism whereby we perceive motion. There is much on the web explaining this.

  10. Interesting – in the black and white article, will you write about the transition from dark areas to highlight areas?
    I was thinking about it two days ago: I went to pick up the second Leica Q, and then walked around the city of Bologna taking some photos. I entered one of the oldest “bars” of the city, something quite curious with an inner “garden”.
    The game of light and shadow was so harsh that I had some trouble figuring out how to properly expose the scene, because I wanted to capture and see the writing on the walls but also the furniture on the tables in the shadows.
    An HDR could have helped, but it would also have looked less natural to my eyes.
    Waiting for your next article, Ming, and thank you for the repost – it’s good to have a refresher.

    • If the brightness range of a scene is so extreme that you have to squint to see everything, then I think you really have to make a choice about what to clip and what to keep. It isn’t the clipping itself that looks unnatural: it’s how abrupt the transition to overexposure/ underexposure is. This means some judicious dodging and burning around those transitions…

  11. In the latest LR version (6), the HDR merge function works like a charm. It’s perfectly natural and looks like a single exposure, while still allowing a large latitude for all kinds of local and general editing. With this in mind – plus raw shooting, and the necessity for any serious photographer to do some editing/processing work one way or another – let’s agree that the issue of the limited dynamic range of some sensors versus others becomes less important for anything other than candid street photography. 🙂

    • Indeed; even with candid street photography, it’s become easier than ever to maximise in-camera DR with, say, EVFs and exposure zebras – very, very few situations actually result in clipping if you can deploy the full 13-14 stops of dynamic range. It’s when you have to do recovery off a hasty, inaccurate exposure that things get messy.

      Good heads up on LR6; not using LR myself, I was not aware of that. I’m sure it must also be in PS somewhere…

      • Sadly, LR’s new HDR merge has yet to make it to PS. I’d encourage anyone with access to the former to try it, as in addition to erring on the side of natural it outputs a DNG file that you can then tweak further or open in Camera Raw if you prefer (as opposed to the baked-in approach of PS’s Merge to HDR). As with all automated HDR solutions, though, it’s lacking in the fine control of the manual approach Ming mentions below.

          • I guess it isn’t the same thing that was in PS’ Photomerge then.

            • No, it isn’t. I was not happy at all with that one. Even if it is “better” than Photomatix or Nik, it is still not “natural”. For one thing, I found the tone curves terrible… Well, with this LR6 implementation, what I get is a true RAW file (digital negative format, as Todd pointed out), but much larger in size. That is to say, all the info from the 3 (or more) files has been saved and made available for processing. The only difference between the sum of the files and each of them is that the dynamic range is larger or much larger (depending on the number of exposures I combine). In my workflow, there are only two “baked-in” things I get, by choice: the de-ghosting – particularly useful when shooting a dynamic (street-like) scene – and, second, the auto-align – very useful when shooting bracketed but handheld (yes, I am losing a couple of pixels of resolution…). For the rest, it is all up to me and my “idea” (if any at all 🙂 ). And even these two I choose not to take in if I shoot a static landscape from a tripod, for instance. There is really nothing that I cannot fully access in terms of exposure, tones, colors, noise and so on, you name it. Again, it truly is a raw (DNG) large-size file, with (very) “high dynamic range”. 🙂

            • Ah, interesting. So theoretically you could use LR6 to generate the ‘flat’ DNG, and run that through the normal ACR/PS process. That may actually be faster than erasing through layers.

              • Yes, you can do everything you usually do with any raw or DNG file. I’ve been reading today on an adobe users forum and, to my unpleasant surprise, some people consider this LR implementation NOT being HDR…For the very same reason I love it (not doing any tone mapping or other retina-hurting manipulations), some actually hate it. 🙂

          • I’m not sure, Todd, what you’re referring to when you mention “lacking in the fine control of the manual approach” – can you please explain? I thought that by unticking the two boxes “auto align” and “auto tone”, it’s all still manual, isn’t it?

  12. Ian Moore says:

    Very interesting… I didn’t realise this before about overlapping zones, and it explains an awful lot about why people like my HDR images when I (occasionally) do them, but then say “Nice, but it looks a bit weird… un-natural somehow…”
    Will you be following up with an explanation of HOW you achieved the last image, i.e. the settings in Photomatix (or whichever other software you used)? There are multiple options, and I’m curious whether this was achieved with a standard HDR compilation of several images that was then adjusted, or whether you have worked out a better STARTING compilation, with minor adjustments afterwards.
    The last image does look very ‘natural’ (if a little dull! It may just be the large area of shadow in the foreground… but beautiful cloud ‘treatment’), but as you say, it’s always a very personal, subjective assessment.

    • I just stack the exposures in PS with the lightest on top, and erase through with a feathered, low-opacity brush to the layers below until they look natural/feasible. I would also highly recommend keeping the whole image in view whilst doing the erasing, to avoid problems of relativity between different areas of the frame…
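      For the programmatically inclined, here’s a rough sketch of the same idea – not my actual workflow, which is done by hand in PS, but an illustrative numpy/scipy analogue in which the feathered, low-opacity brush becomes a blurred, partial-strength mask (the thresholds below are assumptions for illustration):

```python
import numpy as np
from scipy.ndimage import gaussian_filter  # used here for the 'feathering'

def erase_through(stack, mask_strength=0.7, feather_sigma=25.0):
    """Rough analogue of the stack-and-erase workflow described above:
    exposures stacked lightest on top, 'erased through' to the darker
    layers wherever the lighter layer approaches clipping.
    stack: list of float32 (H, W, 3) images, sorted darkest -> lightest."""
    result = stack[0]                                  # darkest layer at the bottom
    for lighter in stack[1:]:
        lum = lighter.mean(axis=2)
        # Reveal the darker layers where the lighter frame is nearly blown;
        # 0.8/0.2 are illustrative thresholds, not calibrated values.
        mask = np.clip((lum - 0.8) / 0.2, 0.0, 1.0) * mask_strength
        mask = gaussian_filter(mask, feather_sigma)    # feathered edges
        result = lighter * (1 - mask[..., None]) + result * mask[..., None]
    return result
```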

  13. I think this explains where I went wrong in most past HDR attempt:

    “A good HDR starting point should appear very flat: i.e. little apparent output contrast between adjacent areas in the image – because this gives you the flexibility to put the contrast where you want it later on…”

    I was always trying to adjust the settings of various HDR tool to get a final output, rather than an intermediate output for further processing.

    • I think it only makes sense to look at it as an intermediate step: since you want the final output to look non-HDR, you would logically have to do the same steps as for a normal image to get you there. And this can only mean matching starting points then manipulating the final result…

  14. This is a great article. Thank you for sharing.

  15. I look forward to reading how this relates to a B&W epiphany!

    • A little bit of it is in here, a bit more here, and practical examples here. 🙂

      • Thanks for these great links, Ming! Some great info there, as well as the HDR stuff above. And that first tunnel shot in the Underground Workers in Mono set is one of my favorites of yours. Love the depth and tones in that one. Can you sell prints of that one? It would look incredible in the K7 Special Edition Piezo inks, I think. 🙂
