Sunrise over Lake Michigan
Continuing this little series on tonality, mood and monochrome, I’d like to explain a little about the idea of native tonal response: it’s something I’ve frequently referred to in reviews, but never fully explained. Unfortunately, there are a very large number of variables, so bear with me.
Tree exhalation
We first need to take a step backwards to understand how digital sensors work. Light excites a photosensitive layer into releasing one or more electrons; these are held until an electrical ‘gate’ (the transistor) is opened, allowing a current to flow. This current is measured and then translated into a value. That particular location has a physical x-y address, and so do its neighbours; each adjacent physical address has a different color filter over the top of it, to allow measurement of the intensity of light at a different wavelength. With the exception of Foveon sensors, none of these devices can accurately measure both the wavelength and intensity of light at a single physical location; interpolation from adjacent locations is required to come up with an approximate value for the color at that location. Our displays use the R-G-B primary colours in various combinations to make up every intermediate color (or close to it); the sensors therefore also take readings and output R-G-B values for a given physical location. To complicate things further, this interpolation may be done on computer or in camera, and by different software. And then at the output end, we could be viewing the final image in print, on screen, at different sizes, and at different levels of information compression.
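The interpolation step can be sketched in a few lines. This is a naive bilinear demosaic of a hypothetical RGGB mosaic, for illustration only – real converters use far more sophisticated, edge-aware algorithms:

```python
import numpy as np

def demosaic_bilinear(mosaic):
    """Crude bilinear demosaic of an RGGB Bayer mosaic (a sketch of the
    interpolation described above, not any camera's actual algorithm).
    mosaic: 2-D array of raw photosite values; returns an H x W x 3 image."""
    h, w = mosaic.shape
    # Sum of the 8 surrounding values (3x3 neighbourhood minus the centre)
    neighbour_sum = lambda a: sum(
        np.pad(a, 1)[i:i + h, j:j + w]
        for i in range(3) for j in range(3)) - a

    # Which colour filter sits over each photosite (RGGB tiling)
    r_mask = np.zeros((h, w), bool); r_mask[0::2, 0::2] = True
    b_mask = np.zeros((h, w), bool); b_mask[1::2, 1::2] = True
    g_mask = ~(r_mask | b_mask)

    rgb = np.zeros((h, w, 3))
    for c, mask in enumerate((r_mask, g_mask, b_mask)):
        known = np.where(mask, mosaic, 0.0)
        count = neighbour_sum(mask.astype(float))
        # Keep measured values; fill the gaps from same-colour neighbours
        filled = neighbour_sum(known) / np.maximum(count, 1.0)
        rgb[..., c] = np.where(mask, mosaic, filled)
    return rgb
```

Feeding it a uniformly grey mosaic returns a uniformly grey image; feeding it a real mosaic shows why edges and fine color detail are exactly where interpolation artefacts appear.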
So what actually affects tonal response, and more importantly, how much control over the process do we really have?
Evening ruins
The idea of tonal response can be distilled down to the following concept: for a given absolute amount of light hitting the sensor, what value does the sensor record? It is an input-output map, nothing more. But there are so many things affecting that input-output map that no single element really dominates. Firstly, there’s the sensor type: CCD, or CMOS? CCDs tend to be nonlinear in response: not much happens down the bottom, as though the activation threshold is high and low-light read noise (i.e. erroneous/spurious signal) is prevalent, but tonal response appears more natural towards higher luminance levels. CMOS sensors are more linear in their tonal response, which is better, but this also means visually more abrupt clipping at highlight and shadow boundaries (which is worse). And then you’ve got the color pattern itself – multilayer Foveons respond differently to Bayer patterns, and then there are three-color (RGB) and four-color (RGBE) Bayer patterns, and other pattern layouts – X-Trans, for instance. And we haven’t even talked about the read or on-chip noise reduction circuitry between photosite and raw signal; on-camera image processing to raw or JPEG; your choice of raw converter; and postprocessing.
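The input-output map idea is easy to make concrete. Here is a sketch of the two behaviours described – the nonlinear curve is a purely hypothetical toe-and-shoulder stand-in, not a measured CCD response:

```python
def cmos_response(light):
    """Idealised CMOS photosite: linear until full-well saturation, then a
    hard clip - the abrupt highlight boundary described above."""
    return min(max(light, 0.0), 1.0)

def s_curve_response(light, gamma=0.6):
    """Hypothetical nonlinear map with a soft toe and gentle shoulder,
    standing in for the CCD/film-like behaviour: a gamma lift followed by
    a smoothstep, so extremes roll off rather than clip abruptly."""
    x = min(max(light, 0.0), 1.0) ** gamma
    return x * x * (3.0 - 2.0 * x)
```

Plot both over 0..1 and the difference at the clipping boundaries is obvious; the real curves differ per sensor, per processing engine, and per raw converter.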
The call
It is actually quite common for the same base sensor to have very different effective tonal response depending on the camera it is put into: this is the effect of the processing engine, plus how your choice of raw software or postprocessing affects the final output. Even if you zero out everything in a universal raw converter like ACR, you’ll probably find two cameras look very different under the same circumstances – this is partially down to whatever defaults ACR has applied, at the behest of the manufacturer or of Adobe themselves, or it may be down to the in-camera processing engines. And I’m sure we’re all very aware that not all processing engines are equal. ACR, for instance, has become very proficient indeed with anything Bayer; it was useless on Fuji X-Trans files before version 8 and doesn’t even bother with Foveon.
Other third party software may do a better job with the actual interpolation from raw data to a viewable image, but they do not offer the same workflow as ACR: for instance, it is very difficult to apply local or other adjustments without outputting an image first, which means that those adjustments happen on data that has already been worked on; it isn’t as ‘close to the source’ as ACR might be. This may not be visible under most circumstances, but for processing that’s close to the limits – such as extreme contrast changes, borderline out of gamut colors, or very fine tonal transitions in deep shadows or bright highlights – that slight loss of data may well become visible and turn into something make or break: think posterization.
Black Island
A couple of examples of cameras with different ‘tuning’ but the same, or very similar, sensors would be the Nikon Coolpix A and Ricoh GR, the Nikon D800E and D810, and the Phase One IQ250 and Pentax 645Z; I’m sure there are plenty of other examples, but these are the ones I’m most familiar with. The IQ250 has a very different color palette to the 645Z, but this could easily be fixed with a little profiling; interestingly, even though both cameras lack an AA filter, they produce results of very different sharpness – my theory is that there’s a bit of unsharp masking or something similar going on before the raw files are written. There isn’t any more resolution, because both sensors have the same spatial frequency, and we were using the same lens via adaptors on both. The Pentax will take a lot more sharpening before hitting the limit; the Phase One, less – and sharpened optimally, they both look identical.
Moon and bridge
The D800E and D810 are slightly trickier: it isn’t clear if they’re using the same sensor or not, but it’s highly unlikely the D810 uses a new design, as we’re not really seeing any improvements in color accuracy or noise performance. What is telling is that the image processor moved on a generation between the two cameras, and suddenly we’ve gone from a very linear sensor in the D800E to a very nonlinear one in the D810. The D800E favours underexposure to maximise dynamic range: you get perhaps a stop of recoverability at the highlight end, but seemingly limitless shadows that not only retain color information and relative tonal separation/contrast when brightened, but also show very low noise. The D810’s shadows look muddy and flat when treated the same way, but what you think is clipped at the highlight end isn’t – in fact, you could probably have pushed another stop or stop and a half. (We are of course testing with the same raw converter, settings and lenses here.)
In practical terms, this means that the D800E looks better in low light or high contrast, because you can pull out hints of detail here and there with little noise penalty. The D810 is a bright-light camera – highlights roll off very naturally, but maintain relative tonal separation well into the higher zones where the D800E would just have clipped. In practice, whilst I never really liked the D800E for monochrome work – it was manageable because of its enormous starting dynamic range – I find the D810 a very natural camera for B&W, simply because the highlight rolloff is so much more gradual. I don’t need to do anywhere near as much dodging and burning, and a single curve will suffice where I’d previously had to use two or more. There are reasons and jobs that justify keeping both.
Departure time
I’ve left the most stark example for last: the Coolpix A and GR. Both cameras definitely use the same 16MP APS-C Sony sensor, but one has been tuned for color and one for black and white, and this obviously shows. Once again, to understand why, we need to go back to human psychology and physiology. Different colours appear to our eyes to have different relative brightness even if they are of the same absolute luminance level; this is mainly to do with the way our retinas respond to different wavelengths. The eye’s response naturally attenuates towards the ends of the visible spectrum – something digital cameras do not suffer from; in fact, it’s just as easy for them to pick up UV and IR pollution as actual intentional signal. This is one of the reasons that cameras have such strong UV and IR cut filters over the sensors; otherwise, the output would not look natural at all. You’ll actually find that a lot of the color improvement in digital cameras has come from these cut filters increasing both in strength and in their ability to approximate the response of our eyes.
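This uneven spectral response is why standard luminosity weights look so lopsided. A comparison sketch, using the Rec.709 luma weights as a stock approximation of the eye – nothing here is specific to either camera:

```python
def mono_luminosity(r, g, b):
    """Perceptual mono conversion using the standard Rec.709 luma weights,
    which approximate the eye's uneven spectral sensitivity: green counts
    roughly ten times as much as blue."""
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def mono_equal(r, g, b):
    """Equal per-channel weighting, as you would get from a sensor whose
    channels were all amplified identically."""
    return (r + g + b) / 3.0
```

A saturated green and a saturated blue of identical average intensity come out wildly different under the perceptual weights – which is the point: the conversion has to be told about the eye, because the sensor alone knows nothing of it.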
Lonely chef
Cut filters aside, this means that a camera tuned for color is going to have uneven amplification for each channel (this happens somewhere between the raw data and writing a raw file, in the camera’s on-board converter) in order to deliver a natural looking image. This can distort the relative luminance differences between areas and produce a file that’s very flat and lacking in local contrast; i.e. poor for B&W conversion. However, a camera that has equal amplification across all channels is going to be great for B&W but produce very strange and possibly undersaturated-looking color – and that’s precisely how the GR’s color images look without profiling. For cameras with larger photosites and more dynamic range, such a choice isn’t necessary: it can be profiled out because you have enough information to work with.
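In miniature, both the ‘uneven amplification’ and the ‘profiled out’ correction reduce to per-channel gains. A hypothetical single-grey-patch sketch – far cruder than real profiling, but it shows the mechanism:

```python
def profile_gains(grey_patch_rgb):
    """Toy profiling step: compute per-channel gains that map a shot of a
    neutral grey target back to neutral, normalised to green. (Real
    profiling uses a full colour target; this is the minimal idea only.)"""
    r, g, b = grey_patch_rgb
    return (g / r, 1.0, g / b)

def apply_gains(rgb, gains):
    """Apply per-channel amplification - the same operation a camera's
    colour tuning performs somewhere between photosite and raw file."""
    return tuple(c * k for c, k in zip(rgb, gains))

# A grey patch rendered with a hypothetical colour cast, then corrected
patch = (0.5, 0.4, 0.45)
corrected = apply_gains(patch, profile_gains(patch))
```

Run it and the cast patch comes back neutral; run the inverse and you have turned a colour-tuned response into an equal-gain one, which is the B&W-friendly case the article describes.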
Texture medley
A true monochrome sensor will get around these limitations – but again, there’s the question of the output map, and you cannot get color back if you change your mind for one shot. The upside is that you can of course optimise that output map for the specific kind of nonlinearity that looks good for monochrome images.
How much of the difference can we make up in postprocessing? To a large extent, that depends on the amount of information your sensor is capturing. The more dynamic range, the easier it is to reallocate the tonal response the way you want it. It’s one of the reasons why you never hear about MF sensors being poor at one or the other – it’s just not possible to make a poor B&W file if you’ve got nearly fifteen stops of dynamic range. It might need a little bit more work to introduce a similar nonlinearity to match our human visual response, but the data is there. If the data isn’t there, no amount of tonal stretching through curves and dodging and burning is going to avoid posterisation or put in subtle color gradation where there is no information to begin with.
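The posterisation point can be demonstrated numerically: quantise a smooth shadow gradient at two hypothetical bit depths, lift it aggressively, and count how many distinct output levels survive. A toy model, not a simulation of any real sensor:

```python
def quantise(x, bits):
    """Round a 0..1 value to the nearest level representable at this depth."""
    levels = (1 << bits) - 1
    return round(x * levels) / levels

def lift_shadows(x, gain=4.0):
    """Crude stand-in for an aggressive curves adjustment: multiply, clip."""
    return min(x * gain, 1.0)

# A smooth deep-shadow gradient (the bottom 1/16th of the range), 'captured'
# at two hypothetical bit depths and then multiplied 4x in post.
gradient = [i / 1000 * 0.0625 for i in range(1001)]
coarse = {lift_shadows(quantise(v, 4)) for v in gradient}   # few-bit shadows
fine = {lift_shadows(quantise(v, 12)) for v in gradient}    # deep raw file
```

The coarsely quantised capture collapses to two bands after the stretch – visible posterisation – while the 12-bit one keeps hundreds of levels. The data has to be there before the curves are applied.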
Father and son at the end of the day
Color response is fortunately much simpler to fix: it can be profiled out, ideally for different ambient color temperatures (again due to nonlinearities in amplification or differences in the Bayer filter itself). Bottom line: you can probably get a 95% result with a non-ideal camera, but it will take you many, many times longer than a 100% result with a fit-for-purpose one. My advice is to think very carefully about the types of subjects you shoot, and the light conditions – and select equipment accordingly. The images in this article have been specifically chosen to demonstrate this: I was able to easily achieve (with only a minute or two’s worth of PS work) the color or tonality (and more importantly, the color and tonal separation) I wanted. MT
Color profiling for cameras is covered in Photoshop Workflow II, and tonality and style are explored in Outstanding Images 5: Processing for Style. Both are available here from the Teaching Store.
__________________
Take your photography to the next level: 2015 Masterclasses now open for booking in Prague (9-14 Mar 2015) and Lucerne (17-22 Mar 2015)
__________________
Limited edition Ultraprints of these images and others are available from mingthein.gallery
__________________
Visit the Teaching Store to up your photographic game – including workshop and Photoshop Workflow videos and the customized Email School of Photography; or go mobile with the Photography Compendium for iPad. You can also get your gear from B&H and Amazon. Prices are the same as normal, however a small portion of your purchase value is referred back to me. Thanks!
Don’t forget to like us on Facebook and join the reader Flickr group!
Images and content copyright Ming Thein | mingthein.com 2012 onwards. All rights reserved
Ming, in PS workflow II you explain how to create a color profile for a camera using the X-Rite Colorchecker. I noticed that X-Rite offers free software for creating color profiles for ACR. The user imports a photo of the Colorchecker and the program creates a profile automatically. Have you experimented with this software and do you find it useful? I’m wondering how the results compare with your method. In theory it should produce accurate color as long as the monitor is properly calibrated using the method explained in PS workflow II.
It produces accurate but not necessarily pleasing results: as far as I can tell, it does not take into account the nonlinear color response of our eyes to luminance. As the light level falls, saturation reduces and sensitivity to blues decreases (i.e. reds relatively increase) – cameras however are linear. The only way to accurately take this into account is by using the final viewing device on a profiled monitor, i.e. your eyes.
Thanks for the reply. You are probably right, the luminosity will be somewhat off. I’m expecting to take delivery of the X-Rite Colorchecker some time next week, then I will see for myself. At the very least I can use the automatic profile as a starting point. If it’s close enough to perceived colors, then only minor HSL adjustments are needed, which definitely helps. I will let you know how it works out.
Try printing from it, too.
I received the ColorChecker today and tried the automated profile creation. You were absolutely right, the results were not the same as perceived color. I started tweaking the profile by adjusting the luminance of each color, but quickly noticed that I also had to tweak saturation. The only thing I didn’t need to touch was hue. So basically what I was getting from the automated profile was accurate hue, but luminance and saturation were off. I haven’t tried printing yet, but will in the coming weeks.
I think the newer and higher end models may compensate for this (I remember seeing ‘absolute’ and ‘perceptual’ options), but in the end I always land up calibrating by eye with the Apple Calibration Utility and a couple of ‘known’ images and color swatches open in the background…
The island in your ‘Black Island’ photo is actually called Ruby Island. If it makes you feel better, Michael Kenna misspelt a place name on his website when displaying a NZ photo. (I’m sure the title wasn’t meant to be literal, though.)
Thanks Mike, I’m aware – the title wasn’t meant to be literal. 🙂
The only dedicated Monochrom (MM) camera I have ever, briefly, used is the Leica. I liked its tonal response and I’m mulling over getting either that or their supposed future CMOS model based on the M240. Based on your comments, the relatively narrow dynamic range of the MM ought to be a problem, even though its B&W dedication ought to be a boon. Do you think, then, that the future CMOS-based MM will be even more effective for B&W?
A second, related, question: at what level of dynamic range is there enough compensation to equal or even better the tonal response of a dedicated B&W narrow dynamic range sensor, such as the Leica MM? Is the Sony A7R or the Nikon D810 actually going to be better for B&W by virtue of their much superior DR? Will a new/future CMOS based Monochrom be better still?
I think a D810 is still a better B&W tool because you have the option of the channel mixer afterwards, which you don’t with the MM. And the double pixel count makes up for resolution, as does the newer pixel architecture. Of course an achromatic D810 would be better still…
MT, after reading and pondering this for my D810, I had to use your link to B&H to put in a lens order today. Question: how much does the Highlight Protect (*) metering mode on the D810 alter the variables in the highlight rolloff? And is there any chance of losing shadow tonal range if the D810 meter mode is set to Highlight Protect?
Thanks. The * metering mode just adds a bit of exposure compensation, assuming you are placing the metering spot on a highlight instead of a midtone. It doesn’t change the sensor’s response.
Now… can you explain that whole process for shooting video? hahahaha
Sure, if you’ve got several days. That said, a surprising amount can be back-applied because PS can now be used to edit video including most of the tonal adjustment tools…
Would it not be possible, at extra cost at least, for a manufacturer to give one a choice of tonal responses to suit various needs, i.e. monochrome, colour, and low light vs bright sunlight, using different in-camera processing? It would be nice to have a setting where my Nikons would behave similarly to the Ricoh GR etc. It would be worth the extra spend, to me at least. Is this technically possible, or have I missed a point?
I think that’s the theoretical function of picture controls. The problem is they don’t really use the full latitude of the file – e.g. they only apply to JPEGs.
Not following. Are you implying that Nikon files are lossy? They are larger than the GR files, and much larger than RX100 files. It would seem that if profiling the GR’s color channels can produce pleasing natural colors – and you have shown this to be the case – there should be an inverse profile I can use to produce better monochrome. My own attempts tend to produce stark results that are so unsavory I avoid monochrome altogether.
No, neither file is lossy. Every sensor requires some signal preprocessing before being written to a raw file. Each manufacturer does this differently.
But I actually LIKE the GR OOC colour signature. De gustibus non est disputandum, I guess… 🙂
It is unique, I’ll give you that.
Thanks for shedding light on something that has been mystifying me since I got my hands on a D810 from a friend! I was using a raw converter that doesn’t really understand the non-linearity, and the low-level colours were muddy and terribly noisy! Compared to an earlier D810E I had tried, the D810 was a total disappointment!
Even with Capture NX-D I still get this problem from time to time. Now it’s perfectly clear why, and I can help my friend correct this.
Thanks once again!
Sorry: I meant D800E, obviously.
Puzzled me at the start too, but now I’m getting better results from the 810 than the 800E – more natural looking highlights, at any rate…
Ming, your essay reminds me of writing a thesis – like doing my questionnaires and concluding my research!! You are so knowledgeable, I am surprised Sony, Fuji or Nikon haven’t offered you an office somewhere on their campus! Wonderful article.
Thanks. There are ~1,000 articles on this site, and at an average of 2,000 words each…that’s a lot of dissertations. Camera makers are interested in selling cameras, not knowledge. More knowledge makes it obvious where they are cutting corners…
Hi Ming,
If I understand correctly, you are saying that with the D810’s sensor and high contrast scenes, ETTR may yield a cleaner result than with other Nikons like the D4S (where highlight suppression in exposure combined with shadow retrieval in PP is cleaner)?
If true this would make it more difficult to shoot (switching exposure method) with both cameras at the same time.
The D810’s shadows look muddy and flat when treated the same way, but what you think is clipped at the highlight end isn’t – in fact, you could probably have pushed another stop or stop and a half. (We are of course testing with the same raw converter, settings and lenses here.)
Yes and no; it’s not quite that simple because ETTR+ for the D810 may require more light than the D4S, so we’re effectively going to have to boost ISO to get the same shutter speed – and we’re back to square one, or slightly worse. I do find I’ve got to be very careful when shooting the 810 and 800E side by side.
Hi Ming,
I am not sure I understand this. ETTR is supposed to be beneficial with any kind of sensor, isn’t it?
In the article above you write: “The D800E favours underexposure to maximise dynamic range …”, but I suspect the dynamic range increases by doing ETTR, i.e. usually overexposing. Or did I miss something?
What exactly has to be done then to get the most out of the D810’s sensor?
And is it covered in PS I or in PS II (your hints in this Comments section seem to be contradictory)?
Any clarification is highly appreciated!
Firstly, ETTR is beneficial from an information-gathering standpoint, but not necessarily a tonal pleasingness one. Secondly, perhaps it’s me not explaining clearly: the D800E’s histogram more accurately reflects the contents of the raw file than the D810; the D810 always seems to have a bit more highlight headroom than you think (even if the histogram looks nearly clipped). I add another stop or half stop over the point at which the histogram appears to just clip. The problem is the histograms are reflective of JPEG not raw/recoverable data…
You can use either PSI or PSII but I find the LAB-based workflow in PSII seems to be better suited to retaining the highlight subtlety.
Thank you, Ming, for clearing this up.
So, the way to go is to do ETTR to get the most information possible and then do a tone remapping to get pleasing colors.
BTW: for capture I always use the Flat Picture Profile in order to get in-camera, JPEG-based histograms that resemble raw histograms as closely as possible (although still not closely enough, as you have explained above). In Lightroom the images really look awful then, so I use a different profile for post-processing. Hopefully PS II shows how to create profiles for getting pleasing colors. I suspect, though, that conversions from RGB to LAB and back cannot be done using Lightroom/ACR but only in PS.
Bingo. I’ve tried the flat picture profile also, but then find it hard to judge channel saturation…
interesting thoughts.
related point: i don’t know if you have perceived this the same way as i have, but to my eye the D810 metering tends toward overexposure (avoiding excessive shadow), which coincides with your observation that it is stronger at highlight recovery/weaker at shadow recovery vs the D800E. having never used a D800E and coming from the D700, i am just luxuriating in the great res and overall latitude.
For the most part yes, but the meter appears to be a bit less ‘stable’ also – it tends to fluctuate sometimes to underexposure also.
i can see what you mean. compared to my old d700 the main thing (besides exposing higher in general) is that the meter moves a large amount depending on the focus point (if set for that mode, which i generally use). the larger situation being that nikon seems to have made a very purposeful design choice to have the camera both expose higher and have more latitude in the highlights. personally i have adjusted and can really use the system. I always try to avoid excessive “shadow lifting” (even when there is good info down there it always looks odd to me, on any camera) and prefer full black/blocked shadows in an image to blown highlights. i am still often surprised when i pull down “blown” highlights in the histo and find they are not really blown and that the raw still has plenty of info in there. definitely requires more thought than just “put the meter in the middle and shoot”. in high contrast scenes this rarely gives the result i want.
I honestly thought the D700/D3 generation had the most reliable meters – they were almost infallible. Add the dynamic range of the D810 and you really wouldn’t have to second guess the camera or go manual in critical situations (and waste time twiddling dials).
i think i pretty much agree. i’ve definitely been doing some second guessing. the end results are ultimately better but if i shot more “decisive moment” type shots with no second chances it might be an issue.
I’m not sure if this question belongs here, but I’ll give it a shot.
If I shoot a subject with very poor contrast, will having a camera with lots of tonal range (in the DxO sense) help when I try to wring a decent result from the raw file in post process?
A good question. It probably won’t make that much difference, actually – it’s within the camera’s capture ability anyway if contrast is low.
Thanks – yeah, I guess you’re right. What I’m probably looking for should be something like tonal resolution. 14-bit raw files (as opposed to 12)? Maybe.
The situation giving rise to the question is underwater photography, in relatively clear waters but a significant distance between myself and the subject. Very low contrast.
Definitely 14 bit, and fat pixels for better signal to noise ratio.
Ming,
I was very surprised to see you declare that ACR is “useless for Fuji X-Trans files”, when you have a segment in your latest workflow video about achieving good results with X-Trans in ACR and Photoshop. I find it to work very well with my Fuji cameras, and especially with your tips. Could you please explain?
I should have been clearer: earlier versions before ACR 8. I’ve amended this mistake (and obviously, if you see what I can get out of the X-T1, there isn’t a problem 🙂)
Between the GR and the A, which together I’ve taken well over 2000 frames with, I would say the Nikon is producing slightly better DR. But it could be that I am equating the GR’s less reliable metering to a difference in DR that isn’t really there. Regardless, I had more GR files that either have clipped highlights or shadows that can’t be lifted. I shoot a lot in high alpine and harsh light, so DR is very important. Not saying it’s perfect – I sure would like a more “reactive” camera – but for me it works better than anything I’ve tried.
I find the GR tends to be very conservative in the highlights. If you use the histogram, you’ll find there’s quite a bit more DR than you expect.
Also if you shoot in continuous mode your best ISO is 400… I am not sure why it does that…
No idea either. Nor have I been able to find a way around it.
Ming, it’s great that you elaborate on this topic because too often are we confronted with naïveté concerning color and tonal response. Even seemingly “educated” people bitch about a camera’s colors without even realizing what camera profile they were using in ACR/LR.
Two questions came up while reading your article:
1. Where does your conclusion about the different tonal responses between D800E and D810 stem from? ACR? Nikon Capture NX/-D? Or another converter like Capture One? Because I do wonder whether the difference you mention is really camera inherent or comes merely from different processing in a RAW converter.
2. I agree that bending color response in a desired way is possible but I would be extremely careful about how possible it is for mere mortals outside of Adobe’s or Phase One’s labs.
Like many others I always loved Olympus’ colors and I was very disappointed when in the beginning ACR/LR only offered the Adobe Standard profile (which I hate) for the E-M5. I tried and I tried very hard to get colors close to Olympus’ own out of ACR. Until I went into a discussion with Adobe’s Eric Chan (who is responsible for their camera profiles) and he more or less hinted that he has tools for profile creation that we others could not lay our hands on.
And it’s easy to understand what the problem is: tools like Adobe’s profile generator can create a matrix but are otherwise linear – you cannot alter the response for a given color over the course from low luminosity to high luminosity … i.e. curves! The result will always be that getting one color right will make another wrong. Or tweaking it at a high luminosity will mess up the response for that color at low luminosity.
Therefore I think it’s wishful thinking that one camera could be profiled to look like another.
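The linearity point above can be shown in a few lines: a profile matrix applies the same weighted sums at every brightness, so scaling the input simply scales the output, and the matrix cannot apply a different correction in the shadows than in the highlights (matrix values below are made up for illustration):

```python
def apply_profile_matrix(rgb, m):
    """A camera-profile colour matrix is one fixed linear map: each output
    channel is a weighted sum of the input channels, with the same weights
    at every luminance level."""
    r, g, b = rgb
    return tuple(m[i][0] * r + m[i][1] * g + m[i][2] * b for i in range(3))

# Hypothetical warming matrix, for illustration only
M = [[1.10, -0.05, -0.05],
     [-0.02, 1.04, -0.02],
     [-0.05, -0.05, 1.10]]

bright = apply_profile_matrix((0.8, 0.4, 0.2), M)
dim = apply_profile_matrix((0.08, 0.04, 0.02), M)  # same colour, 10x darker
# Linearity means 'dim' is exactly 'bright' scaled by 0.1 - no way to give
# shadows a different colour treatment with the matrix alone.
```

Luminance-dependent colour behaviour needs something on top of the matrix – curves or HSL-style tools – which is exactly the limitation being discussed.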
Thanks.
1. ACR defaults + own profile for optimum information recovery, so they should be fairly consistent/neutral camera to camera.
2. It’s actually very easy, and covered in PS Workflow II. I use the HSL tool rather than curves.
It’s definitely NOT wishful thinking because I can get all of my cameras to give me the tonal response I want, and consistently – there’s no way I can tell a client ‘sorry, the colour is wrong because I used a different camera’!
Ming,
Thank you for clarifying. In my observation it is possible to get “accurate” color for most any camera. What I meant about being rather hard/wishful is getting a camera’s native factory color response (as achieved when shooting JPEG or using the camera manufacturer’s RAW converter) with the tools we have.
This color rendition may be considered as “pleasing”, can be pretty twisted and actually very far from accurate. Like my example with trying to get Olympus’ factory colors without having the right tools.
But I do admit that you may as well be much more skilled in achieving a certain color response. A skill I was never able to acquire. 😦
You may also need a wider gamut monitor – it’s almost impossible if you can’t accurately see what you’re doing 🙂
Any chance you could put some comparison shots up to help explain?
I’d like to have an Olympus color profile for my Sony A7MKII raw files. Olympus colors ROCK! Thanks for a great write up Ming. It’s pretty intense, I think it’s giving me a tick…
It can be done…
Okay, I’ll bite 🙂 which MT series shall I subscribe to learn how it can be done please? Thank you!
It’s in consistency of processing and style. I’d say ideally Ep4/5 and PS 1, though if you have some PS experience you could just do Ep4/5.
One question, Ming: if you compare two similar cameras, say a pair of Ricoh GRs, how much difference is there from sample to sample? Can you use the same profile for both or do you need to calibrate them individually?
A little, but not so much that I’d do separate profiles.
This is gonna take a while to wrap my head around completely. But well written as usual. Time to profile the GR now!
Thanks Ming, this is a subject that is new to me and explains many things. Your ability to articulate a technically difficult subject is second to none. Would love to hear more on this subject.
Thanks. I cover this in a lot more detail in PS Workflow II and The Fundamentals.
Thanks Ming! A thought provoking article. Now time for me to calibrate the Ricoh GR!
Thanks Ed!