An increasingly common phrase among photographers and gear collectors is “it’s a good copy” or “it’s a bad copy”: today’s article explores what this actually means, and how relevant it is in practical terms.
First, a little background: for the vast majority of my recent lenses, I’ve owned or tried more than one sample. In some cases, quite a number of samples. Not all of them perform equally well; the better samples do, but there have been some enormous variations between good and bad, too. The reason for this goes back to the manufacturing process: anything that is made or assembled from components has tolerances; the dimensions of an object are not 10x10x10mm, they’re really 10+/-0.1 x 10+/-0.1 x 10+/-0.1mm. The magnitude of the +/- portion – the uncertainty, or the tolerance – depends on the manufacturing process itself and the quality control procedures employed. The tighter the tolerances, the more expensive and time-consuming the process becomes: remember, measurement itself is not a simple thing; your tools need to be both sufficiently precise and accurate to begin with, otherwise they simply cannot serve as a useful reference point.
Bringing things back to the price of eggs, this means your lens element might vary from 9.9×9.9×9.9mm to 10.1×10.1×10.1mm, or anything in between. You can see how this might be a problem with precision optics – fortunately the tolerances are of course a lot smaller; +/- 0.1mm was just an example. But the same thing applies not just to the shape and dimensions of the components, but to the assembly and alignment of those components, too – maybe the individual elements need to be centred to within 0.05mm to be ‘acceptable’; you might have a whole bunch that are each out by 0.05mm, which might create very bad astigmatism – but still be ‘within tolerances’; or you might have every element dead centre; or you might have some odd combination of skewed elements that cancels out decentering, but isn’t that sharp. Needless to say, the more elements and components inside a system, the greater the potential for something to go very wrong*. The combination of all of these tolerances coming into play is what makes the difference between a lens reaching its theoretical maximum resolving power and being just a little bit soft.
*It’s for this reason I’m against the use of adapted lenses: the same tolerances of thickness/ size/ planarity/ centring etc. apply to the mount surfaces, too, regardless of who makes them. Introducing two more mounting surfaces into the equation might or might not create undesirable effects, but it will certainly increase the chances of them happening – and given the resolution of today’s cameras, they will be more obvious and easily detectable, too.
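The way a stack of independent centring errors combines can be sketched with a quick Monte Carlo simulation. All the numbers here are illustrative, not from any real lens, and the model assumes element errors simply add – a pessimistic simplification – but it shows why most copies land somewhere in the middle while the occasional one is well out:

```python
import random

random.seed(42)

# Illustrative only: five elements, each centred to within +/-0.05mm
# in x and y (uniform error), with errors independent per element.
N_ELEMENTS = 5
TOL = 0.05  # mm, per-element centring tolerance

def net_decentre():
    """Net lateral offset of the optical axis, assuming errors simply add."""
    x = sum(random.uniform(-TOL, TOL) for _ in range(N_ELEMENTS))
    y = sum(random.uniform(-TOL, TOL) for _ in range(N_ELEMENTS))
    return (x**2 + y**2) ** 0.5

samples = sorted(net_decentre() for _ in range(100_000))
print(f"median net offset: {samples[len(samples)//2]:.3f} mm")
print(f"worst 1%:          {samples[int(len(samples)*0.99)]:.3f} mm")
```

The worst case (every element out by the full tolerance in the same direction) is possible but vanishingly rare; the bulk of production clusters around a modest net error – which is exactly the ‘good copy’ / ‘bad copy’ spread, all of it technically within spec.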
Any component whose position and dimensions relative to another component matter is susceptible to this, and affected by even the slightest ‘miss’. Whether we can detect it or not is another question: until recently, resolving power on the camera side wasn’t really sufficient to detect anything but the most serious cases of out-of-tolerance parts or misalignment. Consequently, manufacturers were able to get away with much looser quality control procedures. I suspect that tightening QC is partially responsible for the recent increases in lens prices, though it could also be arrogant greed or increasingly complex optics in some cases.
The D800E is a good case in point: together with the A7R, it is perhaps the most demanding camera at the moment in terms of resolving power. (The A7R is actually more demanding on the assembly side, as its mount spec requires less helicoid movement; an identical absolute change in helicoid position will therefore affect the D800E less than it will the A7R – with the consequent impact on resolving power due to the shifting focal plane.) The tolerances for this camera are so tight that when I check my AF fine-tune settings – every 20,000 shutter cycles or so – I inevitably find that things have drifted slightly; my guess is that the mirror moving, wearing in, coming loose, or whatever else is going on affects the sub-mirror position enough that you need to compensate autofocus to obtain optimum results.
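Why a tiny assembly or mount error is invisible on one sensor and obvious on another comes down to depth of focus at the image plane, which the standard approximation gives as t ≈ 2·N·c. A rough back-of-envelope, with circles of confusion near the pixel pitch (the pitches and the 10µm error below are illustrative assumptions, not measured values):

```python
def depth_of_focus_mm(f_number, coc_mm):
    """Total depth of focus at the image plane: t = 2 * N * c."""
    return 2 * f_number * coc_mm

# Illustrative circles of confusion, ~pixel pitch for a pixel-level criterion.
scenarios = {
    "12MP full frame (~8.5um pitch)": 0.0085,
    "36MP full frame (~4.9um pitch)": 0.0049,
}

flange_error_mm = 0.01  # hypothetical 10um assembly/mount error

for name, coc in scenarios.items():
    t = depth_of_focus_mm(1.4, coc)
    verdict = "hidden" if flange_error_mm < t / 2 else "visible"
    print(f"{name}: depth of focus at f/1.4 = {t*1000:.0f}um -> 10um error {verdict}")
```

Under these assumptions, the same 10µm error sits comfortably inside the depth of focus of a 12MP-class sensor but eats past it at 36MP – which is consistent with higher-resolution bodies suddenly exposing misalignments that older cameras masked.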
Remember the left-side focusing issue with that camera? I’m willing to bet that the misalignment of the AF sensor module (or the AF sub-mirror; it’s hard to tell which) was so small that it would have been a non-issue on a D700. Beyond the camera itself, lens selection matters: I went through five samples of the AFS 28/1.8 G before I found one that didn’t exhibit any decentering. Slower lenses tend to be more forgiving because depth of field masks imperfections to some extent. Fast, long lenses also mask imperfections, because it’s very, very rare that you’ll have all parts of the focal plane at the same distance and in focus anyway – so you simply won’t see if some portion of the frame isn’t pulling its weight.
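The ‘slower lenses are more forgiving’ point can be put in numbers with the usual total depth-of-field approximation, DoF ≈ 2·N·c·(m+1)/m². The magnification and circle of confusion below are illustrative assumptions (a loose head-and-shoulders framing, print-level sharpness criterion), not measurements:

```python
def total_dof_mm(f_number, coc_mm, magnification):
    """Approximate total depth of field: 2 * N * c * (m + 1) / m^2."""
    m = magnification
    return 2 * f_number * coc_mm * (m + 1) / (m * m)

# Assumed scenario: m = 0.05 (head-and-shoulders on full frame), c = 0.03mm.
for N in (1.8, 2.8, 5.6):
    print(f"f/{N}: total DoF ~ {total_dof_mm(N, 0.03, 0.05):.0f} mm")
```

Under these assumptions, depth of field roughly triples between f/1.8 and f/5.6 – three times as much slack in which a slightly soft or mildly decentred zone can hide, which is why a slow kit zoom can look fine where a fast prime of the same build quality would be condemned as a ‘bad copy’.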
Granted, my threshold for imperfection is probably lower than most, but there’s no point in buying a lens that doesn’t do the job. And the more expensive the lens, the more performance I expect – be it maximum aperture, resolving power, or some other quality. Unfortunately, the higher the theoretical performance, the tighter the tolerances must be to achieve it. This means that, realistically, automated assembly with human QC is preferable to humans doing everything: a machine can be both more precise and more consistent than a human – provided it is properly calibrated to begin with. But it’s much easier for a human eye to spot deviation from a pattern than for a machine; it’s just one of the things our brains are good at (remember subject isolation and breaking pattern?). Of course, if you have an assembly line that isn’t properly calibrated, you get a very accurate reproduction of an imprecise object – early D800E AF modules again being a case in point.
My experience with hand-assembled cameras and lenses – namely, Leica – has been less than stellar. I’ve gone through six samples of the 50/1.4 ASPH: one was mechanically defective (it threw an aperture blade on day two); one was astigmatic; two were just soft – one, I suspect, had a slightly too-short intermediate helicoid, the other perhaps elements skewed slightly in all directions; only the first and last were ‘good copies’ – i.e. the elements were aligned, the mechanical bits didn’t break, and the lens generally performed to expectations and in line with the theoretical MTF chart**. The thing is, short of the broken aperture, if you didn’t handle more than one sample you wouldn’t know that the one you had was defective. And this is how I became very conscious of sample variation in the first place: I made the mistaken assumption that all of the lenses were like my first one, which I sold to a friend knowing that there was another one sitting at the dealer.
**Note that almost all MTF charts are theoretical maximums calculated by computer; as far as I know, only Zeiss actually use measured results for their published MTF charts. So what you think you’re getting might not necessarily match what you actually get.
You can of course use this information in two ways: test the actual lens you’re going to buy, and if you’re happy with it, don’t try anybody else’s – ignorance being bliss and all that. Or, more practically, make friends with your dealer so you can try a reasonably large number of samples and pick the best one; I do this, and also talk to people I know who evaluate lenses on the same criteria I do. Here’s another thing: in order to produce a meaningful review that applies to the majority of samples – whether it’s a lens or a camera – it is necessary to try multiple units to ensure that results are consistent. I mention it now because it occurs to me that few, if any, ‘reviewers’ do this; what it means is that their conclusions may only apply to that single unit. If a finding applies to a large sample of the population, then it’s probably a trend; if not, it’s meaningless – it could just be bad luck. I’ll only mention a problem if I’ve found it on the majority of units tested, and the number of units tested is quite a few – again, not to harp on the D800E issue, but I tested eight units, all of which displayed the problem. Similarly, I tried two copies of the Zeiss Otus (at the time, a full 1% of total production, and all I could get my hands on) and checked my results with other owners before coming to the conclusion that it was perhaps the best lens ever made for F mount.
I want to finish by putting all of this into perspective: in general, you’re not going to be able to see much difference on systems below 12MP. Current production tolerances are sufficiently tight that you can shoot with confidence. 24MP and higher-density sensors are more demanding; and the 36MP sensors are the worst of the lot to date – eking out every last bit of information from them is a task that requires climbing very far up the diminishing-returns curve indeed. In my experience – both photographing and evaluating students’ files – the transition from 16 to 24MP is where you start to consistently see the effects of sample variation; but it’s also the transition point where technique and shot discipline become the dominant limiting factors. So, in conclusion: sample variation matters; pick the best sample you can; but before you spend too much time and effort worrying about it, make sure that your technique is good enough to consistently see the difference in the first place…MT
2014 Making Outstanding Images Workshops: Melbourne, Sydney and London – click here for more information and to book!
Visit the Teaching Store to up your photographic game – including workshop and Photoshop Workflow videos and the customized Email School of Photography; or go mobile with the Photography Compendium for iPad. You can also get your gear from B&H and Amazon. Prices are the same as normal, however a small portion of your purchase value is referred back to me. Thanks!
Images and content copyright Ming Thein | mingthein.com 2012 onwards. All rights reserved