A side effect of the ever-increasing resolution of today’s cameras is that autofocus must necessarily get more precise, too. The Nikon D800/D800E issues have shown that even a small misalignment or miscalibration in the focusing system can effectively cripple the camera, leaving it resolving at a far lower level of performance than it is capable of under ideal circumstances. Short of using manual focus and magnified live view for everything – I would still recommend doing this for critical work, and I do it when working under any controlled lighting situation since I’m more likely to have the time and be using a tripod – it is therefore highly beneficial to pay closer attention to exactly what is going on when the camera acquires focus.
For DSLRs, SLTs and some mirrorless cameras (the Nikon 1s and Sony NEX-5R and NEX-6), a phase detection system is used. This involves taking some of the light from the subject area, passing it through a beamsplitter and comparing the difference in phase of the output; a CCD is used to measure light intensity as a function of position, and the lens is moved until light from both arms of the beamsplitter is coincident upon a single point. This entire module constitutes the AF sensor array that’s either located at the bottom of the mirror box (DSLRs) or embedded in certain specific photosite locations (mirrorless cameras). If you select a specific AF point, then the camera uses only the sensors corresponding to the location of that point; if you let the camera pick, it will usually sample all points to find which is the closest subject covered by the AF sensor array, and focus on that.
Phase detect autofocus is fast and generally does not require racking the lens back and forth – otherwise known as ‘hunting’ – because the sensor is able to tell whether the light is positively or negatively out of phase, and thus in which direction to move the lens in order to correct this and bring the light coincident, thereby achieving focus. The precision of focus depends on several factors: firstly, the resolution of the AF sensor; secondly, the alignment of all secondary optics involved in transferring the light to the AF sensor – specifically, the main and submirror assemblies and any microlenses involved; thirdly, the alignment of the AF and imaging sensors (both must be perfectly perpendicular to the lens mount); fourthly, any calibration data the system requires to establish a perfect zero or null position; and finally, the ability of the lens’ focusing groups to move precisely in small increments while maintaining perfect alignment with the optical axis.
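The key property described above – that the sign of the phase difference tells the camera which way to drive the lens – can be sketched in code. This is a toy model, not real AF firmware: actual modules correlate two one-dimensional intensity profiles read off the AF sensor’s CCD line, but the principle of finding the shift that best aligns them is the same. All names and data here are hypothetical.

```python
def phase_offset(profile_a, profile_b):
    """Return the shift (in sensor elements) that best aligns the two
    intensity profiles from the beamsplitter's two arms. The sign tells
    the camera which direction to move the lens; the magnitude tells it
    roughly how far -- which is why phase detect needn't hunt."""
    best_shift, best_score = 0, float("inf")
    n = len(profile_a)
    for shift in range(-n // 2, n // 2 + 1):
        # Sum of squared differences over the overlapping region
        score = sum(
            (profile_a[i] - profile_b[i + shift]) ** 2
            for i in range(max(0, -shift), min(n, n - shift))
        )
        if score < best_score:
            best_shift, best_score = shift, score
    return best_shift

# Two identical intensity peaks displaced by 3 elements: the positive
# result would drive the lens one way, a negative result the other.
a = [0, 0, 1, 4, 9, 4, 1, 0, 0, 0]
b = [0, 0, 0, 0, 0, 1, 4, 9, 4, 1]
print(phase_offset(a, b))  # → 3
```

In a perfectly focused system the two profiles coincide and the offset is zero – this is the “null position” that the calibration data mentioned above is meant to establish.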
Focusing with wide angle lenses is generally less precise with this method because the differences in phase are a lot smaller; to complicate things, the lens itself may have optical limitations in its design, introducing field curvature, coma, etc. – all of which can send potentially misleading data to the AF sensor, resulting in incorrect focus. It also doesn’t help that subjects tend to be a lot smaller, and often don’t fill the AF boxes completely. (It’s also worth noting at this point that the AF boxes themselves are an indication of where the sensor grid lies, but there’s no documentation covering precisely where the active areas are located. For greater precision, perhaps the sensors should be crosses instead of boxes.)
For moving subjects, phase detect systems either continuously change the focus distance, depending on the instantaneous phase information received at the AF sensor, or alternatively employ a predictive algorithm and multiple focusing points in order to track the subject. The most sophisticated systems also employ information from the metering sensor in order to track the subject by color. None of these systems is infallible; all can be fooled by objects of a similar color or larger size coming between the camera and subject – for instance, if your subject happens to duck behind something. Although the processing power and sophistication of these systems have increased significantly over the past few years, I have yet to see any autofocus system that can track an erratically moving subject with 100% reliability – especially if it leaves the area of the frame covered by the autofocus sensor array.
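The predictive part is conceptually simple: the camera samples the subject distance a few times, estimates its velocity, and extrapolates to where the subject will be when the shutter actually opens, accounting for mirror and shutter lag. A minimal sketch, assuming a straight-line subject and hypothetical numbers (real systems fit more sophisticated motion models):

```python
def predict_distance(samples, lag):
    """samples: list of (time_s, distance_m) readings from the AF sensor.
    lag: seconds between the last reading and the actual exposure.
    Returns the distance the lens should be driven to at exposure time."""
    (t0, d0), (t1, d1) = samples[-2], samples[-1]
    velocity = (d1 - d0) / (t1 - t0)   # m/s; negative = approaching
    return d1 + velocity * lag

# Subject approaching at ~5 m/s, with 50 ms of shutter lag:
readings = [(0.00, 10.0), (0.10, 9.5)]
print(predict_distance(readings, 0.05))  # → 9.25 (metres)
```

This also makes clear why erratic subjects defeat such systems: the extrapolation is only as good as the assumption that the subject keeps doing what it was just doing.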
I’m sure you can now see why the challenge of achieving perfect focus gets more and more difficult as sensor resolution increases: if any one of these is out of tolerance by a very small margin, you’re not going to have a sharp image.
Most mirrorless/CSC cameras, compact fixed-lens cameras and DSLRs in live view all use a much simpler method of focusing – contrast detection. This involves moving the focus point of the lens back and forth to test which direction delivers the highest contrast; the camera then iterates this process until the highest contrast is achieved. Although hunting has been minimized with the new generation of contrast detect cameras, it is still necessary to rack focus back and forth simply because there is no way for the camera to know in which direction to move the lens. Because of this, contrast detect autofocus will always be slower than well-implemented phase detect autofocus, all other things being equal. However, it will also be more accurate, simply because the imaging sensor itself is used to determine the point of optimal focus, and there are far fewer potential issues with tolerances and alignment of components.
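The iterative process above is a hill climb, and can be sketched as follows. This is purely illustrative: the `contrast()` function here is a stand-in with a made-up peak at position 42, whereas a real camera measures contrast directly off the imaging sensor.

```python
def contrast(position):
    """Toy contrast curve -- hypothetical peak at lens position 42."""
    return 100 - abs(position - 42)

def focus_by_contrast(position, step=8):
    """Hill-climbing sketch: the camera cannot know the right direction,
    so it must try both; when neither direction improves contrast it has
    overshot the peak, so it halves the step and refines."""
    best = contrast(position)
    while step >= 1:
        for trial in (position + step, position - step):
            if contrast(trial) > best:
                position, best = trial, contrast(trial)
                break                  # keep moving the same way
        else:
            step //= 2                 # both directions worse: refine
    return position

print(focus_by_contrast(10))  # → 42
```

The back-and-forth testing in the inner loop is exactly the residual ‘hunting’ the paragraph describes – it cannot be eliminated, only made small and fast.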
It is also worth noting that the size of the sensor actually plays an important part in determining just how fast contrast-detect autofocus systems can be: larger sensors have shallower depth of field for a given field of view and aperture, requiring more movement of the focusing groups within the lens in order to determine where the point of highest contrast (and correct focus distance) lies. This is especially noticeable when comparing a compact camera to a DSLR. Compounding this is the fact that small sensor cameras require much shorter real focal lengths to achieve the same angle of view; the resulting extended depth of field demands less focusing precision, because any potential errors are covered up by the greater depth of field. The slow focusing of DSLRs in live view mode is not due to the lens’ focusing motor speed; the same combination is often capable of delivering blazingly fast results when used with the regular phase-detect system.
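A back-of-envelope calculation makes the sensor-size effect concrete. The figures below are illustrative, using the standard thin-lens depth-of-field approximation: a small-sensor compact needs a much shorter real focal length (and has a proportionally smaller circle of confusion) for the same angle of view, and at the same f-number its depth of field comes out several times deeper.

```python
def depth_of_field(focal_mm, f_number, subject_m, coc_mm):
    """Approximate total depth of field in metres, for subject distances
    well short of infinity. coc_mm: circle of confusion for the format."""
    h = focal_mm**2 / (f_number * coc_mm)   # hyperfocal distance (mm)
    s = subject_m * 1000.0
    near = h * s / (h + s)
    far = h * s / (h - s) if s < h else float("inf")
    return (far - near) / 1000.0

# Full-frame 50mm vs a small-sensor compact with roughly the same angle
# of view (~10.8mm real focal length, ~4.6x crop), both f/2.8, subject 2 m:
print(depth_of_field(50.0, 2.8, 2.0, 0.030))    # full frame: ~0.27 m
print(depth_of_field(10.8, 2.8, 2.0, 0.0065))   # compact: ~1.4 m
```

With roughly five times the depth of field to hide in, the compact’s contrast-detect system can stop searching far sooner – which is a large part of why small cameras feel snappier than their sensor size would suggest.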
There’s one added method that used to be common in older cameras, but is now only to be found on some of the Ricoh compacts: active phase detect. This uses an infrared beam to light the subject, and the reflected light is measured by two phase detection sensors on the front of the camera to assist the contrast detect system. It can greatly speed things up, but range is limited because it requires active illumination from the camera – and the power of these secondary lights is always limited.
Now that you have some understanding of how autofocus systems work, let’s talk about some tips to maximize the accuracy and speed of your camera.
- Don’t let the camera pick the focus point for you. Unless you are shooting an erratically moving subject which you cannot follow while manually selecting the focus point, always shoot in single point mode and position your focus point carefully over your subject. Many cameras also weight the metering in favor of the focus point; it is therefore important to ensure that it corresponds with your subject – which is almost always what you would want correctly metered anyway.
- Make sure your subject is larger than your focus point. If it isn’t, you need to either move closer (this also becomes a compositional issue) or focus on something at the same distance which presents a larger target.
Phase-detect cameras (DSLRs, Sony NEX-5R, NEX-6, Nikon 1)
- The camera will always focus on the closest object underneath the focusing point. It may sometimes be fooled by a higher contrast structure – for example, a barcode instead of a blank piece of paper immediately behind it – but in general it will pick the closest subject providing it completely covers the focusing point.
- High-contrast subjects (again, like barcodes) make ideal autofocus targets. It is also worth noting that some autofocus points are sensitive to detail in one direction only – i.e. horizontally or vertically, but not both. (Cross-type points are sensitive to detail in both directions, but these are generally found only at the center point, or distributed across the AF sensor array only on high end cameras.) It is therefore important to find a suitable target for your camera – a QR code rather than a barcode, I suppose.
- Use continuous autofocus, unless you are shooting a static object with the camera on a tripod. This is because any small motion of either you or the subject can be enough to move the plane of focus away from the intended point; this is especially critical with fast, shallow depth of field lenses. With continuous autofocus, the camera is always focusing right up to the point of image capture. The one exception is slow or wide angle lenses. Smaller format cameras are a bit less sensitive to this issue because they have more depth of field for a given angle of view, which tends to compensate for any errors in the focusing system.
- Try to avoid focusing at the center and recomposing your image where possible, because there are potential issues with field curvature – especially at the edges and corners of wide angle lenses. Use the autofocus point that is either directly over your subject or closest to it in order to minimize any potential issues with the lens’ design.
- Assign a button to locking focus (AF-L) to use in conjunction with continuous autofocus; this saves you having to switch to single autofocus for static subjects. Alternatively, decouple focusing from the shutter button by assigning an AF-ON button that activates focusing when pressed. I don’t use this method, as it requires you to press two buttons to shoot; I prefer to minimize the number of controls that must be attended to, especially in fast-moving situations.
Contrast detect systems (DSLRs in live view, compacts, CSCs, mirrorless cameras)
- Once again, do not let the camera pick the focus point for you; select it yourself. If anything, cameras that use contrast detect systems tend to be far more flexible in where you can put your focusing point; this is because they use the entire area of the imaging sensor.
- Avoid continuous autofocus. This seems counterintuitive in light of my advice for phase detect cameras, but continuous autofocus on a contrast detect system is constantly hunting back and forth around the point where it expects the subject to be; imagine a car trying to follow a curve that the driver can’t see until it’s almost immediately in front of him – the path (here, the focusing distance) will be erratic and won’t match the curve exactly. This tends to result in a very low hit rate. It also helps that contrast detect cameras tend either to have an alternate system to deal with moving objects (in the case of DSLRs), or to employ much smaller sensors that are very forgiving of minor focusing errors or changes in subject position, thanks to their extended depth of field.
- If you have to deal with a moving subject, there are two other alternatives. The first is to set your camera to maximum contrast (for obvious reasons) – the live view image is usually a preview of your current camera settings and will match the JPEG output; if you’re shooting raw, your file will not be affected by in-camera processing. The second is an old trick from the days of manual focus photography, called ‘trap focusing’. First, decide on your composition and where your subject must go in order to complete it; ensure your shutter speed is high enough to prevent motion blur of the subject; finally, choose single autofocus and prefocus the camera at that position, releasing the shutter when the subject reaches the intended spot. One added advantage of this technique – especially for compact cameras – is that it significantly reduces shutter lag, to the point where it is very easy to release the shutter at the precise moment you intend. Note that if you cannot get a high enough shutter speed, you will need to pan with the subject in order to blur only the background and keep the subject sharp; this combination of panning and trap focusing works best when the subject is moving across your field of view, and is pretty useless if the subject is coming towards you.
- Some cameras have a continuous pre-focus or full-time autofocus option that is always adjusting the lens based on whatever subject happens to be under the focusing point at the time. This is generally a good option if you absolutely must reduce shutter lag and are unable to pre-focus. However, note that the system can also be fooled, most notably by moving the camera around rapidly – especially if you are not pointing it at anything in particular. It is also an enormous drain on (usually already short) battery life because the lens’ focusing groups are constantly in motion so long as the camera is switched on.
It is worth practicing all of these techniques until they become second nature; you’ll be surprised by both the increase in your keeper rate and the improvement in acuity and sharpness at the individual pixel level. It is just one of the many elements of shot discipline, which is critical in achieving the highest possible image quality from your camera. You’ll also be surprised at just how much more responsive your camera has seemingly become. MT
Visit our Teaching Store to up your photographic game – including Photoshop Workflow DVDs and customized Email School of Photography; or go mobile with the Photography Compendium for iPad. You can also get your gear from B&H and Amazon. Prices are the same as normal, however a small portion of your purchase value is referred back to me. Thanks!
Images and content copyright Ming Thein | mingthein.com 2012 onwards. All rights reserved