Less is more: what does a camera really need?

IMG_2115b copy
I’ve long been threatening to post a photograph of a toilet as an example of a minimalist everyday object made interesting – its basic form has been reduced to the bare minimum; ornamentation isn’t necessary, nor does it sell more toilets: less is more. Appropriately, this was also shot with a minimalist camera: an iPhone.

Here’s an interesting question: how many of you have given some thought to the bare minimum a photographic device needs to be used as an effective camera? The problem today is that we’ve become far too accustomed to camera makers stuffing in additional software features in order to sell devices – few of which are useful, and most of which don’t even work properly. Think back to when you last used one of the headline ‘new features’ of your last purchase – pano stitching, for instance; or 10fps tracking; or the ‘supergreen national park-like foliage mode’. Probably only once – shortly after unboxing it – and then never again. I’m willing to bet you can’t even remember which combination of button presses is required to activate it. But judging from current product offerings and advertising, the concept of selling a camera with fewer features is one that simply makes no sense…or does it?


Defining cinematic

_D90_DSC7759 copy

Over the last couple of posts, we’ve looked at the qualities of bokeh, and some examples of cinematic photography in New York; although one of the most obvious hallmarks of the cinematic style is an abundance of very out of focus zones, in reality there’s a lot more subtlety to it. Since this is one of my most frequently used and well-developed styles, I felt that perhaps a little intellectual exercise was in order.


General photographic workflow tips

Whilst it would be impossible to cover absolutely everything you need to know to be proficient in photography in a single article, the aim of today’s piece is to give the amateur-to-hobbyist an idea of the things to keep in mind in order to be able to focus on producing images. It’s something that’s been quite frequently requested in the past few weeks – perhaps a sign that my reader base may be shifting somewhat – so I’ve decided to take a crack at it in a way that makes it both accessible and still somewhat relevant for the more advanced photographer. Where applicable, the section header links to a more detailed article. I’ll approach this in the same sequence as I’d normally deal with my own photographic workflow, in a sort of annotated checklist format.


New! Intermediate PS Workflow Video and digital downloads

Six months is a reasonably long time: long enough that if you’ve had a chance to view and master the Introduction to Photoshop Workflow DVD, chances are you’ve encountered a few situations in which you’ve wanted a little more processing horsepower.

What do I mean by that? Specifically,

  • Application and use of masks;
  • Use of layers;
  • Retouching with the healing brush, clone stamp and regular brush tools – in effect, re-rendering of simple surfaces;
  • How to composite images – both for HDR and integrating multiple elements from different frames into one final image (in conjunction with masking);
  • Use of the Liquify tool;
  • Stitching;
  • How to create actions and automate batches.

The video uses a number of real-life commercial image examples as vehicles to demonstrate intermediate post processing techniques that go beyond the basic Photoshop workflow for converting raw files. It’s impossible to demonstrate these techniques in isolation, as they’re usually used in combination in real applications to achieve a particular outcome or effect. It’s perfect for photographers who already have a basic workflow and are looking to add polish to their images, or for those who want to extend their post processing skills after my Intro to Photoshop Workflow video. It covers effectively 99% of all the postprocessing situations a working pro is likely to encounter. Runtime is 2h20min.

The video is available immediately for US$63 from the Teaching Store or the iPad app – if you don’t see it in the list of videos, swipe down to refresh.


Now is also a good time to announce a change in delivery method for this and all other videos: by popular demand, no more physical post! All videos now come with (near) instant gratification: they will be available exclusively via digital download; the compression will be identical to the DVD, but with a slightly more efficient codec, which means slightly smaller file sizes. Please note that prices remain the same: instead of covering postage and materials, I’m now covering server rental and bandwidth…

Thanks! MT

____________

Visit our Teaching Store to up your photographic game – including Photoshop Workflow DVDs and customized Email School of Photography; or go mobile with the Photography Compendium for iPad. You can also get your gear from Amazon.com here. Prices are the same as normal, however a small portion of your purchase value is referred back to me. Thanks!

Don’t forget to like us on Facebook and join the reader Flickr group!


Images and content copyright Ming Thein | mingthein.com 2012 onwards. All rights reserved

Understanding metering, part two: what to use, when

In part one we examined why metering is important, and the basics of how meters work. In today’s article, I’ll take a closer look at the different types of metering, how they differ, and the situations in which each should be deployed.

metering-viewfinder

A sample viewfinder – in this case, a rough representation of the Nikon D2H/ D2X finder.

With that background out of the way, let’s look at how the various metering options work, and what typical situations they might best be deployed under. Cameras typically have three options, or some variation upon that. Within these options, it’s also usually possible to fine tune various aspects of the meter’s operation. I’m going to leave out handheld meter operation since this is something that’s almost never encountered today. An important point to note is that all meters can be fooled by situations of uniform luminance, so don’t trust the readout blindly. Remember, meters function by averaging the entire evaluated area out to middle gray; this means if your evaluated area is meant to be black or white, you’re going to need to add or subtract some exposure compensation. For predominantly light/ white scenes, you need to add; for dark scenes, subtract. This holds true for every one of the different metering methods detailed below.
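
To make that averaging behaviour concrete, here’s a minimal sketch (in Python, with illustrative reflectance numbers of my own) of what any reflective meter is effectively doing: average the evaluated area, then shift exposure until that average lands on middle gray.

```python
import numpy as np

MIDDLE_GRAY = 0.18  # ~18% reflectance, the meter's calibration target

def meter_shift_ev(luminance):
    """EV shift the meter will apply to drag the evaluated area to
    middle gray. Positive = meter brightens, negative = meter darkens."""
    return np.log2(MIDDLE_GRAY / np.mean(luminance))

# A predominantly white scene (snow, a white bird against a white sky):
white_scene = np.full((100, 100), 0.80)
print(meter_shift_ev(white_scene))  # ~ -2.2 EV: the meter darkens it,
# so you must dial in roughly +2 EV of compensation to keep whites white.
```

Run the same arithmetic on a dark scene and the sign flips – hence subtract for dark scenes.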

Average
The simplest form of metering evaluates the frame as a whole, and tries to expose it to middle gray – under the assumption that there will be shadows and highlights, but that these will average out. It’s seldom used today because you will almost always require exposure compensation (making it unsuitable for the point-and-shoot crowd that constitutes most of the global camera market), but it has the one enormous advantage of behaving predictably in every situation.

Spot
The most selective form of meter is the spot meter. This evaluates luminosity at the desired point only, ignoring everything else in the frame. There are two important things to be aware of with a spot meter: the location and the size of the spot. The metering spot’s location is either in the center of the frame, or tied to the selected or active autofocus point; the logic there is that you would typically want to ensure your subject is both in focus and properly exposed. Variations on the spot meter include types that are biased towards highlights or shadows – i.e. you meter a shadow or highlight and it doesn’t turn out over- or underexposed. With a regular spot meter, don’t forget to add the appropriate exposure compensation.

The size of the spot is also very important – don’t be fooled into thinking that it’s a tiny, precise eyedropper the same size as your autofocus area box – it isn’t! Most consumer cameras have a spot size that’s about 2.5% of the frame area, which is actually quite large – imagine your frame divided into sixths vertically and horizontally, i.e. a grid of 36 boxes; a 2.5% spot meter is about the size of one of these boxes. Professional cameras might have a 1% spot meter; imagine a 10×10 grid of 100 boxes, and this is pretty much what you’ve got. In our sample viewfinder above, the cyan box is a 1% spot meter, tied to the active (red) AF point. Keep this in mind as you’re moving it around. If your spot meter is tied to the center of the frame, then you’ll need to assign another button – perhaps the shutter half-press – to lock exposure once you’ve metered for your subject (unless it is of course dead center, which is highly unlikely).
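
If you want to sanity-check those grid analogies, the arithmetic is trivial – a quick sketch, treating the frame as square for simplicity:

```python
import math

def spot_side_fraction(area_fraction):
    """Side of a square spot, as a fraction of the frame's width, for a
    given fraction of total frame area (square-frame approximation)."""
    return math.sqrt(area_fraction)

print(spot_side_fraction(0.025))  # ~0.158, i.e. about 1/6 of frame width
print(spot_side_fraction(0.01))   # 0.10, i.e. about 1/10 of frame width
```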

The obvious question would be why spot meters aren’t smaller – firstly, you don’t actually want them to be that acute, otherwise moving the camera by a fraction of a degree might yield a vastly different (and incorrect) exposure – they’d be too sensitive to use. Secondly, some averaging is still a good thing – you can move the camera around a bit until the spot falls onto the right mix of light/ dark to give the desired exposure. With practice, this can be much quicker than using exposure compensation.

Use the spot meter in situations where your subject is in very different light to the rest of the frame – either much brighter or much darker – in order to ensure that the focus of your shot is properly exposed. It’s great for high key or low key images – put your subject in the shadows or highlights respectively, and spot meter there – or even general situations under which the luminance of your composition doesn’t vary that much. I don’t generally use it for street photography or fast moving situations, because it requires precision and/ or a little meter-and-recompose dance that can cost you valuable time.

One tip: the way I use the spot meter is always either covering my subject, if the subject is darker than the rest of the frame, or on the highlights plus a bit of dark area, if the subject is lighter than the frame. This effectively tricks the meter into adding a bit of exposure compensation to average out the bright/ dark areas – you need to do this to prevent your highlights from being dragged down to middle gray and consequently losing your shadow information completely. It also adds a bit of speed in operation, since you don’t have to muck around with exposure compensation.

Spot meters only came about when the metering cells in cameras could be made small enough to evaluate only a portion of the frame; they’re common now because our metering sensors are made up of hundreds, if not thousands, of discrete individual elements.

Centerweight
In our sample viewfinder, the circle around the center AF point represents the border of the centerweight metering area. That sounds like a bit of a mouthful, but in reality it’s not. A centerweighted meter divides the frame into two areas – the circle in the middle, and the border. The circle in the middle is roughly where most subjects are going to be framed – and which you would presumably like to expose properly – and it is metered separately from the border area. The two metering values are combined in a predetermined ratio – usually 70-30 in favor of the central portion, sometimes 60-40 – to determine the final exposure value.
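
In code, that combination is just a weighted average of two zone readings; a minimal sketch using the 70-30 split (weighting in linear luminance here, which is a simplification of whatever the manufacturers actually do):

```python
import numpy as np

MIDDLE_GRAY = 0.18

def centerweighted_shift_ev(center_lum, border_lum, center_weight=0.70):
    """Combine the two zone readings (mean luminances) in a fixed ratio,
    then return the EV shift that places the result at middle gray."""
    combined = (center_weight * np.mean(center_lum)
                + (1 - center_weight) * np.mean(border_lum))
    return np.log2(MIDDLE_GRAY / combined)

# A bright subject in the circle over a dark border - the two zones pull
# in opposite directions, much like the bird-over-water example below:
print(centerweighted_shift_ev(0.75, 0.04))  # ~ -1.6 EV
```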

Centerweighted meters are the predecessor to matrix metering – they try to average things out over the entire scene, and make a sensible assumption or two about what you would like to expose for. Modern cameras allow you to change the size of the center area – the D800E, for instance, allows a center circle of anything between 8mm and 20mm in diameter. The default center area is usually etched onto the focusing screen for reference. Note that centerweighted metering was the successor to average metering, and shares its advantage of predictability: if you put your subject in the circle, chances are the exposure will be right; the advantage it has over average metering is the ability to bias the exposure towards your subject.

In situations where spot metering would not be suitable – action, for instance – I actually prefer centerweighted metering to matrix on unfamiliar cameras; at least I have some idea of how the meter will respond. There’s nothing more frustrating than losing a shot to over- or underexposure because matrix metering has gotten things very, very wrong.

Matrix
Matrix metering is either a miracle or a curse, depending on where you stand. For those who don’t want to take control of their cameras, matrix metering provides a higher ‘hit rate’ than average or centerweight; the problem is, you have absolutely no idea when it’s going to get it wrong, or by how much. This can be mitigated by experience with a particular system; as you encounter more situations, you get a better idea of when the camera is going to miss. It’s for this reason that the only time I use matrix metering in a situation where delivery is critical is when I’m shooting cameras I’m familiar with – the Nikons, and the OM-D. Everything else is either spot or centerweight.

That doesn’t of course explain how it works. The frame is divided up into a number of areas – up to 100,000 of them in the Canon 1Dx – and a reading taken of each area, for both luminance and color. The camera then compares this to a database of similar situations (i.e. photographs converted into 100,000-or-however-many-pixel maps, along with their exposure values) and determines the exposure from the closest match. If the camera can’t find a matching situation, it makes an intelligent guess about what the exposure should be based on a combination of overall scene luminance, color, and the current AF point. With this many variables, it’s actually surprising that the meters get it right such a high percentage of the time – perhaps there are only so many possible luminance maps?
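
Nobody outside the manufacturers knows the exact algorithm, but the database-matching step can be caricatured as a nearest-neighbour lookup over coarse luminance maps – a purely illustrative sketch, not Nikon’s or Canon’s actual method:

```python
import numpy as np

def downsample(img, grid=(16, 16)):
    """Reduce a luminance map to a coarse grid of zone averages."""
    rows = np.array_split(img, grid[0], axis=0)
    return np.array([[block.mean()
                      for block in np.array_split(row, grid[1], axis=1)]
                     for row in rows])

def matrix_metering(scene, database):
    """database: list of (coarse_map, exposure_ev) pairs built from known
    scenes with the same downsample(). Returns the stored exposure of the
    closest-matching map; the 'intelligent guess' fallback is omitted."""
    probe = downsample(scene)
    distances = [np.sum((probe - ref) ** 2) for ref, _ in database]
    return database[int(np.argmin(distances))][1]
```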

In any case, matrix metering tends to be more reliable in situations that don’t have extreme contrasts, bright point sources in the frame, or very small subjects. Under quickly-changing circumstances, it’s the method of choice – it might get things wrong, but most of the time it will save you from having to move the spot around or use exposure compensation. For most users, matrix metering is sufficient, and you can always add or subtract exposure compensation and take another shot. It’s also worth noting that matrix meters that use the imaging sensor are much more accurate and reliable than those with separate metering sensors, simply because the tonal response characteristics of both match, making accidental overexposure almost impossible. Presumably, these should also run some form of ‘expose to the right’ algorithm for digital cameras – but then again, perhaps not, as it would only be useful for RAW shooters.

I think considering some examples would be useful at this point. Let’s take a few of the images from my recent Introduction to Wildlife workshop:

metering-1-master

This image could be taken care of by either spot or centerweight; I have no idea if matrix would have been accurate or not. For centerweight, you would need to ensure the central spot is over the subject area, like so:

metering-1-cw

This implies that a lock-exposure-and-recompose is necessary – or perhaps not, seeing as I intended to crop the final image to a more square aspect ratio anyway. You might wonder whether the 70-30 distribution – specifically the metered portion falling on the black water – would throw things out; in this case, it actually helped. The center portion would have metered the white bird to middle gray, i.e. too dark; the outer portion metered the black water to middle gray, i.e. too light. They averaged out.

metering-1-spot

We could also have used the spot meter, in a few different ways. For location A, no compensation would be required so long as we took a bit of the dark portion and a bit of the highlight portion – i.e. enough to average out to middle gray. Location B would have required some positive exposure compensation, as it is a highlight in zone VII-VIII or so. Location C falls in zone V anyway, which is middle gray – so no exposure compensation would have been required. In this case, I would have picked location C if using an AF lens (I wasn’t), as it’s of both the right luminance value and subject distance – alternatively, the head would have been a good choice, too.

metering-2-master

Here’s our second example. This is a much trickier situation because of the thin rim of backlight around the bird; you don’t want to overexpose that, or else you’ll lose all tonal detail in the feathers.

metering-2-cw

You can see here that centerweighted metering wouldn’t work; the highlight areas – in this case, the subject from the meter’s point of view – are just too small. It would expose for the dark area and result in blown highlights. Spot metering, on the other hand, is ideal:

metering-2-spot

Location A is obviously nonsensical because although it might be the same luminance value as most of the bird, that isn’t the part we’re exposing for; using location A would result in huge overexposure. Location B is fine, and the highlight area is small enough that it wouldn’t require any exposure compensation since some of the dark background is also included – this is actually what I used – C and D are also workable options, though C might require a little negative compensation.

How about a few more examples?

_5014972 copy
Clearly, spot metering on the eye is the only way to go – all other options would have resulted in overexposure, with both a loss of detail and an imbalance in the composition, because the viewer’s eye would not be drawn to the intended area.

_5014910 copy
Actually, any metering option would work fine here – the scene is divided into relatively large portions of different luminance. If you used spot on the feathers, you’d have to add a bit of exposure compensation to keep things white; if you used center, you’d have to lock exposure and then recompose.

_5014793 copy
Our frame is fairly consistent in luminance, so once again, any metering method would work. However, all would require a bit of positive exposure compensation as the overall tone of the subject is light, and should be kept high-key.

_5015451bw copy
Small white subject against a dark background, intense contrasts – spot meter.

_5014828 copy
Here’s one situation where matrix metering would actually work better than the other options: you have relatively even luminance across the frame, a strong colored background (making centerweight possibly inaccurate) and a fast moving subject (making spot metering impractical).

Of course, knowing which metering method to use in a given situation is quite useless unless you have things set up so that it’s easy to switch between them; otherwise, pick one and get used to the way it operates. If you can lock exposure separately from focus, then you don’t really need to use exposure compensation most of the time – the spot meter is all you need. If you can’t be bothered to do the finger dance, well, that’s why matrix was invented. Needless to say: as ever, practice is the key to mastery. MT


Understanding metering, part one: introduction

metering-2-master
An image from my recent Introduction to Wildlife workshop, and a very tricky metering situation – more importantly, do you know why, and what to do in a situation like this to achieve the desired exposure outcome?

One of the more important – yet almost always overlooked – aspects of camera operation is metering. Simply put, the meter determines what your final exposure is, and how bright or dark your image looks relative to the scene. Unless you are shooting manual – and even then – the camera’s exposure is determined by the meter. Add the fact that the eyes of a viewer tend to go to the brightest and/ or highest contrast portions of an image first (i.e. this should be your subject) – and it’s clear to see why it’s absolutely critical to understand both how metering works as a fundamental concept and any camera-specific peccadilloes that might exist. The last thing you want is to find that your camera drastically underexposed a once-in-a-lifetime shot of some critically important event because you didn’t know (or forgot) that the meter was extremely affected by point light sources*.

*This can actually happen. The meter in the Leica M8/9 is extremely sensitive to direct point light sources, and can often yield nonsensical readings of, say, 1/1000s at ISO 160 for a shooting aperture of f4 at night – that’s because it’s picking up a street lamp. One can only hope the new M is less affected by this – the only solution to the problem I’ve been able to find is just to go 100% manual at night.

How meters work
Depending on which exposure mode your camera is in, the meter will try to find a combination of settings that creates an image that averages out to middle gray in luminance, i.e. the histogram average is around level 127 or thereabouts. There are three exposure parameters the camera can use to control the amount of light reaching the image processor – note that the sensor is also now involved in the process – shutter speed, aperture and digital gain, i.e. ISO. If you fix one or more of these variables manually – say by shooting aperture priority at a set ISO – then the camera varies the remaining parameters according to a set of rules in order to achieve the ‘correct’ exposure. If the correct exposure is out of adjustment range – e.g. the required shutter speed for a given aperture is too high – then you’re going to land up with an over- or underexposed image.

In program mode, the camera controls both aperture and shutter values depending on its preset program; the photographer can usually shift the program to a different combination of values that still yields the same net amount of light hitting the sensor. In shutter priority, the user fixes the shutter value manually, so the camera alters the aperture. In aperture priority, it’s the other way around. In manual mode, the user fixes both values – the only thing the meter can do is display how far off the manually chosen exposure is from the correct exposure, or alter the ISO or flash. If auto-ISO is activated, then the camera will always default to the lowest possible ISO within the specified range in order to keep the shutter speed at or above a certain value – either user-selected or 1/focal length in seconds. (Note that for some cameras, using manual shutter and aperture values will cause the camera to shift the ISO rather than display the variance from correct exposure.)
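
The bookkeeping behind all of these modes is the same equation rearranged. A simplified sketch of aperture priority, using the standard relation EV = log2(N²/t) adjusted for ISO, with exposure compensation folded in as an offset (more on that below); the numbers are illustrative:

```python
import math

def shutter_for(metered_ev, aperture, iso, ev_comp=0.0):
    """Aperture priority: solve the exposure equation for shutter time t.
    log2(N^2 / t) = EV + log2(ISO / 100); compensation shifts the target
    (+1 EV comp -> a brighter image -> a longer shutter time)."""
    target_ev = metered_ev - ev_comp
    return aperture ** 2 / 2 ** (target_ev + math.log2(iso / 100))

# Metered EV 12 (heavy overcast), f/4, ISO 200:
print(shutter_for(12.0, 4.0, 200))        # ~0.002s, i.e. about 1/500s
print(shutter_for(12.0, 4.0, 200, +1.0))  # +1 EV comp doubles it: ~1/250s
```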

Simple enough, right? So why are there so many different metering methods? My OM-D, for instance, has no fewer than five: matrix, centerweight, spot, spot low and spot high. The differences come down to the area of the frame the meter evaluates when deciding what the correct exposure should be. Note that in all cases, it will still try to expose the considered portion of the frame to middle gray – it’s just that this area changes. One uses the different metering methods in different subject situations. We’ll get into that in more detail later; first, there are a few more things that need explaining by way of background.

The meter itself is a photovoltaic cell, or combination of cells, whose output voltage over a certain area varies depending on how much light lands on it. The more light, the higher the voltage, which is translated into a brighter exposure. A particular chemistry’s electrical response is a fixed property of the material, and therefore consistent across different situations and cameras. Note that some meters require power to give a readout – this is because a base voltage must be applied across a semiconductor for it to respond to light, or to amplify the signal to a point where it gives an output that can be displayed – CCD meters are like this, for instance – other types of semiconductor photovoltaics do not require power because they already produce current on their own the minute light hits them. (Solar cells, for instance, fall into the latter category.)

Note that not all cameras have built-in meters; very early film cameras generally did not, and required the use of a separate handheld meter, or a particularly sensitive eyeball. My Nikon F2 Titan, for instance, comes standard with the unmetered/ plain DE-1 prism/ finder. Early Leicas are the same. A whole variety of hotshoe-based clip on meters are available, as well as handheld types. Modern digital cameras either use a separate metering CCD, usually located in the viewfinder (for an SLR) or use the imaging sensor (for any live-view based cameras) – this is obviously the most accurate possible method of metering given that the metering sensor also perfectly represents the response of the imaging sensor. (This was not always the case with film and separate meters; it was therefore highly important to know the characteristics of your particular chosen film.)

Incident vs reflective
All cameras’ built-in meters are of the reflective type. This is to say that they measure the amount of light reflected from the subject and hitting the camera; the advantage is that you don’t have to stand in the same light as your subject in order to obtain a reading – potentially problematic if your subject is, say, a landscape that’s several kilometers away – but at the same time, they suffer from the disadvantage of not being able to obtain an accurate reading for very reflective subjects. Incident meters are always handheld (but handheld meters can be either incident or reflective) and are placed in the same light as the subject in order to obtain an accurate exposure reading. The photosensitive portion of the meter is covered by a matte white dome in order to ‘average out’ the light measured by the meter.

Exposure compensation
The use of exposure compensation is simply translated into an offset of the zero point of the meter. For instance, if you dial in +1 EV exposure compensation, then the meter will add this to the calculated exposure value before displaying the final settings.

Flash metering
Flash metering is slightly more complicated. There are two ways to determine how much flash power is required to achieve the correct exposure. The first is by using an incident light meter next to the subject, firing the flash, and setting the camera with the value displayed on the meter. This is the most precise method, but again, it is often impractical if you do not have time to repeat a shot. The second, more common method uses a very short-duration, low-power preflash of known output in conjunction with the reflective meter to determine how much additional power is required to make up the gap between the trial exposure and a correct exposure, with the given camera settings. The adjustment to flash power is made almost instantaneously, and a second, correct-power flash is fired along with the exposure. This entire process is so fast that there is almost zero added lag. The disadvantage again is that partially transparent or reflective objects may not be correctly exposed, as the metering type is reflective-only.
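
As a toy model of the preflash method – ignoring ambient light and assuming the meter’s reading scales linearly with flash output, which real TTL systems certainly complicate – the power adjustment works out like this:

```python
def main_flash_power(preflash_power, preflash_reading, target_reading=1.0):
    """Scale the flash output so the reflective meter's reading would hit
    the target level. Linear toy model only: ambient light, distance and
    subject reflectance quirks are all ignored."""
    return preflash_power * (target_reading / preflash_reading)

# A 1/32-power preflash came back at 35% of the desired level:
print(main_flash_power(1 / 32, preflash_reading=0.35))  # ~1/11 power
```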

Histograms and expose to the right
The exposure histogram represents the evolution of the light meter into the digital age. It not only shows you what the average exposure should be over the entire frame, but how that exposure is distributed. For instance, it is important to know whether you have one uniformly gray area across the entire frame, or, say, two halves of the frame divided into 100% black and 100% white areas. A simple exposure meter that evaluates the entire frame would give identical readings for both situations. However, in the second situation, you would probably expose for the highlight areas to prevent loss of tonal detail. This would actually result in a final exposure that is slightly darker than what the whole-frame evaluative meter would suggest. Learning to read a histogram is therefore a very useful skill for digital photography. Digital histograms actually come with two other very useful tools. The first is the ability to display areas of the image that are overexposed – usually in the form of a flashing highlights warning; the second is the ability to redraw the histogram based on the specific area of the image displayed. Note that the availability of both of these functions depends very much on the camera you’re using. Some cameras are able to display histograms and overexposure warnings for individual color channels, as well as overall luminance.
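
Both of those tools are easy to picture in code. A minimal sketch with NumPy, assuming an 8-bit luminance image:

```python
import numpy as np

def histogram_and_blinkies(img, clip_level=255):
    """img: 2-D array of 8-bit luminance values. Returns the 256-bin
    histogram plus a boolean mask of overexposed pixels - effectively
    the flashing-highlights ('blinkies') warning."""
    hist = np.bincount(img.ravel(), minlength=256)
    blown = img >= clip_level
    return hist, blown

img = np.random.randint(0, 256, (400, 600), dtype=np.uint8)
hist, blown = histogram_and_blinkies(img)
print(hist.argmax(), 100 * blown.mean())  # peak level, % of pixels blown
```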

Metering is actually much more critical in the digital age, simply because of the tonal response characteristics of the imaging medium. With film, there was a degree of nonlinearity and reciprocity error which translated into a little bit of latitude in photosensitivity; for negative film, this may vary by as much as 1 to 2 stops: the same exposure with different batches of film, even of the same emulsion type, may not necessarily result in the same final luminance. Add variation in the developing chemistry to that mix, and you can see why high precision wasn’t all that critical. (Slide film is a different story; it’s very sensitive to over- or underexposure.) However, digital photography is nothing if not repeatably consistent. Two identical cameras with identical exposure settings will yield an identical image under any given fixed situation. Changing the exposure by as little as a sixth of a stop will be consistently visible.

There’s also one additional characteristic of the digital medium that we need to take into consideration. This is to do with signal amplification and noise, and also the origins of the ‘expose to the right’ motto. Exposing to the right refers to ensuring that the histogram graph touches the right-hand (highlight) side of the scale, but does not exceed it. The reason for this is to capture as much total information as possible, with as little noise as possible. Underexposure in a digital image may be corrected for by increasing brightness. This is achieved by amplifying the signal; doing so also amplifies any uncertainty in the signal, which translates into increased amounts of digital noise – obviously not a desirable characteristic in an image. The advantage of exposing to the right is that we maximize the amount of signal and minimize the amount of noise. The brightest tonal values in a digital image also contain the most information, simply because of the way digital sensors respond to light. This translates into maximizing latitude for post processing, higher color accuracy, and less noise – in short, making the most of your image quality potential.
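
Stated as code: push exposure until the brightest pixels you care about just touch, but don’t cross, the clipping point. A sketch with my own (arbitrary) percentile and headroom choices, raw values normalised to 0-1:

```python
import numpy as np

def ettr_headroom_ev(raw, highlight_pct=99.9, ceiling=0.98):
    """How many EV of exposure could be added before the chosen percentile
    of pixels reaches 98% of full scale. Negative = already clipping."""
    brightest = np.percentile(raw, highlight_pct)
    return np.log2(ceiling / brightest)

raw = np.random.uniform(0.0, 0.4, 10_000)  # a flat, underexposed frame
print(ettr_headroom_ev(raw))  # ~ +1.3 EV of unused highlight headroom
```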

We therefore want to expose the image as brightly as possible, and then adjust the tonal map later in post processing – or do we? The reality is that in most situations this holds true. However, due to the nonlinearity of the tonal response of some sensors, there may be situations in which we actually want to underexpose or overexpose slightly in order to create a particular look. Of course, if you are a JPEG shooter and do not post process at all, you should expose at your intended final output level.

Note that this is much more of an issue for digital cameras than film ones, as the tonal response of film is non-linear – however, underexposure in a digital camera will usually result in undesirable noise when the luminance value is brought up to the desired level because it can only be done by amplifying a small signal. This in turn amplifies the uncertainty in the signal, i.e. noise.

White balance
One additional complication brought upon the digital photographer has to do with white balance and color temperature. Different colors have different luminosity values even under identical illumination; this is to do with the wavelengths that are reflected or transmitted to the imaging device, and their associated energy (luminance) levels. From a perceptual point of view, we see this as different brightness**. White balance is an important setting that comes into play here: it acts as the zero-offset point for color, effectively adding or subtracting different amounts of exposure compensation from the various channels to compensate for the ambient light. (This is how whites can still be rendered as white under extremely warm incandescent light if the correct white balance is used.)

**Nikon’s color matrix metering system has long compensated for this by using a metering CCD that had a color filter array over the top, both to aid scene recognition as well as increase exposure accuracy when presented with strongly colored subjects – for instance, yellow objects always render brighter than reds or purples of a given reflectance even if they’re illuminated under identical light – the color matrix meter compensates for this by increasing or decreasing the exposure if a scene is predominantly of one color or another.

We have several considerations here. The first and most obvious is color accuracy – even so, this can be compensated for with the eyedropper tool in Photoshop, provided we can find something white in the frame to set as a baseline. The less obvious problem is to do with the individual channels. If the white balance is incorrect and a channel is overexposed, there is no way to recover this information afterwards. It is therefore important to set a white balance that is in the right ballpark – it doesn’t have to be perfect – to avoid this. Similarly, extreme underexposure of a channel will result in a lot of noise when compensated for afterwards. Generally, the auto white balance function in most cameras will get you in the right ballpark, but you will need to make adjustments afterwards.

The auto white balance function actually works in a similar way to an exposure meter – except instead of trying to average the scene out to middle gray in luminance, it tries to average out the scene to a perfectly neutral color.
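
That is essentially the classic ‘gray world’ assumption. A minimal sketch of both ideas – white balance as per-channel gains, and auto white balance deriving those gains by forcing the scene average to neutral:

```python
import numpy as np

def gray_world_gains(img):
    """img: HxWx3 linear RGB. Per-channel gains that make the scene's
    average color neutral (equal in all three channels)."""
    means = img.reshape(-1, 3).mean(axis=0)
    return means.mean() / means  # >1 boosts a deficient channel

def apply_white_balance(img, gains):
    """The gains act like per-channel exposure compensation; anything
    pushed past full scale here is unrecoverable - hence the advice to
    get the white balance into the right ballpark before capture."""
    return np.clip(img * gains, 0.0, 1.0)

# A warm (tungsten-ish) cast: the blue channel is strongly deficient.
scene = np.random.uniform(0, 1, (50, 50, 3)) * [1.0, 0.7, 0.4]
balanced = apply_white_balance(scene, gray_world_gains(scene))
print(balanced.reshape(-1, 3).mean(axis=0))  # roughly equal channels
```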

The confluence of exposure metering and autofocus
As if the whole metering thing wasn’t complicated enough, DSLR manufacturers have started to use the metering CCDs to aid autofocus – after all, it’s an additional source of information that can be used to help track subjects, especially when the mirror is down and the main imaging sensor is not available. The flow of information is two-way, and affects both autofocus and metering. The autofocus system uses the spatial and color information from the metering sensor to track subjects by color and location across the frame, especially if they move out of the coverage of the autofocus sensor array – the metering sensor always covers the entire frame. The exposure meter uses the autofocus information to determine which area in the frame is being focused on, and is presumably the subject, which the photographer presumably wants to have correctly exposed – in matrix metering mode, exposure is thus biased towards whatever is underneath the active autofocus point, or points.

I’m sure you can see there are a lot of presumptions involved. This of course means that the camera doesn’t always get either exposure or focus right when left to its own devices; the metering sensor may lack the resolution to distinguish between the desired subject and another similar-looking one, resulting in focusing errors; or the meter may be too heavily biased towards the area under the active focus point and thus yield erroneous exposures. A situation in which this might happen is, say, if your subject is much larger than the active focus area and of a different luminance value. Anything small and reflective almost always causes problems, too.

The bottom line is that it pays to take control of both your meter and focusing system: without this, you can never be fully certain of what your camera is doing; I seldom use auto-anything especially with DSLRs since they do not meter off the imaging sensor (unless in live view, of course).

To be continued in part two! MT


To process or not to process?

_RX100_DSC2614b composite copy
Ginza dusk.

This is the photographer’s analog of the classic fisherman’s dilemma: fish or cut bait?

I’ve always, for as long as I can remember being serious about photography, shot RAW and done some form of processing afterwards. The more potential the file had, the more processing; conversely, I’d also spend time trying to save files that probably weren’t compositionally worthwhile. And, as much as I hate to admit it, in the early days I tried to hide photographic mistakes behind punchy processing. In effect, the processing was taking center stage instead of the image. One of the hardest things to do is create a strong but natural-looking image – from both a perspective and a processing standpoint; in order for it to stand out well from reality, the light, subject and composition all have to be exceptional. The image has to tell a story – but that’s another topic I covered here and here.

Note: all images in this article are a half-and-half composite of Sony RX100 shots; the SOOC JPEGs are on the left half (especially obvious for the B&W images) and the processed RAW files on the right. The RAWs were converted to DNG first, then run through my usual workflow; CS5.5 doesn’t natively support the RX100. Where the finished file was cropped to a different aspect ratio, I’ve followed the finished file. Some noise reduction was done on the high ISO files. As usual, go by what I say and not what you see – there’s web compression involved in the mix, and you aren’t looking at the original files on a calibrated monitor.

_RX100_DSC2614 crop
And a 100% crop of the above – a huge difference here at the pixel level, but aside from tonal density, not much in it at web sizes, is there?

For argument’s sake, I’m going to assume that you’re able to see, compose and execute the image at a particular scene when pressing the shutter. For the purposes of this discussion, I’m going to exclude conceptual and commercial work – there is simply no way you can achieve some frames in a single shot, and it’s impossible to have a perfectly dust-free product in others – even if you can get the lighting perfect. We’re talking about creative, personal and documentary work only. The reality is that a lot of photojournalists and reporters never leave the JPEG – and some even the sRGB JPEG – realm as it is. There are many reasons for this; speed and throughput being the first, the display medium being the second – there’s no point in supplying a beautiful file that uses all 16 bits of the ProPhoto gamut if it’s going to be printed via a halftone CMYK process on newsprint. It’s a waste of time. (That of course doesn’t mean you can’t shoot both a JPEG and a RAW, and deal with the latter if you find yourself up for a Pulitzer.)

_RX100_DSC2172b composite copy
Giant chef. Either rendition works, frankly – I don’t mind the overall tonality of the SOOC image, even if it’s lacking the pop of the actual scene.

A recent email exchange with a hobbyist photographer friend has spurred me on to think about this topic a bit more. The question: is Photoshop really necessary? Shouldn’t a good image be able to stand on its own? Yes, but don’t good ingredients taste better when skilfully prepared and cooked? (The Japanese may disagree with the cooking part; I can’t blame them.)

To answer this, we need to backtrack a bit. In the early days of digital, JPEGs were simply not an option because cameras lacked the required processing power. Raw output was simply a direct data dump off the sensor itself; the data was stored (perhaps lightly compressed to save space) for later processing. JPEG file quality was simply unusable compared to the standard of the day: film. It didn’t have the tonal subtlety, the dynamic range, the detail; to make things worse, there was an inherently blocky “digital-ness” to it that made images feel, well, unnatural. RAW processing was a way to partially get around that and reduce the gap – we could alter the conversion/ output algorithm to create an image whose shadow and highlight response more closely matched that of film. It was also a way to overcome some of the limitations of early sensors; notably color response, chroma noise and tonal accuracy.

_RX100_DSC2103b composite copy
Imperial Palace East Garden, Tokyo. The SOOC image is too green – the needles definitely didn’t look like that after a warm summer and heading into November.

_RX100_DSC2103 copy
100% crop. There’s quite a lot more detail and tonal subtlety in the processed image.

Postprocessing my raw files is a habit I acquired when I started shooting with a DSLR in 2004, and one I haven’t shaken since. I don’t discount JPEG-only cameras, but I’d definitely take the availability of a RAW file into consideration when buying one. And I certainly don’t feel like an image quality evaluation is fair or exhaustive until I’ve run some RAWs through my normal workflow. However, recent experiences with first the Sony RX100 and, more recently, the Fuji XF1 have made me re-evaluate this: in fact, the XF1 has such good JPEGs and such crappy RAW files (perhaps ACR is also to blame here) that I don’t think I’d ever shoot RAW on this camera; but I will still postprocess the JPEGs. However, this is not a RAW vs JPEG debate; it’s something a bit more fundamental: for non-critical applications, is it still worth spending time processing or not?

_RX100_DSC2046b composite copy
What happened here is perspective correction – nope, you can’t do this in-camera. Not much to fix with tonality, though.

Yes, the processed RAW files clearly look better at the 100% level, but you’d have to make a 120cm wide print to really see that. And for practical viewing purposes, the only difference was in the overall tonality (and sometimes not even then) of the image, which could easily be fixed by altered JPEG output settings. Downsizing hides all manner of dirty pixel-level flaws. Could it be that I’d been creating some unnecessary work for myself for some time, and hadn’t realized it? It was the same case with the JPEGs from the Nikon D600 I was testing around the same time; they looked great at typical display sizes, but started to fall apart at the pixel level. (Before you worry that I might have gone all Hipstagram on you, note that I’m always open to finding new ways to balance image quality and throughput – and that includes shooting Ilford PAN F in my F2T and then ‘scanning’ the negatives with a D800E. I’m just saying.)

Suppose you weren’t super-anal about image quality, though. Suppose you didn’t pixel-peep, or print large. Suppose you shared your images online or at most made 8×12″ prints. Remember the points of sufficiency; if your display medium is going to be that severely limited, then the reality is you might not see much of a difference if you set your camera up correctly. The strength of the image and its contents is of course going to make more of a difference; the camera is just a tool and medium.

_RX100_DSC1961b composite copy
Facebook break from shopping. Near zero difference in tonality, just a bit of shadow recovery and perspective correction. Again, the SOOC JPEG would be fine for 99% of uses.

The basic reason for shooting RAW and postprocessing is that there is no one-size-fits-all for camera settings; it’s therefore impossible to have these baked into the JPEG development algorithms of a camera. Fair enough; however, these algorithms have been getting increasingly clever over time as processing power increases. (A lot of RAW files aren’t even composed of truly RAW data anymore, but that’s another topic altogether.) To push for perfection with every frame, there’s no way around RAW and postprocessing, end of discussion. You simply cannot have a camera that’s smart enough to recognize when some parts of an image need to be dodged and burned; the day that happens, I think I’ll retire as a photographer.

There are cases where the format-imposed limitations can actually force you to make stronger images – spot metering for subject or highlights can result in more powerful compositions and fewer distractions, especially when you have very contrasty lighting. Alex Majoli’s early work with the Olympus compacts is a good example of this. I frequently use this technique to strengthen the mood of an image, regardless of what camera I’m using.

_RX100_DSC2062b composite copy
Port of Tokyo. The AWB got this one horribly wrong; it could have been the window I was shooting through that threw things off.

_RX100_DSC2062 crop
100% crop. Not as much extra detail as you would have thought.

Once again: pick the right tool for the job, and that includes your file formats. I think what might be useful is a set of guidelines as to when each method is useful; even as a person who can run through an entire RAW workflow for a file (excluding heavy-duty retouching) in about 30 seconds, I’m considering moving to JPEG for some things. Firstly, I don’t need perfect files for everything; social/ personal/ family documentary etc. is one such thing. Secondly, I’ve been spending more and more time processing files as my workload increases and camera resolution gets higher; I simply don’t want to spend any more time in front of a computer than I have to. I’d rather be out shooting and meeting people. (An obvious solution would be to shoot less, but this somewhat defeats the point of being a photographer. And yes, I’m trying film again at the moment, too.) Client and professional work will always remain shot RAW, of course – there’s no point in going to the nth degree to ensure pre-capture image quality with the best lenses and supports, then throwing most of your tonal space away with a JPEG. And you never know what post-capture manipulations you might need to do later on.

The biggest downside of shooting JPEG is that your settings are pretty much baked in unless you’re willing to change them on the fly from scene to scene. (Some cameras offer bracketing for this, too.) In real terms, you have to make a conscious choice at the time of shooting whether you want high-key portrait color or low-key B&W. On top of that, you have to deal with limited dynamic and tonal range, and you have to get your exposure as close to perfect as possible in-camera. This is very different from shooting RAW with the aim of post-processing afterwards; in that case, you always expose to the right (and even clip highlights slightly) to maximize tonal range in the low-noise highlight and midtone portions of the image. My RAW files look flat and a bit too bright; this is normal, because matching exposure to the desired tonal map is a critical portion of the processing flow.

_RX100_DSC1931b composite copy
The camera got this one spot on – all I did was straighten out the perspectives a bit, and sharpen.

It’s important to note that you need to spend some time figuring out what the best JPEG settings are for your camera and shooting style; the rest of this article is meaningless if you’re shooting with the wrong settings. And regardless of whether you JPEG or not, I would always shoot RAW+JPEG – you never know when you might need the file later. Storage is cheap; do-overs are often impossible.

When it might be best to use a straight-out-of-camera JPEG (or film + minilab):

  • When file quality is secondary; anything intended for Facebook or social media, for example. These distribution methods compress the hell out of the images, strip color information, and then, to make things worse, viewing is almost always on a non-calibrated device. You can spend all the time in the world tweaking, but it’s going to look like crap if the display can’t render the required color.
  • In very high throughput situations, like sport or news or reportage. And dare I say it, wedding factories.
  • I’m cringing as I write this, but if your camera has a style preset you particularly like (and are okay adopting as your own style) then go right ahead…so far, I haven’t seen anything that fits the bill personally.
  • When you don’t have the time. If I go away for a week, I’ll shoot an average of 1,000-1,500 images a day; of these, perhaps 100 a day will be saved to review on a computer later. I’ll throw away another half, but that still leaves me stuck with 350 images to process. At say 1 min per image, that’s around six hours. A lot of what I shoot is documentary/ observation/ personal, and these don’t need processing. I am now being even more critical with my editing, but it’s still a lot of time to carve out when you don’t have any spare to begin with.
  • If you enjoy photography but don’t want to deal with the hidden back end that comes with it – the computers, the storage, ensuring you have enough power to run photoshop and that your converters are up to date…the list is endless.
  • What you see is pretty much what you get: if you’re learning, it’s much easier to see the effect of exposure or setting changes. With RAW, you have to use experience to visualize what you can get. This is probably the most common stumbling block I see amongst my students who are just starting to discover the power of Photoshop.
  • If your camera puts out lousy RAW files but amazing JPEGs – the XF1 is a great example of this.

_RX100_DSC2392bw composite copy
I’ve never used SOOC B&W before, though I doubt it would have been able to retain the slight tonal variation in the man’s trousers. I had to do quite a bit of dodging and burning to bring that out.

And in favor of RAW + Photoshop (or self-developed and printed film):

  • No question: when image quality is the first priority.
  • When you want to do something that can’t be done in camera; compositing, for instance.
  • When you don’t have many images to process
  • When you have no choice – either the JPEGs are crappy, or there are no JPEGs at all…
  • When you have to do perspective correction (and don’t have a tilt-shift lens).
  • In extreme lighting situations that can’t be handled out of camera.
  • When you need to make tonal/ exposure changes to part of the image only, and not the whole image; this is where dodging and burning comes in.
  • If you’re a control freak…

But wait, there’s a middle ground:

  • You can shoot RAW but batch process; I think Aperture and Lightroom are good examples of this halfway house. The problem I see is that you’re spending nearly as much time as a full-blown individual Photoshop workover, but without the same control or output quality. And this somewhat defeats the point. That said, I do keep presets for various things – usually to do with color calibration for flash work or for certain types of lighting or cameras, or high ISO situations.
  • The other option is to have the camera output a very neutral JPEG and postprocess that; you can skip the RAW conversion step (although it is possible to open JPEGs in ACR and make the same adjustments – but not with the same latitude, of course). This actually frees up quite a bit of time; that minute can get down to 15 seconds or less if you have a fast computer – dodge and burn, curves, color correct, sharpen, save. And it does of course help that the files are much smaller, too. This is actually not a bad option – whenever I review a camera without RAW support, this is the method I use – but if you start to do any extreme tonal manipulations to the files, it will become obvious, especially at the pixel level. (See the sketch below for how mechanical this quick pass can be.)
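
To give a feel for that 15-second pass, here’s a minimal sketch using Pillow – my choice of library for illustration; the folder names, curve and sharpening amounts are placeholders, not my actual recipe:

```python
from pathlib import Path
from PIL import Image, ImageFilter

def quick_pass(src, out_dir):
    """Neutral-JPEG halfway house: gentle midtone contrast bump,
    unsharp mask, save at high quality."""
    img = Image.open(src).convert("RGB")
    # A mild contrast curve as a per-channel lookup table:
    lut = [min(255, max(0, round(1.15 * (v - 128) + 130))) for v in range(256)]
    img = img.point(lut * 3)
    img = img.filter(ImageFilter.UnsharpMask(radius=1.5, percent=80))
    img.save(Path(out_dir) / Path(src).name, quality=92)

out = Path("processed")  # hypothetical folders
out.mkdir(exist_ok=True)
for f in Path("neutral_jpegs").glob("*.jpg"):
    quick_pass(f, out)
```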

_RX100_DSC1925b composite copy
Standing nap. Yes, it’s two files. Look carefully.

_RX100_DSC1925 crop
100% crop. More noise, a smidge more detail, and slightly smoother tones; not a lot in it. And that’s with me looking at the full-size files on my calibrated monitor, not the web crop; there’s even less difference here.

My currently preferred JPEG settings

Important note: currently, I’m using my JPEGs as preview images or for quick client contact sheets after a job. When shooting RAW-only, the JPEG settings apply to the preview image but not the RAW file – so I generally try to make them as representative as possible of the tonal range I can get out of the RAW file, i.e. flat and not saturated, but sharp. If I was shooting JPEG-only, I’d run similar settings with the anticipation of doing some light processing work on them afterwards – the halfway house.

  • My current style is neutral and natural; I look for or create light first and foremost (though the latter doesn’t apply here; if I’m creating light, I’m also shooting RAW to maximize image quality). I want to retain a decent amount of the tonal range; however there’s no way to control the output curve for most cameras, which results in large dynamic range images appearing very flat. This means contrast is set to the lowest option, or close to it – depending on the camera.
  • It’s very easy to have individual color channels blow out; thus, reduce your saturation a notch or two.
  • Sharpening is a mixed bag; some cameras do this well, some don’t. But I don’t use zero sharpening, as a lot of the time this setting does actually affect the in-camera RAW conversion and the amount of detail extracted. It also helps you to determine if critical focus was achieved – I’ll usually run somewhere between neutral/ default and maximum.
  • White balance is on auto, but I will override it where necessary to avoid blown highlights.
  • Maximum quality and size, of course – with an extra RAW file saved, too.

There are still many reasons to shoot RAW – and even exclusively RAW – but I can’t help but feel those are eroding slightly; and for the vast majority of users – even serious hobbyists – it might not be necessary all the time. Admittedly, my main reason for revisiting this topic is in the interests of curiosity and efficiency; I don’t think there’s as much of an image quality penalty as there used to be, especially for smaller output sizes. I haven’t decided just how much SOOC JPEG I’m going to use at this point – edited JPEGs are probably as far as I’m going to go – but you can be assured that I won’t use it at all until I feel that I’m getting the image quality I want. And in any case, I can still apply my normal workflow to the JPEGs – the tradeoff is significantly shorter processing time against a bit less latitude. Moral of the story: get it right out of camera; if it’s not there, you’re not going to be able to add it in afterwards. MT


Understanding autofocus, and tips for all cameras

_7048556 copy
Levitation (and prefocus). Nikon D700, 85/1.4 G

A side effect of the ever-increasing resolution of today’s cameras is that autofocus must necessarily get more precise, too. The Nikon D800/ D800E issues have shown that even a small misalignment or miscalibration in the focusing system can effectively cripple the camera, leaving it resolving at a far lower level than it would be capable of under ideal circumstances. Short of using manual focus and magnified live view for everything – I would still recommend doing this for critical work, and I do it under any controlled lighting situation, since I’m more likely to have the time and be using a tripod – it is highly beneficial to pay closer attention to exactly what is going on when the camera acquires focus.

_7038950 copy
A situation where fast, reactive autofocus can help. Nikon D700, 300/4 D

For DSLRs, SLTs and some mirrorless cameras (the Nikon 1s and Sony NEX-5R and NEX-6), a phase detection system is used. This involves taking some of the light from the subject area, passing it through a beamsplitter and comparing the difference in phase of the output; a CCD is used to measure light intensity as a function of position, and the lens is moved until light from both arms of the beamsplitter is coincident upon a single point. This entire module constitutes the AF sensor array that’s either located at the bottom of the mirror box (DSLRs) or embedded in certain specific photosite locations (mirrorless cameras). If you select a specific AF point, then the camera uses only the sensors corresponding to the location of that point; if you let the camera pick, it will usually sample all points to find which is the closest subject covered by the AF sensor array, and focus on that.
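To make the idea concrete, here's a minimal sketch in Python – purely conceptual, using numpy cross-correlation as a stand-in for whatever the AF module actually computes; none of this is any manufacturer's firmware:

```python
import numpy as np

def phase_offset(profile_a, profile_b):
    """Estimate the displacement between the two images that the
    beamsplitter arms cast onto the AF sensor's paired pixel strips."""
    a = profile_a - profile_a.mean()
    b = profile_b - profile_b.mean()
    corr = np.correlate(a, b, mode="full")
    # Re-centre so that 0 means the two images are coincident (in focus)
    return corr.argmax() - (len(profile_a) - 1)

# Toy scene: a single bright feature, displaced by 4 samples in one arm
x = np.arange(128)
scene = np.exp(-0.5 * ((x - 64) / 6.0) ** 2)
print(phase_offset(scene, np.roll(scene, 4)))  # -> -4
```

The sign of the result tells the camera which way to drive the lens, and the magnitude roughly how far – which is exactly why a healthy phase detect system doesn't hunt.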

_3035242 copy
En masse. Nikon D3, 70-300VR

Phase detect autofocus is fast and generally does not require racking the lens back and forth – otherwise known as 'hunting' – because the sensor can tell whether the light is positively or negatively out of phase, and thus in which direction to move the lens to bring the light coincident and achieve focus. The precision of focus depends on several factors: firstly, the resolution of the AF sensor; secondly, the alignment of all the secondary optics involved in transferring light to the AF sensor (specifically, the main and submirror assemblies, plus any microlenses involved); thirdly, the alignment of the AF and imaging sensors (both must be perfectly perpendicular to the lens mount); fourthly, any calibration data the system requires to establish a perfect zero or null position; and finally, the ability of the lens' focusing groups to move precisely in small increments while maintaining perfect alignment with the optical axis.

_8013045 copy
Turning on the inside. Nikon D800E, 28-300VR

Focusing with wide angle lenses is generally less precise with this method because the differences in phase are much smaller; to complicate things, the lens itself may have optical limitations in its design – field curvature, coma and so on – all of which can send misleading data to the AF sensor, resulting in incorrect focus. It also doesn't help that subjects tend to be a lot smaller and don't fill the AF boxes completely. (It's also worth noting at this point that the AF boxes themselves indicate where the sensor grid lies, but there's no documentation covering precisely where the active areas are located. For greater precision, perhaps the markings should be crosses instead of boxes.)

_3030456bw copy
Precisely as fast as a speeding bullet. Nikon D3, 24-70

For moving subjects, phase detect systems either continuously change the focus distance based on the instantaneous phase information received at the AF sensor, or employ a predictive algorithm and multiple focusing points to track the subject. The most sophisticated systems also use information from the metering sensor to track the subject by color. None of these systems is infallible; all can be fooled by objects of similar color or larger size coming between the camera and the subject – for instance, if your subject happens to duck behind something. Although the processing power and sophistication of these systems have increased significantly over the past few years, I have yet to see an autofocus system that can track an erratically moving subject with 100% reliability – especially if it leaves the area of the frame covered by the autofocus sensor array.
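To give a flavor of what 'predictive' means here, below is a deliberately simple constant-velocity tracker – a generic alpha-beta filter, purely illustrative; real cameras use far more elaborate, proprietary algorithms. The camera maintains estimates of subject distance and speed, refines them with each AF reading, and aims the lens where the subject will be when the shutter actually fires:

```python
def predictive_focus(readings, dt, shutter_lag, alpha=0.5, beta=0.3):
    """Alpha-beta filter over noisy subject-distance readings (metres);
    returns where to focus for an exposure `shutter_lag` seconds later."""
    dist, speed = readings[0], 0.0
    targets = []
    for z in readings[1:]:
        predicted = dist + speed * dt          # predict one sample ahead
        residual = z - predicted               # compare with the measurement
        dist = predicted + alpha * residual    # correct the distance estimate
        speed += beta * residual / dt          # correct the speed estimate
        targets.append(dist + speed * shutter_lag)  # lead the subject
    return targets

# Subject approaching at 3 m/s, sampled at 30Hz, with 50ms of shutter lag:
readings = [10 - 3 * n / 30 for n in range(10)]
print(predictive_focus(readings, dt=1/30, shutter_lag=0.05))
```

Note how everything hinges on the residuals staying small: a subject that ducks behind something produces a huge residual, and the estimates – like the real systems described above – go astray.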

_7032151bw copy
Cyclist, Nepal. Nikon D700, 24/1.4

I’m sure you can now see why the challenge of achieving perfect focus gets more and more difficult as sensor resolution increases: if any one of these is out of tolerance by a very small margin, you’re not going to have a sharp image.

Most mirrorless/CSC cameras, compact fixed-lens cameras and DSLRs in live view use a much simpler method of focusing: contrast detection. This involves moving the focus point of the lens back and forth to test which direction delivers the highest contrast; the camera then iterates this process until the highest contrast is achieved. Although hunting has been minimized in the latest generation of contrast detect cameras, some racking back and forth is still necessary, simply because the camera has no way of knowing in which direction to move the lens. Because of this, contrast detect autofocus will always be slower than well-implemented phase detect autofocus, all other things being equal. However, it will also be more accurate, simply because the imaging sensor itself is used to determine the point of optimal focus, and there are far fewer potential issues with tolerances and alignment of components.
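At its heart, contrast detection is a hill-climbing search. Here's a minimal sketch of the idea – `SimLens` and the contrast function are hypothetical stand-ins, not any camera's actual API:

```python
class SimLens:
    """Stand-in for a focusing group: it just tracks its own position."""
    def __init__(self):
        self.pos = 0.0
    def move(self, delta):
        self.pos += delta

def contrast_detect_af(lens, measure_contrast, step=1.0, min_step=0.05):
    """Hill-climb: nudge the lens, re-measure, reverse and refine
    whenever contrast starts to fall."""
    best = measure_contrast()
    direction = +1                        # unknown at start -- hence the hunting
    while step >= min_step:
        lens.move(direction * step)
        current = measure_contrast()
        if current > best:
            best = current                # still climbing: carry on
        else:
            lens.move(-direction * step)  # overshot: back up,
            direction = -direction        # reverse,
            step /= 2                     # and halve the step to refine
    return lens.pos

lens = SimLens()
true_focus = 3.2                                 # simulated focus position
contrast = lambda: -abs(lens.pos - true_focus)   # peaks exactly at focus
print(contrast_detect_af(lens, contrast))        # -> 3.25, close enough
```

The initial move in an arbitrary direction, and the back-up-and-reverse steps, are the hunting you see in the finder; phase detection skips all of this because it knows the direction (and roughly the distance) from a single reading.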

_780_IMG_2312bw copy
Point of view. Canon IXUS 100 IS

It is also worth noting that sensor size plays an important part in determining just how fast contrast-detect autofocus can be: larger sensors have shallower depth of field for a given field of view and aperture, requiring more movement of the focusing groups within the lens to determine where the point of highest contrast (and correct focus distance) lies. This is especially noticeable when comparing a compact camera to a DSLR; compounding it, small sensor cameras require much shorter real focal lengths to achieve the same angle of view, which extends depth of field further still and demands less focus precision, because any small errors are covered up by that extra depth of field. The slow focusing of DSLRs in live view mode is not due to the lens' focusing motor speed; the same combination is often capable of delivering blazingly fast results with the regular phase-detect system.
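To put rough numbers on that, here's a back-of-envelope comparison using the standard thin-lens depth of field approximation – the specific focal lengths and circles of confusion below are illustrative assumptions, not measurements:

```python
def total_dof_mm(f_mm, N, coc_mm, subject_mm):
    # Thin-lens approximation, valid well short of the hyperfocal distance:
    # DoF ~ 2 * N * c * u^2 / f^2
    return 2 * N * coc_mm * subject_mm ** 2 / f_mm ** 2

u = 2000  # subject two metres away, f/4 on both cameras
# Full-frame DSLR: 28mm lens, circle of confusion ~0.030mm
print(total_dof_mm(28, 4, 0.030, u))  # ~1225mm -- just over a metre
# Small-sensor compact at the same angle of view: ~5mm lens, CoC ~0.005mm
print(total_dof_mm(5, 4, 0.005, u))   # ~6400mm -- more than six metres
```

Five-odd times the depth of field means the compact's contrast detect search can stop far sooner – and any residual error is invisible anyway.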

_M224792 copy.jpg
The instant of terror when you’re base jumping and not sure if your chute is going to open or not. Nikon D200, AI 500/4 P

There’s one added method that used to be common in older cameras, but is now only to be found on some of the Ricoh compacts: active phase detect. This uses an infrared beam to light the subject, and the reflected light is measured by two phase detection sensors on the front of the camera to assist the contrast detect system. It can greatly speed things up, but range is limited because it requires active illumination from the camera – and the power of these secondary lights is always limited.

Now that you have some understanding of how autofocus systems work, let’s talk about some tips to maximize the accuracy and speed of your camera.

_M80_DSC3155 copy.jpg
Through the flood. Nikon D80, 17-55/2.8

All cameras

  • Don’t let the camera pick the focus point for you. Unless you are shooting and erratically moving subject which you cannot follow manually selecting the focus point; always shoot in single point mode and pick your focus point carefully to be over your subject. Many cameras also weight the metering in favor of the focus point; it is therefore important to ensure that it corresponds with your subject – it is almost always what you would want to have correctly metered anyway.
  • Make sure your subject is larger than your focus point. If it isn't, you need to either move closer (which also becomes a compositional issue) or focus on something at the same distance that presents a larger target.

Phase-detect cameras (DSLRs, Sony NEX-5R, NEX-6, Nikon 1)

  • The camera will always focus on the closest object underneath the focusing point. It may sometimes be fooled by a higher-contrast structure – for example, a barcode instead of the blank piece of paper immediately behind it – but in general it will pick the closest subject, provided it completely covers the focusing point.
  • High-contrast subjects (again, like barcodes) make ideal autofocus targets. It is also worth noting that some autofocus points are sensitive to detail in one direction only – horizontally or vertically, but not both. (Cross-type points are sensitive to detail in both directions, but these are generally found only at the center point, or distributed across the AF sensor array only on high-end cameras.) It is therefore important to find a suitable target for your camera – a QR code rather than a barcode, I suppose.
  • Use continuous autofocus, unless you are shooting a static object with the camera on a tripod. Any small motion of either you or the subject can be enough to move the plane of focus away from the intended point; this is especially critical with fast, shallow depth-of-field lenses. With continuous autofocus, the camera is focusing right up to the point of image capture. The one exception is slow or wide-angle lenses. Smaller-format cameras are a bit less sensitive to this issue because they have more depth of field for a given angle of view, which tends to compensate for any errors in the focusing system.
  • Try to avoid focusing at the center and recomposing your image where possible, because there are potential issues with field curvature – especially at the edges and corners of wide angle lenses. Use the autofocus point that is either directly over your subject or closest to it in order to minimize any potential issues with the lens’ design.
  • Assign a button to locking focus (AF-L) to use in conjunction with continuous autofocus; this saves you having to switch to single autofocus for static subjects. Alternatively, decouple focusing from the shutter button by assigning an AF-ON button that activates focusing when pressed. I don't use this method, as it requires you to press two buttons to shoot; I prefer to minimize the number of controls that must be attended to, especially in fast-moving situations.

IMG_0940b copy
Panning in the rain. Apple iPhone 4.

Contrast detect systems (DSLRs in live view, compacts, CSCs, mirrorless cameras)

  • Once again, do not let the camera pick the focus point for you; select it yourself. If anything, cameras that use contrast detect systems tend to be far more flexible in where you can put your focusing point; this is because they use the entire area of the imaging sensor.
  • Avoid continuous autofocus. This seems counterintuitive in light of my advice for phase detect cameras, but continuous autofocus on a contrast detect system is constantly hunting back and forth around the point where it expects the subject to be; imagine a car trying to follow a curve that the driver can't see until it's almost immediately in front of him – the path (here, the focusing distance) will be erratic and won't match the curve exactly. This tends to result in a very low hit rate. It also helps that contrast detect cameras tend either to have an alternate system to deal with moving objects (in the case of DSLRs), or to employ much smaller sensors that are very forgiving of minor focusing errors or changes in subject position thanks to their extended depth of field.
  • If you're tempted to use continuous autofocus because your subject is moving, there are two alternatives. The first is to set your camera to maximum contrast (for obvious reasons) – the live view image is usually a preview of your current camera settings and will match the JPEG output; if you're shooting raw, your file will not be affected by in-camera processing. The second is an old trick from the days of manual focus photography: 'trap focusing'. First, decide on your composition and where your subject must go in order to complete it; ensure your shutter speed is high enough to prevent motion blur of the subject (a rough way to estimate this is sketched after this list); finally, choose single autofocus, prefocus the camera at that position, and release the shutter when the subject is in the intended position. One added advantage of this technique – especially for compact cameras – is that it significantly reduces shutter lag, to the point where it is very easy to release the shutter at the precise moment you intend. Note that if you cannot get a high enough shutter speed, you will need to pan with the subject so that only the background blurs and the subject stays sharp; this combines panning and trap focusing, works best when the subject is moving across your field of view, and is pretty useless if the subject is coming towards you.
  • Some cameras have a continuous pre-focus or full-time autofocus option that is always adjusting the lens based on whatever subject happens to be under the focusing point at the time. This is generally a good option if you absolutely must reduce shutter lag and are unable to pre-focus. However, note that the system can also be fooled, most notably by moving the camera around rapidly – especially if you are not pointing it at anything in particular. It is also an enormous drain on (usually already short) battery life because the lens’ focusing groups are constantly in motion so long as the camera is switched on.
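On the question of a 'high enough' shutter speed for trap focusing: the blur a moving subject leaves behind is simply its speed multiplied by the exposure time, scaled into pixels. A quick sketch, with illustrative numbers of my own choosing:

```python
def blur_px(subject_speed_m_s, shutter_s, frame_width_m, image_width_px):
    # Distance travelled during the exposure, as a fraction of the
    # field of view, scaled to sensor pixels
    return subject_speed_m_s * shutter_s / frame_width_m * image_width_px

# A brisk walker (~1.5 m/s) crossing a 3m-wide field of view,
# captured on a 4928px-wide sensor:
print(blur_px(1.5, 1/125, 3.0, 4928))   # ~20px -- visibly smeared
print(blur_px(1.5, 1/1000, 3.0, 4928))  # ~2.5px -- effectively frozen
```

If the first number is what you're stuck with, that's when you switch to the panning-plus-trap-focus combination described above.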

It is worth practicing all of these techniques until they become second nature; you'll be surprised by the increase in your keeper rate as well as the improvement in acuity and sharpness at the individual pixel level. This is just one of the many elements of shot discipline, which is critical to achieving the highest possible image quality from your camera. You'll also be surprised at just how much more responsive your camera seems to have become. MT


Objectively critiquing images: a primer

_PM03858 copy
The ephemeral idea of sushi. Does this image work? Why? Why not? Read on to understand and come to your own conclusions – leave your thoughts in the comments, and let's start a discussion. For the original essay featuring this image, click here: Sushi, and the philosophy of photography

A reader sent me a great email a couple of weeks back with some suggestions on how to improve the reader Flickr group.

Since inception, it now has 400+ members, tens of thousands of submissions, about 2,500 that have made the cut – and continues to grow every day. Whilst you do get some indication of what constitutes a good image and what doesn’t based on my acceptances and rejections, it doesn’t really provide a structure for objective critiques and feedback from a wide audience – something I’d always wanted to have. Unfortunately, the infrastructure of Flickr isn’t that conducive to this – there’s no real way to tell which comments were left by a member of a particular group without having badges etc. What I propose instead is that anybody who wants to solicit feedback on an image posts it in a new thread on the attached discussion board; if you’d like to weigh in, go ahead – but remember to be objective and civil. (If the volumes get silly, then we’ll deal with it later.)

This brings me to the second problem: what is objective? How does one deliver an objective critique? Hell, what do you even look for in the first place? How do you set a benchmark, and what do you compare it to? The aim of this article is to cover these bases and provide a structured, simplified assessment and critique framework. Its usefulness goes beyond the Flickr pool comments, of course: it's also a quick way for you to assess your own images on the fly. (The challenge there is stepping away from the personal attachment that every photographer has to their photos – they're like our children – and learning the art of detachment.)

First up, if you haven’t already read my article on What Makes An Outstanding Image, I highly recommend you do so first and then come back here afterwards. Part one is here, and part two is here. (Both open as links in new windows.)

Boiling everything down, there are only four things I look for in every image. The first three are fundamentals. The last one is a bonus. (In fact, I’ve said these things so many times at so many events and workshops that I wouldn’t be surprised if somebody decides to engrave them on my tombstone.)

1. Light
Every photograph needs light; no light, no photograph. Fantastic light can transform the most pedestrian subjects, and vice versa. I’m looking for light that isolates the subject, that shows off its textures and physical form and lines in a (preferably) unique way; a color temperature that’s either perfectly natural and accurate and puts you into the scene, or a color temperature that’s artfully shifted to elicit an emotional response in the viewer in a cinematic sense. The subject doesn’t have to be the brightest thing in the frame, but it has to be the most obvious.

2. Subject
Subsequent items get more nebulous and harder to define. In short, the subject is what the image is about. It can be a small part of the frame, or the entire frame itself; it can also be the idea. Basically: a viewer should be able to look at the image and know straight away what the focus is; who is the protagonist in the story? Timing is also a key element that affects both subject and composition – both positioning and expression. Abstracts are a little more difficult to assess, because they may not have a focus per se. In such cases, is the frame sufficiently well abstracted that you lose the sense of relativity and scale that provides the normal visual cues for identification of an object?

3. Composition
I like to think of composition as the way the elements within a frame relate to each other. It's to do with positioning, balance and context: are the secondary subjects positioned in such a way as to give priority to, and not take focus away from, the primary subject? Do the secondary subjects enhance the story, or detract from it? How are they relevant to the main subject? Would the image be stronger with or without them in the frame? Are any of them distracting? Next is balance; this is even tougher to define and probably deserves an entire article in itself. In short, it isn't symmetry, but it is about geometry. Are there things that make one side of the frame look heavier than the other? That isn't necessarily a problem, but anything that draws the eye in a particular direction – leading lines, for example – should do so in a way that supports the primary subject. Natural frames can also be used to help isolate the primary subject. You've also got to look for things that are distracting and not meant to be in the composition – edge and border intrusions are perhaps the most common example.

4. Bonus: the idea.
This is the hardest to define of them all. In its most concise form, does the viewer share the vision the photographer had in mind when he or she pressed the shutter? Note: it’s tough to communicate an idea if there wasn’t one to begin with, or it wasn’t well-formed in the photographer’s own mind. In fact, this is perhaps the toughest part in making a good photograph: you need to know what the final image should look like even before you take the shot. The best photographers do this consciously all the time; I know that if I can’t get what I want, I usually won’t bother taking out the camera. A lot of the time it’s because I don’t have control over the light, or because it’s not being cooperative; sometimes it’s because of technical limitations – I physically can’t get close enough, or I’m not carrying the right lenses to get the perspective I want, for instance.

On this basis, an image that scores a 2 is reasonably strong, but perhaps lacking in one or two areas. Grade 3 images are excellent, and grade 4s are outstanding. Of course, there's more to it than that, but at least you could say something along the lines of '3, composition is a bit loose around the edges of the frame', and it would be implicit that the other aspects of the photograph are strong. In the reader Flickr pool, I don't admit anything less than a 2.5, or a 2 if the idea is very strong. There are a good number of 3s, but very few 4s. It might be an interesting exercise for you to go through the pool of images again to see what qualifies.

Of course, this is all relative, and that's why it's important to view and consciously assess as many images as possible to get an idea of what works and what doesn't; that was one of the reasons for setting up the Flickr pool. There's a lot to be learned from looking carefully at famous images: there's a reason why they work, even if some aspects of the capture may be weak. It's almost always because 'the idea' is extremely strong, to the point of overshadowing and dominating any potential shortcomings. (Robert Capa's Normandy landing series is a fantastic example of this.)

Here’s the proposal – if you’re going to start a thread in the Flickr group putting your image up for critique, then give it a number (rating) – objectively, of course – and talk about what you think is missing, or what you think is exceptionally strong. That provides a good basis to begin discussion.

Even if you don’t put your images up for critique, keeping this framework in mind when viewing and assessing your own images can help immensely: you will land up with a much stronger raw material, and more times to postprocess them – which of course in turn results in an even stronger final set of images. Iterating this process has two positive consequences: firstly, you land up making ever stronger images, and not being tempted into keeping ‘not bad’ images; secondly, you will find you have a heightened cognisance of your own artistic style. This is of course a good thing – and one that’s extremely difficult to achieve. In the end, the greeks had it right: know thyself. MT


Experiments with street photography and motion

_5013173 copy

This series of images was captured around dusk in Shinjuku, Tokyo, during my last workshop. While my students were off completing their final assignment, I decided to challenge myself to capture the feel and essence of the place in a different way to what I would normally have done. (After all, it wouldn't be fair for me to put my students outside their comfort zone by insisting on the importance of a central idea or theme in their assignment images if I couldn't deliver one myself, would it?)

_5012881 copy

At the same time, I'd been feeling a little creatively stagnant of late, and wanted to force myself to do something different anyway. Having your own style is good, but that style has to evolve and grow in order not to get stale or boring. One of the things I'd been doing a lot of lately is jacking my shutter speeds up very high to ensure I was getting every last pixel of resolution out of the new cameras; whilst this made for great definition under most circumstances, that crispness of capture doesn't always suit the theme you're shooting to.

_5013351 copy

The idea I decided to follow for this series was flow – people as water, life as transient, a moment being more than a moment and altogether insufficient to capture the sheer volume of activity going on around me. It's a very strong impression I got simply by standing in place and watching life move around me – people simply didn't stop, torpedoing from location to location with some objective in mind, dispatching that objective, then moving on to the next one. (I'm guilty of this at times too; it's a consequence of running your own business. Perhaps this experiment was as close as my subconscious was going to get to forcing me to slow down and smell the roses.)

_5013736bw copy

The only two ways I could see of communicating this idea were either to have a huge number of people lining streets and thoroughfares so as to appear as a continuous mass (there were a lot of people, but not that many – and in any case there was no way of achieving that vantage point), or through the use of motion blur – not a little bit, of the kind that appears at 1/30s with people walking, but something altogether more abstract. In hindsight, this would have been very easy to accomplish with a tripod, but I didn't have the foresight to pack one in – much less bring one on the day. Even a mini-pod or a Gorillapod would have been useful.

_5013181bw copy

Instead, I was forced to test the stabilizer of the OM-D to its limits: even with something to brace against (and sometimes without), I needed shutter speeds in the 1/2s-1/5s range to achieve the effects I was looking for. Needless to say, you can only do this when the sun is going down. To give myself a higher chance of success, I used the 12/2 for most of these shots, and shot in continuous high burst mode – not for the frame rate, but because keeping my finger on the shutter button minimized camera shake and left only short intervals between frames. When I had to shoot using the LCD instead of the EVF, I would pull the neck strap tight to tension the camera against my neck and hopefully reduce shake – a technique that's actually surprisingly effective. In hindsight, I should have used the self timer + burst function to completely eliminate finger-induced shake.

_5013468 copy

One of the things with this kind of photography is that you really don’t know exactly what you’re going to get until you get it; there may not be enough motion, or too much, or you might have streaks in the wrong part of the frame; all you can do is do a lot of takes until you get the right one.

Compositionally, the most important thing to remember when involving motion in your shot is that there must always be some clearly static, sharp object in the frame to serve as a visual anchor for the composition; if this is missing, the photograph just appears blurred or out of focus, without the directionality implied by motion blur. In fact, having a large number of people moving through the frame is somewhat reminiscent of the energy of strong, dynamic brush strokes in a painting. I like the idea of abstracting the people out of the scene, and the contrast between the animate and inanimate. For these images, I chose the visual anchor first, then imagined where I'd want my flows of people to go; needless to say, a lot didn't work out because I didn't have enough people moving close to the camera – a foreground is of course a necessity when using a wide-angle lens.

_5013253 copy

I did use the 45/1.8 for some of the images, but this proved extremely challenging, as the lower practical limit for handholding a 90mm equivalent was somewhere in the 1/10s range on the OM-D – fractionally faster than the speeds I needed for the desired effect. Still, I did manage to get lucky a couple of times, with both very stable shots and convenient things to lean against. I also tried some less conventional techniques – panning blur, and combining a static frame with abrupt motion of the entire camera to impose an impression of chaos whilst maintaining some semblance of a visual anchor. Overall, though, I'm pretty happy with the results. Notes for a future experiment: I'd love to try this with a tripod and a longer lens. MT
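Incidentally, that 1/10s figure squares roughly with the old reciprocal handholding rule once the stabilizer is credited – assuming, purely for the sake of the arithmetic, about three stops of benefit (my guess, not a measured figure):

```python
focal_eq = 90                          # 45/1.8 on Micro 4/3 = 90mm equivalent
unstabilized = 1 / focal_eq            # ~1/90s by the 1/focal-length rule
stops_of_stabilization = 3             # assumed in-body IS benefit
print(unstabilized * 2 ** stops_of_stabilization)  # ~0.09s, roughly 1/11s
```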


_5013236 copy

_5013681 copy

_5013109 copy

_5013834 copy
