What defines small/medium/large formats, anyway?


Originally published by yours truly in the April issue of Medium Format Magazine.

The use of any nomenclature of size already implies some degree of relativity. If a cellphone sensor is ‘small’, then arguably even APS-C might be considered ‘large’. Yet there is a legacy expectation that medium format necessitates a recording area of at least ‘645’ (in itself misleading, usually being a bit smaller than 60x45mm at around 54x40mm or so) or larger. At some point – usually 4×5” – this becomes ‘large’ format. The digital sensor size of 44x33mm has challenged this somewhat, being much cheaper to produce than 54×40 (as low as a quarter of the price, due to finite wafer sizes, yield rates, etc.) whilst still offering about 68% more area than 36x24mm ‘full frame’.

In general, we find that the steps between sensor sizes are usually somewhere in the region of 70-100% more area. However, in practice and given identical pixel pitch, doubling total pixel count only increases linear resolution by 41%. To resolve structures half the size, you need double the linear resolution, which means four times the pixel count. This tends to mean in practice you need to go up two sensor sizes to really see a meaningful difference in rendering, resolution, etc. – i.e. from M4/3 to FF, APS-C to 44×33, FF to 645. (There are of course differences in the intermediate steps, but they’re much harder to see and to justify the cost of entry for.)
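
To make that arithmetic concrete, here is a minimal sketch in Python. The sensor dimensions are approximate (they vary slightly by manufacturer, and ‘645’ digital is taken as 54x40mm); it simply works out the area step between adjacent formats and the corresponding gain in linear resolution at constant pixel pitch, and then the two-step jumps where the difference actually becomes visible.

import math

# Approximate recording dimensions (mm) for the formats discussed.
formats = {
    "M4/3":        (17.3, 13.0),
    "APS-C":       (23.6, 15.7),
    "full frame":  (36.0, 24.0),
    "44x33":       (44.0, 33.0),
    "645 digital": (54.0, 40.0),
}
areas = {name: w * h for name, (w, h) in formats.items()}
names = list(areas)

# Adjacent steps: area grows a lot, but linear resolution (at identical
# pixel pitch) only grows with the square root of the area ratio.
for a, b in zip(names, names[1:]):
    step = areas[b] / areas[a]
    print(f"{a:11s} -> {b:11s}: {step:.2f}x area, {math.sqrt(step):.2f}x linear resolution")

# Two-step jumps are where the difference becomes obvious.
for a, b in [("M4/3", "full frame"), ("APS-C", "44x33"), ("full frame", "645 digital")]:
    step = areas[b] / areas[a]
    print(f"{a:11s} -> {b:11s}: {step:.2f}x area, {math.sqrt(step):.2f}x linear resolution")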


But since this whole notion of a certain size being a certain label is artificial by definition – where does that place our strange aspect ratios (XPAN 65x24mm, for example, or 6x17cm) – are those small or medium format, medium or large format? How does one compose – to match the smaller format, or the larger one? What if we were to combine multiple frames – easily done with digital? An iPhone’s sensor may only measure 6x9mm or so, but if you do a swept panorama – you land up with anything up to 9x24mm or more, changing the game significantly.

The same of course applies to larger formats – FF becomes medium format with just two frames (a little under 36x48mm or 24x72mm, allowing for overlap) and medium format gets pretty close to being large (though you may need four or more frames). Yet we still recognize stitched images as being the product of the smaller format. The converse is also true: to get a 16:9 aspect ratio image from a 44×33 sensor, significant cropping is required: 44×24.75mm is left, or 1,089 square millimetres – not much more than 36×24’s 864 square millimetres, yet nobody calls such images ‘full frame’.
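
A quick back-of-envelope check of those numbers (Python again; the stitched dimensions assume the overlap has already been trimmed, and all figures are nominal):

# Nominal recording dimensions in mm; area in square millimetres.
cases = {
    "full frame (36x24)":          (36.0, 24.0),
    "44x33 digital medium format": (44.0, 33.0),
    "two stitched FF frames":      (36.0, 48.0),           # overlap already trimmed
    "44x33 cropped to 16:9":       (44.0, 44.0 * 9 / 16),  # 44 x 24.75
}
for name, (w, h) in cases.items():
    print(f"{name:29s} {w:5.1f} x {h:5.2f} mm  ->  {w * h:6.0f} mm^2")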


As always, I am less concerned about the nomenclature here than the creative considerations. Naming only matters insofar as it conditions us to think of our tools in a certain way and thus compose with a certain (often flawed) mindset. It doesn’t help that stitching is typically used reactively, when a shortfall in your equipment is found: your widest lens can’t take in the intended scene, so you combine multiple images to make up for it.

Mistake Number One is a compositional one: the wider the lens’ angle of view, the greater the foreground exaggeration. This applies whether the effective angle of view is derived in a single image or a synthetic one. Stitching makes for a much wider angle of view (duh) and much greater foreground exaggeration. Given the difficulty of visualizing stitched results in the field, this often leads to a thin line of something on the horizon and not very much in between because the photographer has not compensated for the effective change in angle of view and composed accordingly.
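
As a rough planning aid, the effective angle of view of a rotated panorama can be estimated from the single-frame angle of view, the number of frames, and the overlap between them. A minimal sketch follows; the function names, the 50mm/full-frame figures and the 30% overlap are purely assumptions for illustration.

import math

def frame_hfov(focal_mm, sensor_width_mm):
    # Horizontal angle of view of a single frame from a rectilinear lens.
    return 2 * math.degrees(math.atan(sensor_width_mm / (2 * focal_mm)))

def stitched_hfov(focal_mm, sensor_width_mm, frames, overlap=0.3):
    # Approximate swept angle when rotating the camera between frames:
    # each extra frame adds its angle of view minus the overlapped portion.
    single = frame_hfov(focal_mm, sensor_width_mm)
    return single + (frames - 1) * single * (1 - overlap)

print(f"one 50mm FF frame:         {frame_hfov(50, 36):5.1f} deg")
print(f"five 50mm frames stitched: {stitched_hfov(50, 36, 5):5.1f} deg")
# ~40 deg becomes ~150 deg: ultrawide composition rules (and foreground
# exaggeration) now apply, even though no ultrawide lens was used.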


Mistake Number Two is a technical one: using a wide lens to stitch is generally a very bad idea because of projection issues, geometric distortion, parallax, nodal points, camera levelling etc. You will find that the resulting source images don’t align, because the effective viewpoint (once optics, axes of motion etc. are taken into account) is not consistent from frame to frame. Basically: no matter which software you use, the result will not look natural. However, the opposite is true if the lens has movements built in; shift-stitching is perhaps the best way of doing any sort of image combination, as you aren’t moving the perspective at all but instead sampling an outer portion of the image circle – even distortion, if any, is continuous, and doesn’t render straight lines unnaturally.
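
To get a feel for why rotation about the wrong point falls apart with wide lenses and near subjects, here is a rough, back-of-envelope estimate (Python; the function name and every figure are assumptions for illustration) of how far a foreground object drifts against the background between frames when the camera pivots about a point offset from the lens’s entrance pupil:

import math

def parallax_error_px(pivot_offset_mm, rotation_deg, near_m, far_m,
                      focal_mm, pixel_pitch_um):
    # Rotating about a point offset from the entrance pupil translates the
    # viewpoint sideways by roughly offset * rotation (in radians)...
    t = pivot_offset_mm * math.radians(rotation_deg)
    # ...which shifts near and far subjects by different angles; the
    # difference is what refuses to line up between frames.
    delta = abs(t * (1 / (near_m * 1000) - 1 / (far_m * 1000)))  # radians
    return delta * focal_mm * 1000 / pixel_pitch_um              # pixels on sensor

# A 24mm lens pivoted ~100mm behind its entrance pupil (i.e. about the tripod
# socket), rotated 20 degrees between frames, foreground at 1m, background at 50m:
print(f"{parallax_error_px(100, 20, 1, 50, 24, 4):.0f} px")  # ~200 px: frames won't align

# A 200mm lens on a distant scene (nothing closer than ~200m), 10 deg per frame:
print(f"{parallax_error_px(100, 10, 200, float('inf'), 200, 4):.0f} px")  # a few px only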

I think it’s easy to see why results are already often subpar.

You can also ‘stitch’ the same image multiple times: technically, this is stacking, and it’s commonly used in astrophotography to remove noise by averaging. But the same technique can also be used to effectively boost resolution (and the effective number of photosites of a given size, which is in turn effectively a larger format) by sampling at a higher spatial frequency than your sensor’s pixel pitch. Lost? Don’t be. Basically, since there’s going to be some very fine motion between you pushing the shutter and capture (caused by minor tripod slip/give/whatever), the effective image isn’t quite the same twice in a row. Keep doing this, keep averaging, and you’re effectively doing the same thing as sensor-shift mechanisms, albeit less efficiently – thus necessitating more frames for the same gain in image quality.
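
A minimal sketch of that idea in Python/NumPy: upsample each frame, estimate its drift against a reference by phase correlation (a whole-pixel shift at the upsampled scale is a sub-pixel shift at the native scale), shift it back into register and average. This is only an illustration of the principle – real multi-frame super-resolution pipelines are far more careful about interpolation and outlier rejection – and the function names and the assumption of a list of same-sized greyscale arrays are mine.

import numpy as np

def estimate_shift(ref, img):
    # Integer (dy, dx) displacement of img relative to ref via phase correlation.
    R = np.conj(np.fft.fft2(ref)) * np.fft.fft2(img)
    R /= np.abs(R) + 1e-12
    corr = np.abs(np.fft.ifft2(R))
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Map wrap-around peaks back into the +/- half-frame range.
    if dy > ref.shape[0] // 2: dy -= ref.shape[0]
    if dx > ref.shape[1] // 2: dx -= ref.shape[1]
    return dy, dx

def stack_superres(frames, upsample=2):
    # Nearest-neighbour upsample, align everything to the first frame, then average.
    # Aligning at the upsampled scale captures shifts of 1/upsample of a native pixel.
    up = [np.kron(f.astype(np.float64), np.ones((upsample, upsample))) for f in frames]
    ref = up[0]
    acc = np.zeros_like(ref)
    for f in up:
        dy, dx = estimate_shift(ref, f)
        acc += np.roll(f, (-dy, -dx), axis=(0, 1))  # register against the reference
    return acc / len(up)

# e.g. result = stack_superres(list_of_grey_frames, upsample=2)
# More frames -> lower noise and finer effective sampling, at the cost of time.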


At the opposite end of the scale, we have scenes that are within the reach of our lenses but we deliberately choose to use a longer focal length to increase resolution; typically, we don’t see the same compositional shortcomings here because visualization is stronger in the first place. Technically there are also fewer shortcomings, because stitching very narrow angles of view to make a still-narrow angle of view carries much lower risk of nodal point and parallax errors, plus you’re likely using a much longer and lower-distortion optic to begin with. This and shift-stitching are my favourite ways of creating effectively larger formats in the field: what you can do with full frame can also be done with 645 etc. – though remember the output medium must be able to support all of that information for it to be a worthwhile exercise at all.

The one thing stitching a large number of frames can offer that a single frame cannot is control over the projection: those of you familiar with stitching software or maps will know that trying to represent a three-dimensional surface (in effect, a section of a sphere’s skin) on a two-dimensional medium requires some compulsory stretching of the image in order to flatten it. By controlling this stretching, the effective perceived distortion can also be controlled, resulting in much flatter (or exaggerated, if desired) images than are possible with a single very wide frame; a bit like the scanning slit of a banquet camera.
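
The difference between projections is easy to see numerically. A minimal comparison (Python, with a 24mm focal length assumed purely for illustration) of where an off-axis angle lands on the output for a rectilinear projection – what a single wide frame gives you – versus a cylindrical one, a common choice when stitching:

import numpy as np

f = 24.0  # assumed focal length in mm
angles_deg = [10, 30, 45, 60]
theta = np.radians(angles_deg)

x_rectilinear = f * np.tan(theta)  # single wide frame: straight lines stay straight,
                                   # but the edges stretch rapidly
x_cylindrical = f * theta          # stitched panorama option: even angular spacing,
                                   # much 'flatter' rendering across a wide sweep

for a, xr, xc in zip(angles_deg, x_rectilinear, x_cylindrical):
    print(f"{a:3d} deg off-axis: rectilinear {xr:6.1f} mm, cylindrical {xc:5.1f} mm")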


My preferred methods of working are multi-stitch panoramas (mostly created in my pre-medium format days, or later with smaller formats when the size and weight of equipment were highly restricted), precisely because of this control over projection and perspective, or shift-stitching for interior and architectural work.

The series of images here firstly needs a much larger medium to breathe than a digital magazine – they average 2-3 gigapixels each – but all demonstrate that kind of control over projection, and a certain flatness and realism that could not have been accomplished with a single image from a wide lens (I know, I tried). But they also represent an extremely masochistic personal project – given that these subjects are highly affected by wind and move between shots, and that stitching software often fails to differentiate between similar fractals – a lot of failed attempts and frustration are normal, not to mention manual point matching. The fourth image, for example, is a night stitch lit by a couple of weak headlamp torches – exposure times averaged a minute or so, for over a hundred frames; I had to repeat it the following night because the wind started up halfway through the first attempt. Of course the ones that do work and wind up being very large prints are something else, and chasing that difference is presumably why you are reading… MT

Images in this article are from the “Forest” project, batch converted with ACR, composited using Autopano Pro and processed again as a complete image with my Photoshop Workflow III. Originally published in Medium Format Magazine.

__________________

Ultraprints from this series are available on request here

__________________

Visit the Teaching Store to up your photographic game – including workshop videos, and the individual Email School of Photography. You can also support the site by purchasing from B&H and Amazon – thanks!

We are also on Facebook and there is a curated reader Flickr pool.

Images and content copyright Ming Thein | mingthein.com 2012 onwards unless otherwise stated. All rights reserved

Comments

  1. Gordon Moat says:

    Reminds me of doing architectural images by shifting the back on a 4×5 camera. Both images would be scanned, then combined in Photoshop. It was a way to avoid some distortion from going with an even wider lens.

    This post got me thinking of something very different, which came out not long ago: the Ricoh Theta Z1 camera. I don’t know of anyone professionally using one for architectural imaging, though it seems that it could be very useful for that.

    • Shift-stitching has long been a good way to both correct verticals and ‘cheat’ a bit when it comes to resolution and angle of view.

      Theta: I think a lot of estate agents use them for interior views; it isn’t the ‘professional’ definition we might think of, but they are making a living from it I guess…

  2. I have just done a few panoramas in my day, by stitching, but what you write is so true: without using manual mode the chances of getting a useful result are low.

    And as you write, one really needs to make a big jump in sensor size to see any difference. In good light, and otherwise controlled situations, my Nikon 1 camera images are as good as my wife’s µ43, and her best is about as good as my DX images, while my FX camera (an old D600) always wins over the Nikon 1 cameras, under any circumstances. Well, except macro photography, which is so much simpler with the Nikon 1 than with my FX camera: to get the same DOF with an equivalent lens, you need a lot of gear with the FX, including tripods and often slides, and bigger and heavier lenses to boot, and usually more light, as you need to use smaller apertures. So macro of live animals outdoors is complicated with an FX camera, while with a Nikon 1 it is very simple: mount a macro-able lens (use an adapter if needed) and possibly a flashlight for extra lighting.

    The fail ratio is very low with the Nikon 1, while the FX images are best done in a studio, using dead/frozen/drugged animals, with plenty of lighting available.

    • “Dead/frozen/drugged animals”…😂
      That sounds a lot like food photography!

      • Much insect photography is done with next-to-dead insects, cooled down, or drugged, or dead, so that they can’t move, which of course makes stacking so much easier.

        • I was actually wondering how those images were done, given how flighty those things are. Almost as bad as the superglued frog that was the subject of so much controversy not that long ago…

  3. Christoph Muench says:

    Hello Ming, as a fan of your photography and blog, this series of photos again makes me wonder: Am I just not “getting it”? Frankly, except for images #3 and #5, I wouldn’t even have bothered to start editing them in LR, because I would have found them too unremarkable and messy.

    • I used them as examples of when the format is really independent of the intended outcome. The series is meant to be viewed as very large prints, not as an 800px web JPEG. So yes and no – you’re not getting it because ‘it’ isn’t complete 🙂

  4. Hello Ming
    As you might remember, I have found a benefactor and am using a Hassy now. At first I planned to use it (her?) for wide angles only, but then I acquired a 250mm tele and started to do stitches too. Could you please expand a bit on your last sentence, “…and were processed AGAIN as a complete image…”? What processing do you do before the stitching, and what after?
    Until now I have only found the courage to photograph mountains and have shied away from woods. Difficulties in planning and achieving the right amount of DOF have discouraged me. Could you please tell me/us how you did your beautiful forest images? Did you e.g. stack and then stitch the stacked images? I hope I do not ask too much.
    Thank you for an interesting essay and best regards!

    • 🙂

      Before stitching: just flatten the images in ACR, and ensure the same adjustments are applied to all of them. Lock exposure, focus and WB at the time of capture. I then save these as TIFF16s.

      Stitching: Import into Autopano Pro, stitch with your preferred projection to taste.

      After: Treat the stitch as a single image, and run it through the usual PP (but beware limits on the ACR filter at about 600MP or so; you may have to get creative with masks, gradients and blending modes).

      Forests: the same way as I stitch any other images – it just requires a very still day, otherwise there’s too much wind movement in the branches.

      Good luck!

  5. thanks again for your interesting work, Ming
    I would like to add two interesting tools for high res stitched images.
    In 2004 I got the Nikon D70. As it had only 6 MP, I used a cheap but nodal-corrected panorama head (Panosaurus) for stitching and doing large prints. There are many more sophisticated panorama heads out now.
    Later I got the first prototype of the GigaPan EPIC and did very high resolution landscape images. These images and so many more can be seen and zoomed in on the GigaPan site.

  6. “to get a 16:9 aspect ratio image from a 44×33 sensor, significant cropping is required: 44x18mm”
    Wrong math – this applies only if you crop a 16:9 image in landscape orientation from a 44×33 sensor in PORTRAIT orientation. Starting from the sensor in landscape orientation, you end up with 44×24.75mm.
    As always, thanks for the article!

    • You’re right – I stand corrected, and have amended the post…

    • John Walton says:

      Hi Ming,

      I’m not sure I follow the processing tips – I’ll need to think about it. My approach has been to use a short tele (90mm Summicron the last big stitch I did), with fixed focus, white balance and exposure. This was fine with a distant image, and the resolution was amazing (7GB file size was not so good).

      Two issues still bother me – (1) I get that the rotation point should be at the plane of the aperture blades. I use an Arca D4 head (the cube was overkill). What accessory do you use to set the rotation point? Am I missing something?

      (2) with a wide panorama, the exposure changes across the image. I suspect this is a reflection of my ineptitude with PhotoShop.

      John

      • I just rotate it about the tripod point – it doesn’t matter with subjects at near infinity and longer lenses. Much more of an issue with wides and near subjects (and then you have to find the nodal point).

        There shouldn’t be exposure changes – you need to be shooting manual. Unless of course there are actual environmental changes and it takes a while to complete the capture phase – this can happen!
