Technology, art, and pushing the boundaries

Even from the earliest days of photography, there has been an inextricable link between the medium and the technology used. Classical artists saw it as an abomination: where was the skill required to recreate the form of a subject when the device did everything for you? If anything, the early photographer was more engineer and chemist than artist. Relative unfamiliarity with some properties of the medium (depth of field, perspective, etc.) and near-complete unpredictability with others (tonal reproduction, exposure, colour, lighting, emulsion quality, etc.) meant that results were hit and miss – more often about getting any image at all than about producing one of lasting artistic merit*.

*I believe the popular analogy goes ‘what’s amazing is that the horse can talk at all, not so much what it has to say’.

And for this reason, many of the images we see from the early days of the medium – especially portraits – resemble conventional paintings in their posing, lighting and general composition. Is it any wonder that early photographers weren’t taken seriously by artists? In fact, I think we can argue that photography as an art medium didn’t really come into its own until the second half of the 20th century, coinciding not with one specific event, but with a few technological ‘enablers’:

1. Built-in meters.
2. TTL flash.
3. Consistency – both in the cameras themselves and in the processing.
4. Mass market processing of film.
5. Later on, autofocus.
6. Still later, digital.

Each of these has had its own impact on the way photographers work: arguably making it much easier to execute a technically good photograph, but also removing distractions and freeing the photographer to concentrate solely on the contents of the frame. All things being equal, if you don’t have to think too hard about exposure, or focusing, or whether you need to remember to turn the lens a few degrees past the mark to hit true infinity, then you should have more spare brain power to spend on composition – which, in theory, should make for stronger images.

But on the whole, I’m not sure we’re seeing this. I remember a statistic claiming that 10% of all images ever taken were shot last year – mind-boggling. By simple statistics, more images means more good ones in absolute terms – but perhaps not as a proportion. I think making things easier has pushed taking a photograph across the threshold into ‘something you do casually, without thinking about it’, while also opening it up to a whole group of people who might otherwise have been intimidated by all of those buttons and knobs. Hand a random person your Leica M3 to take a photo of you in the 1950s, and they’d probably have done okay. Do the same today, and chances are they’d run off with your camera.

Yet for those of us who take photography seriously – and I’m not talking about the gearheads and collectors here – it’s a golden era. We photographers have never had such a wide choice of equipment, all of which performs well above the sufficiency threshold. Arguably, even the enthusiast compacts of the current era deliver at a level that not so long ago was cutting edge for anything below medium format. Beyond that, the expansion of the overall market has made room for niche equipment makers to survive and thrive; a good example is the tilt-shift bellows Novoflex makes for Micro Four Thirds – combine the tilt movements with the small sensor’s already-deep focus, and never-ending depth of field, anybody?

And that brings me to the core of this essay: in a creative form that has always been tied to its technological roots, we might as well embrace the fact and use the technology to open up new creative doors. I think the current generation of photographers is doing this well, but perhaps not taking it as far as they could. I’m talking about vision, imagination, and the idea: the ability to see your final frame in your mind before you shoot it. And it doesn’t have to come out of the camera that way; some things must be done in post-processing, like compositing or retouching – so what? The only real limitation is how far ahead the photographer can see; how well they can visualize the effect or the potential applications of the new tools. We must be careful, though, not to get caught up in pop culture: not all HDR has to look like a multi-coloured psychedelic tone map. What else can we do with HDR that would produce a frame that a) doesn’t look like every other HDR frame, and b) presents a different view of the world? It’s also important to note that for journalistic purposes a degree of integrity is required: changing the tonal presentation of an image is fine, but changing its contents is absolutely not.

I want to talk about some of the emerging and maturing technologies that make me excited because I can see creative applications for them; I’m sure that there are plenty more I’ve not even heard of. So bear with me.

3D/ Lytro.
The presentation aspect of this has some way to go before it becomes really mainstream; I’m more interested in the ability to fix a ‘near miss’ after capture, to have perfect focusing all the time, or to control depth of field after taking the shot. For this, we wouldn’t need an infinite number of focal planes to enable focusing at any distance – just a few before and after the captured focus point, to allow tweaking afterwards. With sufficiently high-density sensors, we wouldn’t even have to take much of a hit in resolution – and I’m sure a smart algorithm could use the nearby non-image-forming pixels to reduce noise or improve dynamic range.
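
As a thought experiment, here’s what that ‘few planes either side of focus’ idea could look like in software – a minimal sketch, assuming the camera hands us a small grayscale focal stack; the Laplacian sharpness measure and window size are illustrative choices, not anything a real light-field camera does:

```python
import numpy as np
from scipy.ndimage import laplace, uniform_filter

def refocus_from_stack(stack, window=15):
    """Pick, per pixel, the plane of a small focal stack that is locally
    sharpest - a crude stand-in for post-capture focus adjustment.
    stack: (n_planes, H, W) grayscale frames focused at nearby distances."""
    # Local sharpness per plane: smoothed squared Laplacian response.
    sharpness = np.stack(
        [uniform_filter(laplace(p.astype(float)) ** 2, size=window) for p in stack])
    best = np.argmax(sharpness, axis=0)        # index of sharpest plane per pixel
    return np.take_along_axis(stack, best[None], axis=0)[0]
```

A true plenoptic camera does considerably more than this – see the discussion of light fields in the comments below – but even crude per-pixel plane selection shows why a handful of extra captures could rescue a near miss.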

Composite sensors.
It’s a bit surprising that the conventional Bayer-array sensor has lasted this long, actually. Although the Foveon idea – multiple photosites stacked at each pixel location – is a good one, there’s no way vertical stacking can deliver the same noise and dynamic range, because by the time light hits the lowest layer of the sensor, it has already been severely attenuated by the filter layers above it. And if you don’t have much light in the first place, all you get is noise. What would make more sense is some form of pixel binning – especially with the increasingly dense sensors we’re seeing today. The OM-D’s 16MP sensor is a quarter of the area of full frame; that implies a 64MP full-frame array at the same pixel density. But what if the pixels were grouped into bunches of four – red, green, blue and luminance – for true colour at each output photosite? The luminance pixel could be used to further improve dynamic range and noise, too. And a real resolution of 16MP is nothing to sniff at.
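
For illustration, here’s a minimal sketch of how such an RGBL quad might be binned in software – the 2×2 layout and the luminance-guided rescaling of the colour channels are my assumptions, not any shipping sensor’s design:

```python
import numpy as np

def bin_rgbl(raw):
    """Bin a hypothetical RGBL mosaic - one red, green, blue and one unfiltered
    luminance photosite per 2x2 group - into one true-colour pixel per group.
    raw: (2H, 2W) linear sensor data; assumed group layout: [[R, G], [B, L]]."""
    r = raw[0::2, 0::2].astype(float)
    g = raw[0::2, 1::2].astype(float)
    b = raw[1::2, 0::2].astype(float)
    lum = raw[1::2, 1::2].astype(float)   # unfiltered, so it sees far more light
    rgb = np.stack([r, g, b], axis=-1)
    # Rescale the noisier colour channels so their mean brightness tracks the
    # cleaner luminance photosite - the noise/DR benefit suggested above.
    mean_y = rgb.mean(axis=-1, keepdims=True)
    return rgb * (lum[..., None] / np.maximum(mean_y, 1e-6))
```

Applied to a hypothetical 64MP full-frame mosaic, this would produce exactly the 16MP of true-colour output described above.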

Speed and HDR.
Input dynamic range should not be confused with output or display dynamic range. Even though today’s display media – an LCD, for instance, or a print – are limited to around 8 stops, because they can’t emit light the way the brightest areas of a real scene do, that doesn’t mean we can’t use more input dynamic range. More input range lets us choose where and how to allocate the output tonal scale, according to our artistic intentions for the scene. The current limitation is that single-capture DR tops out around 14 stops; while this is far ahead of anything we’ve had previously, there are still scenes that exceed it yet remain clearly visible to us in real life, without clipping to black or white. At the same time, capture speed is getting faster – why not take two shots in very quick succession with the mirror up, and merge them in camera to prevent clipping? We’re already nearly there with the back-end processing, but the speed (and the camera/subject motion that comes with it) needs a bit of work. There’s no reason why, at shutter speeds above say 1/1000s, we couldn’t capture both a 1/1000s and a 1/2000s exposure…
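
The merging itself is the easy part; here’s a minimal sketch of that two-shot idea, assuming two linear, perfectly aligned frames one stop apart (alignment and subject motion – the hard parts noted above – are ignored):

```python
import numpy as np

def merge_pair(long_exp, short_exp, stops=1.0, knee=0.9):
    """Merge two quick successive frames (e.g. 1/1000s and 1/2000s) into one
    image with extended highlight range.
    long_exp, short_exp: linear sensor data, normalised to [0, 1]."""
    gain = 2.0 ** stops                  # exposure ratio between the two frames
    # Blend weight ramps from 0 to 1 as the longer exposure nears clipping,
    # so blown highlights are replaced by the scaled shorter exposure.
    w = np.clip((long_exp - knee) / (1.0 - knee), 0.0, 1.0)
    return (1.0 - w) * long_exp + w * short_exp * gain
```

The result is a linear frame with roughly one extra stop of highlight headroom, to be allocated to the output tonal scale however the photographer’s intentions dictate.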

Extreme perspectives.
Slowly but surely, lenses are getting both wider and longer. They are also quite unwieldy to handle and compose with – how close do you have to be to something with an 8mm lens on FX to make it fill most of the frame? Very, is the answer. Yet there are plenty of creative photographers using these tools to create interesting perspectives. On the opposite end, the Phantom HD camera used by the BBC to film Planet Earth comes to mind – it lets us get close, at a surprisingly natural perspective, without endangering the lives of the crew or scaring off whatever it is we’re filming or photographing. No doubt it’s a little voyeuristic, but hasn’t that always been the nature of photography?**

**A good example of this is Miroslav Tichy – dismissed as a voyeur during his lifetime, he’s now considered an artist. And yes, he made his own cameras out of cardboard and string.

Miniaturisation.
I think it’s impossible to separate perspective from location – getting the camera into places previously impossible or inaccessible is also a big part of this. Aside from the obvious aerial rigs that give us remote, unsupported shots in the middle of the action (at the Olympics, for example), there’s the whole field of miniaturisation. Perhaps the best example is what the GoPro started: POV filming from absolutely any point of view. What if we could do the same, to a decent image quality level, with still cameras? Taken to the extreme, I envision threading the tip of an extremely fine endoscope inside a watch movement to photograph it. I’m sure you can all think of other uses.

Durability/ survivability.
Taking perspectives and location even further – we’re now sending cameras to places we physically can’t go, like space or the deep ocean. The more advanced our technology gets, the more options we have. And just as in the early days, when we were wowed at capturing any image at all, thought will eventually turn to composition, framing and the artistic merits of the photograph – once we’ve sorted out getting the photograph at all.

Increases in sensitivity and colour accuracy.
Since the D3 generation of cameras, I feel we’ve been able to get a usable image under conditions previously unimaginable – conditions where we’d once just have said ‘forget it’. But that kind of flexibility has, if anything, made me even more aware of the still more difficult shooting conditions under which I can see a shot, but it remains beyond the ability of my camera to capture. Or we can capture it, but it doesn’t quite come out looking the way we saw the scene. The ability to truly reproduce what your eyes see, under all conditions, is somewhere technology has made great strides but still isn’t quite there.

Integration with the photographer.
Here’s a crazy idea: what if you could download the image you saw directly from your eye/ optic nerve/ brain? I wouldn’t be surprised if some research lab somewhere is working on it. We’ve already seen CCDs integrated with the optic nerve to restore sight to some degree, so why not the other way around?

It’s definitely an exciting time to be a photographer. At the end of the day, though, it’s important to remember that all of the technology is but an enabler: it’s up to us to push our own creativity, come up with something different, and create our own vision. And although that will always remain the biggest challenge, there will also always be some people who conquer it and move the medium forward as a whole. MT

Comments

  1. Hi Ming,
    I came across your blog by accident. It is very interesting; keep up the good work.
    You are right that photography is a child of the industrial era, but you should revisit the history of the medium. It became an art as early as the mid-19th century. Look at the big names in the history of this art.
    Besides enabler #3 (consistency, which was achieved in the late 1800s), you don’t need a built-in meter, TTL flash, autofocus, a mass market (which started in 1901 with the Kodak Brownie, by the way) or digital to produce art: none of those has changed what a photographer can do with the medium.
    The argument that automation helps concentration on composition is a quick assumption. Indeed, it might very well be the opposite.
    The changes have been dramatic for amateurs only. Whether amateurs are artists … the answer is in the question, it seems.

    • Thanks Andre. Maybe for some, the lack of attention required under automation also means a lack of thought across the board; for others, perhaps not. I know I certainly spend longer on composition if I’m not also having to worry about exposure, focus, etc. As for your last question – I suppose it depends a great deal on your definition of amateur. If you’re not making a living from it, then no matter how good a photographer you are, you’re an amateur – and I know plenty of people who are very good at what they do but choose not to make a career of it. And yes, some of them are artists. I also know plenty of people who buy DSLRs, claim to be pros, and forget to take off the lens cap or leave the camera in the green-square mode :)

  2. Great essay Ming, as always. What do you think of Google’s Project Glass?

  3. bob monson says:

    Excellent article! Doesn’t the D600 try to do the in-camera multi-exposure act to increase DR? Too bad it only outputs JPEG.

  4. Interesting article. I have been a photo hobbyist for over 50 years, and it has been quite a wild technological ride. My first photos were taken with a borrowed Rolleiflex TLR – all manual, not even a meter. The technological enablers have been great, but it’s only since digital came along that I have felt big improvements in the quality of my photographs. Digital lets me shoot unrestricted and allows unlimited experimentation, and thus more opportunities to learn. The simple economic advantage of memory cards over film is just what some of us needed. It really is an exciting time for photographers.

  5. Hi – first I want to thank you; I very much enjoy reading your articles, and your images are just great.

    I’m not a good photographer (but learning, also thanks to your articles), but I’m quite interested in the technical side (I’m studying computer science). So I’d like to comment on a few things you wrote, especially regarding light fields.

    A plenoptic camera (a camera that records a light field) does not record a 2D image, nor multiple 2D images with different focus points; it records part of the 4D light field. Basically, we can imagine this as recording not only the light intensity but also the direction of the light for every conventional pixel on a sensor. A direct representation is multiple regular images taken from slightly different directions. But this means we cannot restrict the camera to just capturing a few images in front of and behind the actual focus point.

    A conventional 2D image has to be rendered from the recorded light field by averaging over multiple directions for every pixel. To simulate different locations of the image sensor, we average over a different set of angles, such that imagined rays from those angles would converge at the desired sensor location. By varying those parameters per pixel we can also simulate tilt/shift lenses, digitally stop down (or render arbitrary aperture shapes), or render with infinite DOF. I think this has some serious implications for what photographers could achieve with a camera that allowed access to those parameters. Sadly, the Lytro is not this device: the aperture is much too small, and resolution is also very limited, even though the Lytro only records 4×4 directions per pixel (I believe). My dream camera would feature at least 4×4 of the camera modules normally found in smartphones. With those modules placed around a circle with a diameter of, say, 10cm, we could simulate for example a 35mm f/0.35 lens with quite good resolution. Because we use multiple modules, the effective sensor size is much larger with regard to noise, and those modules should also be relatively cheap compared to a larger sensor. Such a camera could also be pretty thin, similar to a smartphone, and so easily pocketable. Just think about such a camera, together with tilt/shift and arbitrary DOF…
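
    To make that ‘average over multiple directions’ rendering concrete, here is a toy shift-and-add refocusing sketch over an assumed array of roughly coplanar camera modules – the parallax-proportional-to-1/depth model, integer-pixel shifts and wrap-around borders are simplifications for illustration only:

    ```python
    import numpy as np

    def shift_and_add(views, offsets, focus_depth):
        """Synthetic-aperture refocusing: shift each module's view by its
        parallax at the chosen depth, then average over all views ('directions').
        views: list of (H, W) images; offsets: (N, 2) module positions (dx, dy);
        focus_depth: depth of the plane to focus on (parallax ~ 1/depth)."""
        acc = np.zeros_like(views[0], dtype=float)
        for img, (dx, dy) in zip(views, offsets):
            # Integer shifts with wrap-around, for brevity; a real renderer
            # would interpolate sub-pixel shifts and mask the borders.
            sx, sy = int(round(dx / focus_depth)), int(round(dy / focus_depth))
            acc += np.roll(np.roll(img, sy, axis=0), sx, axis=1)
        return acc / len(views)
    ```

    Objects at the chosen depth land on top of one another and stay sharp; everything else averages away into blur – which is how a wide synthetic aperture like the 10cm ring described above could produce f/0.35-style depth of field.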

    Interesting times!

    • Thanks for the clarification. Perhaps you can help answer something – why is the Lytro restricted to a fixed number of focus distances? Is this an aperture limitation, a sensor size limitation, a software limitation, or a consequence of the 4×4 directions?

      I think the challenge with a 10×10cm capture area would be achieving perfect planar alignment; I presume each module would have its own optics and be fixed-lens? If not, something covering a 10×10 image circle would be enormous.

      • Yes, the fixed focus distances are probably related to the number of angles. It is possible to interpolate (which is not simple), although that might not be feasible with so few angles. The concepts and demos I’ve seen mostly used more views, e.g. 8×8. But as Lytro has failed to provide any of the advanced features that light fields could offer, I don’t see them removing those limitations any time soon.

        When using multiple camera modules (each module is basically sensor + optics + readout circuitry, with a volume around or below 1cm³), they only need to be roughly aligned; the rest can be done in software. Non-perfect alignment then results only in a (slight) crop of the final image.

        • Well, I’m sure they’ll remove them once they’ve forced us to buy several intermediate generations of product – that’s how it goes with corporations.

          Doesn’t non-perfect alignment (planarity) result in errors in reinterpolation too? Or is there a way to nullify this through pre-calibration offset data, as with an AF sensor?

          • The nice thing is that multiple cameras can be used directly to detect the alignment. Once the focus point(s) are decided in one image, the other images are searched for the most similar points; from the relative positions of the corresponding matches we can calculate AF data if required (if the modules are not fixed-focus – those sensors are small!) as well as the alignment. Actually, thinking about it, we need to check multiple points, depending on how many degrees of freedom we want to align. But as a bonus this gives us the same information as phase-detect sensors, so AF could be as fast as in a DSLR. Different focusing characteristics etc. would probably need calibration, but this could be done on the fly, calibrating one module at a time while the camera is operating.

            I don’t think planarity plays a large role; it should be treated as a parameter to optimize during alignment, but a millimetre of movement in a module does not change the viewpoint so drastically that it should pose a problem. Remember, we basically render an image from the individual light-ray bundles; as long as we can reliably decide which rays to accumulate, they don’t need to be recorded on the same plane at all – the modules could also be placed on a sphere or something like that.
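
            A hedged sketch of that content-based alignment idea, using generic feature matching – the ORB detector, grayscale inputs and the median-displacement estimate are illustrative choices, not a description of any real camera’s firmware:

            ```python
            import cv2
            import numpy as np

            def estimate_offset(ref, other, n_features=500):
                """Estimate the relative image-space offset between two camera
                modules from picture content alone: match features across the
                pair, then take the median displacement as the alignment."""
                orb = cv2.ORB_create(nfeatures=n_features)
                kp1, des1 = orb.detectAndCompute(ref, None)    # grayscale uint8 in
                kp2, des2 = orb.detectAndCompute(other, None)
                matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
                matches = matcher.match(des1, des2)
                shifts = np.array([np.subtract(kp2[m.trainIdx].pt, kp1[m.queryIdx].pt)
                                   for m in matches])
                return np.median(shifts, axis=0)   # (dx, dy), robust to outliers
            ```

            Matching several well-spread points this way also yields the per-point disparities that, as noted above, double as phase-detect-style AF information.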

            • Got it – thanks for clearing that up.

              So, the next question: when is your camera going to market? ;) I’d be seriously interested in being involved in a project like this if there is one…

              • I’d love to create a plenoptic compact camera – the hardware can’t be that difficult, right? ;-)
                Who knows – if this still isn’t realized in a few years…

                • Well, the reality of industry is that companies would rather sell you incremental revisions because that’s where the money is. But I wonder how much it would cost to build a prototype – I’m guessing most of the cost is going to be in the software.

                  • Well, basic prototypes already exist, using arrays of webcams or larger cameras. The software should not actually be a big problem – though I’m probably a bit biased on that point. I think the real problem is integration: there is a lot of data to shuffle around, and while camera interfaces are somewhat standardized, a lot of camera modules need to be interfaced, which requires new hardware designed specifically for that task.

                    • From what I understand, new hardware design would be a completely different ball game from software alone… and that’s what keeps the barriers to entry painfully high.

                    • That’s the problem. But without dedicated hardware you need multiple computers just to read out the cameras, and that is just not very practical for anything beyond lab experiments.

                    • Ouch – no, that’s not practical. What about using reconfigurable FPGAs?

                    • The problem is not the computational power but the interface. I’m not a hardware guy, so this is not my area of expertise, but I don’t believe you can find any off-the-shelf components that can interface with at least 16 of those camera modules.

  6. Dear Ming, you may, or may not, have noticed some activity lately around my Amazon account, buying up some photographic gear – which may result in payment of a small commission.
    Best wishes.

