Crystal ball gazing: Predicting the photographic ecosystem in 10 years, part I


During a recent flight and some (unusual) downtime, my mind started to wander towards how much photography has changed in the last ten years, and to some idle speculation about how it might change in the next ten. The pace of change has been rapid, limited by technology; but I think the next big shift is that it’s going to be limited by the operator. Let me explain why: for hundreds of years, the fundamental principle of capturing onto photosensitive chemical media and printing onto non-photosensitive ‘permanent’ media did not change, whether that media was larger or smaller, celluloid or glass or paper or something else. The process of photographing was destructive in a way: you had only one chance to fix the chemicals to preserve the graphic interpretation of luminance – i.e. the photograph – and if you messed up, there were no do-overs. The biggest change from a single-use chemical medium to a digital one has really been getting the public used to the concepts of reusability and easy post-capture manipulation*. Whilst the start was slow due to a) the incumbency of film and film devices; b) cost of entry; and c) output options – like a runaway train, we’ve now lost control of our collective visual output. Where does this leave us in another ten years?

*Always a popular/controversial topic: but the reality is that manipulation in many forms has not just been around since the very beginning; it is very much at the core of the whole photographic process: you could not even view the image without applying some chemical changes, which might make areas of different brightness more or less visible. In reality, nothing has changed other than that your average consumer can now do it at will, and with greater ease and flexibility than before. Taste, as always, remains a subjective matter.

I won’t talk about more: more resolution, more sensitivity, more dynamic range, more color accuracy – heck, more images: this is taken as a given following the nature of technological progress, except perhaps the last item.

One of the not-so-long-ago predictions was that video and stills would eventually converge to the point that we would pick out stills from video sequences; the proliferation of 4K video modes has made it easy to get a sufficient-resolution still for social media or a small print – but note the caveat: only for non-demanding applications. More resolution isn’t going to solve this, and neither are higher frame rates or the like. Let me explain why: good video is structurally very different to good stills, and it goes back to the way our brains process images. With a still, some part of the frame must be critically sharp and with high contrast/acuity for us to register deliberation, technical focus and subject/conceptual focus. With video, persistence of vision means that previous images linger in our visual cortex and are subsequently overlaid by new images; the duration of fade/overlay and the transition between images is what gives us the impression of motion. A certain degree of blur is actually required to make the motion feel smooth; this is one of the reasons cinema has stuck to 24/25fps and not gone to 100+ – even though it’s more than technically possible. It just doesn’t look or feel right. The upshot is that even with more information in each component frame, a good-looking video clip always has too much motion in it to make a good still**.

**If you don’t believe me, shoot a clip of motion at a high shutter speed – say 1/1000s – and compare it to one at 1/25s. Which one looks more fluid? Which one makes for better still captures?
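
To put rough numbers on that exercise: video exposure is usually described by ‘shutter angle’, where a 180° shutter at 24fps exposes each frame for half the frame interval, i.e. 1/48s. Here is a minimal sketch of the arithmetic (Python; the shutter-angle model is the standard cine convention, and the printed examples are purely illustrative):

```python
# Why smooth video frames make poor stills: per-frame exposure follows
# the cine shutter-angle model, exposure = (angle / 360) / fps.

def shutter_speed(fps: float, shutter_angle_deg: float = 180.0) -> float:
    """Exposure time in seconds for a given frame rate and shutter angle."""
    return (shutter_angle_deg / 360.0) / fps

def shutter_angle(fps: float, exposure_s: float) -> float:
    """Shutter angle implied by a given frame rate and exposure time."""
    return exposure_s * fps * 360.0

# Cinema's 24fps with the classic 180-degree shutter: 1/48s per frame --
# enough motion blur for fluid playback, too much blur for a crisp still.
print(f"24fps @ 180 deg -> 1/{1 / shutter_speed(24):.0f}s per frame")

# A stills-friendly 1/1000s at 24fps implies a ~8.6 degree shutter angle:
# each frame is sharp, but playback looks strobed and jerky.
print(f"1/1000s @ 24fps -> {shutter_angle(24, 1 / 1000):.1f} deg shutter angle")
```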

That little exercise in the notes above illustrates something else, however: the fact that most still cameras can easily perform it now speaks volumes about convergence. Whilst it is technically possible to shoot very good stills and very good video on the same camera, it isn’t optimal. The Panasonic GH5 and Olympus E-M1 Mark II duo are a good example of this. They’re ostensibly both M4/3 format, 20MP, dual IBIS/OIS stabilised systems with high video rates (at least 4K, with HDMI out etc.); however, it’s clear in use that the GH5 is very video-centric and the E-M1.2 is stills-centric – small things like the presence of log gamma, or the ability to have exposure zebras when recording, make a big difference in practical workflow. The other challenge is a much more practical one: how do you have a control setup that works for both stills and video, when they require some fundamentally conflicting things? A good example is exposure: ideally, a video camera should have stepless exposure to give smooth control over changing light conditions; in a stills camera, you want quickly adjustable steps so you’re not endlessly scrolling. Autofocus is another: it’s difficult to beat a good manual focus pull for video, but for stills you want flexibility over points, location and configuration. Having switchable groups of settings goes some way towards alleviating this, but I feel it still results in confusion of muscle memory in practice.

Prediction #1: we won’t see stills-video convergence.

I’m going to loop back to the start now, with Prediction #2: there will be an end to ‘more’ – or at least a slowly changing steady state.

Those are fighting words and buck the trend of the last ten years, I know. But where were we before that? Film, in its steady state, was basically ‘image quality by the yard’: if you wanted more, you just went bigger. The building blocks were the same regardless of scale: you could get pretty much the same emulsions in every size, from 35mm and APS-C to 8×10″ sheet film. I think we’re going to see the same with sensors: firstly, because economies of scale justify putting more resources into developing technology that can be applied across different sensor sizes; and secondly, because it makes it easier for both the sensor makers (i.e. pretty much only Sony at the moment) and the camera makers to stratify their product to consumers: if you want better, buy bigger. But we will eventually reach a point where we hit the limits of optics rather than electronics – the 1.x micron pixels in most mobile devices already hit diffraction limits on resolution even at f/2.8. Building faster lenses is not exactly practical, and increasing pixel density isn’t going to yield significant improvements in output – just bigger file sizes.
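
To see why the diffraction wall arrives so early at mobile pixel pitches, here is a minimal sketch using the usual Airy-disk approximation (an idealised model on my part – real limits also depend on the colour filter array and demosaicing):

```python
# Approximate diffraction blur: Airy disk diameter ~ 2.44 * wavelength * N,
# where N is the f-number; green light (~0.55 micron) is the usual reference.

WAVELENGTH_UM = 0.55  # green light, in microns

def airy_disk_diameter_um(f_number: float) -> float:
    """Approximate Airy disk diameter in microns for visible green light."""
    return 2.44 * WAVELENGTH_UM * f_number

for f_number in (1.8, 2.8, 5.6):
    print(f"f/{f_number}: Airy disk ~ {airy_disk_diameter_um(f_number):.1f} um")

# Output: ~2.4um at f/1.8, ~3.8um at f/2.8, ~7.5um at f/5.6.
# A 1.2um mobile-phone pixel is already far smaller than the f/2.8 blur spot,
# so further pixel density buys file size, not resolved detail.
```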

Sheer data quantity is the other reason I predict an eventual end to more: even though storage and bandwidth are ever increasing, the display medium has been very slow in catching up. This is probably again due to the cost of adoption, but also something more practical: a lot of the time, the difference just isn’t visible, or isn’t visible enough to justify the effort. It might be that sloppy shot discipline or challenging conditions cause enough camera shake for the resolution losses to actually be visible at higher display resolutions (as opposed to being somewhat averaged out when downsampled); or, more likely, human vision is the limitation: not everybody has perfect eyesight, and even if we do, there are limits to how much the eye can resolve. Going slightly beyond this yields the impression of continuity and transparency (the idea behind the Ultraprint, for instance) – but anything more doesn’t yield any gains because we simply can’t see them. A good example is the Ultraprints themselves: a younger audience instantly sees the difference, but older audiences do not, because their overall vision tends to be poorer, especially at close viewing distances.
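
The vision limit is easy to estimate, too. Assuming the textbook figure of roughly one arcminute of angular resolution for good (20/20) eyesight – an assumption, since individual acuity varies – the useful display or print density depends entirely on viewing distance. A minimal sketch:

```python
# How much resolution can the eye actually use? Assume ~1 arcminute of
# angular resolution for 20/20 vision (a textbook figure, not universal).

import math

ARCMINUTE_RAD = math.radians(1.0 / 60.0)

def max_useful_ppi(viewing_distance_mm: float) -> float:
    """PPI beyond which a 1-arcminute eye cannot see further improvement."""
    smallest_feature_mm = viewing_distance_mm * math.tan(ARCMINUTE_RAD)
    return 25.4 / smallest_feature_mm  # 25.4mm per inch

print(f"40cm (arm's length):     ~{max_useful_ppi(400):.0f} ppi")
print(f"25cm (reading distance): ~{max_useful_ppi(250):.0f} ppi")
print(f"10cm (young eyes only):  ~{max_useful_ppi(100):.0f} ppi")

# ~220ppi at arm's length, ~350ppi at reading distance, ~870ppi at 10cm --
# consistent with younger viewers, who can focus much closer, seeing the
# difference in a very high density print while older viewers cannot.
```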

Prediction #3: we are the limitation.

Beyond the obvious fact that creative limitations are a human problem rather than a machine one, the concept of ‘data rate’ has other implications, too. It isn’t just the amount of information captured in a single image, but the total number of images. It’s never been easier to shoot more and unleash it on an unsuspecting world. The problem is, everybody else is thinking the same thing, and in general the population lacks any sort of curation filter. The upshot is that with so much content out there, it simply becomes impossible to view everything, much less view anything with any degree of proper contemplation. This is the other reason we’re not going to see stills-video convergence: how many people are going to record everything, and then comb through an hour of footage for the one still they want? I suspect none, given that most people don’t even look at the stills they’ve already taken more than once or twice before moving on to the next thing.

Prediction #4: a shift in creative standards: the average level lowers; the top end increases.

Unfortunately, sheer quantity has negative impacts on everything creative: we’re already seeing this in the professional supply of images. On one hand, the threshold for ‘good enough’ on average seems to have lowered because less attention is paid to individual images and more to quantity and gear, which means there are people charging for work who really shouldn’t be (though I suppose this says as much about the clients as the photographers). Inexperience, sheer volume and weekend pros are putting the mid-level guys out of business. The top end has always been competitive and, if anything, is stretching its legs a bit with more creative enablers (i.e. better technology) – and its distance from the bottom will always remain, though it’s going to be increasingly nuanced and require an educated client to see it. The upshot is that the best will get even better, but there’ll be fewer of them, simply because there’s less work at that level to go around.

To be continued in Part II…MT

__________________

Visit the Teaching Store to up your photographic game – including workshop videos, and the individual Email School of Photography. You can also support the site by purchasing from B&H and Amazon – thanks!

We are also on Facebook and there is a curated reader Flickr pool.

Images and content copyright Ming Thein | mingthein.com 2012 onwards. All rights reserved

Comments

  1. Mosswings says:

    Ming – just to pile on the comments about your point #1: your reasoning, I think, is too simplistic. Yes, slow frame rate video capture has tons of blur, as it should for video – but what’s happening now is that fast frame rate video is being combined with software interpolation to de-embed the blur from the image and to synthesize arbitrary shutter speed images in post. This (TCDI, time domain continuous imaging) has been the subject of several years’ research out of the U of Kentucky, and has produced several papers and the TIK software program, which can do rather impressive things. Here are links to a couple of those papers:

    ei2017TIK.pdf

    ei2017TSR.pdf

    The singularity may be closer than we think.

  2. First, a belated congratulations on you and Robin teaming up! This seems like a great match, and I am looking forward to Robin’s limited and exclusive camera bag designed for M4/3 shooters! 😉

    Now, I hate to move into the realm of the technical when you make a post as you did, but it did raise a question. Regarding video/photo convergence, does Panasonic’s 4K Photo mode (https://www.panasonic.com/uk/consumer/cameras-camcorders/lumix-g-compact-system-cameras-learn/article/4K-Photo.html) also perform as you described, or is it optimized for photos, since that is the purpose of the feature? I have not had a chance to actually try out this feature, and assumed that it behaved differently than just pulling stills out of a 4K video stream, but have not heard much about it (positive or negative). Your article is the first that I have seen talking about this issue in general in any detail. In theory, it seems like a manufacturer could derive a photo-centric mode that minimizes the issues you have raised, but I am not sure if what Panasonic is offering accomplishes that. Any thoughts?

    Thanks,

    –Ken

    • I just tried it with my GX85 – it looks like a bunch of stills extracted from a 4K video stream. There doesn’t appear to be any use of the ‘intelligent ISO’ function that’s supposed to raise sensitivity when motion is detected (which the camera should know from the gyros used in the IBIS system).

      • Thank you for testing this out. That was very kind of you. I had hoped for better, especially since I was under the impression that bumping up the shutter speeds beyond what is normally used for video would help to achieve sharper images. I was also, mistakenly, under the impression that this feature could work with raw files, and I now understand that this is not the case. Thinking more about this has me re-reading your post and finding myself in more agreement with much of what you have written. Learning and using new technology may allow me to do some things that I might not have been able to do in the past, but it is no substitute where/when more vision is needed. And sometimes technology will not solve a problem, and there will be no easy substitute or shortcut for getting the image(s) sought.

        –Ken

        • You could manually force it, but then you basically just have a low-resolution, high-FPS stills camera… it is my experience that a single well-timed shot often works better than a burst (there’s still a lot more time the shutter isn’t open than it is).

          • I absolutely agree that there is no substitute for a well-timed shot, and had that lesson hammered into me when photographing college football several years ago. I would say this is one reason that I tended to shoot action with DSLRs over mirrorless, but some are now saying that the lag with certain mirrorless models has been greatly reduced and is competitive with a good DSLR. Still, I have shot some dance competitions and concerts in the past two or three years, and at these events a good clean (well-focused) burst is greatly appreciated, as I find the sequences often contain more than the “one” keeper.

            –Ken

            P.S. I was a bit surprised to see a GX85 in your stable – I thought you were using E-M1s for video. Any specific purpose for it?

            • GX85: Crash cam, 95% the capability of the EM1.2 but 30% of the price… 🙂 Turns out the form factor is a little easier to pack, too.

  3. I think when it comes to video/stills convergence, you are not looking far enough ahead. In 2016 Ang Lee showed a film shot in 4K at 120 fps. 48/60 fps seems to exist in an uncanny valley of perception that people find unpleasant – though how much of that is simply due to its association with lower-budget TV/video productions is hard to know. But by all accounts, the 120 fps video leapfrogs that problem and offers the sort of “transparency” that can’t be achieved with conventional 24p cine projection.

    With a standard shutter angle, 120 fps would correspond to a 1/240s shutter speed, giving you the sort of crispness you expect from stills. Extrapolate that to 6K or even 8K at 120fps, and you have essentially an endless stream of 18-33MP stills.

    The practical complaint in this scenario is the astounding data rate requirements for even 4K/120fps video, but storage and processing power continue their endless upward trajectory, so I anticipate this kind of discussion of limitations will sound a bit quaint 10 years from now.

    • That might work, though my own experience with 120fps video suggests something in the frame rate still doesn’t look quite right – it may well be the playback medium, though; I’m not sure all video cards will support 5K 120fps as the iMac requires…

      • Christian Hass says:

        But if you can shoot 120fps video you could merge 5 frames into one and get 24fps video, just like you do with multishot noise reduction modes on cameras today. That should give the look you’re after. It will take compute power but this is the future we’re talking about, right?
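
        A minimal sketch of that merging idea (numpy, with hypothetical stand-in arrays; real footage would also need converting to linear light before averaging, to avoid gamma artefacts):

        ```python
        # Synthesize a 24fps clip with natural-looking motion blur by averaging
        # groups of five 120fps frames (120 / 5 = 24) -- a crude approximation
        # of a long exposure spanning each 1/24s frame interval.

        import numpy as np

        def merge_to_24fps(frames_120: np.ndarray) -> np.ndarray:
            """frames_120: (N, H, W, C) float array of 120fps frames, N % 5 == 0.
            Returns (N // 5, H, W, C) frames with synthetic motion blur."""
            n, h, w, c = frames_120.shape
            assert n % 5 == 0, "need a whole number of 5-frame groups"
            groups = frames_120.reshape(n // 5, 5, h, w, c)
            return groups.mean(axis=1)  # temporal average = synthetic blur

        # Tiny stand-in clip: 60 frames at 120fps -> 12 frames at 24fps
        clip = np.random.rand(60, 120, 160, 3).astype(np.float32)
        print(merge_to_24fps(clip).shape)  # (12, 120, 160, 3)
        ```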

  4. Changes in the display medium would probably be the biggest driver in changing how photography is done, for me anyway. More bits per color channel, dynamic range, resolution – these are all welcome and appreciated, but they don’t change the fact that, in a lot of ways, the ultimate goal of the picture (for me) is to be worthy of printing and hanging on the wall. To that end, I shoot in a way that will maximize the appearance of depth, the visual impact of the image, and so forth. 10 years may be too aggressive a timeline, but the realization of the dream of a “true” 3-dimensional display would be a paradigm shift in determining what I shoot and how I shoot it. Something like the “Google street view of the world” mentioned earlier might be thought of as projecting this idea onto 2-dimensional media in a rudimentary way.

    • I agree, and have been saying the same thing for a long time. We can capture at best 100MP/image, but display 32MP (and usually quite a bit less). At this point I’d settle for a 2D display that’s sufficiently ‘transparent’ as not to impose its own qualities on the image. 3D would also imply 3D capture and everything that imaging and computation path requires. We’re not even there yet 🙂

  5. Exceptional article Ming – you make several thought-provoking points. Another area to consider is of course gear. Who will be left standing in ten years? Whether you like the A9 or not, it proves beyond a shadow of a doubt that the mirror box/OVF camera is not the future… much like the internal combustion engine. Lloyd recently wrote: “Show some sense of not living in the past!!! At the least, deliver a Nikon D900 which bridges part of the gap to mirrorless (an EVF option along with Pentax K1 style pixel shift and image stabilization)—compatible with F-mount lenses but also taking a new mirrorless lens line.” and “I want to see a relentless single-minded focus on innovation, quality, usability.” That neither Canon nor Nikon has done this is inexcusable IMHO. The door to the future of the gear industry (pro & semi-pro) has been left wide open by the two perennial favorites. For example: if a company like Hasselblad – or, on the opposite side of the sensor spectrum, Olympus – were to develop a camera spec’d out like the A9 but in a D500-size body, with Zeiss-quality lenses and at Canikon price points, wouldn’t it be “game over” for the two procrastinators?

    • It’s a bit of a shame, because those who’ve never experienced a really good optical finder don’t know what they’re missing. Maybe EVFs will get there someday – but guess what, it’s again down to sufficient resolution and dynamic range to be transparent…

      “For example; if a company like Hasselblad or on the opposite side of the sensor spectrum, Olympus were to develop a camera spec’d out like the A9 but in a D500 size body with Zeiss quality lenses and at Canikon price points, it would be “game over” for the two procrastinators?”
      I find it a bit troubling when people ask for things like this, because it’s just not possible (and if you re-read the sentence, the answer is already within). You can’t have “Zeiss quality” at “Canikon price points”, because the materials and quality control required make it impossible to do so – which is why it doesn’t exist. We can improve functionality and accessibility, but there comes a point when there’s so much overload in the camera that it’s confusing rather than useful; this is not something we want to do…

        • You shouldn’t find it troubling, as you yourself expect and demand the most from every dollar you spend. I’m not going to ask for sub-par technology and then settle for crap. I’m going to ask for the very best quality, then decide if the offering is sufficient for my dollar. The OVF/EVF argument you post has gotten stale, as you’re simply defending Hassy’s tech. I myself love a good OVF. I simply prefer an EVF for reasons other than optics.

        • Charles says:

          I don’t think Ming has changed his tune at all just because of his affiliation with Hasselblad, and I don’t think he is “simply defending Hassy’s tech”.
          OVFs and EVFs offer significantly different advantages and disadvantages.
          An ad hominem doesn’t change the fact that OVFs are currently superior for many purposes.

        • I’m not defending anybody’s tech – I’ve owned the best EVFs and OVFs and prefer the OVF, Hassy or otherwise. You’ll see I’ve said the same thing here in every review long before even owning a Hassy, nor did they invent the optical finder.

      • Peter Bowyer says:

        “It’s a bit of a shame, because those who’ve never experienced a really good optical finder don’t know what they’re missing. Maybe EVFs will get there someday – but guess what, it’s again down to sufficient resolution and dynamic range to be transparent…”

        This a thousand times.

        No idea how to get people to understand without using both. I am trying to “move with the times” but… give me a good OVF any day.

    • Frans Richard says:

      If you are talking about the door to the future, I don’t think Canon and Nikon have found it. I even wonder if they, especially Nikon, know where to look for that door. IMHO the future of camera gear will be smart, connected and adaptable. Smartphones will be the camera for the masses; enthusiasts will be willing to carry something bigger (m4/3, DX mirrorless) as long as it has the same smartness as a phone. Most pros will also be using smart mirrorless cameras, with larger sensors if needed. DSLRs will be niche products in the future, if they exist at all (outside of museums).
      Canon and Nikon had better reinvent themselves soon, or their future could be Nokian.

  6. stanis riccadonna zolczynski says:

    What about this – www.dropbox.com/s/btjguf56g52qrz0/Tailor8K.jpg?dl=0

    • stanis riccadonna zolczynski says:

      Sorry, I messed up the link. The article is on RedShark News, about extracting 8K stills. It seems that’s enough to print an A4–A3 page in a magazine. No still camera gives you 60 frames/sec. Of course it’s an expensive piece of gear, the Red Weapon 8K.

      • You still have the same problem with motion: you need some motion blur for smooth transition between frames in video, which is obviously not desirable for stills (and vice versa). Resolution isn’t the issue…

  7. I absolutely agree with #3. For the last several years I’ve thought that many of the things we use and deal with in life are at the “98% point”. I suppose a better digital camera could somehow be produced, but could anybody except the experts tell? I have become completely satisfied with the output of a couple of normal cameras (Fuji X100S, Panasonic LX100, Canon SL1), because I know that what you say is right: at this point – not even in the future – I am the limiting factor in the quality of the pictures I take.
    A friend of mine was going to buy a pair of speakers worth thousands of euros, and I told him he was crazy. Once you get past, say, 1,500 euros (and I’m drawing a figure out of thin air), I doubt any human ear could really hear the difference, except in individual tonal quality, which is a matter of taste, not accuracy.
    Piston engines have gone almost as far as they’re going to go. The increases in mileage and power output have somewhat levelled out year to year, because physics dictates that you have to burn X amount of fuel to do X amount of work. And you can buy a car here in Europe, for a reasonable amount of money, whose full potential perhaps only a few drivers per hundred could actually exploit. Once again, we are the limit.
    I’m not complaining, I’m enjoying all the excellent products that are now available to people of average means.

    • I wish I could say the continual pushing of limits is going to drive people to push themselves to make use of all that capability, but the reality is probably not…

    • Charles says:

      All very true.
      We now have superb photographic tools at our disposal, and optimising existing technologies is playing at the edges.
      To Ming’s point, it is far easier to sit back and wait 2-3 years for the next bit of amazing kit than it is to develop one’s technique, visual cognition, and finally to put in the leg time and hours needed to make a great photograph.
      I will put my hand up and cry “mea culpa!” in that regard: I have so many wonderful photographic tools it’s embarrassing, and I’m grateful for that opportunity, but photography is in the end my enthusiastic hobby and must by necessity give way to things I hold more important.
      The most valuable and scarce photographic resource we have is time.

        • We may not even be able to do that: the ‘next great leaps’ are technically no less impressive, but creatively much harder to deploy. Going from 50 to 100MP is by no means the game changer that, say, 8 to 16MP was – you can see the difference in the latter at small print sizes, but for the former you have to a) print very large or at very high density and b) have the necessary shot discipline to see the improvement. It’s a similar case with CCD to CMOS and the high ISO gains there – the extra stop or half stop we typically get with each sensor generation these days isn’t anywhere near as dramatic (or as practically useful, when it occurs from say ISO 6400 to 8000). That’s not to say the differences aren’t deployable, but we’ve very much hit the point where deploying them requires investing an equal amount of effort in improving oneself, too.

        • Charles says:

          Yes, absolutely agree: the big photographic gains are to be made between the ears and in investing the time to conceive, plan and execute a vision.

          • In a way, that’s never changed. Photography has always been this precarious balance between creativity and technical mastery: one has to fully understand the tools in order to use them to drive a vision that may never have been executed before, but at the same time, one can’t be subservient to their limits. A lot of the time, I honestly don’t think the new technology is useful in this pursuit – if anything, it can be more confusing, especially if one doesn’t have the necessary grounding first (which fewer and fewer people have, especially those who have come later to the party).

  8. Frans Richard says:

    My prediction? In 10 years’ time there will be so many ‘smart cameras’ (not only in phones, but in cars, bicycles, sunglasses, etc.) that know where they are and in what direction they are pointing, all instantly storing their images/video in ‘the cloud’, that Google will be able to recreate a ‘street view’ of anything, anywhere, anytime on earth (at least where humans live). You will be able to virtually visit any place on earth just sitting on your couch and take a virtual image of anything you want, without even having a real camera.

  9. “Prediction #1: we won’t see stills-video convergence” has already been, at least in part, disproven by the Apple iPhone’s “Live Photos” invention. (Still images that include a video clip from just before to just after the “still” exposure.)

    You already can easily “pick the perfect frame” from the Live Photo’s video clip after the fact (retrospectively changing the moment of “exposure”) using third-party apps, and Apple has announced that iOS 11 will similarly build in the capability to let you change the key photo for Live Photos.

    Given the statistics about the devices actually used for photography, Live Photos taken with cellphones are likely to constitute the majority of photographs taken in the not too distant future.

    If adequate fast storage is available, it is not much of a stretch to expect continuous video to be retained when multiple Live Images are taken in one session, so that you can scrub from one live image to another (without time gaps) to allow creation – after the fact – of additional live images from the continuous video stream that will exist as a result of “exposing” multiple “still” images.

    It also seems extremely likely that image processing will permit a sequence of, say, 30 frames per second sharp images to be computationally motion blurred to make viewing as video appear natural, just as Apple previously introduced computational bokeh for small sensor portrait images.

    • ‘Live Photos’ exists, but I don’t actually know anybody who uses it. IS is also disabled and resolution limited, plus with the motion blur from video you can’t print the stills either. And it isn’t quite what most people had in mind with convergence (certainly not me) – it’s a bad photo plus video that doesn’t really say much, and definitely no better at getting the point across than timing the moment correctly. Picking the perfect frame is more useful (though again, it’s limited by the requisite video motion vs. stills staticness) – watching, say, a 30fps burst of stills looks quite different to a 30fps video. As you point out, computational motion blur may eventually solve this last bit.

      • Industry leaders never see it coming, because they know too much and have too much invested in the status quo – so they see the competing technology’s current/temporary technical limitations, and so miss the coming revolution caused by the “good enough” replacement.

        “[The telephone] has too many shortcomings to be seriously considered as a means of communication. This device is inherently of no value to us.” —Western Union executives, 1876
        ” … the horse is here to stay, but the automobile is only a novelty – a fad.” —President of the Michigan Savings Bank, turning down Henry Ford for a business loan, 1903
        “Airplanes are interesting toys but of no military value.” —Marshal Ferdinand Foch, 1911 (later, Commander of French Military Forces in World War I)
        “Television won’t last because people will soon get tired of staring at a plywood box every night.” —Darryl F. Zanuck, movie producer, 1946
        “The potential world market for copying machines is 5,000 at most.” —IBM, 1959
        “There is no reason anyone would want a computer in their home.” —Ken Olson, President, Chairman and Founder of Digital Equipment Corporation, 1977
        “We won’t see stills-video convergence.” —Ming Thein , 2017

        Ming, you are obviously brilliant and extremely expert – and so, along with the leading camera companies, you seem to be failing to see that the clearly “inferior product”, with only a few years of technical improvement, will obsolete and kill the industry-leading products. Yes, miniature 35mm Oskar Barnack camera negatives were inferior to 8″×10″ format, minicomputers were inferior to mainframes, and microcomputers were toys compared to minicomputers; but in each case the newcomer nevertheless crushed (though never entirely eliminated) the “much better” existing product.

        What iPhones with Live Photos demonstrate is the compelling user interface and ease of use. Likely, in less than five years, a Sony A9-type camera will take world-class, sharp, high-resolution still photographs at a 30 frames-per-second video frame rate. Since you agree that computational motion blur may well display such a sequence of sharp still images at video frame rates as natural-looking motion-blurred video, it seems inevitable that there will soon be a stills-video convergence with spectacular image quality. [Not only that, but the camera’s artificial intelligence can (if you fail to time the exposure button press exactly right) help select the still showing eyes open, with a smile, eye(s) in focus, and that lacks subject motion blur.]

        • Phones have already killed most cameras.

          I didn’t disagree with you – I said that the current state of play has a big disconnect between capture and output, which is preventing adoption now, as has been the case across almost all of photography. We can easily capture 24MP+ on even entry-level devices, but only display 14-15MP on top-end output devices. I’ve been saying this for some time, actually.

          What I’ve pointed out is missing on screen and in print (and have been trying to bridge with Ultraprints) gets ignored and dismissed as an obsession with resolution; but just look at what all of the electronics manufacturers are doing with displays: going higher and higher density for more realistic output – which is precisely what I’ve been trying to say. The core of the new experience lies in immersiveness, and in bridging reality, perception and creation. Yet the same people who accuse me of being over-technical, clinical and dry suddenly develop selective comprehension. Go figure…

  10. Ming – first up congrats on your new alliance with Robin. 🙂

    Your prediction about the stills/video convergence is one that has been troubling photographers in my industry as a practical example of what you’re discussing. Unit stills photographers over the past few years have been concerned that the intrinsic simplification of digital meant that studios or networks would no longer need stills photographers on set, because they would be able to just pull a frame out of the digital process and use that (and indeed, sometimes they do exactly that, but it was something that happened from time to time in the days of celluloid as well).

    Beyond your argument, I continue to assert that the logistics of getting promotional imagery out in a timely fashion, well ahead of a project’s release date, don’t typically coincide with, nor lend themselves to, the chronology of processing the motion imagery (and, in fact, would probably get in the way of that workflow, as both would be different processes trying to work off the same data set). Moreover, stills photographers typically get different vantage points that might work better for promotional imagery for a variety of reasons. And, of course, there’s always the requisite behind-the-scenes stuff.

    • Thanks!

      The reality with unit stills is that video extracts don’t quite look like proper stills – what looks good as smooth video motion doesn’t translate to a good still, and vice versa (as you no doubt know). All I can say is you’re going to have to work even harder to demonstrate that difference now 🙂

      I think behind-the-scenes work probably has more legs for now – as the audience becomes more interested in the whole ecosystem of a project rather than just the results, there’s more demand for media (and of course more use for it in promotion, too). I’ve been doing this for the last few years: ‘telling the story of’ is something that’s going to have increasing value, since it also helps to reinforce the value of the final product.

      • Work harder, yes. But the logistics of the workflow is the big one, I think. Data comes off the Arri, goes to a digital tech, goes for processing, grading, etc., before being sent off to editing. Bogging down that whole process between grading and editing by introducing the publicity department into the mix, so that they can have someone pore through thousands upon thousands of frames looking for what they believe to be just the right moment, doesn’t simplify the process; it actually adds a whole new category of creative who must themselves be paid to do that (and relied upon to pick the best moments, which is what the stills person does more efficiently and with less waste on the day). Not to mention the fact that publicity typically needs all these still images well before the footage goes into the editing workflow.

        Totally agree on the story behind the story element, though. People are as fascinated by the “making of” as they are by the finished product itself, and in fact, that material typically also shows up very early in the marketing phase, well before the property is released. Which, again, also emphasizes the need for a separate, independent process that can get the material out early, separate from the complexities and logistics of the film editing workflow.

  11. What I hope to see is improved user-based design. Improved user interface, improved ergonomics, higher reliability. Everything I want already exists – just not quite yet in one camera! So my wishlist is within sight.
    I was discussing with my wife the replacing of our 6 year old family car. I asked her ‘what’s actually different about the current one?’. Her answer was mostly to do with the car’s electronics interface, phone, etc. The ‘guts’ of the car – engine, transmission, brakes… haven’t changed much in years. I feel we are reaching that same point of maturity in cameras. Most cameras exceed what I need – on paper. But the user experience is still a point of frustration on many.

    • I agree. That said, it seems it’s not as simple to get what we want as we think…said from the position of somebody who probably has more say than most about what goes into the tool. I know at least one company is trying, though! 🙂

  12. Ming,

    A bit of a side track to the main point of the article, but your discussion of higher video frame rates and the effect on the viewer is perfectly demonstrated by Ang Lee’s recent Billy Lynn’s Long Halftime Walk. The high frame rates of the flashbacks are jarring.

    Best Regards,

    ACG

    • The jarringness is probably because there isn’t enough motion blur to smooth transitions between individual frames – i.e. shutter speeds too high.

  13. For all its advantages, the key limitation to digital stills photography remains the technical inability to deliver “image quality by the yard”.
    Ming noted the limits to lens design and sensor resolution.
    If there is to be a new digital photography revolution, it will be the delivery of digital “image quality by the yard”. That would make real digital large format photography possible, and not just the scanners implemented to date.
    I reckon that digital “image quality by the yard” is more than possible, if we consider the evolution of nanofilms and microcircuit printing.
    It may well be that in ten years, we have merged the film-sensor and digital sensor to obtain the best of both: effectively unlimited sensor size, and instantaneous electronic image capture.
    Though I know the “electronic film” concept has already been attempted and failed as too far ahead of materials technology, I would love to be able run a digital sensor through classic cameras like the Leica M3, Hasselblad V series and XPan.
    (And before the “Film will Never Die” enthusiasts pillory me for saying so, I still think two such technologies would live side by side for purely creative and aesthetic reasons.)

    • The sensing medium isn’t easily scalable, unfortunately – there’s no such thing as a modular sensor, though early MF digital often used smaller sensors welded together. I can’t imagine this being practical for even larger formats.

      What might work, however, is some sort of condenser optics at the film plane that could downsample to say 54x40mm…

      • Charles says:

        I agree that the current silicon wafer technology isn’t scalable, and certainly not economically.
        We are approaching the limits of that approach.
        I was thinking about something more along the lines of an electronic circuit implemented as a nanoparticle film. Something that is intrinsically fault-tolerant, cheaply printable on its substrate, and economically scalable.
        We don’t have a technology yet, but I think cost-effective scaling of electronic sensors in some form is where there are still huge gains to be made.

      • ” there’s no such thing as a modular sensor, though early MF digital was often smaller sensors welded together. ” I’ve always been curious why this isn’t done more widely – it seems like it would be much cheaper to make a medium format camera by sticking together a 10 by 10 grid (say) of phone camera sensors than making them in a single piece, with the associated manufacturing and yield issues. Do you know why this isn’t done? Is it that sensor architecture makes edgeless sensor designs impossible?
        And I agree with you on the video/stills divide, by the way, although we may see far more use of video than we do today – notice, for example, how many people now post short video clips to social media rather than stills.

        • Lots of reasons. You can’t have a completely edgeless sensor (which you’d need for at least the central units) plus there are inevitably serious issues in matching up the output from the different elements – color, gain, etc. And the two-piece stitches were prone to cracking or showing noticeable artefacts even after calibration since the differences were often small enough to notice, but too small to consistently linearise under all conditions.

          Outright video is increasing in popularity simply because it’s now become practical to do so in terms of bandwidth – no way could we upload decent 720P (let alone 1080P or 4K) even five years ago from mobile devices…
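
          A toy sketch of that matching problem (all numbers invented for illustration – a hypothetical, slightly nonlinear per-tile response calibrated at one level, showing the residual seam at others):

          ```python
          # Why stitched tiles are hard to match: a single-point gain calibration
          # cannot remove small per-tile nonlinearities, leaving a residual step
          # across the seam. All numbers are made up for illustration.

          def tile_response(signal: float, gain: float, nonlin: float) -> float:
              """Toy model of a tile's output: gain * s + nonlin * s^2."""
              return gain * signal + nonlin * signal ** 2

          # Tile A is the reference; tile B has a 1.2% gain error plus a tiny
          # nonlinearity. Calibrate B against A at mid-grey (signal = 0.5)...
          cal = 0.5
          correction = tile_response(cal, 1.0, 0.0) / tile_response(cal, 1.012, 0.004)

          # ...then check the residual mismatch at other signal levels:
          for s in (0.1, 0.5, 0.9):
              a = tile_response(s, 1.0, 0.0)
              b = tile_response(s, 1.012, 0.004) * correction
              print(f"signal {s}: residual {abs(a - b) / a * 100:.2f}%")

          # Zero at the calibration point, ~0.16% at the extremes: small, but a
          # hard-edged step of that size across a seam can be visible.
          ```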

          • Ah, fair enough. That’s a shame as it seemed like the best way of leveraging the R&D on smaller sensors. Guess we’re stuck with $10,000 MF cameras then 😦

Trackbacks

  1. […] article continues from Part I in the previous post: where will photography land up in the next ten […]