Beyond the numbers: what’s next?

An illogical, whimsical image shot with a camera that’s enjoyable to use. There is a reason why I chose this image as the header for the article – read on and all shall be revealed…

Since the beginning of the medium – supposedly the view from Niépce’s window in 1826 or thereabouts – we have been chasing more. More is supposedly better. More of what? More of everything: resolution, clarity, size, maximum aperture, focal length, width…anything that can be quantified. It is arguable that sufficiency was achieved for the capable photographer quite some time ago; what’s more interesting is that sufficiency has also been met, and far exceeded, within the reach of the typical consumer. And I think that finally, several years on, people are beginning to realise it. So: where does photography go from here?

This article won’t be along the same lines as my earlier article on technology, art and pushing the boundaries. Rather, I’d like to approach this from a different perspective, and one I always try to keep in mind: the end image.

I blame the incessant drive for numbers on the marketing people: without a short, catchy, demonstrably-better tagline – after all, 20 is better than 16, right? – it actually requires some thought to sell a product in the ever-increasing quantities that headquarters demands. In fact, the camera companies have done such a good job of conditioning consumers to believe that higher numbers are better that they’ve now shot themselves in the foot: on one hand, buyers expect bigger numbers at lower prices, which is challenging from a business standpoint; on the other, buyers are no longer seeing the improvements they became accustomed to during the frantic pace of technological change of the last few years, which means a different approach is required, and probably a contradictory one. Finally, pick a remaining foot: more is simply not practical for most people, whether due to cost, physical size or file handling. There is no point in making something people can’t afford or use, and there’s even less point trying to break the rules of physics.

Let’s look at this practically:

More megapixels mean you need better lenses, more accurate focusing, more storage space, more powerful computers and much better shot discipline to obtain crisp-looking results. Better lenses mean more glass, more complex optical designs, more physical weight and a higher risk of production sample variation affecting results. Those are not good things. More storage and computing power mean more cost; Facebook doesn’t increase in resolution, and desktop screens are still mostly HD – that’s 2MP. Even assuming that photosite technology continues to improve, for equivalent underlying circuitry, bigger pixels will always have better color accuracy, acuity and dynamic range than smaller ones. Lower density is also less likely to show camera shake.

Faster apertures might be nice for bokeh (if you’re into that kind of thing) and low-light work, but you’ll also need much more accurate autofocus, and very large glass to maintain quality even wide open. Just look at the difference in glass size between the Zeiss Otus 1.4/55 and the 2/50 Makro-Planar – the 2/50’s front element is about 27mm across; the 1.4/55’s is a whopping 70mm. And we haven’t talked about cost yet, either. And if you’re not going to use it wide open, what’s the point?

Wider/longer lenses mean more extreme perspectives: beyond the obvious physical size requirements imposed by the rules of optics – and the associated cost – how many people really know how to use an 8mm ultrawide or 800mm supertele effectively from a compositional point of view?

I did just say bigger sensors are better, all other things being equal. But they also mean you need bigger bodies and bigger glass. There is an optimum ergonomic size that balances weight, handling, portability, cost, etc. – I think that’s perhaps the size of an E-M1 with the 12-40/2.8, or thereabouts; assuming that effectively all sensors in consideration are going to hit sufficiency from an image quality standpoint, this is about the size we should be aiming for. The sensor should therefore be the largest it can be whilst balancing out the requisite size of properly matched optics. Look at the Sony A7R: the body is the same size as the E-M1’s, but the sensor is enormous. Paired with primes, it makes sense; paired with zooms, the handling is terribly imbalanced. There’s also no room within the body for a stabilizer, so all of the lenses must have it built in instead – once again making them larger. The E-M1, on the other hand, is always balanced, though it might well be possible to fit APS-C within the same footprint. Look at the Ricoh GR.

Smaller sensors go in the opposite direction: assuming pixel-level technology improves, doesn’t this mean we can get away with even smaller sensors, and thus smaller cameras? Yes, if you just want to skim the edge of sufficiency: this doesn’t leave you a lot of margin for contingent situations. An iPhone will give great results in bright light, but average indoors lighting at night is already pretty ropy. Since there’s no point in making cameras that are too small to handle, I’d rather have a bit more and forgo the token camera shoved into every single device you can think of – even those which really don’t need them.

I keep coming back to the point of sufficiency because it’s one of the most misunderstood concepts among consumers, photographers and camera companies alike. Here’s what sufficiency means in real terms. The megapixel numbers below assume that a) you are sharp at the actual-pixels level, i.e. have good technique; b) you are using a Bayer sensor; and c) noise is a non-issue or barely noticeable up to ISO 1600 or average night conditions. Here’s what you need:

  1. Hipstagram – 0.3MP, quality doesn’t really matter anyway
  2. Social media/ facebook/ twitter/ etc – 800x800px: 0.64MP
  3. Dedicated photo sites/ flickr etc – 2000x1500px: 3MP
  4. 6×4″ minilab print, 144dpi – 864x576px: 0.5MP
  5. Single page newsprint ~20×15″, 72dpi – 1440x1080px: 1.5MP
  6. HDTV playback – 1920x1080px: 2.1MP
  7. 18×12″ print, 240dpi (upper limit for most hobbyists and a lot of pros) – 4320x2880px: 12.4MP
  8. Double page A4 magazine spread 16.5×11.7″, 240dpi – 3960x2808px: 11.1MP
  9. 8×12″ Ultraprint, minimum 500dpi, ideal 720dpi – 4000x6000px to 5760x8640px: 24-50MP
  10. Very big billboard 40x20m, 5dpi – 7874x3937px: 31MP
  11. Large fine art print, 36×24″, 240dpi – 8640x5760px: 50MP
  12. 10×15″ Ultraprint, 500-720dpi – 5000x7500px to 7200x10800px: 38-78MP
  13. 16×20″ Ultraprint, 500-720dpi – 8000x10000px to 11520x14400px: 80-166MP (!)
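
Every entry in the list above comes from the same arithmetic: pixels per side = print inches × output dpi, and megapixels = width × height ÷ 1,000,000. A minimal sketch for running your own numbers (the function name is mine, purely illustrative):

```python
def required_megapixels(width_in, height_in, dpi):
    """Pixels needed for a print: each side in inches times output dpi."""
    w_px = round(width_in * dpi)
    h_px = round(height_in * dpi)
    return w_px, h_px, (w_px * h_px) / 1e6

# 18x12" at 240dpi -- the practical ceiling for most hobbyists
print(required_megapixels(18, 12, 240))   # (4320, 2880, 12.4416)

# 16x20" Ultraprint at 720dpi
print(required_megapixels(20, 16, 720))   # (14400, 11520, 165.888)
```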

I’m willing to bet that most people don’t know that 12MP or less is more than enough for just about every conceivable use. (And 5dpi for a billboard of that size is extremely high resolution; your viewing distance is going to be 50m. You could get away with 1dpi.) A clean 12MP is no great challenge even for today’s compacts – the Sony RX100 II will do ISO 3200 at 20MP with impunity, and downsizing to 12MP will look even better. I print regularly at 36×24″ and larger – my last exhibition had prints from 32×32″ at the smallest to 60×90″ at the largest – but let’s be honest, how many others do? Our output media are the limiting factor, not the capture device. Even though pixel quality matters, under optimal conditions an iPhone 5/5s would meet most of these requirements.
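
Downsizing helps because averaging neighbouring photosites averages out their noise: combining four pixels into one roughly halves the random noise, since noise falls with the square root of the number of pixels combined. A toy simulation of a flat grey patch makes the point (synthetic data, not from any real sensor):

```python
import random
import statistics

random.seed(0)
# Simulate a flat grey patch: true value 0.5 plus Gaussian per-pixel noise
native = [0.5 + random.gauss(0, 0.05) for _ in range(400_000)]

# Naive 4:1 downsample: average non-overlapping blocks of four pixels
downsized = [sum(native[i:i + 4]) / 4 for i in range(0, len(native), 4)]

print(round(statistics.stdev(native), 3))     # noise of the native file, ~0.05
print(round(statistics.stdev(downsized), 3))  # roughly halved, ~0.025
```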

You’ll notice I haven’t said anything about the Ultraprints. Simply: most people do not have the means or inclination to make one. The level of commitment required at every stage of the process is very, very high; I do it partially because I’m masochistic, and partially because I’ve always wanted to push the limits. If this weren’t the case, I’d settle back into my comfortable 16MP (and still do, for a lot of applications). My personal conflict comes when I happen to encounter something that would make a good Ultraprint but didn’t bring enough resolution to make it happen, especially knowing that the tech and shot discipline are very much within my reach and the compromise was due to personal laziness.

I haven’t spoken about colour, dynamic range and tonal response. In short: we’re not there yet. I do see colour reproduction improving with every subsequent generation, though we’re fast hitting the limits of the output media – without calibrated displays and universal standards, colour is academic. No matter how accurate the reproduction and how precisely I tweak it, there’s no way I know you’re looking at what I’m looking at. Ironically, we actually have Apple to thank here: colour standards are remarkably consistent across the i-devices, and the Retina iPads have pretty impressive dynamic range, gamut and neutrality – even for monochrome images. Pixel density on the Retina iPad mini is so fine that viewing images on that display is really like looking at a large format transparency on a light table – a very pleasant viewing experience.

Dynamic range and tonal response are a bit trickier: extending dynamic range is one thing; outputting it in a way that looks ‘right’ is completely different. I can understand why you’d want a very linear tonal response for ease of later processing – this is what most raw files look like today – but at the same time, that linearity and the resulting contrastiness are not good for fast workflow or good native output (JPEG). We’re going to need several things here: more dynamic range in the raw data to work with, and a way of allocating it nonlinearly so that highlight rolloffs become smoother and more natural-looking – akin to what we see with our own eyes. One of the reasons I still use film for a lot of my monochrome work is this inbuilt nonlinearity, which both preserves enormous dynamic range and manages to store it in a format that’s easily digitised and still retains that look. Finally, there’s again the need for better display standards – not just for output, but also file handling. Surely there must be a better compression mechanism than JPEG available by now, and one that supports more than eight bits?
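
The kind of nonlinear allocation I mean can be illustrated with a toy example (my own sketch, not any manufacturer’s actual curve): a Reinhard-style rolloff, x/(1+x), compresses highlights smoothly toward white instead of clipping them, loosely analogous to the shoulder of a film characteristic curve:

```python
def linear_clip(x):
    """Naive linear response: everything above 1.0 clips to pure white."""
    return min(x, 1.0)

def reinhard_rolloff(x):
    """Toy highlight shoulder (x / (1 + x)): bright values are compressed
    smoothly toward 1.0 instead of clipping, so separation between
    highlight tones survives."""
    return x / (1.0 + x)

# Scene luminance at 1, 2, 4 and 8 times nominal 'white'
for x in (1.0, 2.0, 4.0, 8.0):
    print(x, linear_clip(x), round(reinhard_rolloff(x), 3))
```

The linear response renders everything from one stop over onwards as identical featureless white; the rolloff keeps each extra stop distinguishable, which is exactly what a smooth highlight shoulder buys you.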

The polar bear in the room is of course the sack of meat behind the camera: we can argue that composition takes precedence over theoretical image quality, and a strong idea with slightly weak technicals will always make a better image than a sharp but boring one. The obvious question then becomes: how do we improve the weak link? More importantly, how do the camera companies help people improve themselves and stay in business, given the paradox that if you are happy with what you’ve got, you won’t buy any more gear, and that’s bad for sales?

Yes and no. If I ran a camera company, I’d follow a three-step plan: first, get people to take more photographs by making the process fun and simple. Then educate them; they’ll realize they need more, and that’s when you have the products ready. Finally, there has to be a culture of innovation in the company itself. Not Sony-style product ADD, but genuine common sense: know your customer. If your product is targeted at families on holiday, actually find out what they want and make it work flawlessly. Test it. Cut out all of the unnecessary stuff and the stuff that’s there ‘because it always has been’. Be consistent. Do it at all product levels. Even if you fail at the other parts, people will buy it because it’s different; they’ll only stay in your system because it’s better or more logical. And be serious about the education part: it isn’t a short-term game and it won’t have directly measurable ROI, but you can be sure there will be much better customer loyalty to a company that cares. The more educated and savvy your customers, the easier it is to sell them niche, specialized (read: high-margin) equipment. It is much easier to convince me to buy an Otus than a 100D, and much easier to convince a studio pro to buy an IQ180 than a Coolpix; knowing industry margins, you’d have to sell a thousand Coolpixes to make up for one IQ180.

The tricky part is making the whole process fun and unintimidating to bring in new photographers. At the moment, the only card being played is the retro one; that makes no sense for reasons I detailed in the Df test. Simply put: a different machine needs different control logic, even if the output is the same. I shoot frequently with my Hasselblad V series cameras, which have three buttons and a few knobs – one to release the lens, one to take the picture, one to lock up the mirror; rings to wind and set focus and exposure. That’s it. I also shoot frequently with the D800E, which has a mind-boggling thirty six external buttons, switches, toggles and levers – excluding those on the vertical grip. Yet all of them are necessary and make sense. Even a ‘simple’ digital that offers full control, like the Ricoh GR, which I consider to be the absolute minimum, has fifteen buttons/ toggles/ whatnots. And then the iPhone came along with two, in its simplest form: one tap to focus/ expose, one tap to capture. Yet it works well, because the target audience doesn’t care about the details; those who know how to use it work around or use an app. And that is brilliant in its simplicity.

As an object, the feeling in-hand matters: you have to want to use your camera, and in doing so, you’ll take more pictures. If you take more pictures, you’ll eventually want to learn how to make them look like what you imagined; that’s where education comes in. Though some manufacturers get the haptics and tactility right (even if ergonomics often leaves something to be desired), none of them do education properly or consistently. Cameras should include not just manuals, but some photography education – video, books, whatever – aimed at bringing the intended buyer of the camera to a slightly higher level than before they owned it. That too is a simple sell: why wouldn’t you choose a camera that actually helped you improve over one that presents you with a phone-book-sized list of custom functions instead? Hint: custom functions are useless if you don’t know how they relate to practical photography. And to any manufacturers reading this, I’d certainly be very interested in collaborating on such an educational venture; consider the challenge issued. In the meantime, I’ll just continue to wait for the people who go through the entire gear-buying cycle to realize it isn’t the equipment that’s holding them back.

And there’s my list: improved output/ presentation, fun, common sense/ intelligent design and education. Not quite what you expected, was it? MT

____________

2014 Making Outstanding Images Workshops: Melbourne, Sydney and London – click here for more information and to book!

____________

Visit the Teaching Store to up your photographic game – including workshop and Photoshop Workflow videos and the customized Email School of Photography; or go mobile with the Photography Compendium for iPad. You can also get your gear from B&H and Amazon. Prices are the same as normal, however a small portion of your purchase value is referred back to me. Thanks!

Don’t forget to like us on Facebook and join the reader Flickr group!


Images and content copyright Ming Thein | mingthein.com 2012 onwards. All rights reserved

Comments

  1. Chris Searle says:

    A camera manufacturer’s primary business is generating profits for its shareholders. There are clearly lots of people who feel they need 36 or 50MP to photograph their kids playing in the backyard, so the blind desire for bigger and better will continue to drive the numbers up as long as it is physically possible (does Moore’s law apply to sensors, I wonder?).

    • It should, in theory. But Moore’s law doesn’t apply to the whole optical system, since diffraction is diffraction – a hard limit of the physics of optics…

  2. Hello Ming Thein, I admit to not having read all the comments, so please forgive me if this question has been asked before: What size prints can you comfortably do with this camera (and I guess all other m4/3 cameras)? I am presently using a D700 and print up to A2 with it. Could I do that with this camera?

    • If that’s your threshold of acceptable, then 3×5 feet is no problem. But I don’t go larger than 8×12″ with the M4/3 cameras anymore because of the Ultraprint requirements. Even the D800E tops out at 10×15″.

  3. John Cleaver says:

    One slightly contrary point (which I don’t think has been covered in the original piece or the comments – but this piece has attracted so many comments that I may have missed something).

    The assumption that higher-resolution sensors inherently require better glass can be turned around. One valid reason for providing extra pixels may be to permit better correction for images generated with less good lenses – for instance, the interpolation associated with locally re-scaling colour-separated images for reducing the effect of lateral chromatic aberration will be facilitated by making the original image at higher pixel density.

    As an aside, one may speculate that the coincidence in timing for the releases of the monochrome Leica and of the 50mm Apo-Summicron-M f/2 Asph actually was a recognition of the loss of correction possibilities that was an inherent consequence of the use of a monochrome sensor.

    So, even when the appropriate output pixel number corresponds to that from my D3 – which meets nearly all my needs very well – having two or three times the number of pixels could be beneficial for intermediate process stages for correction, particularly of the outer parts of the field. Of course, this applies only when the correction software is sufficiently good – and, even more important, when the lens performance is symmetrical and consistent over time, to make its aberrations amenable to correction.

    • You definitely didn’t miss it – that’s an entirely new and very valid point. And of course there are the benefits of downsizing; we’re looking at interpolated Bayer output, not native X MP. As such, a D800E file downsized to D4 size will always look better than the native D4 file.

      Monochrome sensors forgive all sorts of errors – I deliberately tested the 50AA on the M9 too, for that reason.

      • John Cleaver says:

        I agree that there is forgiveness in the monochrome image in the sense that the colour fringing from lateral chromatic aberration inherently is not apparent; however, the superposition of imperfectly-matched desaturated colour images must degrade the spatial resolution (or at least the contrast transfer) in that final monochrome image. Probably it’s not significant in virtually all general photography, but it could be an issue in specialist areas such as the imaging of ancient documents where the fidelity of the image must be constant over the entire image area.

  4. Manfred says:

    Maybe a D70 would be a good bet to cover the needs of 90% of today’s camera buyers. With its CCD sensor it had very good tonal response, and even the blow-ups I’ve seen looked excellent.
    For some it may be the fascination of advancing technology, while others (the victims of sheer numbers) cannot bear seeing a smartphone next to them with better numbers, so they need the latest and greatest to support their belief that they have the best.
    You are right: the generation of the Olympus E-M1, the Panasonic GH4 and the Fuji X-T1 has reached the level of DSLRs; while smaller and lighter, they are not really less expensive. And for DSLR owners they require setting up a new system, as even the lens adaptors are an emergency solution rather than something for daily use.
    The two remaining big DSLR makers in particular have opted for the race of numbers and features, but that decision may take them into serious difficulties. I guess they are heading into the typical innovator’s dilemma (which Apple apparently faces right now).
    From the lower end, high-end smartphones and a plethora of P&S and entry-level DSLRs keep pushing (I guess a D3300 would outperform older top-end professional cameras like the D2Xs in most applications, and those are only a few years apart). In the middle section, the above-mentioned mirrorless cameras cut in, while the big two, it seems, have missed the train (the Nikon 1 and the Nikon A are failures, at least in western countries). At the same time, the air in the top-level section has become considerably thinner.
    If one sees Nikon’s 16MP DX sensor as a stepping stone on the way to the D800’s 36MP sensor, the current 24MP DX sensor points to a 54MP sensor. I bet the big two already have something like this up their sleeves, but they can’t bring it out because they don’t have the glass to cope with such resolution. The D800 already showed how mercilessly high-resolution sensors reveal lens weaknesses.

    Looking at recent lenses, we can observe a tendency toward larger lenses and larger diameters. The Otus is not only the best 50mm lens, but by far the largest, and comes with a 77mm filter thread. It therefore does not seem too far-fetched to assume that lenses for the next sensor generation will need to be considerably bigger, which raises the question of whether that can still be done with Nikon’s and Canon’s standard mounts. Canon have already trained their customers to get familiar with new mounts, but for Nikon it was always a USP to stay with the mount it has used since the ’60s.
    Is that the reason why Nikon is so hesitant about bringing out new lenses, even though it has an overwhelming number of new lens patents? Meanwhile it has to watch Zeiss, the Sigma Art line and various other lenses cut into its business. Or is it plain ignorance?
    Maybe one can compare the situation of the big two with that of the traditional German camera makers a decade after WWII: the vast majority of them lost their business to the Japanese. Not that I see another DSLR maker on the horizon, but the business philosophy is nevertheless devastating. Nikon always stresses that it listens to its customers, but it doesn’t, or listens only to a small fraction, like wedding photographers. How else could one explain Nikon’s failure with its 58mm after the Otus had set the bar for sharpness? And now even Sigma’s 50mm seems to blow Nikon’s 58mm out of the water at half the price.

    Being required to change to a new mount would probably anger a lot of customers. They might then stay with what they already have, which wouldn’t be an unreasonable decision, as the current DSLR models have certainly reached some sort of summit. Therefore, I guess, Canon and Nikon shy away from it like the devil from holy water.

    The big question is: “Quo vadis?”
    Perhaps the major part of photography is covered by smartphones and entry-level gear with kit lenses. And that is sufficient for the applications of the majority: shots of the baby and of parties, immediate uploads to Facebook, me-in-front-of shots and some casual holiday snaps.
    Those wanting more have to spend serious money on gear covering many aspects of photography; it is in fact no great feat to spend €20,000 on a fairly complete kit.
    Airline travel comes with more and more limitations, such as no liability for valuables in checked luggage and strict weight limits on carry-on. Thus one is forced to compromise and change to smaller and lighter gear (which points to high-end mirrorless).

    The other side is the very limited possibility of financing equipment through photography. Aside from some well-paid assignments accessible only to well-known photographers, the rest of the flock has to battle with stock photography and its increasingly bad terms for photographers.
    On top of that, from the ’70s to the ’90s taking photos was still welcome: most people were rather enthusiastic about being photographed, and public places and sights were still open to photography. Not anymore. Today one has to apply and beg for permits, often ending up confronted with financial claims (while still not knowing whether the shots concerned will ever generate a single penny) or with a flat no.
    And when one is lucky enough to generate someone’s interest in paid photography, it seems the rule rather than the exception to be asked what equipment one can bring. What? No medium format? No Hasselblad?
    And a little later it turns out that the price they’d be willing to pay would, in the best case, cover a shoot with a P&S. I think 95% of photography, if sold at all, goes over the counter for less than on a Black Friday super sale.
    At the same time, most photojournalists are being outsourced or forced into self-employment. A good share of sports photographers earn so little that they can’t even pay for their health insurance.
    Topics such as landscape and travel are so thoroughly covered that you literally need to go to the end of the world or the bottom of the ocean. Other than that, one needs excellent connections to National Geographic or is thrown back on publishing on one’s own blog, free of charge.

    Bottom line: photography at a high level is increasingly becoming a hobby for wealthy people who are content with an acknowledgement at one juncture or another.

    This conglomerate of side conditions certainly bears down on the makers’ sales options. But what they seemingly don’t understand is that if they give up on their middle- and high-end customers, e.g. by prioritizing the Nikon 1, the whole thing will implode one day.

    • I fear that in the not-so-distant future, making images professionally is going to become so financially unviable that there may not really be a profession of photography as we currently know it. Somehow we’ll land up with a pursuit that’s simultaneously mainstream and at the same time esoteric. What then?

  5. johncarvill says:

    >Cameras should include not just manuals, but some photography education – video, books, whatever – aimed at bringing the intended buyer of the camera to a slightly higher level than before they owned it.

    A very intriguing suggestion, which I don’t think I’ve seen elsewhere. Imagine if one of the differentiating factors between, say, a flagship prosumer Nikon DSLR and its Canon equivalent was the quality, quantity and style of the accompanying educational materials. The mind boggles. Maybe I’m just in a bad mood today, but it feels unlikely we’ll ever see such a phenomenon take root. I wish it were otherwise.

    • Frankly, I feel like you should master one level of camera before you’re even allowed to buy the next one – it would get rid of a lot of ‘how do I set X’ questions…

  6. GREGORIO Donikian says:

    We are missing the Steve Jobs of the image business. With all the cameras around, I don’t have what I really want; I’m stuck with lenses and flashes from different systems, and I’m tired of trying a new camera every two years. I love my D800 and my Fuji X100. I’m dropping all the other systems and may be going to an X-Pro 2 sooner or later.

    Greg

  7. Ron Scubadiver says:

    Ming, as far as MP goes, few need 36, but it is nice to have so many because it gives one the liberty to crop. I suppose one could say have the right lens, or frame carefully, but life is full of the unexpected. Well, you know, there is all sorts of gear out there, but most improvements come from improving the photographer.

    • Cropping is evil, Ron. 🙂

    • Few can fully use 36. But some of us have found a way to use all that and more – I suppose I’m in the very small minority.

    • mosswings says:

      I guess that I would restate that as “grants one the privilege of cropping for the cost of greater discipline in its capture”. And I would also add that for many, 24MP may be too much as well. So if we’re constrained to something around 16MP by the motor-skill limitations of the average photographer, where does that take us? Probably where µ4/3 is today: extremely small differences in IQ between it and APS-C, the ability to get f/2.8 pro-quality midrange lenses in the same form factor as an f/4-5.6 step-up DX zoom, and primes that are truly tiny. Hmmm…

      • Ron Scubadiver says:

        Discipline in capture sounds good on paper, but reality is different. Most of what I shoot is instantaneous and changing. Going a little wider really helps. You can always take away, but you can’t put back. The smaller formats have their uses, at the expense of dynamic range, high ISO noise and lack of DOF control. I will not go so far as to say that I must have 36 mp, but that is what my D800 came with. It may be overkill, but better too much than not enough.

      • Stabilizers skew that equation somewhat, but you also have to remember the resolution-diffraction-aperture-sensor size tradeoff…

  8. Very interesting observations, particularly the last couple of graphs or so.

    I definitely agree that the camera itself, as an instrument, can certainly inspire one to push their photography, in much the same way as a fine performance automobile beckons its owner to drive it … possibly to even become a more competent driver.

    I think we have had enough decades now of the camera as both creator of art … and as objet d’art in itself. Witness all the “camera porn” photographs we see … some of which are elaborately staged and shot (to say nothing of the collector market). History has shown that serious shooters will flock to an innovative, attractive – and even upscale – product, if it’s compelling. Volume may have to go down, and price up, but I believe that’s where the market is heading anyway.

    Chasing the lower end of the consumer market, here in 2014, is a surefire race to the bottom.

    Thus, taking a company like Nikon as an example, here’s what I would do, were I them:

    – pare down the number of DSLR choices in the roster … way down. 17+ models (depending on region) is just FAR too many. IMO, they only need 5 or 6 DSLRs: professional / prosumer / enthusiast FX; prosumer / enthusiast DX

    – introduce a new mirrorless system based around the APS-C format. Study what Fuji has done with the X system re design, lenses, cohesion, marketing, etc. Give consumers a high spec, high quality, sexy camera, possibly with a combination of retro inspiration but modern functionality (one touch sharing to social media from the camera, full wifi transfer of images and control of camera like the E-M1, etc). Start with 2 models: prosumer and enthusiast. (I can actually confirm that Nikon has looked at the idea of resurrecting something akin to the classic S rangefinder system, but in modern digital guise.)

    – Take a good look at the popularity of the GoPro cameras. Come up with something similar, but do it better. Much better. Create a lineup of 2 or 3 cameras that can be used anywhere, in any sort of weather, and in almost any terrain … that offer modularity and system growth for all sorts of portable environmental use and varied applications, from affixing to a helmet for extreme sports … to going underwater … to being used remotely with infrared activation for scientific work in the field. Perhaps even make it the first truly modular system. Nikon already has the perfect name for such a camera: Nikonos. Bring it back with a vengeance.

    I realize I’m glossing over a lot of things here (e.g. Nikon will never introduce a serious mirrorless system until they see they can make money on it), but those are some broad strokes.

    One last thing: For many reasons I believe Nikon will have to shrink in size as a company over the next decade or so.

    Ironically, their imaging division may end up looking a lot more like it did in the late 1970s. Consider that back in 1977 Nikon’s SLR lineup looked like this: F2, EL2, FM, FT3 … 4 cameras, and, of course, a humungous system of lenses and accessories built around them. Plus the Nikonos system. Aside from the Nikon R10 & R8 Super 8 film cameras, that was it.

    • That’s still far too many. There need to be four models, with appropriate sensor/system completeness: an all-in-one consumer model, like the RX10; a pro compact like the GR; an intermediate/high-end amateur DSLR/mirrorless, and a pro DSLR. The latter two can share lenses via an adaptor. I think we’re looking at 1″, APS-C, FF, and 44×33. Remember, planning today won’t translate into a product until 2-3 years from now – so we’ve got to take market/competition and behavioural evolution into account, too.

      • If I were speculating, I would postulate that Nikon will eventually shift DX to mirrorless, aka APS-C will become the province of their mirrorless line. But I still don’t think we’ll see such a system from them for at least a couple more years (I’m sure the designs are on the drawing boards right now).

        I just hope they do it right, and use the opportunity to innovate at the same time. Nikon needs a new halo product to infuse the brand with some excitement again (the Df was most definitely NOT it). If you read around on forums, the brand is getting savaged like never before in its history.

        Eventually, I suspect the D-Series pro cameras will go mirrorless, too. Maybe not with the D5, but certainly by the time a D6 arrives. The mirror box is costly to produce and once we hit 3 megapixels of EVF resolution and greater refresh rates, I’m not sure it will be needed anymore.

        As to the IQ and megapixel wars, my feeling is they will continue for a while yet. Eventually the manufacturers will figure out that what many consumers want is NOT more megapixels, but rather high IQ, good DR, and terrific low light performance in a SMALLER sensor, so that they can have a highly capable camera that doesn’t feel like you have an albatross strapped to your neck.

        Olympus is already heading in that direction with the E-M1, but I think there are still gains to be had in m4/3, especially in dynamic range and high ISO performance.

  9. Jordi P says:

    Hi there Ming,
    That’s an excellent point that isn’t getting enough attention. Might I take the liberty of commenting from the perspective of a photographer turned consumer?

    Although I’m a m43 and 135 film shooter, I recently got (rather seriously) into cellphone photography through a Galaxy S4.
    So much fun. As a student always on the move, having a decent phonecam is great. Despite the limitations, it is inconspicuous in use, convenient and has connectivity! It’s so great to have the files at home quickly, synced over WiFi.

    Point is, through choosing what to buy and many conversations with friends and acquaintances, I’ve come to know the landscape as many consumers see it, and I’ve been observing and experiencing it myself.
    “Oh, a 13MP camera,” people comment to me with a look of approval. The funniest thing is that almost no one uses those pixels. Aside from cropping, printing is the only other way the files can get pushed to their limits.

    As a test I plan to order some small 6×8″ C-prints and see how it manages (mainly the limited DR), and maybe give some of them away. I recently gave some 4×6″ prints to friends; they weren’t expecting them, and they loved them.

    Let me explain how I had to endure some lack of education about the matter (I didn’t have the S4 yet):

    Most of the 4×6″ prints were photos from a beautiful university event we held. A photographer was hired, yet I could only get the files through Facebook. Despite my best efforts contacting everyone who could have the originals: nope, nothing.
    So, as far as official shots go, I had to make do with some 900×600px files; thankfully the group shot was 2048px on the long side. Aside from the downscaling, these also include compression artifacts.
    Some other prints were sourced from my friend’s iPhone 4. At ISO 1000 and with shabby WB, you can imagine… Well, I ended up converting to B&W and adding grain.

    Yes! EDUCATION for the public please!

    Oh, and my worst regret is that I brought the m43 stashed in my pocket; given there was a photographer, I didn’t end up using it…
    So next time I have an event like this I will rent an RX100 or similar.
    Oh, and it turns out the photographer gave us the middle finger: he actually wrote a comment about how pathetic he found the job and the event. Very unprofessional indeed. Then again, he was one of those $200 guys you can hire anywhere.

    Sorry about the ranting, but yes: companies have made consumers believe that linear resolution is what they need, yet they throw it away. Many of the factors you explained about bigger pixel counts (file size et al.) play into it, but the consumer’s own sharing flow (compression in social media) even more so.

    And there’s also how photographs are seen by the plain consumer: snapshots, sadly, have become something of a single-use commodity.
    Snaps have become so ubiquitous that once they have been seen, they are relegated to being forgotten.

    • There are simply too many images out there for anything to be memorable unless it’s truly spectacular. And this is very, very difficult to do.

      I’ve learned not to trust anybody else to get the image I want – not because of their skill level, but simply because each of us sees differently and everything is subjective: if you want an image you’re satisfied with, take it yourself.

  10. Martin Fritter says:

    I’d like a digital version of a Leica M2. I borrowed an M8 and it came close, but had a number of obvious problems. I’d like an M9 or Monochrom, but the price is off-putting. I have zero interest in autofocus!

    But what about the role of the software companies – specifically Adobe? Digital cameras as jpeg capture devices are usable without post production software. But I bet 100% of the users of this site are immersed (mired?) in software post-processing. It’s the inertia of Adobe that slows down sensor development or the adoption of new solutions. One could argue that Adobe is the most important image processing company on the planet – much more important than any camera manufacturer. Is there any reason to get a high-end camera without getting involved with their products?

    So, Ming, I’d love to see you tackle the software PP area and would be very curious to get your opinion about alternatives to the Adobe monopoly. Not something you may care to do, but they are the elephant in the room. If all vendors are using basically the same sensors and processing using the same software, is the only differentiation to be found in the lenses (and the humans who use them)?

    • Well, there’s the UI, too.

      But having the same sensors and software is a good thing, in my book – it means we can use multiple cameras and have consistent images and workflow. Much the same as Provia is always Provia regardless of the camera it’s put through. Workflow matters for pros…

  11. Thanks, Ming. Nice overview of what we all have known or suspected for some time. I waited patiently for a full-frame digital Nikon to use my then-existing lenses on, and as soon as I picked up the D700 I just set it down and walked out of the store. I wanted a digital version of my film SLRs, not a tank. I soon went up to the 16MP E-M5 and then E-M1 MFT, and the 18MP Leica M9, and quickly discovered I had to upgrade my computer system and storage space. Not cheap. Happy with the improvement, but I vowed not to increase the numbers again unless absolutely necessary. That rules out 24MP and 36MP, regardless of any image improvement. And all for the reasons you’ve laid out above. Not necessary, and I also like to print these images with an Epson R3000, and yeah, I don’t want to buy anything better soon. The ink costs too much already. That leaves my aging wide Dell screen. It won’t break! That’s the one thing I might improve on if I feel like spending another $1000 or so to get one.

    That leaves marketing and advances in engineering. We have to admit that we’ve all benefited greatly up to this point from the incredible improvement in digital cameras (up to 16-24MP, EVFs, etc.) that was in part driven by competition among companies to push the marketing numbers up and up, to get people to replace their (still perfectly working) outdated cameras. It looks like it has all come to a standstill now, with only Leica and one large-format company still making a profit from small niche markets. I laugh when anyone criticizes Leica for charging so much: perhaps the other companies would be more profitable if they increased their prices and focused on what photographers need and want. By the way, you didn’t mention how well the Leica T fits that definition of minimally sufficient MP with a quick, easy interface a la smartphones. I for one think that it will sell at that price, but time will tell.

    Meanwhile, to your other key point. Happy with my E-M5 and E-M1 MFT experience, I helped my (adult) daughter select the kind of camera she thought she needed. I relied on your recommendation of the Olympus E-PM2 mirrorless Micro Four Thirds camera. She stumbled on one on sale in Canada with the 14-42mm and 40-150mm lens kit for $300 (it’s now $400 at B&H here, so that could be a currency difference). Two lenses and that camera for only US$400. And what happened? She took it back after 2-3 days and was too embarrassed to tell me. Seeing no photos from her, I finally got her to admit she’d taken it back, saying only that she was not ready for a camera with interchangeable lenses! It had to be more than that: just don’t change the lens and set it on auto. That didn’t work. I strongly suspect that, as small as that camera is, she really wanted one to put in her purse that had just a very simple interface and menu: NOT the killer menu system that Olympus provides. And she wanted a telephoto lens to photograph her daughter’s sports. She may also have balked at what to do with the files from the 16MP sensor. Learn Photoshop?

    This seems like the dilemma the industry is in. To me, with prices like these (my Pentax TTL SLR cost me about the same in 1969, which would be about $2000 today!), I don’t see why she wouldn’t just buy two cameras. But that’s my thinking.

    Please consider adding a line in your list of technical requirements that deals with the retina screen on the latest iPads and similar tablets. This is where a lot of people will look at photos in the near future. What are the minimum requirements for that use?

  12. Enjoyed your article very much. If I owned a company (not just in the camera industry), I’d love to interview you to be my CEO!

  13. Education itself is a huge industry (granted, with an emphasis on learning new skills for your vocation), and many photographers have recognised this. However, combining education with a product becomes a mixed message for the marketing team! Sure, the odd Nikon- or Canon-sponsored event is fine, but as a mainstream business… too-hard basket, and fair enough. Leica try to overlay this a little more with the Leica Akademie concept, but really it’s still more about the product. It’s too expensive to pay for ongoing instructors, rental, logistics, etc. to make it worthwhile. So the next step would be to buy a well-made, scalable series like yours, brand it Nikon, Canon, etc., and provide a login to each consumer as they purchase the product. Obviously marketing and brand will still be key here. So Ming, if it’s a login-by-subscription model of learning, you have plenty of content you could sell as long as it’s packaged properly… and collect a clip for every login issued in store (i.e. with each new camera sold for that brand, or perhaps through a store like B&H regardless of brand!).

    As for the camera hype itself, it’s all just boring now. Megapixels, ISO, blah, blah, blah. I really have had little interest in upgrading beyond trying the camera I wanted to try (which I did with the MM, recently sold), and I’ve now committed to a Mamiya 7 for a year or so to learn more about film, yet retain a simple ergonomic platform with great lenses AND have cash in the bank to complete a couple of planned trips dedicated to creating some bodies of work that can be sold as prints. It won’t be film forever, but it’ll be a nice challenge for a while.

    • The problem is not so much the mixed message as the fact that education is often at odds with brand promotion: education is really brand-independent, and probably not in the immediately obvious best interests of the camera companies to promote. Sadly, few recognize that honesty is worth far more to consumers than blind fanboyism…

  14. plevyadophy says:

    “Surely there must be a better compression mechanism than JPEG available by now, and one that supports more than eight bits?”

    Well there is, and has been for ages.

    It was created by Microsoft, and not too long ago it was ratified by the JPEG committee as an official format. It has the best of both worlds, JPEG and raw.

    But for some reason I can’t fathom no-one is supporting it in a big way.

    • Daniel says:

      Yep – you are referring to JPEG XR, aka Microsoft HD Photo, previously known as Windows Media Photo. See http://en.wikipedia.org/wiki/JPEG_XR. We should all be kicking and screaming until camera and cell phone manufacturers and software publishers support it. Regular 8-bit JPEG is an abomination that should have been abandoned 10 years ago.
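[Editor's note] Daniel's point about 8-bit JPEG can be illustrated with a toy calculation (my own sketch, not tied to any real codec; `push_exposure` is a hypothetical helper name):

```python
def push_exposure(levels, stops, bit_depth):
    """Brighten quantized integer tone levels by `stops` of exposure,
    clipping to the maximum code the bit depth allows."""
    top = (1 << bit_depth) - 1
    return [min(top, round(v * 2 ** stops)) for v in levels]

# 8-bit shadows pushed +2 stops in post: the 32 input codes land on
# only 32 of the ~125 output codes they now span, leaving gaps of 4
# between adjacent tones - visible banding (posterization).
pushed_8 = push_exposure(range(32), 2, 8)
print(pushed_8[:6])  # [0, 4, 8, 12, 16, 20]

# A format with more bits per channel (as JPEG XR offers) starts with
# 256x as many codes over the same tonal range, so an identical push
# still leaves a smooth ramp.
```

The gaps between output codes are exactly the editing headroom an 8-bit container throws away before you even open the file.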

      • plevyadophy says:

        Daniel,

        Thanks man, yeah, that’s EXACTLY it; I’d forgotten its precise name.

        I saw a review and demo of that file format years ago, and I was amazed. The thing can even cope with the floating-point representation one needs for HDR, and do it natively.

        And if camera companies won’t do JPEG XR, then they should at least be doing DNG. To me there are only three companies that can really justify having proprietary file formats – Fuji, Sigma, and Sony – and even that’s debatable.

        • Even those who can justify it should be making every effort to keep the format as open as possible to encourage adoption – Sigma and Fuji are a bit of a disaster in this instance. Workflow is terrible, and I suspect it’s what’s keeping a lot of people from buying their cameras.

          • plevyadophy says:

            Well, I know for me, the Fuji X-Trans sensor, and the inability of any software vendor to render the raw files accurately, has kept me from selling my kidneys and abusing my overdraft facility to buy yet another cam, the X-T1, which in my view, with the exception of the way the LCD articulates, is ergonomically perfect.

            • Basically, there’s no point in having a camera whose files are always going to be a bit disappointing. Especially when the body is that good…

  15. Ming,

    Just a great, great article – your thought process and the quality of your ideas really stand out.

    I’m surprised you didn’t mention Sigma in the article – if they can raise the level of the basic functionality of the next DP series cameras, they will meet a lot of the criteria you mention above. They are clearly not trying to be all things to all photographers and are, I think, really trying to design the new camera around its purpose.

    Anyway, the point of this post is simply to keep up the great work – it’s refreshing to read such intellectually strong articles.

    Best Regards,

    ACG

    • Thank you. The Sigma isn’t mentioned not because of functionality, but because of workflow – it simply isn’t mature, and frankly, it’s the only thing holding me back from getting one, because image quality is superb…

  16. Hi Ming

    Just a thought regarding the JPEG format.
    Have you had any experience with JPEG 2000?

    Is it worthwhile?
    Regards
    Tony

    • No, I haven’t bothered because it isn’t that widely supported. An LZW compressed TIFF is probably a better choice.
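[Editor's note] Ming's suggestion is easy to try with Pillow (a sketch assuming Pillow and NumPy are installed; the file names are arbitrary):

```python
import numpy as np
from PIL import Image

# A synthetic 16-bit grayscale ramp, one value per pixel, 0..65535.
data = np.arange(0, 65536, dtype=np.uint16).reshape(256, 256)
img = Image.fromarray(data)  # Pillow stores uint16 data as mode "I;16"

# An LZW-compressed TIFF is lossless and keeps all 16 bits...
img.save("ramp.tif", compression="tiff_lzw")

# ...whereas JPEG is capped at 8 bits per channel, so precision has
# to be discarded (here: keep only the top 8 bits) before saving.
Image.fromarray((data >> 8).astype(np.uint8)).save("ramp.jpg", quality=92)

restored = np.array(Image.open("ramp.tif"))
print(np.array_equal(data, restored))  # True: the round trip is lossless
```

The TIFF round-trips bit-for-bit; the JPEG cannot, which is the whole argument for LZW TIFF as an interchange format.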

  17. Love that illustrating pic, Ming!

    Your present article sums up what you’ve been advocating for a long time, bit by bit, here and there: tech specs which need to make sense, ergonomics which need to make sense, UI which needs to make sense, customer requirements which also need to make sense, etc.
    I agree overall. Don’t bloat the boat, else you’re sinking.

    That said, I was thinking that the caveat of needing more complex/heavy/expensive glass to match increased sensor resolution or aperture will be mitigated by the arrival of the “curved sensors” now being talked about – at least when it comes to primes, that is.
    If I understood correctly, lenses for them will require fewer optical corrections than those for present flat sensors, thus allowing for cheaper and more compact designs.

    • That might well be the case, but we’d need new mount specs too – so again we’d have to pour more money after relatively small gains (for most consumers). Unclear if that makes sense, and a large curved sensor is going to cost a fortune to make…

  18. I’ve come to appreciate more and more my D700 – enough resolution for my personal work. However, I would like some ‘extra resolution’, but I think we need to move up to medium format so that shot discipline is not so severe to gain in resolution (I’m looking at you Pentax 😉 ). Heck the 16MP on the GR is demanding enough….

    • Not true. Shot discipline demands are proportional to the angular resolution, i.e. the number of pixels per degree of FOV. That Pentax is going to be another level up on the D800E; thinking otherwise would be foolish. It’s a good thing that it appears to have usable high ISOs so we can keep shutter speeds up while handheld – plus the handy Pentax/Ricoh TAv mode. It also doesn’t help that with physically larger bodies, the moving components are also larger, which usually results in more recoil… the Hasselblad with digital back is definitely a step up in difficulty from the D800E to use handheld.

      • Though I’m under no illusion that a medium format camera will live mostly on a tripod, I would have expected about the same level of shot discipline as the D800E. But indeed, shutter speed is becoming more of a critical point than aperture. It goes to show how important a well-implemented IBIS is…

        • mosswings says:

          Indeed, well-implemented IBIS, VR or OS is important… for operator shakiness. But for moving subjects (and most subjects are moving), shutter speed rules supreme. If we are constrained to keep our SS above 1/200 to freeze even slow-moving subjects sufficiently to produce a sharp image on our super-resolution cameras, this puts even more pressure on bulk sensor characteristics improvements, more so than fast lenses. If we want to control depth of field, ISO is the enabling parameter, and every doubling of resolution increases the threshold ISO by 1/2 stop. So if we were satisfied with ISO 100 on our 12MP D700, ISO 200 would probably be needed for a 54MP D800 follow-on. How many stops of bulk sensor characteristics improvement are there left in the shed? Less than a stop. There’s a reason why so many sensor companies are pursuing alternatives to Bayer CFAs…
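[Editor's note] mosswings' half-stop-per-doubling rule works out as follows (a quick sketch of the commenter's own rule of thumb; `threshold_iso` is a name I've invented):

```python
import math

def threshold_iso(base_iso, base_mp, new_mp):
    """Rule of thumb from the comment above: every doubling of the
    pixel count raises the threshold (base working) ISO by half a
    stop, because the shutter speed needed to keep each smaller
    pixel sharp rises with linear resolution."""
    doublings = math.log2(new_mp / base_mp)
    return base_iso * 2 ** (doublings / 2)

# 12MP D700 at ISO 100 vs a hypothetical 54MP successor:
print(round(threshold_iso(100, 12, 54)))  # 212, i.e. roughly ISO 200
```

Going 12MP to 54MP is about 2.17 doublings, hence a little over one stop of extra ISO just to stand still.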

          • I think the camera is now more of a limitation than the subject – 1/60s will freeze moderate speed people, but we may well need 1/200s for camera shake. And even more for higher resolutions.

            I agree – if each generation doubles the pixel count and still magically gains a stop of ISO cleanliness, we are increasing usable resolution but not really enlarging the shooting envelope.

        • Well, we’re going to have about 20% more pixels per degree on each axis, which means going from 1/100>1/125s, for instance. And then you have to add in the extra recoil from larger moving parts; the camera is a bit heavier than a D800E+grip, but not much. So realistically, it’s 1/100>1/150s or 1/200s. Level up!

          As for IBIS – the P645 system is the only one with any form of stabilisation; the 90/2.8 macro has IS in the lens. Incidentally, it’s also much larger than you’d expect. In fact, there’s one sitting on my desk now…and it’s larger than the 2/135 APO.

          • plevyadophy says:

            WTF!!!! Really? We now have stabilisation in MF land? Wow!!! About bloody time, too. Well done, Ricoh Pentax.
            Wow, on paper at least, this new Pentax body is looking like the perfect tool: live view, articulating LCD, AND stabilisation. With a set of top-drawer lenses, the Pentax body seems to me quite irresistible. Just a shame about the small crop sensor (small in medium format land, that is).

            • The size of the sensor is one thing, unknown lens performance on that density of sensor and CMOS tonality are others. I plan to use mostly Hassy V glass – from what I can find, the lenses I have perform very well on the 645D – better even than the Pentax counterparts. I highly doubt this is going to have the tonality of the CCD cameras without a lot of work in post. That said, it probably won’t be that different to the D800E, and I think I have that one licked.

              • plevyadophy says:

                Yeah, those are my concerns too.

                It’s my understanding, based on, like many folks’ understanding, rumours and leaks and what not, that Ricoh Pentax have some new lenses in the pipeline for the 645. I guess these will be optimized for digital.

                The CMOS thing is a shame, really. I really think MF land should have stuck with CCD, but, as you always say, the development dollars are in CMOS technology; as a result CCD sensors are years behind, and along with Leica’s work with the Belgian chip designer CMOSIS, it seems the makers of big-sensor cams have thrown in the towel and surrendered to CMOS.

                On the other hand though, maybe there is the thinking that pretty much everyone is used to using CMOS cams, amateurs as well as pros, so a MF cam with a CMOS sensor is just more of the same (more pixels, more sensor, more pixel pitch) so what’s not to like? 😦

                • Now that I have a workflow solution, I’d actually rather have CMOS for most things. The benefits mostly outweigh the disadvantages; I suppose in an ideal world I’d have one of each to use (645D and 645Z) depending on the task at hand…

          • Good point Ming, none of this really increases the actual shooting envelope, but no doubt you will see a boost vs the Hasselblad digital back.

            Also good to hear some parts of the ‘system’ are making their way to you. No surprise on lens size; a wee bit of research I’ve done showed the Pentax lenses can get massive! Now we need you to get hold of the camera!

            But even though the 90 has IS, IBIS is useful at all focal lengths – the rule that only longer lenses require stabilisation is now a bit naive. But I can also appreciate that it’s probably easier to implement on a m4/3 sensor than on an MF one.

  19. Kristian Wannebo says:

    !! 🙂

    I agree that tech companies invest too little in user feedback during development.
    ( It was the same with VHS machines, very few were really easy to program – New Scientist once had a long discussion between scientists who found them difficult.)

    Your idea of education from camera companies is even better!
    —–

    [ I also read “Technology, art, and pushing the boundaries”.
    !! .

    Re: “*I believe the popular analogy goes ‘what’s amazing is that the horse can talk at all, not so much what it has to say’.”,
    I can’t help mentioning a good read, Tobermory by Saki; (what a cat might say …)
    http://www.sff.net/people/doylemacdonald/l_tober.htm
    PDF at
    https://www.google.com/url?sa=t&source=web&rct=j&ei=P6hrU8eNMKTayQPSsYCABQ&url=http://homepage.ntu.edu.tw/~karchung/Tobermory.pdf&cd=2&ved=0CCsQFjAB&usg=AFQjCNFtzEGQkpqzD1SNKcQczo3tqmfdJg
    ]

  20. Dear Ming, interesting, but… having an engineering background, I have always approached photography from a more number-driven, technological angle. It’s not the bottom line in science we have to strive for, but the upper limit of what’s technically possible – only this way will our products continuously improve. Here’s my claim: a reduced 12MP resolution may be enough for a final picture in post-production, but what happens in your camera to arrive at a native 12MP picture is a completely different story, based on pure estimation/demosaicing math. Numbers are important, and higher sensor resolutions are too.

    Due to its CFA color separation into different channels, a 12/16MP picture is in effect an interpolation from three or four sub-images of only 3 or 4MP each. That’s an awful lot less than most people would think… (I don’t dare to think what the Fuji X-Trans actually delivers in real resolution and color accuracy; the pixel peepers know very precisely that it is not exactly a ‘wow feeling’ you get, given the very disturbing artifacts.) Take into account quite a few other inaccuracies (the spatial factors, the angle and size under which light is captured at each sensor pixel, sensor filters, wavelength issues, non-linearities, noise, and also e.g. the new tendency to use some of the pixels for contrast-AF purposes), plus artifacts from miscalculated pixels… and it’s clear to me that striving for higher resolution has an impact not only on overall detail and crispness and the ability to print larger, but also on your overall color accuracy and even the effective, usable dynamics of your picture. Compare a D800 with a Fuji X-Trans: both theoretically have the same pixel density and a sensor from the same Sony family, but there is no way the rich colors and dynamics of the D800 are beaten by the Fuji (Fujistas won’t agree, but they never agree on anything else being better than a Fuji).

    In the margin, it is funny that the Foveon concept never got more followers than just Sigma, even though Canon holds a patent for a similar sensor.
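[Editor's note] The per-channel arithmetic in this comment can be made concrete (a toy count for a standard RGGB Bayer mosaic, not any specific sensor; the helper name and example dimensions are mine):

```python
def bayer_channel_counts(height, width):
    """Photosites per colour channel in an RGGB Bayer mosaic: each
    2x2 tile holds one red, two green and one blue sample, so red
    and blue are sampled at a quarter of the nominal resolution and
    green at half; every other value is interpolated (demosaiced)."""
    total = height * width
    return {"R": total // 4, "G": total // 2, "B": total // 4}

# A nominal 16MP sensor (4928 x 3264 photosites):
counts = bayer_channel_counts(3264, 4928)
print(counts)  # {'R': 4021248, 'G': 8042496, 'B': 4021248}
```

In other words, a "16MP" Bayer file contains roughly 4MP of measured red, 4MP of measured blue and 8MP of measured green, which is the commenter's point about true per-channel resolution.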

    • There’s definitely room for more – I’d be hypocritical if I said otherwise, because I’m personally chasing all the resolution I can get for the Ultraprints – it’s just that for the vast majority, it’s overkill. People simply do not have the shot discipline to make any tangible gains to offset the increased demands in file handling, lens quality etc.

    • mosswings says:

      D JP, are you conflating sensor size with areal resolution? One of the big reasons why that D800 image looks so great is that it’s collecting more photons than the X-T1’s. Beyond those irreducible size differences, resolution can provide significant benefits, if the tool is wielded appropriately. But the entire point of this article is that the D800 is likely beyond the capabilities and/or the desires of most who might be interested in it.

      I come from an engineering background as well, but a while ago I learned that at some point you have to shoot the engineers and ship the dang thing. The goal of engineering is not necessarily to make the best technically achievable widget, but one that is best suited to its purpose. We as engineers often get so lost in the pursuit of the possible that we forget that the customer’s needs are what pay our salaries. Focusing on this has made Apple the wealthiest company in the world. Focusing on pure engineering excellence is what is now causing Nikon especially, and others to a comparable extent, so many of their woes.

      • Nothing wrong with pure engineering excellence: it’s making that excellence accessible in a meaningful way to the end user that appears to be the challenge for most camera companies…

      • Frans Moquette says:

        I’m not sure Nikon is focusing on engineering excellence anymore…
        I think Nikon needs to clean up their product line-up. Focus on user segments (consumer, prosumer, pro) and camera types (compact, mirrorless, DSLR), and create a best-fitting product for each of those 9 segments. Make sure there is an upgrade path and compatibility of accessories between segments. Address user needs and offer excellent customer service, worldwide. How hard can that be?

  21. mosswings says:

    I couldn’t agree more, Ming. And I absolutely LOVE the title photograph.

    Many professionals and experienced amateurs are coming to exactly the same conclusion these days. My showdown with shot discipline came just recently with the D7100. I bought it for its noise performance, but for street and travel photography, compared to my lower-res D90, it just wasn’t as much… fun… to work with, though clearly far more capable under the right conditions. The one area in which it was clearly superior was its AF coverage and responsiveness, which I find is the single most important usability feature of a camera – aside from the largest viewfinder I can get. Based on this realization, I leave the 36+MP cameras to younger and less shaky hands and sharper eyes such as yours.

    Thom Hogan has been beating this drum for some years now, though more pithily than you, but the point is the same: manufacturers are not solving actual user problems anymore. 4K video on DSLRs is a classic example. Only a very few will be able to use the capability of such cameras and be willing to afford the infrastructure changes they’ll need to support processing and editing. Yet the manufacturers shoehorn it into their products because it’s the next big thing… and make sure that you have to buy it by crippling last year’s flagship. The same thing is going on with the 4K push in HDTVs: top-quality black levels, motion processing, color gamut, and gamma tracking – the stuff that most plasma sets had in abundance at a decent price, and that is the dominant determinant of customer satisfaction with image quality – have been ripped out of the 1080P sets and put into the 4K sets. So those of us who appreciate this stuff will be buying 4K sets, but not because they’re 4K, and not at intro prices.

    Anyway, back to Hogan. He states your thesis as: workflow, complexity, appropriateness. Your 3-step plan is exactly what camera companies DON’T do; even those that have all the pieces of the market under their control – Sony especially, and to some extent Canon – can’t seem to put it together. Apple understands ecosystem and end user better than anyone, but struggles with the step-up customer. The frustrating thing about it is that all the pieces are there – but educating the manufacturer about the consumer, as well as the consumer about what’s really important, is missing. Hogan’s assertion is that it will be a consumer software company like Apple that finally disrupts this creaking, fracturing market, and the mechanicals the industry now obsesses over will become just modules in an entire workflow solution, mediated by something other than a tool obsessed with exposing the beauty of its own internal workings.

    Sorry, I’m ranting. But you’ve touched a very sensitive nerve of mine. And I appreciate your eloquence.

    • I’m inclined to agree: it’s basically innovate or die. We’ve had the pieces of the puzzle for some time now: there is no excuse to sell a product that has ergonomic or technical compromises, either out of protectionism for the next generation or just plain laziness/design inertia. To me, the last major paradigm shift in the way we shoot came with the iPhone – sufficiency, simplicity and distillation of the essence. Most people, and myself included, don’t need to know the exact technical shooting parameters 99% of the time so long as we don’t get over/under exposure or shake. For the remaining 1%, choices abound. And there’s no excuse there, either – other than what the market appears to be willing to bear, which is too much. Again, it falls back to education: people then vote with their wallets.

      • mosswings says:

        “Most people, and myself included, don’t need to know the exact technical shooting parameters 99% of the time so long as we don’t get over/under exposure or shake. For the remaining 1%, choices abound.”

        Again, exactly right, and I appreciate that you include yourself in the 99%, given your predilection for precision and control. There is actually a tremendous amount of leeway built into today’s cameras that should make picture taking, even rather critical picture taking, a fairly painless and automatic process. This was one of the beauties of film negatives that slide snobs like myself refused to recognize… you could fix things at the lab to a surprising degree. We returned to the slide era for a decade as digital was just getting started, and now we’ve entered the digital film negative era, if you will. DR and noise margins are so large now that we can make cameras behave more filmically… like Oly did in making its u4/3 Sony-sensored cameras perform disturbingly close to APS-C. That is how camera manufacturers should be utilizing their Olympian (sorry about the pun) sensor capabilities… but it doesn’t sell cameras as easily as the crappy-lensed-and-sensored cellphone with the stone-simple user interface. The user doesn’t see all the magic behind the camera, and doesn’t appreciate it. But the user does appreciate that all the mysterious buttons and knobs aren’t there and aren’t missed.

        I used to drive a stick shift car. I loved it, and thought that the control and engagement it provided were an essential part of driving. But then I bought an automatic shift car…one with a really good automatic, and found that it wasn’t. The fun of driving was undiminished, and having a stick there was really more of an admission of a lack of technology than quintessence. If I want to hold the car in a gear there are always the paddles. But the car does a better job of power management than I do, and I’m happy to let it manage.

        • And here we come back to education: a lot of people still believe that the stick is necessary for a good image. Mastery of one does probably suggest that the user knows what they’re doing, is a bit more conscious of the various parameters and is therefore likely to produce a better image, but even this is a stretch. I certainly prefer to concentrate on composition and timing and let the camera do the rest – assuming that it isn’t trying to work against me, in which case give me the stick any day.

  22. A very nice and thoughtful article that hits the spot! If the manufacturers thought like you, we would have a DVD with several different types of tutorials, list of lenses that really fit the camera and of course a manual! We would also probably have more people refusing the D800 and staying with the D700, like me, because it really fits what I’m doing and maybe always will do. I don’t think a D800 will make me make better images, but I may be able to print A2 instead of A3 comfortably. For the same reasons I stick to the D300S instead of the D7100, the former simply still being a better camera for DX shooting. Not for very high ISO, but then my D700 will do just fine, thank you!
    I think I would consider a 16 or 20MP evolution of the D700, but I will not go higher than that, well maybe 24MP :)!

    Kudos to you Ming Thein for speaking up for pragmatic, reasonable approaches to buying equipment that is what we need, not what the marketing guys say we need and should have.

  23. This is digressing a bit, but I thought I’d comment about the ongoing production of not only new technology, but products that address the shortfall of the previous generations. More and more I believe camera companies are building control and handling issues into their products so that they can create a future version that fixes the issues (but, of course introduces more of its own, too).

    I mean, look at some of the faults with the Fuji X-T1. Case in point: that bloody ridiculous four-way controller nonsense they put in. It’s not like this is Fuji’s first barbecue, and anyone with any interest in handling in the company would have immediately commented on the poor control. Can you imagine someone like Jony Ive tolerating such a breach of usability? Now imagine Steve Jobs telling him that it will be fixed in the X-T2 so that people will have another reason to upgrade (among other reasons).

    You just need to look at how well implemented the four-way controller is on the Nikon D7100/D610 and get the feeling that Fuji probably botched the design on the X-T1 on purpose. As they say in their marketing, “we’ve been making cameras for 70 years”.

    • I really wonder about that sometimes. Nikon changing their grip shape on every subsequent generation (and not always for the better) is another good example. And let’s not even talk about the DF…

    • It seems that no manufacturer can get four-way controllers right. The X-E1’s was too loose, the E-M5’s and E-M1’s too easy to press accidentally, and I have had the same problem with just about every Panasonic camera. I was actually glad they are moving in the right direction with the X-T1.

      • Nikon does a reasonable job on its DSLRs, but that implementation requires a lot of space and isn’t so suited to use for shortcuts.

  24. William H. Widen says:

    Your article makes a great case for using film as an option because you can get nice scans of 35mm negatives, even with a home scanner, that translate into the electronic file size that most people will use–including printing up to 12 x 18 (which I find excellent for printing 35mm film).

    • Barry Reid says:

      I am in agreement on the basis of pure resolution. While I do believe that a fine-grained 35mm film well scanned equates to about 16MP – certainly my SLR with good film and a good lens could out-resolve my 12MP EOS 1Ds & 24-70 – ultimately film will always lose when most images are viewed on screen, because though the detail is there, the interaction of the film structure and the scan can cause issues for screen viewing.

      The ultimate nail in the film coffin though is undoubtedly high ISO – I well remember Konica SR-V 3200…

      • William H. Widen says:

        Agree that digital can be far better at high ISO–even a modest camera like a Nikon D3300 (or its lower-spec predecessor, the D3100) is better and more flexible in low light. A film rangefinder may compensate a bit with the ability to shoot at slower shutter speeds, but even with a fast f/2 or better lens on a rangefinder, a cheap f/1.8 lens on a budget Nikon will likely equal or exceed it on a technical level. It can be a challenge to shoot film at 1/15th or 1/30th without camera shake intruding.

        • Actually, the minimum shutter speed thresholds for modern high-resolution cameras are higher than that if you want to avoid camera shake. Due to the linear nature of the resolving medium, there’s a tradeoff point at which it makes no sense to have any more pixels or a larger sensor for most people, because the shot discipline required either becomes too demanding or the minimum shutter speeds outweigh the other gains.
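          As a rough, purely illustrative sketch of that tradeoff (my own numbers and rule of thumb, not anything stated above – assuming the classic 1/focal-length handholding guideline pinned to a 12MP full-frame baseline, scaled by linear resolution since shake blur is measured in pixels):

```python
import math

def min_shutter_speed(focal_length_mm, megapixels, baseline_mp=12.0):
    """Rough minimum handheld shutter speed in seconds.

    Starts from the classic 1/focal-length rule (assumed here to
    hold at a 12MP full-frame baseline) and scales it by the
    *linear* increase in resolution, since blur is measured in
    pixels, not megapixels.
    """
    linear_factor = math.sqrt(megapixels / baseline_mp)
    return 1.0 / (focal_length_mm * linear_factor)

# A 36MP body wants roughly 1.7x the shutter speed of a 12MP one:
print(1 / min_shutter_speed(50, 12))  # 50.0  -> 1/50s
print(1 / min_shutter_speed(50, 36))  # ~86.6 -> call it 1/90s
```

          The square root is the key point: tripling the pixel count only raises the shutter-speed requirement by about 1.7x, but it never stops rising.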

          • William H. Widen says:

            For my Nikons, I do not like to go much below 1/125th. Within reason, I will boost ISO rather than lower shutter speed. I find for a rangefinder, my safe threshold is about 1/60th. Because of the ISO limitations with film I will go lower in a pinch, to 1/30th (with about a 1 in 4 keeper rate) and in extreme cases 1/15th–but that is even more hit or miss. I do not do it unless forced. The higher ISO on the digital allows me to keep the shutter speed higher (again, within reason). At a rational level, there is little reason to use anything other than the D3100 or D3300 with an f/1.8 lens. Film really is about the hobby and shooting experience for me. And, though there are limits with film, I tend to find my “best” photos are with film. That is partly due to the look and partly due to a sense of satisfaction (on a technical level it could be more in my mind than in reality). If I had to get keepers for a business, I would use digital (maybe with a few film shots for fun). For a hobby, film is still very satisfying for me. What your article showed well is that by using film from a resolution standpoint you are not really losing much (but of course all of the other factors that favor digital remain).

            • Agreed with your limits in general. Film also seems to be a bit more forgiving of shake/misfocus/etc due to the nature of the medium…

              • William H. Widen says:

                Thank you very much for that observation. I had not known that the medium of film itself, rather than the mechanics of the camera, might contribute to better results at a lower shutter speed with film. Very much appreciate knowing that!

                • Daniel says:

                  Well, that’s the big and slightly dirty secret of grainy B/W film in particular.. 🙂 When the little lumps of silver that make up the image are visible in the print (which they are, in a good print), there’s always an illusion of sharpness because this film grain looks sharp, even when the actual image was out of focus. Good examples of this would be rock and jazz photos from the 50’s-70’s that due to low available light were often underexposed and push-developed, resulting in strong, coarse film grain.

                  • William H. Widen says:

                    Interesting! Do you find this effect to be particularly noticeable with different development methods (i.e. not just push processing)? The only home development method I use is stand development in Rodinal. Otherwise I send out, use C41, or convert a color film to black and white. I have noticed that stand development in Rodinal gives well defined edge effects, but had attributed this to the method of development and not the film per se. This may not be correct. I simply have defaulted to stand at home because of the ease of development. I have little patience or expertise for precise timing and temperature development–hence stand.

                    • Stand development is good for smooth tones if you have the patience and don’t mind risking haloes from bromide drag. It doesn’t work so well when you’ve got a lot of rolls or larger formats – at least in my limited experience…

                    • Well, I haven’t used film in a long time, but some types were grainier than others. Tri-X was particularly grainy, but even some C41 dye-based B/W films like Ilford’s XP1 (rated at ISO 400 like Tri-X) showed enough grain to be clearly visible in medium sized prints. That’s not necessarily desirable (most people wanted less grain), but IMO it does add to the impression of sharpness.

      • Though film still holds some advantages for monochrome tonality. I think the real nail for most people is convenience…for the serious, it’s control.

        • William H. Widen says:

          Thank you for your remarks. Convenience and cost, I would add, make film a different proposition. And I am paranoid when I travel about keeping film out of the x-ray machines. Sometimes that is not possible–then you worry a lot if you used a higher speed film. When using air travel, I always take a film camera and a digital one for this reason.

          • That makes two of us. One pass for low ISO film might be fine, but if you’ve got to go through several legs, and with places that put you through two security checks – it might be a disaster.

            • iskabibble says:

              Untrue. I’ve sent my film (ISO 800 and lower) through X ray machines up to 8 times with NO noticeable effects. Same film, 8 scans, no problems. I USED to be worried about X rays, but not anymore. The results speak for themselves.

              Ming, you are so methodical, why not do the test? You travel enough. Put a roll of film in your *carry on* bag and let it get scanned, over and over. You WILL see that nothing happens to it.

              FUD should not be part of this web site.

              • I have, twice. I haven’t reported because the results were inconclusive and I need to do it again to be sure. One roll was okay, one had base fog. And that’s not the kind of risk I’m willing to take with images that actually matter and can’t be replaced. My suspicion is that not all films or x-ray machines are created equal (especially ageing machines in third world countries), and that’s not so predictable.

                • iskabibble says:

                  My film goes through ONLY 3rd world country X-Ray machines and that includes the UK! 🙂 6 or 8 or even 10 times through the X-ray scanner, no effect At all. Ever.

        • Though sometimes too much control can be a creative hindrance, I find. I’ve actually started shooting much more with film than digital (including with a Hasselblad 501c which I bought because of your article on the subject) in part because I was spending a lot of time post-processing and I wasn’t enjoying the art as much. I’ll probably be back to digital eventually, but for now I’m relaxing in the knowledge that almost all my control is up front, when I’m viewing the scene, selecting a film stock, metering, etc. Once the image is made, there’s only a limited post-scanning input I can make. In the same way, it is instructive to reflect that the golden age of Russian literature was all written under heavy censorship. The limitations which constrained those artists allowed them to be more creative. Anyway, that’s my semi-profound thought for the day.

    • True, but there’s one catch: not so much obtaining film, but processing it well. And for low light work that number drops dramatically…

  25. Ming,

    again a thoughtful and brilliant article!

    You combine technological knowledge, artistic views and almost philosophical commentary into unique articles; they could go straight to print and be published in one of the best photographic journals in the world … Congratulations!

    One question (probably an amateurish one): are the prints you can buy (on photographic paper) also limited in dpi output (i.e. 240dpi), or is it possible to see the benefit of high pixel counts even in 13 x 18 cm prints?

    Keep up the great work,

    all the best from Munich, Jay.

    • Thanks Jay – I used to run a magazine; when I told the publisher I wanted to make something like this site, I was told to keep giving sponsored products positive reviews. I left, and here we are.

      Linear resolution is a function of the print process, not the paper or physical size. My Ultraprints run at least 720dpi, and start from 8×12″.
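      For a sense of scale, the pixel counts implied by a given output resolution are simple arithmetic (an illustrative calculation only, not a statement about any particular printer):

```python
def pixels_needed(width_in, height_in, dpi):
    """Pixel dimensions (and megapixels) required to print a given
    size at a given output resolution, with no uprezzing."""
    w, h = width_in * dpi, height_in * dpi
    return w, h, w * h / 1e6  # width px, height px, megapixels

# 8x12" at a typical 240dpi vs a 720dpi high-resolution output:
print(pixels_needed(8, 12, 240))  # (1920, 2880, 5.5296)
print(pixels_needed(8, 12, 720))  # (5760, 8640, 49.7664)
```

      Tripling the linear dpi multiplies the required pixel count by nine, which is why very high output resolutions get demanding so quickly.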

  26. Personally, I don’t think the camera companies can do much more to bring in new system camera buyers because the market is saturated. The cost of entry is certainly now low enough – a very good CSC or refurbished DSLR can be bought for $200-300 on sale nowadays in the US, which was previously in compact price territory. Although the interfaces can certainly be improved further (similar to other personal gadgets before the iPod/iPhone), the Auto and Scene modes of these cameras are easy enough to use for the beginner (I’ll admit I stayed in these modes when I bought my first DSLR). The average consumer who shares photos on social media and views them on small screens is likely to be satisfied with his/her smartphone, compact, or old DSLR/CSC. I can see why the companies market specs because it’s only the techies that constantly upgrade their electronics now, because the point of sufficiency has been reached.

    • Agreed: the only people buying now are the gear heads and those who make a living from their equipment – but even then, there has to be a significant improvement in capabilities to justify the outlay. At the end of the day, that’s simple business economics.

  27. Reblogged this on Scribbles and Snaps.

  28. Luis Fornero says:

    Nice article Ming, as always! I’d love to read your thoughts/opinions about the Sony A7s. I’ve switched from Canon (5DMkIII + lots of heavy zooms) to the A7 and rangefinder lenses, and I love this camera… I just don’t care about extreme resolution; I prefer good colors, tonality and image style (old lenses give me the character I want), so the new 12MP A7s is very tempting… although going from 24 to 12MP makes me dizzy, your article makes me rationalize it…

  29. Tom Liles says:

    If I were more proactive and less sleep deprived, I’d love to start a petition for 16bit RAW output being an option on all — or as many as we can — cameras. I had this thought walking back to the office from lunch, last week. We should do it. I don’t see a downside. If they don’t listen they don’t listen… But some maker somewhere probably would, and that’s all it’d take.
    This is, honestly, my only remaining demand of camera makers—the only demand I feel isn’t made of them on any regular basis (contrast with things like clearer finders and better controls) and won’t be answered in the natural flow of product development and innovation.

    We just want the option, turn on or turn off, of 16bit RAW recording and output—from the iPhone to the D4s. There is no drawback. People who don’t understand don’t have to even touch, use or think about it. Ascetics and contrarians can also leave the option turned off. But those of us that do understand need no persuading of the benefits.

    I can see how bit-limited RAW data made sense back in the early days—there perhaps wasn’t the computing power, or the know-how at a general user level (even the maker level?) to make it make sense… indeed, only the early adopter digital medium format shooters of the early 00s and drum scan anoraks probably knew what to do with more than 8bits in a channel (and they got 14bits at best). But times have changed. This should be a bigger demand on our part, surely?

    It would also, we hope, help drive output medium innovation—which has to start from the capture devices, the first link in the imaging chain.

    A Rambling Aside and Response to the Article
    Eric, look away now, but, I came this close to getting rid of the Sony A7 I awarded myself in January — picture a close up on me doing the pincers with my thumb and index finger, bokeh my face out, etc., yeah that close — 1) because of the lossy Sony compression [though a neat idea; I plain don’t trust it, sorry. And I find it condescending: and for someone with a sense of entitlement and self-importance as out of proportion as me, that is hard to deal with] and 2) its spectacular inability to get skin tones in bright sun right. My friends coming out irreparably orange in glorious sunshine is infuriating, at the very least; though the camera was everything I’d hoped for in flat light and the shade… Now imagine, a photographer having to avoid the good light for the bad. It’s just stupid. My A7 isn’t a lemon, let me also make that clear.
    Streetlight ghosting at night with wide angle and less tele-centric lenses [pretty much settled upon by internet opinion now as internal reflections at the filter pack (problem is more prevalent on A7 than A7r)] and less than expected high ISO performance (I don’t mean books on a shelf in a modestly lit room, I mean handheld street shots, actual photography, outside in the dead of night, a genuine necessity for me—almost 50% of my chance for photography is walking home from work; I work until gone 21:00 most days) these things had agitated my dissatisfaction, along with a host of minor Sony quibbles: easily scratched and total dust magnet bodies being a mountain that has grown from a molehill…. I was right on the edge, I had the A7 boxed, the auction photos done (D3 laughing its ass off at the A7 there) and it was literally a registration and upload away from being gone. But I didn’t follow through.
    The A7 and I are currently negotiating a peace treaty and new terms and conditions (it’s going Ok 🙂 ); but my snap decision is not unlike what you speak about above, MT. In fact, it’s probably exactly consumers like me that your piece pulls the pants down on (along with the camera companies).

    What caused me to snap like that was mostly me (I have Irish and French heritage, can I blame it on that?); but also unrealistic expectations. My expectations are ultimately my responsibility, but they must come from somewhere. I don’t, and can’t, get free review samples of cameras and equipment, so all I really have to go on is my gut and internet opinion. I would posit that along with camera companies doing what you’ve outlined above, Ming, the whole camera eco-system needs to modify its mentality too—review sites and popular opinion as found on fora, especially. Us, basically. And how realistic is that? Then again, the world is what we make it.
    Reading reviews of the A7, you’d be hard pressed to not believe that ISO 6400 on the A7 would be a cakewalk. That the “lower” resolution would be kind (kind just because it isn’t 36Mpx?). That the AF would be great (just because the spec sheet shows PDAF sites on sensor). But the reality certainly isn’t like that at all. The resolution is ferocious. The AF is dodgy as hell—watch it bang against the walls in AF-C (yes, in the PDAF equipped center zone) and even manage to cock up lock on objects that aren’t moving for God’s sake. Regarding 6400, it’s hit and miss. You may remember that photo in my bathroom mirror at ISO 1600, MT, that’s the reality—at two stops down from 6400.
    To veer from this to “Ah, see, the A7 isn’t good at high ISO, it isn’t all that: all the reviewers were wrong!” would also be a mistake; it’s the flip side of the exact same coin. The coin that ratchets rhetoric up to Macaulay Culkin 11 (that one’s for Peter!) and only knows how to speak in binary logic and hyperbole. Everything is super-good or God awful.
    The A7 is decent, it’s good, it’s nice in a lot of cases. And that is saying something. On the high ISO (a big, big reason I got one) it’s alright, if you expose a certain way, process a certain way, downsize a certain way but even then, these are still ISO 6400 shots we are talking about… 2^6 times less light than ideally necessary is hitting the sensor, and hardware/software has to make up for that. There is no such thing as a free lunch. What did we expect? This is not really much better or worse than other stuff I have, or have used.
    I did the test against my D3 — real world nighttime photographs — downsized the A7 to 12Mpx, and sure enough the A7 was usually better (more on color and DR than actual noise—both can be sandstorms of luma noise at pixels when they want to be). But “better,” is symptomatic of the issue I’m talking about. Taken on face value, it sets up the wrong expectation. I can only appreciate “better” now I know a little more about photographs in general and specifically photographs at high ISO—even then, it seems too strong a word (though not incorrect). A slightly upbeat “mmm” would be how I’d phrase the A7 in the comparison. Good, decent, useable, but not a different league… not even a different street. Just, “mmm.” I’m singling out the A7 because I own one; but let’s be real, this is all cameras we are talking about… the differences are only apparent to us—not even all of us, a subset of us.

    The way people write about cameras, no two ways about it, invites impressionable beginners like me (bats innocent eyes) to think the unthinkable may be thinkable. Because what do we know! Your site bucks the trend, Ming, but even with 10^7 visits, this is a relative backwater.
    DxO numbers are quite useful and I use them more than anything to help my gut when pondering (= about 5 to 10 seconds total in Tom World)—whatever we think about the premises, the testing is at least a little more of an objective means to weigh things up. But the downside is that the data are often used to back hype up, and we’re adrift now in a sea of hypothetical performance stats, stuck in a current of bench test monomania which pulls us away from the simple pleasure of pointing and taking a photo, and making a picture.
    And therein lies the best education, in my opinion. I’ve benefitted greatly from all the free knowledge that you supply us with on this site, Ming. Genuinely, benefitted. But I’m quite sure what has contributed the most to my development is simply rampant picture taking. The more I take, the better I get. I wouldn’t call this an efficient or quick method. Or even a universal one? That’d be interesting to hear everyone’s take on… I don’t see any other way, honestly, but we all know what I’m like.

    So the thing that resonated with me the most in this article was the idea that better, on our terms, cameras just encourage us to shoot more. That’s all the education I’ll ever aspire to.

    Now, give me more bits.

    • Hear hear. At least the Nikons have the option to go to 14 bit uncompressed – though frankly I cannot see the difference between that and 14 bit lossless compressed.

      I, for one, am in favour of more pixels even if you aren’t going for resolution out and out – what the additional information does do is improve tonal gradation because you’ve got additional spatial ‘steps’ between each edge transition, even if you downsize – it just becomes more accurate, and as a bonus acuity improves. I had a similar surprise to you: a D800E image downsized to the same size as the D4 walks all over the D4; overall noise is similar or slightly lower, but resolution/ acuity are much, much higher – bear in mind we are comparing 36MP non-AA to 16MP with AA (albeit weak).

      Compared to sites with fora, I’m definitely a backwater – DPR probably does 10-20x the traffic I do, if not more; though I don’t know how that stacks up in terms of repeat visitors. But quantity does not of course equal quality.

      Rampant picture taking? So long as you look at the result and assess it to some degree, I’d say there’s benefit in the process: think of it as the feedback/ experimentation cycle. If you only take one photograph a year, you won’t have the opportunity to take a better one until next year (by which time you’ll probably have forgotten how to turn the damn thing on). If you take one every couple of minutes – well, you do the math.

      • Carlos El Sabio says:

        Ming, incredible article – as always. How do you do it? A lot of food for thought here. I would have expected Tom to respond on this one (good to hear from you.) I am in his camp at this time with the rampant picture taking. I assess each one in an attempt to learn, if ever so slowly. Thanks for your hard work and sharing.
        Carlos

        • Thanks. Lots of coffee and little sleep? :p

          In all seriousness – I think it’s probably a consequence of going down the rabbit hole, shooting/ experimenting lots and somewhat objectively looking at the output at the same time…

          • Carlos El Sabio says:

            WWWaaayyyy off the subject…. I have to believe there is some great coffee from Malaysia, but I have not seen it here. I do get coffee from Sumatra, however. It’s great. I also am a coffee drinker. Good for the thinking processes.

        • Tom Liles says:

          Hi Carlos, good to hear from you too. I’ll be in and out for a while as we’ve been in the middle of a house move; now on the other side of it, but without internet. So I have to do this at work—serious internet page in a tab at the ready for whenever someone important walks past. Like being in a Hollywood spy movie, except my computer doesn’t make beeps when I hit a button.

          Oh here comes a manager!

          • A very complex excel spreadsheet works well, too. That said, I don’t really need to do that anymore. Browsing photography-related stuff is part of my job description now…

            • Tom Liles says:

              Excel is literally AMAZING now. It seems much more powerful than I recall from my university days… I’m simply in awe of what’s possible in there. An honorable mention for the free Apple utility “Grapher” too—that thing can map some pretty complex expressions. Amazing stuff. And I thank the heavens I don’t have to use a bit of it—the most difficult calcs I have to do in advertising are how long to the next tea break, or how many marketers it takes to change a lightbulb.

              • I used to do finite element simulation in earlier versions of excel…it worked, but wasn’t exactly elegant. Unfortunately it still sucks on a Mac regardless of which version you use.

                • plevyadophy says:

                  Ming,

                  Can you not use that virtual machine software – Parallels I think it’s called – to create a virtual “Wintel” machine and then install the PC version of Excel, assuming the PC version is satisfactory?

                  • Sure – but why not just use a PC in the first place? In any case, I left the corporate world and all of its Excel models years ago. The most complicated thing I do now with it is keep track of my expenses.

      • Tom Liles says:

        Agreed on Nikon’s 14bit lossless versus full-fat uncompressed 14-bit; I can’t see the difference. But I’m probably the furthest from being an authority on that as it could possibly get.
        I can feel the difference between those RAWs and Sony’s 11-bit + 7-bit delta RAW => “14-bit” RAW in the edit, though. The speed with which Sony ARWs fall apart in the highlights seems more than mere “character.” I think the ARW files are just brittle at the top end, plain and simple; and that’s because of the compression Sony does.
        I don’t mind that they do it—I do mind that I have no choice in the matter. And refusing to give me that choice is worse than nannying, it’s Big Brother. The compression scheme may have emerged from a genuine concern for the customer, but it’s passed over into something else now. The RX100s, the RX1s and now the A7-series are all great cameras held back by this lack of an option.
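        To make the concern concrete, here’s a deliberately simplified toy model of block-based delta compression – this is NOT Sony’s actual bitstream (the function names and numbers are invented for illustration), just the general idea of storing reduced-precision deltas per block, and why that struggles at hard highlight edges:

```python
def encode_block(pixels):
    """Toy block compressor (NOT Sony's real format): keep the block
    minimum at full precision, then store each pixel as a 7-bit
    multiple of a step size derived from the block's range."""
    lo, hi = min(pixels), max(pixels)
    step = max(1, (hi - lo) // 127)  # 7 bits -> at most 128 levels
    return lo, step, [(p - lo) // step for p in pixels]

def decode_block(lo, step, deltas):
    """Reconstruct the block from its base value and coarse deltas."""
    return [lo + d * step for d in deltas]

# A low-contrast block survives intact...
flat = list(range(1000, 1016))
print(decode_block(*encode_block(flat)) == flat)  # True

# ...but a hard highlight edge forces a coarse step and real error:
edge = [100] * 8 + [16000] * 8
lo, step, d = encode_block(edge)
print(max(abs(a - b) for a, b in zip(edge, decode_block(lo, step, d))))  # 25
```

        The point of the toy: the quantization step grows with block contrast, so smooth areas are untouched while values near high-contrast transitions (specular highlights, streetlights at night) take the brunt of the rounding.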

        The same thought process applies to makers neglecting to give us 16-bit RAW data. It was probably a decision made from genuine thought for the customer early on, but has now turned into something approaching dogma. I doubt we will ever see it unless makers are provoked by some vigorous activism on our part.

        It’s sad that they can think up pet beauty smile mode, but not this…

        • The highlights and any sort of smooth tonal transition areas are going to be the most heavily affected. It’s why MF highlights do look quite a bit better in both tonality and color fidelity.

          Easier to explain pet smile beauty mode than why consumers would need 16 bit raw – ‘why are the files so big?’ you can hear them cry…

        • Frans Moquette says:

          No-one can see the difference between lossless and uncompressed because lossless is, well, without any loss. In contrast to lossy compression, like JPEG, which throws out data, lossless compression, like ZIP, does not lose any data at all. You can compress and decompress losslessly a million times and all the data will still be there, exactly as it was at the beginning. When you compress and decompress lossy again and again you will lose data, and you will see it sooner or later, depending on how much compression is applied.
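          The distinction is easy to demonstrate in a few lines of Python – a minimal sketch using zlib as a stand-in lossless codec and a crude quantizer as a stand-in for lossy compression (illustrative only; real JPEG is far more elaborate):

```python
import zlib

data = bytes(range(256)) * 100  # arbitrary stand-in for image data

# Lossless: a thousand compress/decompress round trips, bit-identical.
x = data
for _ in range(1000):
    x = zlib.decompress(zlib.compress(x))
print(x == data)  # True: no loss, ever

# Lossy (toy quantizer, the basic idea in one line): data is discarded.
def lossy(b, step=16):
    return bytes((v // step) * step for v in b)

print(lossy(data) == data)  # False: the fine tonal steps are gone for good
```

          Whatever a manufacturer calls its format, this round-trip test is the operational definition of “lossless”: decompress what you compressed and compare bit for bit.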

          • Manufacturer claims vs empirical presentation aren’t always quite the same 🙂

          • plevyadophy says:

            Hi,
            You can in fact use image editing packages that enable you to manipulate JPEGs in many ways without loss, giving you lossless JPEGs. A prime example of this is the free, and extremely well laid out, FastStone Image Viewer. I have been using it for years for this reason, and because its layout is wonderful; I tend to use it as my default image viewer and send images off elsewhere for editing at times, but even then a great deal of my JPEG editing is done within this program, e.g. I don’t trust anything else when it comes to resizing JPEGs – the level of control given by FastStone is excellent.

            • What format does it save as? Because in theory you could edit your jpegs in PS, save as TIFF or PSD and also not incur any losses.

              • plevyadophy says:

                It saves JPEG to JPEG without loss, but only for a few image manipulation types (over time they add new features; currently one can do lossless rotation, lossless cropping and a few other things). It’s my go-to package when I want to downsize a large JPEG file and want the resultant file to be “strong”, and I have found that even for operations that should result in a lossy file, the result doesn’t appear that badly affected. They have recently added raw support, but it’s not that great, e.g. no white balance settings. And the browser facility layout is very nice.

                Previously, when I had JPEG originals I used to convert them first to TIFF so as to ensure I didn’t end up with a mess at the final image stage, but now with FastStone I have no such concerns; I just do what it is I wanna do with the file and go from JPEG to JPEG.

                Google’s free Picasa package also offers lossless or near-lossless manipulation of JPEG files if you use the SAVE AS option to save final files rather than the SAVE option (which makes an obvious mess of your final output). This is because, akin to Lightroom, Picasa uses sidecar files that contain all of your adjustments rather than making adjustments to the original file; so if you set JPEG output quality to 100%, select SAVE AS and make just one save (of the final file), you will end up with a very good quality JPEG. The only problem is, Google have made it rather a chore to get this quality output (silly dialog boxes and hoops one has to jump through – at least, that was the case when I last looked).

                Of the two packages, if one is happy to use a lightweight package (light in terms of system requirements) for JPEG work, I think FastStone will be more appealing to the advanced user (and unlike another popular lightweight package, IrfanView, it doesn’t require the installation of a zillion plug-ins to use all of its features).

      • knickerhawk says:

        I really doubt that there’s anything to be gained by going from 14-bit to 16-bit. There’s extraordinarily little (if any) value in the jump from 12 to 14, so going from 14 to 16-bit gets us well into angels-dancing-on-the-head-of-a-pin territory. If you’ve ever seen “images” generated from the 13th and 14th bits stripped out of 14-bit files, you’ll quickly realize there’s nothing there but noise (except in the special case of the Nikon D300). In short, all you’re getting from those extra bits is dithering, NOT information. And, of course, dithering can be added later anyway.

        On the other hand, gaining more data via smaller pixels does provide more REAL information at least if we keep the pixel sizes above a couple of microns or so. The returns diminish, of course, as you add pixel density but as you’ve found with your ultraprint efforts, there are some circumstances in which the visible benefit is still present. For most of us most of the time, it’s a non-issue. Most of the time, the value of the “smaller” rig is greater because it’s easier to carry, less demanding, etc. as you note. BUT every once in a while…

        That’s why, to me, the crux of the issue is (to quote Ming): “My personal conflict comes when I might just happen to encounter something that would make a good ultraprint but I didn’t bring enough resolution to make it happen, especially knowing that the tech and shot discipline is very much within my reach and the compromise was due to personal laziness.” Indeed, those opportunities are relatively rare for many of us, but when they come along and we don’t have the tripod, the medium format camera, the big fat ultra-expensive lens, we get upset. In response to this fear of being under-prepared, many photographers overcompensate by ALWAYS dragging around the big gear. What they don’t realize, I think, is that this over-compensation comes with its own set of penalties in terms of lost shooting opportunities.

        My “solution” to the problem would be a relatively small format camera that’s convenient to carry, stabilized, and comes with quality lens options and ISOless behavior, but one that is also truly optimized for shooting multiple exposures to be stitched or stacked when the occasion arises. We’re not far from that now, but some ergonomic and shooting aids could be added to the current crop of cameras that would help. Same for true RAW histograms. The beauty of this strategy from a camera maker’s perspective is that it requires minimal differentiation in product design between general/casual consumers and power users. It’s mostly just a question of software and processing power. Yes, it doesn’t help in some shooting situations involving moving subjects, but it takes a big chunk out of the problem.

        • There’s definitely value in going from 12 to 14 bits; less so from 14 to 16, but as output methods improve the difference will become more visible. I can see the difference especially when I start to do any kind of tonal manipulation to the file, regardless of whether this is done in a 16-bit space or not.

          And there’s definitely a tradeoff caused by overcompensation – but I think this is manageable; go out with one or two lenses and be able to make the most of those FOVs compositionally rather than lug the entire suite of lenses around, for example. And there’s probably a sweet spot somewhere in the middle.

          Why we don’t have RAW histograms is a mystery to me…or at least a more accurate interpretation of the data than the standard JPEG.

        • Tom Liles says:

          Hi Knickerhawk, my alarm bells are ringing as I suspect I’ve bumped into someone who knows what they are talking about.

          Here’s how my brain works:
          From first principles… if the binary series runs 2^0, 2^1, 2^2…, then the 14th bit of a 14-bit number represents 2^13; adding a 15th bit adds another 2^14 values to what we could map (the number values we have at our disposal), and adding a 16th bit doubles that difference again. So I think there should be a jump between an ideal 14-bit and 16-bit file…
          Since sensors are linear (optical flux more or less linearly proportional to the potential difference it creates in the sensel/electron well: double the light, double the current), those two “stops,” the extra two bits of RGB values, should, we’d think on first principles, make quite a difference.
          The electric current created by the photoelectric effect is essentially an analog signal until it’s quantized at the ADC, and downstream of this the magic happens; but giving the ADC and downstream processing more numbers to quantize into can only mean better data, surely? I’m not sure why the top bits, the big numbers, in a RAW file should be noise as you mention; aren’t those values at almost full-well saturation? The signal-to-noise ratio there should be extremely favorable, and on first principles those bits should be the least noisy. I must be confusing something…
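Tom's arithmetic can be sketched in a few lines (the bit depths are the ones discussed in this thread):

```python
# Number of distinct code values a linear ADC has at each bit depth.
def levels(bits: int) -> int:
    return 2 ** bits

for bits in (12, 14, 16):
    print(f"{bits}-bit: {levels(bits)} levels")

# Each extra bit doubles the values at our disposal, so going from
# 14 to 16 bits quadruples them. Whether those extra codes carry
# signal or merely digitized noise is the separate question the
# thread goes on to debate.
```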

          But looking at the top bit in isolation, as the people you mentioned did, doesn’t sound right to me; we’re interested in the whole file. Even if the 14th bit of a 14-bit file were somehow all noise, the data is further down the register, no? At 2^11 or 2^10, etc. If all the noise and dithering resides in the top bit, OK; but then the meaningful data in a 16-bit file is better placed, because it may sit in the range between 2^14 and 2^12, say, and that’s a lot more numbers and range than the 14-bit file offered. Again, if the hardware has a bigger range of numbers it can write color data into, even if our output media can’t yet match the gamut, won’t that mean sturdier and much more color-accurate files (analogous to the “more resolution –> downsize –> better noise performance” thing we know so well)?

          I’m prepared to be wrong, but I think there will be a difference…

          I can say anecdotally: like Ming, I already experience this difference, between 12bit and 14bit RAWs for example, in the edit… I experience it as more malleable data that does not posterize readily.
          And yes, even on an 8bit screen, I think that difference is tonally apparent, too.

          I may well be wrong; but even so, why not give us the option in-camera, and let’s see?

          • mosswings says:

            Dithering is something that’s added to the least significant bits of a digital word to obscure noise and other artifacts. Yes, a 16-bit word is able to discretize a signal into four times as many levels as a 14-bit one. In a linear encoder, half of those levels are assigned to the first stop of the dynamic range. If we want to emulate the exponential saturation behavior of film, we would like to have a lot of resolution in the highlights to preserve information. Dithering would have no effect on the highest-order bits of a digital word.
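That allocation can be sketched directly, assuming a purely linear encoding: each stop down from saturation spans half the remaining code values.

```python
# Code values spanning each stop below saturation in a linear N-bit file.
def codes_per_stop(bits: int, n_stops: int) -> list[int]:
    hi = 2 ** bits
    out = []
    for _ in range(n_stops):
        lo = hi // 2
        out.append(hi - lo)  # codes covering this one stop
        hi = lo
    return out

print(codes_per_stop(14, 5))  # [8192, 4096, 2048, 1024, 512]
```

The top stop of a 14-bit file gets 8192 codes, the fifth stop down only 512, which is why highlight tonal resolution is so generous and shadow resolution so scarce.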

            Now the question I have is, are cameras even digitizing at 16 bits? I’m not sure…simplistically, an additional 2 bits of resolution could imply 4 times longer settling time to preserve the accuracy of conversion that we want. Remember how the D300 slows down when 14 bit recording is enabled? That’s mostly increased conversion time, not write speed.

            16 bits, or even larger words, offer benefits in postprocessing by giving space for higher precision computations without roundoff-induced posterization and artifacting. But programs like LR, PS, and Capture 1 already work in 16 bits or greater.

            • That’s the key question: are sensors capable of outputting anything but random noise in the lowest bits? If not, then adding more bits won’t help. Of course, computation in post-processing has to be done with much wider words (consider that multiplying two 14-bit numbers requires a 28-bit number to hold the result without any possibility of rounding).
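The word-width point is easy to check: the product of two maximal 14-bit values needs exactly 28 bits to hold without rounding.

```python
# Largest 14-bit value, squared, just fits in 28 bits.
max14 = (1 << 14) - 1            # 16383
product = max14 * max14
print(product, product.bit_length())  # 268402689 28
```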

              Another open question is the quality of those bits. 14 bits from one sensor is not necessarily the same quality as 14 bits from another. 14 bits from one processing chip (e.g. Canon DIGIC) may not be as good as from another (e.g. Nikon EXPEED). We know that RAW data is cooked in most cameras these days, so this can determine the quality of those bits. For example, one camera’s output could posterize more easily than another’s because its processing chip does its lens correction and other RAW cookery with just enough precision that the limits stay hidden under the noise of a straight SOOC JPEG, or a straight JPEG from the maker’s own desktop RAW processing software.

              Or consider the possibility that one kind of lens may require more correction than another, so it eats up more precision, and RAW files shot with that lens posterize more easily than some other lens requiring less correction, all on the same camera body. The kind of algorithm used to cook the RAW file (eg. lens correction, again) can also affect how malleable the file is.

              They might do this kind of limited precision computation because they don’t have enough room to add more transistors to their chip, or their chip might be too slow, or have run up against design budget limitations, or any other number of reasons.

              To answer Tom’s question above about adding bits and doubling range: yes, that is true, but in a linearly-encoded PCM system it’s better to think of each additional bit letting the system tell the difference between adjacent levels with twice the precision. Since it’s linear, you could get more top end (twice as big), or, if you hold the top end constant (e.g. an output of 2 volts), each bit lets you encode small differences with twice as much precision.
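Andre's precision point, sketched with a hypothetical 2 V full-scale output: holding the top end constant, each added bit halves the smallest difference the encoder can record.

```python
FULL_SCALE_V = 2.0  # hypothetical full-scale sensor output

def step_v(bits: int) -> float:
    """Smallest voltage difference a linear encoder can resolve."""
    return FULL_SCALE_V / (2 ** bits)

for bits in (12, 14, 16):
    print(f"{bits}-bit step: {step_v(bits):.9f} V")

# Same 2 V range; 16-bit steps are 4x finer than 14-bit steps.
```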

              • knickerhawk says:

                The link below takes you to an online analysis that’s much better than anything I could hope to offer. A little old, but the basic principles certainly haven’t changed, and I’m not aware of any major sensor/ADC design improvements since then that would make the effort of going from 14-bit to 16-bit readout worth the cost. By the way, the author is a particle physicist at the University of Chicago and Fermilab. I’m not going to question him… are you? 🙂

                http://theory.uchicago.edu/~ejm/pix/20d/tests/noise/noise-p3.html

                I’ve also seen several examples where the files were padded with zeros up to the 12th bit. When viewed, nobody could make out discernible information; it all looked like pure noise, with the one exception of the D300, as explained in Martinec’s paper. As you add more meaningful bits, the 12th, 11th, 10th, etc., you see an image emerge, but surprisingly slowly. It’s pretty compelling evidence. Martinec’s paper also includes a gif you can roll over to see the difference between a 6-bit and 8-bit image. Surely at those levels we’re talking major differences in quality, right? See for yourself.

                I respect Ming’s opinion that he can see the differences, but I’ve personally worked with my D300 12-bit and 14-bit files, doing all sorts of manipulations to them to try to prove to my own satisfaction that it was worth the effort to shoot 14-bit. Even though there is a known explanation for the D300 14-bit files being better, the reality is that the difference is quite small and only becomes evident in heavy lifts and manipulations of the shadows. I never saw anything in midtones and highlights. Thus, I’m supremely skeptical that any 16-bit sensor+ADC is going to yield any useful information, even with severe processing and, accordingly, I would MUCH rather see the camera makers focus on more pixels than on more bit depth when it comes to adding processing and storage “costs”.

                • Sorry if this is getting technical, but Martinec’s demonstration of truncation from 8 bits to 5 bits is not showing that we are insensitive to low levels of precision. Instead, it shows the effectiveness of dither when it’s done properly on the right kind of signal. In his case, he added noise (dither) to his 8-bit signal and then truncated it to 5 bits. The magnitude of the noise was such that we could not see the posterization. Later on, he shows that with much grosser truncation (the 2-bit example), the amplitude and distribution of the noise is no longer enough to mask the posterization at the size of image he is showing.

                  What he is showing is different from quantizing an analog signal, which is what happens in a camera sensor. Martinec is showing how one can reduce precision in something that is already digitized.

                  In plain English, this also means that the bit depth required is very much subject dependent. The reason we cannot see the 8- to 5-bit truncation is because the size of the transitions (which are also determined by the size of the step wedge) is smaller than the “size” of the noise. Size is in quotes because there is a more technical term for it, but I hope this is a simplification that gets the point across. The reason we can see the 2-bit quantization is because the steps now span a much larger length than the noise.
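A toy version of the effect (my own sketch, not Martinec's code): a constant tone that falls between two coarse levels is unrecoverable after hard quantization, but dither about one step wide, added before quantizing, lets the average of many pixels recover it.

```python
import random

random.seed(0)
SIGNAL = 101   # a tone sitting between the 5-bit levels 96 and 104
STEP = 8       # truncating 8-bit values to 5 bits -> steps of 8

def quantize(v: float) -> int:
    """Round to the nearest representable coarse level."""
    return STEP * round(v / STEP)

# No dither: every pixel lands on the same wrong level (posterization).
hard = [quantize(SIGNAL) for _ in range(10_000)]

# Dither one step wide, added before quantizing: pixels scatter across
# the two neighbouring levels in proportion to the tone's true position.
dith = [quantize(SIGNAL + random.uniform(-STEP / 2, STEP / 2))
        for _ in range(10_000)]

err_hard = abs(sum(hard) / len(hard) - SIGNAL)  # stuck 3 codes off
err_dith = abs(sum(dith) / len(dith) - SIGNAL)  # averages out near 0
```

The undithered pixels all read 104, so no amount of averaging gets back to 101; the dithered ones split between 96 and 104 in just the right proportion.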

                  • Sorry, one more thought … One may wonder how the dithering noise can represent any hidden underlying information that is supposedly lost by truncation. For example, in the Martinec example above, we can still tell that the step wedge transitions smoothly from left to right.

                    Think of the noise as having a certain distribution on your screen — the so-called probability density of a noise function which tells us how likely a dot will appear at a pixel on our screen or output media. The underlying information being truncated will subtly bias the noise distribution, putting more dots in one local area than another, so that when viewed from sufficiently far away, our eyes integrate the noise into the big picture and we can still see the underlying signal’s behavior. This is why the size of the image (and the viewing distance and size of the detail in the picture) determines what kind of noise and precision we need.
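The "eye integrates the dots" idea can be pushed to the extreme in a toy example (my own sketch): quantize a smooth ramp all the way down to one bit with this kind of dithering, and block averages, standing in for viewing from a distance, still track the ramp.

```python
import random

random.seed(1)
N, BLOCK = 4096, 256
ramp = [i / N for i in range(N)]  # smooth 0..1 gradient

# 1-bit dithered quantization: a pixel is white with probability equal
# to its true brightness (the biased noise distribution described above).
dots = [1 if random.random() < v else 0 for v in ramp]

block_mean = [sum(dots[b:b + BLOCK]) / BLOCK for b in range(0, N, BLOCK)]
true_mean = [sum(ramp[b:b + BLOCK]) / BLOCK for b in range(0, N, BLOCK)]
worst = max(abs(a, ) if False else abs(a - b) for a, b in zip(block_mean, true_mean))
print(worst)  # small: the gradient survives 1-bit quantization
```

Each pixel is only black or white, yet every 256-pixel block's mean lands close to the true local brightness, which is exactly why image size and viewing distance decide how much precision you need.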

                    • Tom Liles says:

                      Thank you Andre, mosswings and knickerhawk, this is fascinating.

                    • knickerhawk says:

                      Andre, the relevant portion of Martinec’s discussion is farther down past the step wedge examples. Be sure to check out the city skyline image example and accompanying text that simulates the removal of the two least significant bits from a 14 bit file. Also check out the concluding paragraphs in the section.

                    • Thanks for pointing that out. I have no disagreement with his findings: if noise dominates the lowest bits, then truncating them doesn’t hurt. Another thing that I found very interesting is that noise at the lowest end for that Canon sensor improves with increasing ISO!

                    • How on earth is that last bit possible? Is that due to data truncation too?

                    • mosswings says:

                      I doubt that it’s data truncation… Canon cameras tend to have rather high ADC noise, which compromises their low-ISO performance. If you look at a 6D’s DR and noise vs. a D610’s, you’ll find that the 6D has about a 1/2 stop better performance at the higher ISOs; the analog gain has been tailored for best performance at high ISOs. The D610, by contrast, shows the typical Sony characteristic of a single analog gain tailored for best low-ISO performance, with a monotonic (“ISOless”) degradation above that.

                    • I wonder if there’s any way to have the best of both worlds?

                    • mosswings says:

                      How about a D4? It matches the 6D at ISO 3200+ for DR and is a stop better below that…but the D800 is better than the D4 below ISO 400…lots better. Even at 100%. Sony is able to work magic by its massively parallel on-chip ADC architecture. Doing that permits digitally compensating for a whole host of imperfections, and it reduces noise. Off-chip conversion may be quieter, but getting the signal there entails losses and corruption.

                    • Does the D4 use a Sony chip? I thought that, as with its predecessors, the maker was still unclear. But that observation is similar to my experience in practice: DR falls as ISO increases, but not by as much as you’d think; it doesn’t seem to be the linear loss of the D800E. And at low ISOs, the D800E more than holds its own with the medium format kings, though just not in a very linear way.

                    • mosswings says:

                      The sensor is a Nikon NC81366W, while the D800 uses a Sony IMX094AQP. The D3X’s sensor appears to be Sony-derived. With the exception of the D3200, D5200 and D7100, all other Nikon cameras have used Sony sensors, according to Hogan:
                      http://www.bythom.com/currentdslr.htm
                      If you look at the DR graphs of these sensors you can sort out the Sony-derived from non-Sony products; all the Sony products have that characteristic monotonic fall in DR with ISO. All the others have some limiting of DR at low ISOs. The D3200 has the least, but also the worst overall performance. The D3s and D4 have been clearly optimized for high ISO, and show the most low-ISO limiting.

                    • Thanks for the info. Surely Nikon doesn’t make the sensor themselves, though – why would they use Sony designs in everything else if they had in-house capability?

                    • mosswings says:

                      They don’t. Aptina is one of their manufacturing partners (Series 1 and Coolpix, most notably), and there’s one other manufacturer besides Toshiba whose name escapes me right now that supposedly fabs their FF pro sensors.

                    • plevyadophy says:

                      As far as I am aware, as far as the D3x and I think also the D800 image chips are concerned, Nikon are on record as saying that the chips are Nikon DESIGNED/MODIFIED but Sony MANUFACTURED. If my memory serves me correctly, it was either DPReview or Imaging Resource who got that info out of them; in fact my recollection was that the announcement came across as a rather terse “listen you know-it-all internet forum chattering types, you’re wrong, it’s not a Sony sensor; we designed the thing and commissioned Sony to make it. So there!!”

                      Further, and again relying on my recollection of things, the only digital sensor that was truly a Nikon sensor was that rather unsuccessful LABCAST sensor (at least that’s what I think it was called)

                    • LBCAST, in the D2H. Great tonality but absolutely lousy latitude. I had one. If nothing else, it taught me exposure discipline and how to get the absolute most out of a tiny sensor…

                • It just occurred to me that it’s quite possible that 12 bit yields 8 real bits after subtracting out the padding and useless lower levels, and 14 bits yields 10; so we’re seeing a difference, but it still isn’t the full 12 (or 14) – let alone 16.

                  No point in more pixels if we’re sacrificing acuity (look at the 24MP APS-C sensors), SNR or color accuracy. And there’s the whole diffraction problem to deal with…

                • Tom Liles says:

                  Evening knickerhawk,

                  So I’ve had a couple of goes at Mr Martinec’s write-up, and I think I’ll need a couple more. I’m like HAL 9000 at the end of 2001 these days: slow and dull and losing my mind, I can feel it, I’m losing my mind, Dave… etc. Give it another couple of years and I’ll be slurring my name and singing “Daisy,” no doubt. Anyway, from a quick butcher’s at the whole thing and a once-through of the first two parts, these two lines stuck right out for me:

                  — “RAW data is never posterized.”

                  — “When posterization does arise, one must reconsider the processing chain that led to it and try to find an alternative route that avoids it. The main point of emphasis here is that the bit depth of the raw data is never the culprit.”

                  That’s quite bold.

                  I’m not sure what he means on the first one. If he means “RAW data” on any terms we’re all familiar with then he means digital data emerging from the other side of an ADC. To say that that isn’t posterized is to contradict himself from a few paragraphs preceding:

                  In the absence of noise, the quantization of an analog signal introduces an error, as analog values are rounded off to a nearby digitized value in the ADC. In images, this quantization error can result in so-called posterization as nearby pixel values are all rounded to the same digitized value.
                  [bolding mine]

                  So are RAWs posterized or aren’t they?
                  (Yes, no?)

                  This calls into question his corollary, the second quote I clipped, about RAW files and their bit depth never being the culprit in (subsequent) posterization. Though I agree wholeheartedly that the processing chain plays a huge part, and I wouldn’t want my questioning to downplay this aspect of his point.

                  Mr Martinec educates us that ever finer quantization is pointless with respect to posterization, since noise steals all this thunder when the magnitude of the tonal jumps it introduces is greater than the grain of the tonal digitization (a relationship which holds down the processing line, he says). He showed us some good examples with a delta of “12” for some noise. Is real life going to work like that? I’m not sure (since I don’t fully understand). But I think he’s smuggled in some assumptions about noise there; I’m not confident I follow him well enough to speak on it. Suffice it to say, I thought we knew signal-to-noise was the real consideration, and since that relationship is a dynamic one, couldn’t we imagine strong signals at the top of the shop where the noise (in ratio to the signal) would be quite small indeed… perhaps small enough to negate this dithering effect he mentions (if I understood mosswings correctly, he said the same thing)?

                  A lot to get into on that page though. Thanks for sharing it knickerhawk.

                  Lastly, some things that make me feel like I’ve landed here from another planet: could someone explain how the top stop of a RAW file, the first half of the data, is mostly noise? I’ve been completely bamboozled by that. Does not compute…
                  Neither does this thing where people have managed to show a picture stripped down to the first (the top) stop or two and found there’s not much discernible to look at there… That’s half the data in the file, so what gives? How is that possible? I’m confused dot com.

                  (P/S Fermilab is such a blast from the past. I used to work in the nuclear industry — health physics: radiation detectors, instruments, dose meters — and Fermilab is up there for me. Not quite Mt. Olympus — they haven’t had the highest energy accelerators for some time — but way up there. I used to read and read about this place—never been there; would LOVE a tour round the place. Maybe some day! And wouldn’t it be great to bump into Mr Emil Martinec 🙂 )

                  • Don’t think of it as being the top half. Think about those extra bits as letting you tell differences between adjacent pixels with twice as much precision per extra bit. Not intuitive, but there it is.

                    • mosswings says:

                      Exactly. Adding bits to a digital word is just like adding cents to a $10,000 check: $10,000.00 is exactly the same amount, expressed with two decimal places more precision. So if you put that $10,000 in the bank, you can now track more precisely the excruciatingly small interest the bank will pay you every month. 😉 Dithering in a digital system is equivalent to making the cents column of that $10,000 vary randomly. It has essentially no effect on the $10,000 column, but if your calculations are so bad as to be 5 cents off out of $1,000,000, that randomly varying cents column will hide the fact when you average over many such calculations.

                    • Tom Liles says:

                      How does this fit with the ETTR mantra? The high priest being Thomas Knoll, if I recall correctly.

                      In most cameras I’ve used, it seems to work…
                      (Not so much the Sony A7)

                    • Doesn’t change a thing.

                    • Tom Liles says:

                      Just to qualify: “this” is referring to your advice not to think of the first stop as the first half of data, Andre. That doesn’t sound compatible with what we (beginners) have been told in the good church of ETTR

                      Thanks Andre and mosswings; I said I was prepared to be wrong above and this conversation has persuaded me that I am (re: benefit of 16bit RAWs). It seems, too, as though a fair bit of other stuff I’ve picked up isn’t exactly right either…

                    • Tom, my vague understanding of ETTR is that it’s trying to expose the sensor to as many photons as possible without overloading it. So ETTR works more on the analog side of the sensor than the digital side. The extra bits on the digital side let you record the current made by those photons with greater precision.

                    • Tom Liles says:

                      Andre, yes your post above mentioned a maximum PD (2V, say) at the photosite, and assuming a ceiling does not change, I can easily see how more bits just means more resolution at the ADC—more precision in discriminating between values as you say. To use mosswings’ example, our bank account is never getting greater than $10,000, though our accounting skills may improve. I get this.

                      So Andre, ETTR encourages us to expose as brightly as we can (before clipping what we wouldn’t like to clip) since, agreed, it means more signal and therefore a better SNR. If we pull the exposure slider down in post and bring the image back to a “normal” looking exposure, we should find a more saturated and less noisy result. That this actually works in practice suggests a lot to me. Though as Martinec pointed out, at high ISOs, say a dark 1600 vs. a brighter 3200 with the normal exposure somewhere in the 2000s, exposing to the right isn’t especially a good idea; in fact, the opposite is probably the case: brightening a dark 1600 in software will often net a better picture than darkening the 3200 exposure in the same software. I’ve found this to be the case, more or less, in practice, definitely with the Exmor in the A7: shadows and darks are more readily recovered for less penalty than I’m used to (and thank God, because the top end shoots like slide film); with other cameras, though, my cut-off for ETTR is ISO 400 (with the A7 I refrain from doing it at any ISO, except in flat light). This made sense to me before I even saw Emil’s page, as I understood any exposure at an ISO higher than native to be an effective underexposure gained up in hardware. So ETTRing at anything other than base (native) ISO seemed a bit silly on paper; in practice, for whatever reason, I find an ETTR exposure at 400, brought down a stop and toned in post, looks better than if I had just made the ISO 200 exposure at the scene… In some lucky scenarios this even holds out to 640, but by 800 the advantage seems gone, and past that, most definitely disappeared.
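The analog-side half of Tom's account can be put in numbers (a sketch under the simplest assumption, photon shot noise only, with a Gaussian standing in for the Poisson counts): exposing two stops to the right collects four times the photons, and scaling back down in post divides signal and noise alike, so the doubled SNR survives.

```python
import random

random.seed(2)

def snr(mean_photons: float, n: int = 5000) -> float:
    """Empirical SNR of a shot-noise-limited exposure (noise = sqrt(signal))."""
    xs = [random.gauss(mean_photons, mean_photons ** 0.5) for _ in range(n)]
    m = sum(xs) / n
    var = sum((x - m) ** 2 for x in xs) / n
    return m / var ** 0.5

normal = snr(100)   # "correct" exposure: SNR ~ sqrt(100) = 10
ettr = snr(400)     # two stops to the right: SNR ~ sqrt(400) = 20
print(normal, ettr)
```

This ignores read noise and ISO gain entirely, which is roughly why the advantage fades at high ISO in the way Tom describes.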

                      My first encounter with the term ETTR certainly considered things both sides of the ADC, and this is where my understanding that the top stop of a RAW file held the largest allocation of tones (linear encoding) came from. I’m confused now if we’re saying that’s the case or not. Knickerhawk’s example of the 13th and 14th bits being just noise seems uncontested…
                      I understand gamma has a part to play here and we must compress parts of the RAW data to make it visible on a screen. But I confess, this thing that the top bit, i.e., the 14th one in a man-size NEF, is mostly noise makes zero sense to me. I understood mosswings to say previously that dithering (mosswings made a distinction, but these terms — noise and dithering — seem to be used interchangeably) wouldn’t have an effect in the top bit(s) of a digital word and that made sense to me as surely the signal is too strong for the dithering effects of noise to be visually relevant? But again, that is working from the “half the data in the top stop; strongest signal at the top” premise.

                      All in, I’d been working from the premise that we want to bunch a digital exposure up on the RHS of the histogram as best we can, then tone it left in post, to create the most tone rich and pleasing pictures we possibly can. More tones available on the right = better pictures.
                      [Not too dissimilar to shooting C41 where we bump up the shadows (expose for shadows) when metering and taking the shot; let the low contrast emulsion compress the DR of the scene as best it can, develop the film, then burn in post (bring histogram left and spread it with contrast) to yield a pleasant result.]

                      But yeah, on the digital front this is all up in the air for me now:

                      1) RAWs aren’t posterized?
                      2) RAW bit-depth has zero contribution to posterization in post?
                      3) Top stop is just noise?

                      My experience is amateur and limited, but I find:

                      4) 14bit RAWs edit and print better than 12bit ones
                      5) 12bit RAWs edit and print much better than 8bit jpegs
                      6) In a typical D3 file I’m always happy to see a hump at the RHS of the histogram (often something that looks flatly bright on screen), because that is a gold mine of tones waiting for me. A classic high-key image histogram (it looks like the climb a roller coaster makes) is also manna from heaven: pull that a stop down, really put a bend on the tone curve (most of the bend below the y=x line, i.e., a tiny top half of the “S”), and you get richly saturated and sumptuous pictures… It certainly feels like a great number of tones are in the top bits…

                      Anyway, enjoy the irony MT—this was the banter that “Beyond the Numbers” liberated 😀

                    • Sorry if I sound like a broken record, but that 14th bit is recording extra resolution, not headroom. No matter what happens in the RAW processing engine, the sensor is still outputting the same data. When the physical sensor is overloaded, it doesn’t matter how many bits are used to record that overloaded output — it’s still blown. What more bits may allow you to do is to tell the difference between two tones near the top that have a very subtle difference that may be numerically indistinguishable (and therefore blown for highlights or blocked up for shadows) when quantized to a lower number of bits.
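Andre's last point can be put in numbers (the tone values here are arbitrary, chosen only for illustration): two tones about 0.01% of full scale apart collapse to one code at 12 bits but stay separate at 14.

```python
def encode(fraction: float, bits: int) -> int:
    """Linear quantization of a 0..1 tone to an integer code."""
    return round(fraction * (2 ** bits - 1))

a, b = 0.4000, 0.4001  # two nearly identical tones (arbitrary values)
print(encode(a, 12), encode(b, 12))  # same code at 12-bit
print(encode(a, 14), encode(b, 14))  # distinct codes at 14-bit
```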

                    • Tom Liles says:

                      Not at all, Andre. I’m sure I get this: as above, we have $10,000 in the bank, adding a bit (a digital bit, not a bit of money) doesn’t magically give us more money in the bank—it just lets us look at the money we do have at a much finer resolution (with added implications for accuracy).

                      Understood on full-well values; but don’t we need only one number to record that? From [saturated number – 1] on down to the last useful number before the signal is hard to discriminate from the noise, isn’t it all data? The more resolution there is in each useable stop, the more opportunity I have for creating quality contrast… As you say, more bits:

                      … allow you to … to tell the difference between two tones near the top that have a very subtle difference that may be numerically indistinguishable (and therefore blown for highlights or blocked up for shadows) when quantized to a lower number of bits.

This feels like it’s an important point for contrasting (exaggerating local input/output ramps) and darkening (normalizing an ETTR image); but also for recovering highlights.

                      I appreciate that only 0 to 255 is available to me in my output medium, and at a specific gamma, but I’ve always thought of this like color gamuts, where the rule was start as wide as you can and work your way down to your output gamut. I can calculate (post process) a better output 0-255 from starting materials 0-16383 than from starting materials 0-4095, or even 0-255. I see it as a bit like sculpture—start with a big block of untouched raw material, chip away at it until the finished product—which will certainly not be the same amount of material as you started with. A 14bit RAW may seem like a gargantuan lump of rock incommensurate with the intended output size; but it just means that more shapes and expressions are open to us. Plus Adobe does all the labor! We just sit back and bark orders.

                      Anyway, if I could beg one answer from anyone, it’s this:

                      — how is the top stop of a RAW file mostly noise? I just cannot force myself to understand that…

                    • mosswings says:

                      Tom – the top stop of a RAW file is NOT mostly noise. It includes all of the data from the half-full scale point to the full scale point. There is of course noise buried in this huge signal, but you can’t really see it until you start looking at the bottom stops of the file. Those bottom stops are encoded by the least significant bits of the file.

                      BTW, convention is to call bit #1 the MOST significant bit – i.e., when it’s 1, the signal is in the upper half of the dynamic range; when it’s 0, the signal is in the lower half. Bit N (N=12, 13, 14, 16…) is the LEAST significant bit. Now, let’s consider a signal with 1 LSB worth of noise. This means that that LSB will be randomly flipping back and forth between zero and 1. So let’s consider that half-scale signal again: it might be encoded as 100000000000 or 100000000001 depending on what the noise value is at the time of digitization.

                      Now, what the most significant BIT of a RAW file word encodes might be different, depending on the tonal curve used. But it still encodes a very large signal, and the least significant bit of a RAW file encodes a very small signal, or mostly noise.
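mosswings’ numbering convention and the one-LSB-of-noise example can be sketched as follows (a hypothetical 12-bit word, with the noise simulated as a coin flip):

```python
import random

BITS = 12
half_scale = 1 << (BITS - 1)        # 0b100000000000 == 2048

def in_upper_half(code):
    """Bit #1 (the MSB) alone says which half of the dynamic range we're in."""
    return bool(code >> (BITS - 1))

print(in_upper_half(2048), in_upper_half(2047))   # True False

# One LSB of noise means the bottom bit flips randomly: the same half-scale
# signal digitizes as ...000 or ...001 on successive reads.
reads = {half_scale + random.randint(0, 1) for _ in range(100)}
print(sorted(reads))                # almost surely [2048, 2049]
```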

                    • I think mosswings has a pretty good explanation.

                      The italicized part of my post that you quoted … Let’s consider an example: suppose the sensor outputs 0.99992 Volts for a pixel’s brightness and 0.99990 V for its neighbor, and the maximum signal the sensor can output is 1V. With 12 bits, both pixels will appear to be identical (about 1.0000V), so you get a 2-pixel line instead of two distinct pixels. With 14 bits, you can tell the difference between the two pixels, and once you can tell the difference, you can do curves, recovery, etc. This assumes that noise is lower than the actual difference between the pixels.

                    • Tom Liles says:

                      Thank you mosswings. Well I’m glad to know I’m not going batty (though I have form, no doubt!). Just to back up, that was a misinterpretation on my part of knickerhawk’s line:

                      If you’ve ever seen “images” generated from the 13th and 14th bits stripped from 14-bit files, you’ll quickly realize there’s just nothing but noise

I wasn’t aware of the convention of referring to higher order bits with lower numbers, and lower order bits with higher numbers, i.e., knickerhawk’s 13th and 14th bits there are the last two bits in a 14bit file: 2^1 and 2^0. On those terms (the correct ones!) it seems surplus to requirements to mention that these last two bits are mostly noise. This said, when dealing with thickos like me I can understand the desire to leave nothing to chance 🙂

I’m not sure what we all think about 16bit RAW recording now, I know knickerhawk isn’t persuaded and his link to Mr Martinec’s page gives good reason why… But through the course of our conversation here, I still carry the nagging feeling that more resolution would effectively help save tones from those lower bits; since on first principles again, with more numbers (bins) inside the same spread of signal, i.e., a more finely quantized data set, doesn’t that offer us the chance to have more tones, just on brute numbers, above the “lost to noise” line? We’re not physically getting more photons on sensor, I’m not under that impression; what I mean to say is that, numerically, I can put more signal on the right hand side (of the histogram), and when I pull the data left in post get a better result than if I’d just put the tones there at exposure, or had fewer bits to bin into to start with. Perhaps the analogy to print film is a little labored, but I do see it as a chance to compress, more accurately, information into useable parts of the file. I might have to revisit MT’s article on BW conversions and DR (that’s after I’ve learned how to count!).
I understood Emil Martinec’s point that this ever finer gradation of tones is not actually as useful as I’d think, since dithering (from noise; or as you suggested mosswings, purposefully added) masks any visual effect (not that we’d see any of this on an 8bit display anyway). I’m still getting to grips with his page: it’s been a while since I used my brain in anger… But it’s hard to tear up over a year’s worth of experience and received learning that more bits means more colors (tones, obviously) and more robust, less noise-prone files in the edit.
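Tom’s intuition, that finer quantization leaves more distinct shadow tones to survive a push, can be sketched on a hypothetical, idealized noise-free ramp (real files are dithered by noise, which is exactly the Martinec caveat):

```python
# A smooth analog ramp confined to one deep-shadow stop:
# from 1/4096 to 2/4096 of full scale (the bottom stop of a 12-bit file).
ramp = [(1 + i / 1000) / 4096 for i in range(1000)]

def distinct_codes(values, bits):
    """How many distinct integer codes an ideal ADC assigns to these values."""
    levels = (1 << bits) - 1
    return len({round(v * levels) for v in values})

print(distinct_codes(ramp, 12))   # 2 -- posterizes badly if pushed in post
print(distinct_codes(ramp, 14))   # 5 -- several more tones survive the push
```

In practice sensor noise dithers these codes, which both masks the posterization and erases much of the theoretical gain; those are the two effects being weighed against each other here.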

                      I shall have to find a new photographic crusade to get into! This has been an excellent discussion though, so thanks to:

                      knickerhawk
                      mosswings
                      Andre
                      plevyadophy
                      and, of course, our esteemed host, El Mingito!

                    • Tom Liles says:

And thank you Andre! You got in there while I was dithering (garden variety, not digital) over my comment. Sorry to make you guys give me the Sesame Street version! 🙂

• Older sensors, no – on newer sensors the usable lowest bits are getting lower; partially due to processing and partially due to the on-chip circuitry itself. Look at how much recoverable shadow detail there is in the D800E’s files…

  30. nothingbeforecoffee says:

Ming; a great and meaningful piece. As long as we adopt a “we are what we own” mentality, we will continue to be seduced by the siren song of more and better.

  31. Fantastic insights as usual Ming! Sorry to you and the rest of the core gang that I have not commented in a while.

    One thing that continues to jump out at me regarding many of your posts on the topic of technological sufficiency is a common theme around at least one major remaining hurdle for the film to digital evolution…nonlinearity. While I don’t fully know or understand all of the physics involved in how the film (or sensors) and lenses contribute to the linear vs. nonlinear image capture, processing, and final shared file types, it seems like this might be a ripe realm for continued technological innovation…both hardware and software, right? The JPEG2000 standard uses a wavelet approach for file compression right? Is this a nonlinear type of compression?

    I’ve previously shared with you my interest and graduate school research in fractal dimension. I’d love to see a future post from you on the nonlinearity issue. Get all geeky and use your physics knowledge to educate us! As always, thanks for your continued wisdom and generosity in sharing your thoughts and images with us!

• Bingo. Nonlinearity is difficult because it isn’t a software or algorithmic issue; it’s a fundamental physical/ design consideration/ limitation of the sensor hardware itself. We can always increase the number of recording bits without too much issue – that is merely a question of data throughput. I don’t think it’s complicated: our eyes aren’t linear, nor are chemical reactions due to saturation points and irregularly sized ‘photosites’. Irregularly sized pixels will be the next leap, I tell you…as soon as somebody figures out how to lay out and wire up the array (not to mention deal with the enormous leap in computing power required to convert it to linear output to match our monitors).

      • In the fever swamps of various Fuji forums, this kind of thing is getting a lot of discussion. From what I gather, Fuji (and others) are trying to do this, as a sort of logical next step from the X-trans design. From the comments you’ve made in other places, and the robust discussion (that occupies so many who would probably be better off getting out and taking pictures) on the matter, it seems that last step, converting non-Bayer layouts to display properly on a screen, is still quite a hang-up.

• The short of it is that you’re trying to map an irregular layout onto a rectangular grid in a non-discernible, continuous fashion – it’s only going to work if you have an irregular grid, or so many mapping locations (i.e. pixels) that the eye can no longer distinguish individual ones. And we’re not there yet…not even close.

  32. JohnAmes says:

    Reblogged this on John Ames Blog and commented:
If more of us REALLY listened to this point of view, photography would take a giant step forward and camera companies would sell many fewer cameras.

  33. Excellent information, thanks! I’m really glad that I realized the same thing (12-16 MP is enough) already in 2009, and even now I’m not tempted by the “latest and greatest”. My girlfriend and I are leaving on a 6-month world trip, and guess what we are taking with us: http://pcurious.com/2014/05/04/6-months-camera-bag/

    I think / hope Ming will approve 🙂

• BTW at home I print on an Epson 3800 up to 13×19″ (A3+ size in Europe), and 12-16 MP are enough for this use-case also. So instead of updating my gear, I pay for ink and paper…

      • Not if you’re making Ultraprints 🙂

• Ming, I’m very interested in the idea of Ultraprints, but for the trip that we are planning there is no way I can think about taking any larger piece of gear – which I don’t even own, so maybe I can make 4×6 Ultraprints. 😉

  34. albertopr says:

What do you think about the light field technology from Lytro? They get people to take more photographs by making the process fun and simple. Enjoy the day over there ;D

    • I think it has potential to fix focus errors afterwards, but I don’t like the fact that the viewer can effectively recompose your image by changing the focal point – and I like even less that the license agreement for their technology is so restrictive you’re effectively giving up your rights. Until the latter changes, I doubt it will ever see wide adoption professionally.

      • albertopr says:

Yes, I share your opinion that it may become an interesting tool. Personally it’s too interactive for me – I still prefer seeing a work on paper instead of on a screen, but who knows…

  35. Willem Kotze says:

    I think a key factor would be for companies to build cameras that free people from the tyranny of choice in such a way that they do not feel deprived.

    • As in not restrict electronic/ ‘free’ features by price level?

    • David Challenor says:

Completely agree with this sentiment. My cameras, the E-M1 and D700, give me a huge choice of customisable buttons and settings. However, if I use one camera for an extended time and then go back to the other, I inadvertently disturb the settings and end up needing to get the instruction book out again. OK, I should use one system only, but many of us have multiple systems, so it would be nice, once a camera has been set up, to be able to lock it against accidental (tyrannical) changes.

  36. I agree, I made the painful decision last year to spend my upgrade money on education not gear (a very hard decision) and I have many more keepers now. I am now motivated to get it right in camera because I do not have the time to PP a hundred images a day. I was lucky that Ming was generous enough to share his knowledge and personal workflow with me (via his video classes) and I continue to believe that the results of more training will beat out the results from that new camera every time.

37. Yesterday I was talking to my photographer friend about future cameras. The next generation, 20 years from now, may not prefer large cameras. If you show images on a camera display to any kid today, they will touch the screen to move the pictures. They may prefer a screen of 8 inches or more at 4K resolution rather than 920K dots, without the weight but with excellent picture quality. Cameras will see a lot of changes in the future, together with display media and the internet. I hope I am still alive to witness it.

38. Love the article! However, on the contrary, none of the high(er) megapixels make sense if the workflow stops at a screen display. You and I know how much effort it takes just to make a small Ultraprint, a concept which demands the best in technical, compositional and artistic articulation. On the other hand, a technically perfect picture may not necessarily touch the heart.

Coming back to brand owners and manufacturers, most are struggling to make their numbers and push boxes without much seeding and education. Marketing is misleading consumers into ‘more is surely better…less is inferior’. This is sad. It’s becoming less about ‘photography’ and more about ‘equipment’ now, as evidenced by the current trend of products biased towards aesthetic appearance and lifestyle. The drawback is, people ‘feel good’ owning such cameras, and are not so proud of the images they make with them.

    • I think you’ll find few people are proud of their images on either a technical or an aesthetic level – but education doesn’t make camera manufacturers money (at least not in the immediately visible term for the current CEO) so none of them choose to do any serious investing into it. Surely one day somebody will realize that it’s much easier to sell a specialized, high end (read: expensive and high margin) product to an educated consumer than an ignorant one. Sadly however many of those companies won’t last long enough to get there.

• Brilliant post as always. You are 100% right about education being the enabler, although in nearly any “rabbit hole” there is an inflection point where education initially drives spending as you learn about the craft, then another point after which sales go way down as you gain contentment, avoid the marketing gimmick items and learn that skill trumps gear. I know I have spent way more on cameras as/because I’ve learned more (thanks Ming) – gathering and experimenting with primes, for instance – than I ever would have being stuck in P&S “I need a bigger zoom range” mode. The industry has been able to sell “a new camera will make your pictures better” for 100 years, and the dumb consumer was the best market for this. You are spot on to address education as the next frontier, with possibly apps/UI taking precedence over hardware arms races.

        • Haha, thank you. Spot on with the inflection point – there’s yet another turn down the end of the road where if you choose to push the limits it gets very, very expensive again – and that’s where I find myself now.

  39. Another timely and excellent article, Ming. I have been wondering about Sony’s new approach with the soon to be released A7s. A 12 MP FF with possibly as good or better low-light capability as that found on the D4s.

    • Even then – I’m still wondering what’s next after that; how much more do we need? Low light capability is now at the point where our eyes stop working before our cameras do.

• Yeah, I agree. As you point out, the usability of the camera needs to be supported by the manufacturer, and Sony hasn’t been good at this. I wonder if this is an area where newcomers like Samsung will develop?

  40. Wonderful article Ming! Imagine a camera that came with the Making Outstanding Images Series. That person would take amazing pictures….

  41. Wow! Such an incredible article, Ming! Camera manufacturers speak in the language of technology not user experience. And if we were to evaluate each customer, not for their immediate purchase but for their lifetime value, we’d educate them, befriend them, socialize them and bring them into the fold. This is why Apple succeeds in general. They aren’t plugging numbers (which have often been exceeded by the competition) but focusing on the user experience and the quality of the user’s output. Now, if we could just put Apple and Leica together, we’d be in great shape!

    • Thanks Roger. I think Leica tried to do it with the T, but the tech part has to be transparent and just ‘work’ – and they’re not quite there yet.

  42. Henry Zacharias says:

Well, there is no “next” for me so far; my perfect camera has been built by Nikon and it’s the D4. It fits into my hand, every finger falls into its place, and resolution, DR, sensor size and speed are by all means more than I need. Last but not least: the lenses offered with the right bayonet do exactly what I want. I wouldn’t mind less weight, but the size is right, and the weight follows optical laws, at least for the lenses. Since I am not a lightweight myself, it’s not a big deal to carry 4 or 5 premium primes through a day of walking.
So yes, I am all set within the digital age of photography. A Df with better support for focusing manual lenses, especially my Zeiss ones, would be an option (hybrid finder?) but besides this: don’t worry Nikon, I’ll stay with you until the end, as it seems!

    • Well, you’d have the same thing with a better focusing screen for the D4 – or any other SLR for that matter – it seems that all of the manufacturers have regressed over the years since autofocus.

      • Henry Zacharias says:

That’s true! I would appreciate something like this more in the Df than in the D4, since I’d like to have a specific camera to use for manual focusing, which for me is always also a different style of taking pictures in general – especially compared to the D4, which is first of all a speed monster that I use mainly for sports and documentary.

        • I think ALL of their cameras could benefit from it – after all, if you’re going to engineer it once – why not make the market as large as possible to get the best return on investment?

43. Well balanced points, and you are absolutely right about the marketing folk driving the technology!

Trackbacks

  1. […] Ming Thein is an absolutely fantastic photography blogger. His command of technical details and technique, and his uncanny ability to simplify it all in a comprehensible way, blow my mind every time I read his posts. This latest commentary on camera technology that gives us G.A.S. while we struggle with why our pictures don’t seem to get any better is an excellent reminder of why the “best” isn’t always better. Read his outstanding article about Beyond the numbers: what’s next? […]