Brave new world: the surprising iPhone 11 Pro

IMG_4268 copy

Field dispatch, Berlin, December 2019: I normally don’t write things on the road, both because I prefer to see where I’m going and because I find observations on anything need some sitting time; think of it as a curation of thoughts. But I’ve been slapped upside the head a little bit on this trip. Firstly, it isn’t a photographic one – it’s a spend-time-with-the-family one; even so, I’ve been paring down gear more and more of late, to the point that a Nikon Z7 and two lenses is about the most I’ll do. In this case, the 24-70/4 S and the 85/1.8 S. Both are excellent, but I find myself hardly using the camera at all – and when I do, the 24-70 is left feeling lonely. Why? Well, I picked up the iPhone 11 Pro shortly before I left.

IMG_4130 copy

Regular readers will know that I am not easily impressed, especially given how much hardware I’ve had the privilege of using over the years, and the number of times said hardware has done unimpressive things in the field (or thrown unexpected results, gotten in the way, or worse, failed entirely). As good as phone cameras have become in the last few years, the smartphone has always been a communications device – and later, a means to manage business when not at one’s computer. It is increasingly a portable field computer more than a means to talk to somebody. My previous phone, the iPhone XS Max, had an impressive and genuinely useful primary (26mm-e) camera, and a useful-in-good-light tele (56mm-e) camera. Both stabilised, both small-sensor, with minimal computational photography.

IMG_4370 2

The iPhone 11 Pro, on the other hand, has a rather sloppy-looking design and placement of the camera modules (especially for Apple), with three of them now in a triangular pattern*: a 13mm-e, non-stabilised, very-small-sensor unit with fixed focus; a faster-lensed 26mm-e unit with the largest sensor, stabilisation and the full computational feature set (stitching, stacking/oversampling for noise reduction and detail retention etc.); and a 56mm-e with a slightly smaller sensor but otherwise the same feature set as the 26mm-e module. So far, nothing new or special; multi-camera and multi-shot computational abilities have been seen before in other phones.

*I suspect this might have something to do with the array required for spatial calculations/ depth of field effects to maximise the information collected by each camera module.

IMG_4210 copy

What is different – and surprising – is the implementation. Usually, there’s an abrupt switch or jump in quality between computational (read: stacked) and non-computational (single shot) modes. You usually have to manually deploy the computational modes (“night mode”) and there are limitations in the way they must be used. The iPhone 11 Pro is the first phone – nay, camera – that’s done this effectively seamlessly: when light levels fall below what the camera deems acceptable for a single shot, it starts stacking. You literally just compose and press the button; it’s possible to adjust focus point or exposure, but most of the time the camera gets both of those spot on. I suspect this is less a consequence of on-sensor PDAF (which it has) than of subject recognition algorithms. Furthermore, with long exposures – an effective 10s is the longest I’ve seen, but most of the time 3s is deemed sufficient – all of this is done handheld, with no apparent shake but very nice motion blur of subjects that should be moving (e.g. cars or people transiting a scene). All in all, it feels as liberating as moving from the APS-C CCD D200 to the FF CMOS D3 back in the day.

IMG_4345 copy

I believe it works as follows: the camera shoots a lot of frames during that “total exposure time”, then aggregates them both for noise reduction and to separate out motion that shouldn’t be there (i.e. camera shake) from motion that should (moving subjects). There is almost certainly also data taken from the phone’s accelerometers to determine what in the scene is intentional motion blur, and what’s caused by phone motion. Doing this requires both advanced pattern recognition of a) blur, b) subjects and c) sensor noise, and a huge amount of computational power: there’s only the slightest appreciable lag between releasing the shutter/completing the exposure and seeing a processed result. It may well be doing the calculations in real time.
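A minimal sketch of the noise-reduction half of this idea – pure frame averaging, ignoring the alignment, accelerometer data and motion masking the phone also does:

```python
import numpy as np

def stack_frames(frames):
    """Average a burst of aligned frames.

    Random sensor noise falls by roughly sqrt(N) for N frames, while
    static scene detail is preserved -- the core of multi-frame noise
    reduction. (A real pipeline also aligns frames and masks moving
    subjects; this sketch assumes perfect alignment.)
    """
    return np.mean(np.stack(frames, axis=0), axis=0)

# Simulate a 16-frame burst: the same flat scene plus independent
# read noise in each frame.
rng = np.random.default_rng(0)
scene = np.full((64, 64), 100.0)
frames = [scene + rng.normal(0, 10, scene.shape) for _ in range(16)]

single_noise = float(np.std(frames[0] - scene))              # ~10
stacked_noise = float(np.std(stack_frames(frames) - scene))  # ~10/sqrt(16) = 2.5
```

Sixteen frames buys roughly two stops of noise improvement, which is consistent with a small sensor “behaving larger” when it can stack.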

IMG_4176 copy

The iPhone 11 Pro seems to be doing most of this during daylight images, too: it isn’t clear how many images the camera actually shoots to make a single one, given that HDR output in daylight requires multiple exposures as well. The upshot is that it’s very difficult to clip any highlights, and exposure adjustment isn’t required anywhere near as often as on previous cameraphones. Editing post-capture in the new editor – which has controls a lot more like a conventional photo editor and, finally, separates the temperature and tint axes of white balance – shows a pliancy and nonlinearity to the files that feels like they came from a much larger sensor. I suspect they’d even print pretty well up to reasonable sizes; perhaps not as large as a FF 12MP DSLR, but no worse than M4/3. And you can always get significantly more resolution by doing a pano sweep.

Pixel-level results aren’t the mush you’d expect, either: there’s decent bite even at very low light levels, though they’re not going to match a large sensor. The real question is, how large does ‘large’ have to be to be better?

IMG_4273 copy

My guess is that, depending on the camera module (the three have differently-sized sensors), we’re talking between 2/3″-1″ for the ultrawide, 1″-M4/3 for the tele, and M4/3 to APS-C for the wide. I realise this is a controversial opinion, but bear with me. The largest sensor is a 1/2.55″ type, which is about 7x5mm, or 35mm². APS-C is 24x16mm, or 384mm². You’d only have to stack 11 images (best case) to provide equivalent performance; and remember, there are some advantages to a perfectly matched lens system, too – which I’m sure the iPhone comes quite close to. I’m also pretty sure that during long exposures, the iPhone must be taking more than 11 images – no way each frame during a 3s stack is that long, given the limitations of stabilisers and hand holding. Read out the sensor fast enough, throw enough computing power and coding cleverness at subject recognition and frame combination, and I think it’s easy to see how we’ve effectively got a larger sensor under basically every situation. It’s this, and the seamlessness of the integration, that’s really impressive.
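The total-light arithmetic is easy to reproduce – best case, treating stacking purely as gathering more light, and using the approximate sensor dimensions above:

```python
# Back-of-envelope equivalence: how many frames would the phone's main
# module need to stack to gather as much total light as one APS-C
# exposure? (Idealised: ignores per-frame read noise and alignment
# losses. The dimensions are the article's approximations; a 1/2.55"
# type is in reality closer to 6.2 x 4.6mm, which would push the
# count to ~13-14 frames.)
phone_mm2 = 7 * 5      # main module sensor area, ~35mm^2
apsc_mm2 = 24 * 16     # APS-C sensor area, 384mm^2

frames_needed = apsc_mm2 / phone_mm2   # ~11 frames, best case
```

At 3s of total exposure, 11 frames would mean ~270ms per frame – far too long to handhold shake-free on a tiny stabiliser, which is why the phone almost certainly shoots many more, shorter frames.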

IMG_4142 copy

It’s impressive to the point that I find myself using either the phone or the Z7, and not really anything in between – because the shooting envelope of anything else is worse. A conventional camera is a very linear device: if light levels fall, you need more sensor area, more aperture or more exposure; it can’t read out at 240fps (or higher) and then apply the computational power to chunk through all of that data. Remember, the new iPhone processors have been benchmarked at almost laptop grade; this is in line with my experience of forcing LR mobile to work with the Z7’s raw files, and the relative speeds compared to my 2018 MacBook Pro. (It’ll also handle a 100MP Hasselblad file just fine.) If it’s doing nothing else but parsing a 12MP JPEG or HEIF file, there’s definitely a lot of power to spare.

IMG_4324 copy

But, as with everything – it isn’t all perfect. There are still some annoying limitations, a few of which are to do with software; some with hardware. The first of them, for me, is still the lack of a two-stage shutter button – as fast as AF is, and as nice to use as tap-to-focus is, the finger dance required to engage AE/AF lock (if you’re waiting for a subject) is annoying. Just hitting the shutter and trusting the camera is not as annoying as it used to be, as metering is better at not getting thrown by small, bright subjects (probably pattern recognition again) and AF is fast enough to be effectively instantaneous. The expanded viewing area outside the current camera module (visible in wide and tele modes) is really distracting – and confusing if any of your other cameras have an information overlay you’ve learned to see through, because here what lies beyond the overlays is not actually in the frame.

IMG_4053 copy

The ultrawide camera lacks stabilisation and AF, which means focus is optimised for one (middle-ish) distance and hyperfocal; it isn’t as sharp as the other two cameras, nor can you do long exposures at decent quality (which is one of the more interesting uses of an ultrawide). Finally, the implementation of ‘zoom’ handoff between the cameras remains poor: this is one of the few situations in which you can see quality noticeably drop. I suspect it’s because an intermediate zoom level crops from the next longest lens rather than trying to combine information from the wider and the longer one. I wish the tele was an 85mm-e, or there was perhaps an additional 120mm-e lens, but there are undoubtedly significant physical constraints there, and it’d need a sensor at least of the quality of the 56mm-e module to be useful. Lastly – and this criticism can be levelled at any new or high technology – it’s really expensive as a camera; but not so much if you consider you also get a phone and computer thrown in, too. Even then, remember the old adage about technology: small, cheap, good – choose any two.

IMG_3953 copy

What I find really interesting, though: there’s no way phone camera technology is going to get any worse. Not only is the compact dead, but things higher up the food chain have their days numbered too. I’d go so far as to argue that the shooting envelope of the iPhone 11 Pro is greater than that of the XF10 or GR3, even if peak IQ isn’t as high under ideal conditions. Remember: interesting stuff doesn’t tend to happen under ideal conditions most of the time…and we’re again back to the best camera being the one you always have with you. We now have not just a truly pocketable visual scrapbook, but one transparent enough that we no longer have to make excuses for it. And that, I think, deserves our support. MT

Images shot with an iPhone 11 Pro, mostly out-of-camera JPG with watermarking in PS. Any adjustments were made using the built-in editor.

__________________

Prints from this series are available on request here

__________________

Visit the Teaching Store to up your photographic game – including workshop videos, and the individual Email School of Photography. You can also support the site by purchasing from B&H and Amazon – thanks!

We are also on Facebook and there is a curated reader Flickr pool.

Images and content copyright Ming Thein | mingthein.com 2012 onwards unless otherwise stated. All rights reserved

Comments

  1. I got an iPhone 11 Pro Max just before Christmas but have only done a few test shots. Age-related physical limitations have forced me toward lighter gear, so phone photography is a welcome development to be explored. Your post has piqued my interest.

    Rather than use the unfamiliar new iPhone 11, I shot family Christmas pictures with an Rx100.5 and Rx100.6 because I know the cameras well, have fill flash dialed in, and have plenty of experience in post processing the files. Even though I have full frame gear, I have come to appreciate how much I can get out of the Rx100’s. I look forward to exploring how to best employ the Rx100’s vs the iPhone 11.

  2. Shoibal Datta says:

    It certainly feels like phone cameras have reached the “point of sufficiency” for a more discerning audience. I’m curious about the EXIF in some of the shots – am I reading this wrong, or did the flash actually fire on the Christmas market, chair and department store shots? I would not have expected it to fire. Also, the folks over at Halide have done a good job of describing the 11’s imaging pipeline at https://blog.halide.cam/inside-the-iphone-11-camera-part-1-a-completely-new-camera-28ea5d091071.

    • It definitely didn’t fire – perhaps some properties are tagged incorrectly (fusion vs long exposure etc)

      Addendum: that link is an exceptionally good read. Thanks!

  3. Taildraggin says:

    Not only ‘below FF’ is starting to be challenged but below 50mm use is. If it’s wide, I’m thinking, “Just use the phone?”
    On the other hand, they’re as easy to handle as a live eel, esp. w/o a case.

  4. I shoot a lot of real estate (main camera/lens 750/14-24; workflow = bracketing/Nik HDR Pro FX/PS CC and various speedlight techniques) … have recently been providing more video … bought the 11 Pro/DJI gimbal (NOT the Z6) … because I needed a new phone AND the video is just great. Video editing is in Final Cut. I figure by the iPhone 13 I won’t need a traditional camera at all … Surprisingly, I’m not too worried (yet) about job security. “The eye” is a hard thing to replace.

  5. For my usage, my digital cameras (both FF and APS-C) are gathering dust now, since the iPhone has become so good at grab shots. Even if I go on a dedicated photo trip now, I prefer just a 35mm film camera and the iPhone. The film camera fulfills my need to “engage in the photographic process” and the iPhone fills my need for digital (mainly instant social media, grab shots etc.). My digital cameras are reserved for technical shots like macro and tele.

    Ironically, the iPhone has pushed me towards film by making my digital camera redundant. 😀 It is indeed a “Brave New World!” Thanks for pointing this out. 😀

    I am wondering whether others have gone down a similar path.

    • I suspect the others are either on Instagram and not as ‘serious’ about the photography side online, or serious/ confident enough to work in their own little bubble…

  6. Really interesting article, thanks for writing that. Any idea if the normal iPhone 11 has similar computational features? I know the wider 2 cameras are similar (or the same?), but can it do the same clever stuff in processing?

    • I believe it does, but the Apple technical documentation is more marketing related in scope and less technical, so it’s difficult to be sure

  7. It will be a very long time before I buy another dedicated camera (if ever again). I’ve kept a full frame DSLR with a couple of prime lenses around for occasional portraiture. Otherwise, I love the 11 Pro for general purpose photography and use it every day. I will likely trade in / upgrade the iPhone annually now.

  8. Amazing, Ming, as usual.
    A question: how well does it handle motion? Do you find the 11 Pro to be a practical solution for photography that involves considerable motion in the main subject, like kids and pets? As you mentioned, even in bright conditions the iPhone relies on multi-image aggregation over a few moments to enhance tonality and detail. Is computational technology there yet to simulate frozen motion without weird artefacts throughout the frame?

    • I haven’t really noticed motion blur artefacts, but instead the kind of trails I’d expect for moving objects in long exposures (e.g. car lights). Admittedly I have not tried it on hyperactive children (this has never been what I shoot) and I don’t have pets…though given the majority of Apple’s instagram-friendly market, I suspect they probably took care of this through some sort of subject recognition…

  9. Very interesting write up. I have two questions:
    1) Roughly what format would you equate the quality the 11Pro is giving you to? For reference I shoot with the Olympus EM1.2 + Pro glass. Of course it may not be linear like that because of the complexity involved.

    2) What’s the handling like? At a certain point this becomes even more important than outright IQ because it helps me distinguish my work from the average consumer with a smartphone. I can precisely focus on a certain part of a scene quickly and catch split second moments with the EM1.2 +45 1.2 Pro. I can see myself fighting with the 11Pro to focus where I want and dropping the phone when someone jostles me.

    • Depends on the camera module, as they are different sizes and have different computational abilities. The ultrawide is probably about 1/1.7” at best, and definitely behaves like a compact. The main 26mm-e is as good as APSC most of the time, perhaps a bit better in low light due to the stacking. The tele is closer to M4/3, but can fall on either side depending on whether the subject is moving or not (and thus if stacking works).

      Handling: it’s less unwieldy than the larger Plus models, and focus is intelligent enough and dynamic range wide enough that you generally don’t need to do anything other than compose and hit the button. I would not use it naked, and will probably add the silicone battery case, both for grippiness and the dedicated camera button.

      • Matt Leeg says:

        Thanks so much, Ming, for a very helpful review. One follow-up question on this:
        When you say that the main (26mm EFL) lens/sensor combination produces image quality comparable to APS-C, is that irrespective of overall resolution, i.e. the iPhone’s 12Mp files vs. the 24-26Mp files of the current generation of Sony and Canon sensors, or not?
        Of course 12Mp is more than sufficient for many uses, esp. on the web, but I do find the higher resolution of those true APS-C sensors, and even the current 20Mp Sony 1-inch sensor, to be very useful for cropping, and often a good substitute for having a longer lens w/ me. Just wondering if your Equivalency comparison is at the pixel-level, or more holistic than that?
        …and I’m guessing you’re not even accounting for the iPhone’s Super-Resolution mode, which I think outputs at something like 20 megapixels… but of course some of the newer APS-C cameras also have these modes, though admittedly not as sophisticated.
        Anyway thanks again, and I value your thoughts!

        • I’m looking at an overall image at typical viewing sizes on a screen – or perhaps a small print. Pixel level is definitely better on APS-C, but not by as much as you’d think. Not sure what super-resolution mode you’re referring to on the iPhone; it seems to stack for NR only, not explicitly outputting higher resolutions (though of course stacking also improves pixel level output at native sensor resolution).

          • Matt Leeg says:

            Thanks Ming, I understand.
            And as far as the super-resolution mode I was referring to, I was thinking of what Apple called Deep Fusion when they released it in the iOS 13.2 update around the end of October, working with the iPhone 11 models only. I looked back at the initial reports I’d read on this mode, and they talked about it outputting 24Mp files, but that turned out not to be the case, and I think what happened was that several reporters misunderstood an (admittedly rather confusing) marketing-speak line from Apple’s keynote presentation on these new phones when they 1st came out, talking about the Deep Fusion feature that would be added a few weeks later with the iOS 13.2 Update.

            I went back and read subsequent reports published after that feature went live, and it looks like the Deep Fusion mode does improve resolution when it comes on automatically, but not by increasing output size. Instead it improves pixel-level resolution at the native sensor resolution, by using stacking techniques to improve high-frequency detail in some shots, such as those with hair or fabric.

            I believe it works in conjunction with the ”Night-Mode” stacking feature which I think you mentioned, which uses similar techniques to reduce noise when light levels get low enough. I am curious to know when one feature comes on vs. the other, and where the break-point is exactly, but I think it’s supposed to do this “seamlessly”, or whatever, and not tell you which feature(s) are active in a fully processed shot.

            In any event these techniques appear to be quite effective, and for shots w/ the iPhone’s main wide angle camera especially, this appears to be a setup that could replace something like a Ricoh GR I or II (which I have), and perhaps even a GR III when output sizes are moderate, which is, let’s face it, most of the time for most of us.
            Anyway thanks again, and I look forward to seeing just how far you can go with this… my guess is it will be pretty far. 🙂

            • Ah okay – that makes more sense. I haven’t seen any way to output 24MP files. There’s a HEIC mode but that’s about it.

              What Deep Fusion appears to be doing is compensating for NR losses in detail by using multiple exposures, but it doesn’t do this with normal daylight images. It does, however, appear to automatically shoot a bunch, pick the one with the least motion blur/highest detail, and output only a single image – I’ve not seen any camera shake in my images, which is not something I could say of the XS Max, even though it’s similarly stabilised.

              Personally – the 11 Pro’s wide camera has definitely replaced pretty much anything else as my reportage/social camera of choice, not just for size and convenience, but ‘more than sufficient’ output, too.

  10. This makes me wish there was a good 1.63x magnification lens accessory. You’d get around 85mm-e on the tele, or 42mm-e on the wide (close enough to 40, I reckon). Those are my two main focal lengths, and my camera would probably bite the dust if such a thing existed.

  11. Smartphones are disposable devices. They get handled in a way that makes it very likely they’ll get dropped and break physically. They also have a pre-defined end of life because of their software dependence. They are made for the trash bin the day they are manufactured, whereas it’s common for digital cameras to live very long lives. Even now, cameras over 10 years old still get used.
    This all makes it hard for me to understand how the SP can be seen as the next solution in the photography world. You have to spend more than 1000 dollars for this iPhone, and it’s made to be trash within a few years.
    Personally, I prefer to spend only around 200 dollars on such a device.

    • They’re disposable because of the way they’re used and treated. If the average camera was subjected to the same abuse, it wouldn’t last 10 years either. I go through at least one camera a year through wear only; at times in the past, more. But I use my cameras about as much as the average teenage girl uses her smartphone…

        • I know, but that wasn’t my point. It’s clear that the camera would not survive this treatment. My criticism is that SPs are made to be treated like that, and for that, the price is too high. In my opinion.

        • It’s definitely high. But given it can replace your laptop and your camera, and costs less than either…I think we can always vote with our wallets and enjoy the economies of scale…

  12. Kristian Wannebo says:

    Kirk Tuck discusses the advantages of the iPhone 11 Pro – and where high end cameras take over…
    … and comes to similar conclusions.

    https://visualsciencelab.blogspot.com/2019/12/which-lumix-s-camera-is-better-one-s1.html?m=1

  13. Hi Ming, good timing, as I am considering a similar option for an “image capture” device, and a smartphone just makes sense. Could you share or point me to a workflow to get images out of the phone? Currently I use Lightroom (desktop version) to import images from the camera.

    • I just use the built in editor, if at all. Usually adjustments are minor as this isn’t a serious output device that requires perfect file prep – for now…

  14. If Apple ever decided to make a dedicated camera “device” aimed at “serious” photographers, perhaps with larger sensors and the capability of using various lenses, they’d conquer the market in how many months? Is ten million times $2500 per year a large enough market for them to be concerned with? Certainly an iCamPro would be more popular than their watches and I think 2-3x the price of a top phone would be realistic.

    • Frank Petronio says:

      And yes, I also upgraded from an iPhone 7+ to the small iPhone 11 Pro and have been much impressed; you are correct in rationalizing that the next jump is a FX camera with a good lens. The panoramas alone, from an image quality standpoint, compete with or beat my old Noblex and 6×17 medium format results. Certainly beats spending as much as the phone on Really Right Stuff nodal point gadgets.

      • The panos are impressive for the live stitching ability and being able to put out 40+MP at one go. I’m not sure they’re better at a total information level than stitching a bunch of singles after the fact, but the seamlessness/ease is definitely very convenient (and better than the single ultrawide, I might add).

    • I’m not sure that Apple’s next jump is into a system camera. That may be our predilections as photographers talking. Apple’s purpose in providing cameras in its phones is to support the way that we communicate today – by sharing images (disposable, ephemeral mostly) along with our written or spoken phone conversations with each other. Given that the entire ILC market is heading rapidly towards 4M units, it’s hard for me to see how a large format system iPhone or dedicated camera system using Apple’s computational algorithms would attract back the 20M customers lost over the past 5 years. The vast majority don’t need such a thing.

      On the other hand, it could capture a large part of the much smaller dedicated photographer market – bearing in mind, however, for this crowd the tactile experience of a fine tool in the hand is a huge part of the enjoyment of the craft.

      You don’t capture a mass market by marketing a product that doesn’t fit in a pocket…

      • …you capture a mass market by making it as easy as possible for people not to know they’ve just been hooked and addicted, which is exactly what they’ve done.

    • I’m not sure about that – firstly, they sell so many watches they’re now the largest manufacturer on the planet, and have taken a huge number of the traditional suppliers out of the chain for the rest of us; secondly, they’ve managed to build a market where they sell 100 million new devices at $1k a year, every year – you’re not going to do that with a $2500 camera…I suspect they have us right where they want us.

  15. I enjoyed reading this, and great pictures as always.

    I thought the previous generation XS series used Smart HDR to stack a bunch of exposures automatically. The older phones don’t have night mode like the 11 Pro, but in daytime light I don’t really see much of a difference between photos shot with the 11 Pro compared to the XS.

    • The difference I see isn’t in dynamic range, but fine detail – my XS definitely didn’t have the same pixel level bite the 11 Pro has. I suspect the processing has definitely changed there…

      • I don’t know about the bite, but I tested 4 iPhone 11 Pros, and the pixel-level bite on my iPhone X & 8 Plus was much better. Probably some sort of sharpness lottery going on here.

        • I’m finding the opposite…the XS Max lost a bit compared to the 8 Plus on the wide camera but gained on the tele camera, and the 11 Pro gains all around – but the ultrawide is pretty useless (might as well just do a pano).

  16. Well done with the samples – and impressive results esp. on the game booth photo. That’s a real horror scenario for any camera.

  17. I’ve been mulling over the prospect of a Z 50 purchase. The lens speed has been an issue for me, but slowly I’ve been coming to the realization that sticking honking lenses on this beautiful example of a traditional camera misses its reason for being, and that it would be perfectly fine as a “last camera” purchase for my needs, which consist mostly of travel and events.

    But your article lends another perspective to the changing market – that single-sample, wide-shooting-envelope cameras have to be large format to best what a quality phonecam can often do…and not only that, but they may be technically incapable of doing what it can do.

    Based on that understanding, it’s now hard to see where even APS-C as a system choice fits in to the picture…perhaps even buying a traditional single-sample crop sensor general purpose camera at all. That to me sucks. Even though I use a smartphone for most of my images these days, too…

    • I’m thinking anything below FX is actually of limited use now, unless you need the pixel density and reach for say wildlife or sport. A Z6 (low light ability etc) or Z7 (resolution) represents the next big jump in envelope. I’ll likely pair this with either the 16-50 DX, 28 pancake+85/1.8 for low light, or 24-120/4 all in one. The lens roadmap is making more and more sense actually…

      • I still do think that there’s a need for a smaller FX body – a Z 5 to the Z 50. But then you get into resolution sufficiency issues with a 24MP FF sensor operating DX…as well as no IBIS for most of the Z lenses (I suspect a Z 5, should it appear, will be a slightly thicker Z 50 to house the IBIS). But a higher resolution sensor would be a pushme-pullyu at this level of the product ladder. Perhaps what is needed is a Z 50 with IBIS at the $850 price point, not a Z 5. That way there would be a lower cost entry into the Z system. It still won’t attract most of generation Q; they’re lost to the market. But at least it won’t cost $5K to join in, and you can always upgrade your membership.

        The major problem we have in the larger format sensors is readout speed and processing. Sony has just announced a quad-pixel quad microlens sensor that enables X-Y PDAF on a per-pixel basis without Canon-DPAF sensitivity losses. This and the enabling block data readout architecture will get us at the door of computational imaging for large formats; then it’s a matter of the necessary processing power and algorithms. As much as Nikon needs to rethink its tracking UI (what’s wrong with just tracking and autofocusing simultaneously, like with 3D Tracking? And why all the OK button presses? And what’s good about continuing to track when you stop autofocusing?), physical changes to sensor structure will also be important drivers to ease of use during the capture phase. And Sony will of course introduce them on its cameras first.

        • If there’s single pixel readout for large sensors, we may not even need the computational imaging – just relative gain scaling would be enough to get to the point we never clip; a double exposure or binning would solve noise issues and color accuracy. It doesn’t need the same heavyweight processing a tiny sensor does simply because we have much more information to start with, and the same threshold of perceived sufficiency for both formats.

          • That’s true, a bit lighter on the CP the larger the format. But things like really smart exposure stacking would enable slower lighter smaller lenses to perform, at least for quasistatic subjects from a total light perspective, similar to much larger and more expensive faster lenses. With the body size differences between formats being a thing of the past, the convenience factor of a system camera will be a function of its lenses – and more convenience is a draw, although not an addicting one like smartphones are.

            • Even here there’s a disconnect. Manufacturers seem to be making all sorts of halo lenses because the enthusiasts and consumers think they want them, but in reality have limited appetite to carry and funds to purchase. The lighter zooms are seen as the poorer cousin even though they’re genuinely more useful and probably have higher margins and revenues. Yet the internet decries ‘not another kit lens’… failing to understand that a good mid speed zoom is much more useful than another f0.95 monster.

  18. I think this rapid series of developments in the iPhone will lead to something akin to freedom for some of us. Freedom from complexity. I’ll wait out another iteration or two before upgrading. That will give Apple time to get even more right. And give me time to wrestle with the intricacies of my new M4/3 camera. Yes, as the last person to buy that sensor/format I can confirm it: some people are just irrationally stubborn.

    • It’s certainly feeling quite liberating, though I’m starting to see some complexities creep back in at the edges – there are now settings menus in the capture screen that weren’t there before, and honestly, being able to see beyond your composition using the other cameras is just annoying: not only do the edges frequently not line up, the preview isn’t differentiated enough not to be distracting, and it’s easy to mistake your frame edge for being further out than it actually is.

  19. I wonder how many of the software features could have been ported over for us XS users, but Apple being Apple, they want to lure people into upgrading, so they don’t do it. A shame, as it definitely looks like a big upgrade over the XS.

    • My guess is not all, seeing as a new sensor is required to read out fast enough to do the stacking/fusion/computational stuff; it would probably work but not as seamlessly.

  20. Super excited to read this and see your images. I’m going to stick with my old iPhone 6s until it dies, because zomg the cost of a new iPhone. But when it *does* die, I know I can step right into using my iPhone as a genuine tool for quality work. If only they could figure out how to make it easier to hold while using it as a camera.

    • Actually, it seems the new battery case has a camera button on it – both to activate and shoot, though it’s single-stage. I find there are only two ‘economic’ ways to work with iPhones: use it till it dies, or trade up every cycle whilst the old one still retains maximal value.

  21. Kristian Wannebo says:

    How long a lens is practically possible in a phone?

    For 12 Mpx and an f-number no greater than 2x the pixel pitch in microns (to keep diffraction in check), the aperture diameter needs to be at least 1/17 of the FF-equivalent focal length.

    E.g. a 5mm wide aperture for an 85mm-Eq. lens.
    And for higher resolutions this limit gets proportionally tighter.

    • Good question. If there’s folded optics involved, perhaps 100mm or so – and even then you’d have diffraction (already visible on the 56mm module).

      • Kristian Wannebo says:

        GSMarena says:
        ” 12 MP, f/2.0, 52mm (telephoto), 1/3.4″, 1.0µm ”
        So, wide open the f-value is 2x the pixel pitch in microns.

        Many say 1.5x is a better anti-diffraction limit – as shown by your observation of diffraction with this tele camera.

        So then a 12 Mpx 100mm-eq. phone camera would need an aperture diameter of at least 8mm to be reasonably free of diffraction – which needs a thicker phone even with folded optics.

        How marketable are thicker phones – except for photo enthusiasts?
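The arithmetic in this thread can be checked with a short sketch (assuming a 4:3, 12 MP sensor and diagonal-based full-frame equivalence; the function name and defaults are mine):

```python
# Minimum physical aperture diameter for a phone tele module, given the
# rule of thumb: f-number <= pitch_factor x pixel pitch (in microns).
# Assumes a 4:3 12 MP sensor and diagonal-based FF equivalence.
import math

FF_DIAGONAL_MM = math.hypot(36, 24)  # ~43.27 mm full-frame diagonal

def min_aperture_diameter_mm(ff_eq_focal_mm, pitch_factor=2.0,
                             mp=12e6, aspect=(4, 3)):
    """Smallest aperture diameter (mm) satisfying the diffraction rule.

    The pixel pitch p cancels out: actual focal length scales with p
    (f = ff_eq * diag_px * p / FF_diag) while the allowed f-number is
    pitch_factor * p, so D = f / N is independent of p.
    """
    w, h = aspect
    h_px = math.sqrt(mp * h / w)      # pixels on the short side
    w_px = h_px * w / h               # pixels on the long side
    diag_px = math.hypot(w_px, h_px)  # sensor diagonal in pixels
    return ff_eq_focal_mm * diag_px * 1e-3 / (pitch_factor * FF_DIAGONAL_MM)

print(min_aperture_diameter_mm(85))        # ~4.9 mm: the "5 mm for 85mm-eq" figure
print(min_aperture_diameter_mm(100, 1.5))  # ~7.7 mm: the "~8 mm for 100mm-eq" figure
```

With the 2× rule the ratio works out to about 1/17.3 of the FF-equivalent focal length, matching the figure quoted above; tightening to 1.5× gives the ~8 mm requirement for a 100mm-eq. module.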

        • My guess is not very marketable, but then again I doubt many will notice the diffraction, either. And there are ways around this with stacking and oversampling – so maybe it’s not an issue at all. As for thickness – I have to admit I’m contemplating the battery pack case with shutter button and extended capacity since I’m using it that much…

          • Kristian Wannebo says:

            I believe you are right.

            A pity I so often want a longer lens.

            So my thoughts at the moment go towards the Canon G5 X II (24-120mm-eq. + EVF), which is no thicker than my dead Fuji XF1 with its folding screen loupe attached.

            The reports on the lens differ: some say it’s better than the previous lens (24-100mm-eq.), some say it’s not up to it.

            ( Nikon is rumored to have a compact in the pipeline and Panasonic will have to update the LX10/15…)

            • I suspect the lens is highly susceptible to sample variation – being both a relatively high ratio zoom, fast aperture AND collapsible does the engineering requirements no favours…

              • Kristian Wannebo says:

                Ah, yes, I suspected that.

                And – not being experienced enough – I don’t feel like trying out a number…
                If the worse reports aren’t toooo bad I might be too tempted, though – but I think I’ll first see what (if anything) Nikon comes up with.

  22. I was very pleasantly surprised to read this. As the only photography blogger (if you don’t find that term objectionable!) whose opinion I take into serious consideration when looking at a purchase, I had hoped (but not expected) to read your take on the 11 pro. Quite a ringing endorsement it is, too.

    I’ve been using a 7 plus for several years and it still does the job well, so I’ve never felt the need to upgrade. However, from what I’ve read up to this point, the 11 pro would be a massive upgrade in basically every conceivable way, including the camera (which would be the main reason I got it). Trouble is – and we’re back in “it’s not the camera, it’s the photographer” territory – the only review of the 11 pro I could find which featured any decent pictures (until this one) was by a chap called Austin Mann who did some good stuff with it. Everything else (and I mean everything else) was absolute garbage. Night mode shots of an empty street, wide angle shots with terrible, uncorrected distortion, pictures taken in the dullest light imaginable, you name it.

    So while I fully understand that these shots are a result of your eye and experience, it’s still a relief to see someone getting some good stuff from the 11 pro, and you’ve given me some very good reasons to seriously consider the upgrade.

    • It’s already a massive upgrade on the XS, which I myself was skeptical of – but a photographer friend convinced me. It isn’t so much the ability to shoot in no light as the ability to capture fine nuance in very low light and see as your eyes do, without the added hardware…

  23. “Pixel level results aren’t the mush you’d expect, either: there’s decent bite even at very low light levels”,

    That’s interesting Ming. Because Lloyd Chambers has been complaining: ‘most of the panos I took have severely mutilated pixels, with heavy noise with heavy-handed compression, posterization and pixellation’ (https://diglloyd.com/blog/2019/20191208_2150-EurekaDunes.html).

    Not sure what to make of this. Maybe Lloyd is being hyper… it’s been known to happen.

    • Lloyd’s expectations are sort of an absolute standard to which he holds everything, be it phone or medium format. I temper mine somewhat depending on the hardware and shooting conditions; bear in mind though that the pano functions do not have the same oversampling/stacking as normally ‘fused’ images. This may also have something to do with his analysis…

  24. giuseppe pagnoni says:

    Impressive. However, I feel that the noise reduction/sharpening settings are still too heavy-handed and/or optimized for viewing the images on the phone screen. See the texture of the suit of the man at the museum with the painting, viewed at the largest magnification in Flickr: plasticky and wormy. Perhaps you won’t see it at the usual print sizes; however, it would be much better if you could turn off noise reduction and sharpening or, alternatively, save a raw file. I am afraid that wouldn’t jibe with the massive computational work performed on the raw data, though…

    • I agree, but remember the whole concept of sufficiency is also tied to output medium: 47 perfect MP wouldn’t be sufficient for a wall sized ultraprint, but it’d be overkill for social media. I have printed from earlier phones with worse IQ and gotten decent results up to 8×10” or 13×16” depending on the subject matter; still enough for most applications. And there’s still oversampling even when the images are viewed on a 4K monitor.

      The really interesting thing, though, is that there IS a raw mode accessible if you capture via another app (say LR) – but although peak image quality is higher, that advantage disappears very quickly, as expected, because the computational benefits soon outweigh control over sharpening and NR…
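To put some numbers on the sufficiency point above (assuming a typical 4032×3024 12 MP phone frame and the common ~300 PPI print-quality threshold; the helper is mine):

```python
# Effective print resolution (PPI) for a 12 MP phone image at the
# print sizes mentioned above. ~300 PPI is a common quality threshold
# at close viewing distances; larger prints are viewed from further away.
IMG_W, IMG_H = 4032, 3024  # typical 12 MP 4:3 frame (assumed)

def print_ppi(width_in, height_in, img_w=IMG_W, img_h=IMG_H):
    """Worst-axis PPI when the image fills a width_in x height_in print."""
    return min(img_w / width_in, img_h / height_in)

print(round(print_ppi(10, 8)))   # ~378 PPI: comfortably sharp at 8x10"
print(round(print_ppi(16, 13)))  # ~233 PPI: acceptable at normal viewing distance
```

Which is consistent with the 8×10″ to 13×16″ range quoted: the former sits above the 300 PPI threshold, the latter just under it.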

  25. Tim Shoebridge says:

    Very nice images. Unfortunately I’m too old school to use a smartphone effectively. No matter how good the IQ, they’ll need to make one with an EVF before I’ll ever consider ditching my LX100 or god forbid any of my mirrorless cameras.

  26. I agree with your thoughts. I tried out the iPhone 11 Pro and it changed my camera purchase decision. I had been considering a smaller sensor, but realised I would need the biggest IQ jump I could afford to make a dedicated camera worthwhile and so went for a Z 7.

    The iPhone is now capable of really good photos at times, but like you say, there are times it doesn’t quite get it right. I found portrait mode made some poor choices when masking areas to blur, but you can see the progress. Very soon it will have enough data and power to be a contender.

    Your last shot is also weaker from an IQ perspective. It looks digital and noisy, perhaps because of the colour. The others look really good though, proving your point. If you have this kind of quality in your phone, your dedicated camera needs a great sensor / glass to differentiate. For sensors now it is go big or go home.

    • Personally not using portrait mode for that reason – masking is still poor. As for the last image, it’s not noise (though light levels were very low) – it’s metallic paint…

  27. Bharat Varma says:

    Have you tried the Google Pixel phones? The IQ that they had was very visibly superior to the iPhone 10; not sure how well they compare with the 11.

    • I agree they were a big step over the 10/XS, which was very much still a single shot device. Not the case anymore with the 11. Though image quality is similar, implementation is more seamless on the 11.

  28. I am still using an iPhone 5s. I’ve been a good boy most of this year and hope Santa will remember me. lol…

  29. Nice set of images. Apple certainly has the resources and talent to push the envelope. I can only imagine if they ever wanted to design a camera what would happen to the market.

    –Ken

    • Thanks! I think they did design a camera – just not what we’d expect in the traditional sense of the word.

      • Yes, the iPhone does fit that description. I was hoping more for something with a larger sensor and interchangeable lenses, but with all of their know how and design applied accordingly.

        • I don’t think it fits Apple’s model of simplified technology for the consumer – it’s also too messy from a product-planning perspective, given how everything else is neatly tiered…

  30. My wife has the exact same phone, and what’s shocking is how consistently good it is. The quality is so good that I am fine with leaving the Z6 or E-M1.2 at home unless I’m willing to pull out the 12-100 f/4 or 10-25 f/1.7. It’s the future and we should embrace it.

    • I hit enter before I was done. To my mind, the new iPhones make things like the Pen-F with primes obsolete.

      • In a lot of ways, yes – unless you need more reach or shallower DOF (or single shot ability in lower light for subjects that don’t lend themselves to computational compositing).

        • I’m sure the DOF ‘problem’ will be fixed when Apple adds a TOF (Time of Flight) sensor to the camera in a future iPhone, which undoubtedly will have even more computing power. Your next and last camera could be one with a ‘free’ mobile computer thrown in for good measure. 😉

          • They might not even need that. If PDAF distance information can be combined with stereo information from a second camera, you’ve effectively got a TOF device already.
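The stereo-as-TOF idea above reduces to the standard disparity-to-depth relation z = f·B/d. A minimal sketch with hypothetical numbers (neither the focal length in pixels nor the baseline below are actual iPhone values):

```python
# Depth from stereo disparity: z = f * B / d.
# Hypothetical two-camera geometry - not actual iPhone calibration data.

FOCAL_PX = 3000.0    # lens focal length expressed in pixels (assumed)
BASELINE_M = 0.012   # 12 mm spacing between two camera modules (assumed)

def depth_from_disparity(disparity_px):
    """Subject distance in metres for a given pixel disparity."""
    if disparity_px <= 0:
        return float("inf")  # no measurable parallax: effectively at infinity
    return FOCAL_PX * BASELINE_M / disparity_px

for d in (36.0, 12.0, 1.0):
    print(f"disparity {d:5.1f} px -> {depth_from_disparity(d):6.2f} m")
```

Note how quickly precision falls off: a 1 px matching error at 36 m distance is the difference between 36 m and 18 m, which is why fusing this with PDAF distance data (and heavy processing) would matter.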

    • I’ve been carrying the Z7 with me for the last few days, but admittedly hardly using it unless I need longer than 56mm or the talents of the 85/1.8…

Trackbacks

  1. […] This series was shot with a Nikon Z7, mostly the 24-70/4 S and my custom SOOC JPEG profiles, with a couple of cameo appearances from an iPhone 11 Pro. […]

  2. […] series was shot with an iPhone 11 Pro and edited in-phone, except for the second image, which was shot with a Nikon Z7, a 24-70/4 S and […]

  3. […] series was shot with an iPhone 11 Pro and edited in-phone, except for the second image, which was shot with a Nikon Z7, a 24-70/4 S and […]

  4. […] series was shot with an iPhone 11 Pro and edited in-phone, except for the second image, which was shot with a Nikon Z7, a 24-70/4 S and […]

  5. […] series was shot with an iPhone 11 Pro, with processing via Photoshop Workflow […]

  6. […] S lenses, using my custom SOOC JPEG picture controls. There are also a couple of images from the iPhone 11 Pro in here, […]

  7. […] 📷 Ming Thein gives a quick review of the iPhone 11 Pro camera. Computational photography is here and it’s a game changer. Read Brave new world: the surprising iPhone 11 Pro […]
