Stabilisation is good…but only up to a point.

[Image: Screen Shot 2016-05-17 at 19.01.42]
Look at the 100% view: clear smearing at 1/60s and 32mm-e – on a 16MP camera, with stabiliser on? Does not compute. Important to figure out why, yes?

I’ve been party to several discussions of late on the merits of stabilised systems, and wanted to share my experiences here for the simple reason that I don’t think the benefits – or lack thereof – of stabilisers are quite so clear cut anymore. To clarify, because I wouldn’t be surprised if my comments were taken out of context: I think stabilisers have their place, but only up to a point. Beyond that, we either need improvements in the underlying stabiliser tech, or we need to accept that it’s not as effective as we’ve previously been used to.

Firstly, it’s important to understand how any stabilisation system works – whether in-lens or in-body. Camera motion is detected by accelerometers or gyroscopes, and the opposite motion is then computed. This is translated into a compensating movement of either a lens element (to re-aim the focused image to where the camera is now, rather than where it was at the start of the exposure) or the sensor itself, so that the sensor’s position relative to the subject stays fixed. In both cases, the aim is to keep the image projected on the sensor free of any relative motion for the duration of the exposure. These systems can usually compensate for pan in the X-Y axes, roll, and sometimes also pitch.
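The geometry of that compensating motion can be sketched numerically – for small angles, the required image shift is roughly focal length × rotation angle. This is a simplified sketch; the shake rate and update interval below are illustrative assumptions, not figures from any manufacturer:

```python
import math

def required_shift_um(focal_length_mm, angular_rate_deg_s, interval_ms):
    # Small-angle approximation: image shift ~ f * theta.
    # Returns the shift (in microns) the lens element or sensor must make
    # over one correction interval to cancel the rotation.
    theta_rad = math.radians(angular_rate_deg_s) * (interval_ms / 1000.0)
    return focal_length_mm * theta_rad * 1000.0  # mm -> microns

# Illustrative: a 0.5 deg/s shake, corrected every 1 ms, on a 100mm lens
# requires moving the compensating element about 0.87 microns per interval.
shift = required_shift_um(100, 0.5, 1.0)
```

Tiny displacements at millisecond intervals: this is why the precision and speed demands on the actuators are so severe.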

There is one fundamental problem here: the camera determines the necessary compensatory motion with some degree of delay, because it needs to establish a pattern to the motion before it can compensate for it. This has to do with the sampling frequency of the system, and explains why stabilisers are less effective – and can actually be destructive to image quality – above certain shutter speeds. The camera is simply not sampling fast enough, and makes false corrections because it doesn’t have enough information to determine in which direction the moving parts should be compensated. The slower the shutter speed, the more effective stabilisers are at compensating motion compared to unstabilised handheld shooting, simply because there’s more information to work with – down to the limits of displacement of the system. If you move too much, there’s simply no travel left. If you move too fast, we’ve got the sampling frequency problem again.

There’s one more limitation inherent in the nature of stabilisation: there’s a limit to the precision with which the compensating elements can be moved; the heavier the parts being moved, the more difficult it is to move them precisely in small increments. This is one of the reasons many people have found sensor-based systems on large (FF) sensors aren’t as effective as, say, M4/3. Compounding this is the increase in resolution on these systems – to maintain perfect pixel acuity, we have to maintain the relative position of the image on the sensor over three axes to within half a pixel of displacement in every direction (more than half a pixel and we’ll see blur) – that’s a tolerance of under two microns, in some cases.
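That half-pixel tolerance is easy to put into numbers – pixel pitch is just sensor width divided by horizontal pixel count. A quick sketch, using nominal sensor dimensions:

```python
def half_pixel_tolerance_um(sensor_width_mm, horizontal_pixels):
    # Half the pixel pitch, in microns: the maximum image displacement
    # before blur becomes visible at the pixel level.
    pitch_um = sensor_width_mm * 1000.0 / horizontal_pixels
    return pitch_um / 2.0

# 20MP M4/3 (~17.3mm wide, ~5184px): roughly 1.7 microns of latitude
m43 = half_pixel_tolerance_um(17.3, 5184)

# 42MP full frame (~36mm wide, ~7952px): roughly 2.3 microns
ff = half_pixel_tolerance_um(36.0, 7952)
```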

Given that many cameras are simply not built to this level of precision – mounts, element alignment, sensor alignment etc. can vary by more than this quite easily – this is not a simple thing to achieve from an engineering perspective. In practice, what this means is that there are cases in which stabilisation isn’t as useful as you might think, or not useful at all. And they’re more extensive than you might imagine – and the envelope of effectiveness diminishes as resolution increases, or as the physical size of the lenses or sensor increases (the moving mass problem again). Remember that even if everything is locked down and turned off, there is no way we can be sure that it’s locked down in perfect alignment – especially when it comes to lenses and multiple elements.

Basically, stabilisation works best when:

  • Resolution is low
  • Mass of the moving sensor or lens is low
  • Shutter speeds are moderate – well below the sampling frequency, and above the limits of displacement of the system vs. camera shake motion; probably about 1/0.3x focal length in my experience – though that depends heavily again on resolution
  • There’s more mass to couple the rest of the system to – heavier lenses seem to work better than lighter ones

And it doesn’t work when:

  • Resolution is high
  • Mass of moving parts is high
  • Shutter speeds are high – above 1/500-1/1000s or so (you don’t need it anyway)
  • Shutter speeds are very low – below 1/30s or so for longer lenses, 1/5-1/10s for shorter ones
  • The whole system is too light
  • There are any resonances between the moving parts (usually involving magnets) and the shutter (also usually involving magnets) – note that this is NOT the same as shutter shock, because what we see tends to disappear if the stabiliser is switched off. The Nikon D810 and 300/4 PF VR combination was notorious for this, to the point that a firmware fix for the lens had to be issued to resolve it. However, DPR’s sample image in that link also clearly illustrates what happens when VR goes wrong.
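For what it’s worth, the rules of thumb above can be condensed into a rough check – the thresholds here are approximations taken from the lists, not hard limits, and will vary by camera and resolution:

```python
def stabiliser_likely_helps(shutter_s, focal_length_e_mm):
    # Rough envelope check based on the rule-of-thumb thresholds above.
    speed = 1.0 / shutter_s  # e.g. 1/250s -> 250
    if speed >= 500:
        return False  # fast enough that IS is unnecessary (or harmful)
    # Very low speeds: ~1/30s for longer lenses, ~1/5-1/10s for shorter ones
    lower_limit = 30 if focal_length_e_mm >= 50 else 7
    if speed <= lower_limit:
        return False  # displacement limits of the system exceeded
    return True

# 1/60s at 100mm-e falls inside the envelope; 1/1000s does not.
```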

On the subject of dual IS systems – i.e. moving optical elements plus a moving sensor: these are potentially capable of greater stabilisation effectiveness, because the possible maximum displacement is increased. However, you may well find that whilst the overall structure of the image is maintained down to slower shutter speeds, the microcontrast is not – we now have two sets of moving parts to keep perfectly parallel! Of course, below a certain resolution – dependent on your hardware – this may not be visible.

[Image: Screen Shot 2016-05-17 at 18.18.56]
More smearing – at 1/125s. This one is borderline and may be due to camera shake at 150mm, but ~1/fl shouldn’t be a problem, surely? (100% view)

When stabilisation fails, the obvious problem is that we start to see motion blur in our images at shutter speed/focal length combinations where we don’t expect it – usually where we would otherwise get a crisp image even handheld. However, it can also take the form of slight double imaging when the moving elements ‘jump’ into position. This is not to be confused with letting the system ‘settle’ (i.e. gather enough information about the motion) before capture – the jump happens no matter how much settling time is allowed. It is very similar to shutter shock in appearance, and in some cases may be related – if it doesn’t go away when IS/VR/IBIS is switched off, then the shutter is likely also a culprit.

[Image: Screen Shot 2016-05-17 at 18.15.42]
Smearing at 1/350 and 120mm. (100% view)

[Image: Screen Shot 2016-05-17 at 18.15.37]
Here’s what things are supposed to look like – what I think of as critical acuity, similar shutter speed and FL to previous image – in fact, they were shot just a few frames apart. (100% view)

Many of us expected the Sony A7RII to be the magic bullet for full frame not least because of its stabilisation system – especially given the effectiveness of similar systems in the Olympus M4/3 cameras. However, in further testing, it seems we are not getting the expected 3-4 consistent stops; more like 1-2. And I’ve found the 20MP M4/3 cameras to be about a stop less effective in holding critical sharpness at the pixel level, too. The problem is, to fully map out the situations in which the systems don’t work as expected requires a lot of time and effort, and may not be conclusive.

Since the move to 36MP and higher, I’ve had enough images ruined by stabilisation mishaps under unexpected circumstances that I’ve now become very cautious, especially as resolution increases. As far as I’m concerned, the higher the resolution, the higher the tolerances everything in the imaging chain must be capable of achieving – and moving parts are rather antithetical to that. Stabilisation’s envelope of usefulness is no longer quite what we were used to – especially assuming that we want to maintain pixel acuity (and thus resolution) – otherwise we might as well just shoot smaller files. The sample images in this post were shot with a couple of different cameras under different circumstances, but illustrate clearly the difference between optimal and degraded acuity.

The worrying thing is that I’ve seen this behaviour from all brands – suggesting that we are approaching the limitations of physics. Worse still, over time I’ve seen on several lenses and cameras that internal components such as springs can weaken and begin to sag; this creates optical differences when the camera is rotated (since systems are often optimised for horizontal panning), or even when stabilisation is switched off – often the lower or upper edge of the frame is softer than the rest, as though something has slipped out of position. There seems to be no ‘good’ lens or camera or brand – over time (actual VR run time, not age of the lens), all lens-based systems I’ve used seem to be susceptible to sag; the magnetically-suspended sensor systems are better, but can still return inexplicable results. I can’t help but wonder if electronic or leaf shutters plus better-feeling release buttons are a better way of eliminating shake; for the moment, it’s stabiliser off – or, better, absent entirely – for me. Consider yourself informed if you’re trying to get the most out of your camera… MT


Masterclass Prague (September 2016) is open for booking.




Images and content copyright Ming Thein | 2012 onwards. All rights reserved


  1. tony estcourt says:

    Great article on a subject not really covered (in particular by the manufacturers). On my Fuji X series camera there is an IS mode which only initiates IS during the exposure. Do you think this makes any difference when using IS with telephoto lenses at relatively high speeds (say 1/250)? I’m still not sure when to use mode 1 (on all the time) or mode 2 as above.

    • Sorry, impossible to say as the various modes differ between manufacturers… for some you get better results with IS off at high speeds, for others it makes no difference. And Mode 1/2 can mean different things (but usually panning-only or all axes).

  2. If the image stabilization in a lens is never turned on from day one should there still be a problem with wear and sagging of lens elements over time?

  3. Ming, is this a reason why only flash photographs look really sharp? Because even if I’m using a 10MP APS-C DSLR with a 50mm lens @ 1/80s, there is motion blur on fine structures (MTF charts, scales of a focus adjustment target) compared to using flash. Can you please do a test to see whether such “stabilizer-induced blur” also happens with flash?

    • Possibly – but it depends on your flash duration and fill factor, which in turn determines your effective exposure duration. Most flashes are 1/300 to 1/20,000 (or even shorter) – much better than 1/80. It’s also possible you have shutter shock or may need to work on your technique, though 1/80 for 75mm-e is a little low.

  4. They started from scratch with the magnetic one (the IBIS in A-mount cameras uses springs too, but the sensor is supported by a double sliding tray linked to the body, since it’s not a 5-axis one) so yes, the IBIS in the A7 series Mk II is a prototype and buyers are guinea pigs (I’m one) – but isn’t that always the case? Since you get sharp pictures on long exposures with 3-stop improvements, your camera has demonstrated it can handle the higher inertia successfully.

  5. A good article, thanks for sharing!

  6. Jorge Balarin. says:

    Thank you very much for the information.

  7. Interesting article that raises some valid points, however there are some technical errors, hearsay and incorrect assumptions. Some examples:
    – “systems can usually compensate for pan in X-Y axes, roll, and sometimes also pitch”. This is incorrect, pitch is what most IS systems compensate for (easy signal from piezo gyro) and only the latest 5-axis systems compensate for pan (requires accelerometers)
    – “The camera is simply not sampling fast enough, and making false corrections because it doesn’t have enough information” – sampling a signal fast is easy; actuating fast is hard. Sampling frequency is not what limits IS system performance; available actuation power and sensor frequency bandwidth are. (With regard to sensor bandwidth, one of the reasons we are suddenly seeing 5-6 stop IS systems (e.g. Olympus 300mm) is that a new piezo gyro sensor with broader bandwidth has become available from Murata in the last 2 years.)
    – “springs can weaken and begin to sag” coil spring stiffness relaxation over time will typically not occur to a meaningful degree unless overloaded or exposed to high temperature. Fracture due to fatigue after many cycles (10e7++) might. Also, the positional feedback incorporated in all IS systems can compensate for small variations in stiffness if they did occur.
    – Out of plane alignment of the sensor in in-body IS systems is unlikely. The sensor is typically mounted on a plate which is sandwiched between two other plates separated by small ball rolling elements. This is a robust system with only rolling contact subject to negligible wear/misalignment over time.
    – The power required to actuate the lens element/sensor is linearly proportional to mass, not squared as mentioned in the comments i.e. P=F*d/t = (m)*(a)*(d)/(t).
    – When comparing m43 vs FF sensors for a given field of view and number of pixels, a m43 sensor requires higher IS system precision (minimum distance over which positional control is maintained). A FF sensor requires more power.
    – The reason heavier cameras are easier to stabilise is that the resonant frequency of the photographers arms+camera system is lower for a heavier camera mass (resonant frequency = sqrt(holding stiffness/camera mass)). The lower frequency is easier to compensate for (less power)
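The commenter’s resonance formula is straightforward to check numerically – the holding stiffness below is an illustrative assumption, not a measured value:

```python
import math

def resonant_frequency_hz(holding_stiffness_n_per_m, camera_mass_kg):
    # resonant frequency = sqrt(stiffness / mass), converted from rad/s to Hz
    return math.sqrt(holding_stiffness_n_per_m / camera_mass_kg) / (2 * math.pi)

# Same assumed grip stiffness (500 N/m), two camera masses:
light = resonant_frequency_hz(500, 0.5)  # ~5.0 Hz
heavy = resonant_frequency_hz(500, 2.0)  # ~2.5 Hz -- lower, so easier to compensate
```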

    • “only the latest 5-axis systems compensate for pan (requires accelerometers)”
      Not true, Canon and Nikon have had automatic panning modes in lens stabilisation since the very first generation.

      “– “The camera is simply not sampling fast enough, and making false corrections because it doesn’t have enough information” sampling a signal fast is easy, actuating fast is hard. Sampling frequency is not what limits IS system performance, available actuation power and sensor frequency bandwidth are. (with regards to sensor bandwidth, one of the reason we are suddenly seeing 5-6 stop IS systems (e.g. Olympus 300mm) is that there is a new piezo gyro sensor that has become available from Murata in the last 2 years which has a broader bandwidth allowing)”

      My understanding from speaking to Oly engineers is that the system requires information from both gyros and sensor to compensate motion since the sensors alone do not have enough information to fully correct off-axis shifts in focal plane that may result in unwanted tilt effects – this can only come from the CDAF system, and that’s limited by sensor readout speed. Of course, something could have gotten lost in translation…

      “– “springs can weaken and begin to sag” coil spring stiffness relaxation over time will typically not occur to a meaningful amount unless overloaded or exposed to high temperature.”
      How do you explain the effects seen empirically? Since this is the main mechanical component subject to wear – the rest being electromagnetic – I can’t think of anything else that could cause the degradation seen. It’s not binary, either: lens systems initially all work well, then slowly lose effectiveness – across multiple lenses, with significant use. It can’t be a function of the electronics, since those don’t degrade in a progressive manner.

      “– Out of plane alignment of the sensor in in-body IS systems is unlikely. The sensor is typically mounted on a plate which is sandwiched between two other plates separated by small ball rolling elements. This is a robust system with only rolling contact subject to negligible wear/misalignment over time.”
      Zeiss demonstrated that focal plane misalignment or shift by just 1um is enough to cause a significant change in resolving power. Anything that moves is subject to wear, misalignment, and hell, even settling in a slightly different place because of a bit of dust – 1um is not a lot of latitude for error.

      “– When comparing m43 vs FF sensors for a given field of view and number of pixels, a m43 sensor requires higher IS system precision (minimum distance over which positional control is maintained). A FF sensor requires more power.”
      Agreed, assuming resolution on both sensors remains the same. Except, it doesn’t: I can’t think of any 16MP FF sensors with stabilisation, or 42MP M4/3 sensors. So we can only compare what we’ve got: in practice, FF requires both more power AND nearly as much precision, which makes the whole thing even more difficult.

      “– The reason heavier cameras are easier to stabilise is that the resonant frequency of the photographer’s arms+camera system is lower for a heavier camera mass (resonant frequency = sqrt(holding stiffness/camera mass)). The lower frequency is easier to compensate for (less power)”
      Agreed, basically inertial damping. The only thing that can throw this out is structures within the camera that components (mirror, shutter) might be mounted to that can resonate on their own because they’re not rigidly attached to the rest of the camera – so unless elements like this are taken into consideration, attaching dive weights to the tripod socket might not do you any good. 🙂

  8. Hi Ming, I am a little bit surprised by your conclusions related to angular resolution, i.e. that stabilization is much more difficult with a camera like the 42MP A7r II or with VR on your 36MP D810 than on a 16MP M4/3 sensor. On the basis of angular resolution and IS precision, shouldn’t the physical size of the photosites be the more significant variable? On a pixel area basis, the 16MP M4/3 sensor is already struggling for pixel level acuity at the same level as a 61MP FF sensor, and at 20MP it’s at the level of 77MP FF, territory which has yet to be broached (though surely it is only a matter of time)…

    I certainly can buy the argument that moving a larger sensor is significantly more difficult, however, and perhaps that challenge makes it more than twice as difficult, which would be the rough break-even point for pixel level resolution of IBIS on FF compared to M4/3…

    • No, because it’s dependent on both field of view and resolution. E.g. if we take a 50deg horizontal field of view, 16MP is about 4700px on the long axis and 0.011deg/pixel; 36MP is 7360px wide and 0.0067deg/pixel: clearly, you need a lot less movement before one pixel becomes smeared. It’s more difficult to stabilise small motions because that requires much higher control precision and speed, which is made even more difficult by the increased mass of the sensor assembly.
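The arithmetic in this reply can be verified directly – a one-line sketch using the figures quoted above:

```python
def deg_per_pixel(h_fov_deg, horizontal_pixels):
    # Angular resolution of a single pixel for a given horizontal field of view
    return h_fov_deg / horizontal_pixels

# 50-degree horizontal field of view, pixel counts as quoted:
m43_16mp = deg_per_pixel(50, 4700)  # ~0.011 deg/pixel
ff_36mp = deg_per_pixel(50, 7360)   # ~0.0068 deg/pixel
```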

      • But you are assuming the pixels are the same size, whereas they are not for the m43 (16MP) vs FF (36MP) discussion in this context. With the pixel size being smaller on the m43 sensor, the resolution demands on the stabilisation system are higher for the same angle of view.

        • No, they’re not. Does it take more effort to move something light a short distance or heavy a long one?

          • We are talking about precision here (positional accuracy required from IS system) not power required (which would be higher for a larger sensor). Higher precision is required for m43 sensor as stated.

            • It’s not so straightforward: precision requires acceleration and deceleration, but that applies equally to large and small sensors. More energy is required to accelerate and decelerate a larger sensor, but the tolerances to do so are greater since you’ve got more distance over which to do it before smearing becomes noticeable (i.e. the acceleration or deceleration period). With a smaller sensor, you require higher precision, but less energy. Control systems are never linear and inevitably it’s easier to do small/short/fast than larger/further/same speed. However, this argument is really somewhat moot anyway because M4/3 has a ~3.8u pixel pitch at 16MP, and the 42MP FF sensors have ~4.5u pixels. Assuming things are linear, you only get 18% more permitted ‘slop’ before seeing motion – so the case is even worse for FF because the precision and acceleration/deceleration requirements do not change, but the mass being moved is significantly heavier.
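The 18% figure follows directly from the pixel pitches quoted – a quick check, assuming the half-pixel criterion scales linearly:

```python
# Pixel pitches quoted above, in microns:
m43_pitch = 3.8  # 16MP M4/3
ff_pitch = 4.5   # 42MP full frame

# Extra permitted displacement ("slop") before visible smear on FF:
extra_slop = ff_pitch / m43_pitch - 1  # ~0.18, i.e. about 18% more
```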

  9. Garrit Marinescu says:

    Hi Ming,

    my question is: could we use image stabilisation to get a photograph as good as it gets, and if there are any micro-shakes, then correct them in a postprocessing workflow? I’ve read that a software tool/plugin like “piccure+” uses a mathematical method called deconvolution to sharpen the image without a lens-based profile (good for legacy lenses on mirrorless cameras) and for correcting micro-shakes. Should we use that in postprocessing to get rid of remaining micro-shakes or lens softness, in combination with image stabilisation from the camera/lens system? Would the result be satisfying from a professional perspective?

    Thank you,


    • Good question – I’ve tried, but the results are at best smeary. You can’t really ‘shift back’ something that’s either doubled or smeared, and even if you can, you’ll land up with far less information than a stable image (or one shot with higher shutter speed to begin with). In short: yes, no, not much point and inherently compromised 🙂

  10. Hubert Chen says:

    Hi Ming,

    I wholeheartedly agree with all you wrote – as always, in a very clear fashion. I am suggesting, however, that you have not covered the entire shooting envelope. Your focus is on the high-resolution side of the envelope. I am suggesting that this will not be the case for every photographer.

    There was a time when I was considering the A7R simply because it had great 4K video, full frame and IBIS. I never cared for the ultra-high resolution. At most my images are seen on an 8 MP display, thus anything more is not relevant to me.

    Like you wrote, on lower resolution the IBIS will appear to be more effective.

    There is one more side note. I recently shot a dragon boat race with a 300mm lens on a tripod with a ball head, but without the ball head locked. I enjoyed the speed of moving my camera freely while having the tripod carry the weight (at my age I can’t hold that beast myself anymore for an extended period). I shot at shutter speeds of 1/250. I shot with and without IBIS and found the shots with IBIS to have more critical acuity. I am using a Pentax camera with a 16 MP sensor, and I frequently cropped to keep 1:1 pixels for the extended reach.

    It seems like you said that the results of IBIS are no longer clear cut, and the photographer has to find what works for him.

    In any event you helped me understand some more physical relationships on IBIS, for which I am very grateful!



    • Perhaps I wasn’t clear: I found that it definitely was useful up to 16MP, but above that – and especially with larger sensors – the results tend to be a somewhat unpredictable mixed bag. I’ll leave it on if I have the chance at another shot, but I’d rather deal with a bit more noise but have a higher shutter speed and much greater certainty if I can’t.

      • Hubert Chen says:

        Hi Ming,

        Thanks for your reply. I read your article again and indeed you said so. Interesting I overlooked it the first time. My apologies.

        I just got the feeling you suggested that in a 36 MP camera IBIS does not make much sense. What I wanted to say is that people looking for IBIS & full frame only get the 36 MP option. (To my knowledge there is no full frame with IBIS and, say, 16 MP resolution). If they shoot at 8 or 16 MP they may still enjoy the benefits of IBIS.

        As such it makes sense to include IBIS in a 36 MP model as not everybody needs 36 MP in their images. But at 36 MP IBIS may have greatly reduced effect as you pointed out.

        • Sony does 12, 24 and 42MP with IBIS, and you can of course use IS/VR lenses with the lower resolution Canons or Nikons. I suspect 36MP with artifacts and downsampled will still be slightly better than 16 or 18 without.

          • Hubert Chen says:

            Thanks for your reply. You know the current features and models of cameras way better than I do. That said, I did some research about a year ago, where I ended up considering a high-megapixel-count Sony camera which had no lower-pixel-count equivalent to fit my bill:

            – I wanted full frame. Mainly for better tonality. But also to get more use out of my 35 mm and 50 mm full frame lenses.
            – I wanted a mirrorless camera so I could adapt my existing lenses. If I had needed to buy new lenses this would have made it an 18,000 USD investment rather than 3,000 USD, which was out of the question. Please keep in mind I have full frame lenses and I only need 8 MP output resolution.
            – I wanted IBIS. I have it on my current camera and I do not want to miss it.
            – The other reason for IBIS was that when later buying native lenses I would have gone Zeiss, which has no in lens stabilization.
            – I wanted 4K high quality video. (My current camera is a Pentax, which has only 1080p, which even is not very good).

            To my knowledge only the Sony A7 cameras fit the bill. And I think the 24 MP model at that time had no 4K, so I would have needed to go with the 36 MP sensor just to get 4K video.

            The point I was trying to make was that not everybody who buys a >24 MP camera may actually use this high megapixel count, but may not have had a choice to buy less. For such users the IBIS shall still be as effective as before.

            As a side note (and I thought you may get a kick out of this): In the end I decided to not buy the camera and to invest the money into lights, studio accessories and training materials on script writing and movie editing. In variation of Ansel Adams: Nothing is worse than a sharp video of a fuzzy story 🙂

            • Doesn’t the 12MP version also do 4K? 🙂

              • Hubert Chen says:

                Hi Ming,

                Seems I am coming off as an idiot 🙂 .

                I wrote my initial post with what I knew from memory, which was the market situation about a year or so ago. I am not sure the 12 MP camera was already out by then, or it might have been even more expensive.

                I seem to have failed to bring across my point: I felt your article did not differentiate enough how the usefulness of IBIS depends on the resolution of your output. Maybe a table like this would deliver the message:

                | Resolution | IBIS effectiveness in stops |
                | --- | --- |
                | 12 MP | 3 stops |
                | 24 MP | 2 stops |
                | 36 MP | 1 stop |

                This is of course not accurate, but shows the trend.

                I am nitpicking here. Your article demonstrated a very important point which I have not seen elsewhere. Thank you very much for that!

                • Hubert Chen says:

                  Oh, the other discrepancy with my experience was that for me, effectiveness at higher shutter speeds remained intact. This was for the use case as described. Your explanation of why it may be less useful at high shutter speeds seems not to apply to my camera. IBIS engages once I half-press the shutter. It is true that it needs some time to lock in and compensate for shake, but this is done before the moment of full shutter depression, aka taking the picture. I am observing about 3 stops of improvement from IBIS at shutter speeds of 1/8 of a second, and still see the same 3 stops of improvement at 1/125 to 1/250 when using a long lens handheld or on a monopod. Maybe the Sony IBIS is different there. Pentax has been doing IBIS much longer than Sony – maybe Sony still has room for improvement here? How was your Olympus doing at higher shutter speeds and long lenses? Did you see a decline in effectiveness like with your Sony, or did it remain intact? I would expect the latter.

                  • The Olympus worked fine – it remained intact. The Sony diminished at higher shutter speeds, and sometimes instead produced artefacts – which I suspect has to do with inertia of the components…

                    • Hubert Chen says:

                      My instinct tells me that this is not related to the inertia of the components. Your camera shake will have the same frequency and characteristics no matter the shutter speed you use to take the picture. A longer exposure is actually more challenging than a short one, as the camera needs to compensate for both high-frequency vibration and long-term drift. In order to deliver a crisp image it needs to be able to do that with an accuracy of +/- 1/2 pixel. Since you get sharp pictures on long exposures with 3-stop improvements, your camera has demonstrated it can handle the higher inertia successfully. On a short exposure it only needs to compensate for the high-frequency vibration. If anything, the number of f-stops of improvement should go up. As such I am expecting Sony is struggling with an independent problem, which they will fix eventually.

                • I get what you’re trying to say – however, it also depends on sensor size since that has an influence on the difficulty of physically moving the sensor or lens elements to compensate for motion…

  11. I pretty much agree with all the points you raise. The OM-D EM-5 was groundbreaking in what its 5-axis stabilization could do, so naturally everyone started to expect IBIS to be present in all future models. The Sonys are a cold shower of reality; with a far bigger sensor and resolution, the A7RII gets nowhere near the stabilization that Olympus did with their 16 megapixel cameras. Still, I agree with your 1-2 stop estimate and with combined body and lens stabilization I have gone much further, so it’s still a useful feature in my book, though not critical.

    A practical problem is turning stabilisation on and off, with manufacturers confusing the issue by not always being perfectly clear about the envelope of their systems. Shooting on a tripod is the classic case: ideally, the camera should be able to figure out that vibrations are too low in frequency to matter and turn the system off. Similarly, for high shutter speeds the camera should figure out that stabilisation won’t improve things. Whether or not current software does that is open to speculation or a lot of testing.

    • Sony and Olympus both use suspended magnetic systems, to the best of my knowledge. If my memory serves, the amount of power/field you need increases by the square of the mass increase – so a sensor 4x larger/heavier (probably more when you count the rest of the support structure) requires 16x the power; never mind the necessity to move it twice as fast to translate the same relative angle of view over time (i.e. counteract the same camera shake) and do it more precisely because angular resolution of the sensor has also tripled…I think we can see that it’s an uphill battle, and perhaps remarkable that they managed even 1-2 stops at all.

      Agreed on the switching on and off part. However, a counter argument: if we switch it off, the IS/VR/sensor elements are no longer suspended, and may sag/slump; surely in that case it would make more sense to leave it switched on and supported?

      In short, when it comes to anything critical – I’ve increasingly begun to avoid IS altogether.

      • Indeed. The increase in acceleration and deceleration required to keep up with the resolution is significant, and with increased mass the opposing force also gets a lot bigger, causing shake in itself if not taken into account. Clearly, there are limits to how far this approach can go.

        By turning the system off I was thinking of turning off all sorts of compensation, returning it to a fixed neutral state. In a system that requires power to maintain that state then one must have power even if no IS is done, so I’m essentially in agreement with you, just worded myself imprecisely.

        I haven’t had massive problems due to forgetting to turn IS off, but it’s an annoyance to have yet one more thing to remember to configure and sometimes there are problems. Old cameras were simpler since there weren’t many features — now we need smarter cameras to cope with all the features (don’t get me started on choosing the right focusing mode at the right time…)

        • The question is whether the manufacturers even bother to do this when the system is switched off, or as with most lenses – just fix it mechanically. Either way, there are a lot of potential problems with planarity and alignment…

          Honestly – at least with an all-fixed system, if there’s shake, I know it’s me. 🙂

          • That is easy to test. If on a powered-on system but with stabilisation switched off you hear no rattle but you hear a rattle when the power is switched off, then something must be holding something in a fixed position when the power is on.

      • That is not correct. Power = Fd/t = (m)(a)(d)/t.

        • Power is *work* per unit time, not force.

          • And what is work? Force * distance as I wrote (Fd). The power required is linearly proportional to mass.

            • Power and work are NOT the same thing. One has a time component involved, the other does not. Moving a light object quickly and moving a heavy object slowly can use the same amount of overall work (energy, J) but one will have greater power requirements than the other because of the rate of spending energy. I suggest you check your physics…

              • Did you even read what I wrote? Here it is again: “Power = Fd/t = (m)(a)(d)/t.”

                How is this incorrect? Power = work / time = (force * distance) / time = ((mass * acceleration) * distance) / time.

                I suggest you check your understanding of physics.
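                Both claims in the exchange above can be checked with a quick back-of-envelope sketch (Python, with made-up illustrative numbers). For the same motion profile, peak power is indeed linear in mass – but if the heavier sensor must also complete its correction faster, the requirement climbs much more steeply:

                ```python
# Peak power for a constant-acceleration move of mass m over distance d
# in time t. Illustrative only -- real voice-coil actuators have losses
# and field/mass effects this simple kinematic model ignores.
def peak_power(mass_kg, distance_m, time_s):
    a = 2 * distance_m / time_s**2   # from d = 0.5 * a * t^2
    force = mass_kg * a              # F = m * a
    v_end = a * time_s               # velocity at the end of the move
    return force * v_end             # P = F * v, evaluated at the end

m43 = peak_power(0.010, 0.001, 0.005)    # ~10 g sensor assembly, 1 mm in 5 ms
ff = peak_power(0.040, 0.001, 0.005)     # ~4x heavier full-frame assembly
fast = peak_power(0.040, 0.001, 0.0025)  # same mass, corrected in half the time

print(ff / m43)    # 4.0 -- linear in mass for the same move
print(fast / ff)   # 8.0 -- halving the move time costs 8x the peak power
                ```

                So both commenters are partly right: P = Fd/t is linear in mass for a fixed move, while the steeper scaling appears once the larger, higher-resolution sensor also has to be corrected faster and more precisely.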

      • Hello Ming,
        You are mainly comparing the Olympus and Sony 5-axis systems by assuming the only difference between the two IBIS implementations is the sensor (size and pixel count), but I’m not sure the Olympus 5-axis system uses a suspended magnetic system.
        I’m sorry I can’t give you a reference link because I don’t remember where I read it, but I once found an interview with a Sony engineer explaining how their system differs from Olympus’, since there were a lot of suspicions about Sony borrowing their 5-axis design from Olympus due to their 10% shareholder status.
        From what I remember, he admitted having looked at the Olympus system and talked to Olympus’ engineers, but they couldn’t use it, simply because the spring-actuated, suspended sensor used by Olympus wasn’t strong and precise enough for the mass of a FF sensor.
        Instead they opted for permanent magnets to keep the sensor in flotation (if I recall correctly, Sony still uses mechanical actuators).
        Since they used different technologies, this may also explain the difference in acuity between the two systems: Sony just couldn’t yet replicate the precision of the Olympus system, because the advantage given by springs for a Micro 4/3 sensor no longer holds above a given weight.
        They started from scratch with the magnetic one (the IBIS in A-mount cameras uses springs too, but the sensor is supported by a double sliding tray linked to the body, since it isn’t a 5-axis system), so yes, the IBIS in the A7 series mkII is a prototype and buyers are guinea pigs (I’m one) – but isn’t that always the case?
        He also said that he sees a lot of potential in what could be achieved with it, so maybe a second iteration and good calibration (new movement sensors, an overall redesign with magnetic field coil actuators…) could change everything, regardless of the size or MP count of the sensor.

    • > The Sonys are a cold shower of reality; with a far bigger sensor and resolution, the A7RII gets nowhere near the stabilization that Olympus did with their 16 megapixel cameras.

      The A7RII has a genuine EFCS… none of the m4/3 cameras have that (the fix first introduced in the E-M1 was not an EFCS, and still none of the Olympus or Panasonic cameras have one)… 1/60s is in the shutter shock zone, and naturally IBIS can’t handle that…

      • I think the EM1 with latest firmware and definitely the EM5II both have true EFCS (and full E shutter). However, it seems that all of those cameras give away something in dynamic range or capture rate whilst doing so.

  12. Thanks for putting in words the explanation for what I had blamed on personal bias. I get enough shots which are unsatisfactory with my Sony A7II, that I tend to default to my A7. I had put it down to a difference in sensor stability but your engineering perspective gives me more detail to ponder. Of course, Steady Shot in camera does influence sensor stability so I was part way there. It just makes sense that parts which are designed to move come with tolerances which can allow for imperfect return to battery.

    • There’s definitely a difference in effectiveness with the larger stabilised sensors – simply because there is much more mass to move around, it can’t move as fast unless power consumption, supporting structures/mechanisms etc. become significantly larger…

  13. Alex Carnes says:

    Can’t say that I’ve had much joy with VR either. My beloved 90mm Tamron macro is all to pot already and although I’ve certainly given it some use, it hasn’t had THAT much on-time. It’s now clearly slanted and just out of warranty. It’s a pity because it was optically beautiful when new with staggering sharpness and lovely bokeh. It’s hard to a kid the conclusion that stability is best achieved with a steady hand or a tripod.

    • Alex Carnes says:

      *avoid* – sorry, my phone is telling me what to type! 😡

    • And I’m guessing locking it down in ‘VR OFF’ doesn’t help either? 😦

      • Alex Carnes says:

        No, doesn’t help at all; I fear its days are numbered. Fortunately I’m not really doing much macro at the moment, and I’ve gone back to using my 85/1.8G for short tele. I’ll find something else at some point… I don’t know if Sigma have got an Art series macro up their sleeves, in which case that’d be the way forward. Is your Leica Q’s VR showing any signs of age yet? I think my Ricoh GR would have to be prised from my dead hands, but the Leica still interests me a bit…

  14. I have noticed that high-frequency detail in the subject makes these small movements apparent. I find it very difficult to make fur look like fur without a flash; the smallest movement of the animal or the camera turns fine, dense hair into a mat. I have noticed VR sometimes makes it worse. I’d appreciate any suggestions anyone cares to offer.

    • Unfortunately, the hairs/feathers move a bit more than you might think: often the animal’s own movement is enough to set them twitching, too. The only remedy is a shorter effective exposure time – either by using flash or a higher shutter speed.

  15. Martin Fritter says:

    Limits of physics and limits of manufacturing, I suppose. Plus sample variance. Now if one had no choice other than to buy image-stabilized lenses/bodies, would the sure-fire way to avoid this rot (which is what it is) be simply to never turn it on? I assume the Hasselblad medium-format gear is not subject to these problems. As William of Occam said, “There’s no free lunch.”

    Props to your technical writing – quite a remarkable skill. You know, the camera manufacturers – or at least Nikon, Sigma, Sony – produce horrible documentation. Improving it and putting into a good UI would be such a good idea. Do the Europeans do any better I wonder?

    • Turning it off locks the moving elements in supposedly the zero-displacement position. The problem is all mechanical systems are subject to wear, tolerances and alignment, which means that even if it’s switched off, that moving element may not be in alignment with the other elements in the lens. Obviously, there’s a lot less risk of something like this happening if all of the elements are fixed together in the same tube.

      The non-IS hardware isn’t subject to these problems – regardless of brand. One only hopes sample variation and QC are a little better, as a result – and in general, this tends to be the case in practice.

      As for UI, the Europeans tend to go minimalist rather than feature overload – I suppose it’s similar to their design philosophy for cars… 🙂

  16. How interesting, thank you Ming. I’m browsing for my next lens, probably a portrait/macro, and while I can get a very good deal on older non-VR lenses, I’m too used to (spoiled by!) VR on all my current gear. I think VR for macro would be particularly useful because I want to keep the shutter speed low if I’m in poor light (under a bush, for example) and tripods are antithetical to my normal shooting (landscapes notwithstanding). I’ll have to think about this, thank you.

    • VR for macro is definitely not a magic bullet – I sold my 105VR because it was starting to show artefacts even on the D3, let alone later higher resolution bodies. At normal distances however, it was great. For macro work – VR off, and flash (and/or tripod). No way around this, I’m afraid…

  17. A very good post and one that is apt to be taken out of context. Ming, thanks for taking the time to bring the issue to the fore. I do recall a tech note Nikon released around the time the D800 was introduced. My takeaway from that note was: “if you want 36MP results from this camera, put it on a rock-solid tripod.” I have certainly seen my share of inconsistent results from image stabilization on my Nikon gear, and have come to think of IS as basically something that will help me achieve 1/fl performance and no more. I do suspect that IS in combination with AF is a battery hog; I get far more shots per battery when shooting with older MF Nikkor lenses. Fortunately, we generally have excellent high-ISO performance from our sensors today, and within limits – recognizing the loss of dynamic range – this does allow us to shut off IS and shoot hand-held at somewhat higher ISO ratings.

    • It’s the internet, taking things out of context deliberately or through ignorance is almost certain 🙂

      That sounds about right – though sometimes we get lucky at a bit below 1/fl – perhaps 1/0.5fl – 1/fl is about all I can consistently achieve, and even then, there are anomalous double image results at times regardless of IS setting. It’s the same case with the A7RII – IS was not a magic bullet there, and often resulted in strange smearing.

      It does make me wonder whether our expectations of ‘sharp’ are unrealistic, though. That said, given the limitations of both IS and AF precision as resolution increases – I’m now inclined to take MF prime glass where possible to eliminate one more possible source of motion/misalignment.

  18. One of my favorite challenges (in a non-paid environment) is shooting “car-to-car” (out of a tailgate or otherwise) with a stabilized UWA. Getting sharp results at 1/8-1/2 or better. Under a paid shoot, you have to be on your game – and having a crack sharp editor is needed absolutely. Road conditions, temperature, and environmental distractions are all part of the game. But learning how to overcome those limitations and successfully control your own movements and breathing is as close to zen as I can imagine. Thank you for taking the time to post this information Ming, always happy to see your brain at work.

    • I think UWAs work better with stabilisation because a little bit of lens or sensor movement goes a long way to compensating for motion; much further than with a telephoto (small shakes or changes in angular displacement are equivalent to much larger changes as a percentage of the field of view).
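      A rough sketch makes this concrete (assumed sensor width and pixel count, purely illustrative): the same shake angle smears across roughly eight times as many pixels at 200mm as at 24mm, so the corrector has far less margin for error at the long end.

      ```python
import math

# How many pixels a given angular shake smears across, assuming a
# 36 mm-wide sensor with 8000 px across (illustrative numbers only).
def blur_pixels(focal_mm, shake_deg, sensor_width_mm=36.0, px_across=8000):
    shift_mm = focal_mm * math.tan(math.radians(shake_deg))  # shift on sensor
    return shift_mm / sensor_width_mm * px_across

print(round(blur_pixels(24, 0.01), 1))   # wide angle: ~0.9 px of smear
print(round(blur_pixels(200, 0.01), 1))  # telephoto: ~7.8 px for the same shake
      ```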

  19. Ming, you have covered the I.S. issue in nice detail, as is your normal practice. This is my limited experience with Canon’s I.S. system, for what it’s worth …
    I am a pro dance photographer, and I rely on my Canon I.S. system in only some of the situations where I need extra help in getting a sharp image. Two scenarios where I.S. does NOT work are as follows. The first: using a monopod or tripod – or, for that matter, using my arms and body position to brace the camera. I.S. “on” will ruin the sharpness to the point of the file being unusable, every time. It’s really designed for the photographer who has to hand-hold the camera/lens, “free form”, as they “pan” about seeking their composition. The second failure is if the shutter is too fast. I can hand-hold a 200mm lens at 1/15th of a second and get a completely usable file as far as sharpness goes, but higher than 1/125th and it starts to get very dicey; above 1/250th of a second, forget getting a sharp file. Canon has told me the system is designed for a “moving” lens/camera and a slower shutter. It works well enough under these conditions, which is where it should work. They also mentioned that too often, photographers assume that I.S. will work wonders under ALL situations, which is obviously not how it is designed to work. I do have a usable (sharp) file from a hand-held, 200mm lens capture at 1/10th of a second, f/2.8… I think it worked because it was a moored boat, slowly rocking in the current: I simply started following the rhythm of the boat’s movements, and squeezed the shutter. It was perfectly sharp. So I.S. can work wonders, but it is not a “fix all”… (Oddly, using my Leica M9 a lot now makes me really appreciate Canon’s I.S. system, even with its limitations.)

    • There’s no way you’re going to get a sharp image at 200mm and between 1/125 and 1/250 or so though. If anything, it would be nice if IS/VR would continuously extend the shooting envelope from the lower point of where handheld is impossible…but it makes no sense if you still need a tripod in between.

  20. The unpredictability of the IS systems in some usage scenarios is undoubtedly frustrating. I too have seen some odd behaviour at faster shutter speeds. That being said I have some superb portraits of my young son interacting with his mother in the garden shot on a Canon EOS 5Ds with 100-400mk2 near the long end at 1/50s which are pin sharp, images that without IS would have been totally impossible. When it works it is a superb feature and certainly for longer lenses I would sooner have it than not!


    • Faster shutter speeds aren’t the surprise – we know that the system can’t respond fast enough, and slower is asking too much of it. The unpredictability happens when your shutter speeds are in the supposedly good operating range for IS/VR…

  21. Props to Sony for not adding IBIS to the RX1R II. Great article, Ming. For the most part I agree; however, the IBIS/lens IS synchronization feature on the E-M1 works brilliantly with the new 300mm (600mm equivalent) Oly pro prime. I can’t imagine shooting this lens without it.

    • M4/3 resolution isn’t quite at the point yet where we’re seeing the edge of the stabiliser’s envelope; any possible sensor tilt is covered by greater DOF. There does seem to be a bit of difference in effectiveness for critical sharpness between 16 and 20MP bodies though…

      • Should be interesting to see if Olympus touts greater IS effectiveness (hopefully with an explanation of how it is achieved) in the soon to be announced E-M1 MKII. Perhaps they have found a way to mitigate the hi-rez/IS conundrum given that hand-held hi-rez mode is forthcoming. And Sony must also deal with it in the upcoming A9.

        • They do, but it’s likely that with higher resolution we’re only going to see things kept at par. Remember that at higher resolutions, the same degree of angular movement per second may not be enough to prevent blur spilling to the next pixel…
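          To put a rough number on it (assumed sensor width and pixel counts, illustrative only), the largest shake angle that keeps blur within a single pixel shrinks in direct proportion to pixel count:

          ```python
import math

# Largest angular shake that stays within one pixel, assuming a 17.3 mm-wide
# Micro 4/3 sensor (small-angle approximation; numbers are illustrative).
def max_shake_deg(focal_mm, px_across, sensor_width_mm=17.3):
    pixel_mm = sensor_width_mm / px_across     # pixel pitch
    return math.degrees(pixel_mm / focal_mm)   # theta ~= shift / focal length

print(max_shake_deg(25, 4608))  # ~16 MP body
print(max_shake_deg(25, 5184))  # ~20 MP body: ~11% tighter tolerance
          ```

          Going from 4608 to 5184 pixels across cuts the residual-error budget by about 11% for the same lens, which is consistent with a small but visible difference in critical sharpness between the 16 and 20MP bodies.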

  22. So I would surmise that a sensor-based system gives the best of both worlds. If you don’t need IS, switch it off. If it fails, the sensor won’t be moving – presumably no image impact. Besides, bodies are volatile technology and are likely to be replaced long before good glass. On the other hand, if lens IS goes, you could be up for repairs to correct floating elements that have gone out of alignment and degrade image quality.

    • The sensor can still fail or lock in a position that’s out of plane with the mount, which is worse: then every image will come with free Scheimpflug tilt…

      • Frans Richard says:

        So true. I’ve had the experience with my OM-D EM-10 that the image in the viewfinder suddenly started flapping about when IS was on. Made framing any picture completely impossible. I turned IS off and could continue to shoot, raising the shutter speed. Later, when I scrutinized the pictures I had taken without IS I noticed a slight tilt in the focus plane.
        I agree IS is useful only at certain shutter speeds. But it is also helpful for stabilizing the viewfinder, no matter what the shutter speed, so I mostly keep it turned on. It would be nice if the engineers could implement a method that turns IS off just as you press the shutter button when the shutter speed is above a user-selectable value. But perhaps this would result in increased shutter lag.

        • That sounds very bad – and like the camera should really go home for a checkup. If you’re noticing tilt with IS off, it’s quite possible that even with it on your sensor may not be in plane.

          Stabilising the finder is definitely useful, though at times I find it makes it very difficult to adjust the composition precisely because the view ‘swims’ a bit as though the camera doesn’t quite know what to lock on to. Some systems only turn it on for the shot, which makes even less sense as the system has to move immediately before and stop afterwards – leading to more inertia and motion related issues.

          • Frans Richard says:

            Oh yes, it was very bad. The next day the camera failed completely. After shortly flashing IS in red it turned itself off and would not turn on anymore. Luckily that was after the weekend I went out shooting and I don’t do photography for a living. The camera went back to Olympus and they fixed it under warranty. Had it back within two weeks. The report said they replaced the image stabilizer and the motherboard. Must have been an expensive one for Olympus. But they’ve one happy customer because everything is fine again! 🙂

            I’ve noticed the ‘swimming’ too. Like you said, at times IS is useful, at times it’s not. On Nikon VR lenses you can turn it off with the flick of a switch, which is great. On Olympus cameras, unfortunately, you have to go to the menu. If anyone knows a way to program a button on the EM-10 to toggle IS on/off with a single press I’d love to know.

            • This is why we pros must always have more than one camera – if something like that happens in the field, it’s game over…and you can’t make those excuses to a client. So, two sets of lenses, two Nikons, two Hasselblads, two Leicas or whatever it is…

              I think there’s an IS mode shortcut, but not necessarily one that does direct on/off.

  23. If only the manufacturers would stop chasing megapixels and focus on the rest of the imaging chain, if only for a generation or two. We’re well past the point of sufficiency for Instagram after all :p

    • Actually, they’ve stopped at the low end – it’s been 18-24MP for quite a number of years now. We’re getting selfie modes and wifi now 😉

  24. It’s good to see someone else thinking about this.

    Photographing sports (most of what I do), there are a few times when stabilization should make a positive difference and yet, those are the times that stabilization seems to fail me.

    I’ve had very good luck with Olympus and Pentax, and an almost terrible time with Panasonic. I believe that the quality of the parts varies wildly.

    The combination of in-body and lens IS seems to hold great promise, although what I’ve seen from Panasonic hasn’t shown me much yet.

    • I think it’s the unpredictability of response that doesn’t help. Similarly, if you need 1/1000s (let’s say) to freeze action, stabilisation isn’t going to make much of a difference anyway…

      • For still–very still–photography, stabilization has been good and I’ve been surprised at some photos I’ve been able to capture but you’re right–being able to guess what works and what doesn’t is frustrating.

        Part of the problem for me is when the action goes into the evening. Naturally, video seems to complicate any situation. I have a typical tripod as well as a table top-sized tripod and they could work, but the flexibility isn’t there.

  25. Vincent Zhang says:

    Thanks for the post Ming. I’ve got half of my lenses equipped with IS (Canon) and half without, what kind of usage life did you get with VR lenses before they started to show problems like you’ve mentioned above? And also, have you noticed age-related image quality issues with non-VR lenses?

    • I’m guessing it’s probably actuation-cycle based and even then dependent on the duration the VR system is on, so it’s hard to say. Perhaps 15-20,000 shots?

      No issues with non-VR lenses other than AF motor failures or lubricants getting dry.

  26. jean pierre (pete) guaron says:

    Thanks for your post, Ming – I really appreciate it, although it’s much more technical than I am – I’ll study it in depth later.

    It seems to make a valuable contribution to the discussion surrounding image stabilisation. What you are saying doesn’t surprise me, because I’ve had the feeling for quite some time that manufacturers push stabilisation in an effort to promote their cameras over their competitors. Some of the promo they’ve produced had that whiff of “snake oil” about it. I’ve also noticed there seem to be at least two fundamentally different stabilisation systems being promoted – I don’t know how that works – all it does for me is to cause confusion.

    My first introduction to stabilisation was the Zeiss Contarex (the so-called “cyclops” camera). Stabilising it worked on a very simple principle: the camera was a hefty piece of gear, and the inertia of all that weight made it relatively easy to take hand-held shots without a trace of camera movement. Of course you could blow that by shoving a tele lens on, dropping the shutter speed to 1/30, and ramming the shutter button instead of squeezing it. But it was simple and worked, and I took something like 15,000 photos with that camera over the years.

    Shutter speed of course helps – it cannot eliminate camera movement, but it will increase or decrease the effect of any movement. And when I cannot avail myself of a ‘pod, I’d far rather crank up the ISO or open the aperture, than crash the photo by using too slow a shutter speed.

    • Well, stabilisation does work and can be highly effective, up to a point. I’ve got 1s images at 24mm-e, handheld, from an E-M5 that would otherwise be impossible. But I’ve also got strange artefacts at 1/250s and 120mm on my D810/24-120 VR combo – and yet other artefacts that only happen when the camera is in portrait orientation, and only with certain lenses. I have no idea why this is the case, other than it is almost certainly the result of some unforeseen interactions in a complex system. What I don’t have at the moment is any 100% certainty of a clean image unless the shutter speed is high enough, and to be honest, I’d almost rather not be tempted to take the chance otherwise and lose the shot completely. Motion blur we can clean up to some extent; double images we cannot.

  27. Ming, you don’t expressly mention the Q – have you found notable issues with its stabilisation system?

    Thank you,


    • It has limits like the others…mine just came back from Leica after two months (!!) during which components of the system were replaced because…it was making double images. So much for a professional tool and service…yet there still isn’t anything with quite the same shooting envelope, so we persevere.

      • I shoot the Q with the IS off due to Leica’s early comments that it can slightly degrade image quality, and your comments seem to back this up. I never gave it a thought with my A7RII though, but now I will… thank you for the article.


  1. […] Stabilisation is good…but only up to a point. […]

  2. […] We might see optical IS on medium format – Pentax has already been doing it, though I find IS results in general are somewhat hit and miss beyond a certain resolution – but to suspend a 54x40mm sensor plus mount and ancillaries just isn’t going to happen […]
