What’s old is new again; history goes in cycles, etc. One of the earliest widespread experiments in photography – dating to the mid-1800s or earlier – was stereoscopy: the making of a three-dimensional image from two normal flat images shot from slightly offset positions. Though there are many methods of varying complexity that can be used to create the illusion of three dimensions, they all rest on the same fundamental principle: we humans physiologically have stereoscopic vision because we perceive an object from two slightly different positions; our brains interpret both the difference between the images and probably also the physical position of the eyeball, focus muscles, iris etc. to gauge relative spatial position and absolute distance. Without this, two-dimensional images rely on cues such as overlap, shadows and fade/haze to suggest distance and position. Photography itself is the projection of a three-dimensional world onto a two-dimensional recording medium: this brings significant limitations in reproduction and fidelity, but at the same time opens up great possibilities for artistic interpretation that a person with normal vision simply cannot see with the naked eye. In essence, we are forcing both eyes to see the same image at the same time.
Stereoscopic photography is pretty much the opposite: we want to force each eye to see a slightly different view, as though the physical object were viewed from the position of that eye*. We are in effect forcing the eyes to view their usual different images, forcing the brain to process them, and thus recreating the illusion of depth. However, it’s more complex than simply putting a divider between two images, extending it to your nose, and forcing each eye to view a different image – the illusion doesn’t work, because the brain knows what’s going on: the rest of the physiological requirements are not satisfied; i.e. the brain knows the eyes are looking at images in different spatial locations because of the feedback given by the physical apparatus of the eyeball itself. We need to try a little harder: the images have to actually physically overlap.
*This becomes a critical consideration later on when attempting to construct a setup that has the desired effects.
There are really only three ways of doing this in full color: 1. putting the image for the left eye on the right and the image for the right eye on the left, and crossing your eyes (a lot of the early stereo viewers worked this way, with the aid of lenses to force your eyes to focus ‘incorrectly’); 2. using a system of prisms in a beam-splitter-type arrangement to create a virtual image for each eye that sits in the same physical position; or 3. an interlaced system that displays left/right eye images alternately, faster than persistence of vision, used in conjunction with polarised glasses that alternate at the same frequency so only the intended eye sees the intended image (i.e. blacking out the unintended eye). There is also the usual red-blue separation of images that requires filtered glasses, but we will not consider this as there is zero color fidelity. In essence, however, it works the same way: each eye sees a different image, but the irrelevant image is filtered out by the glasses. (I do not consider holograms part of this group as they are not stereoscopic: they are really a single-source interference pattern that both eyes view simultaneously.)
The simplest method is of course the first one: crossing your eyes. It requires no additional equipment, can be viewed by most people on normal screens, and retains full color. The downside is that not everybody can cross their eyes at will, and if the images are not properly prepared, viewing can be fatiguing and induce nausea and headaches. In essence, you need to cross your eyes until the two images overlap and ‘snap’ into place, creating a third one in the middle: this is the three-dimensional image, and once you focus here, you should be able to hold the picture fairly easily. Moving your head and looking around a little won’t cause the illusion to break, though there are obviously limitations in how much ‘to the side’ of the subject you can see, given the way the image was shot. If anybody remembers those Magic Eye books from the 90s, the principle is the same.
I’ve found that if the image is prepared correctly, the best way to view it (at least for shortsighted people like myself; I can’t speak for those with normal or long sight) is to have the images approximately 6″ wide on the long axis, at 20-30cm distance, and preferably on a very high resolution screen; this does wonders for the illusion of transparency. Remove your spectacles if you are shortsighted. My iPhone 8+ in landscape orientation is perfect. (Image pairs obviously have to be left-right, not up-down – unless your eyes are laid out differently to most people’s.) Cross your eyes until the two images overlap to create a third, then focus on the third. If it’s too tiring or painful, move the image pair further away from your eyes – this way the angle subtended between the images required to create overlap is smaller, and less displacement is required from your eye muscles. It’s also easier to start with portrait pairs rather than landscape – once again, less physical eye crossing is required for a given image area.
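If you want to see why distance helps, here’s the rough geometry – my own back-of-envelope working, with assumed numbers, not anything rigorous. For cross-viewing, each eye has to rotate inward far enough to point at the opposite image. With the image centres separated by s, an interpupillary distance of e and a viewing distance of d, the inward rotation per eye is approximately

\[
\theta \approx \arctan\!\left(\frac{s + e}{2d}\right)
\]

Taking s = 15cm (two 6″-wide frames nearly touching), e = 6.4cm and d = 25cm gives about 23° per eye; doubling the distance to 50cm roughly halves that to about 12°, which is exactly why moving the pair further away is so much more comfortable.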
Believe it or not, the viewing is actually the easy bit. Preparing the images is heavily trial and error, as there are a huge number of variables to consider. Firstly, the subject distance: if everything is at infinity, you won’t have much of a stereoscopic effect, since the image pair will effectively be identical. If the subject is too close, you won’t be able to get enough of the image in focus. If you use the wrong focal length, the perspective will likely give you a headache because it’s unnatural; after much experimentation I’ve found that something in the 60-80mm-equivalent range is about right – it seems to match the native field of view of one eye when you’re concentrating on something. Anything wider feels odd, and anything longer lacks the separation. It’s also best done on a smaller camera format, as you can more easily get everything in focus; I shot the accompanying images to this article on Micro Four Thirds for this reason. (Additionally, high resolution isn’t required anyway – you simply can’t view these images large because of the required eye displacement.)
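As a sanity check on that range (standard full-frame geometry, nothing measured from these shots): the horizontal field of view of a lens of focal length f on a 36mm-wide frame is

\[
\mathrm{FOV} = 2\arctan\!\left(\frac{18\ \mathrm{mm}}{f}\right)
\]

which works out to roughly 33° at 60mm and 25° at 80mm – plausibly in the region of the eye’s ‘attentive’ cone, though that interpretation is mine rather than anything rigorous.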
There are two more critical issues: timing and positioning. The first is easy with a static subject: you can use one camera and move it. With a moving subject, either you use a single camera and have to build a beam splitter that’s really only optimised for subjects at one distance (or a complex one that is adjustable with distance and focal length), or you use a pair of cameras with identical settings, focus distances, etc., synced on release. We used one camera here, though future experiments may require a second one. More complex is positioning, i.e. the separation and aiming angle of the camera(s) in the two positions. This varies with focal length, subject distance, and to a certain degree the intended ‘3D-ness’, for want of a better word. I do not have a formula for this – I’m sure it’s derivable, but I’m far more likely to get the mathematics wrong that way than through experimentation – but basically, the position of the camera has to roughly replicate the position of a pair of human eyes if they were viewing from the intended camera position. You need to reduce the spacing compared to normal human interocular distances a bit when close up, and actually re-aim the camera back towards the subject (our eyes slightly converge when focusing on near objects); for more distant objects, greater spacing is fine and re-aiming is often not really required. Longer focal lengths require wider separation; shorter, less.
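To make the positioning logic concrete, here’s a minimal Python sketch of the geometry as I’ve described it – the constants and the taper are assumptions for illustration, not a calibrated formula:

```python
import math

HUMAN_IPD_MM = 64.0  # average adult interocular distance, in millimetres

def stereo_base_mm(subject_distance_mm: float) -> float:
    """Camera separation: full interocular distance for distant subjects,
    tapered down for close ones. The taper below 1 m is an arbitrary
    assumption for illustration, not a calibrated value."""
    if subject_distance_mm >= 1000.0:
        return HUMAN_IPD_MM
    return HUMAN_IPD_MM * max(subject_distance_mm / 1000.0, 0.3)

def toe_in_degrees(subject_distance_mm: float, base_mm: float) -> float:
    """Inward aim of each camera so both optical axes cross at the subject."""
    return math.degrees(math.atan((base_mm / 2.0) / subject_distance_mm))

for d in (300.0, 600.0, 2000.0):  # subject distances in millimetres
    b = stereo_base_mm(d)
    print(f"subject at {d / 1000:.1f} m: base ~{b:.0f} mm, "
          f"toe-in ~{toe_in_degrees(d, b):.2f} deg per camera")
```

One side effect of the linear taper: the toe-in angle works out roughly constant for near subjects, which squares with the idea that it’s the naturalness of the convergence, rather than the base itself, that you want to hold steady.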
Lastly, when laying out the images, make sure that the left-eye image goes on the right, and the right-eye image on the left – if not, you will not be able to ‘lock’ the third image when crossing your eyes. It’s also extremely important to make sure that final exposure and any local adjustments are absolutely identical; if not, the third image won’t lock, either. I suggest using ACR and editing images in pairs, avoiding any local adjustments in PS at all – it would be impossible to identically duplicate dodging and burning brushstrokes, for instance. This is one of the reasons I’ve used our MING 19.01 here as the crash test dummy: controlled lighting, a static subject, and strong underlying motivations to find a way of representing the product in a non-two-dimensional manner. In the watch world it’s very common to be told something looks completely different in the metal or on the wrist (unless it’s because the released images were renderings, in which case such statements are completely deserved) – and when shooting, it’s very difficult to capture the structure and interplay of complex mechanisms, no matter how much focus stacking is used; if anything, stacking tends to remove spatial cues such as depth of field. Hence there’s a strong desire to bring as realistic an experience as possible to those who might not be able to view a watch in person – more so when you’re a new brand and your business is online.
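Since the layout and swap are purely mechanical, they’re also easy to script rather than doing by hand. A minimal sketch using Pillow, assuming both frames are the same size and have already been identically processed (the file names and gap width are hypothetical placeholders):

```python
from PIL import Image  # pip install Pillow

def make_cross_view_pair(left_eye_path: str, right_eye_path: str,
                         out_path: str, gap_px: int = 20) -> None:
    """Lay out a cross-view stereo pair: the LEFT-eye frame goes on the RIGHT."""
    left_eye = Image.open(left_eye_path)
    right_eye = Image.open(right_eye_path)
    w, h = left_eye.size  # assumes both frames share identical dimensions
    canvas = Image.new("RGB", (2 * w + gap_px, h), "white")
    canvas.paste(right_eye, (0, 0))           # right-eye frame on the left...
    canvas.paste(left_eye, (w + gap_px, 0))   # ...left-eye frame on the right
    canvas.save(out_path)

# Hypothetical file names, purely for illustration:
make_cross_view_pair("watch_left_eye.jpg", "watch_right_eye.jpg", "pair_cross.jpg")
```

Swap the two paste positions to produce a parallel-view pair instead – which, judging by the comments below, may be what some viewers actually need.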
In any case, enjoy. Please resize the screen window until the images are the right size; start small and gradually work larger. You can click on any of the images to download the original large files. I’m actually curious to see how many people manage to get the stereoscopic image to form – please let me know in the comments! MT
__________________
I really liked your post and the images are really excellent to view. I have been looking at things stereoscopically for many years without understanding it – thanks for your helpful detail explaining stereoscopy and sharing how you created the images. My favorite image was the third one with the movements in clear view. I was able to “see” the 3D effect most clearly in that one.
Thanks!
I tried viewing these on my 3D phone but there are some alignment issues. I assume the images are right eye first? Did you run stereo alignment on these?
Cross-eyed is the worst 3d-viewing technique. I would never instruct anyone to use it.
I don’t think they were designed for the method you used…
I found your blog when looking for images to view with an Owl 3D viewer. They work really well; I especially like the ones with the mechanism visible. I couldn’t do the cross-eyed technique, and as another comment says, the images are not reversed. I agree with you about the ideal focal length being 60-80mm. I have made a little jig that fits on top of my tripod and allows me to slide a compact camera 65mm to left and right. Gives good results. I had tried this years ago; it is good fun capturing the depth of a scene. Thanks for sharing the photos.
Thanks – I have to look up that Owl viewer…
Very nice first attempt at stereophotography – my first efforts were not nearly so successful. A couple of the images were a bit hyper-stereo – the lens positions were too far apart between exposures, causing a sense of unnatural elongation when the images are merged in the visual cortex – but overall, outstanding for a first effort.
You did miss one method of displaying full-color images though – projection through filters that are polarized at 90 degrees to each other, with glasses that are likewise fitted with PL filters – causing a perfect merge. The upside being that it’s cheaper to hand out 50 pairs of paper PL glasses than to buy a flicker system which alternates polarization at 50 FPS; the downside being that the projection medium needs to be a proper old-school silvered screen.
Anyhow – great post 🙂
I threw away the ones that didn’t work 🙂
Good catch on that other method – I wasn’t aware of it…
Worked really well at 15cm wide on my laptop. I guess I can do both cross and parallel, but the latter is more comfortable.
Now I badly want one of your watches. I love the Magic Eye books, always cool to look at.
🙂
Marco points to the death of 3D HDTV for home viewing. This highlights the problem with 3D: the lack of a viable means to view it. Consumers voted with their feet and 3D HDTV died, as even the largest flat-panel TV is too small to do 3D justice. 3D calls out for a truly immersive experience.
In 2003, during a visit to Futuroscope, about 10km north of Poitiers, France, I experienced a gob-smacking 3D cinema presentation. As a prologue to what was to come we were presented with an animation film of a little alien being who first appeared to be on a theatre stage, but then a platform (walkway) suddenly extended to bring the alien to a distance that appeared to be no more than 10ft away. You could see the audience literally reaching out to try and touch it, such was the illusion. But not only this, even the voice appeared to follow the alien and emanate from it. What followed was a presentation of a couple of 3D animations and a less successful (technically) conventional 3D movie which paled in comparison.
Sadly mainstream 3D movies are nothing close…just nausea inducing. 😦
In 1953 or 1954 my father took me to see the only 3D film I’ve ever seen, Hondo, with John Wayne. I don’t recall now how effective Hondo was as a 3D presentation, as my abiding memory is of the supporting demo 3D films, which had objects coming at you from “out of the screen”, so to speak. But Hondo is a famous 3D production, probably one of the best of the genre.
A history of cinema 3D can be found here: http://www.3dfilmarchive.com/home/hondo-3-d-release
Thanks – a very interesting link!
“Hondo” was the only 3-D film you’ve ever seen? Shame on you! The Windmill cinema in London used to play (..in the seventies?..) a Diana Dors colour 3-D film (with polarising glasses to watch it) so I asked to see how they projected it ..and was walked outside and up a very rickety fire escape, and into a very rickety projection booth, with a very rickety projector (..like those in “The Smallest Show On Earth”..) with an incredibly rickety spinning wheel with two polarising filters in front of the very rickety projector, showing alternate film frames through the alternatingly polarised filters. (Projectionists are usually very happy to show people around their cramped domains.)
That was colour 3-D, but there was, previously, a host of 1950s anaglyph (red and green filtered) black and white 3-D movies ..”Creature From The Black Lagoon”, etcetera ..all great fun, as long as your red and green vision is OK. But with my poor red vision, they never looked as alarming or three dimensional as the polarised 3-D movies.
Several Imax movies have been shown in 3-D, such as the 3-D Space Station film ..which jumped out of sync when I saw it in the Munich Imax / Deutsches Museum (mentioned elsewhere here) so that every frame looked “inside-out”! Ugh! ..Most of the audience felt so sick that the film was stopped, and we were all given a tour through the projection room instead, and a free snippet of Imax film by way of apology.
But 3-D is really marvellous in the French “Futuroscope” theme park!
Yes, my only 3-D movie experience, but what a classic! My second viewing was last November when it received an airing on UK free-to-air TV in 16:9 format. I keep a Toshiba 36″ CRT TV purely for viewing standard definition TV broadcasts as it does a far better job than viewing on my HD TV. The SD quality was top notch and this prompted me to get the Blu-ray disc.
Isn’t Futuroscope wonderful?! ..Yee-e-ess!
Yee-e-ess eet is!
With the death of 3D HDTV the playback of stereoscopic imagery looks like it will be problematic.
Instead of focusing on stereoscopic 3D rendering, why not focus on making sensors and lenses that give 2D photographs sufficient 3D depth cues via shallow DOF, high microcontrast, and high color tonality? I’m talking here about fast lenses with as few elements as needed paired with trichromatic CFA sensors that can record the light as the eye sees it. Leica did it successfully with the M9 and Noctilux 50/.95 but it seems Canikon have been more interested in pairing complex, optically-corrected lenses with weak CFA sensors tuned for high ISO. The results seem to be… very high resolution flat photography.
“..trichromatic CFA sensors that can record the light as the eye sees it … Leica did it successfully with the M9 and Noctilux 50/.95..”
Well, it’s not so much sensors which “..record the light as the eye sees it..” but the camera-company-chosen processing, afterwards, which delivers photos “..as the eye sees it..” ..from three sets of black-and-white inputs. The camera company (Olympus, Leica, Canon, Fuji, etc ..and we must mention Hasselblad here, of course!) chooses what they want the results from their own brand of cameras to look like.
Oddly, I can’t see what’s so special about the M9, but that’s probably just my own poor-quality vision: my M9 photos – mainly with very wide angle lenses – all look sort of “dreamy”, but perhaps that’s because I’ve shot most of them in very bright light ..as the M9’s hopeless, compared with other brands, in low light, so maybe excess light is flooding the photos. (The M9’s really more like a film camera, but with lower high-ISO than any 3200 or 6400 film.)
“..Canikon have been more interested in pairing complex, optically-corrected lenses with weak CFA sensors tuned for high ISO..” ..huh? “..Complex, optically-corrected lenses..” isn’t that what most lens manufacturers make? I think that all photographic lenses – apart from those intentionally poor plastic Holga, Lens-Baby and Lomography optics – are “optically-corrected”, whatever that may mean. Don’t all manufacturers aim for lenses with “..as few elements as needed..”? Or do some companies say “let’s throw in an extra couple of elements – we don’t actually need them, but it’ll make the lens feel heavier and more expensive” – like the small pieces of steel inside the cheap-plastic-bodied Nimslo 3-D cameras?
I didn’t realise that Canon and Nikon have been using “..weak CFA sensors tuned for high ISO”. I wonder in what ways they’re “weak”. I’d have thought that, above all, it’s Sony which has been making sensor-&-electronics combinations for high ISO, like the A7S and its 409,600 ISO setting, which is wonderful for moonlight shots.
This article is headed “Experiments with stereoscopic photography” ..so I don’t think that Ming’s suggesting that 3-D should go mainstream; his aim seems to have always been, apart from this experiment, “..very high resolution flat photography”.
Or have I misunderstood something?
David, good points. I chose to reference the Leica M9 mainly because it has an old-school CCD sensor (designed entirely for color quality, not high ISO) and because it can mount the Noctilux 50mm f/0.95. I’m intrigued by the dreamy effect you say you get from your M9, but the type of 3D immersion I am talking about can best be seen in Thorsten Overgaard’s review of the 50/0.95 below:
http://www.overgaard.dk/leica-50mm-Noctilux-M-ASPH-f-095.html
I’ve found that this is a 3D look you cannot get from Sony, Canon, Nikon, or Fuji. Only a few medium format cameras approach this level of 3D immersion when paired with the right lenses… but at a significant cost to portability.
I believe Canikon have essentially pushed themselves into a corner by chasing ever higher-ISO and higher-resolution 35mm sensors. To support smaller pixels and better high ISO they’ve weakened their color filter arrays to allow more light through to the sensor. But this has come at the expense of some color fidelity, as weak CFAs have more difficulty separating subtle hues. Color tonality is an essential depth cue, and skin tones in particular suffer when weak CFAs are employed. You may enjoy these videos on the subject:
https://luminous-landscape.com/phase-one-trichromatic-sensor-explained/
Similarly, high-resolution 35mm sensors have prompted Canikon to produce sharper optics with fewer aberrations to better match the demands of 36, 42, and 50MP cameras. But this too has come with tradeoffs. Lenses only get sharper when aberrations are reduced via additional elements and exotic glass (SLD, ED, aspherics). Consequently we are now seeing modern primes that have as many elements as zoom lenses! Not only are they bigger and heavier – they also transmit less light (T-stop) and optically render the image as flat as zooms do.
The bottom line is that thanks to all of this sensor “progress” we are now entering a period of very flat, high resolution photography. It makes me wish manufacturers would turn the clock back to the period when color fidelity ruled digital photography.
Marco, I was going to leave it at that, and leave the last word to Ming.
But then I thought that might be disrespectful to your comment (..my not leaving a reply..) so here are some responses..
Thorsten’s always rattling on about the 50mm Noctilux, though I don’t like it because it’s big and heavy, and Thorsten never mentions that for just a few hundred pounds (not thousands) you could buy a (Leica screw-thread) Canon lens with double the focal length – 100mm – and a quarter the aperture – f1.8 – and get almost identical pictures to that Leica 50mm f0.95. The Noctilux (or the Canon 100mm f1.8) provides a separation of the foreground from the background to give what you call “3-D immersion” at, say, 20 feet away. But at normal person-to-person portrait distances I find that the limited depth-of-field of the Noctilux means that if a nose is in sharp focus, then an ear and wisps of hair at the back of someone’s neck are out of focus, and blend into the background ..and that does NOT separate the person from what’s behind them. So I prefer the Leica 50mm f1.4, because (a) it’s smaller and lighter, and (b) it gives better separation (..as would the Noctilux at f1.4..) between a person’s face and what’s behind them.
You say “..I’ve found that this is a 3D look you cannot get from Sony, Canon, Nikon, or Fuji. Only a few medium format cameras approach this level of 3D immersion”. But I find that, say, the current Canon 85mm f1.2 gives just that kind of separation which you like ..and so I – and many others! – often use an 85 1.2 for portraits ..just as good as, if not better than, a 50mm Noctilux. Thorsten, though, has hitched his wagon to Leica, whereas I’m brand-agnostic ..I don’t care which brand I use as long as the picture looks how I want it: I’ll often choose a lens, and then think “which camera does that work well with?”
You say “..I’m intrigued by the dreamy effect you say you get from your M9..” and, to me, these look what I’d call “dreamy”:

https://tinyurl.com/y7yrcggk
..but maybe that’s because I photograph ideas which are in my head, dreamy views which have a hint of “Juliet Of The Spirits”. I don’t take photos of what’s in front of the lens, as if I were using a portable Xerox machine: I take pictures which are in my mind, to print them ..and that makes them tangible. So these aren’t photos of, say, “the sea front at Whitstable”, and “woman reading with boats beyond her”.
The second one’s more “my thoughts of what her thoughts may be; drifting away..” and the first one is more “my thoughts of what the seaside might be..” unlike, for instance, Martin Parr’s views of “details of what people do at the seaside”. These are not meant to be particularly sharp, or to have “accurate” or realistic or “true-to-life” colours, they’re there just to invoke feelings of being by the sea.
They were shot with an M9 (..and a 21mm, or wider, lens..) not because I prefer to use an M9, but because I felt that those were M9-kind-of days ..slow days, with bright sunshine, calm atmosphere, olde-worlde-charm ..so that’s what I chose to use: a “full-frame” camera, with “olde-worlde” charm, something which needs bright sunshine and would feel at home in those places.
Nothing to do with technical capabilities ..but just because it fitted in with the “feel” of the day ..which was “dreamy”.
Count me in as another one who prefers the 50/1.4 over the 0.95, and not just because it gives full coverage over 44×33 which the 0.95 does not – it just makes no sense to pay silly multiples for that extra stop and not use it all the time, but then run the risk of your images looking like everybody else’s who has that lens…
My aim has always been transparency in the representation of an idea – whether that comes through transparency in resolution or an extra perceived dimension is down to both subject and intention. Hence, continual experiments to better understand where to use what…
Nice to see an article about 3D, it’s so rare nowadays. Just a few inaccuracies to point out:
You talk about crossing your eyes, but all the images here are laid out in parallel (left on left, and vice versa), NOT cross-view (left on right, and vice versa). The Magic Eye books used parallel too. If you think you’re crossing your eyes, you may be mistaken, because that would actually give you inverted depth. The eyes need to be diverged for these. Along the same line, the early stereo viewers you mentioned also used parallel, not cross. You also describe an interlaced viewing system that uses polarized glasses, but then say that it alternates at high speed. You’re conflating the two major 3D display technologies here: polarized is passive – interlaced, but with no flickering – while active shutter glasses do flicker, giving full-resolution images.
Thanks for the corrections. I’ve always felt as though I was crossing my eyes, but I guess this must be the relaxed convergence – close focus thing.
I first reduced the pictures to 600 pixels wide, then viewed them by placing a vertical piece of cardboard between my nose and the center of the double image. Presto: 3-D. By this method I’m sure I’m viewing parallel. I also experimented by swapping the two images, and the 3-D is reversed: near things appear far and vice-versa. I didn’t do this for long – it might give a serious headache. To view bigger images I’ll make a system of mirrors, as it’s too hard for me to diverge my eyes.
This brings to mind experiments people have done
https://www.theguardian.com/education/2012/nov/12/improbable-research-seeing-upside-down
inverting the images, and quickly the subject got used to it and was able to function normally. I wonder: if we were to switch the real-life images reaching our brains – i.e. send what should be the right image to whatever part of the brain is supposed to see the left image – could we get used to that, too?
Our bodies are a marvel, and that we do harm to each other is a real shame.
Ever used a twin-lens reflex, like a Rolleiflex? ..or an old 6x6cm camera like a film Hasselblad, or a Kowa Six, or a Bronica?
The image you see, looking down into the viewfinder – which has a mirror beneath, but no pentaprism to swap left for right – is transposed left-to-right. So if you’re following, say, a cyclist travelling right-to-left in front of you, you have to move the camera left-to-right to keep the cycle in the frame!
But people get used to that very quickly. (When I was about seven my Auntie Minnie – a headmistress – gave me (..yet another..) a science book which described, and showed photos of, day old chicks fitted with goggles to turn their vision upside down. They quickly adapted, and then the goggles were removed, and they quickly adapted back again.)
Don’t forget that WE see everything upside down anyway, and our brains effectively turn the images the other way up, so that we can cope with the world. The focusing lenses at the front of our eyes do exactly what camera lenses do: they produce an upside-down image – on the retina at the back of the eye (..in Australia and New Zealand, of course, everything’s the right way up!..) but we cope perfectly with seeing an upside-down world ..without our even realising it!
Neat! No problem whatsoever viewing these in 3D on my iPad, both in portrait orientation (smaller images) as in landscape orientation (larger images). I do have the ability to cross my eyes at will, so perhaps that has helped.
Cool! What do you find to be the largest size/closest distance at which the images still ‘work’? Trying to get an idea of how much convergent ability exists for most people…
Hello Ming Thein,
Thanks for the post about Stereo 3D photos.
I’ve done quite a few cross-viewing-type 3D photos with two cameras over the years. You are right to say the right photo should be swapped with the left for cross-eyed viewing, but you didn’t reverse them. I downloaded your Flickr photo of the back of your nice watch and switched the sides, and then the crossed view worked (on a 15″ Retina screen).
best regards
Martin
Thanks for the tip. Will reverse for round two! 🙂 Interesting some people can view fine, some need to reverse…perhaps we are all confusing parallel/cross viewing techniques. More investigation required…
I’ve read that there is a parallel version of stereo photography, but never came across it. On Flickr you’ll find a lot of examples of cross-eye-viewed stereo photos. When I tried it the first time, or after a long time not doing stereo photography, I found that not changing the sides looks odd – the layers look wrongly stacked. Crossing the eyes is very easy, as we do it anyway when looking at near things, or faces for example.
Just two days ago I assembled a “new” 3D stereo camera set: two very cheap Lumix G6 bodies, two 20/1.7s and two 14-45s. It’s a wonderful set. Works very nicely, even wide and near. I could email you one of the best photos, if you like. Contact me via martin@magus-harps.de.
Thanks – I saw the image but the depth map looks reversed to me – I must be doing something wrong with my eyes!
Then you are probably a parallel viewer.
Awesome, Ming!
I’ve been a “user” of the crossed eye technique for a long time and can pull it off without any problems. Works great with your photos on my iPad. I also loved my ViewMaster experience when I was a kid. Such a cool little device.
The Deutsches Museum in Munich (world’s largest technical museum) has a great collection of high resolution black & white stereoscopic aerial images from the 20th Century mounted in viewing booths. Amazing to look at!
Something not many seem to know: Brian May, legendary Queen guitarist, is a big advocate of stereoscopic photography. He owns the London Stereoscopic Company http://www.londonstereo.com and he just released stereoscopic photographs of Queen’s performances.
Thanks – very cool!
Thanks for very interesting post.
Sincerely,
Anatoly
Pleasure!
No success at all on the monitor, but on my phone (LG V20) I could clearly see the effect on image 5, and *kind of* see the effect on the other images. The challenge is that the other photos have quite a lot of detail that I found distracting me from the image as a whole and weakening the 3d effect – you’re definitely on to something, but you might be better experimenting with less subtle subjects 🙂 Then again, I took off my glasses and I have unequal degrees of short-sightedness in each eye so it might be that focusing on detail is causing one eye to dominate over the other.
As the others say, it’s definitely not crossing your eyes, it’s uncrossing them, focusing further into the distance. My eyes tend to do this anyway when I’m tired, especially with my phone, so it was a very familiar phenomenon – although the first time I’ve seen it deliberately exploited for effect.
I will try again tomorrow when I’ve recovered from spending the day staring at a computer screen, but keep up the experiments – interested to see if you can reproduce the phenomenon on demand!
I agree, it seems to work better on a phone as you can hold it closer – I suspect also because the ‘transparency’ seems a bit better precisely because the information density is higher; on lower resolution screens you can start to see the pixel mask if you go too close, which of course ruins the effect…
I’ve done direct cross viewing for many years and I found I had to download and swap your image positions to get the proper 3-D effect. When Yowayowa Camera Woman featured this in her photo blog, she would offer the view in both cross and parallel modes:
http://yowayowacamera.com/banana/20110620112230.html
Are you parallel or cross viewing? Following that link – it would appear personally that my own technique is parallel viewing.
Cross viewing. I could parallel view when I was young, but not anymore.
Fuji’s FinePix Real 3D W3 was an interesting, but short-lived, venture into portable stereo image capture. The spec is nothing special, and for a compact digital camera it is a little on the large side. Unusually, the W3 comprises two independent digital cameras, which in 2D mode can be set to take different images at the same time. It also shoots 720p video at 30fps. Even back when the W3 was released, its IQ, compared to a normal 2D camera at half its price, left something to be desired. However, on its sharp screen, images and video looked very impressive.
Whilst images can be replayed on a 3D TV, no special kit is needed to view on the camera’s 3.5″ screen once one has adjusted the “parallax” setting. It relies on the autostereogram principle, and once one’s eyes become adjusted, the stereo effect, whilst dependent upon subject matter, can be remarkable. As ever, though, the stereo effect works best (for stills and video) at relatively near to mid distances, and so taking views requires a little forethought to ensure the scene contains something within this range to enhance the depth effect.
Like a good photographer, I dutifully backed up all my 3D images, but later discovered, to my horror, that the proprietary backup hard drive (ClickFree) did not recognise the MPO file which contains all the stereo data, and only backed up one 2D JPEG. I still have the W3, but video is best left to my usual digital cameras, which can shoot in far higher quality and can be viewed on a monitor or standard TV.
I was curious about that camera too – given the lens spacing was fixed (but you could zoom with the folding optics on either side) – how did it adjust for the increasing convergence required as you got closer to the subject? I found this becomes increasingly critical as distance reduces otherwise the effect is lost once the framing starts to drift between L/R frames…
There is a macro provision, but I never tested it out. For the very reason you state, macro results are not really its strength; it’s more of a point-and-shoot for portraits and groups, and relies a lot on the photographer’s understanding of what subject composition makes for a good 3D image when taking pics at greater distances. Obviously, the further away, the less noticeable any 3D effect will be, if it’s there at all.
Because the optical axis of each lens is fixed, and they are parallel, there is a limit to how close one can get and still achieve an acceptable 3D result. The camera requires the subject to be centred, so too close, and it simply doesn’t work at all. But checking the user manual re macro I came across another means of taking stereo images and this partially overcomes this issue.
Just as the optical axis of our eyes converge when we look at very close objects, it is possible to take two images independently and combine them for the 3D image. As an example, the first image is taken with the axis of the R lens turned in slightly, and then the camera is moved horizontally to the left and a second image is taken with the L lens turned in slightly. It’s been seven years since I last used the camera and can’t at the moment lay my hands on it to try all this out. I’ll see if I can locate it and try out its macro capability and report back.
Makes sense – thanks for clarifying. Was secretly hoping they might have included some form of parallax adjustment (as on viewfinders)…
There is a manual parallax adjustment provided (..the normal 3-D ‘AUTO’ parallax adjustment alters the convergence automatically, depending on the distance of your subject as determined by the autofocus sensing).
You press the ∞ (+ and -) buttons on the camera’s keypad, as described on page 13 of the instruction manual ..I’ll have to drag my old 3D W1 out of the cupboard and shoot some pictures with it again! By experimentation, you can choose the best parallax (convergence) to suit the particular photos which you want to shoot. This seems to be an actual physical adjustment, rather than just pushing the image sideways electronically, whereas the post-shooting DISPLAY parallax adjustment seems to be just a variable sliding superimposition, and it doesn’t affect the perspective of the actual separate photos. (The same post-shooting parallax adjustment can also be done on the separate, and larger, Fujifilm “photo frame” display, which can cycle through a whole set of 3-D pictures which you load into it from a memory card. Unfortunately, the coarseness of its display – it’s only 800×600 pixels! – doesn’t make the 3-D effect much more convincing than the lenticular displays for old analogue 3-D photos.)
Perhaps the more useful aspect of the Fuji camera is that, as Terry says, the two lenses can be set individually (..in normal 2-D mode..) so that one shoots a wide view while the other simultaneously shoots a normal or telephoto view, although it has only a maximum 3x zoom. That’s handy for one-off unrepeatable events (weddings and barmitzvahs?) for simultaneous wide and close views of the same event / person / scene, without afterwards cropping the wide view for lower-resolution close-ups.
I can think of just one problem: you can’t frame optimally for both wide and tele, so the wide frame might have the main subject off-centre for optimal composition, and the tele gets you the back of a head 😛
..But like any competent photographer, you take that into account!
David, reading your post it seems we had the same reaction to our Fujis – interesting, but so flawed for displaying the results that we assigned them to a cupboard or box somewhere around the home to gather dust and be forgotten, until Ming’s article. :D)
Viewing was the system’s Achilles heel. The 8″ monitor you refer to not only had poor resolution, but was very expensive – well, here in the UK at least. It was pushing at commercial 3D TV set prices. And we know what has happened to this technology for home users. One technology I didn’t follow, I’m pleased to say.
Look’s like we have a race as to who can find their cameras first. :D)
I’m going out for lunch now – so I’ll have another look for it afterwards!
I was very pleased with its 10 megapixel individual 2-D photos even in lowish light. (I was given the camera for a birthday or Christmas – plus the extra display – so I didn’t personally buy it.) I don’t know if I can embed a photo here, so here’s a link to one: http://edituk.com/Photos_files/%20for%20edituk.jpg ..that’s at ISO 800 and handheld at a ninth of a second, hence the people are blurred, but everything else looks OK ..though you can’t really tell much from such a tiny reproduction.
And yes, I’m usually here in the UK, too. Hasta la vista, babee..
No, haven’t yet found it ..and that’s odd, because I had it in my hand just the other month: I regularly recharge all my batteries, just to make sure that they don’t expire, and I remove batts from older cameras which take button cells (OM2, M7, etc) to make sure that they don’t corrode. But I last really USED it around 2009.
Tobias is correct in saying “..You need to look at them with the eyes in parallel direction..” ..but most people say – without realising what they’re doing – that you should “squint” or “cross your eyes”. The whole point is to do the opposite, and to RELAX your vision, so that you DON’T converge your eyes, and – as Tobias says – “look at them with the eyes … parallel”.
If you touch together the tips of your left and right index fingers about ten or twelve inches in front of your eyes, and then let your eyes RELAX so that they stop converging, then (with luck) a third, stubby little sausage-shaped “double-ended” finger will appear between the two real fingers. That’s the result of the overlap of the left and right eyes’ vision.
That’s what you need to do when viewing stereo images: you do the OPPOSITE of “crossing your eyes” ..that’s just the colloquial description; what you’re actually doing is looking straight ahead, without any convergence. The tricky thing is to keep your focus close while relaxing your convergence. (In normal vision, as you increase the convergence to see things which are close, so your focus muscles squeeze harder to focus closer too. When viewing stereo pairs you should “unsqueeze” the convergence muscles, while still squeezing the focus muscles!)
The left image should be on the left, as you have them here, and the right image on the right. The left eye then sees the photo as shot by the left-hand camera (that is, the image from the left-hand camera position) and the right eye sees the right-hand camera image. They blend together to reveal what your left and right eyes would have seen if they’d been in those camera positions “in real life”.
As an aid to seeing that (..I was shown how to do this by the ‘Third Dimension Society” up in Durham, UK, back in about 1979..) put those fingertips about 10-12 inches away, with the images which you want to look at a few inches beyond your fingers, then let your convergence relax till you get that third “sausage” finger, and then lower your fingers slightly and continue looking, but look at the images beyond the fingers, instead of looking at the fingers. That normally does the trick, till you get used to doing it without using the fingers first.
[You can sometimes see this effect inadvertently if you’re in one of those telephone booths which has repeating sound-absorbent perforations all over the inside; while talking on the phone and paying attention to sounds instead of vision, your eyes tend to relax and drift to a non-converged state, whereupon the repeating pattern (..it could also be on tiles, or any other small, nearby repeating pattern..) may appear in focus, but at a different distance away – as the non-converged eyes give your brain the idea that you’re seeing something far away. That can give the feeling of floating, or of being detached from your actual surroundings.]
Anyway, “crossing your eyes” is a back-to-front description; that’s what it feels like, but in actuality you’re doing the opposite, and “un-converging” your eyes, to keep them parallel.
Your photos here could have done with a little more separation when shooting them, to make the 3-D effect a bit more pronounced. There used to be calibrated 3-D sliding camera mounts available from photo suppliers like Calumet, B&H, etc, but I don’t know if these are still easily available. They show and provide the most appropriate lens separation for various distances of your subject from the camera.
3-D photos with out-of-focus areas, like your top two pictures with the out-of-focus strap, can be a bit weird to look at, and “best practice” is generally thought to be to keep all that you want to look three dimensional in sharp focus ..which can be awkward for close-ups, of course, and may need teeny apertures or focus stacking, or some such. But background-only blur, as distinct from any foreground blur, generally helps to accentuate the 3-D effect of whatever photos are being looked at.
So your pics are correctly positioned (..left pic on the left, right on the right..) but they might be enhanced with a little more separation of the lens positions to get a bit more pronounced effect at such close shooting distances.
Thanks for the detailed explanation – makes sense that basically you’re not trying to cross your eyes (i.e. converge) but focus close, and it’s the overlap of the two images that creates the virtual ‘third’ image. I actually see this on my keyboard quite a bit when fatigued, but if you try to focus on the letters the illusion then breaks.
Focus stacking/DOF – I agree that all-in-focus images or in-focus foregrounds work best, but I wanted to see if we could get the cinematic feeling in 3D from an OOF foreground.
Separation: I tried various distances depending on focus distance, but found that too much separation made them impossible to view. Too little, as you say, has very little 3D-ness to it. There’s probably a sophisticated calculation of some sort to figure out exactly what the separation should be for a given distance and focal length (taking projection into account too) – but my guess is based on extrapolating normal human interocular distance in a triangle convergent on the subject, and assuming you’re in the 60-80mm-e FL range or so (given ‘focused’ or ‘concentrated’ vision). I think some had too much (difficult to view) and others had too little (no 3D effect) – and I suspect viewing distance and the size of the final image pair also play into it. More experimentation required…
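One widely quoted rule of thumb – which I haven’t verified against these shots, so treat it as a starting point rather than gospel – is the ‘1-in-30’ rule: make the stereo base about a thirtieth of the distance to the nearest part of the subject,

\[
b \approx \frac{d_{\text{near}}}{30}
\]

i.e. roughly 1cm of camera separation for every 30cm of subject distance with a normal-ish lens.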
Ming, these are wonderful, viewed on my Eizo screen at maybe twice the long-axis length you recommended (sitting back from the screen, of course – perhaps the same viewing angle). I’m using the x-eyed technique that I’ve practiced for years (so the image comes into focus easily for me). I’ve long thought that myopia makes this technique a bit easier to master.
But I wonder about your suggestion to remove spectacles. Many myopes have some degree of astigmatism (according to my ophthalmic surgeon). I certainly do. So removing specs makes it more difficult for me to achieve the stereoscopic effect.
Still, I love the views into the mechanism. I’m not a watch nut. But I love “Mechanism” and the Ming 19.01 has an abundance of shiny interest.
Best wishes, P.
Thanks – I definitely found it easier to view without my spectacles, but my myopia doesn’t have astigmatism (so that may have something to do with it, too). You’re probably right in that if you do have astigmatism, best to keep them on! 🙂
Thanks for reviving stereoscopy, I have always loved it. But contrary to your description in the article, you have actually put the images in parallel viewing mode (left eye has to look at the left picture), so crossing the eyes (squinting inward) doesn’t work with these.
Perhaps I got the description or the left-right flip wrong – but I can definitely see the stereo image…
I could see the stereo image. Neat.
Glad it wasn’t just me! 🙂
Yes, you can see SOME effect, but not the desired one. Just by looking at the pairs (without squinting), you can see that the left images have been taken more from the left and the right ones from the right. You need to look at them with the eyes in a parallel direction, which can be difficult or impossible if the images are too big / too far apart. But it works.
Cheers
Should they be reversed, so you view the left image with the right eye and vice versa?
For me, both viewing methods ( I I vs X ) are alright (as long as the stereo base isn’t too wide – I can’t squint outward). It’s just that the description in the text doesn’t fit with the images. If you squint inward at images in the presented layout (left-eye image on the left, right on the right), the eyes see the wrong pictures and the depth information is oddly reversed (closer objects feel farther away and vice versa). However, some may prefer I I over X (e.g. Terry B below).
I have to admit that I have no idea what my eyes are actually doing. I think I need to video myself while viewing, as obviously a mirror wouldn’t work – you’d change your focus by looking at it while trying to observe yourself! 😛
Tobias, agreed. Ming’s is the first reference I’ve seen to the need for crossing eyes or reversing the images. We call it “stereo” but it is actually emulating binocular vision, hence the need to keep the L-R images in their correct spatial positions as we view, looking straight ahead. Actually, if one has one of the Victorian 19th-century style of hand viewers, one will see that the two views are separated by a partition to PREVENT the left eye seeing the right image and vice-versa. This is also why we can see “stereo vision” when looking through a pair of binoculars which are set for the correct pupillary distance, at which point the window we see is actually a single circle, not the comic cinema rendition of two overlapping circles.
Ming, to answer your question, no, the images should not be reversed.
Oddly they don’t work for me if you cover the middle with a partition/ paper/ divider – so I must have gotten the setup wrong somewhere…
Ming, are you viewing the images on your monitor that you reproduced above within the text? I’m not quite sure how this relates to the more traditional means of viewing stereo via prints in pairs. This would be why I referred to a divider. The viewer contains lenses to focus and view and offers eye-strain eliminating viewing.
Correct, I am viewing these at about 4″ wide per side, 1ft away (plus minus a bit). Smaller and closer or farther and larger works too. No divider.