Computational photography: what ‘format’ is it?

Gratuitous header; moment of enlightenment.

One of the unavoidable buzzwords of the last couple of years has been ‘computational photography’. Besides sounding slightly oxymoronic, and perhaps insulting to the ‘real’ photographer who presumably represents what they see rather than conjuring objects into (or out of) existence that aren’t physically there, the reality is that it has been unavoidable since the start of the digital era. Everything that requires photons to be converted into electrical signals and back into photons again (whether emitted by a display or reflected off a print) must be mathematically interpreted and altered in some form before output. There is no avoiding it: Bayer interpolation, in-camera JPEG conversion, saving to any file format, conversion to a print color space – at every stage a ‘computation’ has to be performed to translate the data. Hell, there’s already an implicit computation in the analog-to-digital stage (although arguably photons are already ‘digital’, since they represent discrete quanta of energy, but that’s a discussion for another time).

However, what I’d like to discuss today* is something one step further down that road, following on from the previous posts on format illusions: in light of the broader possibilities of computational photography, what does ‘format’ even mean?

*I.e. excluding subject recognition for tracking, depth mapping, simulated shallow-DOF transitions and the like for the time being; we’ll revisit those later.
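To make the point concrete: even Bayer interpolation, the very first step in producing a viewable image, is pure arithmetic the photographer never sees. A toy sketch of the idea follows – a hypothetical RGGB layout and the simplest neighbour-averaging interpolation, not any camera maker’s actual pipeline:

```python
# Toy demosaic sketch: each photosite records only ONE colour; the other two
# channels at that site must be computed. Assumes a hypothetical RGGB layout
# (even rows alternate R,G; odd rows alternate G,B), for illustration only.

def bayer_channel(row, col):
    """Colour the sensor actually measured at (row, col) in an RGGB mosaic."""
    if row % 2 == 0:
        return 'R' if col % 2 == 0 else 'G'
    return 'G' if col % 2 == 0 else 'B'

def interpolate_green(mosaic, row, col):
    """Estimate green at a red/blue site by averaging adjacent green sites."""
    candidates = [(row - 1, col), (row + 1, col), (row, col - 1), (row, col + 1)]
    greens = [mosaic[r][c] for r, c in candidates
              if 0 <= r < len(mosaic) and 0 <= c < len(mosaic[0])
              and bayer_channel(r, c) == 'G']
    return sum(greens) / len(greens)

# A 2x2 mosaic: the red site reads 0; its green neighbours read 80 and 40.
mosaic = [[0, 80],
          [40, 0]]
print(interpolate_green(mosaic, 0, 0))  # 60.0 – a green value that was never measured
```

The ‘green’ printed at the end was never recorded by any photosite: it is invented by computation, and two-thirds of every pixel in a conventional Bayer-sensor image is invented the same way.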
