(Featured image; Photograph taken by friend; all rights transferred to User:Mdd4696, CC BY-SA 2.5 <https://creativecommons.org/licenses/by-sa/2.5>, via Wikimedia Commons)
Whenever we use a camera to record a scene, we are transforming analog signals. Each element introduces yet another error, albeit small.
The perfect is the enemy of the good
It’s an apt saying. The follow-up question is:
What is good enough in a complex system?
Errors propagate in complex systems. Redundancy has its benefits. It’s natural. And there are optimal settings to achieve reasonably accurate results.
Errors and more errors
We are carbon-based life-forms. Our body continually produces copies of DNA – almost perfect copies. But not quite. That’s why… Let’s not get morbid.
Scanning film results in an imperfect copy. Even just making an image on a digital sensor (or film) is imperfect. By its very nature, a digital sensor samples the image, producing a fixed number of pixels at each scanning resolution. And there are limits beyond which higher resolutions will degrade the image (think of lenses and stopping down further and further). There are sweet spots.
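As a concrete illustration of sampling, here is a minimal sketch of how many pixels a scan yields at a given resolution in dpi. It assumes a standard 36 × 24 mm 35mm frame; the function name is my own.

```python
# Sketch: pixel dimensions of a 35mm frame (36 x 24 mm, assumed)
# scanned at a given resolution in dots per inch.
MM_PER_INCH = 25.4

def scan_pixels(width_mm, height_mm, dpi):
    """Return (width_px, height_px, megapixels) for a scan at `dpi`."""
    w = round(width_mm / MM_PER_INCH * dpi)
    h = round(height_mm / MM_PER_INCH * dpi)
    return w, h, w * h / 1e6

for dpi in (2400, 3200):
    w, h, mp = scan_pixels(36, 24, dpi)
    print(f"{dpi} dpi -> {w} x {h} px ({mp:.1f} MP)")
```

At 2400 dpi the frame comes out to roughly 7.7 megapixels, and at 3200 dpi roughly 13.7 – a fixed number of pixels at each scanning resolution, exactly as described.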
The sweet spot – an example
The only digital camera I have is an iPhone. But I can demonstrate the sweet spot easily using a digital scan.
The next two images are close-ups of the result of scanning a b/w negative at the maximum resolution [3200dpi] and at a lower resolution [2400dpi]. Both scans include a portion of the unexposed film, with parts devoid of film markings. While the negatives were converted to positives and “colour corrected”, there was no sharpening. The scans were output as RAW 16-bit greyscale. The colour correction included removing the effect of the colour cast of the film base, inversion, cropping to the exposed portion, and then adjusting levels so that the brightest portion of the image was white (255) and the darkest was black (0).
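The levels adjustment described above amounts to a linear stretch of the tonal range. This is an illustrative sketch of that operation, not the scanner software’s actual algorithm:

```python
# Sketch: linearly stretch a greyscale image so its darkest value
# maps to black (0) and its brightest to white (255).
import numpy as np

def stretch_levels(img, out_max=255):
    """Map the image's minimum to 0 and its maximum to `out_max`."""
    img = img.astype(np.float64)
    lo, hi = img.min(), img.max()
    if hi == lo:                      # flat image: nothing to stretch
        return np.zeros_like(img, dtype=np.uint8)
    out = (img - lo) / (hi - lo) * out_max
    return out.round().astype(np.uint8)

# A scan whose tones only span 40..200 gets stretched to 0..255.
scan = np.array([[40, 120], [160, 200]], dtype=np.uint16)
print(stretch_levels(scan))
```

The darkest input (40) becomes 0 and the brightest (200) becomes 255, with the in-between tones spread proportionally.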
And zooming in, it is clear that the 2400dpi scan is slightly sharper than the one at 3200dpi. This is expected and documented in many places on the web.
So more pixels are not necessarily better – when by “better” we mean “sharper”, and by “sharper”, more line pairs per mm. The real question is: how sharp does the image have to be? That depends on the output medium and the intent. After all, a photograph is an interpretation of a scene. In Ansel Adams’ words:
“The negative is the equivalent of the composer’s score, and the print the performance.”
Two other examples demonstrate the problems in “capturing” an object with a “camera”. These examples are relevant to using a camera to create images either in the digital realm (digital camera) or the analogue realm (film camera), as well as to scanners.
Photocopying is not perfect. Photocopy a letter, photocopy the photocopy, and repeat. Eventually the text in the letter will be a blur. The reason: errors propagate. And they propagate even within a single system.
Consider photocopying a page in a book. How and where the book is placed matters when copying a particular page:
- Noise. If the scanner glass has dust or fingerprints in the area being scanned, they will affect the photocopy.
- Out of focus. If the page does not lie flat on the scanner glass, parts of the page will be out of focus and darker (underexposed).
- Missing data. If vital parts are too close to the edge, they may not appear in the photocopy. If the page is not positioned properly on the scanner glass, the photocopy may be missing parts of the page.
- Skewed. If the book is crooked, the copy will be crooked.
- Light pollution. Since the cover cannot be closed all the way, light will be scattered and, as a result, interfere with the “rendering”, resulting in a muddied copy – a grey background rather than a white one.
- Blurring. If the cover is held down by hand, the book may move before or during the scan, resulting in a skewed copy or even a blurred one.
And so on …
In the past, slides were copied with cameras. This is highly relevant to digitising negs; in fact, it is a precedent. It was a laborious process, requiring:
- A film camera.
- A macro lens capable of 1:1 reproduction.
- Slide reproduction film.
- The slide, returned from the lab in its holder.
- A copy stand to hold the camera above the slide, ensuring that the film plane and the slide are parallel.
- A holder to keep the slide square and flat.
- A light source to illuminate the slide evenly at the proper colour temperature.
- A fairly dark space to keep stray light from interfering with the illuminated image.
And then the operator had to focus the camera precisely.
And then there’s dust. Dust in the air. On the slides. In the lens.
For slide copying the variables are: the resolution and colour fidelity of the film, the aperture setting to optimise resolution, the fidelity of the light source (even illumination at the correct colour temperature), precise focusing, the lens, vibration, and the skill of the operator. It was complicated. Each element introduced an error. And errors propagated, becoming worse with each succeeding step.
The connection to digitising negs is obvious. Best practices for slide copying have their equivalents in digitising negs, and also in photographing small objects at life-size or larger, among other subjects.
That camera may have 50 megapixels, but that is the optimal, theoretical number, which depends on every other element being perfect and introducing no errors. But they will introduce errors, of course.
Would you rather buy something that you can use out-of-the-box or would you prefer to construct it out of elements that you choose, knowing that each element will need to be properly vetted? Think of the time and the agony of choosing. And the regret. Buyer’s remorse. That certainly applies to a digital scanning setup. But it also applies to ordinary in-the-field photography. Think lens, camera, sensor/film.
How can we evaluate how good even the camera can be?
So the stated resolution is 50 megapixels. But that is only one element in the chain of error propagation: lens, filter, sensor, image-processing system, storage device – a closed system. The stated resolution is a theoretical upper bound, the most one can expect. After the lens does its work, it is less. After the camera compensates for the fact that each pixel registers only red, green, or blue (a process known as demosaicing, or the less error-prone pixel shifting), it is less still.
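One way to picture the chain is to treat each stage as passing on only a fraction of the detail it receives, so the system factor is the product of the per-stage factors. The numbers below are invented for illustration, not measured values:

```python
# Sketch: detail surviving each stage of the imaging chain.
# The per-stage factors are hypothetical, for illustration only.
stages = {
    "lens":        0.85,
    "sensor":      0.90,
    "demosaicing": 0.80,
    "sharpening":  0.95,
}

system = 1.0
for name, factor in stages.items():
    system *= factor
    print(f"after {name:12s}: {system:.2f} of the theoretical maximum")
```

With these made-up factors the chain delivers only about 58% of the theoretical maximum – and no stage on its own looked especially bad.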
Line pairs per mm [lp/mm]
One measure of image resolution is the number of line pairs per mm, that is to say the number of lines that can be distinguished from each other, rather than melding into a blur.
The actual resolution of 35mm original camera negatives is the subject of much debate. Measured resolutions of negative film have ranged from 25 to 200 lp/mm, which equates to a range of 325 lines for 2-perf to (theoretically) over 2,300 lines for 4-perf shot on T-Max 100. Kodak states that 35mm film has the equivalent of 6K resolution horizontally, according to a Senior Vice President of IMAX.
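A rough conversion from line pairs per mm to pixel counts, assuming the Nyquist minimum of two pixels per line pair across an assumed 36 mm-wide frame, shows how such figures map onto “K” ratings:

```python
# Sketch: line pairs per mm -> equivalent horizontal pixel count,
# assuming the Nyquist minimum of 2 pixels per line pair and a
# 36 mm-wide 35mm frame.
def equivalent_width_px(lp_per_mm, frame_width_mm=36):
    return lp_per_mm * 2 * frame_width_mm

for lp in (25, 80, 100, 200):
    print(f"{lp:3d} lp/mm -> {equivalent_width_px(lp):6d} px across the frame")
```

At 80 lp/mm the frame width works out to 5,760 px, in the neighbourhood of Kodak’s “6K” figure; at 200 lp/mm it would be 14,400 px, which shows why the debate matters.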
So if we knew the resolution of the sensor with a perfect lens and used the line-pair metric, we would still only know the upper bound. But that is not what we would observe in practice.
And doubling the number of pixels does not double the resolution; the gain is less than 1.5x [hint: the square root of 2, 1.414…]. To double the resolution, the number of pixels has to be quadrupled. So twice the resolution of a 25-megapixel sensor requires a 100-megapixel sensor. Pricey. And to what end?
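The square-root relationship can be checked in a couple of lines (the function name is my own):

```python
# Sketch: linear resolution scales with the square root of the pixel
# count, so doubling resolution means quadrupling the megapixels.
import math

def resolution_gain(mp_old, mp_new):
    """Linear (line-pair) resolution gain from a megapixel increase."""
    return math.sqrt(mp_new / mp_old)

print(resolution_gain(25, 50))    # doubling the pixels: ~1.41x, not 2x
print(resolution_gain(25, 100))   # quadrupling the pixels: 2x
```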
What is the true resolution of that digital camera? How many line pairs can it distinguish? Or does it anti-alias to eliminate the artefacts of demosaicing, effectively reducing the resolution of line pairs per mm? Do we have to look under the hood?
The measures of resolution
- Number of pixels – a rather crude measure which reveals very little about the acutance of the recorded image. Pixels per mm (or per inch) is a measure of how precise it is, but only a crude one. It won’t account for common anomalies like smearing, colour shifts, colour fringing, etc.
- Spectral resolution – the ICC profile or colour space; these are device/media dependent. It is actually considerably more complex and includes ways of mapping from one space [display] to another (e.g. printing on a specific paper).
- Spatial resolution – in general, this is the distance between independent measurements. For images, it is the “resolving” power of the medium, usually expressed in line pairs per mm: the number of lines in a millimetre that can be distinguished from one another, usually measured with a standard resolution test target.
At some point the line pairs dissolve into a blob, since they can no longer be distinguished – just like photocopying a photocopy and then photocopying the result ad nauseam. Copies of such a target are available as transparencies or on paper, so it can be scanned as a film negative, scanned as a reflective document, or reproduced using a lens to determine the spatial resolution of the lens/film/sensor. There are other ways of measuring this, for example the modulation transfer function (MTF), which is less reliant on the visual perception of the observer and the contrast of the media.
- Colour depth – the number of bits per colour channel, in RGB space for displays/sensors and some variant of CMYK space for printed output. For digital cameras and scanners, the maximum colour depth is typically 16 bits per channel: 16 bits for greyscale (32 bits with an IR (infrared) channel) and 48 bits for colour (64 bits with an IR channel). JPG files have at most 8 bits per channel: 8 bits in greyscale and 24 bits in colour.
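The way line pairs dissolve into a blob, as described under spatial resolution above, can also be simulated: blur a bar target and use Michelson contrast as a crude stand-in for the MTF at that frequency. The blur and the frequencies are illustrative, not a model of any particular lens:

```python
# Sketch: why line pairs "dissolve into a blob". A bar target at a
# given spatial frequency is blurred; Michelson contrast
# (max - min) / (max + min) stands in crudely for the MTF there.
import numpy as np

def bar_target(period_px, n_periods=8):
    """Alternating black/white bars, one line pair per `period_px`."""
    half = period_px // 2
    return np.tile([0.0] * half + [1.0] * half, n_periods)

def blur(row, passes=3):
    """Repeated 3-tap blur standing in for lens/sensor softness."""
    for _ in range(passes):
        p = np.pad(row, 1, mode="wrap")
        row = 0.25 * p[:-2] + 0.5 * p[1:-1] + 0.25 * p[2:]
    return row

def contrast(row):
    return (row.max() - row.min()) / (row.max() + row.min())

for period in (16, 8, 4, 2):
    c = contrast(blur(bar_target(period)))
    print(f"period {period:2d}px: contrast {c:.2f}")
```

Wide bars survive the blur with full contrast; at the finest spacing the contrast collapses to zero – the line pairs have melded into a uniform grey.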
But there’s a wrinkle – a lens does not have the same acutance over the entire sensor/film. So it depends on where you measure the line pairs/mm.
There are other considerations even with perfect lenses. For example, how will the output be viewed? From what distance, and under what lighting conditions? Perhaps upping the resolution will have no effect on the final image. Then you’ll be paying for resolution that you won’t necessarily use. Add in the variability of the human visual system and… you get the idea.
This is tedious. So many questions. They keep popping up. Like the heads of a hydra.
My solution to all this
Just pointing out the issues doesn’t mean I’ve solved them, or necessarily made peace with them. I’ve taken another route. Having spent so much time working with computers – graphics, quality assurance, user experience, and more – I prefer the analogue approach: the slow, contemplative nature of film, the wabi-sabi. But I do tweak the images with computer programs; I’m not that much of a Luddite. And yes, I have made darkroom prints and still develop b&w at home. But I much prefer the digital print-room to the wet darkroom – the noxious fumes, the setup, and other impediments. It seems so crude to me now. While the computer provides more tools to manipulate images, I prefer to use as few of them as possible: curves, spotting/healing, cloning, etc. I do not want to fall into the endless hole of “just one more fix” until the image becomes soulless and inert.