Light as waves – diffraction blur
Photographers are aware that reducing the aperture beyond a certain point introduces a blurring of the image due to diffraction. This becomes significant at a certain f-number, whatever the focal length of the lens.
You might expect this f-number to be independent of the size of the camera. It is not; in fact the problem gets worse as the camera gets smaller. Diffraction is a manifestation of the wave nature of light.
Light as particles – photon noise
A future article will look at photon noise, an example of light behaving as particles.
Diffraction – what happens when light passes through a hole
The problems caused by diffraction have long been familiar to astronomers. The closest thing in reality to a “point source” of light is a star, yet the image formed by a telescope or camera will never be a single point of light. This is due to diffraction. Since all images, whether landscapes, portraits or anything else, are simply aggregates of points of light, the effects of diffraction apply equally across the whole range of photographic subjects.
This effect is explained by the wave nature of light. When a wave meets an obstacle such as the aperture stop in a camera, interference effects occur; they are more pronounced when the diffracting object has dimensions similar to the wavelength of the light.
When a point source of light is photographed the variation of the intensity of light on the film or sensor plane is as shown above. The pattern gets wider as the lens aperture is reduced. This might seem counter-intuitive.
The shape of the curve (in any vertical plane through the maximum) is the square of a sinc function:
Intensity = (sinc θ)² = (sin θ / θ)²
The bright area in the centre is known as the Airy disc, named after Sir George Biddell Airy, an English mathematician (1801 – 1892), and its diameter (at the first minimum) is given by:
D = 2.44 λN
Where λ = the wavelength of the light and N = the f-number.
So, as an example, for green light with a wavelength (λ) of 550nm (in the middle of the visible range) and an aperture of, say, f/8 (i.e. N = 8), we have:
D = 2.44 x 550 x 8 = 10,736 nanometres, approximately 10 μm (micrometres, formerly known as microns)
At an aperture of f/8 the image of a point of light will be a disc of diameter 10 μm (= 0.01mm). This is independent of all other factors, e.g. the focal length of the lens, the size of the camera.
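That arithmetic is easy to script. Here is a minimal Python sketch (the function name is my own, not from the article):

```python
def airy_diameter_um(f_number, wavelength_nm=550):
    """Airy disc diameter D = 2.44 * lambda * N, converted from nm to micrometres."""
    return 2.44 * wavelength_nm * f_number / 1000

# Green light at f/8, as in the worked example above
print(f"D = {airy_diameter_um(8):.1f} micrometres")
```

Note that the focal length never appears: for a given wavelength the diameter depends only on the f-number.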
Note that for a particular wavelength of light the diameter of the Airy disc depends only on the aperture and is proportional to the f-number, N. Smaller apertures result in larger Airy discs, i.e. more softening of the image.
Diffraction limited aperture defined relative to pixel size
As we are considering digital cameras, most of which have a rectangular array of pixels, it is interesting to superimpose this diffraction pattern on a pixel array drawn to the same scale. In the graphs which follow the intensity of light falling on the sensor is shown as a blue line and the effect on the individual pixels of the sensor is shown in red. A spreadsheet is used to produce the graphs. The red line is calculated by integrating the intensity across each pixel.
The model is one-dimensional, whereas the reality is two-dimensional: the Airy pattern has circular symmetry, as shown in the previous figure, and the pixels are on a square grid. In what follows I base my conclusions on this one-dimensional model and ignore the refinements that a two-dimensional model might provide. There is also the point that in a Bayer matrix of pixels the different primary colours have different effective pixel pitches. If, however, we are considering light intensity and ignoring colour, all pixels contribute and the basic pixel pitch is the relevant figure. In any case, whatever difference a two-dimensional model makes, it only introduces a constant factor into the calculation, so comparisons between cameras are not affected.
Disclaimer: These graphs are calculated from the manufacturers’ published data and do not represent the results of tests on products. Any errors in calculation are entirely mine. Although the graphs are captioned with specific model numbers, the only thing that affects their appearance is the ratio of the f-number (N) to the pixel pitch (the distance from the centre of one pixel to the centre of the next nearest pixel).
The first graph shows the situation where the image of a point source (at an aperture of f/2.8) falls entirely within the width of a pixel, so diffraction, though present, is clearly not significant. In this case sharpness is limited by pixel pitch. The x-axis scale indicates the positions of pixels: five pixels are plotted, centred on x = -4, -2, 0, 2 and 4, i.e. the pixel pitch is scaled to 2 units on the x-axis in every graph.
In the second graph we have the other extreme: as a result of diffraction, the image is spread over more than five pixels. This is at f/32, often the smallest aperture available on lenses intended for 35mm cameras. In this case sharpness is seriously limited by diffraction.
It follows that for a particular pixel pitch there is a point between these two extremes where the effects are comparable. I refer to this as the “Diffraction limited f-number”.
I suggest that this third graph represents such a point.
My proposed criterion is “… when the intensity recorded by the adjacent off-centre pixels is 20% of that of the centre pixel.” Coincidentally this corresponds very closely to the more practical criterion:
The “Diffraction limited f-number” is equal to twice the pixel pitch in micrometres
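As a cross-check on the 20% criterion, the one-dimensional model can be sketched in a few lines of Python. This is my own reconstruction, not the original spreadsheet: the sinc² profile is scaled so that its first zero falls at the Airy first-minimum radius of 1.22λN, and the intensity is integrated across each pixel with Simpson's rule.

```python
import math

WAVELENGTH_UM = 0.55  # green light, micrometres

def intensity(x, n):
    """One-dimensional sinc^2 profile whose first zero sits at the Airy
    first minimum, i.e. at a radius of 1.22 * lambda * N from the centre."""
    first_zero = 1.22 * WAVELENGTH_UM * n  # micrometres
    if x == 0:
        return 1.0
    t = math.pi * x / first_zero
    return (math.sin(t) / t) ** 2

def pixel_signal(centre, pitch, n, steps=2000):
    """Integrate the intensity across one pixel using Simpson's rule."""
    a = centre - pitch / 2
    h = pitch / steps
    total = intensity(a, n) + intensity(a + pitch, n)
    for i in range(1, steps):
        total += (4 if i % 2 else 2) * intensity(a + i * h, n)
    return total * h / 3

pitch = 6.41   # e.g. Canon 5D Mk II, micrometres
n = 2 * pitch  # the proposed diffraction-limited f-number
ratio = pixel_signal(pitch, pitch, n) / pixel_signal(0.0, pitch, n)
print(f"adjacent/centre pixel ratio at f/{n:.0f}: {ratio:.2f}")
```

With N set to twice the pixel pitch, the adjacent pixel records close to 20% of the centre pixel's signal, in line with the criterion above; the ratio depends only on N divided by the pitch, so the same figure holds for any camera.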
Consider a full frame SLR, in this case the Canon 5D Mk II with a pixel pitch of 6.41 micrometres. Twice the pixel pitch is 12.8, so at apertures smaller than about f/13 diffraction will have a significant effect. At larger apertures the effect will be negligible, but this is not a binary on/off effect; it is a smooth transition.
Other examples are given in the table:
If your camera does not appear in the table you can calculate its pixel pitch as follows:
- Suppose you are given the sensor size, 36 x 24 (mm), and the total number of pixels, 24.3Mp. It is likely that the image on the sensor accounts for a round 24Mp of these.
- The aspect ratio is 36:24 = 3:2.
- Consider the image area to consist of 6 squares, in 2 rows of 3 (to provide the aspect ratio).
- Each square therefore comprises 4Mp (24÷6) i.e. 2000 x 2000 pixels.
- The whole sensor must therefore be (3 x 2000) x (2 x 2000) = 6000 x 4000.
- On the long side a row of 6000 pixels covers 36mm, so pixel pitch = 36mm÷6000 = 6 micrometres.
(This example uses the Sony a7 data).
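The same steps can be condensed into a short Python sketch. The square-root shortcut replaces the “six squares” reasoning but gives the same answer, and works for any aspect ratio:

```python
import math

def pixel_pitch_um(width_mm, height_mm, megapixels):
    """Pixel pitch from sensor dimensions and pixel count, assuming
    square pixels filling the stated sensor area."""
    pitch_mm = math.sqrt(width_mm * height_mm / (megapixels * 1_000_000))
    return pitch_mm * 1000  # convert mm to micrometres

# The Sony a7 figures used in the worked example: 36 x 24 mm, 24 Mp
pitch = pixel_pitch_um(36, 24, 24)
print(f"pixel pitch = {pitch:.1f} micrometres, limiting f-number = f/{2 * pitch:.0f}")
```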
The right hand column shows the predictions of the calculator on the Cambridge in Colour website, http://www.cambridgeincolour.com/tutorials/digital-camera-sensor-size.htm. I don’t know how these are calculated, but they are in reasonable agreement with my values.
The results should be interpreted with caution. The Pentax K-3 (23.35 Mpix) shows a “Limiting f-number” of f/8, whereas the much earlier Pentax K10D (10.2 Mpix) will go to f/12 – nearly “as good” as the Canon 5D Mk II (21.1 Mpix) at f/13.
The value f/12 for the K10D arises because of its limited pixel density. It has the same size sensor (1.5 crop) as the K-3 but the K-3 shows diffraction effects sooner (i.e. at a larger aperture) because of its higher pixel density. This is not a fault of the K-3, rather a fault of the K10D which does not have sufficient pixel density to see the diffraction which is present.
The difference between the K-3 (f/8) and the Canon 5D Mk II (f/13) – similar total pixel counts – can be explained entirely by the crop factor. The 5D is a full frame camera (36mm x 24mm) whereas the K-3 is a 1.5 crop, so an Airy disc of the same diameter is 1.5x more significant relative to the frame.
So what does this mean for compact cameras?
For the SLRs considered, a wide range of apertures is available before diffraction starts to limit sharpness, but for the compacts it is a rather different story. Consider the G9: its maximum aperture ranges from f/2.8 at the shortest focal length to f/4.8 at the longest.
But we have established that to avoid significant diffraction we should use apertures wider than f/3.8. Over a wide range of focal lengths this is simply not possible; diffraction blur will be significant at all available apertures.
Of course the megapixel race has one objective – to sell cameras. In my opinion it is difficult to justify more than about 8 megapixels in a camera with a sensor of only 7.6 x 5.7 mm.
A practical demonstration of diffraction blur
A portion of the image, the red square, from the centre of the frame is shown at 100% below:
The photographs were taken with a Pentax K5 IIs (16Mp) and a Sigma 17-70mm zoom at 70mm, ISO 400, mounted on a tripod and using the 2s shutter delay to avoid camera shake. Exposures were made at apertures of f/4.5 and f/40.
At f/4.5 the calculated Airy disc diameter = 1.25 pixels.
At f/40 the calculated Airy disc diameter = 11.2 pixels.
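These figures follow directly from the Airy formula. My assumed pixel pitch for the K5 IIs is about 4.8 micrometres (derived from the published sensor size and pixel count, so the results may differ from the text by a rounding step):

```python
def airy_diameter_um(f_number, wavelength_nm=550):
    """Airy disc diameter D = 2.44 * lambda * N, in micrometres."""
    return 2.44 * wavelength_nm * f_number / 1000

PITCH_UM = 4.8  # approximate K5 IIs pixel pitch (my figure, from published specs)
for n in (4.5, 40):
    d = airy_diameter_um(n)
    print(f"f/{n:g}: Airy disc {d:.1f} micrometres = {d / PITCH_UM:.2f} pixels")
```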
The 200 x 200 pixel (100%) samples measure about 7mm wide on the printed page.
The raw files were processed in RawTherapee using the AMaZE demosaicing algorithm with two steps of false colour suppression. Adjustment of white balance was necessary (I used a fluorescent light source) but no sharpening or noise filtering was applied.
The aperture of f/40 might seem rather severe but I wanted to be sure to see an effect!
Diffraction limited aperture defined relative to visual acuity
The long established criterion for the diffraction limited aperture is based on the question: “How sharp does a 10” x 8” print need to be so that, when viewed from a distance of 10”, it looks perfectly sharp to a person with normal vision?” I won’t repeat the calculation here, but the conclusion is that the ‘circle of confusion’ (the image of a point source) should not exceed 0.025 mm in diameter on the film plane of a 35mm camera (36mm x 24mm format). This allows for a x8 enlargement.
The first point to consider (before diffraction) is, does our digital camera have sufficient resolution as determined by pixel pitch?
If we apply the criterion to a full frame digital camera the ‘circle of confusion’ should equate to, say, 2 x pixel pitch, requiring a pixel pitch of 12.5 micrometres or less. All the full frame cameras in our table satisfy this.
If we now calculate the f-number from the Airy disc formula, equating the diameter of the Airy disc to the diameter of the circle of confusion, we have:
D = 2.44 λN
so N = D ÷ (2.44λ)
N = [0.025 ÷ (2.44 x 550)] x 1,000,000
N = 18.6
We need the factor of 1,000,000 on the top line because D is in mm (10⁻³ m) and λ is in nm (10⁻⁹ m).
This suggests that we need not worry about diffraction until we stop down beyond f/18.
However, many of us print larger than this for competition and exhibition, typically A3 (420mm x 297mm), or if you prefer, 16.5” x 11.7”.
If we require the same level of sharpness as the 10” x 8” print referred to above (not unreasonable, as someone will always look at a large print from too close) I propose we scale our f-number accordingly:
16.5” ÷ 10” = 1.65
18 ÷ 1.65 = 10.9, say f/11.
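The whole calculation fits in a few lines of Python. This sketch carries the unrounded f/18.6 figure through, so it lands at f/11.3 rather than 10.9; both round to the same f/11:

```python
COC_MM = 0.025       # circle of confusion for the 36 x 24 mm format
WAVELENGTH_NM = 550  # green light

# Equate the Airy disc diameter to the circle of confusion: N = D / (2.44 * lambda)
n_print_10x8 = COC_MM * 1_000_000 / (2.44 * WAVELENGTH_NM)
print(f"10 x 8 inch print: f/{n_print_10x8:.1f}")

# Scale for an A3 print (16.5 inches on the long side) viewed at the same distance
n_print_a3 = n_print_10x8 / (16.5 / 10)
print(f"A3 print: f/{n_print_a3:.1f}")
```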
So we have demonstrated a criterion for the diffraction limited aperture based only on the sensor format (i.e. physical size) and the required resolution on the final print:
The Diffraction limited aperture for a full frame (36mm x 24mm) camera is f/11
For a camera with a 1.5 crop factor this f-number reduces proportionately, to about f/8.
Comparison of the two criteria
The criterion relating the diffraction limited aperture to pixel pitch, rather than to a specific resolution in a final print, aims to make good use of the available pixel resolution regardless of the final product. However, as pixel counts increase this could become too demanding, as it already has for some compact cameras (resolution being limited by diffraction, rather than by insufficient pixels, over much of the aperture range). The older criterion, which pre-dates digital cameras, is still valid, and whether you demand that extra stop (f/18 rather than f/11) is up to you.
Diffraction blur – does it matter?
Yes, if you are an astronomer. If you are a photographer interested in documentary photography, particularly areas like architecture or natural history, then you will want to record your subject as accurately as possible with the maximum amount of detail. If you are not constrained in this way then understanding diffraction blur and how to control it gives you the artistic freedom to use it creatively. Personally, I prefer to shoot for sharpness, and blur in post-processing – you can always add blur, you can’t remove it. (So-called sharpening tools in processing applications can make an image look sharper but cannot retrieve lost information or create information from nothing).
Perhaps the more familiar effect, from an artistic point of view, of varying the aperture is the effect on depth of field and here the situation is exactly opposite, choosing the appropriate aperture at the taking stage is always preferable to attempting to create a blurred background in post-processing.
Don’t forget that practical lenses have defects (axial chromatic aberration, spherical aberration, geometric distortion, etc.) which degrade the image at the larger apertures. As we stop down, these effects reduce gradually while the diffraction blur (not a lens defect but a natural phenomenon) increases, and it is usually accepted that a lens’s optimum aperture is about 3 stops down from its maximum, usually meaning f/8 or f/11.
As the megapixel count gets larger we could get overly concerned about diffraction blur preventing us making full use of the available resolution. Relating the diffraction limited aperture to pixel pitch, as I have done, could push us to use larger apertures where common sense might dictate otherwise. It is another situation where you should learn the rules then learn to break them.
A final thought: As the race to infinity continues we must reach a point where the pixel pitch becomes irrelevant and the old criterion for diffraction limited aperture comes into its own again.
Cambridge in Colour, Digital Camera Sensor Sizes
A well illustrated site with tutorials on all aspects of photography