occurs naturally in every camera no matter how expensive the lens.
How much the image is blurred depends on the aperture, specifically the f-number. It is not affected by the focal length of the lens: a 100mm lens at f/16 will produce the same “size” of blur on the sensor as a 200mm lens at f/16 on the same camera. Perhaps surprisingly, the blurring increases as the aperture is reduced, i.e. as we “stop down” and the f-number increases.
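As a rough illustration (using the standard Airy-disk formula rather than anything taken from the article itself), a few lines of Python show why only the f-number matters: the focal length does not appear in the calculation at all, and the blur grows as we stop down. The 550 nm “green light” wavelength is my assumption.

```python
# Rough sketch using the standard Airy-disk diameter formula (2.44 * wavelength * N).
# Note that focal length does not appear anywhere, only the f-number.
wavelength_mm = 0.00055                      # ~550 nm green light (an assumption)
for f_number in (8, 16):
    airy_diameter_um = 2.44 * wavelength_mm * f_number * 1000
    print(f"f/{f_number}: diffraction blur (Airy disk) ≈ {airy_diameter_um:.0f} µm")
# f/8 gives roughly 11 µm, f/16 roughly 21 µm: the blur grows as we stop down.
```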
So if we want to retain the maximum level of detail in our images – the best that our sensor is capable of – we should avoid using apertures smaller than a certain value. To take an example, my 16Mp, 1.5 crop Pentax K5 IIs will (theoretically) show diffraction blur at f-numbers higher than about f/10. This does not stop me using f/16 when the situation demands.
I referred above to the “size” of blur on the sensor. For a particular aperture this will be the same whether you have a 6Mp sensor or a 24Mp sensor; however, the 24Mp sensor may be able to “see” the effect whilst the 6Mp sensor may not, simply because of its limited resolution.
My previous articles on this subject covered it in some detail. This article aims to present the essentials in a way that is useful to a wider audience. The following table allows you to find the “Diffraction limited aperture” for your camera – stopping down beyond that point will produce some blurring.
Stopping down to smaller apertures increases the amount of diffraction blur; the table shows at what point it becomes significant relative to the resolution of the sensor. This is a gradual effect.
Megapixels (Mp)   Aperture: 1.5 crop   Aperture: Full frame (35mm)
6                 f/16                 f/22
12                f/11                 f/16
24                f/8                  f/11
48                f/5.6                f/8
Suppose you have a camera with a 16 Mp sensor and 1.5 crop factor. What is the Diffraction limited aperture (DLA)? The Mp column has a line for 12 and for 24 with respective DLAs of f/11 and f/8. Our 16Mp is nearer to 12 than to 24 so the best approximation for DLA is f/10. Note that this is all you need to remember – the rest of the table does not apply to your camera.
Consider a full frame SLR with a 50Mp sensor. We could, of course, extend the table – the next line would be 96Mp – f/4 – f/5.6 (following the rule that we double the Mp and reduce the aperture number by a factor of √2, i.e. one stop) – however 50Mp is so close to 48Mp that we can accept the value of f/8 from the table.
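For anyone who prefers a formula to a table, the same doubling rule can be written in a few lines of Python. This is only a sketch, scaled from the table’s 12Mp / f/11 entry for a 1.5 crop sensor; it simply reproduces the look-ups described above.

```python
import math

def diffraction_limited_aperture(megapixels, full_frame=False):
    """Approximate DLA, scaled from the table's 12 Mp / f/11 entry (1.5 crop).

    Doubling the pixel count lowers the DLA by one stop (a factor of sqrt(2));
    a full-frame sensor of the same pixel count gains roughly one stop.
    """
    n = 11.0 * math.sqrt(12.0 / megapixels)
    if full_frame:
        n *= math.sqrt(2)
    return n

print(round(diffraction_limited_aperture(16), 1))                   # ~9.5, i.e. about f/10
print(round(diffraction_limited_aperture(50, full_frame=True), 1))  # ~7.6, close to f/8
```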
Remember in all this we are referring to the aperture at which the diffraction blur is about the size of a pixel on the sensor. Thus a 6Mp camera will be more tolerant of diffraction blur at f/11 than a 24Mp camera at f/11 only because it can’t see it!
The focal length of the lens is irrelevant; it does not appear in the calculation of DLA. Also we assume a “perfect” lens. Real lenses have distortions which may be more serious than diffraction blur. Diffraction blur is a natural phenomenon and cannot be designed out by Canon, Nikon or even Pentax.
Having been an occasional user of Hugin for many years, I have described my recent experience of stitching high-dynamic-range (HDR) and normal (low-dynamic-range) panoramas from a set of 9 images shot one evening at Salford Quays. The article should prove interesting and useful to anyone new to Hugin, or to those, like me, who use Hugin infrequently and never quite become “experts”.
As usual, Hugin did an excellent job of stitching, but I recommend outputting an HDR file in EXR format for tone-mapping in, for example, Luminance HDR.
After writing two articles on the Nature of Light and its relevance to digital photography, I found that the subject of noise still fascinated me and decided that I had to make some measurements. Looking at the wiggly waveforms of my previous article might indicate that camera A is noisier than camera B, but can we measure the noise in a rigorous way? This article explains how to do that using free software. As well as presenting graphs of the measurements, I have attempted to explain the results from physical principles – and evidently the noise is predominantly photon noise (aka shot noise).
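The article describes the measurement procedure in full; as a bare-bones illustration of the underlying idea (not the article’s exact method), the noise in a nominally uniform patch is simply the standard deviation of its pixel values. The sketch below uses a simulated patch in place of a real raw file.

```python
import numpy as np

# Simulated stand-in for a uniform grey patch cropped from a real image;
# Poisson-distributed counts mimic photon noise (an assumption for this demo).
rng = np.random.default_rng(1)
patch = rng.poisson(400, size=(64, 64)).astype(float)

mean = patch.mean()
noise = patch.std()   # the noise measurement: standard deviation over the patch
print(f"mean = {mean:.1f}, noise = {noise:.1f}, SNR = {mean / noise:.1f}")
```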
I have updated the page on this subject in order to clarify the distinction between calibration and profiling (or characterisation), and to remove reference to commercial operating systems now obsolete.
To avoid confusion I have now included only a version of each test card with sRGB profiles assigned and saved as jpegs. These should be viewed in an application that is colour aware (i.e. one that recognises and uses the embedded profile).
In the second of two articles I look at another natural phenomenon, photon noise (also known as shot noise). As with diffraction blur, the problem becomes more serious as the physical size of the sensor is reduced.
Whilst this is not the only source of noise, it is now the dominant one in the darker areas of an image, where only a relatively small number of photons are incident on the sensor. It is the counting of photons, which is subject to Poisson statistics, that produces the noise.
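A minimal numerical illustration of Poisson (photon) statistics, not taken from the article: the more photons a pixel collects, the better its signal-to-noise ratio, but only in proportion to the square root of the count.

```python
import numpy as np

rng = np.random.default_rng(0)
for mean_photons in (10, 100, 10_000):
    counts = rng.poisson(mean_photons, size=100_000)   # many identical pixels
    snr = counts.mean() / counts.std()
    print(f"{mean_photons:>6} photons: SNR ≈ {snr:.1f} (sqrt of mean ≈ {mean_photons ** 0.5:.1f})")
```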
Reducing the physical size of a camera, even if the total number of pixels is maintained, inevitably reduces the quality of the images because of two fundamental properties of light itself. This technical article looks at diffraction, usually explained by considering light as waves. A future article will look at photon noise, explained by considering light as particles.
A simple rule-of-thumb is established for determining the “diffraction limited f-number” by relating this to pixel pitch on the sensor.
The earlier, pre-digital, criterion for diffraction limited aperture (based on required print sharpness) is revisited and considered to be still valid – perhaps with a little sharpening.
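As an example of such a rule of thumb (my own back-of-envelope version, not necessarily the exact criterion derived in the article): the pixel pitch follows from the sensor dimensions and pixel count, and the diffraction-limited f-number scales with it. Taking the DLA as roughly twice the pixel pitch in microns happens to reproduce the figures quoted above.

```python
import math

def pixel_pitch_um(width_mm, height_mm, megapixels):
    """Pixel pitch in microns for a sensor of the given size and pixel count."""
    return math.sqrt(width_mm * height_mm / (megapixels * 1e6)) * 1000

# Assumed rule of thumb: DLA (f-number) is about 2 x pixel pitch in microns.
pitch = pixel_pitch_um(23.7, 15.7, 16)     # 16 Mp APS-C sensor, e.g. the K-5 IIs
print(f"pitch ≈ {pitch:.1f} µm, DLA ≈ f/{2 * pitch:.0f}")   # about f/10
```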
Basic adjustment of monitors and projection systems
When I last updated my page on this subject (2012) I decided that it was best to offer the test cards as .png files and leave it to the user to assign a profile (presumably sRGB) and to view the result in an application such as Photoshop. For convenience, I have now added a version of each test card with sRGB profiles assigned and saved as jpegs. These should be viewed in an application that is colour aware (i.e. one that recognises and uses the embedded profile).
Comparison of Demosaicing Methods available in Free, Open Source Raw Processors
My previous article included a table listing the various demosaicing algorithms offered by the four raw processors considered, and I wondered why we (as users) needed such a wide choice. The table is reproduced below. I decided to investigate those offered by RawTherapee by looking closely at the detail in an image of tree branches against the sky – the same part of the same raw file processed by each of the algorithms.
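If you want to script a similar side-by-side comparison rather than clicking through a GUI, the LibRaw-based rawpy module offers several comparable algorithm families. This is a different tool from RawTherapee (and the implementations differ), so treat the sketch below purely as an illustration of the approach; the filename is a placeholder.

```python
import rawpy
import imageio.v3 as iio

# Develop the same raw file with several demosaicing algorithms for comparison.
# "branches.dng" is a placeholder filename, not a file from the article.
for algo in (rawpy.DemosaicAlgorithm.VNG,
             rawpy.DemosaicAlgorithm.AHD,
             rawpy.DemosaicAlgorithm.DCB):
    with rawpy.imread("branches.dng") as raw:
        rgb = raw.postprocess(demosaic_algorithm=algo, no_auto_bright=True)
    iio.imwrite(f"branches_{algo.name}.png", rgb)
```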
Comparison of four free raw file processors: RawTherapee, Darktable, Lightzone and Photivo
All four applications are free, open-source downloads, and all are available for Windows, Mac and Linux, with the exception of Darktable, which is not yet available for Windows.
I am looking for a raw file processor that will allow me to develop raw images to produce files ready for projection (at 1400 x 1050 pixels) and files at full resolution for further development, as necessary, to make high quality prints. I don’t expect to print directly from the raw processing application though this might be an advantage.
I have used both RawTherapee and Darktable for over a year and have recently tried Lightzone and Photivo so I will restrict my comments to these four.
As part of a recent event in Saddleworth, we were treated to a flypast of a C-47 Dakota. Perhaps a few tips on how to photograph this sort of subject would be useful to others facing a similar challenge.
Allowing the camera to make the decisions on speed and aperture is not a good idea in this case, so let’s get back to basics. When I first took up photography I was told that the only S A F E way to take a picture was to consider (shutter) Speed, Aperture, Focus, then Expose.