Is there any advantage in stacking multiple images vs a single long exposure?

Stacking is something that is done all the time in infrared astronomy. This is because CCD technology doesn't work for wavelengths in the range of roughly 2 to 10 microns and beyond, so observers use infrared detector arrays like the HAWAII line of infrared arrays by Teledyne. Typically, though somewhat less so as the technology matures, infrared arrays will have defects like stuck pixels, crosstalk noise, etc.

To work around this, the users of such arrays will take an image, move the telescope by a fraction of the field of view, take another image, and repeat as necessary. This allows them to reject bad pixels and smooth out pattern noise. The other advantages of doing this are that it increases the dynamic range of the stacked image, as you asked about, and allows for the removal of transient signals like asteroids, satellites, and cosmic-ray hits. The LSST will use image pairs to aid in cosmic-ray rejection.
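As a toy illustration of this dither-and-stack rejection (all numbers hypothetical, not from any real instrument), here is a NumPy sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

# Five dithered 64x64 frames of a flat patch of sky (mean level 100 counts),
# each with Poisson shot noise, plus one stuck-high pixel fixed on the chip.
offsets = [(0, 0), (2, 1), (4, 3), (1, 4), (3, 2)]   # dither pattern, pixels
frames = []
for dy, dx in offsets:
    frame = rng.poisson(100, size=(64, 64)).astype(float)
    frame[10, 10] = 5000.0        # detector defect: same chip pixel each time
    frames.append(frame)

# Undo each dither to put the frames on a common sky grid (np.roll wraps at
# the edges; a real pipeline would crop or pad), then median-combine.  The
# defect lands on a different *sky* pixel in each frame, so the median rejects it.
aligned = [np.roll(f, (-dy, -dx), axis=(0, 1))
           for f, (dy, dx) in zip(frames, offsets)]
stack = np.median(aligned, axis=0)
print(stack[10, 10])   # near the sky level (~100), not 5000
```

The same median (or a sigma-clipped mean) also rejects cosmic rays and satellite trails, since those too appear in only one frame at any given sky position.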

The downside of this approach is that the process of reading the data out of the detector adds noise to the signal (known as "read noise"). Because of this, when read noise dominates, your sensitivity goes up like the square root of the number of images (roughly, the square root of total time) instead of linearly with time. So if you want high dynamic range you're better off not just stacking, but also varying your exposure time - short image(s) for the bright parts of the field and long images for the faint parts, with the rejection of saturated pixels done in software. If you're taking this sort of approach with a CCD, you'll want to rotate the field between the long exposures, because when CCDs saturate they tend to bleed charge along the readout direction.
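The read-noise penalty can be made concrete with the standard CCD noise budget (the source and sky rates below are made-up illustrative values):

```python
import math

def snr(source_rate, sky_rate, read_noise, t_total, n_frames):
    """Point-source signal-to-noise for n_frames exposures totalling
    t_total seconds.  Shot noise depends only on the total time, but
    read noise is paid once per frame, so splitting the same total time
    into more frames lowers the SNR.  Rates are in electrons/second."""
    signal = source_rate * t_total
    noise = math.sqrt(signal + sky_rate * t_total + n_frames * read_noise ** 2)
    return signal / noise

# Hypothetical faint source: 0.5 e-/s against 2 e-/s of sky,
# 10 e- read noise, one hour of total integration.
print(snr(0.5, 2.0, 10.0, 3600, 1))    # one long exposure: best SNR
print(snr(0.5, 2.0, 10.0, 3600, 60))   # sixty 1-minute frames: lower SNR
```

With these numbers the sixty-frame stack loses roughly 20% of the single-exposure SNR, which is the price paid for all the rejection benefits above.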

I don't know how practical this is in a non-professional setting, but an approach some cameras use is something called a "drift scan" (for example, the SDSS imaging detector). See, CCDs read out by shifting the whole charge image across the pixel array and reading it out when the charge reaches an edge of the chip. If you move the charge across the chip at the same rate the image of the sky is moving across it, you can continuously scan a strip of the sky.
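The mechanism (also called time-delay integration) can be simulated in a few lines. This is a toy model with made-up numbers, not real instrument code: the sky drifts down the chip one row per clock tick, the charge is clocked down at the same rate, and each sky row therefore integrates over the full chip height before being read out.

```python
import numpy as np

n_rows, n_cols = 8, 5                # a tiny 8-row chip
sky = np.ones((30, n_cols))          # a strip of sky: uniform background...
sky[12, 2] = 11.0                    # ...plus a "star" at strip row 12, column 2

chip = np.zeros((n_rows, n_cols))
scan = []                            # rows read off the bottom edge of the chip
for t in range(sky.shape[0]):
    for r in range(n_rows):          # expose for one tick: chip row r sits
        s = t - r                    # under sky row (t - r) at this moment
        if 0 <= s < sky.shape[0]:
            chip[r] += sky[s]
    scan.append(chip[-1].copy())     # read the bottom row...
    chip[1:] = chip[:-1].copy()      # ...and clock the charge down one row
    chip[0] = 0.0
scan = np.array(scan)

print(scan[19])   # the star's sky row, integrated n_rows times: [8 8 88 8 8]
```

Because each charge packet tracks "its" sky row all the way down the chip, the output strip is an image exposed for the full transit time, with no shutter and no dead time between frames.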


The voice of bitter experience here, to tell you about a problem that a properly working observatory shouldn't have to worry about. But I did, back when I was working on a "serious" astronomy project.


Stacking medium-length exposures provides some protection against faulty tracking. In the event of a tracking failure during a single long exposure there is little you can do to recover, but a few tracking failures during a run of five to twenty medium-length frames will still leave you with more than half the data (and a headache insofar as you must combine sets of frames that have different relative pointings, but it can be done).
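For what it's worth, one standard way to handle frames with different relative pointings is to measure each frame's offset against a reference by cross-correlation before stacking. A minimal NumPy sketch (integer-pixel shifts only, synthetic data; a real pipeline would also handle edges and sub-pixel shifts):

```python
import numpy as np

def align_offset(ref, frame):
    """Estimate the integer-pixel pointing offset between two frames of
    the same field by FFT cross-correlation (toy version: wraps at edges)."""
    corr = np.fft.ifft2(np.fft.fft2(ref) * np.conj(np.fft.fft2(frame))).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    n_y, n_x = ref.shape
    # map wrap-around peak positions to signed shifts
    dy = (dy + n_y // 2) % n_y - n_y // 2
    dx = (dx + n_x // 2) % n_x - n_x // 2
    return dy, dx

rng = np.random.default_rng(2)
ref = rng.normal(size=(32, 32))                # stand-in for a star field
frame = np.roll(ref, (3, -2), axis=(0, 1))     # same field, offset pointing
dy, dx = align_offset(ref, frame)
# np.roll(frame, (dy, dx), axis=(0, 1)) now lines frame up with ref,
# after which the whole set can be median-combined as usual.
```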


My experience came in the early 1990s, when putting a cooled CCD behind a 14" scope, sticking the whole thing on a mountain without human oversight, and letting schoolkids submit observing requests over the internet (by FTP, because this was before the web) was a really cool new idea. But for cost and alignment reasons the project was using software tracking, written in Visual Basic. We could get 30-second runs all the time, five-minute runs with some regularity, and essentially never got more than twenty minutes without a tracking fault.

But by stacking ten or so three-minute runs we were able to image down to the nineteenth magnitude, even with the box on top of the physics building for testing. And we actually tracked a MACHO microlensing event light curve and matched the big boys, which was my first "real" science experience.


If your exposures are short enough (a fraction of a second), you can even combat turbulence in the atmosphere. The trick is to take very many short exposures, then pick the ones in which a (bright) point source is sharpest and stack only those.

The technique is called Lucky Imaging and can deliver images as sharp as the Hubble Space Telescope's from ground-based instruments.
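The select-the-sharpest step is simple to sketch. Below is a toy simulation (hypothetical blur and flux values, peak brightness as the sharpness proxy) rather than a real lucky-imaging pipeline:

```python
import numpy as np

rng = np.random.default_rng(1)

def short_exposure(blur):
    """Toy short exposure: a Gaussian star of fixed total flux whose width
    is set by the momentary seeing, plus a little detector noise."""
    y, x = np.mgrid[:32, :32]
    star = 100.0 * np.exp(-((x - 16) ** 2 + (y - 16) ** 2) / (2 * blur ** 2))
    star /= 2 * np.pi * blur ** 2        # fixed flux: sharper -> brighter peak
    return star + rng.normal(0.0, 0.02, (32, 32))

# 100 frames taken through randomly varying atmospheric blur
blurs = rng.uniform(1.0, 5.0, 100)
frames = [short_exposure(b) for b in blurs]

# Sharpness proxy: the star's peak brightness.  Keep the best 10% of frames
# (the "lucky" ones, caught in moments of good seeing) and average only those.
scores = [f.max() for f in frames]
best = np.argsort(scores)[-10:]
lucky_stack = np.mean([frames[i] for i in best], axis=0)
```

The cost of the technique is that the other 90% of the photons are thrown away, which is why it only works on reasonably bright targets.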

As an aside, your question could instead be: what should my criterion be for when not to stack images? The advantages of doing so - bad-pixel rejection, cosmic-ray removal, and increased dynamic range - are so great. For optical CCD images, the break-even point is normally when the readout noise becomes a negligible contributor to the signal-to-noise of whatever you are trying to measure. Another consideration can be how long it takes to read out the CCD, which results in "dead time".
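A concrete version of that break-even rule (the rates and the factor of 10 are illustrative choices, not from any standard):

```python
def read_noise_negligible(sky_rate, read_noise, t_frame, factor=10.0):
    """Rule-of-thumb check: per-frame read noise stops mattering once the
    background electrons accumulated per pixel in one frame exceed the
    read-noise variance by a comfortable margin, i.e.
    sky_rate * t_frame > factor * read_noise**2."""
    return sky_rate * t_frame > factor * read_noise ** 2

# Sky of 2 e-/s/pixel and 10 e- read noise: with factor=10 the threshold
# is 500 s per frame.
print(read_noise_negligible(2.0, 10.0, 60))    # False: frames too short
print(read_noise_negligible(2.0, 10.0, 600))   # True: stacking costs little
```

Once a frame passes this test, splitting the observation further into such frames costs almost no sensitivity, so you might as well take the stacking benefits.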

Lucky Imaging relies on special electron-multiplying CCDs that can be read out very rapidly with modest readout noise, at the expense of a dispersion in the gain (number of output electrons per input photon). Most other astronomical CCDs minimise readout noise at the expense of readout times of tens of seconds, but are highly linear.