Much of what I fill my headspace with is stuff of a rather technical nature, and rather than ignore that part of me, I've decided to include it in my blog as well. I'll just have to see how it works. If you're curious about what I do for a living, these 'Techie Talks' might give you a glimpse. (Although not everything I do is as tedious as the work in this entry.)
And so...
Observing in the infrared (one of my particular specialties) can be a bit of a pain. The infrared sky is actually quite bright, even at night, and so observing is never a simple matter of just opening the camera shutter for a long time. Instead, you always end up taking lots of very short exposures, and 'nodding' the telescope all over the place. It's a painfully inefficient process and at the end of the night you're left with a great big pile of images on your disk drive that you then have to shuffle together in a variety of ways to 'reduce' your data to a useful form.
Usually, I don't bother trying to keep track of the uncertainties by propagating the 'errors' through the reduction process. Instead, I just use the scatter in the reduced images to estimate the noise level in the final data product and leave it at that. However, for the data I'm currently working with, I need to understand the noise characteristics of the data at a much earlier step, in an effort to characterize the new detector I'm working with. There are also some potential advantages to keeping track of the S/N throughout the process. In particular, if I'm keeping track of the variance as I reduce the data, then I can combine the various data sets in an 'ideal' manner and optimize the resulting S/N. It will also help me write code that can automatically recognize bad data. Since I'm in the process of writing the data reduction pipeline for this instrument, all this seems like a good idea. So, I decided to start with the "CCD equation" and try to calculate what the noise levels actually should be.
Now for a nice optical CCD, this is not too bad. The sources of noise are simply Poisson noise from the photon statistics (both from the source and from the background sky), 'readnoise' from the readout amplifiers, and maybe dark current if you've got an uncooled detector. (See here for a discussion of noise in optical CCDs.)
For my work with SupIRCam, the dark current is pretty negligible, and the source noise is usually not an issue (most of the noise is background), so it should be even simpler, right? The problem, however, is that during the reduction process I'm combining the same images in different ways and subtracting them from one another, so it's vitally important to keep track of what's getting subtracted from what, because some of the noise from individual exposures will get subtracted from itself, and therefore disappear exactly. Thus a naive "propagation" of the errors won't work. Some of these "errors" are correlated (100%) because they are based on exactly the same data. Confused? Well, let's look at my data pipeline in particular.
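To see why naive propagation fails for correlated data, here's a tiny numerical illustration (my own sketch, not part of the pipeline): subtracting an array from itself leaves exactly zero noise, even though independence-based propagation would predict double the variance.

```python
import numpy as np

rng = np.random.default_rng(42)
sigma = 5.0
# One noisy "image"; using the same array twice is 100% correlated noise.
a = rng.normal(0.0, sigma, size=100_000)

# Naive propagation assumes independence: var(a - b) = 2 * sigma**2.
naive_var = 2 * sigma**2

# But subtracting an image from itself cancels its noise exactly.
actual_var = np.var(a - a)

print(naive_var)   # 50.0
print(actual_var)  # 0.0
```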
A typical observation consists of a set of images, all of the same exposure time. Between each image, the telescope is nodded slightly so the target lands on a slightly different portion of the detector. So at the end we have several images of the target and background sky, each of the same integration time, but at slightly different positions. Immediately before and after this group of images, we take a "dark frame": an exposure of the same length of time, but with the entrance to the detector closed. (Literally a picture of the dark!) This is important for calibration reasons, to remove the small amount of 'dark current' in the image, and to remove the instrumental 'bias' of the detector.
Ignoring the dark current (which is very small), the noise level of any of these exposures is described by

V = S_m/g + r^2

where S_m is the average level of the background sky counts, g is the 'gain' of the detector (number of photons per count), and r is the 'readnoise' in counts. V is the expected statistical variance for an exposure. Now for a dark image, there is no signal from the sky background, so S_m is 0, and we're just left with

V_dark = r^2
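The variance terms above are simple enough to express in code. Here's a minimal sketch (function names are my own, not from any existing pipeline), using the same symbols as the equations:

```python
def exposure_variance(sky_counts, gain, readnoise):
    """Variance (in counts^2) of a single exposure: V = S_m/g + r^2.

    sky_counts -- mean background level S_m, in counts
    gain       -- g, photons per count
    readnoise  -- r, readnoise in counts
    """
    return sky_counts / gain + readnoise**2


def dark_variance(readnoise):
    """A dark frame has no sky signal (S_m = 0), so only readnoise remains."""
    return readnoise**2
```

For example, with a sky level of 1000 counts, a gain of 4 photons per count, and 10 counts of readnoise, `exposure_variance(1000, 4.0, 10.0)` gives 350 counts².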
Now the first thing we do is combine the two dark frames to make an average dark. Just using standard error propagation gives

V_avgdark = (V_dark + V_dark)/2^2 = r^2/2
This combined dark is then subtracted from each of the individual exposures. The variance of the resulting dark-subtracted images is

V_sub = S_m/g + r^2 + r^2/2 = S_m/g + (3/2) r^2
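Continuing the sketch, the averaged dark and the dark-subtracted variance follow by standard propagation (the two darks and the exposure are independent, so their variances simply add; again, these helper names are mine):

```python
def avg_dark_variance(readnoise):
    # Average of two independent darks: (r^2 + r^2) / 2^2 = r^2 / 2.
    return readnoise**2 / 2


def dark_subtracted_variance(sky_counts, gain, readnoise):
    # Exposure variance plus the (independent) averaged-dark variance:
    # S_m/g + r^2 + r^2/2
    return sky_counts / gain + readnoise**2 + avg_dark_variance(readnoise)
```

With the same numbers as before (S_m = 1000, g = 4, r = 10), this gives 250 + 100 + 50 = 400 counts².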
So far, this is not so bad, and in fact this is just how far I needed to go to do the noise calculations that started this investigation. However, it's in the next step that things start to get a bit ugly. More on that later though.... It's bed time now....
.... The next step in the reduction is to build a sky image by combining all the dithered ('nodded') images of the target. However, we can't just take an average, since the target will be contributing to the mean, and we'd end up subtracting the target from itself. The solution is to throw out some of the data before taking the average. For each pixel, we look at that pixel in each of the individual images, and sort them by the number of counts they get. If a star shows up in that pixel in one of the images, it will be brighter there than in the other images. So if, for each pixel, we throw out the highest value, we should be left with a clean sky image. (If you have a really crowded field, you may have a star land on the pixel in more than one dither, so you may have to throw out more!)
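The per-pixel rejection can be written compactly with numpy: stack the dithers, sort each pixel's values across the stack, and average everything but the top. This is a minimal sketch of the idea, not the actual pipeline code:

```python
import numpy as np

def sky_from_dithers(images, n_reject=1):
    """Build a sky frame by rejecting the n_reject highest values per pixel.

    images : sequence of dark-subtracted frames, all the same shape.
    """
    stack = np.stack(images)         # shape (n_images, ny, nx)
    stack = np.sort(stack, axis=0)   # sort each pixel's values across images
    # Average the lowest (n_images - n_reject) values at each pixel.
    return stack[: len(images) - n_reject].mean(axis=0)
```

For a crowded field, raising `n_reject` discards more of the highest values per pixel, at the cost of averaging fewer frames.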
There's a bit of a catch though: the sky background in the IR varies quite rapidly in time. As a result, the sky will be slightly brighter or dimmer in each of the dithered images. But this means that our trick of just throwing out the highest data won't necessarily get rid of all the stars, especially the fainter ones. The solution is to scale each of the images so that the average of the background is the same for all the images, then throw out the highest values for each of the pixels and average the rest of the data together.
To further complicate things, you also typically want to do a weighted average of the data rather than just a straight average. Since the background level will be changing slightly with each image, so will the S/N. To avoid degrading the higher S/N data by mixing it with lower S/N data, you typically want to weight each of the images by the inverse of its variance. In background limited images (very often the case with IR work), the sky noise will completely dominate over the readnoise, and you can ignore the readnoise in the variance calculation. In this case, the variance is just the sky counts, and variance weighting amounts to applying the inverse of the scaling used to match the sky levels. However, in the general case, the variance weighting will be slightly different from the scaling factor alone.
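Putting the scaling, rejection, and inverse-variance weighting together might look something like this. This is my own illustrative sketch, assuming a flat sky estimated via the per-image median and the variance model from above; the real pipeline may estimate the background differently:

```python
import numpy as np

def weighted_sky(images, gain, readnoise, n_reject=1):
    """Scale dithers to a common background, reject the highest value(s)
    per pixel, then average the rest with inverse-variance weights."""
    images = [np.asarray(im, dtype=float) for im in images]
    skies = np.array([np.median(im) for im in images])  # background estimates
    target = skies.mean()
    scales = target / skies                  # match the sky levels
    scaled = np.stack([im * s for im, s in zip(images, scales)])

    # Per-image variance after scaling: s^2 * (S_m/g + r^2).
    var = scales**2 * (skies / gain + readnoise**2)
    weights = 1.0 / var

    # Reject the n_reject highest scaled values at each pixel.
    order = np.argsort(scaled, axis=0)
    keep = order[: len(images) - n_reject]   # image indices kept, per pixel
    kept_vals = np.take_along_axis(scaled, keep, axis=0)
    kept_w = weights[keep]                   # each pixel gets its image's weight
    return (kept_vals * kept_w).sum(axis=0) / kept_w.sum(axis=0)
```

In the background-limited case (readnoise set to 0), the weights reduce to the inverse of the sky scaling, as described above; with non-negligible readnoise they differ slightly.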
Sigh.... this will take a while.... Gotta go again... more later....