You may or may not have heard the term quantum efficiency (QE) come up when people talk about camera sensors. It’s a complex topic, but it matters a great deal for things like astrophotography, where capturing every possible photon of light is of the utmost importance.
What’s Actually Happening?
So let’s say you have a new camera with a CMOS sensor, the kind found in basically all smartphones, DSLRs, and mirrorless cameras, and let’s say you’re taking a picture of the Milky Way. When the shutter opens and the camera lets light hit the sensor, light that left the Milky Way galaxy is striking your camera sensor, and every time a photon hits, a tiny detector converts that photon into an electrical signal which registers as light on a pixel. Each photon that lands on a particular pixel makes that pixel a little brighter; if no photons hit a pixel, you just get black. In a color camera, these photons also pass through a Bayer matrix of color filters, which decreases quantum efficiency since the filters absorb some of the light.
The best way I can put it is this: quantum efficiency is the percentage of photons hitting the sensor that actually get converted into signal. If you have a 10MP camera and your sensor gets hit with 1 million photons, you’re only going to retain a fraction of those photons as an actual brightness level on your pixels. In the case of my camera, the ZWO ASI1600MM-Cool, the QE is about 60%, so I retain roughly 600,000 of them. I should clarify that “retain” isn’t quite the right word: each photon has a 60% chance of being converted into a digital, electrical signal when it hits the sensor.
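To make that concrete, here’s a minimal sketch treating QE as a per-photon conversion probability. The 60% figure and the 1,000,000-photon count come straight from the example above; NumPy and the random seed are just my choices for the illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

qe = 0.60                      # QE of the ASI1600MM-Cool from the example above
photons_arriving = 1_000_000   # photons reaching the sensor during the exposure

# Each photon is independently converted with probability equal to the QE,
# so the number of detected photons follows a binomial distribution.
photons_detected = rng.binomial(photons_arriving, qe)

print(f"Arrived:  {photons_arriving:,}")
print(f"Detected: {photons_detected:,}  (~{photons_detected / photons_arriving:.0%})")
# Expected value: 1,000,000 * 0.60 = 600,000 detected photons
```

On average you get the 600,000 detected photons from the example, with a little run-to-run scatter because every conversion is its own coin flip.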
This is why so many people choose to shoot with a CCD sensor. CCD stands for charge-coupled device, and CCDs can have QE values of 90% or even higher in some cases. A higher QE means more of the incoming light becomes usable signal, so beyond hot and cold pixels, CCD images carry very little noise relative to that signal, which makes them ideal for astrophotography.
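Here’s a rough way to see why that extra QE matters, assuming the idealized case where shot noise from the light itself dominates (ignoring read noise and dark current for the moment). The photon count is an arbitrary example value:

```python
import math

photons = 10_000  # photons arriving at one pixel during the exposure (example value)

for qe in (0.60, 0.90):
    detected = qe * photons
    # In the shot-noise-limited case, noise = sqrt(signal), so SNR = sqrt(detected).
    snr = math.sqrt(detected)
    print(f"QE {qe:.0%}: {detected:,.0f} detected, SNR ~ {snr:.0f}")

# QE 60%: 6,000 detected, SNR ~ 77
# QE 90%: 9,000 detected, SNR ~ 95
```

Going from 60% to 90% QE buys you roughly 22% better signal-to-noise on the same exposure, which works out the same as exposing half again as long.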
Combating QE
You might be thinking right now, “well then I should just get a CCD camera and be done with it”, but that isn’t necessarily true. CCD cameras have very slow readout times, which means that after you take a picture it can take almost a minute for the image to actually download and register; obviously that’s not useful if you’re shooting video. They also generally have more hot pixels than a CMOS camera, which are harder to correct for, and they are generally lower resolution. All of these reasons are why they fell out of favor with consumers, but they still have value. If the resolution doesn’t bother you, CCD cameras still have the lowest read noise, the baseline noise a sensor produces on every readout, of all the main sensor types.
So what can you do to combat this with a CMOS camera?
Dithering
Every camera has a base level of noise that it outputs in every frame, called read noise. Part of this noise forms a fixed pattern that depends on the sensor’s temperature, so the pattern changes as the sensor warms or cools. When you stack, say, ten images on top of each other, that fixed pattern lands on the same pixels in every frame, so instead of averaging out like random noise does, it reinforces itself in your final image. To combat this, you use a process called dithering.
This will only work if you have an autoguiding setup, but then it’s a simple box you have to check in PHD2. What it does, effectively, is shift your rig in a random direction by a couple of pixels after a given number of frames, so the fixed noise pattern lands on a new spot in each aligned frame. When the frames are stacked, the pattern gets averaged down, keeping those noise pixels very faint in the data and much harder to see when you edit your image. I highly recommend giving this a shot if you haven’t already, as it makes a massive difference in the noise of your final image.
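If you want some intuition for why this works, here’s a toy simulation. This is not what PHD2 actually does internally; the flat sky, the pattern strength, and the dither range are all made-up values. It stacks ten frames with and without random few-pixel offsets and compares the leftover noise:

```python
import numpy as np

rng = np.random.default_rng(7)
size, n_frames = 64, 10

# A fixed noise pattern baked into the sensor (identical in every frame),
# plus random noise that differs frame to frame.
fixed_pattern = rng.normal(0, 5, (size, size))

def capture(shift=(0, 0)):
    """One frame: the pattern stays put on the sensor while the sky shifts,
    then we re-align on the sky by undoing the dither offset."""
    sky = np.full((size, size), 100.0)  # flat patch of sky for simplicity
    frame = sky + fixed_pattern + rng.normal(0, 5, (size, size))
    return np.roll(frame, shift, axis=(0, 1))

# Stack without dithering: the pattern hits the same pixels every time.
plain = np.mean([capture() for _ in range(n_frames)], axis=0)

# Stack with dithering: random few-pixel offsets, re-aligned before averaging.
dithered = np.mean(
    [capture(shift=tuple(rng.integers(-3, 4, 2))) for _ in range(n_frames)],
    axis=0,
)

print(f"Residual noise, no dither: {np.std(plain):.2f}")
print(f"Residual noise, dithered:  {np.std(dithered):.2f}")
```

Without dithering, the fixed pattern survives the stack at full strength; with dithering it gets smeared across neighboring pixels and averages down along with the random noise.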
Dark Frames
You can combine dithering with dark frames to get the lowest amount of noise possible. If you use a DSLR, at the end of your imaging session simply put the cap on your scope and take 15 or so more images at the same exposure time and ISO as your light frames. They will look almost totally black, but what you’re actually doing is isolating that thermal and read-noise pattern from the main data, which lets a stacking program like DeepSkyStacker subtract those pixels from your light frames. You do need to take them at the same temperature as your light frames, though, since the noise pattern changes with temperature.
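Under the hood, the calibration is essentially a median combine followed by a subtraction. This is just a sketch of the idea with synthetic arrays, not DeepSkyStacker’s actual pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

def master_dark(dark_frames):
    """Median-combine the darks: the random noise cancels out and only the
    repeatable thermal/read pattern is left behind."""
    return np.median(np.stack(dark_frames), axis=0)

# Toy data: 15 dark frames sharing one fixed pattern plus random noise.
pattern = rng.normal(10, 3, (100, 100))
darks = [pattern + rng.normal(0, 2, (100, 100)) for _ in range(15)]
light = rng.normal(500, 20, (100, 100)) + pattern  # sky signal + same pattern

# Subtract the master dark from the light frame, clipping at zero.
calibrated = np.clip(light - master_dark(darks), 0, None)
```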
If you have a cooled camera, you can build something called a dark library. You take dark frames at any time of day with your camera cooled to different set temperatures, until you have a library of darks covering all the exposure times and temperatures you shoot at. This saves time at night and also makes processing a lot easier on you, since you never have to worry about taking darks again.
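Managing a dark library can be as simple as a lookup keyed on temperature and exposure. The file names and the closest-match rule here are hypothetical, just to show the bookkeeping:

```python
def pick_dark(library, temp_c, exposure_s):
    """Pick the library entry whose (temperature, exposure) is closest to the
    light frame's settings. 'library' maps (temp_c, exposure_s) -> file path."""
    key = min(library, key=lambda k: (abs(k[0] - temp_c), abs(k[1] - exposure_s)))
    return library[key]

# Hypothetical master darks, keyed by sensor temperature and exposure time.
darks = {
    (-10, 120): "darks/m10C_120s_master.fits",
    (-10, 300): "darks/m10C_300s_master.fits",
    (-20, 120): "darks/m20C_120s_master.fits",
}

print(pick_dark(darks, temp_c=-10, exposure_s=300))
# darks/m10C_300s_master.fits
```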
Some Extra Things To Keep In Mind
It’s worth noting that these methods will not remove your noise completely; some will remain even if you combine both of them. My standard processing recommendations regarding saturation and stretching still apply, since you can still end up with plenty of noise.
Some people would also say to take a dark frame after each light frame, since that can result in a more accurate dark library. That’s true, but I haven’t found that it makes enough of a difference to be worth spending valuable imaging time on.
If you do one or both of these things, though, you should end up with markedly less noise, and the next time you’re buying a camera you’ll know what it means when an ad boasts 60% QE or 90% QE.