TL;DR. Most uncooled camera chips give you maybe 10 or 11 bits of dynamic range, and light is subject to Poisson noise, meaning the brighter a pixel, the noisier it is in absolute (not relative) terms. If you have to solve a big giant matrix inversion to do the job of a collimating lens, you're composing each pixel as a sum of many others instead of just itself, some of them being way brighter than the reconstructed image, meaning your reconstructed pixel is always noisier. Cool idea, and certainly has its applications, but the best images will always come from big fat optics.
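For what it's worth, the Poisson point is easy to check numerically. Here's a quick numpy sketch (photon counts are made-up illustrative numbers): shot noise scales like the square root of the mean, so a bright pixel is noisier in absolute counts even though it's cleaner as a fraction of its signal.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = rng.poisson(100, size=100_000)       # a dim pixel, ~100 photons/frame
bright = rng.poisson(10_000, size=100_000) # a bright pixel, ~10,000 photons/frame

# Absolute noise ~ sqrt(mean): the bright pixel fluctuates by more counts...
abs_dim, abs_bright = dim.std(), bright.std()

# ...but relative noise ~ 1/sqrt(mean): it is cleaner as a fraction of signal.
rel_dim = abs_dim / dim.mean()
rel_bright = abs_bright / bright.mean()
```

Here `abs_dim` comes out near 10 and `abs_bright` near 100, while the relative noise drops from roughly 10% to roughly 1%.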
No. First, on these you are mostly limited by the thermal noise of the sensor, which is miles above the photon noise for this application. Then you are still thinking that a pixel receives the same flux (power per surface area) as in a traditional camera. This is not correct, as each of the pixels collects flux from a much larger angular portion of the scene (due to the lack of optical focusing).
If you have to solve a big giant matrix inversion to do the job of a collimating lens, you're composing each pixel as a sum of many others instead of just itself, some of them being way brighter than the reconstructed image, meaning your reconstructed pixel is always noisier.
Not really.
When you average a large number of samples, the noise tends to partially cancel out while the signal keeps adding up: the summed noise only grows like the square root of the number of samples, while the signal grows linearly, so the signal-to-noise ratio improves. Even if you end up adding in some bright signals with extra noise, that's still stomped by signal when you have enough samples.
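You can see the square-root scaling in a quick Monte Carlo sketch (sample counts and noise level are arbitrary illustrative choices): summing 400 unit-signal readings with unit-variance noise should multiply the SNR by about sqrt(400) = 20.

```python
import numpy as np

rng = np.random.default_rng(1)
n_samples, n_trials = 400, 5_000
signal = 1.0

# Each trial sums n_samples noisy readings of the same unit signal,
# each reading corrupted by independent unit-variance noise.
readings = signal + rng.normal(0.0, 1.0, size=(n_trials, n_samples))
sums = readings.sum(axis=1)

snr_single = signal / 1.0              # one reading: SNR = 1
snr_summed = sums.mean() / sums.std()  # signal grows like n, noise like sqrt(n)
```

`snr_summed` lands close to 20, i.e. sqrt(n_samples) times the single-reading SNR.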
No, the original poster was more correct. They're not averaging together a bunch of pixels but applying an inverse matrix, which will weight pixels differently and quite frequently assigns very high weights to noisier signals. This can result in an emphasis that amplifies noise. There is a lot of work on different ways of clipping or modifying such matrix equations to make them slightly less accurate in an ideal world but much less noisy in the real one.
Also, no averaging of noise will happen if you try to produce images with similar pixel count to the number of detectors.
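To make the "high weights amplify noise" point concrete, here's a hedged numpy sketch (the matrix, singular-value spectrum, and noise level are all made up for illustration; Tikhonov/ridge damping stands in for the "clipping or modifying" the parent describes). An ill-conditioned mixing matrix has tiny singular values, so its exact inverse contains huge weights that blow up small sensor noise; damping those weights trades a little bias for far less noise.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50

# A mixing matrix like a coded aperture, made deliberately ill-conditioned:
# its smallest singular values force huge weights in the exact inverse.
U, _, Vt = np.linalg.svd(rng.normal(size=(n, n)))
s = np.logspace(0.0, -6.0, n)                 # singular values from 1 to 1e-6
A = U @ np.diag(s) @ Vt

x_true = rng.uniform(0.0, 1.0, n)             # the scene
y = A @ x_true + rng.normal(0.0, 1e-3, n)     # measurement + a little sensor noise

x_naive = np.linalg.solve(A, y)               # exact inverse: noise scaled by 1/s

lam = 1e-6                                    # Tikhonov damping of large weights
x_ridge = Vt.T @ ((s / (s**2 + lam)) * (U.T @ y))  # slightly biased, far less noisy

err_naive = np.linalg.norm(x_naive - x_true)
err_ridge = np.linalg.norm(x_ridge - x_true)
```

The exact inverse divides noise by singular values as small as 1e-6, so `err_naive` dwarfs `err_ridge` by orders of magnitude; the damped solve gives up exactness on the weakest components to keep the reconstruction usable.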
I agree completely there. (I'd say there is averaging, but you're averaging in as much extra noise from other pixels as you're averaging out from multiple samples of the target pixel - and even if the noise were merely proportional to the pixel brightness, rather than disproportionate as pixels get brighter, the bright ones would still noise up the dim ones.)
No. As someone with patents on multiplex imaging, I can tell you that inverse problems lead to greater noise or equivalently reduced dynamic range. You can see this in their photos.