04 Apr '14 15:51>3 edits
http://phys.org/news/2014-04-pixels-nanowires-paradigm-digital-cameras.html
I had previously learned how conventional colour cameras work and noted that they waste at least half the incoming photons in the colour filters before the light ever reaches the sensors, which halves the camera's sensitivity in dim light conditions. But here they have found a way to do away with those wasteful colour filters, which should at least double the sensitivity of colour cameras in dim light. They achieve this by making vertical silicon nanowires, each having a different radius that causes it to absorb a different range of wavelengths of light — one wavelength range for each primary colour. The link also states this has the further advantages of "higher pixel densities and higher resolution".
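As a toy illustration of the idea that radius selects colour: in the real device the selectivity comes from waveguide-mode resonances in each nanowire, but over the visible range the absorption peak shifts roughly towards longer wavelengths as the radius grows. The sketch below uses a made-up linear model with invented constants (the slope, offset, and radii are assumptions for illustration only, not figures from the paper):

```python
# Toy model: larger nanowire radius -> longer absorbed wavelength.
# The linear relation and all constants here are illustrative assumptions,
# NOT values from the phys.org article or the underlying paper.

def peak_wavelength_nm(radius_nm, slope=4.0, offset_nm=300.0):
    """Hypothetical linear map from nanowire radius to absorption peak."""
    return offset_nm + slope * radius_nm

# Hypothetical radii chosen so each nanowire targets one primary colour band
for colour, radius in [("blue", 40), ("green", 60), ("red", 80)]:
    peak = peak_wavelength_nm(radius)
    print(f"{colour}: radius {radius} nm -> peak ~{peak:.0f} nm")
```

With these invented constants the three radii land at roughly 460, 540 and 620 nm, i.e. one nanowire size per primary colour, which is the filter-free selectivity the article describes.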
But one quote confuses me, where it says:
“...the pixels with different color responses can be defined at the same time through a single lithography step.”
Does that imply that all the silicon nanowires for the different primary colours are put in exactly the same layer, rather than stacked one in front of the other with one layer per primary colour? Because, if so, surely that would mean wasted photons and would defeat the whole point!? Or am I just mentally visualizing this "single lithography step" incorrectly somehow?