In the latest of a string of potential disruptions to the traditional digital-camera imaging model, startup InVisage is gearing up to ship its first QuantumFilm image sensor, which the company hopes will displace the more common silicon-based CMOS sensors.
Unlike traditional silicon-based sensors, whose photosensitive layer consists of "buckets" that collect the electrons created when photons hit a silicon layer, InVisage's QuantumFilm replaces the buckets with nanoparticles (so-called quantum dots) suspended in a substrate, much the way silver halide grains in film are suspended in gelatin. When a photon hits a dot, it releases an electron and a positively charged hole. The positive and negative charges flow through the QuantumFilm toward the electrodes that sandwich it, which stream the signal out to an analog-to-digital converter just as in a silicon sensor. The capture process and the characteristics of the quantum dots are what differentiate QuantumFilm from a typical CMOS sensor.

InVisage's first QuantumFilm product is the 13-megapixel Quantum13, a sensor with 1.1-micron pixels that fits in an 8.5mm-square by 4mm-deep module. The company expects to be able to ship the Quantum13 to phone manufacturers by the end of this year.
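As I understand the capture chain described above, it can be sketched as a toy numeric model. Every constant here (quantum efficiency, well capacity, ADC depth) is invented for illustration, not an InVisage specification:

```python
import random

def capture_pixel(photons, quantum_efficiency=0.9, full_well=4000, adc_bits=10):
    """Toy model of one QuantumFilm pixel: photons free electron-hole
    pairs in the dot layer, the charge drifts to the electrodes, and
    the collected signal is digitized. All constants are made up."""
    # Each photon frees an electron-hole pair with some probability.
    electrons = sum(1 for _ in range(photons) if random.random() < quantum_efficiency)
    # The electrodes can only collect so much charge (saturation).
    electrons = min(electrons, full_well)
    # Analog-to-digital conversion, just as with a silicon sensor.
    return round(electrons / full_well * (2 ** adc_bits - 1))

print(capture_pixel(2000))  # a mid-brightness 10-bit pixel value
```

The point of the sketch is that the back end (charge collection, digitization) looks like any other sensor; what changes is the front end, where the dots rather than silicon buckets do the photon capture.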
This technology does enable a couple of important improvements. Because QuantumFilm dumps an entire frame of image data at once (unlike conventional sensors, which read it out a line at a time), it can potentially eradicate rolling shutter, whose telltale wobble is one of the ugliest problems with phone video.
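To illustrate why line-at-a-time readout wobbles, here's a hypothetical sketch: a vertical edge moving sideways while a rolling shutter reads the frame row by row comes out slanted, while a whole-frame readout keeps it straight. The scene and timing are made up for the demo:

```python
def capture_frame(edge_speed, rows=8, cols=16, rolling=True):
    """Render a vertical dark/light edge that moves `edge_speed`
    columns per row-readout interval. With a rolling shutter each row
    samples the scene at a later moment, skewing the edge."""
    frame = []
    for row in range(rows):
        # Rolling shutter: row N is exposed N time-steps later.
        # Whole-frame readout: every row sees the same instant.
        t = row if rolling else 0
        edge = 5 + edge_speed * t
        frame.append(''.join('#' if col < edge else '.' for col in range(cols)))
    return frame

print('\n'.join(capture_frame(edge_speed=1, rolling=True)))   # slanted edge
print()
print('\n'.join(capture_frame(edge_speed=1, rolling=False)))  # straight edge
```

In the rolling case every row renders the edge one column further along, which is exactly the skew that shows up as jello-like wobble when the camera shakes.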
It can also potentially eliminate the color filter array (CFA) at the front of the sensor, which is how the sensor captures color information. Dropping the CFA would let more light through, which I think would unambiguously improve low-light photo quality. However, despite the company raising this possibility when it started up five years ago, the Quantum13 still has a CFA.
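To make the light-loss argument concrete, here's my own back-of-the-envelope estimate, not a figure from InVisage: if each pixel's filter passes roughly one-third of the visible light, removing the CFA is worth more than a stop and a half of sensitivity.

```python
import math

# Back-of-the-envelope estimate of the light lost to a Bayer color
# filter array. Assumes each filter passes roughly one-third of the
# visible light at its pixel -- a simplification, since real filter
# transmission curves overlap and aren't perfectly efficient.
cfa_transmission = 1 / 3
stops_lost = math.log2(1 / cfa_transmission)
print(f"CFA passes ~{cfa_transmission:.0%} of the light, "
      f"costing ~{stops_lost:.1f} stops of sensitivity")
```

That's the prize a CFA-free sensor would be chasing in low light, which makes it a little disappointing that the Quantum13 keeps the filter.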
InVisage previously released some photo samples and an October video shot with a prototype, and based on those I have mixed feelings. QuantumFilm's light-response characteristics are a cross between film and silicon: in the bright areas it acts like film, gradually losing detail as brightness increases (nonlinear response, the way your eye sees), but in the dark areas and midtones it responds like a silicon sensor, losing detail in direct proportion to the decrease in brightness (linear response). Based on the samples, I'm impressed with the performance in the bright areas, but not so much with the overall photo quality.
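That hybrid response could be sketched as a piecewise tone curve: linear through the shadows and midtones, then a smooth film-like shoulder in the highlights. The knee position and roll-off shape below are my own invention, purely to show the idea:

```python
import math

def hybrid_response(brightness, knee=0.7):
    """Toy tone curve for the hybrid behavior: linear (silicon-like)
    up to `knee`, then a smooth film-like roll-off that compresses
    highlights instead of clipping. Knee and shape are invented."""
    if brightness <= knee:
        return brightness  # shadows and midtones: linear response
    # Highlights: exponential shoulder that approaches but never
    # reaches full saturation, preserving some bright-area detail.
    return knee + (1 - knee) * (1 - math.exp(-(brightness - knee) / (1 - knee)))

for b in (0.2, 0.7, 1.0, 2.0):
    print(f"scene {b:.1f} -> recorded {hybrid_response(b):.3f}")
```

A scene value well past the knee still maps below full scale, which is the film-like graceful highlight behavior the samples seem to show.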
That said, the lens used for these samples looks terrible, and each manufacturer will be able to tweak and optimize the imaging pipeline and the surrounding hardware, so assuming InVisage doesn't run into any implementation problems, I'm optimistic.