Types and methodologies of CMOS sensors

6 August 2024

In a camera system, the image sensor receives incident light (photons) that is focused through a lens or other optics. Depending on whether the sensor is CCD or CMOS, it will transfer information to the next stage as either a voltage or a digital signal. CMOS sensors convert photons into electrons, then to a voltage, and then into a digital value using an on-chip Analog to Digital Converter (ADC).
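As a rough illustration of that signal chain, the sketch below models a single pixel from photon count to digital number. Every parameter value here (quantum efficiency, conversion gain, full-scale voltage, ADC bit depth) is an assumed example for illustration, not taken from any real sensor:

```python
# Simplified model of the CMOS signal chain:
# photons -> electrons -> voltage -> digital number (DN).
# All parameter values are illustrative assumptions, not real sensor specs.

def cmos_pixel_output(photons, quantum_efficiency=0.7,
                      conversion_gain_uv_per_e=50.0,
                      full_scale_uv=1_000_000.0, adc_bits=12):
    """Return the digital number a pixel would report for a given photon count."""
    electrons = photons * quantum_efficiency             # photoelectric conversion
    microvolts = electrons * conversion_gain_uv_per_e    # charge-to-voltage conversion
    max_code = 2 ** adc_bits - 1
    code = round(microvolts / full_scale_uv * max_code)  # on-chip ADC quantization
    return min(code, max_code)                           # clip at sensor saturation

print(cmos_pixel_output(10_000))      # mid-range signal
print(cmos_pixel_output(1_000_000))   # saturates at the 12-bit ceiling (4095)
```

Real sensors add noise sources (shot noise, read noise, dark current) that this sketch ignores.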

As nearly all modern cameras use CMOS sensors, any machine vision engineer benefits from understanding how these sensors work; that knowledge helps in selecting a suitable camera for a project.


Thanks to LUCID VISION LABS.

Based on how they are manufactured and architected, CMOS sensors can be classified as front-illuminated, back-illuminated, or stacked.

A traditional, front-illuminated sensor is constructed in a fashion similar to the human eye, with a lens at the front and photodetectors at the back. This orientation places the active matrix of the image sensor (the array of individual picture elements) and its wiring on the front surface, which simplifies manufacturing. The matrix and its wiring, however, block some of the incident light, so the photodiode layer receives only the remainder; this blockage reduces the signal available to be captured.

A back-illuminated sensor contains the same elements but arranges the wiring behind the photodiode layer: the silicon wafer is flipped during manufacturing and its reverse side is thinned so that light can strike the photodiode layer without passing through the wiring layer. This change can improve the chance of an incident photon being captured (the quantum efficiency) from about 60% to over 90%.
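To see what that difference means in practice, here is a small sketch comparing signal and shot-noise-limited SNR at the two quantum efficiencies quoted above. The photon count and the shot-noise-only assumption are illustrative simplifications:

```python
import math

# Compare front- vs back-illuminated pixels using the quantum efficiencies
# quoted in the text (~60% FSI vs ~90% BSI), assuming photon shot noise only.

def shot_noise_snr(photons, quantum_efficiency):
    """SNR when shot noise dominates: signal / sqrt(signal) = sqrt(electrons)."""
    electrons = photons * quantum_efficiency
    return electrons / math.sqrt(electrons)

for name, qe in [("front-illuminated", 0.60), ("back-illuminated", 0.90)]:
    print(f"{name}: SNR = {shot_noise_snr(10_000, qe):.1f}")
```

Because shot-noise SNR grows as the square root of collected electrons, raising quantum efficiency from 60% to 90% improves SNR by about 22%, not 50%.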

Stacked CMOS chips improve upon the back-illuminated concept. The photodiodes keep a similar arrangement, but the design also stacks the image signal processor and its ultra-fast DRAM memory into the same silicon package, making readout even faster.

The 2-Layer Transistor Pixel is the world's first stacked CMOS image sensor technology with a pixel structure that separates the photodiodes and pixel transistors onto different substrate layers, as opposed to the conventional style of having the two on the same substrate. This new structure approximately doubles the saturation signal level relative to conventional image sensors, widens dynamic range, and reduces noise. It enables pixels to retain good imaging properties even at smaller pixel sizes.


We also need to understand two other important methodologies, global shutter and rolling shutter, which describe how the camera sensor reads the signal off its pixels.

Global Shutter:

After a global shutter camera has been exposed to a signal from a sample, all sensor pixels are read out simultaneously, hence the term global shutter. Images obtained from these sensors are therefore snapshots of a single point in time. This is advantageous when synchronizing camera exposure to light-source activation using hardware triggers, as the exposure happens at the same time across the whole sensor. The more pixels there are to transfer, though, the slower the total frame rate, even if the whole frame is captured at once. In addition, a global shutter can result in increased read noise, limited frame rates, and longer duty cycles for the camera.
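The pixel-count-versus-frame-rate tradeoff can be seen with a back-of-the-envelope calculation: the interface bandwidth caps how many bits per second can leave the camera. The link speed and bit depth below are assumed example numbers, not the specs of any particular camera:

```python
# Rough frame-rate ceiling set by interface bandwidth, illustrating why more
# pixels mean a slower maximum frame rate. All numbers are assumed examples.

def max_fps(width, height, bits_per_pixel, link_gbps):
    """Upper bound on frames per second for a given sensor and link speed."""
    bits_per_frame = width * height * bits_per_pixel
    return link_gbps * 1e9 / bits_per_frame

# e.g. a 1920x1080 sensor with 12-bit pixels on an assumed 5 Gbit/s link:
print(f"{max_fps(1920, 1080, 12, 5):.1f} fps")
# Quadrupling the pixel count cuts the ceiling to a quarter:
print(f"{max_fps(3840, 2160, 12, 5):.1f} fps")
```

Real cameras sit below this ceiling because of protocol overhead, exposure time, and sensor readout limits.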

Rolling Shutter:

While a global shutter reads out the entire sensor at the same time, some camera sensors read out row by row, with the readout 'rolling' down the sensor rows, which is why this method is known as the rolling shutter. Each row takes a certain amount of time to read out (e.g. 10 μs), known as the 'line time', meaning that the resulting image features a small time delay between each row. Due to this slight delay, the top rows of the sensor can already be starting to image a new frame while the bottom rows are still reading out the previous frame. This staggered readout can lead to image artifacts with high-speed samples.
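The size of that row-to-row delay is easy to estimate. The sketch below uses the 10 μs line time mentioned above; the sensor height is an assumed example value:

```python
# Rolling-shutter timing skew, using the 10 us line time from the text.
# The sensor row count is an assumed example, not a specific sensor's spec.

LINE_TIME_US = 10    # readout time per row ('line time'), in microseconds
SENSOR_ROWS = 2048   # assumed sensor height in rows

def row_readout_start_us(row):
    """Time offset at which a given row begins its readout within a frame."""
    return row * LINE_TIME_US

# Skew between the first and last rows of one frame:
skew_us = row_readout_start_us(SENSOR_ROWS - 1) - row_readout_start_us(0)
print(f"top-to-bottom readout skew: {skew_us / 1000:.2f} ms")
```

A skew of roughly 20 ms means anything that moves appreciably within 20 ms (rotating fans, fast conveyor parts) will appear sheared or distorted in the image.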

Global and Rolling Shutter capture difference:


Based on the light sensitivity needed, whether moving or stationary objects are to be captured, and the effective use of triggers and lighting (especially strobing), one has to decide on the type of camera for the project, in addition to choosing the camera's resolution based on the field of view (FOV) and the required accuracy.
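As a rough illustration of the resolution step, the sketch below estimates the pixel count needed along one axis from the FOV and the smallest feature that must be resolved. The three-pixels-per-feature margin is an assumed rule of thumb, not a universal figure; the right factor depends on the algorithm and required accuracy:

```python
import math

# Rule-of-thumb resolution estimate for a vision application.
# 'pixels_per_feature' is an assumed margin, not a universal constant.

def required_pixels(fov_mm, smallest_feature_mm, pixels_per_feature=3):
    """Minimum pixel count along one axis to resolve a feature across the FOV."""
    return math.ceil(fov_mm / smallest_feature_mm * pixels_per_feature)

# e.g. a 200 mm wide FOV where 0.1 mm defects must be detected:
print(required_pixels(200, 0.1))
```

The same calculation is done independently for the horizontal and vertical axes, and the result is then rounded up to the nearest standard sensor resolution.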

All the parameters discussed above, together with the different mounts, pixel sizes, and interfaces, matter when selecting a camera for any vision application.

 

Compiled and written by S.SUKUMAR – DIRECTOR – PROJECTS

 

Note:

This blog may contain compilations of related content and images from general-purpose websites or from the websites of the companies we represent in India. Anyone objecting to the use of such material can report it to us at [email protected], and we shall make corrections as per legal requirements. This blog is for informative purposes only.
