Mouser Electronics' Mark Patrick explores ways to enhance system speed and functionality in industrial machine vision while offering greater simplicity

Instrumentation Monthly September 2020

food packaging inspection or aerial surveying, which similarly adds to the overall expense and complexity of the system. In a system requiring all-round inspection of objects, such as labels applied to bottles in a packaging plant, 360-degree vision can be achieved using a hyper-centric (or peri-centric) lens positioned directly above the object itself. A hyper-centric lens captures rays of light as if they originated from a single point located a certain distance in front of the lens. This convergence point and the perimeter of the lens define a viewing cone. Positioning the object within this viewing cone, directly beneath the downward-facing lens, allows light from the top surface and the vertical sides of the object to enter the lens simultaneously. Focusing this light on the sensor allows the entire image to be captured in a single frame. Similar principles allow cameras to capture a 360-degree view inside a hole or cavity, eliminating any need to insert an optical probe. Other techniques for capturing multiple views of an object in a single frame combine the hyper-centric lens with an array of mirrors that effectively sees each side of the object simultaneously.

Sensors – Physics and Fabrication

Increasing the resolution of CMOS image sensors is key to capturing more finely detailed images, although simply reducing the pixel size can result in poorer image quality due to a lower signal-to-noise ratio (SNR). Achieving superior resolution therefore calls for technical advances that reduce pixel size without detracting from sensor performance. These can be made in a number of areas, such as optimising the pixel pitch and the ratio of light-sensitive area to total area (also called the pixel fill factor). More fundamental changes to pixel physics can improve parameters such as gain, efficiency and dynamic range. Sensor manufacturers have also improved the technologies for reading data from pixels, achieving enhancements such as greater SNR, frame rate and linearity.

Among the most important trends driving sensor performance over the last decade or so is the use of back-side illuminated (BSI) sensors. These are illuminated through the back of the silicon, so the metal interconnect no longer obstructs incoming light, enabling pixel miniaturisation without degrading key performance parameters such as well capacity, quantum efficiency and dark current. This has been followed more recently by three-dimensional (3D) stacking of the sensor and image-processing dies to achieve smaller form factors. Subsequently, 3D hybrid stacking, which bonds both silicon-oxide and metal pads, has eliminated through-silicon vias (TSVs) in favour of more efficient direct connections between the two chips. Most recently, sequential integration has been developed, enabling fabrication of monolithic image sensors that each combine a phototransistor array with 3D-stackable pixel-readout logic and memory, connected using integrated high-density I/Os.
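The pixel-size versus SNR trade-off described above can be sketched with a simple shot-noise model. This is an illustrative calculation only: the function name and all parameter values below are assumptions chosen for the example, not manufacturer data.

```python
import math

def pixel_snr_db(pitch_um, fill_factor, flux_per_um2_s, exposure_s, qe, read_noise_e):
    """Estimate single-pixel SNR (dB) under a simple shot-noise model.

    Signal electrons scale with the light-sensitive area
    (pitch^2 * fill_factor); total noise combines photon shot noise
    (sqrt of the signal) with read noise in quadrature.
    """
    area_um2 = pitch_um ** 2 * fill_factor            # light-sensitive area
    signal_e = area_um2 * flux_per_um2_s * exposure_s * qe
    noise_e = math.sqrt(signal_e + read_noise_e ** 2)  # shot + read noise
    return 20 * math.log10(signal_e / noise_e)

# Halving the pitch at the same fill factor quarters the collected signal,
# which is exactly the SNR penalty the text describes.
snr_large = pixel_snr_db(3.0, 0.6, 1000.0, 0.01, 0.7, 3.0)
snr_small = pixel_snr_db(1.5, 0.6, 1000.0, 0.01, 0.7, 3.0)
print(f"3.0 um pixel: {snr_large:.1f} dB, 1.5 um pixel: {snr_small:.1f} dB")
```

Under these assumed illumination conditions the smaller pixel loses several decibels of SNR, which is why the fill-factor and pixel-physics improvements above matter.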


Global Shutter Enhances Imaging on the Move

In high-speed industrial automation, as well as automotive and drone applications, there is a need to capture clear, sharp images of fast-moving objects. This challenges the performance of traditional rolling-shutter image sensors, which read image data from the sensor pixels into the frame buffer one line at a time. If the object is moving, its change in position between the reading of one line and the next can cause distortion such as blurring or bending of the image. Global shuttering improves image sharpness when photographing fast-moving objects or when the camera is mounted on a moving vehicle. First featured in high-end still cameras, the technique is now in demand to enhance the performance of industrial and automotive vision systems. In global shuttering, the charge value of every pixel is stored simultaneously in a small in-pixel memory before being read sequentially into the frame buffer, line by line as before. The result is a clear image, free of rolling-shutter distortion.

Several challenges have been overcome to create global-shutter image sensors that achieve high SNR and dynamic range without increasing pixel size to compensate for the in-pixel memory, which effectively reduces the pixel area available for photon absorption. An example is the 1Mpixel, 1/4-inch-format ON Semiconductor AR0144. Global-shutter pixels feature high quantum efficiency to ensure fast charging while remaining insensitive to charging effects unrelated to the image, such as crosstalk resulting from electron diffusion. In addition, optical shielding is applied close to the sensor to exclude stray illumination from the pixel surface.
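The rolling-shutter skew and its global-shutter remedy can be illustrated with a minimal simulation. Everything here (the scene, the row-readout timing, the parameter names) is a toy construction for illustration, not a real sensor interface.

```python
def capture(rows, cols, x0, velocity, row_readout_time, global_shutter):
    """Render a one-pixel-wide vertical bar moving right at constant speed.

    With a global shutter, every row samples the scene at t = 0, so the
    bar stays vertical. With a rolling shutter, row r is sampled at
    t = r * row_readout_time, so the bar's x position shifts row by row,
    producing the characteristic skew described in the text.
    """
    frame = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        t = 0.0 if global_shutter else r * row_readout_time
        x = int(x0 + velocity * t)  # bar position when this row is sampled
        if 0 <= x < cols:
            frame[r][x] = 1
    return frame

global_frame = capture(8, 16, 2, 1.0, 1.0, global_shutter=True)
rolling_frame = capture(8, 16, 2, 1.0, 1.0, global_shutter=False)
```

In the global-shutter frame the bar occupies the same column in every row; in the rolling-shutter frame it drifts one column per row, bending a straight edge into a diagonal.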


AI in Image Processing

In the signal-processing pipeline, sitting behind the camera optics and the sensor, the commercialisation of machine learning (leveraging deep neural networks) is enabling a revolution in the way images are constructed and information is subsequently extracted from them. One example is the use of AI to significantly improve low-light performance, enabling high-quality images to be taken in near-dark conditions. Raw data captured under low-light conditions is known to challenge traditional signal-processing pipelines. Electronically raising the sensor sensitivity (ISO number) can add noticeable noise, resulting in poor image quality, and applying de-noising to the image has only limited effectiveness. Other techniques, such as extending the exposure time, are often impractical in industrial applications or in cameras on board vehicles.

More recently, an ingenious technique has been developed that leverages machine learning to greatly reduce the detectable noise in images constructed from raw low-light data. A deep neural network is trained on datasets containing raw short-exposure low-light images and corresponding long-exposure reference images. Once fully trained, the network can create high-quality images by working directly on raw short-exposure data. The technique is coming to market in top-of-the-range smartphones, enabling better-looking photographs, and it is equally applicable to capturing better images for industrial and security applications such as production-line inspection and surveillance systems.
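The paired short/long-exposure supervision scheme described above can be sketched in miniature. In this toy version, synthetic arrays stand in for raw sensor captures and the deep network is reduced to a single learned gain fitted by least squares; the point is the training setup (learn a mapping from short-exposure input to long-exposure reference), not the model capacity. All data and numbers here are invented for the illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training pairs standing in for a raw-image dataset:
# 'long_refs' are clean long-exposure references; 'shorts' are the same
# scenes at 1/ratio the exposure, plus additive sensor noise.
ratio = 50.0                                          # exposure ratio long/short
long_refs = rng.uniform(0.2, 0.9, (200, 64))          # clean reference signals
shorts = long_refs / ratio + rng.normal(0.0, 0.002, (200, 64))

# "Network" reduced to one learned gain, fitted by least squares on the
# paired data -- the same supervision as the deep model, minus capacity.
g = np.sum(shorts * long_refs) / np.sum(shorts * shorts)

restored = g * shorts
print(f"learned gain: {g:.1f} (true exposure ratio was {ratio:.0f})")
```

Even this one-parameter "model" recovers an amplification close to the true exposure ratio from the pairs alone; a deep network learns, in addition, the spatial denoising and colour reconstruction that make the real technique effective.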


Figure 2: The AR0144 image sensor from ON Semiconductor.


Conclusion

Numerous technical improvements are being developed throughout modern image-processing systems, from the camera lens at the front end of the system to the image-sensing and image-processing apparatus behind it. Together, these are expected to broaden the range of applications that can be addressed, as well as raising system performance benchmarks.


Mouser Electronics www.mouser.co.uk

