ILLUMINATION


Guided by the light


Matthew Dale explores the power of computational imaging, all made possible by clever illumination


Machine vision's success rests on its ability to highlight features in an image, and illumination is key here. Regardless of the amount of processing power available, it is impossible to identify a feature in an image if it is not visible at the point of capture because the lighting is inadequate.

Computational imaging (CI) aims to overcome this issue. It is a technique whereby a sequence of images is captured, each with a different lighting or optical configuration. Data can then be extracted from each image and merged to create a single, high-quality output that contains details relevant to a particular application. Hence, different combinations of illumination – varying light direction, angle, wavelength, intensity, focus, polarisation or scattering – can bring out different features in an image. Fast sensors, high-speed data interfaces, and cheap computing power mean that CI is now a viable imaging method.

Two common CI techniques are high dynamic range and stereovision, the first combining multiple exposures to produce a single image with increased dynamic range, and the second combining images from two different viewpoints to produce a 3D image. Other CI techniques include extended depth of field imaging, ultra-resolution colour imaging, and photometric stereo imaging.
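As a rough illustration of the first of these techniques, the snippet below sketches high dynamic range imaging by exposure fusion using OpenCV's Mertens merge. The bracketed file names are hypothetical, and the approach is a generic one rather than any particular vendor's implementation.

```python
import cv2
import numpy as np

# Three bracketed exposures of a static scene (hypothetical file names).
exposures = [cv2.imread(name) for name in ("under.png", "mid.png", "over.png")]

# Mertens exposure fusion weights each pixel by contrast, saturation and
# well-exposedness, then blends the stack into a single image without
# needing exposure times or a separate tone-mapping step.
merge = cv2.createMergeMertens()
fused = merge.process(exposures)   # float result, roughly in the range [0, 1]

cv2.imwrite("fused.png", np.clip(fused * 255, 0, 255).astype("uint8"))
```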


The lighting firm CCS has developed a range of lights dedicated to CI. For example, the firm's ring lights have been designed to offer four independently controllable quadrants, making them suited to providing the multi-directional lighting required to perform photometric stereo imaging.

Photometric stereo, according to Marc Landman, senior technical advisor at CCS America, is currently one of the most widely adopted CI techniques in machine vision, with most vision firms now having an implementation of photometric stereo in their software. It is used to extract relevant 3D features on the surface of an object that would otherwise be difficult to capture in an image under normal lighting conditions.


A setup for photometric stereo imaging in which multiple lights are used to illuminate an object from different directions




Landman highlighted the greater awareness and uptake of CI in a presentation he gave about computational imaging as part of the AIA's recent Vision Week, which was held online in May in place of the US Vision Show. He told Imaging and Machine Vision Europe that machine vision manufacturers are now responding to this trend with CI-compatible hardware and software.

Photometric stereo works by capturing at least four images of an object, with light coming from a different direction in each image – for example, north, east, south and west, Landman explained. This throws shadows from any 3D structures on the surface, which are captured from four different directions in the four images. The edges and the shadow information can then be merged using a photometric algorithm, and the surface relief can be extracted three-dimensionally. This greatly accentuates the features on the surface.
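To make the merging step concrete, here is a minimal sketch of the classic Lambertian photometric stereo formulation. The four light directions are illustrative unit vectors for north, east, south and west ring-light segments at roughly 45 degrees elevation – assumptions for the example, not CCS's actual calibration or algorithm.

```python
import numpy as np

def photometric_stereo(images, light_dirs):
    """images: list of K greyscale float arrays (H, W); light_dirs: (K, 3) unit vectors."""
    h, w = images[0].shape
    I = np.stack([img.reshape(-1) for img in images])   # (K, H*W) pixel intensities
    L = np.asarray(light_dirs, dtype=np.float64)         # (K, 3) light directions

    # Lambertian model: I = L @ (albedo * normal). Solve per pixel by least
    # squares using the pseudo-inverse of the light-direction matrix.
    G = np.linalg.pinv(L) @ I                             # (3, H*W)
    albedo = np.linalg.norm(G, axis=0)                    # reflectance ("texture") map
    normals = G / np.maximum(albedo, 1e-8)                # unit surface normals

    return albedo.reshape(h, w), normals.T.reshape(h, w, 3)

# Illustrative segment directions: north, east, south, west at ~45 degrees elevation.
c = np.cos(np.radians(45))
lights = [(0, c, c), (c, 0, c), (0, -c, c), (-c, 0, c)]
```

The recovered normal map encodes the surface relief: embossed or raised regions show up as abrupt changes in the normals, which is what makes otherwise faint features stand out.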


An example of where this can be used is inspecting pharmaceutical cartons. Under conventional lighting, both the printed universal product code (UPC) and the embossed date or lot code can be seen, but they are superimposed, making it virtually impossible for both to be read by a machine vision system. The problem can be overcome with photometric stereo.

The carton flap is imaged sequentially using CCS' four-segment ring light. Photometric stereo algorithms are then used to combine the images to produce a shape image and a texture image of the flap. The shape image contains only the depth data obtained from the shadowing cast by the embossed text, and therefore reveals the date or lot code, while in the texture image only the UPC code is readable. The output can then be read using a conventional OCR algorithm.



