FEATURE Machine vision


The DLPC350 controller from Texas Instruments provides input and output trigger signals for synchronising displayed patterns with a camera. It works with digital micromirror devices, designed to impart 3D machine vision to industrial, medical, and security equipment.


• Pattern recognition is used to identify specific objects. At its simplest, this might mean looking for a specific well-defined mechanical part on a conveyor.


encoding and compression. Advantages over analogue image processing include minimised noise and distortion, as well as the availability of more algorithms. One early image-enhancement use was correction of the first close-range images of the lunar surface. This used photogrammetric mapping as well as noise filters and corrections for the geometric distortions arising from the imaging camera’s alignment. Digital image enhancement often involves increasing the image contrast, and may also make geometric corrections for viewing angle and lens distortion. Compression is typically achieved by approximating a complex signal as a combination of cosine functions – a type of Fourier transform known as the discrete cosine transform, or DCT. The JPEG file format is the most popular application of DCT. Image restoration may also use Fourier transforms to remove noise and blurring.

• Photogrammetry uses a form of feature identification to extract measurements from images. These measurements can include 3D information when multiple images of the same scene have been obtained from different positions.
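The cosine-function approximation behind DCT compression can be sketched in a few lines of NumPy. This is a 1D toy that keeps only the largest transform coefficients – not the 8×8 blocked, quantised transform JPEG actually uses – and the sample signal is invented for illustration:

```python
import numpy as np

def dct_matrix(n: int) -> np.ndarray:
    """Orthonormal DCT-II basis: row k samples the k-th cosine function."""
    k = np.arange(n)[:, None]
    m = np.arange(n)[None, :]
    basis = np.cos(np.pi * k * (2 * m + 1) / (2 * n)) * np.sqrt(2.0 / n)
    basis[0] /= np.sqrt(2.0)           # DC row rescaled for orthonormality
    return basis

signal = np.array([8.0, 9.0, 9.5, 9.0, 7.0, 4.0, 2.0, 1.0])
D = dct_matrix(len(signal))

coeffs = D @ signal                    # forward transform: signal -> cosine weights
compressed = coeffs.copy()
small = np.argsort(np.abs(coeffs))[:-3]
compressed[small] = 0.0                # "compress": keep only the 3 largest terms

approx = D.T @ compressed              # inverse transform (D is orthonormal)
print(np.round(approx, 2))             # close to the original 8 samples
```

Because smooth signals concentrate their energy in a few low-frequency cosine terms, discarding the rest loses little – which is the essence of DCT-based compression.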


The simplest photogrammetry systems measure the distance between two points in an image using a scale; including a known scale reference in the image is normally required for this purpose.

• Feature detection allows computers to identify edges, corners, or points in an image. This is a required first step for photogrammetry, as well as for the identification of objects and motion. Blob detection can identify regions whose edges are too smooth for edge or corner detection.
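The scale-reference measurement described above reduces to simple arithmetic: calibrate millimetres-per-pixel from a known object, then apply that ratio to unknown features. A minimal sketch (all pixel coordinates and the 50 mm reference length are invented for illustration):

```python
import math

def pixel_distance(p, q):
    """Euclidean distance between two image points, in pixels."""
    return math.hypot(q[0] - p[0], q[1] - p[1])

# A known scale reference in the image: a 50 mm bar spanning 200 px.
mm_per_px = 50.0 / pixel_distance((100, 100), (300, 100))

# Measure an unknown feature using the same scale.
length_mm = mm_per_px * pixel_distance((120, 210), (420, 610))
print(round(length_mm, 1))  # 125.0
```

This only holds when the feature lies in the same plane as the reference and the view is roughly perpendicular; otherwise the geometric corrections mentioned earlier are needed first.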


3D scanners capture 2D images of an object to create a 3D model of it. In some cases, the digital models are then used to 3D print copies.


• 3D reconstruction determines the 3D form of objects from 2D images. It can be achieved by photogrammetric methods in which the height of common features (identified in images from different observation points) is determined by triangulation. 3D reconstruction is also possible using a single 2D image; here, software interprets (among other things) the geometric relationships between edges or regions of shading. A human can mentally reconstruct a cube from a simple line-art representation with ease – and a sphere from a shaded circle. Shading gives an indication of a surface’s slope. However, such deduction is more complicated than it seems, because shading is a one-dimensional parameter while slope occurs in two dimensions. This can lead to ambiguities – a fact demonstrated by art depicting objects that cannot physically exist.
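For the triangulation case, the simplest configuration is two rectified (parallel) cameras, where depth follows directly from how far a matched feature shifts between the two views. A sketch under that assumption, with invented numbers:

```python
def stereo_depth(focal_px: float, baseline_m: float,
                 x_left: float, x_right: float) -> float:
    """Depth of a feature matched in two rectified camera views.

    The feature appears at x_left and x_right (pixel columns); the
    disparity between them shrinks as the feature moves further away:
    depth = focal_length * baseline / disparity.
    """
    disparity = x_left - x_right
    if disparity <= 0:
        raise ValueError("feature must shift leftward between views")
    return focal_px * baseline_m / disparity

# Feature at column 640 in the left image, 600 in the right;
# 800 px focal length, cameras 0.10 m apart.
print(stereo_depth(800.0, 0.10, 640.0, 600.0))  # 2.0 metres
```

General multi-view photogrammetry solves the same triangle geometry for arbitrary camera positions, but this special case shows why wider baselines and larger disparities give more reliable depth.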


How machine-vision tasks are ordered


Many machine-vision systems progressively combine these techniques, by starting with low-level operations and then advancing one by one to higher-level operations. At the lowest level, all of an image’s pixels are held as high-bandwidth data. Then each operation in the sequence identifies image features and represents information of interest with relatively small amounts of data. The low-level operations of image enhancement and restoration come first, followed by feature detection. Where multiple sensors are used, low-level operations may therefore be carried out by distributed processes dedicated to individual sensors. Once features in individual images are detected, higher-level photogrammetric measurements can occur – as can any object identification or other tasks relying on the combined data from multiple images and sensors.
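The low-to-high ordering can be sketched as a chain of stages, each consuming the previous stage’s output and emitting a smaller, more abstract representation. The stage functions below are invented placeholders, not a real vision library:

```python
def enhance(image):
    """Low level: touches every pixel (here, a contrast stretch to 0..1)."""
    lo = min(min(row) for row in image)
    hi = max(max(row) for row in image)
    return [[(p - lo) / (hi - lo) for p in row] for row in image]

def detect_features(image):
    """Mid level: reduces the pixel grid to a short list of coordinates."""
    return [(r, c) for r, row in enumerate(image)
            for c, p in enumerate(row) if p > 0.8]

def measure(features):
    """High level: a single quantity derived from the feature list."""
    return len(features)

image = [[10, 20, 90],
         [15, 95, 25],
         [92, 18, 12]]

# pixels -> features -> measurement: the data volume shrinks at each step.
print(measure(detect_features(enhance(image))))  # 3 bright features
```

With multiple sensors, each could run its own `enhance` and `detect_features` locally, sending only the small feature lists onward for the combined higher-level stages.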


Direct computations and learning algorithms

A direct computation in the context of machine vision is a set of mathematical functions that are manually defined
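A minimal, invented example of such a manually defined function is a fixed-threshold edge test, where both the rule and its threshold are chosen by hand rather than learned from data:

```python
def is_edge(left: int, right: int, threshold: int = 50) -> bool:
    """Hand-written rule: an edge exists where the horizontal intensity
    step between neighbouring pixels exceeds a fixed threshold.
    Nothing here is learned; the function and threshold are designed."""
    return abs(right - left) > threshold

print(is_edge(30, 200))  # True: a sharp intensity step
print(is_edge(30, 45))   # False: a gentle gradient
```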


automationmagazine.co.uk Automation | May 2024 17

