ADVANCED MANUFACTURING NOW: Modern Manufacturing Processes, Solutions & Strategies


Say goodbye to the single-model pattern-matching algorithm


The classic machine vision inspection application in manufacturing is binary. For example, in a typical application, the vision system captures an image of a part resting on the infeed conveyor and determines whether or not it is the right part. If not, the part is rejected without any thought being given to what it actually might be.

This type of application typically uses a pattern-matching algorithm trained on an image of the correct part. The pattern-matching algorithm learns the object's geometry, and when a part is inspected it looks for those shapes in the image.
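To make the binary case concrete, here is a minimal sketch of a pass/fail check. It uses plain normalized cross-correlation template matching in OpenCV as a stand-in for the geometric pattern-matching tool the article describes; the file name and acceptance threshold are illustrative assumptions, not values from the article.

```python
import cv2

# Placeholder file name and threshold -- illustrative assumptions.
TEMPLATE_PATH = "good_part.png"   # reference image of the correct part
ACCEPT_SCORE = 0.80               # minimum correlation score to accept

def is_right_part(frame_gray) -> bool:
    """Return True if the trained pattern is found anywhere in the image."""
    template = cv2.imread(TEMPLATE_PATH, cv2.IMREAD_GRAYSCALE)
    # Normalized cross-correlation: scores close to 1.0 mean a strong match.
    scores = cv2.matchTemplate(frame_gray, template, cv2.TM_CCOEFF_NORMED)
    _, best_score, _, _ = cv2.minMaxLoc(scores)
    return best_score >= ACCEPT_SCORE

# A part scoring below the threshold is simply rejected; the system never
# asks what the part actually is.
```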


A more challenging application comes when the vision system is asked to go beyond a true-or-false question to answer a multiple-choice question. Let's say that we want the vision system to capture an image and tell us which of 12 different parts is present so we can match it with the bill of materials for the build.

This is not a problem for most vision systems, but it requires considerable extra work. Instead of training the pattern-matching algorithm on a single part, it must be trained on 12 parts. Then, for each part that is inspected, the algorithm must search the acquired image 12 different times. Getting all 12 of these inspections done in a single cycle can be challenging. For example, a typical inspection operation might take 38 milliseconds (ms) to compare the acquired image to each model, for a total of about 460 ms, or almost one-half second, to compare all 12 models.
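The cycle-time arithmetic is simply 12 sequential searches at roughly 38 ms each. The sketch below shows the brute-force loop under that assumption; the helper name `search_one_model` and the part labels are hypothetical, not part of any real vision library.

```python
# Back-of-the-envelope timing for the single-model approach: one search per
# trained pattern, run back to back.
PER_SEARCH_MS = 38
PART_MODELS = [f"part_{i:02d}" for i in range(12)]   # 12 trained patterns

def identify_sequentially(image, search_one_model):
    """Run one single-model search per part type and return the best-scoring part.

    `search_one_model(image, model)` stands in for whatever single-model
    pattern-matching call the vision system exposes; it is assumed to return
    a match score between 0 and 1.
    """
    scores = {m: search_one_model(image, m) for m in PART_MODELS}  # 12 passes
    return max(scores, key=scores.get)

print(f"Worst case: {PER_SEARCH_MS * len(PART_MODELS)} ms per part identified")
# -> 456 ms, roughly the 460 ms figure cited above
```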




A multiple-model pattern-matching algorithm can substantially reduce the amount of time required to perform this type of inspection. The algorithm works by capturing images of all of the parts or features that need to be inspected and combining them into a single multiple model. After an image is acquired, the algorithm only needs to search it a single time.
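One way to picture the composite idea is sketched below. This is a toy illustration, not Cognex's implementation: each part's trained features are abstracted as a set of descriptors, the acquired image is analyzed once, and every stored part is then scored against that single extraction.

```python
from dataclasses import dataclass, field

@dataclass
class MultipleModel:
    """Toy composite model: one container holding trained features per part."""
    parts: dict = field(default_factory=dict)   # part name -> set of features

    def add_part(self, name, features):
        """Fold one part's trained features into the composite model."""
        self.parts[name] = set(features)

    def identify(self, image_features):
        """Score every stored part against features extracted in ONE image pass."""
        image_features = set(image_features)
        best_name, best_score = None, 0.0
        for name, trained in self.parts.items():
            # Set overlap stands in for geometric scoring; the costly step
            # (extracting features from the image) has already happened once.
            union = trained | image_features
            score = len(trained & image_features) / len(union) if union else 0.0
            if score > best_score:
                best_name, best_score = name, score
        return best_name, best_score
```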


The multiple-model pattern-matching algorithm is considerably more efficient than conventional single-model pattern matching because the features of the captured image only need to be inspected once. Based on the geometric features contained in the model, the algorithm determines which part is present. As you can imagine, this approach takes considerably less time than a traditional single-model pattern-matching algorithm.

Let's look at a typical example of an automobile assembly plant that uses 12 different wheels. Four vision systems are used to ensure that each of the four wheels on each vehicle matches the build order. Under the hood, the multiple-model pattern-matching tool collects the distinguishing features of each wheel and stores them in a single model. When the multiple-model pattern-matching algorithm runs, it returns a registration or inspection result based on the model that produces the best result. If the part being inspected does not match any of the stored models, then the algorithm returns a not-found result.
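The run-time decision in the wheel scenario amounts to taking the best-scoring stored model and falling back to a not-found result when nothing scores well enough. The sketch below illustrates that logic; the wheel names, scores, and threshold are made up for the example.

```python
ACCEPT_SCORE = 0.6   # assumed acceptance threshold, not a value from the article

def best_or_not_found(scores: dict) -> str:
    """Pick the stored wheel model with the best score, or report not-found."""
    if not scores:
        return "NOT_FOUND"
    name, score = max(scores.items(), key=lambda kv: kv[1])
    return name if score >= ACCEPT_SCORE else "NOT_FOUND"

# Per-wheel scores from one composite search (made-up numbers):
print(best_or_not_found({"alloy_18in": 0.91, "steel_16in": 0.42}))  # alloy_18in
print(best_or_not_found({"alloy_18in": 0.31, "steel_16in": 0.28}))  # NOT_FOUND
```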


Today, a multiple-model pattern-matching algorithm is available that can capture an image and determine which of 12 different wheels is present in only 190 milliseconds, about 40% of the time required by a single-model pattern-matching algorithm and fast enough to meet the cycle time of just about any assembly line. This approach also reduces the amount of memory required for the vision application.


Attempting to train a conventional pattern in applications where there is a lot of variation in the appearance of good parts often produces an unusable pattern, since the pattern includes numerous features that are not present in other run-time part images. Intelligent, self-learning, composite pattern-matching tools can improve inspection accuracy and simplify application setup by automating the process of learning to distinguish between image features that are important and those that can be ignored without creating an issue.
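A rough way to picture that self-learning step, purely as an illustration rather than the vendor's method: train from several images of good parts and keep only the features that show up consistently, so incidental detail is dropped automatically.

```python
from collections import Counter

def learn_stable_features(training_feature_sets, keep_fraction=0.8):
    """Keep only features seen in at least `keep_fraction` of the training images.

    `training_feature_sets` is a list of sets, one per training image of a
    good part. Features that appear only occasionally are treated as noise
    and left out of the composite pattern.
    """
    counts = Counter(f for feats in training_feature_sets for f in feats)
    needed = keep_fraction * len(training_feature_sets)
    return {f for f, n in counts.items() if n >= needed}
```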


David Michael


Director of Engineering, Core Vision Tools, Cognex Corp.


Fall 2016

