DEEP LEARNING
Automated pill inspection using Suakit deep learning software
Czardybon, general manager of Adaptive Vision, demonstrated the possibilities of his company’s deep learning tools. Adaptive Vision has used deep learning on projects ranging from detecting cracks in photovoltaic panels to satellite image segmentation and robotic pick-and-place applications. Its latest tool, Instance Segmentation, was released for the Automatica trade fair in Munich in June and is designed with robot pick-and-place tasks in mind. It combines object location, classification and region detection, and has been shown to work in a logistics application where the robot handles many different types of object under large changes in lighting. The tool is trained with sample images using masks; it can separate objects that are touching, and even objects that intersect, according to Czardybon.
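The mask-based training described above can be illustrated with a toy label mask. This is a hedged sketch of the general idea of instance labels, not Adaptive Vision's actual file format: each pixel stores an instance id, so two touching objects remain distinguishable in the ground truth even though their regions share a border.

```python
import numpy as np

# Illustrative instance-label mask (hypothetical format, for explanation only).
# 0 = background; each positive integer is a separate object instance.
mask = np.zeros((6, 8), dtype=np.int32)
mask[1:5, 1:4] = 1   # instance 1
mask[1:5, 4:7] = 2   # instance 2, directly touching instance 1

# Unlike a binary (semantic) mask, the two touching objects stay separable
instance_ids = np.unique(mask[mask > 0])
print("instances in label mask:", instance_ids.tolist())
```

A binary foreground mask would fuse the two rectangles into one blob; per-instance ids are what let a tool trained on such masks separate touching objects.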
Preparing for success
Training a CNN from scratch typically requires hundreds of thousands of images. This is a huge effort – not only acquiring that many images, but also labelling them. Training the network takes time, too. Most machine vision software packages get around this problem – in the case of MVTec’s Halcon, by providing two pre-trained neural networks that need only a few hundred images to fine-tune. This is called transfer learning. The basic network is trained by MVTec, and the customer then tunes it using a few hundred of their own labelled images. A CNN consists of multiple layers that
extract features relevant for distinguishing between classes. The first layers capture general features, such as edges, colour or lighting changes – basic features that nearly all kinds of applications have in common. The last layers extract more detailed, application-specific features, and it is these last layers that are fine-tuned by the customer. Both MVTec and Matrox Imaging suggest a few hundred images for their respective CNNs, while Adaptive Vision says 20 to 50 images for training. In general, however, the more images, the better the results. Halcon’s pre-trained CNNs are based on
an in-house image dataset covering different industrial applications, including electronics, food and automotive, among others. The pre-trained network is optimised for industrial applications and is based on millions of images. There are pre-trained, open source neural
networks available, but they will typically have been trained on everyday images, not industrial ones. In some cases, the images used to train open source CNNs can be subject to copyright restrictions. A common image dataset, called ImageNet, consists of millions of images – most of which carry a disclaimer stating that the images may only be used for research purposes and not for commercial gain. ‘When our customers design an application for their customers, they have to make sure they don’t infringe any licence agreements,’ explained Hiltner. The networks MVTec supplies have no copyright licence issues.

26 Imaging and Machine Vision Europe • August/September 2018

Of the two pre-trained networks Halcon
offers, one is optimised for speed and the other for recognition rate; most applications in the industrial machine vision market can be solved with the speed-optimised network, Hiltner said. Work is still needed, however, to prepare images to train a CNN, even with a pre-trained network. Pierantonio Boriero, director of product management at Matrox Imaging, commented: ‘A fair quantity of carefully labelled images is needed to effectively train and validate a CNN for a given purpose. Not only does this need to be done initially, but also continuously, especially when new or unforeseen cases arise in a particular application, to maintain system accuracy.’ Matrox Imaging employs machine learning in one of its optical character recognition (OCR) tools. The company has also released
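The transfer-learning workflow the article describes – a pre-trained feature extractor whose early, general-purpose layers stay frozen while only the task-specific last layers are tuned on a small set of customer images – can be sketched in a toy form. This is an illustrative NumPy sketch under assumed toy data, not MVTec's, Matrox's or Adaptive Vision's actual API:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the vendor's "pre-trained" feature layers: frozen weights
# that extract general features (in a real CNN: edges, colour, lighting).
W_frozen = rng.normal(size=(4, 8))
W_frozen_initial = W_frozen.copy()

# Task-specific last layer, the only part the customer fine-tunes
W_head = np.zeros((8, 2))

def forward(x):
    features = np.maximum(x @ W_frozen, 0.0)   # frozen ReLU feature layer
    return features @ W_head                   # trainable classification head

# A small labelled set: transfer learning needs far fewer samples
# than training from scratch (hundreds rather than hundreds of thousands)
X = rng.normal(size=(50, 4))
y = (X[:, 0] > 0).astype(int)                  # two toy classes
T = np.eye(2)[y]                               # one-hot targets

lr = 0.1
for _ in range(200):
    feats = np.maximum(X @ W_frozen, 0.0)
    logits = feats @ W_head
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)          # softmax probabilities
    # Gradient updates only the head; W_frozen is never touched
    W_head -= lr * feats.T @ (p - T) / len(X)

acc = (forward(X).argmax(axis=1) == y).mean()
print(f"training accuracy after fine-tuning the head: {acc:.2f}")
```

In a real package the frozen part is a deep convolutional stack trained on millions of images, but the division of labour is the same: general features are reused, and only the last layers adapt to the customer's application.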