Using Hybrid Vision to Inspect High-Mix Electronics
www.us-tech.com, July 2021
Customization and personalization also increase production environment variance, which can throw off traditional machine vision processes. Quality inspection procedures for such variations may include multiple algorithms applied in a specific sequence to extract useful information from captured images.

Traditional rules-based algorithms define defects using mathematical values and logic rules, but using such algorithms to create an accurate and reliable inspection routine, free of extensive false negatives and false positives, can take hours or days, depending on the product and the programmer's skill level.

Multiply that time requirement by hundreds of product variants and quality inspection becomes unfeasible. Further, defining complex assemblies and shapes using mathematical values results in a rigid rule set that may not offer the best solution for modern production lines.

If a system inspects electronic connectors to verify pin presence, changing lighting conditions could make a pin appear to be crooked or missing, and the vision system might then fail the entire connector. If the connector is a critical system component and the OEM must catch 100 percent of defective connectors regardless of waste, the manufacturer may have to scrap 10, 20 or 30 percent of its product to meet customer specifications, resulting in unnecessary waste and high costs.

Manufacturers require not just automated inspection systems but also flexible inspection technology that adapts as products, processes and environmental conditions change.
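To make that brittleness concrete, the snippet below sketches the kind of fixed-threshold, blob-counting rule a programmer might write for the pin-presence check described above. It assumes OpenCV, and the threshold, minimum area and pin count are hypothetical values, not taken from any real inspection routine.

# Sketch of a rules-based pin-presence check of the kind described above.
# Assumes OpenCV; the threshold, area limit and pin count are hypothetical
# values a programmer would have to tune by hand for each product variant.
import cv2

EXPECTED_PINS = 8        # hypothetical pin count for this connector
MIN_PIN_AREA = 40        # minimum blob area (pixels) to count as a pin
THRESHOLD = 128          # fixed gray-level cutoff, brittle if lighting shifts

def count_pins(image_path: str) -> bool:
    """Return True if the expected number of pins is found, else False."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    if gray is None:
        raise FileNotFoundError(image_path)

    # Fixed binary threshold: a change in illumination can push pin pixels
    # below the cutoff and make a present pin "disappear".
    _, binary = cv2.threshold(gray, THRESHOLD, 255, cv2.THRESH_BINARY)

    # Blob extraction: every connected bright region above MIN_PIN_AREA
    # is treated as a pin.
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    pins = [c for c in contours if cv2.contourArea(c) >= MIN_PIN_AREA]
    return len(pins) == EXPECTED_PINS

A single hard-coded cutoff like THRESHOLD is exactly the kind of rigid rule that fails good connectors when the lighting changes.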
Hybrid and Standalone

Seeking an automated visual inspection system that could suit its unique needs, the customer contracted Kitov.ai to develop its system. The company's standalone deep learning product inspection station comes standard with several pretrained neural networks for locating and inspecting screws, surfaces, labels, and data ports, and it also performs optical character recognition.
Kitov.ai constantly develops and adds new pretrained neural networks, referred to as semantic detectors, that help more customers in a wider variety of industries solve their most perplexing inspection tasks. Inspecting highly variable finished products presents a significant machine vision challenge. Traditional, rules-based systems can produce unacceptably high false-negative and false-positive rates.

Using only deep learning to solve the problem involves training the system to recognize every component in an assembly and then combine the components into a single assembly for the final quality inspection step.
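As a rough illustration of that deep-learning-only pattern, the sketch below aggregates per-component verdicts from a hypothetical classifier into one assembly-level result; the data structures and confidence bar are invented for illustration and are not Kitov.ai's implementation.

# Sketch of a deep-learning-only inspection: one classifier verdict per
# component, aggregated into a single assembly-level pass/fail.
# `classify_component` is a hypothetical stand-in for a trained network.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class ComponentResult:
    name: str
    is_good: bool
    confidence: float

def inspect_assembly(
    crops: Dict[str, object],                      # component name -> image crop
    classify_component: Callable[[str, object], ComponentResult],
    min_confidence: float = 0.9,                   # hypothetical acceptance bar
) -> bool:
    """Pass the assembly only if every component is classified good with
    sufficient confidence; tightening min_confidence catches more defects
    but scraps more good product, which is the balance discussed below."""
    results: List[ComponentResult] = [
        classify_component(name, crop) for name, crop in crops.items()
    ]
    return all(r.is_good and r.confidence >= min_confidence for r in results)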
Achieving acceptable false-negative rates without allowing too many defective products to escape the quality check often proves difficult, even for experienced vision system designers. Kitov.ai's system combines traditional machine vision algorithms with deep learning capabilities to enable the inspection of complex assemblies and to continually adapt to changing conditions.

It essentially allows the system to learn what makes good and bad parts during production runs, and not only during the training phase of system development, placing automated continuous improvement of industrial processes within reach.
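The hybrid idea can be pictured as a two-stage loop: classical algorithms propose candidate defect regions, a learned classifier confirms or rejects them, and ambiguous cases are kept as new training examples so the system keeps improving during production. The sketch below is a simplified illustration under those assumptions; all function names are hypothetical placeholders, not Kitov.ai's API.

# Sketch of a hybrid inspection loop: rules-based detection proposes
# candidate defects, a deep-learning classifier confirms or rejects them,
# and ambiguous examples are queued for operator review and retraining.
from typing import Callable, Iterable, List, Tuple

def hybrid_inspect(
    image,
    propose_candidates: Callable[[object], Iterable],   # classical CV stage
    classify_defect: Callable[[object], Tuple[bool, float]],  # neural net stage
    feedback_queue: List,                                # hard examples for retraining
    review_band: Tuple[float, float] = (0.4, 0.6),       # hypothetical gray zone
) -> bool:
    """Return True if the part passes inspection."""
    passed = True
    low, high = review_band
    for region in propose_candidates(image):
        is_defect, confidence = classify_defect(region)
        if low <= confidence <= high:
            # Ambiguous call: route to an operator and keep the example so the
            # classifier keeps learning during production, not just at setup.
            feedback_queue.append(region)
        if is_defect and confidence > high:
            passed = False
    return passed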
Intelligent Software for Complex Needs
The smart visual inspection system uses an off-the-shelf CMOS camera with multiple brightfield and darkfield lighting elements in a photometric inspection configuration to capture 2D images. The software then combines these images into a single 3D image.
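A common textbook way to turn several differently lit 2D images into surface shape is photometric stereo; the sketch below assumes calibrated light directions and a Lambertian surface, and illustrates the general idea only rather than the vendor's actual reconstruction algorithm.

# Photometric-stereo sketch: recover per-pixel surface normals from images
# captured under several known light directions (Lambertian assumption).
# Textbook formulation for illustration, not the product's algorithm.
import numpy as np

def surface_normals(images: np.ndarray, lights: np.ndarray) -> np.ndarray:
    """
    images: (k, h, w) grayscale images, one per lighting element
    lights: (k, 3) unit light-direction vectors
    returns: (h, w, 3) unit surface normals
    """
    k, h, w = images.shape
    intensities = images.reshape(k, -1)                 # (k, h*w)

    # Lambertian model: I = L @ (albedo * n); solve in a least-squares sense.
    g, *_ = np.linalg.lstsq(lights, intensities, rcond=None)   # (3, h*w)
    albedo = np.linalg.norm(g, axis=0) + 1e-8
    normals = (g / albedo).T.reshape(h, w, 3)
    return normals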
Because the technology uses common semantic terms, such as “screw,” “port,” “label,” “barcode,” and “surface,” rather than machine vision programming terms, like “blob,” “threshold,” “pixel,” and “contrast,” non-experts can learn to modify or create new inspection plans quickly and easily.
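To illustrate the difference in vocabulary, an inspection plan expressed in semantic terms might look like the hypothetical snippet below; the schema and field names are invented for illustration and do not reflect the product's actual file format.

# Hypothetical inspection plan expressed in semantic terms ("screw", "label",
# "port") instead of low-level vision terms ("blob", "threshold", "pixel").
inspection_plan = {
    "product": "controller-board-rev-b",
    "checks": [
        {"type": "screw",   "id": "corner-screws", "count": 4, "must_be": "present"},
        {"type": "label",   "id": "serial-label",  "must_be": "readable", "ocr": True},
        {"type": "port",    "id": "usb-c",         "must_be": "undamaged"},
        {"type": "surface", "id": "top-cover",     "must_be": "scratch-free"},
    ],
}

for check in inspection_plan["checks"]:
    print(f'{check["type"]:8s} {check["id"]:14s} -> {check["must_be"]}')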
Deep learning software classifies potential defects discovered by the traditional machine vision 3D algorithms, and the software's intelligent robot planner uses mathematical algorithms to automatically maneuver a robot with an optical head without the need for operator input.
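A planner of the kind just described might, for each inspection target, score candidate camera poses and lighting conditions and pick the best combination. The greedy scheme below is a simplified illustration under that assumption, with a hypothetical scoring function, and is not the actual planner.

# Simplified illustration of automatic view planning: for each inspection
# target, greedily pick the (camera pose, illumination) pair with the best
# predicted image quality. The scoring function is a hypothetical stand-in.
from itertools import product

def plan_views(targets, poses, illuminations, score):
    """
    targets:        inspection points on the part
    poses:          candidate camera poses the robot can reach
    illuminations:  available lighting conditions
    score(t, p, i): predicted image quality for that combination
    returns: {target: (pose, illumination)}
    """
    plan = {}
    for target in targets:
        best = max(product(poses, illuminations),
                   key=lambda pi: score(target, pi[0], pi[1]))
        plan[target] = best
    return plan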
The algorithms decide where to move the camera, choose an illumination condition from a set
Continued on page 49