SMART MANUFACTURING


and the corresponding results are visualised on a screen. The inspection application was developed by DGH on an industrial PC. The system continuously monitors communication with the production plant’s PLC and several 2D cameras. The MVTec HALCON machine vision software forms the heart of the setup.


DEEP LEARNING METHODS FROM MVTEC

Two deep learning methods from the machine vision software are used to reliably detect the defects. First, “instance segmentation” localises the relevant area, i.e. the weld seam, in the captured images. This deep learning technology can assign a class to different trained objects with pixel precision. In the next step, “anomaly detection” is used. Deep learning-based anomaly detection enables automated surface inspection and accurately detects deviations, i.e. defects, of any kind. “Anomaly detection had two decisive advantages for us: On the one hand, the detection rates are very high and robust. On the other hand, training the underlying neural networks was simple. This is because mainly ‘good images’ of the welded joints, i.e. images of weld seams without defects, were required to train the deep learning networks. The anomaly detection network is trained using only good images; availability of ‘defect images’ is not a requirement. However, a few defect images can help find an optimal threshold value to differentiate between good and defective weld seams. This threshold value is applied to the anomaly score, which is the output of the anomaly detection network; determining the threshold is not part of the training. We therefore only needed a small number of images. This is very practical, as good images are available quickly and easily. Large numbers of images showing defects are much more difficult to obtain, not to mention the fact that it is impossible to obtain images of all possible defects. This is where deep learning has a clear advantage,” explains Guillermo Martín. In images of weld seams that differ from the trained images, the anomalies or defects are reliably detected. How large the delta between OK and NOK may be is governed by the so-called threshold value. The threshold value is a parameter within the deep learning methods that regulates how far the image being checked may deviate from the trained “good image”. The user can set this parameter freely and thereby bring transparency into the “black box” of artificial intelligence decision-making.
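The threshold logic described above can be sketched in a few lines of Python. This is an illustrative sketch only: the score values, the midpoint rule, and the function name are assumptions for the sake of the example — in practice the anomaly scores are produced by the trained network in HALCON, and the operator tunes the threshold.

```python
# Made-up stand-ins for the anomaly scores the trained network would output.
good_scores = [0.08, 0.11, 0.09, 0.13, 0.10]   # weld seams without defects
defect_scores = [0.42, 0.55, 0.38]             # the few available defect images

# Any threshold between the highest "good" score and the lowest "defect"
# score separates the two groups; the midpoint is one simple choice.
threshold = (max(good_scores) + min(defect_scores)) / 2

def classify(score, threshold):
    """Apply the threshold to an anomaly score: OK at or below, NOK above."""
    return "OK" if score <= threshold else "NOK"

print(round(threshold, 3))          # 0.255
print(classify(0.12, threshold))    # a seam close to the trained good images
print(classify(0.47, threshold))    # a seam deviating strongly from them
```

This also shows why threshold selection sits outside training: it is a single comparison applied to the network's output, so it can be adjusted on the line without retraining.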


IMPORTANT PREPARATION FOR DEEP LEARNING: LABELING THE IMAGES AND TRAINING THE NEURAL NETWORKS

Deep learning technology requires the neural networks to be trained with images before operation. These images must first be labeled for training. DGH used MVTec’s Deep Learning Tool for this preliminary work. With this free tool, image data can be easily labeled and the networks then conveniently trained. To do this, DGH first collected images of weld seams. The employees’ expertise was also drawn on here: they checked each image to ensure that mainly “good images” were used for training, as an incorrectly included “bad image” would falsify the results of the training. The “good images” are then loaded into the Deep Learning Tool and labeled there specifically for the instance segmentation technology. The “Smart Label Tool” is available for this purpose: the user simply selects the welded joint area with a mouse click, and the tool automatically labels the welded joint. This ensures that the Deep Learning Tool then trains only on the relevant areas of the image. The car manufacturer’s employees were also involved in this step, as they knew which areas within the image contained important information about the welded joint and how large the corresponding frame around it should be. After the images are labeled, a split is created. This involves dividing the image data set, usually in a ratio of 50 percent for training, 25 percent for validation, and the remaining 25 percent for testing. Training, validation, and testing are carried out easily and conveniently in the Deep Learning Tool at the touch of a button. The trained model is then saved and loaded into HALCON for inference, which is possible thanks to the seamless compatibility between the Deep Learning Tool and HALCON. The software is now ready for operation.
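The 50/25/25 split that the Deep Learning Tool performs at the touch of a button is, in essence, a shuffle-and-slice of the labeled image list. A rough sketch under that assumption (the file names are invented, and the tool's internal logic may differ):

```python
import random

def split_dataset(items, seed=42):
    """Shuffle reproducibly, then slice 50 % / 25 % / 25 %."""
    items = list(items)
    random.Random(seed).shuffle(items)   # fixed seed keeps the split repeatable
    n_train = len(items) // 2            # 50 % for training
    n_val = len(items) // 4              # 25 % for validation
    train = items[:n_train]
    val = items[n_train:n_train + n_val]
    test = items[n_train + n_val:]       # remaining ~25 % for testing
    return train, val, test

# Invented file names standing in for the labeled weld seam images.
images = [f"weld_seam_{i:03d}.png" for i in range(100)]
train, val, test = split_dataset(images)
print(len(train), len(val), len(test))   # 50 25 25
```

The validation portion steers training (e.g. when to stop), while the held-out test portion gives an unbiased estimate of how the model will perform on unseen weld seams.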


MACHINE VISION SOFTWARE AS A CORE COMPONENT OF THE INSPECTION SYSTEM

“We at DGH have been working with MVTec for over ten years and are therefore familiar with their powerful tools and algorithms. That’s why we decided to trust MVTec HALCON for this project as well,” reveals Guillermo Martín. The challenges in terms of training and speed due to the tight cycle times have already been mentioned. There was also another requirement for the machine vision software: the inspection environment is difficult due to reflective metal surfaces and varying lighting conditions.


DGH was able to overcome all challenges and deliver a system of the desired high quality. “At the beginning of 2024, the first system was put into operation at the car manufacturer’s plant. Once it ran successfully, we received a new request from the same manufacturer in April 2024 to implement a second system for inspecting welded joints,” says a delighted Guillermo Martín. In particular, the goals of reducing the dependence on skilled workers for quality inspection processes in times of a labor shortage and subsequently increasing the degree of automation were achieved. Automation based on machine vision and artificial intelligence has significantly reduced errors and ensured consistent and reliable detection of welding defects. For this reason, Guillermo Martín also believes that despite the large number of machine vision systems already in use in various industries and manufacturing processes, there is still potential for growth – for example, for deep learning solutions in particularly demanding and complex applications.


MVTec www.mvtec.com


UKManufacturing Autumn 2025

