Pharmaceutical & medical
Instrumentation Monthly January 2026

Manual inspection is very time-consuming and costly, so the new process was to be automated. Mickael Denis explains: “Since the inspection has to be carried out visually, it was clear that machine vision was the only technology with which we could implement the process. We also had to comply with the particularly demanding validation processes that apply in the pharmaceutical industry. These ensure that the new system inspects the ampoules at the required speed while maintaining the highest accuracy and robustness.”

INSPECTION OF AMPOULES WITH 14 IMAGES

A total of 12 cameras compliant with the GigE Vision industry standard are used in the new system. Good lighting is also essential to make the particles clearly visible. The image processing runs on an industrial PC. On the software side, the machine vision solution is based on MVTec HALCON, the standard software for machine vision with over 2,100 operators for almost all image processing tasks, including deep learning. Under the extremely difficult conditions of Aspen’s application, deep learning had clear advantages over classic rule-based methods: with classic machine vision methods, it was not possible to find a set of rules robust and flexible enough to detect the defects.

In practice, the process now looks like this: the test vials are placed manually on the system’s conveyor belt, which carries them to the inspection machine. The 12 cameras are positioned so that they capture up to 14 images of each ampoule from different stations. The large number of images helps the deep learning-based inspection, because a particle that is not visible in one image may be seen from a different angle. In addition, a particle is only identified as such if it is found in a certain minimum number of images, which successfully reduces the number of false positives. Once the images have been captured, they are transmitted to MVTec HALCON, where various machine vision methods perform the checks. Aspen uses the deep learning-based semantic segmentation included in HALCON to detect foreign matter. In parallel with the particle inspection of the liquid, MVTec HALCON also performs further checks for so-called cosmetic defects: the system verifies whether the fill level is correct, whether the color is appropriate, and whether the closure complies with specifications. Classic machine vision methods such as matching and blob analysis are used for these tasks. Classic methods have the advantage that they deliver very robust results for suitable applications and enable very short processing times. At the end of the inspection, a clear decision is made as to whether the ampoule in question is OK or NOK.
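The multi-view voting rule described above can be sketched in a few lines. This is an illustrative assumption, not Aspen’s actual implementation: the function name, the per-view boolean input, and the threshold of 3 views are all hypothetical.

```python
# Hypothetical sketch of the multi-view voting rule: a particle only
# counts as a real defect if the segmentation network finds it in at
# least `min_views` of the (up to) 14 images of an ampoule.
# The threshold value is an illustrative assumption.

def classify_ampoule(detections_per_view, min_views=3):
    """detections_per_view: one bool per captured image, True if a
    particle was found in that view. Returns "NOK" or "OK"."""
    hits = sum(1 for found in detections_per_view if found)
    return "NOK" if hits >= min_views else "OK"

# A particle visible in only one of 14 views is treated as a
# false positive and the ampoule passes:
print(classify_ampoule([True] + [False] * 13))              # OK
print(classify_ampoule([True, True, True] + [False] * 11))  # NOK
```

Requiring agreement across several views trades a small amount of sensitivity for a large reduction in spurious rejects, which matters when every NOK ampoule is discarded.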
MVTEC PROVIDES CONCEPTUAL CONSULTING AND SOFTWARE SUPPORT

“The task was very complex – certainly one of the most challenging we have ever faced. This was particularly true when it came to preparing the images for training. We at MVTec were called in to assist with conceptual preparation, process implementation, and documentation,” explains Patrick Ratzinger, project manager at MVTec. Deep learning can only make robust decisions if the images are prepared appropriately. MVTec’s support consisted of sorting, post-processing, and recompiling the data previously labeled by Aspen, and then training on the same dataset multiple times in order to compare the different results. MVTec thus created a neural network that Aspen can use to perform inspections on its own equipment. This network is trained to segment the particles from the background and thus detect them reliably. For the training, test ampoules were manipulated to simulate possible defects that could occur. Images of these test ampoules were captured and labeled using the Deep Learning Tool, an MVTec software product that allows images to be labeled for deep learning applications. The labeled data was combined with good images, meaning ampoules without any defects, to create the dataset used for training the deep learning networks. The training runs, carried out multiple times, are then evaluated to see whether they work effectively in real-world use. “Through our consulting and support work, we have built up extensive knowledge about the data and its use. For example, how best to label defects, what the ideal composition of datasets looks like, and how best to interpret the results. We have passed this knowledge on to Aspen as part of our consulting services,” explains Ratzinger.
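The workflow described above, combining labeled defect images with defect-free “good” images into one dataset and then training on that same dataset several times to compare runs, can be sketched as follows. This is a minimal illustration, not MVTec’s actual tooling: `build_dataset`, `train_and_evaluate`, the file names, and the metric range are all hypothetical, and the training call is a placeholder stub.

```python
import random

# Illustrative sketch of the dataset composition and repeated-training
# comparison described above. `train_and_evaluate` is a hypothetical
# stand-in: a real implementation would train a segmentation network
# and return a validation metric such as mean IoU.

def build_dataset(defect_images, good_images):
    """Combine labeled defect images with defect-free images."""
    data = [(img, "defect") for img in defect_images]
    data += [(img, "good") for img in good_images]
    random.shuffle(data)
    return data

def train_and_evaluate(dataset, seed):
    # Placeholder for an actual training run; the seed makes each
    # repetition of the same dataset produce a different result.
    random.seed(seed)
    return round(random.uniform(0.80, 0.95), 3)

dataset = build_dataset([f"defect_{i}.png" for i in range(50)],
                        [f"good_{i}.png" for i in range(200)])

# Train the same dataset several times and compare the runs.
runs = {seed: train_and_evaluate(dataset, seed) for seed in range(5)}
best_seed = max(runs, key=runs.get)
```

Repeating the training on identical data isolates the variance that comes from the training process itself, which is one way to judge whether a run will hold up in real-world use.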
ROBUST DETECTION RATES: INCREASED ACCURACY IN DETECTING ERRORS, REDUCED FALSE NEGATIVE RESULTS

Both production lines are now running. “Our goal was to develop an application that reflects the current state of machine vision technology. It was clear that we wanted to use deep learning, not least to expand our internal knowledge. With the support of our colleagues at MVTec, we have succeeded in significantly increasing the error detection rate and reducing false negative results,” explains Vincent Trombetta. The company plans to implement further machine vision-based automation in the future.
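The two figures quoted above are standard quantities that can be computed from inspection results against a known ground truth. A minimal sketch, with illustrative numbers that are not Aspen’s: the detection rate is the share of defective ampoules correctly rejected, and the false negative rate is the share of defective ampoules the system missed.

```python
# Sketch of the detection rate and false negative rate mentioned
# above, computed over a batch of inspection results.
# predicted_nok: system decisions; actually_nok: ground truth.

def detection_metrics(predicted_nok, actually_nok):
    tp = sum(1 for p, a in zip(predicted_nok, actually_nok) if p and a)
    fn = sum(1 for p, a in zip(predicted_nok, actually_nok) if not p and a)
    defective = tp + fn
    return tp / defective, fn / defective

# Illustrative batch: 4 truly defective ampoules, 3 caught, 1 missed.
rate, fnr = detection_metrics(
    predicted_nok=[True, True, True, False, False, False],
    actually_nok=[True, True, True, True, False, False],
)
# rate == 0.75, fnr == 0.25; the two always sum to 1 over the
# defective ampoules, so raising one necessarily lowers the other.
```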
MVTec Software
www.mvtec.com