SPONSORED CONTENT
Photometric Stereo for 3D surface inspection
Though a large majority of machine vision applications are solved using two-dimensional imaging, applications requiring 3D measurement and inspection are growing significantly. Numerous techniques exist for extracting 3D information from scenes, including structured light (such as laser-scanning triangulation), stereoscopic vision and time-of-flight sensors. One of the lesser-known techniques is Photometric Stereo. Euresys' Photometric Stereo function estimates the orientation and albedo of each point of a surface by acquiring several images of the same surface taken from a single viewpoint, but under illumination from different directions. The different images are acquired
‘Photometric Stereo is suitable for the detection or inspection of details’
in sequence, in synchronisation with the lighting, thus requiring only one camera. Photometric Stereo is suitable for the
detection or inspection of details (be they defects or information) present on the surface of objects. The Photometric Stereo algorithm is
available in Euresys’ Open eVision Easy3D library. It can be used as a preprocessing phase before other operations, such as code reading (with the EasyMatrixCode, EasyQRCode or EasyBarcode libraries), optical character recognition (EasyOCR), alignment (EasyMatch or EasyFind), measurement (EasyGauge) or defect detection (EasyObject or EasySegment).
How does it work?
For a given vision setup (positions and angles of the object to be inspected, the lights (typically three or four) and the camera), the software tool first requires either a calibration on a reference object (a sphere or hemisphere), or the manual entry of the setup's precise geometric characteristics.
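The computation behind this is well established: under a Lambertian reflectance assumption, each pixel's intensity across the image set is a linear function of the albedo-scaled surface normal, given the calibrated lighting directions. The following NumPy sketch illustrates this classical least-squares formulation; it is an illustration of the principle only, not the Open eVision API.

```python
import numpy as np

def photometric_stereo(images, light_dirs):
    """Recover per-pixel albedo and surface normals (classical formulation).

    images:     (k, h, w) array, k >= 3 grayscale shots of the same scene
    light_dirs: (k, 3) array of calibrated unit lighting directions
    """
    k, h, w = images.shape
    I = images.reshape(k, -1)                  # (k, h*w) stacked intensities
    # Lambertian model: I = L @ (albedo * normal); solve all pixels at once
    G, *_ = np.linalg.lstsq(light_dirs, I, rcond=None)   # (3, h*w)
    albedo = np.linalg.norm(G, axis=0)                   # per-pixel albedo
    normals = G / np.maximum(albedo, 1e-8)               # unit normals
    return albedo.reshape(h, w), normals.reshape(3, h, w)
```

With three lights the system is exactly determined; a fourth light, as in the setup described here, over-determines it and makes the estimate more robust to noise.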
Process
From then on, the image capture of inspected objects is performed in multiple steps corresponding to the various lighting angles. At the user's request, Photometric Stereo
Figure 2. Images of blister under various lighting conditions and inspection
individually extracts a number of variables (normal to the surface, albedo, X and Y gradients, mean and Gaussian curvatures). These are used for the reconstruction/rendering of the 3D information in the 2D domain, making it ready for further processing by other libraries. Figure 2 above illustrates the
entire process:
• The object captured by the camera under four lighting conditions (images 1 to 4)
• The reconstructed image based on the selected measurements (image A)
• The isolated subset of data useful for the chosen inspection (image B); here we have chosen to use the Gaussian curvature information
• The results of the inspection by deep learning, using the Open eVision EasySegment library in supervised mode (image C), applied to the image in (B). The two puncture points can clearly be identified.
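The gradient and curvature variables listed above can be derived directly from the recovered normal map. As an illustration of the principle (again a sketch, not the Open eVision API): a unit normal (nx, ny, nz) gives the surface gradients p = -nx/nz and q = -ny/nz, and Gaussian curvature follows from their derivatives.

```python
import numpy as np

def gradients_and_curvature(normals):
    """Derive X/Y surface gradients and Gaussian curvature from a normal map.

    normals: (3, h, w) array of unit surface normals (nx, ny, nz).
    """
    nx, ny, nz = normals
    nz = np.maximum(nz, 1e-8)          # guard against grazing normals
    p = -nx / nz                       # X gradient, dz/dx (along columns)
    q = -ny / nz                       # Y gradient, dz/dy (along rows)
    # Second derivatives via finite differences of the gradient fields
    p_x = np.gradient(p, axis=1)
    p_y = np.gradient(p, axis=0)
    q_x = np.gradient(q, axis=1)
    q_y = np.gradient(q, axis=0)
    # Gaussian curvature of the height field z(x, y)
    K = (p_x * q_y - p_y * q_x) / (1.0 + p**2 + q**2) ** 2
    return p, q, K
```

Gaussian curvature is zero on flat and cylindrical regions and non-zero at bumps and pits, which is why it is a natural choice for isolating the puncture points in image B.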
Optimisation
A few tests allow the user to identify the most appropriate variables to recover for their application. These steps optimise the processing and can potentially speed up a time-critical process. For example, some applications will require the detection of linear features, or alternatively of sharp edges. The specification of a Region of Interest (ROI) is also possible. Tuning these parameters to the actual application results in significant speed improvements. Considering the occasionally less than perfect
object position, observation conditions or lighting, the function also allows for the compensation of:
• Ambient lighting (dark image); and
• Non-uniform lighting (flat reference image).
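These two compensations correspond to the classical dark-frame and flat-field corrections used throughout imaging. A minimal sketch of the idea (illustrative only, not the library's API):

```python
import numpy as np

def compensate(raw, dark, flat):
    """Dark-frame and flat-field correction of one acquisition.

    raw:  image taken under one of the photometric-stereo lights
    dark: 'dark image' captured with the lights off (ambient/offset term)
    flat: 'flat reference image' of a uniform surface (illumination map)
    """
    num = raw.astype(float) - dark                 # remove ambient offset
    den = np.maximum(flat.astype(float) - dark, 1e-8)
    gain = den.mean()                              # preserve overall brightness
    return np.clip(gain * num / den, 0.0, None)    # divide out non-uniformity
```

Subtracting the dark image removes the constant ambient contribution; dividing by the flat reference equalises the uneven falloff of each light across the field of view.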
Conclusion
One can see how useful information, not detectable in the original image, can be enhanced by Photometric Stereo and then effectively exploited by a standard eVision library (EasySegment in this instance). The Photometric Stereo Imager functionality is available in Euresys' Open eVision Easy3D library.
www.euresys.com/en/Products/Machine-Vision-Software/Open-eVision-Libraries/Easy3D
FEBRUARY/MARCH 2021 | IMAGING AND MACHINE VISION EUROPE | 19
Figure 1. Inspected object with multiple lighting angles in front of the camera