Feature: Machine vision
Figure 1. Depth stitching algorithm
Figure 2. Transforming the point cloud with data from camera positions
Figure 3. A merged point cloud
A host machine is connected to multiple TOF sensors over a high-speed connection such as USB. It collects depth and IR frames from each sensor and stores them in a queue. Because the frames received by the host are captured at different times, inputs from all sensors must be synchronised to the same point in time to avoid temporal mismatch caused by moving objects. A time synchroniser module matches incoming frames from the queue based on their timestamps.
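The article does not show the synchroniser's internals; as a minimal sketch, assuming each sensor feeds a queue of frames tagged with a millisecond timestamp, matching can be done by popping one frame per sensor whenever all queue heads fall within a tolerance window (the queue layout, field names and tolerance below are illustrative assumptions, not the authors' code):

```python
from collections import deque

def match_frames(queues, tolerance_ms=15):
    """Try to pop one temporally-aligned frame per sensor queue.

    Returns a list with one frame per sensor, or None if the queues
    cannot be aligned yet. Tolerance value is an assumption.
    """
    while all(len(q) > 0 for q in queues):
        heads = [q[0]["timestamp_ms"] for q in queues]
        ref = min(heads)
        if max(heads) - ref <= tolerance_ms:
            # All head frames are close enough in time: emit the set
            return [q.popleft() for q in queues]
        # The oldest head frame can never match the others: discard it
        queues[heads.index(ref)].popleft()
    return None
```

A production matcher would also pair each IR frame with its corresponding depth frame and recover from dropped frames, but the nearest-timestamp principle is the same.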
A point cloud is then generated on the host from the synchronised depth data of each sensor. Each point cloud is transformed (translated and rotated) according to its camera's position in the real world; see Figure 2. The transformed point clouds are then merged into one single, continuous point cloud covering the combined FOV of all the sensors; see Figure 3.
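The transform itself is standard rigid-body geometry. The following NumPy sketch shows how each calibrated camera pose could be expressed as a 4x4 homogeneous matrix and applied to its cloud before merging (the pose construction and function names are illustrative, not the authors' code):

```python
import numpy as np

def make_pose(yaw_deg, translation_xyz):
    """Build a 4x4 rigid transform from a yaw rotation (degrees about Y)
    and a translation. Real poses would come from camera calibration."""
    a = np.radians(yaw_deg)
    T = np.eye(4)
    T[:3, :3] = np.array([[ np.cos(a), 0.0, np.sin(a)],
                          [ 0.0,       1.0, 0.0      ],
                          [-np.sin(a), 0.0, np.cos(a)]])
    T[:3, 3] = translation_xyz
    return T

def transform_points(points, pose):
    """Apply a 4x4 pose to an (N, 3) array of points."""
    homog = np.hstack([points, np.ones((points.shape[0], 1))])
    return (homog @ pose.T)[:, :3]

def merge_clouds(clouds, poses):
    """Bring every sensor's cloud into the common frame and stack them."""
    return np.vstack([transform_points(c, p) for c, p in zip(clouds, poses)])
```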
The combined point cloud is projected onto a 2D canvas using a cylindrical projection algorithm, also known as front-view projection; see Figure 4. In other words, the algorithm maps each point of the merged point cloud to a pixel on the 2D plane, producing a single continuous panoramic image covering the combined field of view of all the sensors. The result is two stitched 2D images: one stitched IR image and one stitched depth image.
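The article does not spell out the projection maths. One common way to realise a cylindrical (front-view) projection is to map each point's azimuth and elevation angles to a pixel column and row, as sketched below (image size, FOV limits and the handling of pixel collisions are assumptions):

```python
import numpy as np

def cylindrical_project(points, values, width=1024, height=256,
                        h_fov=(-np.pi / 2, np.pi / 2), v_fov=(-0.35, 0.35)):
    """Project an (N, 3) point cloud onto a 2D panorama.

    `values` is an (N,) array painted into the image (e.g. depth or IR
    intensity). Canvas size and FOV limits are illustrative only.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.sqrt(x**2 + z**2)            # horizontal range from the axis
    azimuth = np.arctan2(x, z)          # angle around the cylinder
    elevation = np.arctan2(y, r)        # angle above the horizon

    # Normalise angles into pixel coordinates
    u = (azimuth - h_fov[0]) / (h_fov[1] - h_fov[0]) * (width - 1)
    v = (v_fov[1] - elevation) / (v_fov[1] - v_fov[0]) * (height - 1)

    # Keep only points that land inside the canvas
    keep = (u >= 0) & (u < width) & (v >= 0) & (v < height)
    image = np.zeros((height, width), dtype=values.dtype)
    # Where several points fall on one pixel the last write wins; a real
    # implementation would keep the nearest point (a z-buffer)
    image[v[keep].astype(int), u[keep].astype(int)] = values[keep]
    return image
```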
Enhancing projection quality
Projecting the 3D combined point cloud onto a 2D image still does not give good-quality images. The images suffer from distortions and noise, which degrade visual quality and would also adversely affect any algorithm that runs on the projection. The three key problems are listed below and shown in Figure 5:
1. Projecting invalid depth regions. The ADTF3175 reports an invalid depth value of 0mm for points beyond the sensor's operational range (8000mm). This results in large void regions on the depth image and an incomplete point cloud. To close these gaps, a depth value of 8000mm (the largest depth supported by the camera) was assigned to all invalid points on the depth image before the point cloud was generated, ensuring the point cloud had no holes.
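The invalid-depth fix described above amounts to a simple substitution on the raw depth frame before the point cloud is generated; in NumPy it might look like this (array and function names are illustrative):

```python
import numpy as np

INVALID_DEPTH_MM = 0    # the sensor's marker for out-of-range points
MAX_DEPTH_MM = 8000     # largest depth the ADTF3175 supports

def fill_invalid_depth(depth_mm):
    """Replace invalid (0mm) readings with the maximum depth so the
    point cloud generated from this frame has no gaps."""
    filled = depth_mm.copy()
    filled[filled == INVALID_DEPTH_MM] = MAX_DEPTH_MM
    return filled
```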