Optoelectronics
Automotive attention monitoring with 3D Time-of-Flight sensors
By Gualtiero Bagnuoli, Marketing Manager Optical Sensors, Melexis

As today’s vehicles become ever smarter, driver monitoring is becoming a required feature to improve safety and ensure smoother interaction with autonomous modes. The EU General Safety Regulation mandates drowsiness and distraction monitoring among the new safety features required in European vehicles from 2022. Similarly, the US SAFE (Stay Aware for Everyone) Act of 2020 calls for measures to minimise or eliminate driver distraction and disengagement, “automation complacency”, and ADAS misuse.
In addition to protecting road users, the new systems are expected to help drivers become accustomed to increasing levels of automation, which can compensate for human error and support new mobility models.
Head tracking and eye gaze technologies
Signs that a driver may be distracted or disengaged include extended periods looking away from the road, whether to either side, at central controls, or at a mobile phone. These can be detected through head-positioning and gaze tracking. Drowsiness, too, can manifest as unusual head movements, excessive blinking, or changes in posture.
Driver Monitoring Systems (DMS) can accurately identify these signs and warn the driver to refocus attention on the road or recommend taking a break. Moreover, the DMS can assess the driver’s readiness to interact with various autonomous driving systems, for example when retaking control after a period of self-driving by the vehicle.
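As an illustration, the kind of logic involved can be sketched in a few lines of Python. This is a hypothetical simplification with invented thresholds and helper names, not a production DMS algorithm: it flags distraction when the gaze stays off the road too long, and drowsiness when the blink rate over the last minute is excessive.

from collections import deque

# Hypothetical thresholds; real systems tune these per vehicle and regulation.
GAZE_OFF_ROAD_LIMIT_S = 2.0      # continuous seconds looking away from the road
BLINK_RATE_LIMIT_PER_MIN = 30    # blinks per minute suggesting drowsiness

class AttentionMonitor:
    """Toy distraction/drowsiness check fed by per-frame head-pose and eye data."""

    def __init__(self):
        self.gaze_off_since = None   # timestamp when gaze left the road
        self.blink_times = deque()   # timestamps of recent blinks

    def update(self, gaze_on_road: bool, blink_detected: bool, now: float) -> list[str]:
        warnings = []

        # Distraction: an extended period looking away from the road.
        if gaze_on_road:
            self.gaze_off_since = None
        else:
            if self.gaze_off_since is None:
                self.gaze_off_since = now
            elif now - self.gaze_off_since > GAZE_OFF_ROAD_LIMIT_S:
                warnings.append("distraction: eyes off road")

        # Drowsiness: blink rate over a sliding one-minute window.
        if blink_detected:
            self.blink_times.append(now)
        while self.blink_times and now - self.blink_times[0] > 60.0:
            self.blink_times.popleft()
        if len(self.blink_times) > BLINK_RATE_LIMIT_PER_MIN:
            warnings.append("drowsiness: excessive blinking")

        return warnings

A real system would fuse many more cues, such as head pose, eyelid closure, and posture, and validate them to regulatory standards.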
With a suitably wide field of view, the DMS can also support additional value-added features such as gesture-based control of vehicle modes and infotainment settings, as well as monitoring of passengers (OMS: occupant monitoring system), such as children in safety seats.

[Figure 1: Time-of-Flight (ToF) principle. A pulsed IR light source and camera: the camera accepts only reflected IR light with the correct pulse, so ambient light does not interfere; resolution up to VGA (640 x 480 px).]
2D cameras are already widely used in vision-based automotive systems such as ADAS and surround-view systems. Using infrared illumination allows the system to be independent of changing lighting conditions, enabling consistent performance whether driving in bright sunlight, overcast conditions, or at night.
However, 2D systems have only a limited ability to handle head-pose variations and can therefore detect only a small range of head movements; 3D sensing is needed. Among established techniques, structured lighting and stereo camera imaging have significant drawbacks as potential solutions for DMS. Structured lighting projects a known pattern onto a scene and observes its distortion to determine distance. However, suitable projectors for high-resolution DMS are bulky and expensive, while complex image processing is needed to calculate the distance data. Bright ambient lighting can also compromise performance.
Stereo imaging with two cameras allows a depth map to be calculated by correlating the two images. Although good resolution can be achieved using low-cost sensors, the cameras must be accurately positioned relative to each other, and lighting conditions strongly affect performance. Moreover, the cost and heat dissipation associated with the image processing are high.

Alternatively, indirect Time-of-Flight (ToF) sensing illuminates a scene using modulated light and determines depth information by measuring the phase delay of the reflected signal. This can be accomplished using a single sensor of relatively low resolution and permits robust DMS performance in a small form factor.
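For reference, the standard indirect-ToF relationship between the measured phase delay and distance (general iToF maths, not specific to any particular sensor) is:

$$ d = \frac{c\,\Delta\varphi}{4\pi f_{\mathrm{mod}}}, \qquad d_{\mathrm{max}} = \frac{c}{2 f_{\mathrm{mod}}} $$

where c is the speed of light, f_mod the modulation frequency, and d_max the unambiguous range. As an example with an assumed (sensor-dependent) modulation frequency of 20 MHz, the unambiguous range is about 7.5 m, comfortably covering a vehicle cabin.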
Depth detection using ToF

To calculate depth information from ToF measurements, mathematical cross-correlation is applied to the returning modulated signal. Quadrature sampling, stepping the illumination phase in increments of 90 degrees, gives four cross-correlation terms from which the phase and amplitude are retrieved. The distance is then calculated from the phase, as shown in the formula above. The pixel charge accumulated when measuring each phase is sampled to find the amplitude. The phase and amplitude are calculated for all pixels.
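A compact numerical sketch of this four-phase calculation is shown below. It assumes the common quadrature convention with correlation samples taken at 0, 90, 180 and 270 degrees and a hypothetical 20 MHz modulation frequency; real sensors add per-pixel calibration and nonlinearity corrections.

import numpy as np

C = 299_792_458.0   # speed of light, m/s
F_MOD = 20e6        # assumed modulation frequency, Hz (sensor-specific)

def tof_depth(a0, a90, a180, a270):
    """Recover phase, amplitude and distance from four quadrature
    correlation samples (per pixel, as NumPy arrays)."""
    i = a0 - a180                                 # in-phase component
    q = a90 - a270                                # quadrature component
    phase = np.arctan2(q, i) % (2 * np.pi)        # phase delay, 0..2*pi
    amplitude = np.sqrt(i**2 + q**2) / 2.0        # reflected-signal amplitude
    distance = (C * phase) / (4 * np.pi * F_MOD)  # unambiguous up to c/(2*F_MOD)
    return phase, amplitude, distance

Applied frame by frame across the sensor array, this yields the per-pixel depth map and amplitude image that the DMS processing pipeline consumes.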