Vision System Detects Driver Activity in Autonomous Vehicles
By Annika Mahl, Marketing Manager, ARRK Engineering
According to the World Health Organization (WHO), each year about 1.4 million people die in traffic collisions and another 20 to 50 million are injured. One of the main causes is driver inattention. Consequently, many automotive manufacturers offer driver assistance systems that detect tiredness. But it is not just sleeping at the wheel that causes accidents. Talking or texting on smartphones and eating or drinking while driving also increase risk. Until now, driver assistance systems have been unable to identify these activities.

ARRK Engineering has run a series of tests geared toward automatically recognizing and categorizing mobile phone use and eating and drinking. Images were captured with infrared cameras and used for machine learning by several convolutional neural network (CNN) systems. This created the basis for a driver assistant that can reliably detect various scenarios at the wheel and warn the driver of hazardous behavior.

For years, the automotive industry has installed systems that warn in case of driver fatigue. These driver assistants analyze, for example, the driver's viewing direction and automatically detect deviations from normal driving behavior. "Existing warning systems can only correctly identify specific hazard situations," says Benjamin Wagner, senior consultant for driver assistance systems at ARRK Engineering. "But during some activities, like eating, drinking and phoning, the driver's viewing direction remains aligned with the road ahead."

For that reason, ARRK Engineering ran a series of tests to identify a range of driver postures so that systems can automatically detect the use of mobile phones and eating or drinking. For the system to correctly identify all types of visual, manual and cognitive distraction, ARRK tested various CNN models with deep learning and trained them with the collected data.
Creating a Dataset

In the test setup, two cameras with active infrared lighting were positioned to the left and right of the driver on the A-pillar of a test vehicle. Both cameras ran at a frequency of 30 Hz and delivered eight-bit grayscale images at 1,280 x 1,024 pixel resolution. "The cameras were also equipped with an IR long-pass filter to block out most visible-spectrum light at wavelengths under 780 nm," says Wagner. "In this manner we made sure that the captured light came primarily from the IR LEDs and that their full functionality was assured during day and night."

In addition, blocking visible daylight prevented shadow effects in the driver area that might otherwise have led to mistakes in facial recognition. A Raspberry Pi 3 Model B+ sent a trigger signal to both cameras to synchronize the moment of image capture. With this setup, images of the postures of 16 test persons were captured in a stationary vehicle.
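The article does not describe how the trigger was generated, so the following sketch is purely illustrative: it assumes the Raspberry Pi drives a single GPIO line wired to both cameras' trigger inputs, and the pin number and pulse width are invented for the example.

import time
import RPi.GPIO as GPIO  # standard GPIO library on Raspberry Pi OS

TRIGGER_PIN = 17       # assumed BCM pin wired to both cameras' trigger inputs
FRAME_RATE_HZ = 30     # capture frequency quoted in the article
PULSE_WIDTH_S = 0.001  # assumed 1 ms trigger pulse

GPIO.setmode(GPIO.BCM)
GPIO.setup(TRIGGER_PIN, GPIO.OUT, initial=GPIO.LOW)

period = 1.0 / FRAME_RATE_HZ
try:
    while True:
        start = time.monotonic()
        # The rising edge starts an exposure on both cameras at once,
        # so the left and right images refer to the same instant.
        GPIO.output(TRIGGER_PIN, GPIO.HIGH)
        time.sleep(PULSE_WIDTH_S)
        GPIO.output(TRIGGER_PIN, GPIO.LOW)
        # Sleep out the remainder of the roughly 33 ms frame period.
        time.sleep(max(0.0, period - (time.monotonic() - start)))
finally:
    GPIO.cleanup()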
To generate a wide range of data, the test persons differed in gender, age and headgear, as well as using different mobile phone models and consuming different foods and beverages. "We set up five distraction categories that driver postures could later be assigned to. These were: 'no visible distraction,' 'talking on smartphone,' 'manual smartphone use,' 'eating or drinking' and 'holding food or beverage,'" explains Wagner. "For the tests, we instructed the test subjects to switch between these activities during simulated driving." After capture, the images from both cameras were categorized and used for model training.

Four modified CNN models were used to classify driver postures: ResNeXt-34, ResNeXt-50, VGG-16 and VGG-19. The last two models are widely used in practice, while ResNeXt-34 and ResNeXt-50 contain a dedicated structure for processing parallel paths. To train the system, ARRK ran 50 epochs using the Adam optimizer, an adaptive learning rate optimization algorithm. In each epoch, the CNN models had to assign the test persons' postures to the defined categories, and with each step this categorization was adjusted by gradient descent so that the error rate was lowered continuously.
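To make this training procedure concrete, the sketch below fine-tunes a ResNeXt-50 backbone on the five categories with the Adam optimizer over 50 epochs. Only the model name, optimizer, epoch count and category list come from the article; the use of PyTorch, the folder layout, the learning rate, the batch size and the single-channel handling are assumptions for illustration.

import torch
import torch.nn as nn
from torchvision import models, transforms, datasets
from torch.utils.data import DataLoader

# The five distraction categories defined in the article.
CLASSES = ["no visible distraction", "talking on smartphone",
           "manual smartphone use", "eating or drinking",
           "holding food or beverage"]

# Grayscale IR frames are replicated to three channels so a standard
# ImageNet-pretrained backbone can consume them (an assumption; the
# article does not say how the single-channel input was handled).
preprocess = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Hypothetical dataset layout: one sub-directory per category.
train_set = datasets.ImageFolder("driver_postures/train", transform=preprocess)
loader = DataLoader(train_set, batch_size=32, shuffle=True, num_workers=4)

# ResNeXt-50 is one of the four models named in the article; the final
# fully connected layer is replaced to output the five categories.
model = models.resnext50_32x4d(weights="IMAGENET1K_V1")
model.fc = nn.Linear(model.fc.in_features, len(CLASSES))

device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # learning rate assumed

for epoch in range(50):  # 50 training epochs, as in the article
    model.train()
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()   # gradient computation
        optimizer.step()  # Adam update lowers the error step by step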
After model training, a dedicated test dataset was used to calculate the confusion matrix, which allowed an analysis of the error rate per driver posture for each CNN model. Evaluation of the results showed that the ResNeXt-34 and ResNeXt-50 models achieved the highest classification accuracy: 92.88 percent for the left camera and 90.36 percent for the right camera. This is fully competitive with existing solutions for the detection of driver fatigue.
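As a rough illustration of this evaluation step (not ARRK's actual code), the per-posture error rates can be read off a confusion matrix computed on a held-out test set, for example with scikit-learn. The snippet reuses the model, device and CLASSES names from the training sketch above, and test_loader is a hypothetical DataLoader over the test images.

import torch
from sklearn.metrics import confusion_matrix

model.eval()
all_preds, all_labels = [], []
with torch.no_grad():
    for images, labels in test_loader:  # hypothetical test DataLoader
        logits = model(images.to(device))
        all_preds.extend(logits.argmax(dim=1).cpu().tolist())
        all_labels.extend(labels.tolist())

# Rows are true categories, columns are predicted categories.
cm = confusion_matrix(all_labels, all_preds)
per_class_error = 1.0 - cm.diagonal() / cm.sum(axis=1)
accuracy = cm.diagonal().sum() / cm.sum()

for name, err in zip(CLASSES, per_class_error):
    print(f"{name}: error rate {err:.2%}")
print(f"overall accuracy: {accuracy:.2%}")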
Using this information, ARRK has extended its training database, which now contains around 20,000 labeled eye data records. Based on this, it is possible to develop an automated vision-based system for validating driver monitoring systems.

"Besides the evaluation of different classification models, we will analyze whether integrating the associated object positions from the camera image can achieve further improvements," says Wagner. In this context, approaches based on bounding box detection and semantic segmentation will be considered. The latter provides, in addition to classification, different levels of detail regarding the localization of objects. In this way, ARRK can improve the accuracy of driver assistance systems for the automatic detection of driver distraction.

Contact: P+Z Engineering GmbH, Frankfurter Ring 160, 80807 Munich, Germany
Phone: +49-(0)89-318-57-0
E-mail: info@arrk-engineering.com
Web: www.arrk-engineering.com