CASE STUDY


Figure 1: A simplified system architecture for self-driving vehicles


At a basic level, driving can be broken down into five types of activities, as shown in Figure 1. First, a driver, whether human or automated, must have a means of sensing the external environment, the vehicle’s location within that environment, and the vehicle’s own state (eg, speed, orientation, steering wheel angle). Humans do this through their senses, primarily vision, while automated vehicles use a variety of sensors, as shown in the blue box. This suite of sensors provides raw input data, but that data must be understood and interpreted, which is performed by the Perception and Scene Understanding activities. The resulting information is combined with high-level path planning (eg, the overall desired route), and decisions are made on the specific actions to be taken (eg, slow down, turn right, or make an emergency stop). Once the desired action is determined, control algorithms translate it into specific inputs to the vehicle’s control systems.
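The flow just described, from sensing through perception and scene understanding to decision and control, can be pictured as a simple pipeline. The sketch below is purely illustrative: the class and method names (SelfDrivingPipeline, perceive, decide, and so on) are hypothetical and the grouping of activities does not claim to reproduce Figure 1 exactly.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class Detection:
    """One object reported by the perception stage (hypothetical structure)."""
    kind: str             # eg "vehicle", "pedestrian", "sign"
    range_m: float        # distance to the object, metres
    rel_speed_mps: float  # relative speed, metres per second


class SelfDrivingPipeline:
    """Illustrative sketch of the activities described in the text."""

    def sense(self) -> dict:
        # Gather raw data from the sensor suite (cameras, radar, lidar,
        # ultrasonic rangefinders) plus the vehicle's own state.
        ...

    def perceive(self, raw: dict) -> List[Detection]:
        # Perception and Scene Understanding: turn raw sensor data into
        # tracked objects and an interpretation of the scene.
        ...

    def decide(self, scene: List[Detection], route: str) -> str:
        # Combine the understood scene with high-level path planning
        # (the overall desired route) and choose a specific action,
        # eg "slow down", "turn right", or "emergency stop".
        ...

    def control(self, action: str) -> None:
        # Translate the chosen action into specific inputs to the
        # vehicle's control systems (steering, throttle, brakes).
        ...
```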


“Machine learning techniques, and in particular deep neural networks, are used primarily to handle large portions of the Perception and Scene Understanding activities”


SENSING THE EXTERNAL ENVIRONMENT

There are four commonly used autonomous sensor technologies for detecting and tracking external objects: ultrasonic rangefinders, radar, lidar, and camera image recognition systems. Each sensor technology has different strengths and weaknesses, and automated vehicles must combine multiple sensor types in order to function effectively. Different developers are using different combinations of sensors, but at a minimum, camera image recognition is needed to process signs and similar indicators, and some combination of radar and/or lidar is needed to accurately determine the presence, range, and relative velocity of other vehicles, pedestrians, and obstacles. Ultrasonic sensors are small and cheap.






They work in all lighting conditions, but cannot effectively measure speed and have extremely short range (less than 10 meters). Within that short range, they can measure distances very accurately. They are useful for applications such as backup obstacle detection and warning.

Radar has much greater range (on the order of 100-200 meters), is inexpensive compared to lidar, and unlike lidar and cameras, radar can function well in rain, snow, or fog. Radar can determine relative speed as well as range. At very close range, radars cannot measure range as precisely as ultrasonic sensors. Radars also lack the resolution to accurately resolve object size and shape.

Lidar systems have similar range to radar, and can also measure both range and relative speed. However, lidar systems are currently larger and significantly more expensive than other sensor types. Multiple vendors have announced new lidar designs that they claim will sell for under US$500 in automotive volumes. Lidar systems use infra-red laser light and have much higher resolution than radar. They can be used to determine an object’s shape and size. They work in both day and night, but performance does degrade in snow, rain, fog, or dust.

Camera systems, coupled with image processing technology, are also relatively small and inexpensive. Most importantly, they are the only type of sensor that can detect color and recognize images, such as signs and roadway markings. Like lidar, they can also determine the presence, size, and shape of objects. They have good range in good lighting, but their effective range degrades as lighting conditions worsen. Image processing allows relative speeds to be determined, but with less accuracy than radar or lidar systems. Multiple cameras can be used to determine range to objects.
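To make these trade-offs concrete, the sketch below tabulates the rough characteristics described above and shows one very simple way complementary sensors might be combined: radar supplying range and relative speed, a camera supplying the classification. The names, the table values, and the fusion function are illustrative assumptions drawn only from the rough figures in the text, not specifications of any real sensor suite or fusion algorithm.

```python
# Order-of-magnitude characteristics as described in the text above;
# not specifications of any particular sensor.
SENSOR_TRAITS = {
    "ultrasonic": {"max_range_m": 10,   "measures_speed": False, "detects_color": False,
                   "works_in_bad_weather": True},
    "radar":      {"max_range_m": 200,  "measures_speed": True,  "detects_color": False,
                   "works_in_bad_weather": True},
    "lidar":      {"max_range_m": 200,  "measures_speed": True,  "detects_color": False,
                   "works_in_bad_weather": False},  # degrades in snow, rain, fog, dust
    "camera":     {"max_range_m": None, "measures_speed": True,  "detects_color": True,
                   "works_in_bad_weather": False},  # range depends on lighting
}


def fuse_detection(radar_range_m: float,
                   radar_rel_speed_mps: float,
                   camera_label: str) -> dict:
    """Toy fusion of complementary sensors: radar contributes range and
    relative speed, the camera contributes the classification. Real fusion
    algorithms are far more involved than this illustration."""
    return {
        "label": camera_label,
        "range_m": radar_range_m,
        "rel_speed_mps": radar_rel_speed_mps,
    }


if __name__ == "__main__":
    obj = fuse_detection(radar_range_m=85.0,
                         radar_rel_speed_mps=-3.2,
                         camera_label="pedestrian")
    print(obj)
```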


MACHINE LEARNING AND NEURAL NETWORKS

Machine learning techniques, and in particular deep neural networks, are used primarily to handle large portions of the Perception and Scene Understanding activities. To explain machine learning, it helps to first review traditional computer programming. Traditional programming involves providing the machine with explicit instructions on what to do at each step of the process, pos-



