AUTONOMOUS VEHICLES


DEEP DRIVING

With the power of Deep Neural Networks, including their ability to generalize and learn without explicit instructions, it's natural to wonder how much of the driving task could be turned over to them. Could they also handle behavior and motion planning? The answer appears to be a definite maybe. In fact, over 25 years ago, Dean A. Pomerleau of Carnegie Mellon University used a relatively simple (not deep) neural network to automatically steer a vehicle. The vehicle was capable of steering on paved and unpaved roads, both single and multilane, at speeds up to 55 miles per hour. The system, called ALVINN (Autonomous Land Vehicle In a Neural Network), used a neural network consisting of just three layers: an input layer of 960 nodes representing a 30 x 32 pixel video image, a single hidden layer of just four nodes, and 30 output nodes covering the range of possible steering commands from hard left to hard right. A video of the project can be viewed at http://bit.ly/2cfWU9D.
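To make the scale of ALVINN concrete, here is a minimal Python sketch of a network with those dimensions. This is not Pomerleau's code: only the layer sizes (960-4-30) come from the text, while the sigmoid activations, the random stand-in weights, and the mapping from the winning output node to a steering value are all assumptions.

```python
# Minimal sketch of an ALVINN-style network (not Pomerleau's actual code).
# Layer sizes follow the article: 960 inputs (a 30 x 32 image), 4 hidden
# units, 30 outputs spanning hard-left to hard-right steering commands.
# The sigmoid activations and the bin-to-angle mapping are assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Randomly initialized weights stand in for the trained parameters.
W1 = rng.normal(scale=0.1, size=(960, 4))   # input -> hidden
b1 = np.zeros(4)
W2 = rng.normal(scale=0.1, size=(4, 30))    # hidden -> output
b2 = np.zeros(30)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def steering_command(image_30x32):
    """Map a 30x32 grayscale frame to one of 30 steering bins."""
    x = image_30x32.reshape(960)            # flatten to the 960-node input layer
    h = sigmoid(x @ W1 + b1)                # single 4-node hidden layer
    y = sigmoid(h @ W2 + b2)                # 30-node output layer
    bin_index = int(np.argmax(y))           # most active output unit wins
    # Spread the 30 bins evenly over an assumed hard-left..hard-right range.
    return np.linspace(-1.0, 1.0, 30)[bin_index]

frame = rng.random((30, 32))                # stand-in for a video frame
print(f"steering command: {steering_command(frame):+.2f}")
```

What is striking is how small this is by modern standards: a few thousand parameters, versus the millions in the Nvidia system described next.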


More recently, Nvidia's DAVE-2 project has used a deep neural network to automate the steering function. Their neural network has nine layers, thousands of nodes, and 27 million connections. With less than 100 hours of human driving training data, the system was reported to be able to operate on both local roads and highways, and in sunny, cloudy, and rainy conditions.

Although limited to steering, these are both examples of using a single "end-to-end" neural network to go from camera sensor inputs directly to commands to the steering system. They do not break the problem out into separate Perception and Scene Understanding, Behavior and Motion Planning, and Lateral Controller elements.


Mobileye's products, in contrast, use neural networks both for Perception and Scene Understanding and for Behavior and Motion Planning (they refer to the latter as Driving Policy). However, Mobileye uses separate deep neural networks for these components. In fact, Prof. Amnon Shashua, Mobileye's co-founder and CTO, contends that a single end-to-end deep learning architecture that does not break the problem down into multiple elements cannot work for all functions of demonstrably safe, commercially available autonomous driving systems.

It is likely that machine learning approaches, and in particular deep neural networks, can take over large portions of the driving task, but that the approach will continue to use a multi-component architecture, breaking the problem down into component parts, as roughly illustrated in Figure 1, and utilizing multiple neural networks, each trained to address a component of the task, rather than a single end-to-end neural network. Given that the functions will be broken out, the lateral and longitudinal controllers may continue to utilize conventional control theory approaches, because it is unclear whether neural networks bring any advantage to that element of the driving task.
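To illustrate the contrast with an end-to-end network, here is a hypothetical Python sketch of such a multi-component pipeline. None of these class names or interfaces are Mobileye's; the perception and policy "networks" are reduced to stubs, and the lateral controller is a bare proportional term standing in for a conventional control theory approach.

```python
# Illustrative sketch of a multi-component driving architecture: separate
# models per component, composed into a pipeline. All names here are
# hypothetical, not Mobileye's API; each "network" is a stub for clarity.
from dataclasses import dataclass

@dataclass
class Scene:
    lane_center_offset: float   # meters left (-) / right (+) of lane center

@dataclass
class Trajectory:
    target_offset: float        # desired lateral position, meters

class PerceptionNet:
    """Stands in for a deep net doing perception and scene understanding."""
    def understand(self, camera_frame) -> Scene:
        return Scene(lane_center_offset=camera_frame["offset"])  # stub

class DrivingPolicyNet:
    """Stands in for a separate deep net doing behavior/motion planning."""
    def plan(self, scene: Scene) -> Trajectory:
        return Trajectory(target_offset=0.0)  # stub: aim for lane center

class LateralController:
    """Conventional control theory; here, a bare proportional term."""
    def __init__(self, gain: float = 0.5):
        self.gain = gain
    def steer(self, scene: Scene, traj: Trajectory) -> float:
        error = traj.target_offset - scene.lane_center_offset
        return self.gain * error  # steering command

perception, policy, controller = PerceptionNet(), DrivingPolicyNet(), LateralController()
frame = {"offset": 0.8}          # stand-in for a camera frame
scene = perception.understand(frame)
command = controller.steer(scene, policy.plan(scene))
print(f"steering command: {command:+.2f}")
```

The practical appeal of this decomposition is the point Shashua makes: each component can be trained, validated, and replaced independently, rather than certifying one opaque end-to-end network.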


GROUP LEARNING AND KNOWLEDGE TRANSFER


Human drivers, with the limited exceptions of reading driver education manuals and classroom lectures, must learn to drive from scratch. Each new driver must start fresh, untrained on everything from how the car will react as one pushes the accelerator to how other drivers behave on freeways. It takes years of driving before a wide array of situations has been observed and the best actions are understood. Studies have shown that this lack of experience, in addition to their young age, is one of the reasons new drivers are at a significantly increased risk of accidents.

This will not be the case with automated vehicles. These vehicles begin on day one with algorithms trained, tuned, and optimized by all the millions of driving miles accumulated by predecessor test vehicles (and, at least to the extent that the necessary data elements are collected and analyzed, by production vehicles as well). It's as if a human driver started with the experience of tens of thousands of drivers who have come before or are currently out on the road. So while there are undoubtedly many areas where humans are superior, this knowledge transfer, in addition to constant attentiveness, is a clear advantage of automated vehicles.




SUMMARY

The magic of developing self-driving vehicles is made tractable by first decomposing the problem into elements and then tackling them one by one. The critical sub-elements involved in perception and scene understanding are best addressed through deep neural networks, a relatively recent advance in machine learning. This approach can readily perform the seemingly magic trick of identifying and reading road signs, as well as detecting and identifying various types of both stationary and moving objects. Deep neural nets, in turn, only became practical with the development of big data techniques for collecting and handling the massive amounts of data needed to train the networks, as well as the use of relatively low-cost GPUs that, through parallel processing, provide the massive computing horsepower required. These all combine to provide some of the magic tricks that developers are using to enable automated vehicles.


Mike McGurrin is the founder of McGurrin Consulting, LLC. Mike@mcgurrin.com



