MODELLING AND SIMULATION


AI and machine learning tools. For example, NVIDIA is developing DRIVE PX Pegasus, which it describes as the world's first AI computer for fully autonomous robotaxis. More than 25 companies are using Pegasus to develop level-five, fully autonomous vehicles, according to an NVIDIA spokesman.

To create new machine learning algorithms for autonomous driving, you need large data sets. Real-world data must go through a labour-intensive labelling process before a self-driving algorithm can ingest it and learn from it, whereas simulated data is automatically labelled as it is created, which saves enormous amounts of time. A joint project between Siemens and the German Research Center for Artificial Intelligence recently demonstrated that it was more effective to use a combination of synthetic and real-world data when training deep learning driving algorithms than to use real-world data alone.

"Until recently, the majority of big data applications have been based on conventional modelling and simulation techniques"

When we move away from the development of these algorithms and want to test such vehicles on real road networks, there are further challenges to address. For example, it is estimated that autonomous vehicles will produce 4TB of data every day through the complex combination of scanners, sensors, cameras and GPS used to detect vehicles, pedestrians, traffic signals, road kerbs and other obstacles. While simulation allows us to supplement real-world driving hours as we develop the necessary algorithms, we still need human drivers on real roads to evaluate the performance of new self-driving technologies robustly. To achieve this, we will require a new fleet of HPC configurations and simulation and modelling strategies to effectively label and analyse the reams of real-world data that will result from autonomous driving in the real world.

There are many clever techniques stepping up to the challenge, including an MIT spin-off called iSee, which is integrating cognitive science into its AI algorithms to give autonomous vehicles a kind of common sense so they can deal quickly with new situations.
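The time saving from automatic labelling comes from the fact that a simulator already knows what it placed in each scene, so ground truth is a by-product of generation rather than a separate annotation pass. A minimal sketch with a hypothetical toy simulator (the object classes and frame format here are illustrative, not taken from any real tool):

```python
import random

def simulate_frame(seed):
    """Hypothetical simulator step: place objects in a scene.

    Because the simulator placed the objects itself, the ground-truth
    labels (class and bounding box) come for free as the frame is
    created -- no manual annotation pass is needed, unlike with
    real-world footage.
    """
    rng = random.Random(seed)
    objects = []
    for cls in ("vehicle", "pedestrian", "traffic_signal"):
        if rng.random() < 0.5:  # each object type appears with 50% probability
            x, y = rng.uniform(0, 100), rng.uniform(0, 100)
            objects.append({"class": cls, "bbox": (x, y, x + 5.0, y + 5.0)})
    # The rendered image and its labels are produced together.
    return {"image": f"frame_{seed}.png", "labels": objects}

# Generating a labelled data set is just generating frames.
frames = [simulate_frame(i) for i in range(1000)]
```

With real footage, each of those 1,000 frames would need a human (or a human-checked pipeline) to draw the boxes; here the labels exist the instant the frame does.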


Another approach, taken by Drive.ai, is to use deep learning techniques to teach autonomous vehicles how to drive. This works on the premise that not all data is equal: instead of managing and analysing every piece of data available, the system collects high-quality data and then annotates it so that it is useful for deep learning algorithms. Whatever approach is used, though, artificial intelligence, machine learning and deep learning systems all thrive on data.
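As a sanity check on the scale involved, the 4TB-per-day estimate quoted earlier translates into a substantial sustained data rate per vehicle, and data volumes for a test fleet mount up quickly. The 100-vehicle fleet size below is an assumption for illustration, not a figure from the article:

```python
# Back-of-envelope arithmetic on the estimated 4TB of sensor data
# produced per autonomous vehicle per day (decimal units throughout).
TB = 1e12  # bytes in a terabyte

daily_bytes = 4 * TB
seconds_per_day = 24 * 60 * 60

# Sustained data rate a single vehicle generates, in MB/s.
rate_mb_s = daily_bytes / seconds_per_day / 1e6

# A hypothetical test fleet of 100 vehicles driving for a year.
fleet_year_pb = 100 * 365 * daily_bytes / 1e15

print(f"{rate_mb_s:.1f} MB/s per vehicle")
print(f"{fleet_year_pb:.0f} PB per fleet-year")
```

That is roughly 46 MB/s of raw data, continuously, per vehicle, and on the order of 146PB a year for the hypothetical fleet, which is why labelling and analysing it calls for HPC-scale infrastructure rather than conventional pipelines.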


As such, the simulation and modelling techniques of today are not going to cut it when it comes to the development of driverless cars, unless we can effectively integrate real-world and real-time big data into our algorithms using novel HPC configurations.
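One simple way to realise the kind of synthetic-plus-real training mix reported in the Siemens/DFKI work is to blend the two data sources at the batch level. The sketch below is illustrative only; the 50/50 split is an assumed default, not a ratio taken from that study:

```python
import random

def mixed_batches(real, synthetic, batch_size=8, synth_fraction=0.5, seed=0):
    """Yield training batches that blend real and synthetic samples.

    synth_fraction controls the mix; 0.5 here is an illustrative
    default, not a figure from the Siemens/DFKI project.
    """
    rng = random.Random(seed)
    n_synth = int(batch_size * synth_fraction)
    n_real = batch_size - n_synth
    while True:
        batch = rng.sample(real, n_real) + rng.sample(synthetic, n_synth)
        rng.shuffle(batch)  # avoid the model seeing sources in a fixed order
        yield batch

# Toy stand-ins for a manually labelled set and an auto-labelled one.
real_data = [("real", i) for i in range(100)]
synth_data = [("synthetic", i) for i in range(100)]

batch = next(mixed_batches(real_data, synth_data))
```

Each batch the training loop consumes then contains a controlled proportion of cheap, automatically labelled synthetic samples alongside expensive real-world ones.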




SCIENTIFIC COMPUTING WORLD

