AUTOMOTIVE

Data aggregation through processing and distribution for ADAS

By Wayne Lyons, Director, Automotive, AMD


Advanced driver assistance systems (ADAS) are known to deliver multiple benefits, including reducing driver stress and fatigue and improving road safety. Systems like automated parking and traffic-jam assist make journeys more comfortable, while others such as forward collision warning (FCW), advanced emergency braking (AEB) and advanced emergency steering (AES) can help avert road-traffic accidents. The sensor fusion concept is widely acknowledged as an enabler of ADAS within the automotive sector, driven by the amount of innovation in vehicle-based systems such as radar, lidar, and camera technology. These systems, and the huge volumes of data they produce, must be brought together to provide a single source of truth.

Arguably, the term sensor fusion is an oversimplification. Building a complete system that communicates with the driver and interacts with the vehicle involves more than sensors and systems talking to each other. It is more helpful to view the process as data aggregation through processing and distribution. Ultimately, ADAS requires all the signals captured from sensors on the vehicle to be pre-processed, shared with a single processing architecture, and aggregated to create actionable outputs and commands.
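As a rough illustration of that flow, the minimal C++ sketch below pre-processes each sensor's signal into a common measurement format, hands everything to a single aggregation step, and emits one actionable command. Every type name and threshold here is hypothetical, invented for illustration rather than taken from any AMD ADAS API.

```cpp
// Minimal sketch of ADAS data aggregation (hypothetical types, not an AMD API).
// Each sensor's raw signal is pre-processed into a common measurement format,
// collected by a single aggregator, and turned into an actionable command.
#include <iostream>
#include <string>
#include <vector>

struct Measurement {            // common format after per-sensor pre-processing
    std::string source;         // "radar", "lidar", "camera", ...
    double rangeM;              // distance to nearest detected object, metres
};

struct Command {                // actionable output for the vehicle
    bool brake;
    std::string reason;
};

// Aggregation step: one processing architecture sees all pre-processed data.
Command aggregate(const std::vector<Measurement>& ms) {
    for (const auto& m : ms)
        if (m.rangeM < 5.0)                        // illustrative threshold
            return {true, "object at " + std::to_string(m.rangeM) +
                          " m reported by " + m.source};
    return {false, "clear"};
}

int main() {
    std::vector<Measurement> frame = {
        {"radar", 42.0}, {"lidar", 4.2}, {"camera", 6.1}};
    Command c = aggregate(frame);
    std::cout << (c.brake ? "BRAKE: " : "OK: ") << c.reason << '\n';
}
```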


This article will discuss how data for ADAS is aggregated through processing and distribution, as well as the role of silicon architecture and adaptive computing in achieving this.


Multi-sensor ADAS


ADAS systems rely extensively on sensors such as radar, lidar, and cameras for situational awareness. While 77 GHz radar provides object detection and motion tracking, lidar delivers high-resolution mapping and perception, generating vast numbers of data points every second. Video cameras, for their part, are mounted in various parts of the vehicle and continuously generate multiple channels of high-resolution image data to provide forward, rear, and surround views. A wide variety of additional sensors are often used, including temperature, moisture, and speed sensors, GPS receivers, and accelerometers, depending on the features the system is intended to provide.
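To put rough numbers on those data volumes, the short calculation below estimates the raw bandwidth of the camera subsystem alone. The figures (1080p at 30 fps, 2 bytes per pixel, four cameras) are illustrative assumptions, not values from the article.

```cpp
// Back-of-envelope camera bandwidth (illustrative figures only): a single
// 1920x1080 camera at 30 fps with 2 bytes/pixel (e.g. YUV 4:2:2) already
// produces ~124 MB/s, so a four-camera surround view approaches 0.5 GB/s
// before radar and lidar traffic is even counted.
#include <cstdio>

int main() {
    const double width = 1920, height = 1080;
    const double fps = 30, bytesPerPixel = 2;      // YUV 4:2:2 assumption
    const double perCamera = width * height * bytesPerPixel * fps; // bytes/s
    const int cameras = 4;                         // forward, rear, two sides
    std::printf("per camera: %.1f MB/s\n", perCamera / 1e6);
    std::printf("%d cameras: %.2f GB/s\n", cameras, cameras * perCamera / 1e9);
}
```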


Focusing on the camera signals as an example, image processing is applied at the pixel level to enhance image quality and extract object information. Features like surround view depend on further video processing with graphic overlays. Collision avoidance systems, on the other hand, process sensor data to characterise the environment around the vehicle, for example by identifying or tracking lane markings, signs, other vehicles, or pedestrians. Systems such as lane-keeping assist and autonomous braking or steering need to detect threats and may subsequently generate a warning or take over control of the vehicle to help avoid an accident.

When implementing ADAS, a typical approach is to use customised parallel hardware, hosted in an FPGA or ASIC, to process the sensor data, while characterising the environment and making the appropriate decisions are typically performed in software. The software tasks may be partitioned between a DSP for the object processing that characterises the environment, and a microprocessor for decision-level processing and vehicle communication.
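A skeleton of that three-way partition might look like the sketch below. All three stages are written as plain C++ stubs purely to show the data flow between them; in a real design the pixel-level stage would live in FPGA or ASIC fabric and the object-processing stage on a DSP, and every type and threshold here is hypothetical.

```cpp
// Hypothetical three-way partition of the ADAS flow described above.
#include <algorithm>
#include <cstdint>
#include <iostream>
#include <vector>

struct Pixel { std::uint8_t y; };                  // luma-only frame for brevity
struct Object { double rangeM; double bearingDeg; };
enum class Decision { None, Warn, Brake };

// Stage 1 (FPGA/ASIC fabric in practice): pixel-level image enhancement.
std::vector<Pixel> preprocess(std::vector<Pixel> frame) {
    for (auto& p : frame)                          // toy contrast lift
        p.y = static_cast<std::uint8_t>(std::min(255, p.y + 10));
    return frame;
}

// Stage 2 (DSP in practice): extract objects that characterise the environment.
std::vector<Object> detectObjects(const std::vector<Pixel>&) {
    return {{3.5, -2.0}};                          // stubbed single detection
}

// Stage 3 (microprocessor): decision-level processing, vehicle communication.
Decision decide(const std::vector<Object>& objs) {
    for (const auto& o : objs)
        if (o.rangeM < 5.0) return Decision::Brake;
    return Decision::None;
}

int main() {
    auto objs = detectObjects(preprocess(std::vector<Pixel>(8)));
    std::cout << "decision: "
              << (decide(objs) == Decision::Brake ? "brake" : "none") << '\n';
}
```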


AMD automotive grade (XA) Zynq UltraScale+ MPSoCs integrate a processing system, based on a 64-bit quad-core ARM Cortex-A53 and a dual-core ARM Cortex-R5, with AMD UltraScale architecture programmable logic in a single device. With an MPSoC like this it is possible, for example, to integrate a complete ADAS imaging flow, from sensing through environmental characterisation, in a single device. Leveraging the various processing engines available, and the chip's high-bandwidth internal interfaces, designers can partition the processing workloads optimally between hardware and software. DSP workloads can be handled either in hardware, using DSP slices, or in software on the ARM Cortex-R5 processor.
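As an example of a workload that can go either way, the 5-tap FIR filter below is the kind of kernel that could be compiled as-is for the Cortex-R5, or pushed through a high-level-synthesis flow so that its multiply-accumulate loop maps onto DSP slices in the programmable logic. The coefficients and sample data are made up for illustration.

```cpp
// Illustrative 5-tap FIR filter: a typical DSP workload that can run in
// software on the Cortex-R5, or be synthesised so each multiply-accumulate
// maps onto a DSP slice when the loop is unrolled in hardware.
#include <array>
#include <cstdio>

constexpr int TAPS = 5;

float fir(const std::array<float, TAPS>& coeff, const float* window) {
    float acc = 0.0f;
    for (int i = 0; i < TAPS; ++i)     // one multiply-accumulate per tap
        acc += coeff[i] * window[i];
    return acc;
}

int main() {
    std::array<float, TAPS> coeff = {0.1f, 0.2f, 0.4f, 0.2f, 0.1f}; // low-pass-ish
    float samples[] = {0, 1, 2, 3, 4, 5, 6, 7};
    for (int n = 0; n + TAPS <= 8; ++n)            // slide the filter window
        std::printf("y[%d] = %.2f\n", n, fir(coeff, samples + n));
}
```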


This flexibility lets developers optimise the design to help eliminate data-flow bottlenecks, maximise performance, and make efficient use of device resources. MPSoC integration also enables savings in BOM cost and power consumption. Saving power is especially valuable going forward, as more ADAS functionality is mandated, more sophisticated infotainment is expected, and electric vehicles depend on efficient use of battery energy to maximise driving range. Along with

