EVENT Embedded World 2023
How will NVIDIA Jetson AGX Orin influence the future of Edge Computing?
With the release of the new system-on-module architecture Jetson AGX Orin, manufacturer NVIDIA has shaken up the market for hardware solutions. But how much influence will the new platform for scalable embedded and edge computing have in the future?
Artificial intelligence (AI) will be all around us in the near future. Even today, fully autonomous drones deliver packages to customers and monitor farmland. The automation of industrial sectors continues to progress, and the excitement is further fuelled by new technologies like 5G and 6G. Hardware manufacturers and IT distributors are trying to reduce the network load on already stressed clouds, and are now using decentralised edge computing solutions in close proximity to the application. As the world’s leading platform for autonomous machines, NVIDIA Jetson harnesses the full potential of AI-powered mobile robots, drones, network video recorders and optical inspection. The announcement of Jetson AGX Orin modules (Figure 1) in September 2022 therefore raises the question of how the new architecture will impact future hardware systems.
Server-level AI performance

NVIDIA Jetson is a series of system-on-module solutions that include CPU, GPU, memory, power management, high-speed interfaces, and more. The systems vary in form factor, features and energy efficiency for different industries. The newly-introduced Jetson Orin modules offer up to 275 TOPS (Tera Operations per Second) in the highest model class, eight times the AI performance of the Xavier generation.

Figure 1: Block diagram of Jetson AGX Orin

The integrated Ampere GPU consists of two Graphics Processing Clusters (GPCs), up to eight Texture Processing Clusters (TPCs) and up to 16 Streaming Multiprocessors (SMs), with 192KB of L1 cache per SM and 4MB of shared L2 cache. Each SM carries 128 CUDA cores, giving the 64GB Orin module a total of 2048 CUDA cores and 64 Tensor cores. The NVIDIA Deep Learning Accelerator (DLA) is a fixed-function accelerator optimised for deep-learning operations, designed for full hardware acceleration of convolutional neural network inference. The Orin SoC supports the next-generation NVDLA 2.0, with nine times the performance of NVDLA 1.0 and a more energy-efficient architecture.

The GPU and DLA optimisations raise the AI performance of Orin to server level. Meanwhile, computational requirements are increasing by orders of magnitude for functions such as multi-sensor perception, mapping and localisation, planning and control, situational awareness, and security. In particular, robotics and other edge AI applications increasingly demand more resources for computer vision and conversational AI.
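As a quick sanity check, the headline totals follow directly from the per-SM counts given above. The four Tensor cores per SM is the standard Ampere figure and is implied by the totals rather than stated explicitly in the text; the Xavier baseline of 32 TOPS is likewise an assumption drawn from NVIDIA's published Xavier specification.

```python
# Sketch: derive the Jetson AGX Orin (64GB) GPU totals from the
# per-unit figures quoted in the text. Names are illustrative only.

CUDA_CORES_PER_SM = 128    # Ampere: 128 CUDA cores per Streaming Multiprocessor
TENSOR_CORES_PER_SM = 4    # Ampere: 4 Tensor cores per SM (assumed, implied by totals)
SM_COUNT = 16              # up to 16 SMs on the 64GB module

cuda_cores = CUDA_CORES_PER_SM * SM_COUNT       # 2048 CUDA cores
tensor_cores = TENSOR_CORES_PER_SM * SM_COUNT   # 64 Tensor cores

# Xavier AGX peaks at 32 TOPS, so 275 TOPS is roughly an eight-fold uplift.
uplift = 275 / 32

print(cuda_cores, tensor_cores, round(uplift, 1))
```

Running the sketch confirms the article's 2048 CUDA cores and 64 Tensor cores, and shows the "eight times" claim is a rounded ratio (about 8.6x against the 32 TOPS Xavier baseline).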
Currently in the pipeline

With Orin developer kits, companies can build complex, future-orientated AI applications at the edge, in areas such as industrial IoT, logistics and transportation, agriculture and healthcare. Because the new modules remain compatible with the cloud-native workflows and AI software used on previous platforms, they deliver additional performance and energy efficiency for companies developing mobile robots for everyday tasks. Autonomous machines already sort goods in logistics warehouses or serve customers independently in retail. Compact Jetson hardware also enables real-time automated access control for hazardous environments that employees may only enter with a helmet and safety vest.
The demand for the new modules is likely to increase, given the improving price-performance ratio. Manufacturers and system integrators are already introducing embedded PCs and GPU computers based on Orin SoCs. AI-BLOX, a manufacturer of configurable edge intelligence solutions, has been showing new processor modules with Jetson AGX Orin since December 2022, and further models with Orin NX and Orin Nano are expected to follow shortly. NASDAQ-listed One Stop Systems plans a 2023 release of a compact dual-Orin system in a ruggedised chassis, in collaboration with the German hardware provider Bressner Technology. The high-performance computing experts are pooling their expertise to tap into the European market for industrial supercomputers.
CONTACT:
Bressner
www.bressner.de/en
22 February 2023 | Automation
automationmagazine.co.uk