COVER FEATURE


Enabling Edge AI inference with compact industrial systems


The world's machine vision experts are coming together in Stuttgart this month for VISION, which is arguably the industry's most important trade fair. Visitors will see the latest machine-vision products, technologies and trends, including hot topics such as embedded vision, hyperspectral imaging and deep learning. For Advantech, the event provides a unique opportunity to promote its latest technologies (Hall 8, Booth 8D12). It is also a chance to demonstrate how artificial intelligence (AI) can break away from static rule-based programming, substituting inference systems that use dynamic learning to make smarter decisions. After all, today's advanced AI, combined with IoT technology, is redefining entire industries and enabling countless smart applications.


An important trend is the shift of AI inference systems toward the edge, closer to sensors and control elements, reducing latency and improving response times. Demand for edge AI hardware of all types, from wearables to embedded systems, is growing fast. One estimate published by Data Bridge Market Research in April 2019, entitled "Global Edge AI Hardware Market – Industry Trends and Forecast to 2026", projects unit shipments growing at a 20.3% CAGR through 2026 to reach over 2.2 billion units.
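Compound annual growth of this kind is straightforward to sanity-check. The sketch below is purely illustrative; the report's baseline year and starting unit count are assumptions for the arithmetic, not figures taken from this article:

```python
# Compound annual growth: units_n = units_0 * (1 + rate) ** years
def compound(units_0: float, rate: float, years: int) -> float:
    """Project a starting unit count forward at a fixed CAGR."""
    return units_0 * (1 + rate) ** years

# Hypothetical baseline: if ~2.2 billion units ship in 2026 after
# 8 years of 20.3% CAGR, the implied starting point would be:
implied_base = 2.2e9 / (1.203 ** 8)   # roughly 0.5 billion units

# Growing that base forward recovers the 2026 figure.
projected_2026 = compound(implied_base, 0.203, 8)
print(f"{projected_2026 / 1e9:.1f} billion units")  # → 2.2 billion units
```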


8 September 2022 | Automation


The big challenge for edge AI inference platforms is ingesting high-bandwidth data and making decisions in real time, within tight space and power budgets shared between AI and control algorithms. Three powerful AI application development pillars from NVIDIA are helping Advantech make edge AI inference solutions a reality, positively impacting the bottom line of forward-looking companies.


What is AI inference?
There are two types of AI-enabled systems: those for training and those for inference. Training systems examine data sets and outcomes, looking to create a decision-making algorithm. For large data sets, training systems have the luxury of scaling across servers, cloud computing resources or, in extreme cases, supercomputers. They can also afford days or weeks to analyse data. The algorithm discovered in training is handed off to an AI inference system for use with real-world, real-time data. While less compute-intensive than training, inference requires efficient AI acceleration to handle decisions quickly, keeping pace with incoming data. One popular option for acceleration is to use GPU cores, thanks to familiar programming tools, high performance and a strong ecosystem. Traditionally, AI inference systems have been created on server-class platforms by adding a GPU card in a PCIe expansion slot. Most AI inference still happens on AI-enabled servers or cloud computers, and some applications demand server-class platforms for AI acceleration performance. Where latency and response are concerns, lower-power embedded systems can scale AI inference to the edge.
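The training/inference split described above can be illustrated with a minimal sketch. This is an illustrative toy example, not Advantech's or NVIDIA's actual stack: a linear model is fitted offline from a historical data set, then the frozen weights are applied to individual streaming samples at the edge.

```python
import numpy as np

# --- Training (offline, compute-heavy): fit weights from a data set ---
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))            # historical sensor readings
y = X @ np.array([2.0, -1.0, 0.5]) + 0.1  # known outcomes
weights, *_ = np.linalg.lstsq(X, y, rcond=None)  # least-squares fit

# --- Inference (online, lightweight): apply frozen weights per sample ---
def infer(sample: np.ndarray) -> float:
    """Score one real-time sample with the pre-trained weights."""
    return float(sample @ weights)

decision = infer(np.array([1.0, 0.0, 0.0]))  # fast: a single dot product
```

Training touches the whole data set and can run for as long as needed; inference is a fixed, cheap computation per sample, which is what makes it practical to accelerate on compact edge hardware.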


Advantages of edge AI inference architecture
Edge computing offers a big advantage in distributed architectures handling large volumes of real-time data. Moving all of that data into the cloud or a server for analysis creates networking and storage challenges, impacting both bandwidth and latency. Localised processing closer to the data sources, such as pre-processing with AI, can reduce these bottlenecks, lowering networking and storage costs. There are other edge computing benefits. Personally identifiable information can be anonymised, improving privacy. Security zones reduce the chances of a system-wide breach. Local algorithms enforce real-time determinism, keeping systems under control, and many false alarms or triggers can be eliminated early in the workflow. Extending edge computing with AI inference adds more benefits. Edge AI inference applications scale efficiently by adding smaller platforms. Any improvements gained by inference on one edge node can be uploaded and
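The bandwidth-reduction and anonymisation benefits above can be sketched in a few lines. This is an illustrative example with hypothetical field names, not a real Advantech API: the edge node forwards only frames that trigger its local inference threshold, and hashes identifying fields before anything leaves the device.

```python
import hashlib

def preprocess_at_edge(frames: list[dict], threshold: float) -> list[dict]:
    """Keep only frames whose local AI score exceeds the threshold,
    and anonymise the device identifier before any upload."""
    uploads = []
    for frame in frames:
        if frame["score"] < threshold:  # local decision: drop early,
            continue                    # saving bandwidth and storage
        uploads.append({
            # a one-way hash stands in for personally identifiable data
            "device": hashlib.sha256(frame["device"].encode()).hexdigest()[:12],
            "score": frame["score"],
        })
    return uploads

# Of three frames, only the one scoring above 0.9 leaves the edge node.
frames = [{"device": "cam-01", "score": 0.2},
          {"device": "cam-01", "score": 0.95},
          {"device": "cam-02", "score": 0.4}]
print(len(preprocess_at_edge(frames, 0.9)))  # → 1
```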


automationmagazine.co.uk

