Cover story


Designing intelligent edge systems


Edge AI only becomes meaningful when it leaves the lab and operates reliably in the field


By Felipe Leiva, Technical Project Manager, Aetina

In real deployments, system architects quickly face constraints that go far beyond raw computing performance. Latency must be predictable, power budgets are limited, connectivity is not always guaranteed, and systems are expected to run continuously for years in harsh environments. At Aetina, Edge AI is approached as a system-level challenge rather than a standalone computing problem. By combining NVIDIA Jetson platforms with industrial-grade hardware design, optimised software stacks and deployment experience, Aetina delivers edge AI devices that are already operating in traffic infrastructure, smart agriculture, industrial inspection, and intelligent buildings. With the upcoming launch of NVIDIA Jetson Thor, this approach naturally extends toward more advanced and autonomous edge systems.


From NVIDIA Jetson modules to Aetina edge AI devices

The NVIDIA Jetson platform provides the accelerated computing foundation for edge AI, but turning a Jetson module into a deployable product requires additional engineering. Power stability, thermal behaviour, enclosure design, I/O flexibility and long-term availability all influence whether a system can survive outside controlled environments. Aetina’s DeviceEdge portfolio addresses these requirements by offering complete systems built on NVIDIA Jetson Orin platforms and prepared for future Jetson Thor designs. These systems are validated for continuous operation, wide temperature ranges and integration into existing infrastructure, allowing customers to focus on AI application development rather than hardware constraints.


06 February 2026 www.electronicsworld.co.uk


Understanding performance at the edge: what TOPS really means

When comparing edge AI platforms, performance is often expressed in TOPS, or Trillions of Operations Per Second. TOPS is a practical indicator of how many AI operations a processor can execute per second, and it directly affects how many models, camera streams, or AI tasks can run simultaneously. In real deployments, higher TOPS enables more complex models, higher input resolution, multi-stream processing and lower latency. However, TOPS alone does not guarantee system performance. Efficient use of frameworks such as NVIDIA CUDA and TensorRT is essential to translate theoretical performance into real-world results.
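The relationship between a TOPS figure and stream count can be made concrete with a back-of-the-envelope estimate. The sketch below is illustrative only: the per-frame operation count and the sustained-efficiency factor are assumptions for the sake of the example, not measured Jetson figures, and real throughput depends heavily on TensorRT optimisation, precision, and memory bandwidth.

```python
def streams_supported(tops, model_gops_per_frame, fps=30, efficiency=0.4):
    """Rough estimate of concurrent camera streams one module can serve.

    tops                 -- advertised peak, trillions of ops per second
    model_gops_per_frame -- billions of ops per inference (assumed value)
    fps                  -- frames per second required per stream
    efficiency           -- assumed fraction of peak actually sustained
    """
    usable_ops_per_s = tops * 1e12 * efficiency        # sustained ops/s
    ops_per_stream = model_gops_per_frame * 1e9 * fps  # demand per stream
    return int(usable_ops_per_s // ops_per_stream)

# A hypothetical 60 GOPs/frame detector at 30 FPS on a 275 TOPS module:
print(streams_supported(275, 60))  # -> 61 streams under these assumptions
```

The point of the exercise is the efficiency term: halving it halves the stream count, which is why framework-level optimisation matters as much as the headline TOPS number.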


High-performance traffic monitoring with NVIDIA Jetson AGX Orin

Traffic monitoring and enforcement remains one of the most demanding AI use cases. Systems must process multiple high-resolution video streams in real time, often under extreme weather conditions, while maintaining high accuracy and uptime. Aetina’s AIE-PX11, AIE-PX12, AIE-PX21 and AIE-PX22, powered by NVIDIA Jetson AGX Orin, are widely deployed in intelligent traffic projects across Europe. Depending on configuration, Jetson AGX Orin delivers up to 275 TOPS, making it suitable for heavy vision workloads such as license plate recognition, vehicle classification, speed monitoring and incident detection. These systems leverage NVIDIA CUDA for parallel processing, TensorRT for low-latency inference and DeepStream

