Artificial Intelligence Technology


Collaborative edge AI agents for enhanced security and smarter cities


By Shay Kamin Braun, director of AIoT product marketing, Ambarella


Until recently, artificial intelligence (AI) has played a mostly passive role in the processing stacks of surveillance and security systems, performing crucial tasks such as classifying and tagging sensor and image data to help streamline existing processes. But the capabilities of the underlying technology are increasing, and we are now seeing it cross over into active control: reacting and responding autonomously to changes in the environment to achieve goals set by human users. This new category of capabilities is known as agentic AI: sophisticated software programs that can perceive their environment, reason, plan and act autonomously to achieve specific goals. Unlike traditional software, these agents possess a degree of intelligence, capable of learning from experience, adapting to new situations, and even collaborating with other AI agents or humans across a broad range of edge AI systems.

For the security markets, these changes have the potential to be revolutionary, given the improvements in the efficiency and speed of incident response that this new level of autonomous intelligence, running inside security cameras, can deliver. However, most early implementations of agentic AI today run in the cloud and are designed to work with non-real-time enterprise systems. When deployed at the edge, AI agents running on cameras and other edge AI-box infrastructure developed for surveillance and smart-city requirements will not just analyse but respond to events in real time – either by informing people with precise, timely alerts or by taking safe, autonomous steps.


Practical applications


There are many cases where autonomous intervention from AI can fundamentally transform processes. Take the traffic controls at an intersection. Much of the time, there are no problems with traffic flow, but road repairs and partial closures can quickly lead to problems as vehicle backlogs build up and cause blockages at nearby junctions. Minor alterations to the sequencing and timing of signals can help these blockages to clear. An AI system that recognizes when the buildup is reaching a critical point and extends the green signal to give a path more time to clear can ease the traffic flow. An AI agent could do this by interpreting the data from multiple cameras at the intersection, using a combination of its pretraining from a network fine-tuned for traffic scenarios and reinforcement-learning algorithms that learn from the traffic patterns it perceives after deployment, followed by an adaptive control algorithm that adjusts the signals in real time.
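As an illustration only, the sketch below shows the green-extension behaviour described above. The interfaces it assumes – a per-camera queue estimate (read_queue_length) and a signal controller with an extend_green call – are hypothetical placeholders, and the on-camera perception and reinforcement-learning components are abstracted away.

```python
import time

QUEUE_THRESHOLD = 12     # vehicles queued before the agent intervenes
MAX_EXTENSION_S = 30     # safety cap on the total green extension
CHECK_INTERVAL_S = 2     # how often the agent re-evaluates the scene

def read_queue_length(camera) -> int:
    # Hypothetical placeholder: in a real deployment this value would come
    # from the camera's own on-device detection and tracking network.
    return camera.estimated_queue()

def ease_backlog(cameras, signal_controller) -> None:
    """Extend the current green phase while a critical backlog persists."""
    extended = 0
    while extended < MAX_EXTENSION_S:
        # Worst-case queue across every camera watching the intersection
        queue = max(read_queue_length(cam) for cam in cameras)
        if queue < QUEUE_THRESHOLD:
            break                                         # backlog has cleared
        signal_controller.extend_green(CHECK_INTERVAL_S)  # hypothetical controller API
        extended += CHECK_INTERVAL_S
        time.sleep(CHECK_INTERVAL_S)
```

In practice, the threshold and extension policy would be exactly what the reinforcement-learning component tunes from observed traffic patterns, rather than fixed constants as shown here.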


This traffic scenario is one of many new, increasingly automated approaches where AI agents, operating directly inside surveillance and smart-city cameras, can process multimodal inputs (video, images, audio, other sensor data) at the source, without the need for an internet connection. This significantly reduces latency and bandwidth requirements. An overarching on-premise orchestration agent then aggregates the insights received from the AI agents across multiple nearby cameras, providing centralized event labelling, intelligent alerts, informed decision-making and seamless escalation to human operators when necessary.
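As a rough sketch of how such an orchestration layer might work, the example below aggregates per-camera events in memory and escalates only those that are both severe and corroborated by more than one nearby camera. The event fields, severity scale and the log_event/notify_operator hooks are assumptions for illustration, not a specific product API.

```python
from dataclasses import dataclass
from typing import Dict, List, Set

@dataclass
class CameraEvent:
    camera_id: str
    label: str        # e.g. "loitering", "queue_buildup"
    confidence: float
    severity: int     # 1 (informational) .. 5 (critical)

def log_event(event: CameraEvent) -> None:
    # Centralized event labelling / audit trail (placeholder)
    print(f"[{event.camera_id}] {event.label} ({event.confidence:.2f})")

def notify_operator(events: List[CameraEvent]) -> None:
    # Placeholder for an alerting channel (VMS, dashboard, SMS, ...)
    print(f"ALERT: {len(events)} corroborated high-severity events")

def orchestrate(events: List[CameraEvent],
                escalation_severity: int = 4,
                min_corroborating_cameras: int = 2) -> None:
    """Log every event centrally; escalate only corroborated, severe ones."""
    for event in events:
        log_event(event)
    severe = [e for e in events if e.severity >= escalation_severity]
    # Require the same label to be reported by more than one nearby camera
    cameras_per_label: Dict[str, Set[str]] = {}
    for e in severe:
        cameras_per_label.setdefault(e.label, set()).add(e.camera_id)
    corroborated = [e for e in severe
                    if len(cameras_per_label[e.label]) >= min_corroborating_cameras]
    if corroborated:
        notify_operator(corroborated)
```

The corroboration step is one simple way to express the point above: the orchestrator makes decisions from insights across multiple nearby cameras rather than from any single feed.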


Another potential edge application is that of an “intelligent shadow agent” for body-worn cameras. An AI agent right on the device processes live video and audio, performs speech-to-text transcription, leverages vision-language models for scene understanding, and then utilizes Large Language Models (LLMs) to generate real-time summaries of events. This capability lightens the burden of offline report generation for professionals in law enforcement, retail and medical services, while also providing immediate, actionable intelligence – effectively serving as an omnipresent, AI-powered assistant.
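A conceptual sketch of that on-device pipeline is shown below. The functions transcribe_audio(), describe_scene() and summarize() stand in for whatever speech-to-text, vision-language and LLM models the camera actually runs; they are assumptions for illustration, not the API of any particular device.

```python
from dataclasses import dataclass, field
from typing import List, Tuple, Iterable

@dataclass
class IncidentLog:
    transcript: List[str] = field(default_factory=list)
    scene_notes: List[str] = field(default_factory=list)

# Hypothetical on-device model wrappers (placeholder implementations)
def transcribe_audio(audio_chunk) -> str:
    return "<speech-to-text output>"

def describe_scene(frame) -> str:
    return "<vision-language scene description>"

def summarize(transcript: List[str], scene_notes: List[str]) -> str:
    return f"Summary of {len(transcript)} utterances and {len(scene_notes)} scene notes"

def shadow_agent(av_stream: Iterable[Tuple[object, object]],
                 summary_interval: int = 60) -> None:
    """Build a running incident summary from live video and audio frames."""
    log = IncidentLog()
    for i, (frame, audio_chunk) in enumerate(av_stream):
        log.transcript.append(transcribe_audio(audio_chunk))   # speech-to-text
        log.scene_notes.append(describe_scene(frame))          # scene understanding
        if i > 0 and i % summary_interval == 0:
            # Periodic LLM summary, available to the wearer in near real time
            print(summarize(log.transcript, log.scene_notes))
```

The same accumulated log could later feed the offline report generation mentioned above, so the end-of-shift paperwork starts from a machine-drafted summary rather than a blank page.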


Processing in the cloud vs. at the edge


Agentic AI applications could rely on cloud services to run the AI components, but there are major advantages to hosting them on the edge devices themselves. Running AI agents on-device reduces latency dramatically, enabling near-instantaneous responses, which is crucial for real-time threat detection and incident response. There is no round-trip communication delay, which can easily reach several hundred milliseconds with cloud processing. Reliability also improves, particularly for mobile use cases where a dependable wireless internet connection cannot be guaranteed.


Edge AI also improves the ability to meet privacy and data security requirements. This is not just a result of minimizing the exposure to network and server

