AI Technology


massively power hungry, and raise questions about the environmental impact of an ever-increasing use of AI. Various credible public-domain estimates suggest the cost of running the ChatGPT service could be not far off $1 million per day, although such figures are hard to verify.


This pushes users to rely on the services of third-party providers, adding complexity to the AI infrastructure and introducing a dependency that is not fully within the user's control. The likes of Google and Amazon can afford dedicated infrastructure to support their systems, but that is clearly not the case for everyone.


Networks and latency


Such an architecture requires reliable and fast connections between the input device, whatever it may be, and the server infrastructure in the cloud.


It also introduces latency that can be unpredictable, depending on the demand placed on the service by other users and the type of connection available to the input device.


This may not be an issue in the case where a user at a desktop is accessing AI services via a fast corporate internet connection. However, for an IoT type solution, where a number of sensor devices are gathering data that might be usefully analysed by AI, such an architecture risks being complicated and unwieldy. The benefits of using AI may be outweighed by the drawbacks.


AI “at the edge”


Fortunately, many applications where AI can add value do not need the power of a full-scale, general-purpose AI engine. Where the problem domain is controlled, a relatively small device may be able to run a limited AI inference engine. In such cases it can make sense to carry out the AI on the end device itself, an approach known as “AI at the edge”. Even though the inference engine is running on a small device, it can perform its task faster, because there is no latency from communicating input data to a remote server and waiting for a response.


Reducing power consumption

Such a solution can also reduce power consumption, as there is less need for communication. Particularly for wirelessly connected devices, where the radio is often the most power-hungry part of the system, minimising radio traffic can be key to enabling battery-powered devices with a reasonable level of autonomy.
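The radio saving can be put in rough numbers. The sketch below compares streaming raw sensor frames to the cloud against transmitting only a local inference result; all the energy and payload figures are invented round numbers for illustration, not measurements of any real radio.

```python
# Illustrative energy comparison: streaming raw sensor data to the cloud
# versus transmitting only local inference results. All figures are
# hypothetical round numbers chosen for illustration.

RADIO_ENERGY_PER_BYTE_UJ = 1.0   # assumed radio cost, microjoules per byte
SAMPLE_SIZE_BYTES = 64           # assumed size of one raw sensor frame
RESULT_SIZE_BYTES = 4            # assumed size of one inference result
SAMPLES_PER_HOUR = 3600          # one frame per second

def radio_energy_uj(bytes_sent: int) -> float:
    """Energy spent on the radio for a given payload, in microjoules."""
    return bytes_sent * RADIO_ENERGY_PER_BYTE_UJ

# Cloud-only architecture: every raw frame crosses the radio link.
cloud_only = radio_energy_uj(SAMPLE_SIZE_BYTES * SAMPLES_PER_HOUR)

# Edge inference: frames are processed locally; only one result per hour
# (say, an occupancy count) is transmitted.
edge = radio_energy_uj(RESULT_SIZE_BYTES * 1)

print(f"cloud-only radio energy: {cloud_only:.0f} uJ/hour")
print(f"edge radio energy:       {edge:.0f} uJ/hour")
print(f"reduction factor:        {cloud_only / edge:.0f}x")
```

The local inference of course costs compute energy of its own, which this sketch ignores; the point is simply that radio traffic, often the dominant cost, scales with what is transmitted rather than what is sensed.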


www.cieonline.co.uk


In such a scenario, it is important to distinguish between AI model training, which inevitably has to be carried out on some kind of larger computing device, and the inference engine, which is the output of the training process. The latter can be a relatively small component if it is focused on a limited problem domain.
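The split can be made concrete with a deliberately tiny example. Below, the "training" phase (gradient descent on synthetic two-feature data) stands in for the work done on a larger machine, while the resulting inference engine is nothing more than two learned weights, a bias, and one dot product — small enough for any microcontroller. The data and model are purely illustrative.

```python
import numpy as np

# Sketch of the training/inference split. Training runs on a larger
# machine; the "inference engine" that ships to the device is just the
# learned weights plus a few lines of arithmetic. The two-feature data
# below is synthetic, purely for illustration.

rng = np.random.default_rng(0)

# --- Offline training (larger machine) -------------------------------
# Synthetic readings: class 0 clusters near (0, 0), class 1 near (2, 2).
X = np.vstack([rng.normal(0, 0.5, (100, 2)), rng.normal(2, 0.5, (100, 2))])
y = np.array([0] * 100 + [1] * 100)

w, b = np.zeros(2), 0.0
for _ in range(500):                      # plain gradient descent
    p = 1 / (1 + np.exp(-(X @ w + b)))    # sigmoid
    w -= 0.1 * (X.T @ (p - y)) / len(y)
    b -= 0.1 * float(np.mean(p - y))

# --- Inference engine (edge device) ----------------------------------
# This is all the device needs: two weights, a bias, one dot product.
def infer(sample, w=w, b=b):
    return 1 if (sample[0] * w[0] + sample[1] * w[1] + b) > 0 else 0

print(infer([0.1, 0.2]), infer([2.1, 1.9]))
```

A real edge deployment would typically quantise the weights and run them through a dedicated runtime, but the asymmetry is the same: heavy training offline, lightweight arithmetic on the device.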


To give a concrete (and real world) example, one of our customers wanted to implement a people counting device using an IR sensor, in order to manage HVAC for comfort and efficiency. A traditional solution might involve writing some custom code to analyse sensor data and tune it manually to give accurate results. With an AI solution, a model can be created and trained using sensor input, removing the need to create custom code.

Other examples might include a video doorbell, which you only want to switch on when a user approaches. An AI model could potentially make a better distinction between a person approaching the camera and other random background movement that might be picked up on the sensor. This can in turn reduce power consumption and potentially make battery operation realistic.

An AI solution doesn’t have to be a binary choice between “AI at the edge” and “AI in the cloud”. Some initial processing of data at the front end can be used to identify clear or urgent cases, with other data sent onwards for cloud processing. For example, in an industrial control setting, a front-end AI might be configured to identify urgent imminent critical failures that might require shutting a machine down, whilst some other data might be sent up to the cloud for longer term analysis to aid process improvement.
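The industrial triage pattern above can be sketched as a simple three-way decision at the edge: act locally on clear, urgent cases, discard clearly healthy readings, and defer only the ambiguous middle band to the cloud. The thresholds and readings below are invented for illustration.

```python
# Sketch of the hybrid edge/cloud split: an edge-side check handles
# clear, urgent cases immediately and defers everything else to the
# cloud for slower, deeper analysis. Thresholds and readings are
# invented for illustration.

CRITICAL_VIBRATION = 9.0   # assumed "shut down now" threshold
NORMAL_VIBRATION = 3.0     # assumed clearly-healthy threshold

def triage(vibration_level: float) -> str:
    """Decide locally what to do with one sensor reading."""
    if vibration_level >= CRITICAL_VIBRATION:
        return "shutdown"        # act immediately, no cloud round trip
    if vibration_level <= NORMAL_VIBRATION:
        return "discard"         # clearly fine, not worth transmitting
    return "send_to_cloud"       # ambiguous: defer to long-term analysis

readings = [1.2, 9.5, 5.0, 2.8]
print([triage(r) for r in readings])
# -> ['discard', 'shutdown', 'send_to_cloud', 'discard']
```

In practice the local check would itself be a small inference engine rather than fixed thresholds, but the structure — immediate local action plus selective forwarding — is the same.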


Privacy issues


Privacy is a further concern when considering where intelligence in a system should be placed. Medical wearables can provide valuable and even life-saving diagnostic information, but not everyone would be happy to have medical data shared with cloud services, however secure they claim to be. Local processing using models trained offline can provide the best of both worlds. It is notable that Apple takes a different approach from Google and others on this issue. Apple AI tends to run locally on the device, to maintain user privacy, whereas Google (and others) have little issue with taking user input and processing it in the cloud. This arguably gives Google an advantage in developing its AI engines, but at the cost of user privacy.


Device characteristics


In practical terms, what kind of devices are required to carry out “AI at the edge”? Early simple IoT devices released a decade ago, based on, say, a Bluetooth device with an integrated microprocessor, might have used an ARM Cortex-M0 processor running at 32MHz, with 256KB of flash memory and 32KB of RAM. Such a device could run a protocol stack and a simple application (to read some sensors, for example), but little more. Next-generation devices have an order of magnitude more capability, with Cortex-M33 processors running at 300MHz or more, large flash and RAM capacity, and often dual-core architectures, so that the protocol stacks can run on a core completely independent of the main application core. Such devices are certainly capable of running simple AI inference engines, while maintaining the low-power capability of traditional IoT solutions, running on ARM cores with low-power characteristics designed in from the start.
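A back-of-envelope sizing check shows why the newer generation matters. The Cortex-M0-era figures below are the ones quoted above; the M33-era flash/RAM figures and the model footprint (a small 8-bit-quantised network plus its working buffers) are assumptions for illustration.

```python
# Back-of-envelope sizing of a small inference engine against the two
# device generations described in the text. Model figures and the
# M33-era memory sizes are assumptions for illustration.

MODEL_FLASH_KB = 50   # assumed: 8-bit quantised weights for a small model
MODEL_RAM_KB = 40     # assumed: activation buffers + runtime working set

devices = {
    "Cortex-M0 era": {"flash_kb": 256, "ram_kb": 32},     # figures from text
    "Cortex-M33 era": {"flash_kb": 1024, "ram_kb": 512},  # assumed figures
}

def fits(dev: dict) -> bool:
    # Leave half of each resource free for the protocol stack and app.
    return (MODEL_FLASH_KB <= dev["flash_kb"] / 2
            and MODEL_RAM_KB <= dev["ram_kb"] / 2)

for name, dev in devices.items():
    print(f"{name}: {'fits' if fits(dev) else 'does not fit'}")
```

On the older part the flash might just accommodate the weights, but the 32KB of RAM leaves no room for activation buffers alongside the protocol stack; the newer generation absorbs both comfortably.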


Specialist AI microprocessors and devices


If a more complex application is required, a variety of dedicated AI-oriented microprocessors exist, ranging from scaled-down versions of Nvidia server chips to AI-oriented DSP devices. There remains a trade-off between performance and power consumption, but these devices can carry out AI functions whilst remaining light on power consumption. Often the trade-off is rather that a particular device is designed to process a certain type of input – images, voice commands or sensor analysis – rather than carry out generic AI functions. For an IoT device, however, this is often a perfectly acceptable trade-off.


Future evolution of AI and IoT

AI is still a fast-developing technology. AI models are becoming more complex and sophisticated, and the underlying hardware is also developing fast. It has also become a domain of geopolitical competition, with the Chinese company DeepSeek making headlines through its claim to have created a much more efficient AI model requiring significantly fewer resources in terms of computer hardware.


At the same time, the business models behind AI are yet to be clarified. The technology is currently in the “race” stage, where major technology firms are less interested in profit than in establishing leadership. Offering services like ChatGPT for free (at least at some level) aids model development by engaging a massive user base. Clearly that isn’t sustainable long term. Carrying out AI efficiently and cost-effectively will inevitably become a bigger focus, and the AI at the edge model will be a major part of that.


www.insightsip.com | Components in Electronics, June 2025, p. 25

