Cover story


Machine learning at the edge

Artificial intelligence (AI) and machine learning (ML) are used in many applications, in industries as diverse as travel, banking and financial services, manufacturing, food technology, healthcare, logistics, transportation, entertainment, and many more. One of the key factors accelerating these technologies' deployment is the growth in computing power, which allows complex mathematical calculations to be executed easily and quickly. A growing number of algorithms also help with model creation and make data inference easier and quicker. And we are seeing more and more governments and companies investing in this area.


By Adil Yacoubi, EMEA Senior Technical Marketing Engineer, Microchip Technology

Going local

The AI/ML tools that help non-data scientists easily understand, create and deploy models are a crucial element, and today they are widely available and easily accessible. Although model building will often be done in the cloud, on high-performance machines, inference should often be done locally. This provides several benefits, including added security, because there is no communication with the external world. Acting locally also means we are not consuming bandwidth and not paying extra money to send the data to the cloud and then get back the results. The benefits of performing inference at the edge include:

• Real-time operation/immediate response: low latency, safe operation;
• Reduced costs: efficient use of network bandwidth, less communication;
• Reliable operation with intermittent connectivity;
• Better customer/user experience: faster response times;
• Privacy and security: less data to transmit leads to increased privacy;
• Lower power consumption: no need for fast communication;
• Local learning: better performance by making each product learn individually.


Latency is a strong driver for performing inference locally, and the Edge helps by moving machine learning from high-performance machines to high-end microcontrollers and microprocessors.


About machine learning

AI was established as a field in the 1950s. It essentially replaces the traditional programming procedure: rather than writing algorithms manually, the legacy method, algorithms are developed from the data itself.


ML is a subset of AI in which the machine tries to extract knowledge from the data. We provide the machine with prepared data and then ask it to come up with an algorithm that will help predict the results for a fresh set of data. ML is often based on what we call 'supervised learning'. In this technique the data is labelled, and the results are based on that labelling; we also build the model based on that labelling. Another technique is deep learning, which works with more complex algorithms and can also handle data that is not labelled. In this article we will mainly consider supervised learning for the Edge. The basic element of ML is the neural network, which consists of layers of nodes, each node having a connection


Figure 1: Artificial neural network


06 May 2024 www.electronicsworld.co.uk

