Feature: AI

before an inference algorithm can be compiled that makes decisions at the smart-factory edge. And each new set of pictures needs the same resource-intensive training. This energy- and time-wasting process is repeated again and again for each new inference model – a process that never ends.


• As the backbone of the industrial world, embedded systems are built to be reliable and failsafe, typically featuring low compute power for fanless operation in a fully-closed housing. Embedding AI into these systems leads to higher performance requirements. Embedded systems with no additional headroom in their thermal envelope risk losing reliability with AI. They also need a channel to send big data to central clouds, consuming additional energy and producing costly data traffic. This is why, until now, the benefits of AI were reserved for high-compute environments with the ability to transmit edge data to external clouds.


Sparse modelling

Sparse modelling offers a different approach and a broader path to bringing this new type of AI to embedded low-power applications. It continuously and dynamically adjusts to changing conditions – such as lighting and vibration, or when cameras and/or equipment need to be moved – by re-training at the edge. In essence, sparse modelling is an approach to understanding data that focuses on identifying unique features. Simply put, it understands data the way the human mind does: rather than examining every single hair and every millimetre of a person, humans recognise friends and family from key features, such as the eyes or nose. Sparse modelling embeds comparable logic into smart vision systems, so that only select data needs to be processed, not the entire volume of big data as with conventional AI. Sparse-modelling-based algorithms consequently reduce data down to just its unique features. When presented with new data, rather than scanning the entire new entry, sparse modelling looks for the previously-determined key features to make its predictions.

Tests show that, for the same level of accuracy, sparse modelling consumes only 1% of the energy of a conventional deep-learning platform

An added bonus of this approach is that the isolated features are understandable to humans, meaning that sparse modelling produces an explainable, white-box AI – another massive differentiator compared to conventional AI. The initial model-creation stages, where the AI engine and customer-specific data are merged to create a model tailored to the specific use case, rely primarily on human expertise. Standard, new inference models require only about
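The idea of reducing data down to a handful of key features can be illustrated – purely as a sketch, not the vendor's actual algorithm – with L1-regularised regression (the Lasso), a classic sparse-modelling technique. Here a model with 50 candidate features is fitted by iterative soft-thresholding; the L1 penalty drives all but the genuinely informative weights to exactly zero, so prediction afterwards only needs to look at those few features. All names and parameter values below are illustrative choices.

```python
import numpy as np

# Illustrative sketch of sparse modelling: L1-regularised regression
# (Lasso), solved with iterative soft-thresholding (ISTA). The L1
# penalty zeroes out uninformative weights, leaving only a few
# "key features" to inspect at inference time.

rng = np.random.default_rng(0)
n_samples, n_features = 100, 50

# Ground truth: only 3 of the 50 features actually matter.
true_w = np.zeros(n_features)
true_w[[3, 17, 42]] = [2.0, -1.5, 1.0]

X = rng.standard_normal((n_samples, n_features))
y = X @ true_w + 0.01 * rng.standard_normal(n_samples)

def ista(X, y, lam, n_iter=500):
    """Minimise 0.5*||Xw - y||^2 + lam*||w||_1 by soft-thresholded gradient steps."""
    w = np.zeros(X.shape[1])
    step = 1.0 / np.linalg.norm(X, 2) ** 2   # 1 / Lipschitz constant of the gradient
    for _ in range(n_iter):
        grad = X.T @ (X @ w - y)
        z = w - step * grad
        # Soft-thresholding: small weights snap to exactly zero.
        w = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)
    return w

w = ista(X, y, lam=5.0)
key_features = np.nonzero(np.abs(w) > 1e-3)[0]
print("non-zero features:", key_features)  # the recovered support should match the planted features
```

Because the fitted weight vector is sparse, a human can read off exactly which features drive each prediction – the white-box, explainable property the article describes.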


www.electronicsworld.co.uk November/December 2020 55