Security & Monitoring
Keep your AI IP safe

Figure 2: By adding a small perturbation, the model can be tricked into transcribing any desired phrase.


At times, companies worry that competitors may try to steal the model IP or features stored on a device, assets on which the company has spent its R&D budget. Once the model is trained and polished, it becomes a binary executable stored on the MCU and can be protected by conventional IoT security measures, such as protecting the physical interfaces to the chip, encrypting the software, and using TrustZone. It is also worth noting that even if the binary executable were stolen, it is only the final polished model, designed for a specific use case, that would be taken; it can easily be identified as a copyright violation, and reverse engineering it would require more effort than starting from a base model from scratch.


Responsibility for securing a TinyML product is shared between the ML engineer and the embedded engineer. Since those engineers tend to work in separate teams, the new security landscape can lead to confusion over how responsibility is divided between the various stakeholders.


Adversarial attacks can occur in either the training or the inference phase. During training, a malicious attacker could attempt “model poisoning”, which can be either targeted or untargeted. In targeted model poisoning, the attacker contaminates the training data set or the AI base model to plant a “backdoor”: a specific, attacker-chosen input triggers a particular output, while the model still works properly on expected inputs. The contamination can be a small perturbation that does not affect the model’s expected behaviour (accuracy, inference speed, etc.), giving the impression that nothing is wrong. Nor does the attacker need to obtain and deploy a clone of the training system to verify the attack: because the training system itself was contaminated, every system that uses the poisoned model or data set is affected. This was the attack used on the clones in Star Wars.
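To make the idea concrete, here is a minimal sketch (not taken from the article) of how a targeted poisoning attack might be constructed against an image data set: a small trigger patch is stamped onto a fraction of the training images and those samples are relabelled with the attacker’s chosen class. The patch shape, poison fraction and target label are arbitrary assumptions for illustration.

```python
import numpy as np

def poison_dataset(images, labels, target_label, poison_fraction=0.05, rng=None):
    """Illustrative targeted poisoning: stamp a small trigger patch onto a
    fraction of the training images and relabel them with the attacker's
    target class. Trigger, fraction and target label are arbitrary choices."""
    rng = np.random.default_rng() if rng is None else rng
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * poison_fraction)
    idx = rng.choice(len(images), size=n_poison, replace=False)

    # A 3x3 white patch in the bottom-right corner acts as the hidden trigger.
    images[idx, -3:, -3:] = 1.0
    labels[idx] = target_label          # backdoor: trigger -> target class
    return images, labels

# Toy 28x28 grayscale data set, normalised to [0, 1].
x = np.random.rand(1000, 28, 28).astype(np.float32)
y = np.random.randint(0, 10, size=1000)
x_poisoned, y_poisoned = poison_dataset(x, y, target_label=7)
```

A model trained on the poisoned set behaves normally on clean images, yet predicts class 7 whenever the trigger patch is present.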


Untargeted model poisoning, also known as a Byzantine attack, is when the attacker simply intends to reduce the performance (accuracy) of the model and stall its training. Recovering from it requires rolling back to a point before the model or data set was compromised, potentially starting again from scratch.
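For contrast, a minimal sketch of untargeted poisoning, assuming nothing more than access to the training labels: a fraction of labels is randomly re-assigned, which degrades accuracy without any specific trigger or target class.

```python
import numpy as np

def flip_labels(labels, num_classes, flip_fraction=0.3, rng=None):
    """Illustrative untargeted (Byzantine) poisoning: randomly re-assign a
    fraction of labels so that training stalls and accuracy drops."""
    rng = np.random.default_rng() if rng is None else rng
    labels = labels.copy()
    idx = rng.choice(len(labels), size=int(len(labels) * flip_fraction), replace=False)
    labels[idx] = rng.integers(0, num_classes, size=len(idx))
    return labels

y_clean = np.random.randint(0, 10, size=1000)
y_corrupted = flip_labels(y_clean, num_classes=10)
```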


Beyond offline training, federated learning, a technique in which data collected from the endpoints is used to re-train and improve the cloud model, is intrinsically vulnerable because of its decentralised processing: attackers can take part with compromised endpoint devices and thereby compromise the cloud model. This can have large implications, as that same cloud model may be deployed across millions of devices.
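The sketch below, a deliberately simplified stand-in for a real federated-learning aggregator, shows why this matters: with plain federated averaging, a single compromised endpoint that submits a crafted update can visibly drag the aggregated model away from what the honest clients report. The client counts and update values are illustrative.

```python
import numpy as np

def fed_avg(client_updates, weights=None):
    """Plain federated averaging of client weight vectors (a simplified
    stand-in for a real aggregation server)."""
    return np.average(np.stack(client_updates), axis=0, weights=weights)

rng = np.random.default_rng(0)
global_model = np.zeros(10)

# Nine honest clients send small, similar updates...
honest = [global_model + rng.normal(0, 0.01, size=10) + 0.1 for _ in range(9)]
# ...while one compromised endpoint sends a large, crafted update.
malicious = [global_model - 5.0]

new_global = fed_avg(honest + malicious)
print(new_global)   # the single bad client drags the average well off course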


During the inference phase, a hacker can opt for the “model evasion” technique: they iteratively query the model (for example, with an image), adding some noise to the input each time to learn how the model behaves. In this way, the hacker can eventually obtain a specific, desired output, i.e., a particular decision, after tuning the input enough times, without ever supplying a legitimate input. Similar querying can also be used for “model inversion”, in which information about the model or its training data is extracted.
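A minimal sketch of such query-based evasion, assuming only black-box access to a scoring function (here a toy linear model standing in for the real prediction API): random perturbations are kept whenever they raise the score of the attacker’s desired class, until that class wins or the query budget runs out.

```python
import numpy as np

def evade(query_fn, x, target_class, step=0.01, budget=2000, rng=None):
    """Illustrative black-box evasion: repeatedly query the model, keep the
    random perturbations that raise the desired class's score, and stop once
    that class becomes the model's top prediction."""
    rng = np.random.default_rng() if rng is None else rng
    best = x.copy()
    best_score = query_fn(best)[target_class]
    for _ in range(budget):
        candidate = np.clip(best + rng.normal(0, step, size=x.shape), 0.0, 1.0)
        scores = query_fn(candidate)
        if scores[target_class] > best_score:
            best, best_score = candidate, scores[target_class]
            if np.argmax(scores) == target_class:
                break                     # model now outputs the desired class
    return best

# Toy stand-in model: a fixed linear scorer over a flattened 8x8 "image".
W = np.random.default_rng(1).normal(size=(10, 64))
def query_fn(img):
    logits = W @ img.ravel()
    return np.exp(logits) / np.exp(logits).sum()   # softmax scores

x0 = np.random.default_rng(2).random((8, 8))
x_adv = evade(query_fn, x0, target_class=3)
```

Against a real deployed model, the attacker would need many more queries, but the principle, tuning the input purely from the model’s responses, is the same.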


Risk analysis during your TinyML development


For the inference phase, adversarial attacks on AI models are an active field of research in which academia and industry have aligned, producing ATLAS (Adversarial Threat Landscape for Artificial-Intelligence Systems), a matrix that allows cybersecurity analysts to assess the risk to their models. It also contains case studies from across the industry, including edge AI. Learning from these case studies gives product developers and owners an understanding of how such attacks would affect their use case, so that they can assess the risks and take extra precautionary security steps to alleviate customer worries. AI models should be viewed as prone to such attacks, and a careful risk assessment needs to be conducted by the various stakeholders.


For the training phase, ensuring that data sets and models come from trusted sources mitigates the risk of data/model poisoning; such models and data should usually be supplied by reliable software vendors. An ML model can also be trained with security in mind to make it more robust, for example through the brute-force approach of adversarial training, in which the model is trained on many adversarial examples and learns to defend against them. CleverHans, an open-source library used to construct such examples in order to attack, defend and benchmark a model against adversarial attacks, has been developed and is widely used in academia. Defensive distillation is another method, in which a model is trained from a larger model to output probabilities of the different classes rather than hard decisions, making it more difficult for an adversary to exploit the model. However, both of these methods can be defeated given enough computational power.
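As an illustration of the brute-force adversarial-training idea, here is a minimal sketch written in plain PyTorch rather than with the CleverHans API: it crafts fast-gradient-sign (FGSM) examples and trains on a 50/50 mix of clean and adversarial inputs. The model architecture, epsilon and batch shapes are arbitrary assumptions for this example.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm_example(model, x, y, eps=0.03):
    """Craft a fast-gradient-sign adversarial example to use as extra training data."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).clamp(0.0, 1.0).detach()

def adversarial_training_step(model, optimizer, x, y, eps=0.03):
    """One training step on a 50/50 mix of clean and adversarial inputs."""
    x_adv = fgsm_example(model, x, y, eps)
    optimizer.zero_grad()
    loss = 0.5 * F.cross_entropy(model(x), y) + 0.5 * F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()

# Toy usage with a small MLP on fake 28x28 inputs (shapes are illustrative).
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 64), nn.ReLU(), nn.Linear(64, 10))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
x_batch = torch.rand(32, 1, 28, 28)
y_batch = torch.randint(0, 10, (32,))
print(adversarial_training_step(model, optimizer, x_batch, y_batch))
```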


Furthermore, in TinyML development the AI models tend to be well-known and open-source, such as MobileNet, which can then be optimized through a variety of hyperparameters. The data sets, on the other hand, are kept safe: they are valuable treasures that companies spend resources to acquire and annotate for a given use case, for example by adding bounding boxes to regions of interest in images. Generalized data sets such as CIFAR and ImageNet are also available as open source. They are good for benchmarking different models, but tailored data sets should be used when developing for a specific use case. For instance, for a visual wake word in an office environment, a data set collected in an office environment will give the optimum result.


Summing up


As TinyML is still growing, it is good to be aware of the various attacks that can occur on AI models and how they could affect your TinyML development and specific use cases. ML models now attain high accuracies, but for deployment it is important to ensure your models are robust as well. During development, both parties, ML engineers and embedded engineers, share responsibility for cybersecurity matters, including AI: ML engineers would concentrate on attacks during training, while embedded engineers would ensure protection from inference attacks.


For model IP, a crucial element is to keep training data sets secure so that competitors cannot develop similar models for similar use cases. As for the executable binary, i.e., the model IP on the device, it can be secured using the best-in-class IoT security measures that Renesas’s RA family is renowned for, making it very difficult to access secure information maliciously.


Figure 3: End-to-end TinyML workflow.

https://www.renesas.com/eu/en

