Security & Monitoring
Cybersecurity threats on endpoint AI systems
By Eldar Sido, product marketing specialist, IoT and infrastructure business unit, Renesas Electronics

With endpoint AI (or TinyML) still in its infancy and slowly being adopted by the industry, more companies are incorporating AI into their systems, for example for predictive maintenance in factories or keyword spotting in consumer electronics. But with the addition of an AI component to an IoT system, new security measures must be considered.

IoT has matured to the point where products can be released into the field with peace of mind: certifications such as PSA Certified provide assurance that IP can be secured through techniques such as isolated security engines, secure cryptographic key storage and Arm TrustZone. Such assurances can be found on microcontrollers (MCUs) designed with security in mind and offering scalable hardware-based security features, such as Renesas’s RA family. However, the addition of AI introduces new threats that manifest themselves even within secured systems, most notably in the form of adversarial attacks.

Adversarial attacks exploit the complexity of deep learning models and the statistical mathematics underlying them to find weaknesses and abuse them in the field, leading to parts of the model or training data being leaked, or to the model producing unexpected results. This stems from the black-box nature of deep neural networks (DNNs): decision-making in the “hidden layers” is not transparent, and customers are unwilling to risk their systems by adding an AI feature, which slows the proliferation of AI to the endpoint. Adversarial attacks also differ from conventional cyberattacks. With a traditional security threat, analysts can patch the bug in the source code and document it extensively; in a DNN there is no specific line of code to address, which makes mitigation understandably difficult.
Adversarial attacks in pop culture can be seen in Star Wars. During the Clone Wars, Order 66 can be viewed as an adversarial attack: the clones behaved as expected throughout the war, but once they were given the order they turned, changing the tide of the war.

Notable examples can be found across many applications. A team of researchers taped stickers onto stop signs, causing the AI to classify them as speed-limit signs. Such misclassification can lead to traffic accidents and to further public distrust in using AI in systems. The researchers achieved 100 per cent misclassification in a lab setting and 84.8 per cent in field tests, proving that the stickers were highly effective. The algorithms fooled were based on convolutional neural networks (CNNs), so the attack extends to other use cases built on CNNs, such as object detection and keyword spotting.

Figure 1. Stickers taped onto a STOP sign to fool the AI into believing it is a speed-limit sign; the stickers (perturbations) mimic graffiti so that they hide in plain sight.

Another example, from researchers at the University of California, Berkeley, showed that by adding noise, or a perturbation, to any piece of music or speech, an AI model can be made to interpret it as something other than what was played, or to transcribe something completely different, even though the perturbation is inaudible to the human ear. This could be used maliciously against smart assistants or AI transcription services. The researchers produced audio waveforms more than 99.9 per cent similar to the original audio file that nevertheless transcribe to any phrase of their choosing, with a 100 per cent success rate against Mozilla’s DeepSpeech algorithm.
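To make the idea of a perturbation concrete, the sketch below shows the Fast Gradient Sign Method (FGSM), one of the simplest and best-known ways of crafting this kind of imperceptible change to an input. It is an illustrative example rather than the exact technique used in the studies above, and the trained Keras classifier (model), the normalised input (image), its label index (true_label) and the perturbation budget (epsilon) are all assumed for the sake of the sketch.

# Illustrative FGSM sketch, not the attacks described above.
# A tiny step in the direction that most increases the model's loss
# is added to the input; to a human the result looks unchanged.
import tensorflow as tf

def fgsm_perturbation(model, image, true_label, epsilon=0.01):
    image = tf.convert_to_tensor(image[None, ...], dtype=tf.float32)   # add batch axis
    label = tf.convert_to_tensor([true_label])
    with tf.GradientTape() as tape:
        tape.watch(image)
        prediction = model(image)                      # class probabilities
        loss = tf.keras.losses.sparse_categorical_crossentropy(label, prediction)
    gradient = tape.gradient(loss, image)              # sensitivity of the loss to each pixel
    adversarial = image + epsilon * tf.sign(gradient)  # small loss-maximising step
    return tf.clip_by_value(adversarial, 0.0, 1.0)[0]  # keep pixels in the valid range

With epsilon set to around one per cent of the pixel range, the perturbed image is usually indistinguishable from the original to a human observer, yet it is often enough to flip the predicted class. The same principle underpins the stop-sign stickers and the inaudible audio perturbations described above.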
Types of adversarial attacks

To understand the many types of adversarial attacks, one must look at the conventional TinyML development pipeline shown in Figure 3. Training is initially done offline, usually in the cloud, after which the final polished binary executable is flashed onto the MCU and used via API calls. The workflow requires a machine learning engineer and an