Regulatory

EU Artificial Intelligence Act: four-point summary


1. The AI Act classifies AI according to its risk:
■ Unacceptable risk is prohibited (e.g. social scoring systems and manipulative AI).
■ Most of the text addresses high-risk AI systems, which are regulated.
■ A smaller section handles limited-risk AI systems, which are subject to lighter transparency obligations: developers and deployers must ensure that end users are aware that they are interacting with AI (chatbots and deepfakes).
■ Minimal risk is unregulated (including the majority of AI applications currently available on the EU single market, such as AI-enabled video games and spam filters – at least in 2021; this is changing with generative AI).

2. The majority of obligations fall on providers (developers) of high-risk AI systems:
■ Those that intend to place on the market or put into service high-risk AI systems in the EU, regardless of whether they are based in the EU or a third country.
■ Also third-country providers where the high-risk AI system’s output is used in the EU.

3. Users are natural or legal persons that deploy an AI system in a professional capacity, not affected end users:
■ Users (deployers) of high-risk AI systems have some obligations, though fewer than providers (developers).
■ This applies to users located in the EU, and to third-country users where the AI system’s output is used in the EU.

4. General purpose AI (GPAI):
■ All GPAI model providers must provide technical documentation and instructions for use, comply with the Copyright Directive, and publish a summary of the content used for training.
■ Free and open-licence GPAI model providers only need to comply with copyright and publish the training data summary, unless they present a systemic risk.
■ All providers of GPAI models that present a systemic risk – open or closed – must also conduct model evaluations and adversarial testing, track and report serious incidents, and ensure cybersecurity protections.


Source: abstracted from the EU Artificial Intelligence Act high-level summary, artificialintelligenceact.eu/high-level-summary


be seen as falling under the act’s definition, just as much as software that ‘learns’ how to identify potential abnormalities in diagnostic scans.


AI raises the risk profile of medical devices


According to Alison Dennis, international co-head of life sciences and healthcare at law firm Taylor Wessing, the AI regulatory framework created by the act is a demanding one. Indeed, she believes many companies will find compliance with it to be “hard work”. However, she has good news for anyone working in an industry for which the act provides a handy quiz question. “I think medical devices companies are not going to find it anywhere near as difficult as anyone else, because the fundamental base on which the AI act is written is so similar to the medical device regulations,” she says. “It’s the same thinking; the same process of a notified body, of a quality management system and of technical documentation.”


So, while medical device companies with devices that fall under the act will now have two separate regulatory processes to go through – one for the device and one for the AI – there will in effect be overlap. “The guidance suggests that you have a quality management system for the medical device and then for the AI, but the process for the two is virtually the same.”

While the practicalities may remain similar, cybersecurity expert Beau Woods says there is a logic behind emphasising the potential additional risks of AI. “Anytime you add new capabilities you open up exposure to additional accidents and adversaries to those that existed before you added that thing.

“The more components, the more moving parts there are to something, the more room there is for failure modes. That’s just intrinsic. It’s not a feature of AI in itself, any more than it is of software. It is just an inherent property of complex systems.”

Woods is a cybersecurity advocate with I Am The Cavalry, a grassroots organisation focused on the intersection of digital security, public safety and human life. Its area of concern, says founder Joshua Corman, “is where bits and bytes meet flesh and blood”. Medical devices are thus right at the heart of its areas for action. In 2015, the organisation helped successfully push for the world’s first recall of a medical device purely for cybersecurity reasons, having identified that an infusion pump could be hacked and doses adjusted. With AI, risks could be exacerbated. “I’ve seen a lot of AI-type applications in diagnostics, so machine learning algorithms being used for radiology and imaging data,” says Woods.


“But as with any software there are ways to attack them to make them less reliable – going in and tampering with things like the weighting in values, perhaps.

“There is a constant back and forth of how we create something that works in an idealised environment and then how do we ensure that it does not fail in a hostile environment.”


For Corman, the key concern is that “dependence on connected technologies in these areas is growing a lot faster than our ability to secure those technologies”.


Regulatory and cybersecurity challenges

With AI, the further challenge is that there is built-in evolution of the system over time. As Taylor Wessing’s Dennis puts it: “Change is built in with ‘proper’ AI because it is thinking and learning from the data that you’re putting into it.”


So, at which point should a change to an AI-supported product require reregulation? This is a question being considered by bodies worldwide. The US Food and Drug Administration (FDA), Health Canada, and the UK Medicines and Healthcare products Regulatory Agency have attempted to answer it through guidance on the use of predetermined change control plans for machine learning-enabled medical devices. (Machine learning is considered a sub-type of AI.) The idea is that manufacturers create such plans at the outset of the regulatory pathway, detailing anticipated changes in the product over time. The



