Technology
The European Commission, for example, proposed the EU Artificial Intelligence Act in April 2021, which will establish harmonised rules for the marketing, service provision, and use of AI systems within the EU.
Additionally, the European Health Data Space was established to form a common space for the exchange of, and access to, health data from a range of sources. This initiative aims to build a transparent and secure data framework for the exchange of high-quality data to fuel new and innovative AI applications in healthcare. Another effort to simplify the confirmation of clinical efficacy and safety for AI systems comes from the UK, where a new regulatory sandbox was announced in November 2023. The sandbox is designed to provide a safe space for developers of healthcare AI tools to trial innovative AI systems. The scheme is positioned to help the UK medical device regulator, the MHRA, develop an understanding of certain higher-risk AI applications in real-world use, taking learnings from the programme to inform future regulatory policy and guidance to industry.13
In 2023, the MHRA also updated its Software and AI as a Medical Device Change Programme to cover regulatory requirements for software and AI and to ensure that patients are protected. The programme builds on the intention to make the UK a globally recognised home of responsible innovation for medical device software and focuses on three main areas: safety assurances; clear guidance and processes for manufacturers to follow; and liaison with key partners such as the National Institute for Health and Care Excellence (NICE) and NHS England. A second key strand of policy is collaboration with international regulators through the International Medical Device Regulators Forum (IMDRF) to support harmonisation of expectations for AI regulation in MedTech. Focusing on the inherent risk of AI in MedTech exacerbating bias and inequalities, the MHRA also recognises that medical device software incorporating AI must perform across all populations within the intended use of the device and serve the needs of diverse communities.
In the US, the FDA is working on defining digital devices to ensure that regulation of AI/ML-enabled devices is applied correctly. It has so far compiled a publicly available list of such tools that have been cleared by the FDA for use in the US. The list includes models of increasing complexity, with a growing number featuring deep learning models trained on larger and more diverse datasets.14
Hybrid models, combining multiple algorithmic approaches, nevertheless continue to dominate among the medical devices assessed by the FDA. A recent Executive Order in the US is further shaping how the FDA regulates the use of AI/ML in medical devices, where it currently applies its “benefit-risk” framework. This framework requires AI/ML devices to conform to a set of basic principles, such as the demonstration of sensitivity and specificity for devices used for diagnostic purposes (a worked example follows below), the validation of intended purpose and stakeholder requirements against specifications, and development that ensures repeatability, reliability and performance. The FDA has also considered the need for some AI/ML systems to be adaptively re-trained or tuned using new or context-specific data, such as data from post-market clinical trials specifically conducted on women. It has therefore introduced processes that allow certain pre-authorised software changes to be agreed between the manufacturer and the FDA, which can then be deployed without the need for further regulatory assessment.15
To mitigate bias in AI, several strategies can be employed across the model development lifecycle (see the sketch after this list). These include:
● Pre-Processing Data: Sampling and curating data before building the model to ensure a balanced and diverse representation.
● In-Processing Adjustments: Implementing mathematical approaches to incentivise balanced predictions during model training.
● Post-Processing Adjustments: Adjusting the model’s output to correct any imbalances.
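As a concrete illustration of the sensitivity and specificity principle in the FDA framework above, the short Python sketch below computes both metrics from confusion-matrix counts. All figures are invented purely for illustration, not drawn from any real validation study.

```python
# Worked example: sensitivity and specificity for a hypothetical
# diagnostic model. All counts below are invented for illustration.

true_positives = 88   # diseased patients correctly flagged
false_negatives = 12  # diseased patients missed
true_negatives = 930  # healthy patients correctly cleared
false_positives = 70  # healthy patients incorrectly flagged

# Sensitivity (true positive rate): of all diseased patients,
# what fraction did the model detect?
sensitivity = true_positives / (true_positives + false_negatives)

# Specificity (true negative rate): of all healthy patients,
# what fraction did the model correctly clear?
specificity = true_negatives / (true_negatives + false_positives)

print(f"Sensitivity: {sensitivity:.2f}")  # 0.88
print(f"Specificity: {specificity:.2f}")  # 0.93
```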
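The following minimal Python sketch illustrates all three stages on a small synthetic dataset. The group labels, scores and threshold values are assumptions made purely for illustration; a real system would tune each step against its own model and validation data.

```python
import random

random.seed(0)

# Hypothetical synthetic dataset: group "A" is over-represented relative
# to group "B"; 'score' stands in for a trained model's output.
data = [
    {"group": g, "label": random.randint(0, 1), "score": random.random()}
    for g in ["A"] * 800 + ["B"] * 200
]

# 1. Pre-processing: oversample under-represented groups so the training
#    data holds an equal number of records per group.
def balance_by_group(records):
    by_group = {}
    for r in records:
        by_group.setdefault(r["group"], []).append(r)
    target = max(len(rows) for rows in by_group.values())
    balanced = []
    for rows in by_group.values():
        balanced.extend(rows)
        balanced.extend(random.choices(rows, k=target - len(rows)))
    return balanced

# 2. In-processing: weight each record's contribution to the training
#    loss so that errors on smaller groups count proportionally more.
def group_loss_weights(records):
    counts = {}
    for r in records:
        counts[r["group"]] = counts.get(r["group"], 0) + 1
    return {g: len(records) / (len(counts) * n) for g, n in counts.items()}

# 3. Post-processing: apply group-specific decision thresholds, tuned on
#    a validation set, to equalise error rates across groups (the values
#    here are hypothetical).
thresholds = {"A": 0.5, "B": 0.4}

def predict(record):
    return int(record["score"] >= thresholds[record["group"]])

balanced = balance_by_group(data)   # 800 records per group
weights = group_loss_weights(data)  # {"A": 0.625, "B": 2.5}
print(len(balanced), weights, predict(data[0]))
```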
Moreover, involving human experts who understand and can identify the specific biases present in datasets, together with experts in clinical practice who understand the confounding factors of the medical field in which the AI system will be used, can help ensure the AI remains fair and limits bias. Additionally, deploying systems with a human-in-the-loop, as sketched below, provides a further safeguard.
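A human-in-the-loop deployment can be as simple as routing low-confidence outputs to a clinician rather than acting on them automatically. The Python sketch below shows one such pattern; the confidence threshold, case identifiers and review queue are hypothetical illustrations, not a prescribed design.

```python
# Human-in-the-loop sketch: act on the model's output only when its
# confidence is high; otherwise queue the case for clinician review.

REVIEW_THRESHOLD = 0.90  # hypothetical confidence cut-off
review_queue = []

def triage(case_id: str, probability: float) -> str:
    """Return an automatic decision only when the model is confident."""
    if probability >= REVIEW_THRESHOLD:
        return "flagged"                 # confident positive
    if probability <= 1 - REVIEW_THRESHOLD:
        return "cleared"                 # confident negative
    review_queue.append(case_id)         # uncertain: escalate to a human
    return "pending clinician review"

print(triage("case-001", 0.97))  # flagged
print(triage("case-002", 0.55))  # pending clinician review
```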