to consider the risks and benefits of the technology alongside its clinical evidence base. The recently finalised AI Act will require additional assessment and scrutiny of AI systems used in medical devices, beyond current compliance requirements. In addition, certain digital health products that incorporate AI but are not medical devices under current CE marking regulations will be regulated for the first time as high-risk AI systems and will require formal regulatory assessment.

The very real risk inherent in AI-enabled medical devices is that the analysis can only be as good as the data on which it is based. Take, for example, an algorithm that disproportionately assigns false negative results because of bias or omission in the underlying data. The demographics of the patient population used to train the AI system may be heavily skewed towards one group or omit another entirely. When the software is then used on a patient from a demographic that is not sufficiently represented in the training data, the results may be less accurate, leading to poorer health outcomes for that patient as a direct consequence of the underlying bias.

Training data drawn from a single hospital, for instance, will reflect the policies, processes, tools and demographic context of that hospital. When an AI tool built on such a limited training context is used in another hospital, differences in how patient cases are handled can lower the model's predictive accuracy in the new setting, because the model is not tuned to that hospital's approaches. A large inner-city hospital is likely to see very different patient demographics, co-morbidities, and environmental and lifestyle factors, alongside differences in clinician specialisation, compared with a small hospital in a suburban or rural setting; all of these can contribute to AI bias.
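One practical way to surface this kind of bias is to audit a model's error rates separately for each demographic subgroup rather than only in aggregate. The short Python sketch below is a minimal illustration of that idea; the records and group labels are entirely synthetic and are not drawn from any real device or study.

```python
# Illustrative sketch: auditing false negative rates by demographic subgroup.
# All records below are synthetic; the group names and fields are assumptions
# made for this example only.
from collections import defaultdict

# Each record: (demographic group, true label, model prediction),
# where 1 = condition present and 0 = condition absent.
test_records = [
    ("group_a", 1, 1), ("group_a", 1, 1), ("group_a", 1, 0), ("group_a", 0, 0),
    ("group_b", 1, 0), ("group_b", 1, 0), ("group_b", 1, 1), ("group_b", 0, 0),
]

positives = defaultdict(int)        # condition-present cases per group
false_negatives = defaultdict(int)  # missed cases per group

for group, y_true, y_pred in test_records:
    if y_true == 1:
        positives[group] += 1
        if y_pred == 0:
            false_negatives[group] += 1

for group in sorted(positives):
    fnr = false_negatives[group] / positives[group]
    print(f"{group}: false negative rate = {fnr:.0%}")
```

A disproportionately high false negative rate for one group, as the underrepresented group shows here, is exactly the pattern that skewed or incomplete training data tends to produce.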
The risk is not hypothetical. A study published in 2019 revealed that a healthcare prediction algorithm, used by hospitals and insurance companies throughout the US to identify patients in need of “high-risk care management”, was overlooking black patients.7 The algorithm used healthcare spending as a proxy for healthcare need, with the result that black patients were erroneously assigned lower risk scores for the same care needs, because of underlying socio-economic and access factors unrelated to health.8
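The mechanism at work in that study, a proxy label standing in for the outcome that actually matters, can be shown with a deliberately simplified sketch. The figures below are invented for the example and do not reproduce the cited research.

```python
# Illustrative sketch: how scoring on a proxy (historical healthcare spend)
# can understate need for a group with poorer access to care.
# All numbers are invented for this example.

patients = [
    # (group, number of chronic conditions, annual spend in arbitrary units)
    ("group_a", 4, 10_000),
    ("group_a", 4,  9_500),
    ("group_b", 4,  6_000),  # identical clinical need, lower historical spend
    ("group_b", 4,  5_500),
]

def risk_from_spend(spend, max_spend=10_000):
    """A naive 'risk score' that ranks patients by spend, the proxy,
    rather than by underlying clinical need."""
    return spend / max_spend

for group, conditions, spend in patients:
    print(f"{group}: conditions={conditions}, spend={spend}, "
          f"risk score={risk_from_spend(spend):.2f}")
```

Both groups have identical clinical need in this toy example, yet the group with lower historical spend receives systematically lower scores and would be less likely to be referred for high-risk care management.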
Left unchecked, algorithms that encode this kind of bias have the potential to maintain, or even widen, existing health inequalities.

The problem of underrepresentation in data is most keenly evident in FemTech, in part because of a historical bias in medical research in which women have often been underrepresented on the basis of misplaced beliefs. This is illustrated by a 1977 FDA recommendation that women of reproductive age should not be enrolled in early-stage clinical trials. The long tail of that policy has been hard to shake off: a recent paper found that, in studies carried out as recently as 2016-2022, women made up just 33% of participants.9 Similarly, during the COVID-19 vaccine trials, 28.3% of publications did not report the sex distribution among participants,10 and only 8.8% provided sex-disaggregated vaccine effectiveness estimates, which are key to understanding any differences in overall efficacy and to analysing the prevalence and severity of side effects experienced by women and men (see the worked example below).11 A 2010 survey of 2,000 animal studies confirms the trend, finding that 80% included more males than females, while as late as 2016, 70% of biomedical experiments did not include sex as a biological variable; where it was included, fewer than half of trials enrolled both males and females.12

Given this state of play and the existing data gap, it is vital that regulators and manufacturers work to ensure AI models are developed in ways that reduce and, where possible, eliminate sources of bias. An adequate regulatory framework, one that accounts for the peculiarities of this new tool, needs to be developed and applied. It should rest on solid methods for identifying confounding factors and reducing bias through context-specific model tuning and improvements to the underlying training data, so that the field can catch up and build a more diverse and richer data set on which AI-enabled medical devices can be trained.
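To make the point about sex-disaggregated reporting concrete, the sketch below estimates vaccine effectiveness as VE = 1 - (attack rate in the vaccinated arm / attack rate in the unvaccinated arm), first pooled and then separately by sex. The counts are invented for illustration and are not taken from any real trial.

```python
# Illustrative sketch: a pooled effectiveness estimate can mask a
# difference between sexes. All counts are invented for this example.

def vaccine_effectiveness(cases_vax, n_vax, cases_unvax, n_unvax):
    attack_vax = cases_vax / n_vax
    attack_unvax = cases_unvax / n_unvax
    return 1 - attack_vax / attack_unvax

# (cases among vaccinated, vaccinated n, cases among unvaccinated, unvaccinated n)
arms = {
    "women": (8, 5_000, 50, 5_000),
    "men":   (2, 5_000, 50, 5_000),
}

pooled = [sum(values) for values in zip(*arms.values())]
print(f"pooled VE: {vaccine_effectiveness(*pooled):.0%}")

for sex, counts in arms.items():
    print(f"{sex} VE: {vaccine_effectiveness(*counts):.0%}")
```

In this constructed example the pooled figure is 90%, which hides the fact that the estimate is 84% for women and 96% for men; without sex-disaggregated reporting, that difference, and any accompanying difference in side effect prevalence and severity, would be invisible.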
The regulatory approach
Regulators all over the world are aware of the potential for bias in AI and are working to develop suitable frameworks. The European