Regulatory Evaluation needed for AI


AI-powered medical devices approved by the FDA under its current framework lack critical data on performance and bias, according to research newly published in Nature Medicine. All but four of the 130 AI devices approved by the FDA between January 2015 and December 2020 were only studied retrospectively. Prospective studies are needed to evaluate a tool’s clinical use and interactions with hospital systems, which, particularly given the complexity of AI, are likely to differ from what developers can envisage.


The Stanford University research team also noted a lack of diversity in studies, with just 17 of the 130 considering how the algorithm might perform across different patient groups. Using their own AI models for triaging potential cases of pneumothorax, the authors found substantial drop-offs in performance when the tools were implemented at new sites, as well as disparities in results across different patient demographics. The report recommends that AI devices are evaluated at multiple sites, with prospective studies that compare them with the standard of care. Post-market surveillance is also necessary to identify and measure unintended outcomes and biases missed in trials.
Source: Nature Medicine
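To make that recommendation concrete, the sketch below shows one way a development team might break a triage model's sensitivity out by site or patient group rather than reporting a single aggregate figure. It is illustrative only: the field names, groups and toy results are assumptions, not data from the Nature Medicine study.

```python
from collections import defaultdict

def sensitivity_by_group(cases):
    """Return per-group sensitivity (true-positive rate) of a triage model."""
    tp = defaultdict(int)   # true positives per group
    fn = defaultdict(int)   # false negatives per group
    for case in cases:
        if case["label"] == 1:            # pneumothorax actually present
            if case["prediction"] == 1:
                tp[case["group"]] += 1
            else:
                fn[case["group"]] += 1
    groups = set(tp) | set(fn)
    return {g: tp[g] / (tp[g] + fn[g]) for g in groups}

# A result that looks acceptable in aggregate can hide a shortfall at one site
# or in one patient group (all figures below are made up).
cases = [
    {"group": "site_A", "label": 1, "prediction": 1},
    {"group": "site_A", "label": 1, "prediction": 1},
    {"group": "site_B", "label": 1, "prediction": 0},
    {"group": "site_B", "label": 1, "prediction": 1},
]
print(sensitivity_by_group(cases))   # e.g. {'site_A': 1.0, 'site_B': 0.5}
```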




intermediate decisions to doctors could make them feel obliged to question each of those decisions. For Minssen, prompting humans to give approval or intervene at each stage largely defeats the purpose of using AI, slowing it down and undermining its effectiveness. “We need interfaces that enable users to better understand underlying computing processes,” suggests Minssen. “But if we take [transparency] too far then [the device] just becomes a tool.” Instead, an appropriate level of user understanding could be achieved by explaining key aspects of how a system works, including how it arrives at its decisions. “I think it would be desirable if the [device’s] assumptions and restrictions, the operating files, data properties, and output decisions can be systematically explained and validated,” says Minssen.


Four: AI devices of the 130 approved by the FDA between January 2015 and December 2020 that were studied prospectively.


Algorithmic bias

Like the humans that build them, machine learning applications can carry many different types of bias. A coder could unintentionally transfer their world view into parts of the algorithm by having it solve problems in certain ways, or an algorithm could be trained on a data set that isn’t representative of the population for which it is intended. Bias can also be contextual, for example, if an algorithm were created to function within a certain setting and moved to another, such as from a specialist clinic to a rural hospital.
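One of the bias sources described above, a training set that does not represent the intended population, can at least be surfaced with a simple audit of the data before a model is trained. The sketch below is a hypothetical illustration; the group names, shares and tolerance are assumptions, not any regulator's requirement.

```python
def representation_gaps(train_counts, population_share, tolerance=0.05):
    """Flag groups whose share of the training data differs from their share
    of the intended-use population by more than `tolerance`."""
    total = sum(train_counts.values())
    gaps = {}
    for group, target in population_share.items():
        observed = train_counts.get(group, 0) / total
        if abs(observed - target) > tolerance:
            gaps[group] = {"training_share": round(observed, 3),
                           "population_share": target}
    return gaps

# Hypothetical figures: a data set collected almost entirely at specialist
# clinics, for a device also intended for use in rural hospitals.
train_counts = {"specialist_clinic": 900, "rural_hospital": 100}
population_share = {"specialist_clinic": 0.6, "rural_hospital": 0.4}
print(representation_gaps(train_counts, population_share))
# {'specialist_clinic': {'training_share': 0.9, 'population_share': 0.6},
#  'rural_hospital': {'training_share': 0.1, 'population_share': 0.4}}
```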


17: Studies of the 130 AI devices approved by the FDA that considered how the algorithm might perform across different patient groups.


“The best we can do is try to mitigate biases as much as possible, by identifying the different buckets where bias can exist and then developing standards for […] what each AI manufacturer has to show [and to do] to mitigate this bias,” says Gerke. Regulators could, for example, require manufacturers to meet certain standards for data collection or to demonstrate how they have taken steps to minimise bias. As well as looking at the data itself, manufacturers could also consider whether the team developing the algorithm might risk introducing bias, explains Minssen. This might happen if the device is intended for people who breastfeed but no one on the development team has breasts. It’s important that awareness of potential bias is baked into the development of a device, in terms of both performance and patient safety. “People don’t see issues of bias as safety issues, they think of them as a kind of woolly social science question,” says Ordish. “There’s a direct and important safety issue about having representative data and ensuring that your model isn’t biased.” The FDA’s plan to develop a methodology for identifying and eliminating bias is currently at the research stage.


Real-world performance

Who is responsible for a device’s safety as it moves from the lab to the doctor’s office and from there into a patient’s hand? Per the action plan, the FDA will run a pilot monitoring programme that collects device performance data to inform an eventual framework for real-world monitoring. In a similar vein to patient-centricity, effective post-market monitoring needs to engage healthcare professionals as well as manufacturers, points out Prof Dr Christian Johner, director of the Johner Institute for Healthcare IT. “The system needs to be improved to systematically collect and assess the post-market surveillance data,” he says. “We [currently] rely on the manufacturer being honest enough to report any incidents, and the collection and assessment of field data reported by healthcare professionals is still not really working.” Johner predicts that manufacturers will most likely be required to record usage data in future FDA monitoring regulations, in line with the broader industry shift towards relying on manufacturer data and evidence. Post-market monitoring should also include ongoing scrutiny of a device’s risk, he adds.

In the UK, patients can report issues with a device through the MHRA’s yellow card scheme. However, as this is not obligatory, nor done through the device itself – users must give feedback through a dedicated MHRA website – the regulator can only do so much with what is reported, says Ordish.
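Johner's argument for systematically collecting usage data hints at what a post-market record might need to contain. The Python sketch below shows one hypothetical shape for such a record; the field names and the JSON-lines log are assumptions, not an FDA or MHRA schema.

```python
import json
import time
import uuid

def log_inference(log_path, model_version, site_id, prediction, confidence,
                  clinician_override=None):
    """Append one de-identified usage record to a JSON-lines audit log."""
    record = {
        "event_id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "model_version": model_version,        # which build made the call
        "site_id": site_id,                    # where the device is deployed
        "prediction": prediction,              # what the model recommended
        "confidence": confidence,
        "clinician_override": clinician_override,  # what the clinician did instead, if anything
    }
    with open(log_path, "a", encoding="utf-8") as fh:
        fh.write(json.dumps(record) + "\n")

# Example: a clinician downgraded the model's "urgent" triage call to "routine".
log_inference("usage_log.jsonl", model_version="1.3.0", site_id="site_B",
              prediction="urgent", confidence=0.71, clinician_override="routine")
```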


It may be a few years before there are clear regulations for AI devices, but it is good practice for manufacturers to think critically about the ethical, practical and legal challenges of using AI in medical devices at the beginning of development, rather than trying to adjust the systems of a finished product. It’s also essential not to lose sight of patient outcomes. “Many products may be safe and effective, but they don’t necessarily improve patient outcomes,” says Gerke. “We don’t necessarily need another AI that does not do any good in the long term.” ●


Medical Device Developments / www.nsmedicaldevices.com

