DIGITALIZATION IN MEDICAL DEVICES
BSI is continuing to prepare the way for effective regulation of artificial intelligence (AI) in medical devices with the publication of a report in May making recommendations for a way forward. The report, ‘Machine learning AI in medical devices: adapting regulatory frameworks and standards to ensure safety and performance’, was published in partnership with the US standards organization, the Association for the Advancement of Medical Instrumentation (AAMI). It follows on from the groundbreaking report published in February last year, which first highlighted the issues associated with regulating AI in healthcare. Currently there are no standards that cover the definition, development, deployment and maintenance of AI in the context of regulated medical devices, and BSI is continuing to work closely with AAMI to fill this gap.
The challenge for regulation is to build the trust of clinicians and patients and to reassure people that a product incorporating AI is safe and effective, while being clear about the risks. “This is an area that needs thought leadership,” said Rob Turpin, Healthcare Market Development Manager for BSI. “Since the last report we’ve moved the conversation on. Working with the same group of stakeholders, we wanted to drill down into medical device regulation and increase clarity around safety and performance.
“The question we asked ourselves was, ‘What is different about AI that means we will have to rethink standards?’ In the paper we particularly focus on risk management and validation for the more complex cases of AI, such as continuous learning systems and machine learning algorithms that pose a new set of challenges to healthcare.”
Chief among these are: the level of autonomy introduced by AI technologies; the ability of continuous learning systems to change their output over time in response to new data; and the ability to explain and understand how an output has been reached. The sketch below illustrates the second of these challenges.
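To make that challenge concrete, here is a minimal, hypothetical sketch, not taken from the report, using scikit-learn’s SGDClassifier on entirely synthetic data. It shows how a continuously learning model can return a different output for the same input after ingesting new data, which is exactly what complicates point-in-time approval.

```python
# Minimal sketch of the continuous-learning problem (synthetic data, not
# from the report): the same input can receive a different output after
# the model ingests new data.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)

# Hypothetical "device" model that learns incrementally from streaming data.
model = SGDClassifier(loss="log_loss", random_state=0)

# Initial batch: a stand-in for the data available at the point of approval.
X_initial = rng.normal(0.0, 1.0, size=(200, 4))
y_initial = (X_initial[:, 0] > 0).astype(int)
model.partial_fit(X_initial, y_initial, classes=[0, 1])

fixed_case = X_initial[:1]  # one unchanging patient record
print("Output at approval:  ", model.predict_proba(fixed_case)[0, 1])

# Post-market data arrives with a shifted input/label relationship.
X_new = rng.normal(0.5, 1.0, size=(200, 4))
y_new = (X_new[:, 1] > 0).astype(int)
model.partial_fit(X_new, y_new)

# Same input, different output: the behaviour the regulator assessed is
# no longer the behaviour running in the field.
print("Output after update: ", model.predict_proba(fixed_case)[0, 1])
```

In practice the drift would come from real post-market data; the point is only that the approved behaviour and the deployed behaviour can diverge silently.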
All stakeholders want clinicians to be able to take advantage of the medical advances that AI could bring in terms of better outcomes for patients, such as quicker and more accurate diagnoses. However, trust is a real issue where a machine moves from performing automated tasks to making its own decisions.
“Current practice in regulation requires approval of significant change in the output of a device before it is introduced,” said Rob. “Where devices can potentially modify their own outputs on a continuous basis, as is the case with AI, regulators will find it difficult to approve them for use if the rationale and evidence behind their actions are unclear, or if the device’s outputs change over time.
“In the report we provide examples of what can go wrong with AI when it’s applied in practical situations, and we highlight some of the questions that need to be asked before you use it,” said Rob.
One example highlighted is a driving app designed to help users avoid traffic. “During the devastating California wildfires in 2017, it directed traffic into areas where the fires were fiercest, because that was where it saw traffic was lightest. The app did not have adequate information or sufficiently robust algorithms to make safe and accurate recommendations.”
The report discusses what kind of controls could help assure that AI is safe and effective, and makes a number of recommendations covering failure modes and hazards that are specific to AI. These include regulators providing guidance on the validation of AI.
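By way of illustration only, one family of controls in this spirit is audit-trail traceability: hashing each model state and accepting outputs only from versions that have passed review, so the evidence behind any output can be tied to a known model. This mechanism is an assumption for illustration, not a recommendation quoted from the report, and the helper names below (snapshot, record_output, audit_log) are hypothetical.

```python
# Speculative sketch of an audit-trail control (an assumption for
# illustration, not a mechanism taken from the report): hash each model
# state, and accept outputs only from versions that have passed review.
import hashlib
import json
from datetime import datetime, timezone

audit_log = []  # in practice, tamper-evident storage


def snapshot(model_params: dict, approved: bool) -> str:
    """Record a model version; hypothetical helper."""
    blob = json.dumps(model_params, sort_keys=True).encode()
    version = hashlib.sha256(blob).hexdigest()[:12]
    audit_log.append({
        "version": version,
        "approved": approved,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    })
    return version


def record_output(version: str, case_id: str, output: float) -> None:
    """Tie each output to the exact model version that produced it."""
    if not any(e.get("version") == version and e.get("approved")
               for e in audit_log):
        raise RuntimeError(f"model version {version} is not approved for use")
    audit_log.append({"version": version, "case": case_id, "output": output})


v1 = snapshot({"weights": [0.2, -1.1, 0.7]}, approved=True)
record_output(v1, "case-001", 0.83)   # accepted: approved version

v2 = snapshot({"weights": [0.3, -1.0, 0.9]}, approved=False)  # self-updated
record_output(v2, "case-002", 0.91)   # raises: unreviewed model state
```

The point of such a control is that a changed model state becomes a visible, reviewable event rather than a silent update.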