trained on images of melanoma in people with dark skin: “If we can’t get a model that performs equally across the population, then ethically, is it right to be implementing something that a certain sector of the population will not get as good? We just need to make sure that there is a safeguard to make sure that those people who fall in that category have additional access to care, to make sure they get a proper assessment.”
Hallucinations and inaccurate interpretation
Doctors using generative AI observe that it is still prone to ‘hallucinations’. Although respondents themselves understood this risk, some questioned whether all doctors using generative AI in their practice are aware of it. Respondents also gave a number of examples where using AI can produce inaccurate outputs. Doctors using imaging AI to aid the diagnosis and treatment of stroke patients were particularly keen to highlight that AI image interpretation can be inaccurate (either over-sensitive or under-sensitive) and, in some instances, based on truncated data collection. A consultant radiologist pointed out that one of the AI systems used to assess the severity of a stroke scanned only 40% of the brain. They were concerned that, in some cases, AI results were shared prematurely, and that doctors might not treat patients as sufficiently urgent because the AI output (which is limited in scope) did not indicate a stroke, even though a stroke could not be definitively ruled out. One respondent said: “The output from a
couple of the systems goes directly to the stroke physicians and I believe that they may be using that output as a diagnosis, which they’ve been
told not to; but it’s human nature: if you’ve got something that says this is normal and they’re trying to work out what to do, they’re going to say: ‘Oh, fine; I don’t need to see that patient quickly.’ But they’re not appreciating that it’s only covering 40% of the brain and a lot of the strokes we see are not in that 40%.”
Potential for over-reliance
For some doctors, a top-of-mind risk associated with AI is the potential for doctors to over-rely on these technologies. This was a particular concern for two radiologists working with diagnostic and decision support systems for the assessment of stroke patients. One, for example, pointed to a number of inappropriate referrals received at the tertiary centre where they work that were based on the results of AI used in ‘district general hospitals’. “In general, I didn’t find it helpful, and I also think that non-neuroradiologists – so people working at district general hospitals who are referring cases – over-rely on it massively. We used to get lots of referrals that were inappropriate, based on incorrect outputs.” Both radiologists warned that doctors who have access to AI-based results before a full radiology report has been produced should be very wary of acting on those results. It is not just radiologists, however, who are concerned about doctors’ potential to over-rely on AI. An anaesthetist was concerned that colleagues with less ‘expert’ knowledge might derive misplaced confidence from the AI system and consider using local anaesthesia in cases where, based solely on their own knowledge, they would otherwise have dismissed it. Ultimately, the survey of a diverse sample
of doctors who use AI in some form reveals that many currently regard AI as an assistive technology. Most are clear that responsibility for any decision informed by AI remains with the doctor. Furthermore, doctors report a willingness and confidence to override the recommendations of AI when they disagree with its outputs, and some are already doing so frequently. Shaun Gallagher, Director of Strategy and
Policy at the GMC, concluded: “These views are helpful for us as a regulator, but also for wider healthcare organisations, in anticipating how we can best support the safe and efficient adoption of these technologies now, and into the future.”
To download the full report, visit:
https://www.gmc-uk.org/-/media/documents/understanding-doctors--experiences-of-using-artificial-intelligence--ai--in-the-uk_pdf-109934543.pdf