Technology


Doctors share their thoughts on the use of AI


Doctors who use artificial intelligence (AI) see benefits for their own efficiency and for patient care in a resource-stretched NHS. Although they recognise there are risks, they feel confident in managing them, according to a new study published by the General Medical Council (GMC).


Researchers commissioned by the GMC sought to find out more about the types of AI doctors are using, how well they understand the risks, and what they do if they disagree with the output of an AI system. Doctors who had used AI in the past 12 months discussed the benefits, the risks and their understanding of their professional responsibilities when using such technologies, in a series of in-depth interviews with researchers from Community Research.

Most saw benefits to their efficiency when using AI, seeing it as a way to save time or make better use of it. However, some queried this, saying they lacked confidence in the accuracy of some diagnostic and decision support systems, and so spent more time checking the results they received. Many doctors felt that NHS IT systems would need to improve to pave the way for a broader roll-out of AI technologies, noting that many such tools are highly specialised and still in development. Doctors who currently use generative AI, such as ChatGPT, often do so because of an existing interest in AI.

Further benefits of AI use shared by doctors included the potential to reduce the risk of human error and to reduce bias in judgements based on patient characteristics. They also identified the limitless capacity AI has to draw on wider research, data or guidance on a particular topic, compared with an individual.

Doctors interviewed also understood that these emergent technologies present risks. They saw potential for AI-generated answers to be based on data that could itself be false or biased. They also acknowledged possible confidentiality risks in sharing patient data, and the potential for over-reliance and deskilling. Many said they feel confident to override decisions made by AI systems if necessary, and that ultimately responsibility for patient care remains with them. Some speculated that this may change in the future as systems become more sophisticated, and looked to regulators, such as the GMC, for more guidance going forward.

Doctors identified several education and training needs in relation to AI:


• Basic education on AI: promoting a broad understanding of AI, its risks and responsibilities, and its potential impact on doctors’ practice (as part of both medical education and continuing professional development).

• Specific system training: for those doctors using, or considering using, generative AI and/or diagnostic and decision support systems.

• Ethics and data protection: highlighting potential issues around consent for how patient data is used, and what level of information needs to be shared with patients when obtaining consent.


Some pointed to the fact that algorithms and computing have long been used to support decision-making within medicine. They questioned where the distinction lies between ‘old-fashioned’, computerised decision-making systems and AI.


One consultant commented: “If you have your cholesterol and blood pressure measured and your GP says: ‘Oh, I think you should take some anti-cholesterol tablets,’ they’re already using an algorithm to work that out. Nobody is making a fuss about that… So, it already exists. We’re inventing bogeymen, really… We’ve already done this for years without anybody worrying about it.”

Another consultant said: “In my opinion, it’s a marketing term… I see a lot of companies producing products and they just call it AI-powered or AI – it’s a nonsense.”

There appears to be some uncertainty about what exactly constitutes AI. However, doctors reported using the following AI systems within their practice: ChatGPT, Claude, SystmOne, EMIS, Patchs, Anima, AccurX, Limbus, Aspire triage, Brainomix and RapidAI.


Generative AI

Those doctors using ChatGPT, Claude or other generative AI tools use them for a number of different purposes. Many of the doctors using generative AI tend to have a keen interest in technology and see themselves as relatively early adopters of these technologies. Often the


September 2025 I www.clinicalservicesjournal.com 63





