Operating room technology
Today’s surgical robots are guided through consoles that provide a magnified, three-dimensional view of the surgical site.
losing, in the next generation, all those subtle skills that make a clinician a very good clinician.” Rodriguez y Baena has devoted his 25-year-long career to the development of robotics in surgery, but he remains grounded in his family background as a clinician (both of his parents were clinicians too).

“They say that a very good clinician can look at you and, in a few seconds, sort of know what’s wrong with you,” he remarks. “That’s their training, their experience. Or, it could be in the way a patient looks, the way they move, the way they use their eyes – there are all these subtleties. And the human mind, through experience and training and making mistakes, and looking at many, many patients over a long period of time, somehow conjures some intuition about what may be wrong with a patient.”

It’s this complex and intuitive skill set that Rodriguez y Baena fears losing if future generations place too much faith in the development of AI. And, though it is true that this intuitive, experiential approach makes us fallible, makes us prone to error, it also takes into account the reality of the world around us – something that robots are simply unable to do.
Machine learning runs on data, on pattern recognition and, therefore, on probability – which means that AI robots make decisions according to calculations about some kind of greater good. Investing AI with the power to make surgical judgements would therefore seem to require the activation of the 19th-century philosopher Jeremy Bentham’s ‘fundamental axiom’, according to which “it is the greatest happiness of the greatest number that is the measure of right and wrong”.

It’s a sound ethical code in theory; in practice, though, as critics of Bentham’s utilitarianism have argued, the doctrine leaves many questions unanswered: how, for example, can we define happiness, and why should we consider it the greater good? More importantly – and this is HAL’s conundrum – at what point might acting in the interests of the many justify injury to a few?

This last question might also be levelled at the use of AI in medical decision making. Consider a game of chess: the most intelligent human has the capacity to weigh up probabilities several moves ahead; according to an article in Nature last year, the latest developments in AI might give machines the capacity to weigh up when we’re going to die. At some point, these machines might be sophisticated enough to draw unimaginable patterns from the raw data we feed them. The question is, will the balance between surgeon and machine begin to tip? And if surgical decision aids do get better at decision making, who gets the power to override whom?
Market forces
Perhaps ironically, as Rodriguez y Baena reminds us, the greatest danger we face at this juncture in the development of surgical AI might not be the technology but ourselves and our desire to embrace these machines with all-too-open arms. “If you look around,” he notes, “it’s all AI, AI, AI, and funders jump on board and keep on stimulating this push without bounds – and that, I think, can be counterproductive.”

Rodriguez y Baena is also sceptical about what he calls “a holy grail of connectedness, smartness and AI assistance for all”, as promoted by “the Googles of this world”. As he explains, this affects funders, who affect researchers, who begin to write specific proposals because they want to please the funders. “We start to lose a little bit of perspective. And the job, at least of us academics, is to also act as the devil’s advocate; not only to jump in all, ‘Tech, tech, tech!’, but also to say, ‘Hey, wait a second, there are constraints’.”

HAL shuts Dave out of the spacecraft in 2001, but he doesn’t succeed in trapping him in space: Dave disconnects HAL and speeds on towards Jupiter. It’s a lonely quest, though, without man or machine for company, and Dave slowly loses his mind in the acid dream of deep space, confronting ageing iterations of himself and winding up as a giant foetus that floats towards Earth. Perhaps Kubrick’s famously ambiguous ending is a parable against pulling the plug on our robot friends; maybe Dave’s mistake was to assume HAL’s sentience. The HAL 9000 was just a robot after all, an aid in decision making, mining data for patterns. To use AI effectively, in the operating room and beyond, we have to understand its limits first.