Q&A · PRECISION MEDICINE
lung cancer based on their chest CTs. Later, it's found that the algorithm doesn't travel well: it isn't replicable across multiple sites and populations, or it gives the wrong answer, because it was treated more as an engineering exercise than as a biomedical research exercise. We have a number of projects right now where we've built predictive algorithms to look at patient trajectory, such as whether individuals are going to develop sepsis or other critical issues during an inpatient hospital stay. Then we do a series of prospective studies where we actually run these algorithms for real patients in the hospital, but we're not delivering those alerts to providers to make clinical decisions. Rather, we're trying to see whether the outcomes that the providers ascertained during standard-of-care activities for these patients match what our algorithms are predicting. If we see enough concordance between our algorithms and those experts, then we move to the next phase, where we evaluate prospectively: we give those alerts to the providers and see if that changes outcomes. It's not a whole lot different from how we would proceed through the multiple stages of clinical trials for a new diagnostic or therapeutic approach. What we're doing is running clinical trials of AI, working through these increasingly expansive stages of use and evaluation.
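A minimal sketch of that silent, shadow-mode phase might look like the Python below. The encounter objects, the model's `risk_score` interface, and the outcome field are hypothetical stand-ins, not the team's actual pipeline; the point is simply that predictions are logged and compared against clinician-ascertained outcomes without ever firing an alert.

```python
# Shadow-mode evaluation sketch: the model scores real inpatients, but no
# alert is ever shown to providers. We measure concordance between the
# model's predictions and the outcomes clinicians ascertained during
# standard-of-care activities. All names here are hypothetical.
from sklearn.metrics import roc_auc_score, cohen_kappa_score

def shadow_mode_report(encounters, model, threshold=0.5):
    """Score encounters silently; compare against clinician-ascertained
    outcomes (e.g., whether sepsis actually developed during the stay)."""
    y_true, y_score = [], []
    for enc in encounters:
        y_score.append(model.risk_score(enc.features))      # no alert fired
        y_true.append(enc.clinician_ascertained_outcome)    # 1 = event occurred
    y_pred = [int(s >= threshold) for s in y_score]
    return {
        "auroc": roc_auc_score(y_true, y_score),
        "kappa_vs_clinicians": cohen_kappa_score(y_true, y_pred),
        "n_encounters": len(y_true),
    }

# Only if concordance is high enough would the next prospective phase
# deliver live alerts and test whether outcomes actually change.
```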
Are there also some issues around data bias?
If you don't use the right data to train your algorithm, and you don't understand how that data maps to the features of the patient populations the algorithm is intended to benefit, you can both encode biases and potentially predispose negative outcomes for the populations you're going to deliver that solution to. So you have to start from the very beginning by measuring and understanding biases in the data you're using to train the algorithm, and then, similarly, biases in the way you structure the study. Unfortunately, in the computational domain, the general thinking is just 'get me more data,' without necessarily thinking about how we reduce biases and increase the diversity of that data. That's a change in culture around these AI approaches that we have to promote, and it's certainly something we focus on very carefully here with the work we do in the Institute for Informatics.
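One way to start "from the very beginning" is a simple audit of the training data before any model is fit. The sketch below assumes a pandas DataFrame with demographic and outcome columns; the column names are illustrative assumptions, not a prescribed schema.

```python
# Pre-training bias audit sketch: check whether the training cohort's
# composition and outcome rates look reasonable before fitting anything.
# Column names ("race", "sex", "age_band", "sepsis_label") are hypothetical.
import pandas as pd

def audit_representation(df: pd.DataFrame, cols=("race", "sex", "age_band")):
    """Print the cohort's demographic composition so it can be compared
    against the population the algorithm is intended to serve."""
    for col in cols:
        print(f"--- {col} ---")
        print(df[col].value_counts(normalize=True, dropna=False).round(3))

def audit_outcome_rates(df: pd.DataFrame, outcome="sepsis_label", group="race"):
    """Outcome prevalence by subgroup: large unexplained gaps can signal
    label bias or differential access to care baked into the data."""
    return df.groupby(group)[outcome].agg(["mean", "count"])
```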
Is a lot of the work based on breakthroughs in genomics? Or in clinical phenotypes?
Yes, genomics is a powerful way of understanding the basis for human health and disease, but I'm often inclined to say to my students that when I sequence a patient and understand what information is encoded in their DNA, what I've really learned is the blueprint for what is going to be built. So if you'll forgive the metaphor: we all know that when we construct a building, even with a blueprint an architect has put together, the actual building we get is a function of the availability of materials, the quality of the labor, maybe even the weather while it was being built, the ground conditions, and so on. A lot of factors influence how we get from that blueprint to the final building. Well, the same is true for human beings. When I know the sequence of an individual's DNA, or potentially if I've looked at their RNA, I will know what is meant to happen biologically. But then all these other factors (clinical phenotype, behavior, environment) come into play. So breakthroughs in genomics are absolutely essential to delivering precision medicine, but we also have to measure all of these other data sources and combine that data if we're really going to understand the complex, multifactorial space that contributes to both health and disease. In many ways, we have been more successful in the genomics domain than in our ability to phenotype patients clinically or to understand how their environment or behaviors influence health and disease. We really have to play catch-up to understand what's going on beyond the genome if we're going to achieve the promise of precision medicine.
Has your organization created models and used predictive analytics to better empower clinical operations, research and public health initiatives in response to the COVID-19 pandemic? Does it require new types of partnerships within your organization or within the community and public health organizations?
We have developed predictive models and deployed them for use both locally and at the population level to help us better respond to the pandemic. These models have spanned a spectrum, from identifying critically ill patients
who might benefit from palliative care consults, to better understanding the trajectory of our patients in the ICU and anticipating who might experience respiratory failure and therefore need early intervention in the form of more advanced respiratory therapies. Most recently, we've been looking at how we can predict likely outcomes when a patient is placed on ECMO. We're now seeing younger, sicker patients with COVID, and one of the questions is whether we should put them on ECMO earlier. ECMO is often a therapy of last resort, which means patients are already very ill when they're placed on it, which reduces its therapeutic benefit. The question is, could we identify those patients earlier and perhaps intervene earlier to maximize outcomes and reduce the likelihood of complications? In addition, we've done similar work at the population level, trying to anticipate hot spots of COVID infection based on prior activity in the region, such as testing or other patient-reported data. Across the board, COVID-19 has been a driver for us to think about how we can use prediction to better organize our response to the pandemic. It's also been a catalyst for moving some of these algorithms into the clinical or public health environment more quickly than we normally would have. That has both benefits and challenges: the benefit is that we're getting real-world experience; the challenge is that we're not always getting the opportunity to evaluate these algorithms at the level of rigor we might have if it were not a crisis situation. This is not to say that we're deploying unsafe algorithms; we're constantly monitoring them. There's actually a whole discipline of informatics that we refer to as algorithmovigilance, which is essentially constant monitoring of algorithm performance to ensure that it is doing what it is expected to do and that the results are accurate. But without a doubt, prediction has been a major part of how we responded to the pandemic.
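In code, the core of algorithmovigilance can be as simple as a rolling comparison of live performance against a validation baseline. The sketch below is a minimal illustration under assumed thresholds and interfaces, not a description of any production monitoring system.

```python
# Algorithmovigilance sketch: track a deployed model's recent performance
# in a rolling window and flag degradation against its validation baseline.
# The window size and allowable drop are illustrative assumptions.
from collections import deque
from sklearn.metrics import roc_auc_score

class AlgorithmMonitor:
    def __init__(self, baseline_auroc: float, window: int = 500,
                 max_drop: float = 0.05):
        self.baseline = baseline_auroc
        self.max_drop = max_drop
        self.window = deque(maxlen=window)   # rolling (label, score) pairs

    def record(self, label: int, score: float) -> None:
        """Log each scored case once its true outcome becomes known."""
        self.window.append((label, score))

    def check(self):
        """Return the rolling AUROC, raising if it has drifted too far."""
        labels = [l for l, _ in self.window]
        scores = [s for _, s in self.window]
        if len(set(labels)) < 2:             # AUROC needs both classes present
            return None
        auroc = roc_auc_score(labels, scores)
        if auroc < self.baseline - self.max_drop:
            raise RuntimeError(
                f"Performance drift: rolling AUROC {auroc:.3f} vs baseline "
                f"{self.baseline:.3f}; review the model before further use.")
        return auroc
```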
You have been involved with the National COVID Cohort Collaborative (N3C), which aims to bring together EHR data, harmonize it and make it broadly accessible. What were some lessons your organization learned from the rapid-fire response to COVID?
Well, I think there are three critical lessons that we've learned from our rapid-fire response, especially at the national level.