Q&A · PRECISION MEDICINE


data or data that may help us understand the environments in which patients live, or social determinants of health and disease? We need to be able to identify all those data, connect all those data to one another, and then understand them in that multi-scale context, which is very complex from a computational standpoint. And so that's really what we do in the Institute for Informatics. We work very hard to discover those data sources, to integrate them, to harmonize them, and to understand them.


Can you talk about some projects within the personalized medicine initiative?

We have a variety of projects underway in the Institute for Informatics that contribute to our precision medicine strategy, and they span a broad variety of diseases that most people are familiar with, such as cancer, cardiovascular disease, neurodegenerative disease, and other common diseases. We also have projects focused on rare diseases that occur less frequently but are no less important when we think about how we can have better, more precise approaches to diagnosis and treatment planning.

One of our focuses is Alzheimer's disease. We know with Alzheimer's disease that there are a variety of presentations, and while the biology may be somewhat similar across those presentations, some patients will see very rapid neurodegeneration and deterioration of their cognitive function, while for other patients it's a longer, more gradual process. And while we don't have curative strategies for any of those scenarios, there are measures we can take to improve quality of life and to support caregivers and family members as they navigate this disease. But that means we need to understand the likely outcome for a patient.

We've been looking at a broad variety of data sources from patients enrolled in clinical trials in Washington University's memory care clinic. This includes data captured in the EHR, but also a variety of cognitive evaluation instruments and patient-generated data. And we've been using machine learning methods to identify patterns in those data that allow us to predict which patients are going to have a rapid decline and which are more likely to have a slower, more gradual one. We've seen great success with those preliminary models, and now we're working with our clinical collaborators to validate them. Importantly, we're doing that with data that's captured in the clinic, so it doesn't require us to do anything different at the point of care. Rather, it's a different way of looking at all the data we collect at the point of care, so we can improve our ability to make prognostic assessments of a patient. We talk about this in the Institute for Informatics as an effort to understand patient trajectory. Not all precision medicine is about finding a new treatment; some aspects of precision medicine are simply about better understanding the trajectory a patient is on, so we can make smarter choices throughout the duration of that entire trajectory.
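To make this concrete, here is a minimal sketch of the kind of prognostic model described above: a classifier trained on tabular clinical features to separate likely rapid decliners from gradual ones. The feature set, the synthetic data, and the scikit-learn model choice are illustrative assumptions, not the institute's actual pipeline.

# Hypothetical sketch: predict "rapid" vs. "gradual" cognitive decline
# from tabular clinical features. All data and feature meanings are
# invented stand-ins (e.g., age, baseline cognitive score, lab values).
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.impute import SimpleImputer
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n = 500

# Synthetic feature matrix; EHR data is rarely complete, so blank out
# roughly 10 percent of the values to mimic missingness.
X = rng.normal(size=(n, 4))
X[rng.random(X.shape) < 0.1] = np.nan

# Synthetic label: 1 = rapid decline, 0 = gradual decline.
y = (np.nan_to_num(X[:, 1]) + 0.5 * np.nan_to_num(X[:, 2])
     + rng.normal(scale=0.5, size=n) > 0).astype(int)

# Impute missing values, then fit a gradient-boosted classifier.
model = make_pipeline(SimpleImputer(strategy="median"),
                      GradientBoostingClassifier(random_state=0))

# Cross-validated AUC as a first check, before any clinical validation.
scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print(f"AUC: {scores.mean():.2f} +/- {scores.std():.2f}")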


What are some of the challenges in finding critical insights from EHRs and other data sources? What are some approaches you have taken to using EHR data?

One of the issues with using EHR data in precision medicine or healthcare research is that much of the data that's really important to understanding both health and disease is not captured in discrete or structured fields in the EHR. By that I mean very specific and concrete measurements that we can use when we apply advanced computational methods like machine learning. Probably somewhere between 70 and 80 percent of the really high-value data is captured either in narrative text, in the form of notes in the EHR, or in documents, images, and other communications that are scanned and attached to the record as PDFs or other non-computable formats. In all of those scenarios, we have to use advanced computational methods to extract information from that narrative text and from those scanned documents and images, in order to render the discrete features that ultimately inform, for example, the predictive models in Alzheimer's disease or cardiovascular disease or diabetes or cancer. That's very challenging in terms of training the computational algorithms that allow us to extract that information, validating that the information we're extracting is in fact accurate, and then integrating it with the other discrete data that we do get out of the EHR. That is exacerbated further when we start talking about patient-generated data or social determinants, which are also very important.

With the unstructured content found in notes in the EHR, one of the approaches we use is natural language processing, or NLP, which is an AI approach to effectively interpret that narrative text and extract features.


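As a toy illustration of what NLP-based feature extraction means in practice, the sketch below pulls two discrete features out of a short, invented clinical note using regular expressions. Production clinical NLP relies on far more capable tools (dictionary-based systems, trained language models); the note text, patterns, and feature names here are all hypothetical.

# Toy illustration: turn narrative note text into discrete features.
import re

NOTE = ("Pt reports worsening short-term memory over 6 months. "
        "MMSE 22/30 today, down from 26/30 last year. Denies falls.")

def extract_features(note: str) -> dict:
    """Pull a few structured fields out of free text."""
    features = {}

    # Most recent MMSE score, if one is documented.
    mmse = re.findall(r"MMSE\s+(\d{1,2})/30", note)
    if mmse:
        features["mmse_latest"] = int(mmse[0])

    # Simple negation-aware flag: "denies falls" must not count as falls.
    features["falls_reported"] = bool(
        re.search(r"\bfalls?\b", note, re.I)
        and not re.search(r"\b(denies|no)\b[^.]*\bfalls?\b", note, re.I)
    )
    return features

print(extract_features(NOTE))
# -> {'mmse_latest': 22, 'falls_reported': False}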


It's not entirely different when we talk about imaging data. Humans looking at an image can point to a spot or a lesion on a chest CT, for example. What we have to do is train a computer to recognize that same pattern and produce discrete fields: Is there a lesion? Where is it anatomically located? How large is it, and how certain are we that it's there? That's a very simple example, but it helps illustrate how we're teaching computers to interpret pictures. A lot of what we do is teach the computer to read, or teach the computer to interpret pictures, so we can get those structured features back out and put them into our predictive models.
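The sketch below illustrates that last step under stated assumptions: a mocked-up lesion detector's raw output is mapped into the kind of discrete, structured record (presence, location, size, confidence) that a downstream predictive model can consume. The field names and the decision threshold are hypothetical.

# Hypothetical sketch: render a detector's raw output as discrete fields.
# The detection model itself is not shown; its output is mocked.
from dataclasses import dataclass

@dataclass
class LesionFinding:
    present: bool
    location: str | None       # anatomical region, e.g., "right upper lobe"
    diameter_mm: float | None  # estimated size
    confidence: float          # model's certainty, 0.0 to 1.0

def detection_to_finding(raw: dict, threshold: float = 0.5) -> LesionFinding:
    """Map a detector's raw output to a discrete, queryable record."""
    if raw["score"] < threshold:
        return LesionFinding(False, None, None, raw["score"])
    return LesionFinding(True, raw["region"], raw["diameter_mm"], raw["score"])

# Mocked detector output for one chest CT.
raw_output = {"score": 0.91, "region": "right upper lobe", "diameter_mm": 8.2}
print(detection_to_finding(raw_output))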


Are there elements of machine learning and AI behind the strategies to address cancer and other conditions?

There is a lot of interest in biomedicine around the use of AI, and in particular machine learning or deep learning, to identify patterns in data and predict outcomes for patients. Broadly, there is great promise here, in that these types of algorithms allow us to identify high-order patterns in data that more traditional statistical modeling and testing approaches cannot. We know that in both health and disease, the patterns that exist around the interaction of genes, gene products, clinical features, people's behaviors, and their environments are very complex. There is unlikely to be a single indicator that tells us whether or not a patient is going to experience a disease state; rather, it's a confluence of all these different indicators that helps us predict outcomes, and that is where the power of machine learning and other AI methods becomes all the more important.

The real challenge is that while everyone is very excited about the promise of these AI-based methods, we still have to subject them to the same type of rigor in their evaluation as we would any other discovery in biomedicine. Despite all the enthusiasm about machine learning and AI, we need to temper it with rigorous empirical research to understand whether these algorithms really produce the improvements in quality, safety outcomes, and value of care that we anticipate. This is where we've seen some trouble early on. For example, there have been reports where people have said they have built an amazing algorithm, and it's going to diagnose everybody who might be at risk of

