FEATURE · IMAGING AI
There are three main sources of evolving AI development: research developments, commercial developments, and AI platform development. Many research facilities are developing algorithms for specific factors, both imaging and non-imaging. Conversely, a number of commercial companies are developing algorithms based on the products or services they sell. Finally, there are companies concerned with the delivery of such AI algorithms; they are focused on developing a platform or marketplace for delivering their own or third-party algorithms. This is similar to smartphone applications, which are most likely delivered via an “app store” controlled by the phone’s operating system supplier, such as Apple or Google. A user may choose to download any number of applications, and typically pays for only the ones they choose to use. A platform with multiple algorithms may be a better sell, but there are concerns about platform “exclusivity.”

One might say that development from all three sources is in its infancy. In most cases, it appears that imaging AI is evolving separately from other areas.

According to John Mongan, M.D., Ph.D., associate professor of clinical radiology, associate chair of translational informatics, and director of the Center for Intelligent Imaging at the University of California at San Francisco (UCSF), the center is evolving a framework for imaging AI: “We consider the risks for deployment as well as the probability of success and the clinical impact.” Mongan believes AI governance “should include when and when not to use an AI algorithm. People using it must understand when it is and when it is not useful.”

According to Alex Towbin, M.D., Neil D. Johnson Chair of Radiology Informatics, Cincinnati Children’s Hospital Medical Center, “Cincinnati Children’s is in the very early stages of AI. We have one clinical application implemented, and a number of internally developed applications in development. We have also purchased one externally developed stroke application, but have not implemented it yet.” The hospital has acquired a number of applications that claim to employ AI, but the team is not certain that they actually do. Towbin’s own focus is on automated workflow, which may not directly be AI but is part of it.

Another example of governance is Brigham and Women’s Hospital, in conjunction with Massachusetts General Hospital. Kathy Andriole, Ph.D., director
of research at the MGH and BWH Center for Clinical Data Science, states, “There are initiatives in many areas, both imaging and non-imaging, including radiology, cardiology, the ICU, pathology, COVID, workflow optimization, and light images.” According to Andriole, “We have a committee across Mass General Brigham that oversees and approves clinical implementation of AI algorithms. The committee decides if and when an AI algorithm is implemented and looks at how to assess different applications.” Andriole indicated that “diagnostic applications get the publicity, but there may be more potential in non-diagnostic algorithms such as workflow.”

The University of Michigan balances between a governing committee and individual department efforts, according to Karandeep Singh, M.D., assistant professor of learning health sciences, internal medicine, urology, and information at the University of Michigan. Singh chairs the Clinical Intelligence Committee, which oversees operational AI activities at Michigan Medicine. While the university as a whole has efforts focused on AI/ML research (e.g., the Michigan Integrated Center for Health Analytics and Medical Prediction [MiCHAMP]) and resources for translation (the Precision Health Implementation Workgroup), the Clinical Intelligence Committee’s primary focus is on clinical operations. Singh says, “There is a mechanism to initiate requests to the committee where a given clinical workflow could benefit from the use of an AI/ML model. In turn, the committee helps clinical stakeholders review and endorse models based on products that are available from vendors or models developed by researchers. In general, a researcher cannot directly bring a model to the attention of the committee but needs to have a clinical partner.” According to Singh, “If an AI/ML model affects a broad set of clinicians or other stakeholders, it requires review by the committee.”

According to Gary J. Wendt, M.D.,
professor of radiology and vice chair of informatics, University of Wisconsin Madison (retired), “there is a long-standing governance body in place that also addresses AI, including an approval process, but it doesn’t appear there is any coordination with non-imaging applications.”
Imaging AI use cases today are not diagnosing diseases; they are tools for extracting abnormalities from imaging studies or for quantifying physical parameters, such as blood flow rates or volumetric information, from imaging data sets. These results are then presented to the diagnostician, for example as a pop-up overlay on a PACS display. The radiologist can then consider whether these data are meaningful for augmenting the clinical diagnosis.

Other AI system configurations use the AI-detected abnormalities or quantified data to accelerate the triage workflow, presenting AI-prioritized results to the radiologist. In the future there will likely be more and varied use cases.
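To make the triage idea concrete, below is a minimal sketch of how AI findings might reorder a reading worklist. The Study structure, field names, finding labels, and priority rule are hypothetical illustrations, not any vendor’s schema; a real deployment would define such rules through the kinds of governance processes described in this article.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

# Hypothetical AI-assisted worklist triage: each study carries an optional
# AI finding, and flagged studies move ahead of routine work while arrival
# order is preserved within each priority tier.

@dataclass
class Study:
    accession: str
    arrived: datetime
    ai_finding: Optional[str] = None   # e.g. "ICH suspected", "LVO suspected"
    ai_confidence: float = 0.0         # model score in [0, 1]

# Illustrative urgent-finding set; a real site would set this through its
# AI governance process rather than hard-coding it.
URGENT_FINDINGS = {"ICH suspected", "LVO suspected"}

def triage_key(study: Study):
    tier = 0 if study.ai_finding in URGENT_FINDINGS else 1
    # Within a tier: higher AI confidence first, then first-come, first-served.
    return (tier, -study.ai_confidence, study.arrived)

def prioritized_worklist(studies: list[Study]) -> list[Study]:
    return sorted(studies, key=triage_key)

if __name__ == "__main__":
    now = datetime.now()
    for s in prioritized_worklist([
        Study("A100", now),
        Study("A101", now, ai_finding="LVO suspected", ai_confidence=0.91),
        Study("A102", now, ai_finding="ICH suspected", ai_confidence=0.84),
    ]):
        print(s.accession, s.ai_finding or "routine")
```

The design point is simply that the AI output feeds an ordering rule the radiologist can inspect; it does not make the diagnosis itself.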
What is needed for AI governance?

Given the current state of AI, it is not too soon to consider governance within your institution. The first point to consider is the differentiation between imaging AI and non-imaging AI applications. Is there sufficient overlap between these areas to warrant a common governance? Using radiology services as an example, AI imaging applications such as a stroke detection algorithm able to differentiate between an intracranial hemorrhage (ICH) and a large vessel occlusion (LVO) usually receive the greatest amount of publicity. However, workflow orchestration applications that automatically guide the caregiver through the clinical data, such as medical history and lab results, and then intelligently prioritize the worklist study selection may be just as valuable.
Wendt stated, “Knowing the results of an AI analysis on an image five minutes before it is read may not be as valuable as having relevant information for that exam available at the time the study is interpreted.” Furthermore, Wendt and Towbin both suggest that AI image analysis may be more beneficial outside imaging services, by shortening the time to treatment for a lesion that an AI algorithm has identified.
Contrast these examples with an AI algorithm that evaluates a department’s current patient schedule, predicts potential no-shows, and thereby improves the percentage of patients who actually show up for an exam (a simple sketch of such a prediction model appears at the end of this section). Such a non-imaging algorithm may have a more beneficial economic impact on the department than an imaging application.

Regardless of scope, there is value in some form of governance and oversight to guide AI deployment. The University of Michigan, the University of California at San Francisco, and Brigham and Women’s/Massachusetts General all appear to have working committees that play a role in AI governance.
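As a rough illustration of the scheduling example above, the sketch below trains a toy no-show classifier with scikit-learn. The features (appointment lead time, prior no-show count, whether a reminder was sent) and the tiny hand-made dataset are hypothetical; a real model would be built and validated on institutional data and reviewed under the same governance structures discussed here.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical features per appointment:
# [days of lead time, prior no-show count, reminder sent (0/1)].
# The feature set and the training data are illustrative only.
X_train = np.array([
    [30, 2, 0],
    [ 2, 0, 1],
    [45, 3, 0],
    [ 7, 0, 1],
    [21, 1, 0],
    [ 1, 0, 1],
])
y_train = np.array([1, 0, 1, 0, 1, 0])   # 1 = no-show, 0 = arrived

model = LogisticRegression().fit(X_train, y_train)

# Score tomorrow's schedule; slots with a high predicted no-show probability
# could be double-booked, moved, or targeted with an extra reminder call.
tomorrow = np.array([[14, 1, 0], [3, 0, 1]])
no_show_risk = model.predict_proba(tomorrow)[:, 1]
for slot, risk in zip(tomorrow, no_show_risk):
    print(f"lead={slot[0]}d prior_no_shows={slot[1]} -> no-show risk {risk:.2f}")
```

The value of such a model is operational rather than diagnostic, which is exactly the kind of non-imaging economic impact contrasted with imaging algorithms above.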