

applications oversight. In all these cases, the committees were not necessarily established specifically for AI; it may therefore be easier to provide governance through existing infrastructure than to start from scratch. A governance body should be multidisciplinary to ensure all relevant services are represented when considering the merits of an AI application. This is particularly important for AI applications that cut across multiple services and clinicians, such as a pre-diabetes algorithm that may affect a general practitioner, an endocrinologist, an ophthalmologist, and potentially others.

In the case of the University of Michigan, the feeling is that a service line such as radiology may be sufficiently self-contained with respect to some AI algorithms that it can self-manage AI governance. For example, an algorithm that analyzes images for a specific cancer, or a workflow prioritization algorithm, may affect only the radiologist and therefore have little impact outside radiology.

The concept of an AI orchestration platform has been put forward by multiple AI vendors as a means of managing AI applications. These platforms are similar in concept to mobile phone vendors’ “app stores” in that they offer multiple applications in one place, usually for a subscription fee. The intent is for the platform vendor to approve a number of different applications, whether internally developed or from third-party sources, making them easier for an institution to adopt. While in theory these platforms can simplify access to and use of AI applications, not every vendor’s platform will carry all the applications an institution is interested in, which will likely result in the need for multiple platforms.
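To make the orchestration concept concrete, below is a minimal, hypothetical sketch of the pattern in Python: a single catalog through which vetted applications are registered and dispatched. Every name in it (AIApp, Orchestrator, the sample “lung-nodule” app) is an illustrative assumption, not any vendor’s actual API.

```python
# Hypothetical sketch of an AI orchestration platform: one catalog
# ("app store") through which a site discovers and runs vetted apps.
# All names are illustrative; no real vendor API is implied.
from dataclasses import dataclass, field
from typing import Callable, Dict


@dataclass
class AIApp:
    name: str
    vendor: str
    approved: bool                 # vetted by the platform vendor
    run: Callable[[bytes], dict]   # takes study data, returns findings


@dataclass
class Orchestrator:
    catalog: Dict[str, AIApp] = field(default_factory=dict)

    def register(self, app: AIApp) -> None:
        """Add an internally developed or third-party app to the catalog."""
        self.catalog[app.name] = app

    def dispatch(self, name: str, study: bytes) -> dict:
        """Send a study to an approved app by name."""
        app = self.catalog.get(name)
        if app is None or not app.approved:
            raise LookupError(f"{name!r} is not an approved app on this platform")
        return app.run(study)


# Usage: one subscription, many apps -- but only the apps this
# platform happens to carry.
orch = Orchestrator()
orch.register(AIApp("lung-nodule", "VendorA", True, lambda s: {"nodules": 0}))
print(orch.dispatch("lung-nodule", b"...pixel data..."))
```

The design point is simply that one registry mediates access to many applications; in practice each vendor’s catalog carries a different subset, which is why institutions may end up subscribing to more than one platform.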


According to Mongan, AI implementers should look to the DICOM (Digital Imaging and Communications in Medicine) standard as an example of a successful deployment methodology to follow for AI interface standards. DICOM has become a successful means of sharing digital images by establishing a standardized format for a wide variety of images. Creating a standards group and defining a standard means for conveying an AI algorithm would be a tremendous boon to the acceptance of AI applications.
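As a small illustration of what such standardization buys, the sketch below reads a few attributes that the DICOM standard itself defines from an image file, using the open-source pydicom library; the file name is a placeholder assumption.

```python
# Sketch: why a standardized format matters. Any conformant tool can
# read the same attributes from any vendor's file. "study.dcm" is a
# placeholder path; requires pydicom (and NumPy for pixel_array).
import pydicom

ds = pydicom.dcmread("study.dcm")

# These tags are defined by the DICOM standard itself, so an AI
# algorithm never needs a vendor-specific parser to find them.
print("Modality:      ", ds.Modality)     # e.g., "CT" or "MR"
print("SOP Class UID: ", ds.SOPClassUID)  # identifies the object type
print("Matrix:        ", ds.Rows, "x", ds.Columns)

pixels = ds.pixel_array                   # decoded, standardized pixel data
print("Pixel array shape:", pixels.shape)
```

Because every conformant scanner writes these same tags, an AI algorithm (or any viewer or PACS) can consume images from any vendor without a custom parser; a comparable standard for conveying AI algorithms would replicate that property.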


Implementing AI can be enhanced by following the experiences of others, including:

• Identify AI application activity: There are likely areas that are already working on or using AI algorithms. Understanding where there is already activity, and whether it benefits more than the responsible service, can help determine the best place for governance.
• Define an AI implementation strategy: A well-defined strategy is fundamental to AI governance. It should specify who holds governance authority for the enterprise and for individual services, as well as the governance processes.
• Determine the highest-priority AI areas: Some AI applications will have localized value, whereas others may have enterprise-wide value. For example, an algorithm that assists in the identification of small lesions may have value within radiology, whereas one that predicts no-show rates may have more universal value. An institution will need clearly defined policies on the criteria for determining an AI application’s value. Is it monetary? Clinical? Other?
• Establish governing bodies to review and approve AI applications: The value of AI governance can be enhanced by establishing a process for reviewing and approving AI applications. This can help avoid redundancy among similar algorithms and provide better interoperability across service lines.


Summary

According to Towbin, “We already have algorithms today, but it is probably two or three decades away from large segments of our work being automated with hundreds or thousands of algorithms firing at once.” Towbin also believes “the real value of AI is when it’s doing things I can’t do.” He further believes that “the first 100 years in radiology has been qualitative medicine, while the next 100 years will be quantitative.”

Wendt is skeptical of AI cost justification: “You need to get the CFO on board, as anything that will show a readmissions reduction will be favorable.” Wendt also believes that a platform for AI has the potential to optimize interoperability and minimize training. As for where AI algorithms will reside, Wendt is concerned that EHR vendors are reluctant to get involved, as it may entail 510(k) regulatory requirements.

A central theme from several contributors is the value of standardization. Both Singh and Mongan stress its importance. Mongan suggests looking to the DICOM standard, as “it was the biggest success story in imaging,” while Singh believes that physician organizations (e.g., the American Society of Nephrology or the American College of Radiology) would be effective groups for AI guidance, but models and technologies affecting different types of stakeholders need deep collaborative work for algorithms to be both accepted and helpful.


Many AI imaging algorithms are considered “Software as a Medical Device” (SaMD) by the FDA. As such, only FDA-approved AI applications should be used by diagnosticians. Multiple FDA-approved algorithms often exist for the same application; stroke detection is one example. Having a multitude of algorithms with similar functionality proliferating through a multi-hospital IDN would significantly increase operational cost without a guaranteed improvement in the quality of patient care. This runs contrary to a strategy for achieving the Triple Aim, a framework for simultaneously improving the patient experience, improving the health of populations, and reducing healthcare cost.

An imaging governance committee (IGC) for vetting and standardizing all AI applications therefore makes sense. The IGC could be dedicated to specific service lines of an IDN, such as an imaging service, and should consist of different IDN stakeholders: medical, legal, regulatory, IT, clinical engineering, and others. The IGC could also manage domains beyond AI, such as the standardization of other assets, like PACS, across the IDN’s imaging service line. Finally, the IGC should include representation from the IDN’s C-level governance.

As the authors have experienced in many IDNs, lack of coordination and lack of standardization are often the cause of breakdowns within a service line and are at the origin of unnecessary costs and quality issues in care delivery. They also cause communication problems between different service lines, leading to still more care delivery quality issues. In contrast, standardization leads to quality improvements and cost savings, which ultimately result in better patient satisfaction and thus enable the trifecta of the Triple Aim. IDNs need to standardize processes and technologies, and to achieve the Triple Aim in AI deployments, they also need to implement both IDN-wide and service line-specific AI governance. HI


Bios for Joseph L. Marion and Henri Primo can be found online at: https://hcinnovationgroup.com/21243735

