SOLUTION PROVIDER Q&A Sponsored Content


Protenus’s Nick Culbertson on Where Artificial Intelligence Is Headed


Nick Culbertson, CEO, Protenus


Healthcare Innovation spoke recently with Nick Culbertson, CEO of the Baltimore-based Protenus, regarding artificial intelligence (AI) and its potential—and challenges. Below is an excerpt from that conversation.


To begin with, could you please tell us about yourself, your role, and your organization?

Certainly. I'm CEO of Protenus, a leading healthcare compliance analytics company. We leverage AI and automation to help hospitals ultimately reduce risk by creating more efficient workflows and by identifying violations earlier and preventing them.


You leverage AI in your company's software. What kinds of questions should potential buyers be asking about AI? What are the right kinds of questions to ask, and what are the dangers of not asking the right questions?

That's a really important question. AI has been around for decades, but only recently have we seen applications specifically in the healthcare space. One series of questions to ask any vendor should be about how the AI is actually built. Who is managing and supporting the company's AI? What team is responsible? What kinds of data scientists and engineers are constructing the artificial intelligence? Another important question to ask is what data was used to generate the artificial intelligence; your data may or may not be similar to the data that was used. And finally, how is the company leveraging its own research to eliminate bias in its artificial intelligence? That's important because AI will scale human processes up to a level that no human could ever complete; so if bias is built into those early stages, that bias will end up being scaled across your organization. No AI is without any type of bias, so it's really important to constantly measure and evaluate it, because ultimately, in the future, covered entities and health systems will be responsible for the type of bias involved in their AI, even if it's supplied by their partners or vendors.
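One way to make Culbertson's bias-measurement point concrete: a buyer could compare a model's flag rate across patient subgroups, a simple demographic-parity style audit. The function and sample data below are a hypothetical sketch, not Protenus's actual methodology:

```python
from collections import defaultdict

def flag_rates_by_group(records):
    """records: iterable of (group_label, model_flagged) pairs.
    Returns the fraction of cases the model flagged in each group."""
    counts = defaultdict(lambda: [0, 0])  # group -> [flagged, total]
    for group, flagged in records:
        counts[group][0] += int(flagged)
        counts[group][1] += 1
    return {g: flagged / total for g, (flagged, total) in counts.items()}

# Illustrative data: group labels and whether the model flagged each case.
sample = [("A", True), ("A", False), ("A", True),
          ("B", False), ("B", False), ("B", True)]
rates = flag_rates_by_group(sample)
# rates["A"] ≈ 0.667, rates["B"] ≈ 0.333 — a gap worth investigating
```

A large, persistent gap between groups does not prove bias on its own, but it is exactly the kind of signal a health system should be tracking continuously rather than assuming the vendor has handled.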


Patient care organizations are facing so many different types of responsibility now. And clearly, they can't simply blame their vendors, as they're responsible for the tools they use. Do you anticipate regulators coming into this area? And what might regulation actually look like if it were applied in some way?

There are certainly indications that regulators are going to be getting more involved in this area. We've even seen some posturing in the language of the final notice of HIT. So I think regulators are going to be looking to see whether various tools like AI are scaling bias or discriminating against certain types of patients or populations as a result of how analytics are being used. AI is new in the clinical and administrative space, and regulators are always looking for opportunities to understand new technologies, to make sure they're benefiting our patients and the industry overall. So I think it's really important for leadership in health systems to be thinking about what those regulations might look like, and how they can position themselves to stay ahead of any types of concerns, with the most important area being what kinds of bias might exist inside their systems. It's not enough to partner with another organization and hand off the responsibility for a system to them. You have to understand how a system works; so asking those questions about what data is going into a training set, and what the vendor is doing to really understand risks around bias, will be paramount in making sure we're staying ahead of any future regulations.


What should CIOs, CTOs, CISOs, CMIOs, and other healthcare IT leaders be thinking about in that regard, especially with respect to potential regulation?

One thing you can put in place is to decide which metrics will determine the success of an AI program, making sure you're using a consistent measurement of success. What is the AI trying to accomplish? Predict or prevent disease? Predict or prevent compliance issues? You need to make sure those elements are being tracked so that it's clear what the AI is setting out to do. It's also important to measure the AI from a quality-assurance standpoint, making sure you know what your false positive rate is, or your false negative rate, for example. At the end of the day, AI is not magic; it's scaling up processes that can be performed manually, to a level that humans can't accomplish.
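The false positive and false negative tracking described above reduces to a few lines of arithmetic over labeled outcomes. This is a generic illustration with made-up data, not any vendor's QA process:

```python
def confusion_rates(y_true, y_pred):
    """Compute false positive and false negative rates from boolean labels.
    y_true: whether each case was a real issue; y_pred: whether the AI flagged it."""
    fp = sum(1 for t, p in zip(y_true, y_pred) if not t and p)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t and not p)
    negatives = sum(1 for t in y_true if not t)
    positives = sum(1 for t in y_true if t)
    return {"fpr": fp / negatives, "fnr": fn / positives}

# Illustrative alert data: True = confirmed compliance violation.
truth = [True, True, False, False, False, True, False, False]
flags = [True, False, True, False, False, True, False, True]
rates = confusion_rates(truth, flags)
# fpr = 0.4 (2 of 5 clean cases flagged), fnr ≈ 0.333 (1 of 3 violations missed)
```

Tracking these two rates over time, against an agreed-upon target, is one consistent "measurement of success" of the kind Culbertson describes.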


www.protenus.com

