AI provides a broad array of benefits, including predictive analytics, contact tracing, resource allocation, and education and training platforms. AI enhances diagnostics through objective pattern recognition, standardizing the diagnosis of infections, and it facilitates the sharing of information to build IPC expertise. AI-driven data mining of laboratory results can be used to predict outbreaks and infection events. Next-generation sequencing (NGS) effectively identifies pathogens and detects antimicrobial resistance (AMR). AI was also used in the development of the COVID-19 vaccine, employing data management to address boosters and COVID-19 variants.
AI offers the opportunity to conduct complex analyses of diverse HAIs across a senior living community. Such analyses could aid in predicting which residents are most at risk of HAI or AMR events and facilitate the timely detection of outbreaks. AI is also used to train staff on handwashing effectiveness and to aid in reporting transmission during outbreaks.
Artificial intelligence and risk mitigation
When contracting with an AI provider, senior living operators should consider the following risk mitigation strategies:
• Create policies and procedures for AI-based applications, devices and wearables.
• Use a multidisciplinary team (including the resident and/or staff member) to review any new products, services or devices being brought into the organization before use.
• Define clear expectations, goals and objectives with the AI provider.
• To ensure safety, test the effectiveness of AI-enabled processes using Failure Mode and Effects Analysis.
• Develop training checklists for the care team members who will be using the AI applications.
• Educate the care team on escalation strategies to follow when device integrity is in question or injuries occur.
• Loop in the organization’s insurance carriers and brokers to review any insurance implications that may arise.
• Track and trend all device incidents. Ensure that the care team knows the process for reporting such incidents.
• Build into organizational device management policies the requirements for reporting to the Food and Drug Administration any issues that resulted, or could have resulted, in harm.
• Privacy laws may not have caught up with the use of AI. Be cautious with AI vendor contracting and insert federal privacy requirements into the agreement.
• Review AI providers’ security and privacy protocols to ensure adequate safeguards are in place to protect data and comply with privacy laws.
• Have legal counsel review all AI agreements prior to engagement. Legal should review data governance and access to determine how data is collected, stored and shared.
• Monitor the AI system consistently, with checks and balances in place to ensure safety.
• Meet regularly with staff and residents to address application concerns.