COVER STORY


they are. When AI isn’t accountable or explainable, it feels like a threat, not a support.


WHAT WE’RE DOING TO HELP


At Digital Care Hub, we’re working with the Institute for Ethics in AI, Casson Consulting, and others to shape a shared vision for responsible AI.


Over 30 organisations have signed our Collaborative Call to Action, and more than 20 tech suppliers have signed the Tech Suppliers’ Pledge – with more joining weekly.


We’re also developing an AI in Social Care Alliance to bring diverse voices into long-term decision-making.


We’re not trying to slow innovation – we’re trying to shape it so it serves care, not the other way around.


WHAT GOOD LOOKS LIKE


We have developed guidance to help care providers, staff, and partners think through how to use AI in safe, ethical, and person-centred ways. Many of the statements describe what should happen rather than what is currently happening, but they support conversations and decisions about how Generative AI can improve care.


The guidance highlights areas such as improving outcomes, ensuring choice and control, protecting privacy, promoting transparency, maintaining human connection, and avoiding discrimination.




For example, care providers can check their position against the following statements:

• We make it clear when people are interacting with technologies in their dealings with us.


• We make clear to our staff when they are expected to take responsibility for checking the work done by the technologies in use in our services, and we empower them with the skills and capabilities to overrule those technologies where needed.


• We do not make assumptions about whether people’s social or emotional needs can be met by technologies.


• We ask the suppliers of the technologies we use how their systems safeguard against bias towards individuals or groups, including those with protected characteristics. We only use tools with adequate safeguards in place.


FINAL THOUGHTS: LET’S KEEP PEOPLE AT THE CENTRE


It’s easy to get swept up in the AI buzz. But the real question isn’t ‘what can AI do?’ It’s ‘what kind of care do we want to offer – and how can AI help us enhance it?’


Responsible innovation starts with asking what matters to people and involving them every step of the way. If we do that, AI could help create a more dignified, empowering care system.


If we don’t, we risk turning care into something colder, less human – and less just.


www.digitalcarehub.co.uk/AIinSocialCare
www.digitalcarehub.co.uk/AICalltoAction
www.digitalcarehub.co.uk/AITechSuppliersPledge


www.tomorrowscare.co.uk

