WHAT'S NEW? A CALL TO ACTION ON THE RESPONSIBLE USE OF AI IN SOCIAL CARE
https://www.digitalcarehub.co.uk/ai-and-robotics/oxford-project-the-responsible-use-of-generative-ai-in-social-care/ensuring-the-responsible-use-of-generative-ai-in-social-care-a-collaborative-call-to-action/
https://www.digitalcarehub.co.uk/ai-and-robotics/oxford-project-the-responsible-use-of-generative-ai-in-social-care/responsible-use-of-generative-ai-in-social-care-guidance/
https://www.digitalcarehub.co.uk/ai-and-robotics/oxford-project-the-responsible-use-of-generative-ai-in-social-care/pledge-by-tech-suppliers-on-the-responsible-use-of-ai-in-social-care/
On 27 March 2025, over 150 participants gathered for the first-ever AI Summit in Social Care, a groundbreaking event bringing together care providers, social care staff, individuals drawing on care and support, tech suppliers, local authorities, academics, national bodies, and regulators. The summit marked a significant step in the responsible adoption of Artificial Intelligence (AI) in the adult social care sector.
At the summit, a Call to Action, guidance, and a Pledge for Tech Suppliers were launched, aimed at ensuring that AI, especially generative AI, is developed and used responsibly, ethically, and inclusively within the sector. These calls to action invite all those involved in adult social care to unite in shaping the future of AI, promoting care over innovation and safeguarding the rights, privacy, and dignity of individuals.
A collaboration across sectors
The summit was the culmination of a 12-month collaborative effort co-led by the Institute for Ethics in AI at Oxford University, the Digital Care Hub, and Casson Consulting. Over 50 individuals and organisations contributed to this project, including people with lived experience of care, social care workers, care providers, researchers, local government, and tech suppliers. Together, they explored both the potential opportunities and the risks of AI in adult social care, emphasising the importance of human-centred and ethical approaches.
With AI, particularly generative AI, being adopted rapidly, there is growing concern about the lack of clear guidelines and safeguards to ensure responsible, safe, and effective use. The Oxford Collaboration came together to help bridge this gap and to ensure that solutions are co-produced with all stakeholders.
Defining responsible use of generative AI
The group agreed the following definition:
The ‘responsible use of (generative) AI in social care’ means that the use of AI systems in the care, or related to the care, of people supports and does not undermine, harm or unfairly breach fundamental values of care, including human rights, independence, choice and control, dignity, equality and wellbeing.
Guidance: 'I' and 'We' statements
The guidance sets out key principles to consider when implementing generative AI tools in social care. It is presented as a series of 'I' and 'We' statements that describe what good looks like when using generative AI from different stakeholders' perspectives. They do not necessarily reflect people's current experience, but it is hoped that they can inform a better future.
A collective Call to Action
The Call to Action outlines six priority actions that call on governments, regulators, and all stakeholders to collaborate on responsible AI adoption in social care:
• Adopt guidance – Use the developed guidance, setting out key principles and stakeholder perspectives on the ethical use of generative AI in social care.
• Encourage collaboration – Ensure diverse perspectives, including people who draw on care, unpaid carers, and social care workers, are included in decision-making processes.
• Develop regulation – Urge the government to collaborate with regulators on creating AI guidelines and accountability structures.
• Inclusive innovation – Call for the development of a supportive infrastructure for inclusive and human-centred design in care technology.
• Support for new business models – Advocate for government support for emerging business models in care technology.
• Align standards with ethics – Call on the government to ensure National Standards on technology align with legal frameworks and ethical principles.
The Call to Action emphasises that AI in social care must prioritise care, safety, fairness, and inclusion, rather than just efficiency or cost-saving. It must complement human care, not replace it.