Data management
The second critical issue when considering the integration of generative AI into clinical trials is patient privacy, specifically the need to ensure that data is anonymised and compliant with HIPAA regulations. As the landscape of data security evolves, the advent of quantum computing also poses new challenges and risks. Insights from McKinsey suggest that there could be up to 5,000 operational quantum computers by 2030, significantly enhancing computational power and potentially undermining current data anonymisation techniques. “That’s a very scary prospect because organisation and anonymisation as we know it works because we have limited compute capacity,” says Crowther. “The ability to work back and break down that anonymisation is very high, but we don’t know yet so we have to be careful about things like patient safety and privacy concerns with data.” Quantum computing’s potential to re-identify anonymised data means that existing privacy protections could become obsolete, posing a serious risk to patient confidentiality. As quantum technologies advance, the healthcare industry must anticipate and counteract these threats by developing more robust encryption methods and enhancing data security protocols.

The third ethical implication centres on the concept of “explainability.” Ensuring that AI models are transparent and their predictions understandable is crucial for fostering trust and accountability within the industry. Crowther underscores the importance of this aspect: making sure people can see and understand why models are predicting certain things, he says, helps level people up in the industry to understand data better. This transparency is not merely a technical challenge but a fundamental requirement for ethical AI deployment. It helps demystify AI decision-making, making it clear why certain outcomes are predicted, which is essential for regulatory compliance and ethical accountability.

One effective approach to improving explainability, Crowther points out, is the use of feature importance measures. By identifying and evaluating the elements of the data that most strongly influence a model’s predictions, it is possible to determine which features contribute most to the predictive outcomes, so that stakeholders gain a clearer understanding of how the AI operates and can ensure that it aligns with clinical and ethical standards.
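To make the idea of feature-importance measures concrete, the minimal Python sketch below uses scikit-learn’s permutation importance on a toy dataset. The data, the random-forest model and the generic feature names are illustrative assumptions, not details from the article; the same pattern applies to whatever model and trial data a team actually uses.

```python
# Minimal sketch of a feature-importance measure (permutation importance).
# The dataset, model and feature names are illustrative placeholders only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Toy stand-in for trial data: five features, binary outcome.
X, y = make_classification(n_samples=500, n_features=5, n_informative=3,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance: how much does shuffling each feature hurt accuracy?
result = permutation_importance(model, X_test, y_test, n_repeats=20,
                                random_state=0)

# Rank features from most to least influential on the model's predictions.
for i in np.argsort(result.importances_mean)[::-1]:
    print(f"feature_{i}: mean importance {result.importances_mean[i]:.3f} "
          f"(+/- {result.importances_std[i]:.3f})")
```

The output is a ranked list showing which inputs the model actually relies on, which is the kind of evidence stakeholders can review against clinical and ethical expectations.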
40%
The percentage of healthcare-related data within a predicted 180-zettabyte global data lake by 2025.
Crowther sees a valuable lesson in how the banking industry manages ethical considerations in AI. “In banking, they actually have ethical AI consultants,” he points out, suggesting that the pharmaceutical sector could greatly benefit from a similar approach. “I think in future we should be bringing in AI ethicists … [and] I think a team dedicated to that would be a brilliant addition to the industry for every company.” By establishing teams of AI ethicists, the pharmaceutical
industry can ensure that AI technologies are developed and deployed responsibly, with a keen eye on mitigating biases, protecting patient privacy, and maintaining transparency.
AI versus AI
Despite the significant potential of generative AI, its integration into clinical trials is fraught with challenges. One major hurdle is regulatory compliance: while it is feasible for companies to use AI to create documentation for regulatory agencies such as the FDA and EMA, the role of the regulators themselves becomes complex. “We have come to a point where we have AI versus AI,” Crowther says. This scenario highlights a critical consideration: the role of humans. “The human in the loop is always going to need to be there, so this isn’t the case of AI taking people’s jobs – it’s about making room for meaningful work.” The challenge lies in discerning and navigating among various AI solutions, ensuring that only those which add genuine value and do not introduce unnecessary complexity are implemented. Crowther also points to the need to carefully manage interactions between different AI systems to prevent conflicts and inefficiencies.

Another challenge is the growing popularity of synthetic data as a way to enhance the efficacy of statistical models in clinical trials. Here, Crowther underscores the importance of ensuring robust predictions by simulating data that closely resembles real-world conditions, particularly where traditional datasets are insufficient. “The challenge with that is if AI is doing it and training it, how are we 100% sure that it’s reflecting real-world conditions,” he explains. It becomes crucial to distinguish between genuinely valuable applications and mere hype, to avoid pursuing uses that do not authentically improve clinical trial outcomes. This scrutiny ensures that synthetic data solutions contribute meaningfully to advancing medical research and enhancing the reliability of AI-driven insights in healthcare; a simple illustration of the kind of check this implies is sketched below.

Despite these challenges, the potential of generative AI is immense. “It’s a game changer without a doubt, but it is a game changer in the background,” says Crowther. He envisions a future of “just-in-time insights”, where the right information is delivered to the right people at the right time. This capability will enhance decision-making, improve patient outcomes, and streamline the clinical trial process. Ultimately, integrating generative AI into clinical trials aims to optimise interactions between sponsors, doctors, sites, and patients. Crowther believes that improving these connections will lead to superior trial outcomes, greater patient satisfaction, expedited drug development, and healthcare providers who recognise the transformative impact being made. ●
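As a hedged illustration of the kind of check Crowther’s question implies, the short Python sketch below compares the distribution of a single variable in “real” and “synthetic” samples with a two-sample Kolmogorov–Smirnov test. Both samples here are simulated placeholders; a genuine validation would cover many variables and their joint structure, not just one marginal distribution.

```python
# Minimal sketch of a fidelity check for synthetic data: compare the
# distribution of one variable in real vs. synthetic samples.
# Both samples below are simulated placeholders, not real trial data.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

# Placeholder "real" measurements, e.g. a lab value from past trials.
real = rng.normal(loc=5.0, scale=1.2, size=1000)

# Placeholder "synthetic" measurements produced by a generative model.
synthetic = rng.normal(loc=5.1, scale=1.4, size=1000)

# Two-sample Kolmogorov-Smirnov test: a small statistic and a large p-value
# suggest the two distributions are hard to tell apart on this variable.
result = ks_2samp(real, synthetic)
print(f"KS statistic: {result.statistic:.3f}, p-value: {result.pvalue:.3f}")

# A fuller check would repeat this across many variables and also compare
# correlations and joint structure, not only marginal distributions.
```

This is only a first-pass test; it cannot by itself guarantee that synthetic data reflects real-world conditions, which is precisely the uncertainty Crowther highlights.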