Manufacturing
controlling manufacturing processes, ensuring real-time product quality and consistency. So how can those in pharmaceuticals fuse QbD and PAT with AI, which is becoming increasingly important? Nikolai Makaranka, formerly of Bristol Myers Squibb (BMS), who launched a pharma AI start-up after spotting a gap in the market, recommends focusing on the similarities between QbD, PAT and AI. Fundamentally, says Makaranka, who focuses on AI-powered quality management solutions for life sciences, all three rely on similar foundational capabilities: robust digital infrastructure, specialised technical talent and a data-driven culture. “Each of these frameworks are model and data-dependent, whether you’re building a design space for a formulation, controlling real-time process parameters, or training AI algorithms to identify deviations,” says Makaranka.
So, where are they distinct? For one, AI is more accessible. Unlike QbD or PAT, which often require regulatory engagement, validated systems and complex instrumentation, AI experimentation can begin with something as simple as a ChatGPT prompt. AI is also more affordable, with little upfront cost and applicability across functions. This low entry barrier makes it easy for teams to start piloting solutions. “However, this ease has created misconceptions,” Makaranka points out. Even outside GxP (good practice regulations) contexts, making AI work demands clean, labelled, representative data; robust infrastructure for storage and governance; and clear success metrics and risk-management frameworks. So, embrace AI’s capabilities, but expect complexities.
Learning from past mistakes
Systematic gaps can cause hold-ups, argues Makaranka: when companies attempt to develop AI in-house, they tend to encounter three blockers. These are fragmented data systems and disconnected platforms; inadequate data quality and metadata structures; and a lack of interoperability between existing tools and AI frameworks. They mirror the issues that plagued QbD and PAT rollouts. Grace Cronin, senior director of MS&T systems and engineering at BMS, believes the industry must embrace cross-functional collaboration, citing alignment between MS&T, PD and BI&T as an example, and engage with regulators early in AI’s life cycle. In January 2025, the FDA published ‘Considerations for the Use of Artificial Intelligence to Support Regulatory Decision-Making for Drug and Biological Products’. BMS has joined with BioPhorum and other pharmaceutical companies to review the document and provide feedback.
“Similar initiatives like BioPhorum’s PAT roadmap and ILM/RTR frameworks have been effective in the past,” Cronin notes, “as they offer valuable guidance on building business cases, managing model validation and aligning with regulatory expectations.” BMS is integrating AI across drug discovery, clinical
trials, manufacturing and regulatory operations. Previous challenges include fragmented or unstructured data, and multiple initiatives running in parallel on technologies that were not necessarily compatible. “We understand the speed of change in the technology or understanding or the capability of that technology. To start overcoming these challenges, we set a vision and developed a clear roadmap with a three-year horizon to identify milestones,” Cronin explains. Louie argues that poor data quality is often the barrier to success. For effective AI, particularly in predictive and generative applications, data must be “standardised, curated and governed from the outset”. To maximise impact, companies should “prioritise use cases that are high-impact yet low-risk, address clearly defined operational pain points, have measurable KPIs and offer potential for scale”.
Success also depends on engaging cross-functional stakeholders early, including scientific, manufacturing, quality, regulatory and digital representatives. Louie feels organisational change management, and upskilling teams in AI concepts, are critical to supporting adoption and driving long-term transformation. It’s essential to establish strong foundations of compliance with data integrity, information security and data protection standards, particularly for AI systems intended for use in GxP-regulated environments and subject to health authority oversight.
Issues include variability in data formats, a lack of standardisation, incomplete information and inconsistent definitions. Often, there are neither adequate tools and processes to clean and prepare datasets, nor a clear data governance structure to ensure accountability for bringing data up to internal corporate standards. Additionally, organisations often don’t allocate resources to go back to source systems and correct underlying data quality issues. Louie warns this becomes especially problematic when the objective is to train models that provide insights and predictions based on specialised, historical or domain-specific content.
As data must be carefully prepared before system migrations, he recommends incorporating “data readiness stages” into AI projects. This should include selecting appropriate tools to analyse data quality, identify gaps or inconsistencies, and make corrections before data is used for AI training or reference. This is particularly important when planning to use generative AI models, such as large language models (LLMs). Cronin acknowledges hesitation can stem from concerns about regulatory uncertainty and a lack of experience in implementing transformative tools in GxP environments, alongside fear of disruption: changing workflows, retraining teams and rethinking legacy systems. Addressing these requires “clear vision, communication and accelerated pilot programmes”, she suggests.
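To make the “data readiness stage” idea concrete, a check of this kind might look like the minimal sketch below. It is purely illustrative and not drawn from any BMS or vendor system: the field names, date format and example records are all hypothetical, standing in for whatever schema a real project would define.

```python
# Illustrative data-readiness check run before records are used for AI
# training or reference. Every field name, format and record here is a
# hypothetical example, not a real pharma schema.
from datetime import datetime

REQUIRED_FIELDS = ["batch_id", "recorded_at", "result"]  # hypothetical schema
DATE_FORMAT = "%Y-%m-%d"  # the standard the records are supposed to follow


def assess_record(record: dict) -> list[str]:
    """Return a list of readiness issues found in one record."""
    issues = []
    # Completeness: every required field must be present and non-empty.
    for field in REQUIRED_FIELDS:
        if not record.get(field):
            issues.append(f"missing or empty field: {field}")
    # Format consistency: dates must parse against the agreed standard.
    raw_date = record.get("recorded_at")
    if raw_date:
        try:
            datetime.strptime(raw_date, DATE_FORMAT)
        except ValueError:
            issues.append(f"non-standard date format: {raw_date!r}")
    return issues


def readiness_report(records: list[dict]) -> dict:
    """Summarise how many records are clean versus flagged for correction."""
    flagged = {i: assess_record(r) for i, r in enumerate(records)}
    flagged = {i: v for i, v in flagged.items() if v}
    return {"total": len(records), "flagged": len(flagged), "issues": flagged}


records = [
    {"batch_id": "B001", "recorded_at": "2024-03-01", "result": 9.8},
    {"batch_id": "B002", "recorded_at": "01/03/2024", "result": 9.5},  # wrong date format
    {"batch_id": "", "recorded_at": "2024-03-02", "result": 9.7},      # missing batch_id
]
print(readiness_report(records))
```

In practice this role is usually played by dedicated data-quality tooling rather than hand-written scripts, but the principle is the same: quantify gaps and inconsistencies first, correct them at the source, and only then feed the data to a model.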