Artificial intelligence (AI) is increasingly reshaping the pharmaceutical industry, moving beyond experimental use cases to become a concrete component of regulatory decision-making and patient-facing processes. Its growing adoption is redefining how medicinal products are researched, developed, approved and monitored throughout their lifecycle. A recent report published by EFPIA (the European Federation of Pharmaceutical Industries and Associations) explores how robust AI governance frameworks can be embedded in both medicinal product research and development (R&D) and post-authorisation activities. The document highlights the need for a structured, harmonised approach to AI adoption, one that balances innovation with regulatory compliance and patient safety.
Central to the report is the concept of trust. EFPIA underlines that trust in AI systems can only be achieved through transparency, accountability and continuous dialogue between pharmaceutical companies and regulatory authorities. Rather than treating AI as a purely technical enabler, the report frames it as a socio-technical system whose impact extends to organisational processes, regulatory interactions and clinical outcomes.

Across the case studies analysed, several recurring best practices emerge. Data quality and standardisation are identified as foundational elements: without consistent, well-governed datasets, AI models risk producing unreliable outputs that cannot be confidently used in regulated environments. Equally important is investment in training and change management, which plays a critical role in fostering acceptance of AI tools among both technical teams and regulatory stakeholders.
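To make the data-quality point concrete, the following is a minimal sketch of an automated quality gate over a tabular dataset. The schema, column names and tolerances are illustrative assumptions, not taken from the EFPIA report; in practice such rules would be derived from applicable data standards (e.g. CDISC) and validated governance procedures.

```python
# Minimal sketch of a data-quality gate for a tabular clinical dataset.
# The schema, column names and thresholds below are hypothetical.
import pandas as pd

# Hypothetical schema: column name -> (expected dtype, allowed range or None)
SCHEMA = {
    "subject_id": ("object", None),
    "age": ("int64", (0, 120)),
    "dose_mg": ("float64", (0.0, 500.0)),
    "adverse_event": ("object", None),
}
MAX_MISSING_FRACTION = 0.05  # illustrative tolerance, not a regulatory figure


def quality_report(df: pd.DataFrame) -> list[str]:
    """Return a list of human-readable issues; an empty list means the gate passes."""
    issues = []
    for col, (dtype, bounds) in SCHEMA.items():
        if col not in df.columns:
            issues.append(f"missing required column: {col}")
            continue
        if str(df[col].dtype) != dtype:
            issues.append(f"{col}: expected dtype {dtype}, got {df[col].dtype}")
        missing = df[col].isna().mean()
        if missing > MAX_MISSING_FRACTION:
            issues.append(f"{col}: {missing:.1%} missing exceeds tolerance")
        if bounds is not None:
            lo, hi = bounds
            out_of_range = ~df[col].dropna().between(lo, hi)
            if out_of_range.any():
                issues.append(f"{col}: {int(out_of_range.sum())} values outside [{lo}, {hi}]")
    return issues


df = pd.DataFrame({
    "subject_id": ["S001", "S002", "S003"],
    "age": [34, 61, 150],          # 150 violates the illustrative range
    "dose_mg": [10.0, None, 25.0],  # missingness above the illustrative tolerance
    "adverse_event": ["none", "headache", "none"],
})
for issue in quality_report(df):
    print(issue)
```

A gate of this kind would typically run before any model training or downstream analysis, so that unreliable inputs are caught upstream rather than discovered later in model outputs.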
The report also stresses the importance of early AI integration and governance. Embedding governance mechanisms from the initial design phases helps align AI-driven innovation with regulatory expectations, reducing friction at later stages. In this context, model transparency and explainability are prioritised over raw predictive performance, reflecting regulators’ need to understand how and why an algorithm reaches a given conclusion (the first sketch below illustrates one way this can be surfaced).

Finally, EFPIA points to the need for proactive monitoring and continuous risk assessment throughout the AI lifecycle. Even during pilot projects or exploratory phases, potential risks must be identified and managed early, ensuring that AI solutions remain compliant, robust and fit for purpose as they scale (the second sketch below shows one simple drift check). This approach, the report concludes, is essential for enabling the sustainable and responsible use of AI across the pharmaceutical ecosystem.
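To ground the explainability point, here is a minimal sketch, assuming scikit-learn and synthetic data, of pairing an inherently interpretable model with a model-agnostic importance check. The feature names and the classification task are hypothetical, not drawn from the report.

```python
# Minimal sketch: an interpretable model plus a model-agnostic importance check.
# Dataset is synthetic; feature names are hypothetical placeholders.
from sklearn.datasets import make_classification
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=5, random_state=0)
feature_names = ["age", "dose_mg", "biomarker_a", "biomarker_b", "visit_count"]

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# An inherently interpretable model: coefficients map directly to features,
# so a reviewer can see how each input moves the prediction.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Permutation importance quantifies how much held-out accuracy degrades
# when each feature is shuffled, independent of the model's internals.
result = permutation_importance(model, X_test, y_test, n_repeats=20, random_state=0)
for name, coef, imp in zip(feature_names, model.coef_[0], result.importances_mean):
    print(f"{name:12s} coefficient={coef:+.2f} permutation_importance={imp:.3f}")
```

The design choice here mirrors the report’s emphasis: a slightly less powerful model whose behaviour can be inspected and questioned is preferred to an opaque one whose conclusions cannot be explained to a regulator.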
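And for the monitoring point, a minimal sketch of one common drift check, the population stability index (PSI), computed between validation-time data and later production data. The 0.2 escalation threshold is a widely used rule of thumb, not a regulatory limit, and the data are simulated.

```python
# Minimal sketch of distribution-drift monitoring via the population
# stability index (PSI). Threshold and data are illustrative assumptions.
import numpy as np


def population_stability_index(baseline, current, n_bins=10):
    """PSI between a baseline feature distribution and live production data."""
    edges = np.histogram_bin_edges(baseline, bins=n_bins)
    base_frac = np.histogram(baseline, bins=edges)[0] / len(baseline)
    curr_frac = np.histogram(current, bins=edges)[0] / len(current)
    # Clip to avoid division by zero / log(0) in sparsely populated bins.
    base_frac = np.clip(base_frac, 1e-6, None)
    curr_frac = np.clip(curr_frac, 1e-6, None)
    return float(np.sum((curr_frac - base_frac) * np.log(curr_frac / base_frac)))


rng = np.random.default_rng(0)
baseline = rng.normal(loc=0.0, scale=1.0, size=5_000)  # data seen at validation
drifted = rng.normal(loc=0.4, scale=1.2, size=5_000)   # later production data

psi = population_stability_index(baseline, drifted)
# A common rule of thumb (an assumption, not a regulatory figure):
# PSI above 0.2 signals drift worth escalating for risk assessment.
print(f"PSI = {psi:.3f} -> {'investigate' if psi > 0.2 else 'stable'}")
```

Run routinely against each model input, a check like this gives the early, continuous risk signal the report calls for, flagging when production data have moved away from the conditions under which the model was validated.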