The European Medicines Agency (EMA) and the US Food and Drug Administration (FDA) have jointly published a document outlining ten principles for the responsible use of artificial intelligence (AI) throughout the medicines lifecycle. The initiative marks a coordinated effort between the two regulators to support the future development of AI guidance across different jurisdictions. The joint principles will provide a common reference framework for the development, validation and oversight of AI systems used in activities related to medicinal products, from early research to post-market phases.

A human-centric system

According to the document, AI systems should be human-centric by design and developed using a risk-based approach. This approach must include appropriate validation processes and risk mitigation measures proportionate to the intended use and potential impact of the system. The development and deployment of AI should comply with applicable legal, ethical, technical, scientific and cybersecurity standards, as well as relevant regulatory requirements, including Good Practices (GxP). A clearly defined context of use is another key requirement. Each AI system should be designed and implemented for a specific purpose, with its scope and limitations clearly documented. Development should involve multidisciplinary expertise across the entire lifecycle of the system, reflecting the complexity of the pharmaceutical regulatory environment. In line with GxP principles, detailed documentation is expected. This includes data source provenance, data processing steps and analytical decisions taken during model development and operation.

Governance and risk-based AI oversight

Robust governance mechanisms must also be in place to ensure the protection of privacy and sensitive information. Data used in AI systems should be fit for use, ensuring adequate interpretability, explainability and predictive performance. The agencies stress that performance assessments should follow a risk-based approach and evaluate the AI system as a whole, including human-AI interactions where relevant.

The document also calls for the implementation of a risk-based quality management system throughout the technology's lifecycle, aimed at ensuring continuous oversight, reliability and compliance over time. Finally, information generated by AI systems should be presented to users in plain language, ensuring clarity, accessibility and contextual relevance. With these joint principles, EMA and FDA signal a shared commitment to promoting consistent and responsible use of AI in the medicines regulatory space, laying the groundwork for future guidance in this rapidly evolving field.
