
Delivering AI in Pharmacovigilance: a Survey of Existing Guidance

Updated: Apr 19

There is continued interest from pharmaceutical companies, solution providers, and academia in improving pharmacovigilance processes with Artificial Intelligence given the increasing volume of data to be processed and the highly laborious nature of such processes.


Being a crucial part of ensuring the safety and effectiveness of drugs, pharmacovigilance is a “high stakes” AI use case, and its impact requires thorough consideration. From the EMA on AI in medicine regulation (August 2021):


“This range of applications brings with it regulatory challenges, including the transparency of algorithms and their meaning, as well as the risks of AI failures and the wider impact these would have on AI uptake in medicine development and patients’ health”


Medical literature automation for pharmacovigilance with AI


What does current guidance look like? In this article we survey recent publications from regulatory agencies and industry that put forward guiding principles, opinions, and frameworks for practitioners. We focus on publications close to the pharmacovigilance industry; see also the Related Guidance section for other related initiatives.


How is it implemented at biologit? We will also discuss how we applied those insights in delivering the biologit MLM-AI Platform.


Datalift Berlin 2022: This topic is discussed in our talk at the 2022 DataLift Summit in Berlin. Check out the recording below:


datalift summit with Bruno Ohana

Survey of regulatory guidance and proposals


FDA AI/ML-Based Software as Medical Device (2019 Proposal and 2021 Update)

In 2019 the FDA issued a discussion paper and request for feedback on the framework for regulatory oversight on AI/ML-based software as a medical device (AI/ML SaMD), opening a dialogue with the industry and iterating on framework refinements. The proposal outlines a “total product lifecycle approach” for developing, deploying and monitoring AI systems using good machine learning practices:


FDA's Total Product LifeCycle

TransCelerate: Validating Intelligent Automation Systems in Pharmacovigilance (2021)

TransCelerate Biopharma is a non-profit organization focused on accelerating research and development practices across the pharmaceutical industry. Their 2021 paper "Validating Intelligent Automation Systems in Pharmacovigilance: Insights from Good Manufacturing Practices" [3] provides considerations more tailored to the pharmacovigilance domain.


The proposal uses the engineering practices of ISPE GAMP (Good Automated Manufacturing Practices) as its starting point. The benefit of this approach is that it aligns AI project terminology with a framework already widely adopted in the industry, which is then enriched with AI-specific considerations.


ICMRA Horizon Scanning Assessment Report (2021)

The ICMRA (International Coalition of Medicines Regulatory Authorities) is a voluntary body for sharing and coordinating initiatives across regulators worldwide. In August 2021 ICMRA issued an assessment report [6] gathering regulatory feedback from its members on use cases for AI. The document outlines possible directions of regulatory thinking based on case studies of applying AI to regulated use cases, one of which is pharmacovigilance.


Good Machine Learning Practices: Guiding Principles (2021)

The "Good Machine Learning Practices for Medical Device Development: Guiding Principles" (GMLP) [5] is a document jointly issued by the FDA (US), MHRA (UK), and Health Canada synthesizing good practices for machine learning use cases in healthcare.



Summary of Good Machine Learning Guiding Principles - FDA/MHRA/Health Canada 2021


Alongside considerations for good AI engineering processes, of note is the emphasis on leveraging multi-disciplinary expertise when building AI products, on human factors ("Focus on the Human-AI team"; "Clear and essential information to users"), and on governance in AI operations (data provenance and model monitoring).


Key themes on AI guidance for pharmacovigilance

From the existing guidance and points of view being put forward, we can identify several common themes:


Sound Technical Approach to Machine Learning

Unsurprisingly, many recommendations concern ensuring that the ML project follows industry best practices. GMLP dedicates guiding principles to adequate separation of training and test data, sourcing representative datasets, and demonstrating model performance experimentally, with similar considerations reflected in [3].
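To illustrate the data-separation principle, here is a minimal, self-contained sketch (a hypothetical helper on toy data, not taken from any surveyed guidance or from the biologit platform) of a stratified train/test split that keeps the two sets independent while preserving label proportions:

```python
import random
from collections import Counter

def stratified_split(records, label_key, test_frac=0.2, seed=42):
    """Split records into independent train/test sets,
    preserving the proportion of each label in both sets."""
    rng = random.Random(seed)
    by_label = {}
    for r in records:
        by_label.setdefault(r[label_key], []).append(r)
    train, test = [], []
    for group in by_label.values():
        rng.shuffle(group)
        n_test = max(1, round(len(group) * test_frac))
        test.extend(group[:n_test])
        train.extend(group[n_test:])
    return train, test

# Toy dataset: literature abstracts labeled for adverse-event relevance.
data = [{"id": i, "label": "AE" if i % 4 == 0 else "no-AE"} for i in range(100)]
train, test = stratified_split(data, "label")
print(Counter(r["label"] for r in train), Counter(r["label"] for r in test))
```

The same 1:3 label ratio appears in both splits, and no record is shared between them, which is the property the guidance asks teams to demonstrate.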


Attention to the Total Product Lifecycle

Good engineering practices should be followed across the entire development lifecycle, beyond AI development and testing and including post-release monitoring and feedback. The FDA AI/ML SaMD guidance sets out inspection guidelines similar to those for traditional software products, where:


“FDA will assess the culture of quality and organizational excellence of a particular company and have reasonable assurance of the high quality of their software development, testing, and performance monitoring of their products.“ [4]


Similarly, GMLP dedicates a guiding principle for monitoring model performance after deployment, and another for ensuring good engineering practices are followed during development:


"Deployed models have the capability to be monitored in “real world” use with a focus on maintained or improved safety and performance." [5]
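The monitoring principle quoted above can be sketched in a few lines. The following is a simplified, hypothetical check (the function names and the tolerance value are our own illustration, not from GMLP) that compares recall observed on recently reviewed items against the recall measured at validation time:

```python
def recall(pairs):
    """pairs: (predicted_positive, actually_positive) booleans."""
    tp = sum(1 for pred, actual in pairs if pred and actual)
    positives = sum(1 for _, actual in pairs if actual)
    return tp / positives if positives else 1.0

def monitor(recent_pairs, baseline_recall, tolerance=0.05):
    """Flag a performance drop beyond tolerance for human investigation."""
    live = recall(recent_pairs)
    return {"live_recall": live,
            "baseline": baseline_recall,
            "alert": live < baseline_recall - tolerance}

# Example: 8 of 10 true adverse events caught in recent production data,
# against a validation-time recall of 0.95.
recent = [(True, True)] * 8 + [(False, True)] * 2 + [(False, False)] * 40
status = monitor(recent, baseline_recall=0.95)
print(status)  # live recall 0.80 -> alert raised
```

In a real deployment the "recent pairs" would come from human-reviewed outcomes, and an alert would trigger a documented investigation rather than an automatic model change.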


TransCelerate [3] frames this topic within the ISPE GAMP framework and adds specific AI considerations to each stage of the systems development lifecycle. In leveraging GAMP, AI considerations are framed within a comprehensive framework already common to pharmacovigilance and quality teams, including acceptance testing, validation, risk management, change management, etc.


Documentation and Transparency

Strong emphasis is also seen on establishing good documentation practices, covering both internal processes and user-facing documentation that clearly states the system's intended uses and limitations. From GMLP:


“Users are provided ready access to clear, contextually relevant information that is appropriate for the intended audience […]” [5]


From TransCelerate:


“Confidence in the output of AI-based static systems requires insight into why decisions are made within the model.” [3]


Multi-Disciplinary Teams

Another common trend across the survey is the emphasis on establishing multi-disciplinary teams across the lifecycle, ensuring the adequacy of data collection, model design, risk management and validation efforts.


AI Solutions for Pharmacovigilance: Our approach


How can the existing proposals be translated into actionable steps? The existing guidance, while beneficial, still leaves many implementation decisions in the hands of product teams. This challenge is recognized by the publications we surveyed, for example in the stakeholder feedback submitted to the FDA's docket on the GMLP guidance.

A similar observation is in the TransCelerate paper’s concluding remarks:


“[…] industry should engage regulators actively in discussions since agreement on high-level performance measures with a clear interpretation and verifiable measurement processes will be essential.” [3]


In the meantime, we believe it is important for practitioners to share their experiences and help advance the discussion of AI in pharmacovigilance. We have since published our implementation approach in our white paper and welcome feedback on it.



Biologit has built a quality system and established our processes following GAMP. From this foundation, we also developed our AI Development Lifecycle SOP, which guides the AI-specific considerations from requirements through validation and production deployment.



This enabled us to meet two important requirements:

  • Facilitating robust and repeatable processes through documentation

  • Ensuring traceability of key decisions across the development lifecycle

Maintaining comprehensive internal documentation has the added benefit that external documentation can be produced easily as a byproduct, helping with our next goal of transparency:


Investing in Transparency

We've seen that transparency in AI systems is a recurring theme in regulatory thinking: it helps users make more informed decisions and mitigates the risk of model misuse.


Transparency through documentation was a key concern for the team: we developed and made publicly available artifacts explaining the AI objectives, intended uses, and limitations to different audiences.


Aggregate Guidance as Risks and Controls

With multiple sources of guidance available, and more to come, it made sense to re-frame our understanding in terms of the underlying risks, allowing us to aggregate similar recommendations under the same theme.


Once this is done, risk controls can be implemented. The controls correspond to actionable steps across different areas: system design, data collection, lifecycle processes, documentation and user experience.
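As an illustration, such a mapping can be kept as a simple queryable structure. The entries below are hypothetical examples written for this sketch, not biologit's actual risk register:

```python
# Illustrative risk register: guidance recommendations re-framed as
# risks, each mapped to concrete, actionable controls.
RISK_REGISTER = [
    {"risk": "Model misses a reportable adverse event",
     "theme": "Sound technical approach",
     "controls": ["Optimize and validate for high recall",
                  "Human review of AI-screened items"]},
    {"risk": "Performance degrades after deployment",
     "theme": "Total product lifecycle",
     "controls": ["Continuous monitoring against validation baseline",
                  "SME review of sampled predictions"]},
    {"risk": "Users misunderstand system limitations",
     "theme": "Documentation and transparency",
     "controls": ["Publish intended uses and limitations",
                  "Contextual in-app explanations"]},
]

def controls_for(theme):
    """Collect all controls addressing a given guidance theme."""
    return [c for entry in RISK_REGISTER
            if entry["theme"] == theme
            for c in entry["controls"]]

print(controls_for("Total product lifecycle"))
```

Keeping the register in one place makes it straightforward to trace each control back to the guidance that motivated it as new recommendations are published.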


Sample risks and controls for the design of biologit MLM-AI (full table here)



📖 Learn more: See our article on AI Risk Management


Multi-Disciplinary Team

biologit MLM-AI was built from the ground up together with pharmacovigilance professionals. The result of this collaboration can be seen in all aspects of the product:

  • Data collection and labeling protocols are suited for the real needs of pharmacovigilance use cases.

  • AI models were designed to avoid missing important adverse events: a key risk consideration by our SMEs.

  • Full traceability and oversight of AI decisions, enabling validation and quality assurance of results.

  • Involvement of pharmacovigilance SMEs in the continuous monitoring of model performance.


Model predictions relevant to pharmacovigilance workflows in biologit MLM-AI


Future Outlook


Adoption Strategies

In pharmacovigilance it is important to give users a clear path to validating AI-based systems within their workflows, and sharing implementation and adoption experiences will help speed up uptake.


Currently, GVP regulation provides a general framework for adopting AI-based technology (or any technology) using a risk-based approach. From the ICMRA report:


“Current guidelines do not outline requirements of such software in detail, however, per I.B.8 of GVP module I, IT systems used in pharmacovigilance should be fit for purpose, and ‘subject to appropriate checks, qualification and/or validation activities to prove their suitability’” [6]


A more recent FDA paper [1] reflecting on the agency’s experience with AI implementations further reinforces the need for validating and adopting AI within a risk-based framework:


“[…] an overarching consideration is that important quality checks will be needed to ensure the performance of the combined human–AI system is at least as good as the human-only system it is replacing.“ [1]


"AI in PV" Regulatory Surveillance?

Regulatory guidance for AI in pharmacovigilance is fast evolving, and the need for further refinements is clearly stated in the ICMRA findings [6]:

“Regulatory guidelines for algorithm development and use in pharmacovigilance should be defined.”


So more AI guidance is coming. Even once fully defined, that guidance is unlikely to remain static: it will improve over time as regulators share experiences and learn from best practices in other industries:


“Conduct outreach to learn from […] Other industries that have been using algorithms in critical services for a long time could be useful e.g. aviation/nuclear; Tech companies, or other government agencies using AI.”


Future guidance will likely contain specificities according to use case, types of AI model (static vs. dynamic), geographical region, etc. It may be necessary for practitioners to engage in routine regulatory surveillance exercises to actively monitor AI guidance and translate it to their specific needs as the use of this technology becomes more widespread.


Related Guidance

In addition to the sources discussed here, it is worth mentioning other closely related efforts: the WHO has published broader AI in healthcare guidance on Ethics and governance of artificial intelligence for health, while the EU is proposing a comprehensive regulatory framework for AI applications according to their level of risk.


The Biologit MLM-AI Platform for Medical Literature Monitoring

biologit MLM-AI is a complete literature monitoring solution built for pharmacovigilance and safety surveillance teams. Its flexible workflow, unified scientific database, and unique AI productivity features deliver fast, inexpensive, and fully traceable results for any screening needs.


Contact us for more information on the platform and to sign up for a free trial.




References


[1] Ball R, Dal Pan G, “Artificial Intelligence for Pharmacovigilance: Ready for Prime Time?” Drug Safety V.45 2022 - https://link.springer.com/article/10.1007/s40264-022-01157-4


[2] Ohana B, Sullivan J, Baker N, “Validation and transparency in AI systems for pharmacovigilance: a case study applied to the medical literature monitoring of adverse events”, ArXiv Preprint, December 2021. - https://arxiv.org/abs/2201.00692 


[3] Huysentruyt K, et al., "Validating Intelligent Automation Systems in Pharmacovigilance: Insights from Good Manufacturing Practices", Drug Safety V.44, 2021 - https://link.springer.com/article/10.1007/s40264-020-01030-2


[4] US Food and Drug Administration, 2019, "Proposed Regulatory Framework for Modifications to Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD) - Discussion Paper and Request for Feedback" - https://www.fda.gov/media/122535/download 


[5] FDA, MHRA and Health Canada, 2021, "Good Machine Learning Practice for Medical Device Development: Guiding Principles", https://www.fda.gov/media/153486/download


[6] International Coalition of Medicines Regulatory Authorities, "Horizon Scanning Assessment Report – Artificial Intelligence", August 2021 - https://www.icmra.info/drupal/sites/default/files/2021-08/horizon_scanning_report_artificial_intelligence.pdf

