Towards AI Transparency with Model Factsheets

Updated: Jun 14

At biologit we’re making transparency an integral part of our development process, and we’re excited to release the first fact sheet for biologit MLM-AI’s Suspected Adverse Event model and get feedback from the AI and pharmacovigilance community.



Model FactSheets were introduced by Arnold et al. (2019), who define them as documentation artifacts disclosing the key characteristics of an AI system: how the data was curated, the model's design decisions, its intended use, and its trade-offs. This improved understanding can play a valuable role in the risk management of AI systems.


 

“FactSheets help prevent overgeneralization and unintended use of AI services by solidly grounding them with metrics and usage scenarios” (Arnold et al., 2019)

 

In addition, the emerging regulatory guidance for AI in pharmacovigilance (Huysentruyt et al., 2021) and the valuable resources from the ABOUT ML initiative of the Partnership on AI helped us formulate our version, which currently outlines:

  • Business problem

  • Intended use (target domain, inputs, operational envelope)

  • Data curation and labeling protocol

  • Training data characteristics

  • Description of machine learning models and inference pipeline

  • Performance metrics and experimental results
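To make the outline concrete, the sections above can be sketched as a simple data structure. This is purely illustrative: the field names below are our own shorthand for the outline, not the actual schema of the biologit MLM-AI fact sheet.

```python
from dataclasses import dataclass

# Illustrative sketch only: field names mirror the outline above and are
# hypothetical, not biologit's actual fact sheet schema.
@dataclass
class ModelFactsheet:
    business_problem: str
    intended_use: dict          # target domain, inputs, operational envelope
    data_curation: str          # curation and labeling protocol
    training_data: dict         # training data characteristics
    model_description: str      # ML models and inference pipeline
    performance_metrics: dict   # metrics and experimental results

    def missing_sections(self) -> list[str]:
        """Names of sections left empty, as a simple completeness check."""
        return [name for name, value in vars(self).items() if not value]

# Example: a partially completed fact sheet
sheet = ModelFactsheet(
    business_problem="Detect suspected adverse events in literature",
    intended_use={"domain": "pharmacovigilance", "inputs": "article abstracts"},
    data_curation="",
    training_data={},
    model_description="",
    performance_metrics={},
)
print(sheet.missing_sections())  # lists the sections still to be documented
```

A structured representation like this makes it easy to verify, as part of a release process, that every disclosed section of a fact sheet has actually been filled in.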

We will continue to update and extend the fact sheets for MLM-AI as the platform evolves. We’d love to hear your feedback.


References


(Arnold et al., 2019) - Arnold, Matthew, et al. "FactSheets: Increasing trust in AI services through supplier's declarations of conformity." IBM Journal of Research and Development 63.4/5 (2019): 6-1. [ArXiv]


(Huysentruyt et al., 2021) - Huysentruyt K, et al. "Validating Intelligent Automation Systems in Pharmacovigilance: Insights from Good Manufacturing Practices." Drug Saf. 2021 Mar;44(3):261-272. [doi]

