CDER and CBER have partnered with the European Medicines Agency (EMA) to establish 10 guiding principles aimed at assisting industry and product developers in utilizing artificial intelligence (AI) to enhance drug and biological product development.
Most of the other nine principles formalize practices that the pharmaceutical and biologics industry already understands well, such as validation, traceability, risk-based decision making, lifecycle management, and data integrity, all of which are essentially process- and system-centric. AI can fit into them as another tool that requires qualification, validation, and checks and balances.
The critical question, then, is whether AI can truly "decide" with patient-centric intent. By design, AI systems:
- Are optimized against pre-defined objectives (see the sketch after this list)
- Learn correlations from historical data
- Reflect the assumptions, biases, and constraints embedded in their design and training datasets
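To make the first two bullets concrete, here is a minimal, purely illustrative sketch (not drawn from the guiding principles; the data and parameters are hypothetical) of a logistic-regression model trained in Python: it minimizes the pre-defined objective it is handed and reproduces whatever correlation happens to exist in its historical training data, with no representation of why an outcome matters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "historical" data: the label tracks the feature simply
# because that correlation is present in how the data were collected.
x = rng.normal(size=1000)
y = (x + 0.5 * rng.normal(size=1000) > 0).astype(float)

w, b, lr = 0.0, 0.0, 0.1
for _ in range(500):
    # Pre-defined objective: log loss. The model optimizes this number
    # and nothing else; patient benefit never enters the computation.
    p = 1.0 / (1.0 + np.exp(-(w * x + b)))
    w -= lr * np.mean((p - y) * x)  # gradient of log loss w.r.t. w
    b -= lr * np.mean(p - y)        # gradient of log loss w.r.t. b

print(f"learned weight: {w:.2f} -- the correlation, not the reason")
```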
Will they understand why patient safety matters, or will they merely approximate outcomes from statistical inference?
- Ethical judgement is expected to be "simulated", not exercised
- Can safety-signal sensitivity be resolved mathematically, or does it require moral intervention? (See the sketch after this list.)
- Rare, novel, or ethically ambiguous scenarios can fall outside the model's competence
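To make the "resolved mathematically" question concrete, the sketch below computes a proportional reporting ratio (PRR), a common disproportionality statistic used in safety-signal screening. It is illustrative only: the counts and thresholds are hypothetical, not taken from the guiding principles. The arithmetic produces a flag; deciding what the flag means for patients does not fall out of the arithmetic.

```python
def prr(a: int, b: int, c: int, d: int) -> float:
    """a = target-drug reports with the event, b = without it;
    c = all-other-drug reports with the event, d = without it."""
    return (a / (a + b)) / (c / (c + d))

def chi_square(a: int, b: int, c: int, d: int) -> float:
    """Pearson chi-square for the same 2x2 contingency table."""
    n = a + b + c + d
    ea = (a + b) * (a + c) / n
    eb = (a + b) * (b + d) / n
    ec = (c + d) * (a + c) / n
    ed = (c + d) * (b + d) / n
    return ((a - ea) ** 2 / ea + (b - eb) ** 2 / eb
            + (c - ec) ** 2 / ec + (d - ed) ** 2 / ed)

# Hypothetical counts from a spontaneous-reporting database.
a, b, c, d = 12, 488, 90, 29410

score = prr(a, b, c, d)
chi2 = chi_square(a, b, c, d)

# One commonly cited screening rule: PRR >= 2, chi-square >= 4, a >= 3.
# The rule yields a flag; whether and how to act on it remains a
# judgement for qualified reviewers.
flagged = score >= 2 and chi2 >= 4 and a >= 3
print(f"PRR={score:.2f}, chi2={chi2:.1f}, signal flagged: {flagged}")
```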
Conclusion

Seen through this lens, the guiding principle implicitly frames AI as:
- A decision-support system, not a decision maker
- A productivity amplifier for AI-skilled professionals, not a replacement for them
- A tool to surface insights faster and more comprehensively, while final responsibility remains with a qualified individual
Reference:
Guiding Principles of Good AI Practice in Drug Development, January 2026.