By Jerry Chapman, Editor-In-Chief at Xavier Health, and GMP Quality Expert
Continuously learning systems (CLS) have shown great promise for improving product quality in the pharmaceutical and medical device industries. These artificial intelligence (AI) algorithms constantly and automatically update themselves as they recognize patterns and behaviors in real-world data, enabling companies to become predictive, rather than reactive, in their quality assurance. However, the output for the same task can change as a CLS algorithm evolves. This stands in sharp contrast to systems traditionally used in the life sciences, which are validated and expected not to change — performing exactly the same way each time they are used.
A team of FDA officials and industry professionals working through Xavier Health's Artificial Intelligence Initiative is tackling this issue by developing good machine learning practices (GMLPs) for the evaluation and use of CLS. A primary objective of the Xavier Health CLS Working Team has been to identify how to provide a reasonable level of confidence in the performance of a CLS in a way that maximizes the advantages of AI while minimizing risks to product quality and patient safety. (A second Xavier Health team of industry professionals is actively exploring the use of AI for continuous product quality assurance (CPQA), as discussed in a previous article.)
The team includes members from pharmaceutical, medical device, and computer technology companies, as well as from academia and the FDA. FDA involvement is critical to this effort, as both industry and regulatory agencies need to trust the science behind the AI and evolve their understanding of it together.
At the Xavier Health AI Summit in August 2018, members of the CLS team discussed the primary deliverable from the recently completed first phase of their work — a white paper, "Perspectives and Good Practices for AI and Continuously Learning Systems in Healthcare."
Regarding the AI effort, Xavier Health Director Marla Phillips commented, “Some people feel that the use of AI is irresponsible. But honestly, we are at the point in our industry, with decades of data sitting on the shelf unused, that we are irresponsible for not using AI. Think about all the predictive trends sitting on the shelf that could prevent failures. Think of the diagnoses that go undiagnosed, or the cures left unused. It is time to take action.”
This article originally appeared on Pharmaceutical Online, September 28, 2018.