Continuously Learning Systems (CLS) Working Team

Be a part of the conversation; join our LinkedIn AI group today!

Goal

To increase assurance of our product quality by becoming a predictive organization: using the power of AI to provide more robust information and to support better-informed decisions than we can make today.

Problem Statement to Address

Continuously learning systems bear the risk of unanticipated outcomes due to the lack of human involvement in changes, unintended (and undetected) degradation over time, confusion for users, and incompatibility of results with other software that consumes the output of the evolving algorithm.
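One risk named here, unintended and undetected degradation over time, is commonly countered in practice by monitoring live performance against a reference window fixed at deployment. A minimal sketch in Python (the function name, window contents, and the 5% threshold are illustrative assumptions, not part of the team's work):

```python
import numpy as np

def degradation_alert(reference_scores, recent_scores, max_drop=0.05):
    """Flag possible model degradation by comparing recent accuracy
    against a fixed reference window established at deployment.

    reference_scores, recent_scores: arrays of per-case correctness (0/1).
    max_drop: tolerated drop in mean accuracy before raising an alert.
    """
    ref_acc = np.mean(reference_scores)
    recent_acc = np.mean(recent_scores)
    return (ref_acc - recent_acc) > max_drop

# Example: accuracy fell from 0.95 to 0.88, so the alert fires.
reference = np.array([1] * 95 + [0] * 5)
recent = np.array([1] * 88 + [0] * 12)
print(degradation_alert(reference, recent))  # True
```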


Phase 2 (August 2018-August 2019)

Phase 2 Team Leadership

CO-LEADERS

  • Pat Baird, Regulatory Head of Global Software Standards, Philips

  • Rohit Nayak, Principal, Sundance Capital, LLC

TEAM ADMINISTRATOR

  • Kelly Nienburg, Senior Associate, PwC

MENTOR FROM XAVIER AI CORE TEAM

  • Kumar Madurai, Principal Consultant, CTG

AI EXPERT FROM XAVIER AI CORE TEAM

  • Kumar Madurai, Principal Consultant, CTG


Description of Work

To increase confidence in output derived through artificial intelligence processes, such that humans can make critical decisions from these outcomes. In the context of continuous learning models, the ability to explain how and why the underlying artificial intelligence algorithms evolved to make better decisions or predict different outcomes is critical. The team will address how to develop interpretation pathways for increasingly complex and sophisticated algorithms, including how to extract knowledge, understanding, and root causes for augmented human decisions.
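One common interpretation pathway, offered here only as an illustrative example, is permutation importance: shuffle one feature at a time and measure how much held-out performance drops. A minimal sketch using scikit-learn's `permutation_importance` (the dataset and model are placeholders):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Placeholder data and model; any fitted estimator works the same way.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature and record how much held-out accuracy drops:
# a large drop means the model's decisions depend on that feature.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for name, mean in sorted(zip(X.columns, result.importances_mean),
                         key=lambda t: -t[1])[:5]:
    print(f"{name}: {mean:.3f}")
```

Permutation importance is model-agnostic, which matters for continuously learning systems whose internals change between versions.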

The work of this team will encompass:

I. Consistent Nomenclature Development

a. Define the various AI terms and clearly distinguish among them
b. Link each term to the extent to which humans are in the loop
c. Define what explainability is and the range of forms it could take

II.  Increasing Confidence in AI Output

a. Good practices for establishing a credible data set
b. Good practices for training on the data set
c. Good practices for linking the outcome to the data sources (see the provenance sketch after this list)

III.  Regulatory Landscape

a. Current Landscape of how AI is regulated
b. Future Paradigm of how AI can be regulated
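
Item II.c above, linking an outcome to its data sources, is often implemented as a provenance record attached to each prediction. A minimal sketch (all field names are illustrative assumptions):

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
import hashlib, json

@dataclass
class PredictionRecord:
    """Links one model output back to the data that produced it."""
    model_version: str       # which snapshot of the evolving model
    training_data_hash: str  # fingerprint of the training set used
    input_payload: dict      # the case the prediction was made on
    output: float            # the model's prediction
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

def fingerprint(dataset_rows):
    """Stable hash of a training set, so an outcome can be traced back
    to the exact data the model had learned from."""
    blob = json.dumps(dataset_rows, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()

record = PredictionRecord(
    model_version="cls-2019-03-01",
    training_data_hash=fingerprint([{"age": 54, "label": 1}]),
    input_payload={"age": 61},
    output=0.82,
)
print(record.training_data_hash[:12], record.timestamp)
```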


Phase 1 Deliverable (August 2017-August 2018)

Phase 1 Team Leadership

CO-LEADERS

  • Berkman Sahiner, Senior Biomedical Research Scientist, FDA

  • Mohammed Wahab, Senior Manager, Informatics and Analytics, Abbott

TEAM ADMINISTRATOR

  • Mac McKeen, Fellow, Regulatory Science, Boston Scientific

MENTOR FROM XAVIER AI CORE TEAM

  • Walt Mullikin, Head of Enterprise Analytics, Shire

AI EXPERT FROM XAVIER AI CORE TEAM

  • Kumar Madurai, Principal Consultant, CTG


Problem Solving Process

  1. Identify the current landscape of continuously-learning algorithms as it relates to this work group.

  2. Develop a hierarchy of continuously-learning algorithms, from the simplest (closest to locked algorithms) to the most advanced.

  3. Start with what we think we know: the scientific needs for validation of locked algorithms (a minimal validation sketch follows this list).

  4. Identify important components of continuously-learning algorithms and potential ways to evaluate them.

  5. Role Reversal Experiment: think of yourself as a customer of a continuously-learning algorithm.
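
Step 3 builds on locked-algorithm validation, where performance is estimated once on an independent test set and reported with an uncertainty interval. A minimal sketch (the normal-approximation 95% interval is one illustrative choice):

```python
import numpy as np

def validate_locked_algorithm(y_true, y_pred):
    """Point estimate and 95% confidence interval for the accuracy of a
    locked (non-evolving) algorithm on an independent test set."""
    correct = (np.asarray(y_true) == np.asarray(y_pred)).astype(float)
    n = correct.size
    acc = correct.mean()
    half_width = 1.96 * np.sqrt(acc * (1 - acc) / n)  # normal approximation
    return acc, (acc - half_width, acc + half_width)

# Illustrative labels only: predictions agree with truth ~90% of the time.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=200)
y_pred = np.where(rng.random(200) < 0.9, y_true, 1 - y_true)
print(validate_locked_algorithm(y_true, y_pred))
```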

Expected Deliverables

  1. Detailed examples of continuously-learning algorithms currently used in medicine

  2. A list of continuously-learning algorithm applications in medicine that we might likely encounter in the near future

  3. Preparation and conduct of a survey on the current landscape of continuously-learning algorithms in medicine

  4. Identification, at a high level, of the scientific and technical information that is important to know about a continuously-learning algorithm in order to have confidence in its safety and effectiveness (a checklist sketch follows the list):

a. Before the algorithm is deployed
b. After the algorithm starts to evolve while it is in use
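
As one way to picture deliverable 4, the before/after information could be captured as a structured checklist. The entries below are illustrative assumptions about what such a checklist might contain, not team outputs:

```python
# Illustrative checklist structure for deliverable 4; all field names
# and descriptions are assumptions made for this sketch.
BEFORE_DEPLOYMENT = {
    "intended_use": "who the algorithm is for and what it claims to do",
    "training_data_description": "sources, size, inclusion criteria",
    "locked_baseline_performance": "metrics on an independent test set",
}
AFTER_DEPLOYMENT = {
    "update_trigger": "what causes the algorithm to change (schedule, data volume)",
    "monitoring_metrics": "performance tracked against the locked baseline",
    "rollback_criteria": "when an evolved version is withdrawn",
}

for phase, items in (("Before deployment", BEFORE_DEPLOYMENT),
                     ("While evolving in use", AFTER_DEPLOYMENT)):
    print(phase)
    for key, meaning in items.items():
        print(f"  {key}: {meaning}")
```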