Podcast: AI Explainability


In this podcast, Jon Speer, Founder and VP QA/RA at Greenlight Guru, interviews Marla Phillips, Director of Xavier Health, about AI explainability and why it matters in healthcare. As AI begins to transform how the pharma and medical device industries operate, the two discuss how to look past the media-generated hype and fear of AI and realize its benefits by using it responsibly. AI explainability is just one of the many topics covered at the Xavier AI Summit 2018.

Highlights of the podcast include:

  • AI Explainability
    Part of the transparency that lets end users have confidence in the outcome of AI. Explainability links the credibility of the input to the output.

  • AI has been around since the 1950s
    But its use is new to many people. The Xavier AI Summit shows how it works in practice.

  • Pivoting from being reactive to proactive
    Advancing the use of AI to identify correlations across data sets that improve product quality and patient outcomes.

  • Some devices have digital health components
    There are movements toward using real-world data to give manufacturers information for evaluating device performance.

  • AI can be used to identify conditions that lead to failure
    The Continuous Product Quality Assurance team encourages review and assessment of all the data.

  • Where is the data?
    GMP, non-GMP, financial, weather, and other kinds of data all impact product quality. AI can filter out the noise and find what's meaningful.

  • AI is a continuously learning system
    How do you evaluate it? How did it reach its outcome? How do you demonstrate its credibility? How do you train the algorithm?

  • Challenges of implementing AI
    These include demonstrating the credibility of AI output without traditional validation and without access to electronic data.

Xavier AI Summit 2019


Artificial Intelligence realized.
The future is here.

August 20-22, 2019

In addition to this podcast, Greenlight Guru has turned the recording into a blog post, which you can view here: https://www.greenlight.guru/blog/ai-explainability.