This week, Robert and Haley delve into a critical question in health care’s AI boom: How can we know whether AI tools really work for patients? As AI-powered diagnostic tools flood health systems, ensuring they’re accurate and unbiased is crucial—but there’s currently no standardized process for validating them. Enter the Coalition for Health AI (CHAI), which is creating a network of independent “assurance labs” to rigorously test health care algorithms before they reach patients.
We explore why AI in health care presents unique risks, such as bias and algorithm drift, and discuss FDA Commissioner Robert Califf’s call for ongoing monitoring of health care AI tools. With 3,000 health systems, tech firms, and patient advocates involved, CHAI’s plan includes “ingredient and nutrition labels” that assess how well an AI tool performs across different patient populations.
Haley breaks down how overwhelmed health care leaders, like Sanford Health’s David Newman, are navigating a crowded field of AI products. We look at Sanford’s governance process for vetting new tech and why independent validation could streamline the adoption of safe, effective AI solutions.
If you’re curious about the future of AI in health care and the movement to make these tools transparent and accountable, this episode is for you!