
Webinar: The Critical Role of Expert Feedback in Healthcare AI

The Centaur Blogging Team
May 30, 2025

Accelerating AI in Healthcare: The Critical Role of Expert Feedback

In the race to deploy AI in healthcare, the bottleneck isn’t always the model architecture or computing power—it’s the quality of data and the feedback loop behind it. In a recent Centaur Labs webinar, leaders from Google Health, PathAI, and Centaur Labs came together to discuss why expert feedback is essential to building effective, safe, and trustworthy healthcare AI systems.

Erik Duhaime, CEO of Centaur Labs, opened by framing the conversation: “You can’t have safe AI if you don’t have the data to measure whether it’s safe.” That simple truth—so often overlooked in the excitement around model performance—underscores the importance of feedback throughout the AI development lifecycle.

The Data Behind the Models

As AI models continue to influence decisions in radiology, pathology, and diagnostics, the need for high-quality training and validation data has never been greater. Google Health and PathAI shared how they involve clinicians and domain experts not only at the labeling stage but throughout model validation and post-deployment monitoring.

Duhaime emphasized the pitfalls of treating “ground truth” as a fixed, objective standard. “There’s a lot of subjectivity; there’s a lot of disagreement,” he said. “If you’re going to claim that your model is better than a radiologist or pathologist, you need a better benchmark.”

Scaling Feedback Through Crowds of Experts

Centaur Labs, founded on the premise that medical AI requires collective intelligence, has built a platform that allows developers to harness the judgment of thousands of medical professionals. “What’s powerful about collecting a wide variety of opinions,” Duhaime explained, “is that you can measure disagreement and use that as a proxy for uncertainty.”

This has broad implications—not just for training data, but for understanding model blind spots and ensuring safe deployment. “You can identify edge cases,” he continued, “and figure out where the model needs more examples or where experts don’t agree.”

The process isn’t just about annotation; it’s about iteration. “It’s not just labels that matter—it’s feedback,” said Duhaime. That feedback loop helps teams continuously refine models, improving generalizability and reducing risk.

Rethinking Ground Truth

A particularly important moment in the discussion came when Duhaime challenged the industry’s traditional view of annotation accuracy. “We tend to treat disagreement as noise, but a lot of times, disagreement is a signal,” he said. “It tells you that the case is hard, that the data is ambiguous, or that there’s no clinical consensus.”

This recognition has led Centaur Labs to invest heavily in infrastructure that allows clients to collect and analyze diverse expert input at scale. Rather than relying on one person’s opinion, they can see how 20 or more professionals weigh in—helping them make more informed decisions.

The Future Is Human-in-the-Loop

As the discussion wrapped, Duhaime reiterated a core belief at the heart of Centaur Labs: “AI should not be trained and evaluated in isolation.” Human expertise isn’t just a helpful addition—it’s a requirement for safe, effective, and ethical medical AI.

For healthcare organizations looking to accelerate AI initiatives, the takeaway was clear: model performance is only part of the story. Continuous, scalable expert feedback is what turns promising algorithms into reliable clinical tools.


Schedule a demo with Centaur.ai
