Centaur.ai has teamed up with Consensus, an emerging AI-powered scientific search engine, to accelerate AI development with high-quality, expert annotations. Faced with the challenge of extracting precise information from scientific literature, Consensus turned to Centaur.ai’s collective labeling platform, which combines robust software with a vetted network of scientists and doctors.
In just two weeks, Centaur.ai delivered over 100,000 unique expert labels, meeting Consensus’s needs for scale and accuracy. “Because of the unique Centaur.ai data labeling solution that not only provides the software to complete labeling tasks but also provides a network of experts… it was a no-brainer to work together,” said Consensus CEO Eric Olson. Centaur.ai managed the full lifecycle, from sourcing annotators to quality assurance, allowing Consensus to focus on refining its models. That rapid success laid the foundation for an ongoing collaboration to ensure their AI continues delivering the most accurate scientific insights.
Erik Duhaime, co-founder and CEO of Centaur.ai, reaffirmed the company’s commitment to advancing AI through expert-driven data: “We’re excited to partner with teams on the cutting edge of healthcare and AI… We’re thrilled to support them both as they move to general availability.” Moving forward, both teams aim to leverage this synergy to offer users instant access to rigorously vetted research findings, underpinned by scalable, collective expert intelligence.
Centaur.ai delivers high-quality annotations for neurological datasets, where precision determines scientific validity. Through competitive collective intelligence, Centaur.ai produces reproducible labels that strengthen model evaluation and training. NeurIPS attendees working with EEG, EMG, multimodal waveforms, or cognitive modeling should meet with Centaur.ai to see how accuracy is engineered, not assumed.
Listen to Co-founder and CEO Erik Duhaime talk about the origins of Centaur Labs and the future of medical data labeling.
Medical AI annotation pipelines often work well for research but fail under FDA scrutiny. Regulators expect documented multi-expert consensus, transparent disagreement resolution, and full annotation provenance. Workflows that rely on single annotators or simple tiebreakers may produce accurate labels but lack the auditability required for regulatory clearance and clinical deployment.
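To make the contrast concrete, the requirements above can be sketched in code. This is a minimal, hypothetical illustration, not Centaur.ai's actual system: the class and function names (`Annotation`, `ConsensusRecord`, `resolve`) are invented for this example. It shows a consensus step that preserves every expert's label, records how disagreement was resolved, and escalates ties to an adjudicator rather than a simple tiebreaker, so the full audit trail survives.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from collections import Counter

@dataclass
class Annotation:
    # Who labeled the item, what they said, and when -- the raw provenance.
    annotator_id: str
    label: str
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

@dataclass
class ConsensusRecord:
    # The final label plus the evidence behind it, kept for auditability.
    item_id: str
    annotations: list
    consensus_label: str
    resolution: str  # "unanimous", "majority", or "adjudicated"

def resolve(item_id, annotations, adjudicate=None):
    """Derive a consensus label while retaining full annotation provenance."""
    counts = Counter(a.label for a in annotations)
    top_label, top_count = counts.most_common(1)[0]
    if top_count == len(annotations):
        resolution = "unanimous"
    elif top_count > len(annotations) / 2:
        resolution = "majority"
    else:
        # No majority: escalate to a senior reviewer instead of a tiebreaker,
        # and record that adjudication happened.
        if adjudicate is None:
            raise ValueError(f"{item_id}: no majority and no adjudicator")
        top_label = adjudicate(annotations)
        resolution = "adjudicated"
    return ConsensusRecord(item_id, annotations, top_label, resolution)
```

A single-annotator pipeline would collapse all of this into one label with no record of agreement or escalation; keeping the `ConsensusRecord` intact is what makes the workflow auditable rather than merely accurate.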