Centaur Labs teams up with Brigham and Women's Hospital on Massachusetts Life Sciences Center-funded project

Laura Kier, VP of Growth
July 8, 2021

Brigham and Women’s Hospital Partners with Centaur

Centaur Labs was recently awarded a $750,000 grant from the Massachusetts Life Sciences Center (MLSC) as part of the “Bits to Bytes” program, in collaboration with Brigham and Women’s Hospital (BWH). This award supports a strategic initiative aimed at converting large volumes of clinical imaging data—particularly lung ultrasound exams—into structured, expert-annotated datasets. These datasets are essential to drive AI innovation in respiratory medicine, enabling faster model development, more robust validation, and ultimately better patient outcomes.

The MLSC Bits to Bytes Program Goals

The MLSC’s Bits to Bytes program was launched in 2018 to fund projects that transform scientific data into actionable, machine-readable formats, all while building Massachusetts’s data science workforce. In the latest round of funding, 15 innovative projects were selected statewide, collectively receiving nearly $20 million in support. The Centaur Labs–BWH proposal stood out for its potential to significantly accelerate AI development in pulmonary diagnostics and for its clear strategy to seamlessly integrate expert annotation at scale.

Why Lung Ultrasound Annotation Matters

Lung ultrasound is experiencing renewed attention as a fast, non-invasive, bedside imaging modality—especially in conditions like pneumonia, pulmonary edema, and COVID-19. However, unlike CT and X-ray datasets, ultrasound data requires specialized annotation to capture subtle, dynamic patterns such as pleural line irregularities, B-lines, consolidations, and pleural effusions. Labeling this data consistently at scale is extremely labor-intensive and requires trained clinical experts.

That’s where Centaur Labs’ collective annotation platform delivers a unique advantage. By mobilizing a vetted network of clinicians and radiology experts, the team can curate large volumes of ultrasound studies while maintaining medical-level accuracy. The platform taps into expert-led consensus workflows—multiple reviewers annotate the same case, and high-quality labels emerge through aggregated agreement, while outlier annotations are flagged for deeper review.
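The consensus workflow described above can be illustrated with a minimal sketch. This is not Centaur Labs' actual implementation — real production systems likely weight reviewers by measured skill and use richer statistical aggregation — and all names (`aggregate_labels`, the clip IDs, the label strings) are hypothetical. It simply shows the core idea: accept a label when enough reviewers agree, and flag the case for deeper review when they don't.

```python
from collections import Counter

def aggregate_labels(annotations, min_agreement=0.75):
    """Aggregate per-case labels from multiple reviewers.

    annotations: dict mapping case_id -> list of labels from reviewers.
    Returns (consensus, flagged): consensus labels for cases meeting
    the agreement threshold, and case ids flagged for deeper review.
    """
    consensus, flagged = {}, []
    for case_id, labels in annotations.items():
        # Take the most common label and check how many reviewers chose it
        top_label, count = Counter(labels).most_common(1)[0]
        if count / len(labels) >= min_agreement:
            consensus[case_id] = top_label
        else:
            flagged.append(case_id)
    return consensus, flagged

# Hypothetical example: three reviewers per lung-ultrasound clip
annotations = {
    "clip_001": ["b_lines", "b_lines", "b_lines"],         # unanimous
    "clip_002": ["consolidation", "b_lines", "effusion"],  # no consensus
}
consensus, flagged = aggregate_labels(annotations)
# consensus -> {"clip_001": "b_lines"}; flagged -> ["clip_002"]
```

In practice the agreement threshold trades label volume against label quality: a higher threshold yields fewer but more trustworthy consensus labels, with more cases routed to expert arbitration.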

Academic Validation: Bridging Gamified Labeling and Medical Quality

Centaur’s approach won early validation through a scholarly study published in collaboration with researchers from BWH, Massachusetts General Hospital, and Eindhoven University of Technology. The study demonstrated that a gamified expert-crowdsourcing platform could label lung ultrasound frames as reliably as traditional expert review—even for nuanced clinical features—and at scale. This evidence was instrumental in securing the MLSC grant, proving that innovative annotation methodologies could meet clinical standards.

The platform’s gamification aspects—leaderboards, rewards, and performance tracking—enhance engagement and quality. Clinicians and medical students could label de-identified ultrasound clips through a mobile app at their convenience, often during spare moments, effectively contributing to large-scale data annotation without disrupting daily workflows.

Broader Implications for Medical AI

Beyond lung ultrasound, this collaboration highlights a scalable approach for deploying expert annotation across other clinical modalities—such as dermatology, pathology, cardiology, and structured clinical notes. The MLSC grant validates the hypothesis that combining collective labeling with medical expertise can tackle large-scale data bottlenecks. More broadly, this model aligns with the emerging vision of “human-in-the-loop” systems, where expert feedback isn’t optional—it’s fundamental to model trustworthiness and real-world performance.

The Brigham and Women’s Hospital–Centaur Labs partnership represents a pioneering step in medical AI data infrastructure. By combining large-scale data collection, structured expert annotation, and public support via the MLSC, the initiative addresses a key barrier in AI pipelines: trusted, high-quality training data. This lung ultrasound project not only accelerates algorithm performance but also builds annotation capacity across the region—benefiting future endeavors in diagnostics, personalized medicine, and beyond.

In a world where AI tools risk slipping into error-prone black boxes, this project delivers a repeatable, accountable method for ensuring clinical AI is both data-driven and expert-validated. As the team prepares to launch the annotation effort, we look forward to sharing milestone updates, open-access datasets, and collaboration invitations with Massachusetts’s broader medical AI community—and beyond.

Read the full article at Masslifesciences.com »

Related posts

February 2, 2022

Disease prevalence and feedback in dermatology

A Centaur Labs study found that disease prevalence and expert feedback significantly influence diagnostic accuracy in dermatology, highlighting the need for contextual data and ongoing guidance to reduce errors and improve clinical decision-making.

Continue reading →
June 15, 2025

Cognitive-Inspired Data Engineering For AI

Centaur.AI’s latest study tackles human bias in crowdsourced AI training data using cognitive-inspired data engineering. By applying recalibration techniques, they significantly improved medical image classification accuracy. This approach enhances AI reliability in healthcare and beyond, reducing bias and improving efficiency in machine learning model training.

Continue reading →
May 30, 2025

Expert Feedback Is Critical To Accelerate AI

Expert feedback is essential for safe, effective healthcare AI, as emphasized in a Centaur Labs webinar featuring leaders from Google Health, PathAI, and Centaur.

Continue reading →