When Every Pixel Matters

Tristan Bishop, Head of Marketing
May 23, 2025

At Centaur Labs, we believe that great AI begins with great data. In medical imaging, that means honoring every pixel, every voxel, and every frame. Whether you’re a researcher training a model, a clinician validating results, or an annotator labeling anatomical structures, success hinges on clarity, consistency, and context.

This post unpacks the unsung backbone of modern imaging—DICOM—and explores how our segmentation tools inside the Centaur DICOM Viewer empower high-fidelity annotation at scale.

What Is DICOM and Why Does It Matter?

DICOM (Digital Imaging and Communications in Medicine) is more than a file format—it’s the universal language of medical imaging. Every CT scan, MRI, and ultrasound image comes wrapped in a DICOM container filled with metadata. This includes:

  • Modality type (MRI, CT, Ultrasound)
  • Pixel spacing (real-world size of each pixel)
  • Slice thickness (depth of each imaging plane)
  • Study and series IDs (to group related images)
  • Resolution (rows and columns)

These tags may seem technical, but they’re critical. Miss a slice thickness? Your 3D reconstruction could misrepresent anatomy. Misread pixel spacing? Measurements could be off by millimeters. In healthcare, that margin matters.
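To make those stakes concrete, here is a small sketch in plain Python (with made-up but realistic tag values, not taken from any real scan) showing how pixel spacing converts pixel counts into physical dimensions, and how a small misread skews a measurement:

```python
# Hypothetical DICOM geometry tags for a CT slice (values are illustrative).
rows, cols = 512, 512          # Resolution: the image is 512 x 512 pixels
pixel_spacing = (0.70, 0.70)   # PixelSpacing: mm per pixel (row, column)

# Physical extent of the slice: pixels * mm-per-pixel.
height_mm = rows * pixel_spacing[0]
width_mm = cols * pixel_spacing[1]

# A lesion spanning 30 pixels measures:
lesion_px = 30
lesion_mm = lesion_px * pixel_spacing[0]       # ~21.0 mm

# Misreading the spacing as 0.75 mm shifts the same measurement:
lesion_mm_wrong = lesion_px * 0.75             # ~22.5 mm, a 1.5 mm error
```

A 1.5 mm discrepancy from a single misread tag is exactly the kind of margin that matters clinically.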

Volume-Based vs. Time-Based Imaging

Medical images typically fall into two categories, each requiring a different annotation mindset:

Volume-Based Imaging (e.g., CT/MR): These are 3D scans composed of thin slices, each one a voxel-sized window into anatomy. Accurate 3D reconstructions require a deep understanding of DICOM geometry.

2D Time-Based Imaging (e.g., Ultrasound): These are 2D frames recorded across time. Rather than reconstructing volume, you’re tracking motion, frame by frame. Segmentation here is always 2D, but no less meaningful.

Behind every scan is a clinical story in progress. Understanding whether you’re segmenting space or time changes how you interpret and annotate each image.
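The distinction shows up directly in the arithmetic. As a hedged sketch (tag values invented for illustration): in a volume, PixelSpacing and SliceThickness turn voxel counts into physical volumes, while in a time-based clip, the FrameTime tag turns frame indices into timestamps:

```python
# Volume-based (CT/MR): each voxel has real-world dimensions derived
# from PixelSpacing (in-plane) and SliceThickness (between planes).
pixel_spacing = (0.70, 0.70)   # mm
slice_thickness = 1.25         # mm
voxel_volume_mm3 = pixel_spacing[0] * pixel_spacing[1] * slice_thickness

# Summing voxels inside a segmentation yields a physical volume:
segmented_voxels = 10_000
lesion_volume_ml = segmented_voxels * voxel_volume_mm3 / 1000.0  # mm^3 -> mL

# Time-based (ultrasound): frames are indexed by time, not depth.
frame_time_ms = 33.3           # FrameTime: ms between successive frames
frame_index = 45
timestamp_s = frame_index * frame_time_ms / 1000.0  # seconds into the clip
```

Same container format, two very different coordinate systems: one maps indices to millimeters, the other to seconds.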

OHIF + Centaur: Built for Modern Segmentation Workflows

We’ve embedded our full suite of segmentation tools directly into the open-source OHIF viewer to support varied use cases and diverse users, from researchers working at laptop workstations to physicians annotating on tablets.

Navigation & Metadata Tools

  1. Zoom, Pan, Scroll, Slice Navigation: Precisely explore volumetric data.
  2. Window/Level Adjustment: Control brightness and contrast for clearer visibility.
  3. DICOM Tag Browser: View metadata directly—no guessing, no surprises.
  4. Segmentation Overview Bar: See where annotations live within the volume.
  5. 3D Volume Rendering: Visualize your segmentations in three dimensions.

Annotation Tools: Precision by Design

Our annotation tools are purpose-built and adaptable, organized into three core categories:

Pixel-Based Tools

  1. Paintbrush & Eraser: Freehand tools with adjustable size—ideal for stylus or tablet use.
  2. Threshold Tool: Automatically segments based on pixel intensity—useful for isolating structures like tumors or bones.
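To show the idea behind intensity thresholding (this is a minimal NumPy sketch of the general technique, not Centaur’s implementation, and the toy intensity values are invented), a mask is produced by keeping only pixels that fall inside an intensity window:

```python
import numpy as np

# Toy 2D "slice" with CT-like intensity values (illustrative only):
# air is very negative, soft tissue sits near 30-50, bone above ~300.
slice_2d = np.array([
    [-1000, -1000,   40,   45],
    [-1000,    35,  400,  420],
    [   30,    38,  410,  430],
    [-1000, -1000,   42, -990],
])

# Threshold segmentation: mark pixels within an intensity window.
lower, upper = 300, 500   # a window that isolates the bone-like values
mask = (slice_2d >= lower) & (slice_2d <= upper)

print(mask.sum())  # number of segmented pixels
```

The resulting boolean mask can then be refined by hand, which is precisely why pairing automatic thresholding with brush and eraser tools works well in practice.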

Shape-Based Tools

  1. Circle/Sphere: For fast geometric shapes in 2D or 3D.
  2. Box: Allows labelers to create volumetric box annotations.
  3. Polygon & Contour: Point-by-point drawing; contour mode allows curved edges for anatomical accuracy.

AI-Assisted Tools (SAM: Segment Anything Model)

  1. Single-Slice SAM: Draw a box around a structure; the model predicts the segmentation.
  2. Propagation SAM: Segment multiple slices from a single prompt.

Best of all, SAM-generated predictions can be refined with contour or pixel tools for maximum precision.

User Preferences and Platform Considerations

Different users need different tools, and Centaur’s OHIF integration respects that.

  • Mouse + Keyboard Users: Polygon tools offer high control for desktop workflows.
  • Tablet Users: Pixel tools with stylus support (e.g., Apple Pencil) enable fluid freehand drawing.

Tool interoperability is also key: you can easily start with AI, switch to manual, or jump between shapes and brushes.

While OHIF provides a strong foundation for visualization, the Centaur platform extends far beyond viewing, transforming image inspection into a full annotation pipeline with built-in quality control, feedback loops, and consensus modeling. As annotators draw segmentations or respond to classification prompts, their submissions are recorded and immediately analyzed for agreement with others. 

For more advanced workflows, we offer an arbitrator mode. Arbitrator mode enables power users or customers to inspect submissions from multiple annotators, compare disagreements, and create a gold-standard consensus through direct review. Once consensus is established, results are automatically compiled and available for download, structured, standardized, and ready for model training or clinical validation. This layered approach—human insight, automated scoring, and transparent arbitration—demonstrates the real power of Centaur’s platform when built on top of OHIF: not just viewing, but decision-making at scale.

What It All Means—for You, for AI, for Healthcare

We don’t just build tools; we build trust: trust in the data, trust in the process, and trust in the people behind each annotation. When segmentations are precise and metadata is respected, the insights become clinically meaningful. That, in turn, powers AI systems that assist in diagnosis, track disease, and guide treatment.

Every DICOM header is a data blueprint. Every annotation is a decision. And behind every decision is a human. That’s the Centaur way: combining human insight with machine efficiency to shape the future of medical AI.

If you work with medical imaging, we’d love to show you how our OHIF-based tools can accelerate your workflow without compromising quality. 

For a demonstration of how Centaur can facilitate your AI model training and evaluation with greater accuracy, scalability, and value, click here: https://centaur.ai/demo
