
New time range selection capability accelerates medical video annotation

Tom Gellatly, VP of Engineering
August 15, 2024

While much of AI development in healthcare centers on images and text, some information can only be captured in video. Of all data types, video most closely mirrors how humans perceive the world, and AI teams are increasingly using it to build cutting-edge models in healthcare.

Our medical video annotation tool is now enhanced with time range selection and classification features, enabling AI teams to leverage video more efficiently for model development. Read on to learn how our range selection capability works and how customers are using it to build their models.

How to use time range selection and classification for medical video annotation

Our new time range selection capability lets you select a range of frames, down to 0.1-second precision, with two taps and assign it to a class. Time range selection and classification are now available for video projects on both mobile and desktop. Request a demo with our team to see this tool in action, or follow this step-by-step guide to start using time range selection and classification today.
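Under the hood, each submission pairs a time range with a class. As a rough sketch only (the field names below are illustrative assumptions, not Centaur.AI's actual data model or export format), a single label might be represented like this:

```python
from dataclasses import dataclass

@dataclass
class TimeRangeLabel:
    """Hypothetical record pairing a time range with a class label.

    Field names are illustrative only; they do not reflect the actual
    Centaur.AI schema.
    """
    video_id: str      # which video the label belongs to
    start_sec: float   # range start, at 0.1 s resolution
    end_sec: float     # range end, at 0.1 s resolution
    label_class: str   # class assigned to the selected range

# Example: a labeler marks the pre-operative phase of a surgical video
label = TimeRangeLabel(
    video_id="case-042",
    start_sec=12.3,
    end_sec=87.9,
    label_class="pre-operative phase",
)
```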

From the Centaur.AI dashboard:

  1. Navigate to a video project.
  2. Click "New task" and select "Time range selection."
  3. Give the task a name, prompt, and label classes.
  4. Add video import(s) to the task.
  5. Navigate to a case, click "Draw new label," and use the video progress bar and record/end buttons to create gold standard ranges.
  6. Work with the Centaur Labs team to launch your time range selection contest to the crowd!

From the mobile app:

  1. Navigate to a video contest.
  2. To select a range, tap the first and last frames in the range; all frames in between are selected automatically.
  3. Tap any of the highlighted frames.
  4. A prompt will appear asking you to select a class.
  5. Tap "Save & Submit."
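Tapping two frames is effectively choosing a start and end point on the video timeline. Purely as an illustration (assuming a constant frame rate, which the app handles for you), converting two tapped frame indices into a time range at 0.1-second resolution could look like this:

```python
def frames_to_time_range(first_frame: int, last_frame: int, fps: float) -> tuple[float, float]:
    """Convert two tapped frame indices into a (start, end) time range in seconds.

    Illustrative sketch only: assumes a constant frame rate and rounds
    to the 0.1-second resolution mentioned above.
    """
    start = round(first_frame / fps, 1)
    end = round(last_frame / fps, 1)
    return start, end

# Example: taps on frames 450 and 720 of a 30 fps video select 15.0 s to 24.0 s
print(frames_to_time_range(450, 720, fps=30.0))  # (15.0, 24.0)
```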

Examples of medical video annotation

We know firsthand from our customers that AI teams are finding creative ways to build models that leverage medical video. See how surgical video, patient monitoring footage, and telehealth consult recordings can benefit from time range selection and classification.

Surgical video

Surgical and diagnostic procedures are often recorded for both educational and documentation purposes, providing a detailed view into the surgical process, techniques, and patient outcomes. Time range selection for surgical video enables labelers to identify important phases in a surgery and the individual steps within those phases, e.g., the pre-operative phase and the step of checking that required surgical equipment is present.
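One way to picture this is as a two-level taxonomy of classes, phases and the steps within them, that a selected time range can be assigned to. The sketch below is hypothetical; apart from the pre-operative equipment check mentioned above, the names are illustrative and not a prescribed ontology:

```python
# Hypothetical label taxonomy for surgical video: phases and the steps within them.
# Each selected time range would be classified with one of these classes.
SURGICAL_PHASES = {
    "pre-operative": [
        "check for required surgical equipment",  # example from the text above
        "confirm patient identity and site",      # illustrative
    ],
    "operative": [
        "incision",   # illustrative
        "closure",    # illustrative
    ],
    "post-operative": [
        "instrument count",  # illustrative
    ],
}
```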

Patient monitoring

When you think of patient monitoring, you may think of real-time vitals at the bedside, or of remote monitoring via data feeds from wearable devices and mobile apps where patients can record exercise, nutrition, notes from therapy, and more. On top of this, AI teams are also building models on video feeds of patient activity in homes and hospital rooms, with the goal of minimizing the risk of dangerous falls. Labelers can identify video segments that capture high-risk patient behavior, home features that raise fall risk (steep stairs!), or risky habits, so that care teams can step in and mitigate that risk.

Telehealth consults

Facial expressions and gestures give clinicians a sense of a patient’s emotions about a topic, helping them answer questions like “Does the patient understand what I just shared?”, “Is the patient excited to adopt this new lifestyle change?”, and “Is the patient apprehensive about this course of action?” Range selection enables labelers to find and classify these significant expressions so they can be used to inform care.

These use cases are just a preview of how our time range selection and classification tools can accelerate your video labeling projects. If you don’t see your use case listed, connect with our sales team and engineers to learn how your unique use case can benefit.

Related posts

July 1, 2025

Microsoft Case Study: Grounding AI in Expert-Labeled Data

Centaur.AI collaborated with Microsoft Research and the University of Alicante to create PadChest-GR, the first multimodal, bilingual, sentence-level dataset for grounded radiology reporting. This breakthrough enables AI models to justify diagnostic claims with visual references, improving transparency and reliability in medical AI.

August 1, 2022

Centaur.ai partners with Lucem Health to advance medical AI

Learn about our partnership with Mayo Clinic spin-out Lucem Health, and how clinical AI development teams can access high-quality medical data annotations at scale.

February 2, 2022

Disease prevalence and feedback in dermatology

A Centaur Labs study found that disease prevalence and expert feedback significantly influence diagnostic accuracy in dermatology, highlighting the need for contextual data and ongoing guidance to reduce errors and improve clinical decision-making.
