While much of AI development in healthcare centers on images and text, some information can only be captured in video. Video resembles how humans perceive the world more closely than any other data type, and AI teams are increasingly using it to build cutting-edge models.
Our medical video annotation tool is now enhanced with time range selection and classification features, enabling AI teams to leverage video more efficiently for model development. Read on to learn how our range selection capability works and how customers are using it to build their models.
Our new time range selection capability allows you to select a range of frames, down to a tenth of a second, with two taps and assign it to a class. Time range selection and classification are now available for video projects on both mobile and desktop. Request a demo with our team to see this tool in action, or follow this step-by-step guide to start using time range selection and classification today.
From the Centaur.AI dashboard:
From the mobile app:
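Under the hood, a classified time range is simply a labeled interval on a video's timeline. The sketch below shows one plausible way such annotations could look once exported for model development; the TimeRangeAnnotation structure and its field names are hypothetical illustrations, not Centaur.AI's actual export format or API.

```python
from dataclasses import dataclass

# Hypothetical sketch of a classified time range as it might appear in an
# export; the structure and field names are illustrative, not Centaur.AI's API.
@dataclass
class TimeRangeAnnotation:
    video_id: str
    start_sec: float  # range boundaries, selectable at 0.1-second resolution
    end_sec: float
    label: str        # the class assigned to this range

annotations = [
    TimeRangeAnnotation("case_001.mp4", 12.3, 47.8, "pre-operative"),
    TimeRangeAnnotation("case_001.mp4", 47.8, 52.1, "equipment-check"),
]

for a in annotations:
    print(f"{a.video_id}: {a.label} from {a.start_sec:.1f}s to {a.end_sec:.1f}s")
```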
We know firsthand from our customers that AI teams are finding creative ways to build models that leverage medical video. See how surgical video, patient monitoring footage, and telehealth consult recordings can benefit from time range selection and classification.
Surgical and diagnostic procedures are often recorded for both educational and documentation purposes, providing a detailed view into the surgical process, techniques, and patient outcomes. Time range selection for surgical video enables labelers to identify important phases in a surgery and the individual steps within those phases, e.g. the pre-operative phase and, within it, the step of checking that all required surgical equipment is present.
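Teams often formalize this phase/step hierarchy as a small taxonomy that labelers classify against. The mapping below is a hypothetical illustration of such a taxonomy; the phase and step names are examples, not a schema from our product.

```python
# Hypothetical phase -> step taxonomy for labeling surgical video;
# the phase and step names are illustrative, not a product schema.
SURGICAL_TAXONOMY = {
    "pre-operative": ["equipment-check", "patient-positioning"],
    "operative": ["incision", "resection", "closure"],
    "post-operative": ["instrument-count", "specimen-handling"],
}

def step_belongs_to_phase(phase: str, step: str) -> bool:
    """Validate that a labeled step falls under its parent phase."""
    return step in SURGICAL_TAXONOMY.get(phase, [])

assert step_belongs_to_phase("pre-operative", "equipment-check")
```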
Patient monitoring may bring to mind real-time vitals at the bedside, or remote monitoring via data feeds from wearable devices and mobile apps where patients can log exercise, nutrition, notes from therapy, and more. On top of this, AI teams are also building models based on video feeds of patient activity in homes or hospital rooms, with the goal of minimizing the risk of dangerous falls. Labelers can identify segments of video that capture high-risk patient behavior, hazardous home features (steep stairs!), or habits that increase the risk of falling, giving care teams the information they need to mitigate that risk.
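Once segments are classified, downstream code can pull out the risk-relevant ranges for review or model training. Here is a minimal sketch, assuming annotation records shaped like the hypothetical TimeRangeAnnotation above and illustrative class names of our own invention:

```python
# Minimal sketch: collect segments whose class indicates elevated fall risk.
# HIGH_RISK_LABELS and the annotation shape are illustrative assumptions.
HIGH_RISK_LABELS = {"unassisted-transfer", "steep-stairs", "unsteady-gait"}

def high_risk_segments(annotations):
    """Yield (start_sec, end_sec) for every segment labeled as high risk."""
    for a in annotations:
        if a.label in HIGH_RISK_LABELS:
            yield (a.start_sec, a.end_sec)
```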
Facial expressions and gestures give clinicians a sense of a patient’s emotions about a topic, helping them answer questions like “Does the patient understand what I just shared?”, “Is the patient excited to adopt this new lifestyle change?”, and “Is the patient apprehensive about this course of action?” Range selection enables labelers to find and classify these significant expressions so they can be used to inform care.
These use cases are just a preview of how our time range selection and classification tools can accelerate your video labeling projects. If you don’t see your use case listed, connect with our sales team and engineers to learn how your unique use case can benefit.