# Atrio Stenosis Detection Model
A YOLOv8-based object detection model for identifying suspected stenosis regions in coronary angiography frames. This model is the AI core of Atrio, a clinical workflow platform that helps cardiologists review angiography studies faster and with less cognitive load.
The model takes individual frames extracted from DICOM cine sequences and outputs bounding boxes with confidence scores around suspected stenosis regions. It is designed to operate within a human-in-the-loop workflow: a cardiologist reviews and approves every flagged finding before any action is taken.
## Intended Use

### Primary Use
- Assistive frame-level stenosis flagging in coronary angiography studies
- Used within the Atrio platform as part of a supervised, human-in-the-loop review workflow
- Surfaces high-priority frames for cardiologist review, reducing manual scrubbing through hundreds of cine frames
- All AI-flagged findings require explicit cardiologist approval before being accepted
### Out-of-Scope Use
- Not intended for autonomous clinical diagnosis of any kind
- Not validated for imaging modalities outside of coronary angiography
- Not a replacement for cardiologist review or clinical judgment
- Not intended for pediatric cardiac imaging or non-cardiac vascular imaging
- Not suitable for use as a sole basis for treatment decisions
## Model Details
| Property | Details |
|---|---|
| Model Type | Object Detection |
| Architecture | YOLOv8 |
| Input | Angiography frames (JPEG/PNG extracted from DICOM cine sequences) |
| Output | Bounding boxes, class labels, confidence scores |
| Framework | Ultralytics YOLOv8 |
| Task | Stenosis region detection |
| Repo | rachitgoyell/stenosis-detection |
## How to Use

### Installation

```bash
pip install ultralytics huggingface_hub
```
### Inference

```python
from huggingface_hub import hf_hub_download
from ultralytics import YOLO

# Download model weights from Hugging Face
model_path = hf_hub_download(
    repo_id="rachitgoyell/stenosis-detection",
    filename="best.pt",
)

# Load the model
model = YOLO(model_path)

# Run inference on an angiography frame
results = model("frame.jpg", conf=0.25)

# Print detections
for result in results:
    print("Bounding boxes:", result.boxes.xyxy)
    print("Confidence scores:", result.boxes.conf)
    print("Class labels:", result.boxes.cls)

# Visualize results
results[0].show()
```
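The detections above can be turned into a simple, prioritized payload for cardiologist review. A minimal sketch in pure Python; `to_findings` is a hypothetical helper for illustration, not part of the Atrio API, and the synthetic boxes and scores stand in for `result.boxes` values:

```python
def to_findings(boxes, scores, min_conf=0.25):
    """Pair xyxy boxes with confidence scores, keeping only detections
    at or above the confidence threshold."""
    findings = []
    for (x1, y1, x2, y2), conf in zip(boxes, scores):
        if conf >= min_conf:
            findings.append({
                "bbox": [x1, y1, x2, y2],
                "confidence": round(conf, 3),
            })
    # Highest-confidence findings first, for prioritized review
    return sorted(findings, key=lambda f: f["confidence"], reverse=True)

# Example with synthetic detections: only the 0.41 box clears the threshold
boxes = [(120, 80, 180, 140), (30, 30, 60, 55)]
scores = [0.41, 0.18]
print(to_findings(boxes, scores))
```

Sorting by confidence mirrors how a findings inbox would surface the highest-priority frames first.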
### Saving Annotated Output

```python
# Save annotated frame to disk
results[0].save(filename="annotated_frame.jpg")
```
### Batch Inference on Extracted Frames

```python
from pathlib import Path

frames_dir = "path/to/extracted/frames"
frame_paths = list(Path(frames_dir).glob("*.jpg"))

# Ensure the output directory exists before saving annotated frames
Path("output").mkdir(exist_ok=True)

results = model(frame_paths, conf=0.25, batch=8)
for i, result in enumerate(results):
    result.save(filename=f"output/frame_{i}_annotated.jpg")
```
## Training Details

### Dataset
- Source: Coronary angiography cine sequences extracted from DICOM studies
- Annotations: Bounding boxes drawn around suspected stenosis regions by domain experts
- Frame extraction: Individual frames sampled from cine sequences at uniform intervals
- Format: YOLOv8-compatible (images + YOLO-format .txt label files)
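In the YOLO label format, each `.txt` file holds one line per box: a class index followed by the box center, width, and height, all normalized to the frame dimensions. A minimal parser sketch (the example line is illustrative, not from the dataset):

```python
def parse_yolo_label(line):
    """Parse one YOLO-format label line into a class id and a
    normalized (x_center, y_center, width, height) box."""
    parts = line.split()
    cls = int(parts[0])
    x_c, y_c, w, h = map(float, parts[1:5])
    return cls, (x_c, y_c, w, h)

# A single stenosis box centered slightly left of frame center
cls, box = parse_yolo_label("0 0.42 0.55 0.10 0.08")
print(cls, box)  # 0 (0.42, 0.55, 0.1, 0.08)
```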
### Preprocessing
- DICOM to JPEG frame extraction using pydicom
- Contrast normalization applied per frame
- Frames resized to 640x640 for training
- Uniform frame sampling from cine sequences to reduce redundancy
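Uniform frame sampling reduces to index selection over the cine sequence. A minimal sketch in pure Python; the actual sampling stride used during training is not published, so `step` here is illustrative:

```python
def sample_frame_indices(n_frames, step):
    """Pick every `step`-th frame index from a cine sequence of
    n_frames frames, starting at frame 0."""
    return list(range(0, n_frames, step))

# A 60-frame cine run sampled every 5 frames yields 12 indices
print(sample_frame_indices(60, 5))  # [0, 5, 10, ..., 55]
```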
### Augmentation
- Horizontal flip
- Brightness and contrast jitter
- Mosaic augmentation (YOLOv8 default)
- Random scaling and translation
### Training Configuration
| Parameter | Value |
|---|---|
| Epochs | 100 |
| Image Size | 640 |
| Optimizer | AdamW |
| Batch Size | 16 |
| Confidence Threshold | 0.25 |
| IoU Threshold | 0.45 |
| Hardware | GPU (CUDA) |
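The parameters above map directly onto Ultralytics training arguments. A hypothetical args file sketching that mapping (not the original configuration; remaining hyperparameters are left at YOLOv8 defaults):

```yaml
# Hypothetical Ultralytics train-args sketch, values from the table above
epochs: 100
imgsz: 640
optimizer: AdamW
batch: 16
```

Note that the confidence (0.25) and IoU (0.45) thresholds in the table apply at inference/NMS time rather than during training.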
## Evaluation
Metrics reported on the internal validation split. External prospective validation is ongoing.
| Metric | Value |
|---|---|
| mAP50 | 0.72 |
| mAP50-95 | 0.48 |
| Precision | 0.74 |
| Recall | 0.69 |
These numbers reflect performance on the internal held-out validation set and should not be interpreted as clinically validated performance figures. Independent external validation has not yet been completed.
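At this operating point, the implied F1 score follows directly from the reported precision and recall:

```python
# F1 is the harmonic mean of precision and recall
precision, recall = 0.74, 0.69
f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 3))  # 0.714
```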
## Limitations and Bias
- Model performance may vary across different angiography equipment, imaging protocols, and contrast injection techniques
- Trained on a limited dataset that may not be representative of all patient populations, demographics, or disease presentations
- Detection confidence is reduced in heavily calcified vessels, overlapping artery segments, and low-contrast frames
- The model has not been tested on data from all major angiography system manufacturers
- Foreshortening and non-standard projection angles may reduce detection accuracy
- Should not be used as a sole or primary basis for any clinical decision
- External prospective validation has not yet been completed
## Clinical Disclaimer
This model is a research and assistive tool developed as part of the Atrio platform.
- It is not FDA-cleared, CE-marked, or approved by any regulatory body for clinical use
- It is not intended for standalone clinical deployment
- All model outputs must be reviewed and approved by a qualified cardiologist before being acted upon
- The Atrio platform enforces a human-in-the-loop approval step: no AI finding is accepted without explicit doctor confirmation
- Clinical deployment of this model in any setting requires independent validation and appropriate regulatory clearance
## Model Files

| File | Description |
|---|---|
| `best.pt` | Trained YOLOv8 model weights (best checkpoint) |
| `data.yaml` | Dataset class configuration file |
| `README.md` | Model card |
## About Atrio

Atrio is an AI-assisted clinical workflow platform for cardiologists. It transforms raw DICOM angiography cine images into prioritized findings, visual evidence, and structured reports, while keeping the doctor fully in control at every step.
This model powers the core detection pipeline within Atrio. It is one component of a larger system that includes a findings inbox, visual evidence panel, voice input during procedures, risk stratification, and automated report generation.
Atrio helps cardiologists go from raw angiography scans to final reports in minutes instead of hours, without changing how they work.
## License
This model is released under the MIT License.
This model is intended for research and assistive use only. Clinical deployment requires independent validation, institutional review, and applicable regulatory approvals. The authors make no warranties regarding the clinical accuracy or safety of this model.