---
title: Echotracker
emoji: π
colorFrom: gray
colorTo: green
sdk: gradio
sdk_version: 6.12.0
app_file: app.py
pinned: false
license: mit
short_description: Run EchoTracker instantly on custom or provided videos.
---
EchoTracker
Advancing Myocardial Point Tracking in Echocardiography
EchoTracker is an interactive demo for tracking user-selected points on cardiac tissue across echocardiography video sequences. It was presented at MICCAI 2024 and demonstrates strong generalisation across cardiac views and scanner types, including out-of-distribution settings not seen during training.
The model weights used in this demo are from the follow-up work "Taming Modern Point Tracking for Speckle Tracking Echocardiography via Impartial Motion" (ICCV 2025 Workshop), which further advances robustness and accuracy for speckle tracking in echocardiography.
Demo
Try the live demo on Hugging Face Spaces: EchoTracker Space
Features
- Interactive point selection: click directly on a video frame to place up to 100 tracking points on cardiac structures (e.g. LV/RV walls, myocardium)
- Frame navigation: scrub through frames with a slider to choose the optimal query frame (end-diastolic recommended)
- Multi-view support: handles A4C (apical 4-chamber), RV (right ventricle), and PSAX (parasternal short-axis) views
- Out-of-distribution (OOD) generalisation: tested on scanner types and views not seen during training
- Faded trajectory visualisation: output video overlays colour-coded tracks with fade-trail rendering (see the sketch after this list)
- Built-in examples: four bundled clips (A4C, A4C OOD, RV OOD, PSAX OOD) for instant testing
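As a rough illustration of the fade-trail rendering, here is a minimal sketch using OpenCV and NumPy. It is not the Space's utils.py implementation; the function name, trail length, and colour scheme are illustrative assumptions.

```python
import cv2
import numpy as np

def draw_fading_tracks(frames, tracks, trail_len=12):
    """Overlay colour-coded point tracks with a fading trail (illustrative sketch).

    frames: list of H x W x 3 uint8 BGR frames
    tracks: array of shape [T, N, 2] with (x, y) pixel coordinates per frame
    trail_len: number of past frames kept in the fading trail (assumed value)
    """
    T, N, _ = tracks.shape
    # One fixed colour per point, spread over the HSV hue wheel.
    colours = [
        tuple(int(c) for c in cv2.cvtColor(
            np.uint8([[[int(180 * i / max(N, 1)), 255, 255]]]), cv2.COLOR_HSV2BGR)[0, 0])
        for i in range(N)
    ]
    out = []
    for t, frame in enumerate(frames):
        canvas = frame.copy()
        for n in range(N):
            # Draw the recent trail; older segments are blended more weakly.
            for k in range(max(0, t - trail_len), t):
                alpha = (k - (t - trail_len)) / trail_len
                p0 = tuple(np.round(tracks[k, n]).astype(int))
                p1 = tuple(np.round(tracks[k + 1, n]).astype(int))
                overlay = canvas.copy()
                cv2.line(overlay, p0, p1, colours[n], 2)
                canvas = cv2.addWeighted(overlay, alpha, canvas, 1 - alpha, 0)
            # Current point position.
            cv2.circle(canvas, tuple(np.round(tracks[t, n]).astype(int)), 3, colours[n], -1)
        out.append(canvas)
    return out
```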
How to Use
- Load a video: upload your own echocardiography clip or click one of the provided example thumbnails.
- Navigate to the query frame: use the frame slider to find the desired starting frame. The end-diastolic frame is recommended for best results.
- Place tracking points: click anywhere on the frame image to add a point. Up to 100 points are supported per run.
- Adjust selection: use Undo to remove the last point or Clear All to start over.
- Run the tracker: press ▶ Run EchoTracker to generate trajectories for all selected points.
- View output: the annotated video with colour-coded tracks appears in the output player.
Tip: Points are stored as (x, y) pixel coordinates on the original frame and are automatically rescaled to the model's 256 × 256 input resolution.
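In practice this rescaling is just a per-axis scale factor. The sketch below is a hedged example with hypothetical names, not the exact helper used in app.py.

```python
import numpy as np

def rescale_points(points_xy, orig_w, orig_h, model_size=256):
    """Map (x, y) clicks on the original frame to the model's square input grid.

    points_xy: array-like of shape [N, 2] in original-frame pixel coordinates.
    Returns an [N, 2] float array in model (256 x 256) coordinates.
    """
    pts = np.asarray(points_xy, dtype=np.float32)
    scale = np.array([model_size / orig_w, model_size / orig_h], dtype=np.float32)
    return pts * scale

# Example with made-up numbers: a click at (412, 230) on a 636 x 434 frame.
print(rescale_points([[412, 230]], orig_w=636, orig_h=434))
```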
Running Locally
Prerequisites
- Python 3.10+
- A CUDA-capable GPU (optional but recommended; CPU inference is supported)
Installation
git clone https://github.com/riponazad/echotracker.git
cd echotracker
pip install gradio torch opencv-python-headless numpy Pillow mediapy scikit-image
Launch
python app.py
The Gradio interface will be available at http://localhost:7860.
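For orientation, the snippet below is a minimal sketch of the kind of Gradio wiring app.py uses: a video input, collected click points, a run button, and a video output. The layout and the run_tracker placeholder are assumptions; the real app also includes the frame slider, undo/clear buttons, and bundled examples.

```python
import gradio as gr

def run_tracker(video_path, points):
    # Placeholder: load frames, run the TorchScript model, render the overlay video,
    # and return the path of the output clip. See the sketches in the sections below.
    return None

with gr.Blocks() as demo:
    video_in = gr.Video(label="Echocardiography clip")
    points_state = gr.State([])          # (x, y) clicks collected from the frame image
    run_btn = gr.Button("Run EchoTracker")
    video_out = gr.Video(label="Tracked output")
    run_btn.click(run_tracker, inputs=[video_in, points_state], outputs=video_out)

demo.launch()  # serves the UI on http://localhost:7860 by default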
Model Weights
The pre-trained TorchScript model (echotracker_cvamd_ts.pt) must be present in the project root. It is included in this repository/Space and loaded automatically at startup.
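Loading a TorchScript checkpoint is a one-liner with torch.jit.load. The sketch below assumes the file sits in the working directory and picks CUDA when available.

```python
import torch

# Load the TorchScript checkpoint once at startup; file name as shipped with this Space.
device = "cuda" if torch.cuda.is_available() else "cpu"
model = torch.jit.load("echotracker_cvamd_ts.pt", map_location=device)
model.eval()
```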
Repository Structure
echotracker/
├── app.py                    # Gradio application and UI
├── utils.py                  # Point-to-tensor conversion and tracking visualisation
├── echotracker_cvamd_ts.pt   # Pre-trained TorchScript model weights
├── example_samples/          # Bundled example echocardiography clips
│   ├── input1.mp4            # A4C view
│   ├── input2.mp4            # A4C view (OOD)
│   ├── input3_RV.mp4         # RV view (OOD)
│   └── psax_video_crop.mp4   # PSAX view (OOD)
└── outputs/                  # Saved tracking output videos
Technical Details
| Property | Value |
|---|---|
| Model format | TorchScript (.pt) |
| Input resolution | 256 × 256 (grayscale) |
| Max tracking points | 100 |
| Output video FPS | 25 |
| Supported views | A4C, RV, PSAX |
| Device | CUDA (auto) or CPU |
The tracker receives a batch of grayscale frames of shape [B, T, 1, H, W] and a set of query points [B, N, 3] (frame index, x, y). It returns per-point trajectories that are denormalised and overlaid on the original-resolution frames.
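The sketch below illustrates that input/output contract. The tensor shapes follow the description above, but the exact call signature of the scripted model and its output format are assumptions, so treat it as orientation rather than a drop-in script.

```python
import numpy as np
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
model = torch.jit.load("echotracker_cvamd_ts.pt", map_location=device).eval()

# T grayscale frames resized to 256 x 256, stacked as [B, T, 1, H, W] in [0, 1].
frames = np.random.rand(32, 256, 256).astype(np.float32)       # stand-in for a real clip
video = torch.from_numpy(frames)[None, :, None].to(device)     # [1, T, 1, 256, 256]

# N query points as (frame index, x, y) in model coordinates: [B, N, 3].
queries = torch.tensor([[[0, 128.0, 96.0], [0, 140.0, 110.0]]], device=device)

with torch.no_grad():
    tracks = model(video, queries)                              # assumed call signature

# tracks: per-point trajectories in 256 x 256 coordinates, to be rescaled back
# to the original frame resolution before overlay rendering.
```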
Citation
If you use EchoTracker or the model weights in this demo, please cite both papers:
@InProceedings{azad2024echotracker,
author = {Azad, Md Abulkalam and Chernyshov, Artem and Nyberg, John
and Tveten, Ingrid and Lovstakken, Lasse and Dalen, H{\aa}vard
and Grenne, Bj{\o}rnar and {\O}stvik, Andreas},
title = {EchoTracker: Advancing Myocardial Point Tracking in Echocardiography},
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
doi = {10.1007/978-3-031-72083-3_60}
}
@InProceedings{Azad_2025_ICCV,
author = {Azad, Md Abulkalam and Nyberg, John and Dalen, H{\aa}vard
and Grenne, Bj{\o}rnar and Lovstakken, Lasse and {\O}stvik, Andreas},
title = {Taming Modern Point Tracking for Speckle Tracking Echocardiography via Impartial Motion},
booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops},
month = {October},
year = {2025},
pages = {1115--1124}
}
Authors
Md Abulkalam Azad, Artem Chernyshov, John Nyberg, Ingrid Tveten, Lasse Lovstakken, Håvard Dalen, Bjørnar Grenne, Andreas Østvik
License
This project is licensed under the MIT License.
Note: The bundled example echocardiography clips are provided for demonstration purposes only and should not be downloaded, reproduced, or used outside this demo.
Support This Work
If you find EchoTracker useful, please consider clicking the Like button at the top of this Space. It helps others discover this work and supports continued open research in cardiac image analysis.