---
title: FetalCLIP
emoji: 👶
colorFrom: green
colorTo: blue
sdk: docker
pinned: false
license: apache-2.0
---
# FetalCLIP - Fetal Ultrasound Analysis

Foundation Model for Zero-Shot Fetal Ultrasound Analysis
## Features
- 🔬 View Classification: Classify ultrasound images into 13 anatomical views
- 📅 Gestational Age Estimation: Estimate gestational age from fetal brain ultrasounds
- 🏥 DICOM Support: Full preprocessing pipeline for medical DICOM files (see the preprocessing sketch after this list)
- 🖼️ PNG/JPEG Support: Basic preprocessing for standard image files
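
For reference, here is a minimal sketch of the kind of DICOM-to-image conversion such a preprocessing pipeline typically performs, using pydicom, NumPy, and Pillow. The Space's actual pipeline may apply additional steps (windowing, cropping, photometric-interpretation handling), so treat this as an illustration rather than the demo's exact code.

```python
# Illustrative DICOM-to-image conversion; not necessarily the Space's exact pipeline.
import numpy as np
import pydicom
from PIL import Image

def dicom_to_pil(path: str) -> Image.Image:
    """Read a DICOM file and return an 8-bit RGB PIL image."""
    ds = pydicom.dcmread(path)
    pixels = ds.pixel_array.astype(np.float32)

    # Ultrasound DICOMs are often multi-frame or RGB; keep the first frame
    # and collapse color channels to a single grayscale plane if needed.
    if pixels.ndim == 4:                                  # (frames, rows, cols, channels)
        pixels = pixels[0]
    if pixels.ndim == 3 and pixels.shape[-1] in (3, 4):   # (rows, cols, channels)
        pixels = pixels[..., :3].mean(axis=-1)
    elif pixels.ndim == 3:                                 # (frames, rows, cols)
        pixels = pixels[0]

    # Normalize intensities to [0, 255] for standard image preprocessing.
    pixels -= pixels.min()
    if pixels.max() > 0:
        pixels /= pixels.max()
    return Image.fromarray((pixels * 255).astype(np.uint8)).convert("RGB")
```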
## How to Use
1. Upload a fetal ultrasound image (PNG, JPEG, or DICOM)
2. Click "Classify View" to identify the anatomical plane
3. View the top predictions with confidence scores
## Model
This demo uses the FetalCLIP model, a vision-language foundation model trained on fetal ultrasound images.
- Model: numansaeed/fetalclip-model
- Architecture: ViT-L/14 based CLIP model
- Training: Contrastive learning on fetal ultrasound-text pairs
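
Because FetalCLIP follows the ViT-L/14 CLIP architecture, a checkpoint can in principle be loaded with the open_clip library and used to embed an ultrasound image, as sketched below. The checkpoint filename (`fetalclip.pt`) is an assumption for illustration; check the model repository for the actual files and any custom loading or config code.

```python
# Hedged sketch: load a ViT-L/14 CLIP checkpoint with open_clip and embed one image.
import torch
import open_clip
from huggingface_hub import hf_hub_download
from PIL import Image

ckpt = hf_hub_download("numansaeed/fetalclip-model", "fetalclip.pt")  # filename assumed
model, _, preprocess = open_clip.create_model_and_transforms("ViT-L-14", pretrained=ckpt)
tokenizer = open_clip.get_tokenizer("ViT-L-14")
model.eval()

# Preprocess a single ultrasound frame and compute its normalized embedding.
image = preprocess(Image.open("scan.png").convert("RGB")).unsqueeze(0)
with torch.no_grad():
    image_features = model.encode_image(image)
    image_features /= image_features.norm(dim=-1, keepdim=True)
```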
## Supported Views
- Fetal abdomen
- Fetal brain (transventricular)
- Fetal brain (transthalamic)
- Fetal brain (transcerebellar)
- Fetal femur
- Fetal heart (4-chamber)
- Fetal heart (LVOT)
- Fetal heart (RVOT)
- Fetal heart (3VV)
- Fetal kidney
- Fetal face (lips)
- Fetal spine (coronal)
- Fetal spine (sagittal)
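
Continuing the loading sketch above, zero-shot view classification amounts to embedding one text prompt per supported view and taking a softmax over image-text similarities. The prompt template and label phrasings below are illustrative assumptions; the prompts used by this demo may differ.

```python
# Zero-shot view classification sketch; reuses `torch`, `model`, `tokenizer`,
# and `image_features` from the loading sketch above. Prompt wording is assumed.
views = [
    "fetal abdomen", "fetal brain, transventricular plane",
    "fetal brain, transthalamic plane", "fetal brain, transcerebellar plane",
    "fetal femur", "fetal heart, four-chamber view",
    "fetal heart, left ventricular outflow tract",
    "fetal heart, right ventricular outflow tract",
    "fetal heart, three-vessel view", "fetal kidney",
    "fetal face, lips", "fetal spine, coronal plane",
    "fetal spine, sagittal plane",
]
text = tokenizer([f"An ultrasound image of the {v}." for v in views])
with torch.no_grad():
    text_features = model.encode_text(text)
    text_features /= text_features.norm(dim=-1, keepdim=True)
    # Cosine similarities scaled by 100, then softmax -> per-view probabilities.
    probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)[0]

# Print the top-3 predicted views with confidence scores.
for p, v in sorted(zip(probs.tolist(), views), reverse=True)[:3]:
    print(f"{v}: {p:.1%}")
```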
## Citation
If you use this model, please cite:
```bibtex
@article{maani2025fetalclip,
  title={FetalCLIP: A Visual-Language Foundation Model for Fetal Ultrasound Image Analysis},
  author={Maani, Fadillah and Saeed, Numan and Saleem, Tausifa and Farooq, Zaid and Alasmawi, Hussain and Diehl, Werner and Mohammad, Ameera and Waring, Gareth and Valappi, Saudabi and Bricker, Leanne and Yaqub, Mohammad},
  journal={arXiv preprint arXiv:2502.14807},
  year={2025}
}
```