|
|
--- |
|
|
license: cc-by-nc-nd-4.0 |
|
|
task_categories: |
|
|
- question-answering |
|
|
language: |
|
|
- en |
|
|
tags: |
|
|
- medical |
|
|
size_categories: |
|
|
- 1K<n<10K |
|
|
--- |
|
|
|
|
|
# U2-BENCH: Ultrasound Understanding Benchmark |
|
|
|
|
|
**U2-BENCH** is the **first large-scale benchmark for evaluating Large Vision-Language Models (LVLMs) on ultrasound imaging understanding**. It provides a diverse, multi-task dataset curated from **40 licensed sources**, covering **15 anatomical regions** and **8 clinically inspired tasks** across classification, detection, regression, and text generation. |
|
|
|
|
|
### Check the 🌟 Leaderboard 🌟 here: https://dolphin-sound.github.io/u2-bench/
|
|
|
|
|
### Evaluation code released!

Two options are available:

1. Built on VLMEvalKit: https://github.com/dolphin-sound/u2-bench-evalkit

2. Developed by our intern Yalun: https://github.com/gurenolun/Dolphin-ai-bench
|
|
|
|
|
--- |
|
|
|
|
|
## 📂 Dataset Structure |
|
|
|
|
|
The dataset is organized into **8 folders**, each corresponding to one benchmark task: |
|
|
|
|
|
- `caption_generation/` |
|
|
- `clinical_value_estimation/` |
|
|
- `disease_diagnosis/` |
|
|
- `keypoint_detection/` |
|
|
- `lesion_localisation/` |
|
|
- `organ_detection/` |
|
|
- `report_generation/` |
|
|
- `view_recognition_and_assessment/` |
|
|
|
|
|
Each folder contains `.tsv` files with task-specific annotations. A shared file, [`an_explanation_of_the_columns.tsv`](./an_explanation_of_the_columns.tsv), maps each column to its meaning. |
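As a quick orientation, the sketch below loads one task's annotation file with pandas. The exact `.tsv` filename inside each folder is an assumption here; use whatever files you find in the folder you downloaded.

```python
import pandas as pd

# Hypothetical path: substitute the actual .tsv file shipped in the task folder.
df = pd.read_csv("disease_diagnosis/disease_diagnosis.tsv", sep="\t")

print(df.columns.tolist())  # task-specific columns, documented in an_explanation_of_the_columns.tsv
print(len(df))              # one row per sample
```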
|
|
|
|
|
--- |
|
|
|
|
|
## 📄 Data Format |
|
|
|
|
|
The dataset is provided as `.tsv` files, where: |
|
|
|
|
|
- `img_data` contains a **base64-encoded image** (typically a 2D frame from an ultrasound video). |
|
|
- Each row corresponds to a **single sample**. |
|
|
- Columns include task-specific fields such as: |
|
|
- `dataset_name`, `anatomy_location`, `classification_task` |
|
|
- `caption`, `report`, `class_label`, `measurement`, `gt_bbox`, `keypoints`, etc. |
|
|
|
|
|
A full explanation is provided in [`an_explanation_of_the_columns.tsv`](./an_explanation_of_the_columns.tsv). |
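For example, here is a minimal sketch of decoding the base64-encoded `img_data` field into a PIL image; it assumes the `df` DataFrame from the pandas snippet above:

```python
import base64
import io

from PIL import Image

row = df.iloc[0]  # any single sample
img = Image.open(io.BytesIO(base64.b64decode(row["img_data"])))
img.save("sample.png")  # inspect the decoded 2D frame
```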
|
|
|
|
|
--- |
|
|
|
|
|
## 🔬 Tasks |
|
|
|
|
|
U2-BENCH includes 8 core tasks: |
|
|
|
|
|
| Capability | Task Name | Description |
|----------------|---------------------------------------|-------------------------------------------------------------------|
| Classification | Disease Diagnosis (DD) | Predict a clinical diagnosis from ultrasound images |
| Classification | View Recognition & Assessment (VRA) | Classify standard sonographic views and assess their quality |
| Detection | Lesion Localization (LL) | Locate lesions via spatial classification |
| Detection | Organ Detection (OD) | Identify the presence of anatomical structures |
| Detection | Keypoint Detection (KD) | Predict anatomical landmarks (e.g., for biometry) |
| Regression | Clinical Value Estimation (CVE) | Estimate scalar metrics (e.g., fat percentage, ejection fraction) |
| Generation | Report Generation (RG) | Produce structured clinical ultrasound reports |
| Generation | Caption Generation (CG) | Generate brief anatomical image descriptions |
|
|
|
|
|
--- |
|
|
|
|
|
## 📊 Dataset Statistics |
|
|
|
|
|
- **Total samples**: 7,241 |
|
|
- **Anatomies**: 15 (e.g., thyroid, fetus, liver, breast, heart, lung) |
|
|
- **Application scenarios**: 50, spanning the 8 tasks
|
|
- **Multi-task support**: Some samples contain multiple labels (e.g., classification + regression) |
|
|
|
|
|
--- |
|
|
|
|
|
## 🛡️ Ethics, License & Use |
|
|
|
|
|
- The dataset is distributed under the **Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0)** license. |
|
|
- For **non-commercial research and evaluation only**. |
|
|
- Data is derived from **licensed and publicly available ultrasound datasets**. |
|
|
- All images are de-identified, and annotations were manually validated. |
|
|
- **Do not use** this dataset for diagnostic or clinical deployment without regulatory approval. |
|
|
|
|
|
--- |
|
|
|
|
|
## 📦 Loading from Hugging Face |
|
|
|
|
|
You can load the dataset using 🤗 Datasets: |
|
|
|
|
|
```python |
|
|
from datasets import load_dataset |
|
|
|
|
|
dataset = load_dataset("DolphinAI/u2-bench", split="train") |
|
|
``` |
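Once loaded, rows can be filtered by their documented columns. A hypothetical example (the column name `anatomy_location` is described in the Data Format section above; the value `"thyroid"` is illustrative and assumes labels match the anatomy names listed earlier):

```python
thyroid = dataset.filter(lambda x: x["anatomy_location"] == "thyroid")
print(len(thyroid))
```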
|
|
|
|
|
--- |
|
|
|
|
|
## 📚 Citation |
|
|
|
|
|
If you use this benchmark in your research, please cite: |
|
|
|
|
|
```bibtex |
|
|
@article{le2025u2bench, |
|
|
title={U2-BENCH: Benchmarking Large Vision-Language Models on Ultrasound Understanding}, |
|
|
author={Le, Anjie and Liu, Henan and others}, |
|
|
journal={Under Review}, |
|
|
year={2025} |
|
|
} |
|
|
``` |
|
|
|
|
|
--- |
|
|
|
|
|
## 🔧 Contributions |
|
|
|
|
|
We welcome community contributions and evaluation scripts. |
|
|
Please open a pull request or contact Dolphin AI for collaboration. |
|
|
|