---
license: cc-by-nc-nd-4.0
task_categories:
- question-answering
language:
- en
tags:
- medical
size_categories:
- 1K<n<10K
---

# U2-BENCH: Ultrasound Understanding Benchmark

**U2-BENCH** is the **first large-scale benchmark for evaluating Large Vision-Language Models (LVLMs) on ultrasound imaging understanding**. It provides a diverse, multi-task dataset curated from **40 licensed sources**, covering **15 anatomical regions** and **8 clinically inspired tasks** across classification, detection, regression, and text generation.

### Check the 🌟 Leaderboard 🌟 here: https://dolphin-sound.github.io/u2-bench/

### Evaluation code released!
Two options:
1. Built on VLMEvalKit: https://github.com/dolphin-sound/u2-bench-evalkit
2. Developed by our intern Yalun: https://github.com/gurenolun/Dolphin-ai-bench

---

## 📂 Dataset Structure

The dataset is organized into **8 folders**, each corresponding to one benchmark task:

- `caption_generation/`
- `clinical_value_estimation/`
- `disease_diagnosis/`
- `keypoint_detection/`
- `lesion_localisation/`
- `organ_detection/`
- `report_generation/`
- `view_recognition_and_assessment/`

Each folder contains `.tsv` files with task-specific annotations. A shared file, [`an_explanation_of_the_columns.tsv`](./an_explanation_of_the_columns.tsv), maps each column to its meaning.

---

## 📄 Data Format

The dataset is provided as `.tsv` files, where:

- `img_data` contains a **base64-encoded image** (typically a 2D frame from an ultrasound video).
- Each row corresponds to a **single sample**.
- Columns include task-specific fields such as:
  - `dataset_name`, `anatomy_location`, `classification_task`
  - `caption`, `report`, `class_label`, `measurement`, `gt_bbox`, `keypoints`, etc.

A full explanation is provided in [`an_explanation_of_the_columns.tsv`](./an_explanation_of_the_columns.tsv).
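Since each row stores its image as a base64 string, a small decoding step is needed before the frames can be viewed or fed to a model. The sketch below shows one way to do this with the Python standard library, assuming the `img_data` and `dataset_name` column names described above; the in-memory TSV and its placeholder PNG bytes are purely illustrative stand-ins for the real task files.

```python
import base64
import csv
import io

def decode_image(row: dict) -> bytes:
    """Decode the base64-encoded ultrasound frame stored in `img_data`."""
    return base64.b64decode(row["img_data"])

# Minimal demo with an in-memory TSV; real files live in the task folders
# (e.g. disease_diagnosis/*.tsv). The PNG header bytes are a placeholder.
tsv_text = (
    "dataset_name\timg_data\n"
    "demo\t" + base64.b64encode(b"\x89PNG...").decode("ascii") + "\n"
)
rows = list(csv.DictReader(io.StringIO(tsv_text), delimiter="\t"))
image_bytes = decode_image(rows[0])
# The raw bytes can then be opened with Pillow, for example:
#   from PIL import Image; Image.open(io.BytesIO(image_bytes))
```

The same pattern applies to every task folder, since all eight share the TSV layout documented in `an_explanation_of_the_columns.tsv`.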

---

## 🔬 Tasks

U2-BENCH includes 8 core tasks:

| Capability     | Task Name                    | Description                                     |
|----------------|------------------------------|-------------------------------------------------|
| Classification | Disease Diagnosis (DD)       | Predict clinical diagnosis from ultrasound      |
| Classification | View Recognition (VRA)       | Classify standard views in sonography           |
| Detection      | Lesion Localization (LL)     | Locate lesions with spatial classification      |
| Detection      | Organ Detection (OD)         | Identify presence of anatomical structures      |
| Detection      | Keypoint Detection (KD)      | Predict anatomical landmarks (e.g. biometry)    |
| Regression     | Clinical Value Estimation    | Estimate scalar metrics (e.g., fat %, EF)       |
| Generation     | Report Generation            | Produce structured clinical ultrasound reports  |
| Generation     | Caption Generation           | Generate brief anatomical image descriptions    |

---

## 📊 Dataset Statistics

- **Total samples**: 7,241
- **Anatomies**: 15 (e.g., thyroid, fetus, liver, breast, heart, lung)
- **Application scenarios**: 50 across tasks
- **Multi-task support**: Some samples contain multiple labels (e.g., classification + regression)

---

## 🛡️ Ethics, License & Use

- The dataset is distributed under the **Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0)** license.
- For **non-commercial research and evaluation only**.
- Data is derived from **licensed and publicly available ultrasound datasets**.
- All images are de-identified, and annotations were manually validated.
- **Do not use** this dataset for diagnostic or clinical deployment without regulatory approval.

---

## 📦 Loading from Hugging Face

You can load the dataset using 🤗 Datasets:

```python
from datasets import load_dataset

dataset = load_dataset("DolphinAI/u2-bench", split="train")
```

---

## 📚 Citation

If you use this benchmark in your research, please cite:

```bibtex
@article{le2025u2bench,
  title={U2-BENCH: Benchmarking Large Vision-Language Models on Ultrasound Understanding},
  author={Le, Anjie and Liu, Henan and others},
  journal={Under Review},
  year={2025}
}
```

---

## 🔧 Contributions

We welcome community contributions and evaluation scripts.
Please open a pull request or contact Dolphin AI for collaboration.