Commit fe110ef (verified) by Saint-lsy · parent: 71a8629

Update README.md

Files changed (1): README.md (+64 −53)
---
license: cc-by-sa-3.0
tags:
- medical
language:
- en
task_categories:
- question-answering
configs:
- config_name: EndoBench
  data_files:
  - split: test
    path: EndoBench.tsv
---
# <div align="center"><b> EndoBench </b></div>

[🍎 **Homepage**](https://cuhk-aim-group.github.io/EndoBench.github.io/) | [💻 **GitHub**](https://github.com/CUHK-AIM-Group/EndoBench) | [**🤗 Dataset**](https://huggingface.co/datasets/Saint-lsy/EndoBench/) | [**📖 Paper**]()

This repository is the official implementation of the paper **EndoBench: A Comprehensive Evaluation of Multi-Modal Large Language Models for Endoscopy Analysis**.

## ☀️ Tutorial

EndoBench is a comprehensive MLLM evaluation framework spanning 4 endoscopy scenarios and 12 clinical tasks with 12 secondary subtasks that mirror the progression of endoscopic examination workflows. Featuring five levels of visual-prompting granularity to assess region-specific understanding, EndoBench contains 6,832 clinically validated VQA pairs derived from 22 endoscopy datasets. This structure enables precise measurement of MLLMs' clinical perception, diagnostic accuracy, and spatial comprehension across diverse endoscopic scenarios.

Our dataset construction involves collecting 20 public and 1 private endoscopy datasets and standardizing their QA pairs, yielding **446,535** VQA pairs that comprise our EndoVQA-Instruct dataset, currently the largest endoscopic instruction-tuning collection.
From EndoVQA-Instruct, we extract representative pairs that undergo rigorous clinical review, resulting in our final EndoBench of 6,832 clinically validated VQA pairs.

We split **EndoVQA-Instruct** and provide two datasets:

1. EndoVQA-Instruct-trainval, which includes **439,703** VQA pairs. We provide a .json file containing the original image paths; you can download the source datasets according to your needs. The private WCE2025 dataset is available upon request.

2. EndoBench, which contains **6,832** rigorously validated VQA pairs. We provide two versions: `EndoBench.json` and `EndoBench.tsv`. Each data entry in `EndoBench.json` corresponds to an image in `EndoBench-Images.zip`, while `EndoBench.tsv` stores the images in base64 format.

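As a quick sanity check, the TSV version can be read with pandas and its base64-encoded image cells decoded back to raw bytes. This is a minimal sketch: the column names (`index`, `question`, `answer`, `image`) are assumptions based on the usual VLMEvalKit TSV layout, not confirmed by this card, and the demo uses an in-memory stand-in for `EndoBench.tsv`.

```python
# Sketch: reading a VLMEvalKit-style TSV and decoding a base64 image cell.
# Column names are assumptions; substitute the real EndoBench.tsv path/columns.
import base64
import io

import pandas as pd


def decode_image_bytes(b64_string: str) -> bytes:
    """Decode one base64-encoded image cell into raw image bytes."""
    return base64.b64decode(b64_string)


# In-memory stand-in for EndoBench.tsv (one fake row with fake image bytes):
raw = base64.b64encode(b"\x89PNG fake image bytes").decode("ascii")
tsv = "index\tquestion\tanswer\timage\n0\tWhat organ is shown?\tA\t" + raw

df = pd.read_csv(io.StringIO(tsv), sep="\t")
img_bytes = decode_image_bytes(df.loc[0, "image"])
print(len(df), df.loc[0, "question"], len(img_bytes))
```

For the real file, replace the `io.StringIO(tsv)` stand-in with the path to `EndoBench.tsv`.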
## Evaluation
This project is built upon **VLMEvalKit**. To get started:

1. Visit the [VLMEvalKit Quickstart Guide](https://github.com/open-compass/VLMEvalKit/blob/main/docs/en/get_started/Quickstart.md) for installation instructions, or run the following commands for a quick start:
```bash
git clone https://github.com/CUHK-AIM-Group/EndoBench.git
cd EndoBench
pip install -e .
```

2. You can evaluate your model with the following command:
```bash
python run.py --data EndoBench --model Your_model_name
```
**Demo**: Qwen2.5-VL-7B-Instruct on EndoBench, inference only:
```bash
python run.py --data EndoBench --model Qwen2.5-VL-7B-Instruct --mode infer
```

3. You can find more details in the [ImageMCQDataset class](https://github.com/open-compass/VLMEvalKit/blob/main/vlmeval/dataset/image_mcq.py).
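To illustrate how an MCQ-style entry becomes a model prompt, here is a self-contained sketch loosely mirroring what an `ImageMCQDataset`-style loader does with one row. The field names (`question`, `A`–`D`, `answer`) and the instruction sentence are assumptions for illustration, not the exact VLMEvalKit internals.

```python
# Hypothetical sketch: assembling a multiple-choice prompt from one VQA entry.
# Field names and prompt wording are assumptions, not the exact VLMEvalKit code.

def build_mcq_prompt(row: dict) -> str:
    """Join the question, any present options, and an answer instruction."""
    lines = [row["question"]]
    for opt in "ABCD":
        if row.get(opt):  # skip options absent from this entry
            lines.append(f"{opt}. {row[opt]}")
    lines.append("Answer with the option's letter from the given choices directly.")
    return "\n".join(lines)


# Example entry (made up for illustration):
row = {
    "question": "Which anatomical landmark is visible in the image?",
    "A": "Pylorus", "B": "Cardia", "C": "Ileocecal valve", "D": "Z-line",
    "answer": "A",
}
print(build_mcq_prompt(row))
```

The image itself would be passed to the model alongside this text prompt.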

## Disclaimers

The guidelines for the annotators emphasized strict compliance with the copyright and licensing rules of the original data sources, specifically avoiding materials from websites that forbid copying and redistribution.
Should you encounter any data samples that potentially breach the copyright or licensing regulations of any site, please contact us. Upon verification, such samples will be promptly removed.

We greatly appreciate all the authors of these datasets for their contributions to the field of endoscopy analysis.