---
license: mit
task_categories:
- visual-question-answering
- image-text-to-text
language:
- en
pretty_name: Med Eval Data
size_categories:
- 10K<n<100K
---

# Med Eval Data

This dataset contains evaluation data for the **Med** project. It uses the same data format as [`Med2026/Med_training_data`](https://huggingface.co/datasets/Med2026/Med_training_data) and can be loaded with the same codebase from [`GAIR-NLP/Med`](https://github.com/GAIR-NLP/Med).

## Overview

Each example is stored in the same JSON / Parquet schema as the training data, with the following top-level fields:

- `images`
- `data_source`
- `prompt`
- `ability`
- `reward_model`
- `extra_info`
- `agent_name`

This makes the dataset directly compatible with the data loading pipeline used in the Med codebase.
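A minimal sketch of checking that a loaded example exposes these fields (the field list comes from this README; the sample dict is an illustrative stand-in, not a real loaded example):

```python
# Top-level fields every example should expose, per this README.
EXPECTED_FIELDS = [
    "images", "data_source", "prompt", "ability",
    "reward_model", "extra_info", "agent_name",
]

def has_expected_schema(example: dict) -> bool:
    """Return True if the example contains every expected top-level field."""
    return all(field in example for field in EXPECTED_FIELDS)

# Illustrative stand-in for a real loaded example.
sample = {field: None for field in EXPECTED_FIELDS}
print(has_expected_schema(sample))  # True
```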

## Compatibility

This dataset has the **same format** as [`Med2026/Med_training_data`](https://huggingface.co/datasets/Med2026/Med_training_data).

You can reuse the loading logic and preprocessing pipeline from:

- [`GAIR-NLP/Med`](https://github.com/GAIR-NLP/Med)

No format conversion is required.

## Data Split by File Naming

The evaluation data is divided into two settings according to the file name:

- Files with `single_turn_agent` in the filename correspond to **evaluation without tool use**
- Files with `tool_agent` in the filename correspond to **evaluation with tool use**

In other words:

- `*single_turn_agent*` → without-tool evaluation
- `*tool_agent*` → with-tool evaluation
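The naming convention above can be sketched as a small helper (the example filenames are illustrative; only the substring rule comes from this README):

```python
def eval_setting(filename: str) -> str:
    """Classify an evaluation file by the naming convention above."""
    if "single_turn_agent" in filename:
        return "without_tools"
    if "tool_agent" in filename:
        return "with_tools"
    return "unknown"

# Illustrative filenames following the convention.
print(eval_setting("vstar_bench_single_turn_agent.parquet"))  # without_tools
print(eval_setting("vstar_bench_tool_agent.parquet"))         # with_tools
```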

## Data Format

Each sample is a JSON object with the following structure:

```python
{
    "images": [PIL.Image],
    "data_source": "vstar_bench_single_turn_agent",
    "prompt": [
        {
            "content": "<image>\nWhat is the material of the glove?\n(A) rubber\n(B) cotton\n(C) kevlar\n(D) leather\nAnswer with the option's letter from the given choices directly.",
            "role": "user"
        }
    ],
    "ability": "direct_attributes",
    "reward_model": {
        "answer": "A",
        "format_ratio": 0.0,
        "ground_truth": "\\boxed{A}",
        "length_ratio": 0.0,
        "style": "multiple_choice",
        "verifier": "mathverify",
        "verifier_parm": {
            "det_verifier_normalized": null,
            "det_reward_ratio": {
                "iou_max_label_first": null,
                "iou_max_iou_first": null,
                "iou_completeness": null,
                "map": null,
                "map50": null,
                "map75": null
            }
        }
    },
    "extra_info": {
        "answer": "A",
        "data_source": "vstar_bench_single_turn_agent",
        "id": "vstar_bench_0",
        "image_path": "direct_attributes/sa_4690.jpg",
        "question": "<image>\nWhat is the material of the glove?\n(A) rubber\n(B) cotton\n(C) kevlar\n(D) leather\nAnswer with the option's letter from the given choices directly.",
        "split": "test",
        "index": "0",
        "prompt_length": null,
        "tools_kwargs": {
            "crop_and_zoom": {
                "create_kwargs": {
                    "raw_query": "What is the material of the glove?\n(A) rubber\n(B) cotton\n(C) kevlar\n(D) leather\nAnswer with the option's letter from the given choices directly.",
                    "image": "PIL.Image"
                }
            }
        },
        "need_tools_kwargs": false
    },
    "agent_name": "single_turn_agent"
}
```