---
configs:
- config_name: all
  default: true
  data_files:
  - split: train
    path: data/all/train*.parquet
  - split: test
    path: data/all/test*.parquet
- config_name: qrr
  data_files:
  - split: train
    path: data/qrr/train*.parquet
  - split: test
    path: data/qrr/test*.parquet
- config_name: trr
  data_files:
  - split: train
    path: data/trr/train*.parquet
  - split: test
    path: data/trr/test*.parquet
- config_name: fdr
  data_files:
  - split: train
    path: data/fdr/train*.parquet
  - split: test
    path: data/fdr/test*.parquet
task_categories:
- visual-question-answering
language:
- en
license: mit
tags:
- spatial-reasoning
- vlm-benchmark
- ordinal-relations
- 3d-scenes
- multi-view
size_categories:
- 100K<n<1M
---

# ORDINARY-BENCH Multi-View Dataset

A multi-view version of the ORDINARY-BENCH benchmark for evaluating Vision-Language Models (VLMs) on **ordinal spatial reasoning** in 3D scenes. Each sample includes **4 camera views** of the same scene.

> Single-view version: [TYTSTQ/ordinary-bench](https://huggingface.co/datasets/TYTSTQ/ordinary-bench)
>
> Source code & evaluation pipeline: [GitHub - tasd12-ty/ordinary-bench-core](https://github.com/tasd12-ty/ordinary-bench-core)

## Overview

| | |
|---|---|
| Scenes | 700 synthetic 3D scenes (Blender, CLEVR-style) |
| Complexity | 7 levels: 4 to 10 objects per scene (100 scenes each) |
| Questions | 332,857 total across 3 reasoning types |
| Images | 4 views per scene (480 x 320 PNG each) |

## Question Types

### QRR (Quantitative Relation Reasoning) -- 130,557 questions

Compare 3D distances between object pairs. Two variants:
- **Disjoint**: Is `dist(A,B)` less than, approximately equal to, or greater than `dist(C,D)`?
- **Shared anchor**: From anchor A, is `dist(A,B)` less than, approximately equal to, or greater than `dist(A,C)`?
- **Answer format**: `<`, `~=`, or `>`

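The `~=` label implies some tolerance under which two distances count as approximately equal. As a minimal sketch (the 10% relative tolerance here is a hypothetical choice for illustration; the benchmark's actual threshold is defined in its source code):

```python
import math

def classify_comparator(d1: float, d2: float, rel_tol: float = 0.1) -> str:
    """Map a pair of distances to '<', '~=', or '>'.

    rel_tol is an illustrative assumption, not the benchmark's
    official equality threshold.
    """
    if math.isclose(d1, d2, rel_tol=rel_tol):
        return "~="
    return "<" if d1 < d2 else ">"

classify_comparator(3.0, 3.1)  # "~=" (within 10%)
classify_comparator(2.0, 4.0)  # "<"
```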
### TRR (Ternary Relation Reasoning) -- 197,400 questions

Clock-face direction reasoning:
- Standing at object `ref1`, facing toward object `ref2` (the 12 o'clock direction)
- What clock hour (1-12) is the `target` object at?
- **Answer format**: integer 1-12

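The geometry behind these questions can be sketched from ground-plane positions: measure the clockwise angle from the `ref1`-to-`ref2` facing direction to the target, then snap to the nearest 30-degree step. The `(x, y)` coordinates and rounding convention here are assumptions for illustration; the benchmark derives ground truth from the full 3D scene metadata.

```python
import math

def clock_hour(ref1, ref2, target):
    """Clock hour of `target` as seen from `ref1`, with ref1->ref2 as 12 o'clock.

    Inputs are hypothetical (x, y) ground-plane positions.
    """
    fx, fy = ref2[0] - ref1[0], ref2[1] - ref1[1]      # facing direction
    tx, ty = target[0] - ref1[0], target[1] - ref1[1]  # direction to target
    # Signed angle from facing to target (counter-clockwise positive)
    ang = math.degrees(math.atan2(fx * ty - fy * tx, fx * tx + fy * ty))
    cw = (-ang) % 360             # clockwise angle in [0, 360)
    hour = round(cw / 30) % 12    # 30 degrees per clock hour
    return 12 if hour == 0 else hour

clock_hour((0, 0), (0, 1), (1, 0))   # target to the right -> 3
clock_hour((0, 0), (0, 1), (0, -1))  # target behind -> 6
```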
### FDR (Full Distance Ranking) -- 4,900 questions

Given an anchor object, rank all other objects by 3D distance, nearest to farthest.
- **Answer format**: ordered JSON array of object IDs, e.g., `["obj_2", "obj_1", "obj_3"]`

## Quick Start

```python
from datasets import load_dataset

# Load QRR questions (test split)
ds = load_dataset("TYTSTQ/ordinary-bench-multiview", "qrr", split="test")

sample = ds[0]
sample["view_0"]            # PIL Image (480x320) - camera view 0
sample["view_1"]            # PIL Image - camera view 1
sample["view_2"]            # PIL Image - camera view 2
sample["view_3"]            # PIL Image - camera view 3
sample["question_text"]     # "Compare the distance between obj_0 and obj_1 vs ..."
sample["qrr_gt_comparator"] # Ground truth: "<", "~=", or ">"

# Load all question types (default config)
ds_all = load_dataset("TYTSTQ/ordinary-bench-multiview", split="test")
```

## Configs

| Config | Description | Questions |
|--------|-------------|-----------|
| `all` (default) | All 3 question types | 332,857 |
| `qrr` | Distance comparison only | 130,557 |
| `trr` | Clock direction only | 197,400 |
| `fdr` | Distance ranking only | 4,900 |

## Data Splits

| Split | Scenes per complexity level | Total scenes | Total questions |
|-------|-----------------------------|--------------|-----------------|
| train | 80 | 560 | 266,261 |
| test | 20 | 140 | 66,596 |

118
+
119
+ ### Common columns (all configs)
120
+
121
+ | Column | Type | Description |
122
+ |--------|------|-------------|
123
+ | `scene_id` | string | Scene identifier, e.g., `n04_000080` |
124
+ | `n_objects` | int | Number of objects in scene (4-10) |
125
+ | `split` | string | Complexity split: `n04` through `n10` |
126
+ | `view_0` | Image | Camera view 0 (480x320 PNG) |
127
+ | `view_1` | Image | Camera view 1 (480x320 PNG) |
128
+ | `view_2` | Image | Camera view 2 (480x320 PNG) |
129
+ | `view_3` | Image | Camera view 3 (480x320 PNG) |
130
+ | `objects` | string | JSON array: `[{"id": "obj_0", "desc": "large brown rubber sphere"}, ...]` |
131
+ | `question_type` | string | `qrr`, `trr`, or `fdr` |
132
+ | `qid` | string | Question ID, e.g., `qrr_0001` |
133
+ | `question_text` | string | Natural language question |
134
+ | `scene_metadata` | string | Full scene JSON (3D coordinates, camera parameters, etc.) |
135
+
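Note that `objects` and `scene_metadata` are stored as JSON strings rather than nested features, so decode them with `json.loads` after loading. A minimal sketch (the inline `sample` dict stands in for one dataset row; the `scene_metadata` structure shown is hypothetical):

```python
import json

# Stand-in for one dataset row; in practice use sample = ds[0]
sample = {
    "objects": '[{"id": "obj_0", "desc": "large brown rubber sphere"}]',
    "scene_metadata": '{"camera": {}}',  # hypothetical structure
}

objects = json.loads(sample["objects"])
ids = [o["id"] for o in objects]               # ["obj_0"]
descs = {o["id"]: o["desc"] for o in objects}  # id -> description

scene = json.loads(sample["scene_metadata"])   # plain dict of scene info
```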
### QRR-specific columns

| Column | Type | Description |
|--------|------|-------------|
| `qrr_variant` | string | `disjoint` or `shared_anchor` |
| `qrr_pair1` | string | JSON: `["obj_0", "obj_1"]` |
| `qrr_pair2` | string | JSON: `["obj_2", "obj_3"]` |
| `qrr_metric` | string | Distance metric, e.g., `dist3D` |
| `qrr_gt_comparator` | string | Ground truth: `<`, `~=`, or `>` |

### TRR-specific columns

| Column | Type | Description |
|--------|------|-------------|
| `trr_target` | string | Target object ID |
| `trr_ref1` | string | Standing-position object |
| `trr_ref2` | string | Object marking the 12 o'clock facing direction |
| `trr_gt_hour` | int | Ground-truth clock hour (1-12) |
| `trr_gt_quadrant` | int | Ground-truth quadrant (1-4) |
| `trr_gt_angle_deg` | float | Ground-truth angle in degrees |

### FDR-specific columns

| Column | Type | Description |
|--------|------|-------------|
| `fdr_anchor` | string | Anchor object ID |
| `fdr_n_ranked` | int | Number of objects to rank |
| `fdr_gt_ranking` | string | JSON: `["obj_2", "obj_1", "obj_3"]` (nearest to farthest) |
| `fdr_gt_distances` | string | JSON: `[3.006, 3.553, 3.882]` (distances in ranking order) |
| `fdr_gt_tie_groups` | string | JSON: `[["obj_2"], ["obj_1", "obj_3"]]` (objects at approximately equal distance grouped together) |

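The tie groups allow a prediction to be checked without penalizing arbitrary ordering among near-equidistant objects. One way to use them, sketched below: accept a predicted ranking if it walks through the tie groups in order, with any permutation allowed inside a group. This scoring rule is an illustrative assumption; see the benchmark's evaluation pipeline for the official metric.

```python
import json

def ranking_matches(pred_ids, gt_tie_groups):
    """True if pred_ids respects gt_tie_groups: groups in order,
    free ordering within each group. Illustrative scoring rule."""
    i = 0
    for group in gt_tie_groups:
        chunk = pred_ids[i:i + len(group)]
        if sorted(chunk) != sorted(group):
            return False
        i += len(group)
    return i == len(pred_ids)

gt = json.loads('[["obj_2"], ["obj_1", "obj_3"]]')
ranking_matches(["obj_2", "obj_3", "obj_1"], gt)  # True: tie order is free
ranking_matches(["obj_1", "obj_2", "obj_3"], gt)  # False: wrong first group
```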
## Prompt Templates

System prompts for VLM evaluation are included in `prompts/system_prompts.json`.

## Source Code

**[github.com/tasd12-ty/ordinary-bench-core](https://github.com/tasd12-ty/ordinary-bench-core)**

## License

MIT