appledora committed commit e884532 (verified, parent 01ae938): Update README.md
pretty_name: DORI
---

# Dataset Card for DORI

## Dataset Details

### Dataset Description

DORI (Discriminative Orientation Reasoning Intelligence) is a comprehensive benchmark designed to evaluate object orientation understanding in multimodal large language models (MLLMs). The benchmark isolates and evaluates orientation perception as a primary capability, offering a systematic assessment framework that spans four essential dimensions of orientation comprehension: frontal alignment, rotational transformations, relative directional relationships, and canonical orientation understanding.

DORI contains 33,656 carefully constructed multiple-choice questions based on 13,652 images spanning both natural (37%) and simulated (63%) environments. The benchmark covers 67 object categories (31 household and 36 outdoor item categories) across 11 diverse computer vision datasets.

What makes DORI unique is its systematic approach to isolating orientation perception from confounding factors such as object recognition difficulty, scene clutter, linguistic ambiguity, and contextual distractions. For each orientation dimension, DORI provides both coarse-grained questions (basic categorical judgments) and fine-grained questions (precise angular measurements).

- **Curated by:** Anonymous Authors (NeurIPS 2025 submission)
- **Language(s) (NLP):** English
- **License:** Not specified in the current documentation

### Dataset Sources

- **Repository:** https://huggingface.co/datasets/appledora/DORI-Benchmark
- **Paper:** "Right Side Up? Disentangling Orientation Understanding in MLLMs with Fine-grained Multi-axis Perception Tasks" (NeurIPS 2025 submission)

The dataset incorporates images from multiple existing datasets, including:
- KITTI
- Cityscapes
- COCO
- JTA
- 3D-FUTURE
- Objectron
- ShapeNet
- OmniObject3D
- NOCS REAL
- Get3D
- COCO Space SEA

## Uses

### Direct Use

DORI is designed for evaluating and benchmarking orientation reasoning capabilities in multimodal large language models (MLLMs). Its primary uses include:

1. Evaluating MLLMs' understanding of object orientation across four core dimensions
2. Comparing model performance on coarse- vs. fine-grained orientation perception
3. Diagnosing specific weaknesses in spatial reasoning across different model architectures
4. Supporting research to improve MLLMs' abilities in applications that require spatial understanding (robotics, augmented reality, autonomous navigation)
5. Providing a standardized framework for measuring progress in orientation understanding capabilities
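The evaluation workflow implied above can be sketched as a simple scoring loop over multiple-choice responses. This is an illustrative sketch, not an official harness: the record keys (`dimension`, `granularity`, `answer`) and the example values are hypothetical, since the card does not specify the on-disk schema.

```python
from collections import defaultdict

def score_responses(records, predictions):
    """Aggregate multiple-choice accuracy per (dimension, granularity) pair,
    so coarse- and fine-grained performance can be compared per dimension.

    records: list of dicts with hypothetical keys:
        'dimension'   -- e.g. 'frontal_alignment'
        'granularity' -- 'coarse' or 'fine'
        'answer'      -- gold option letter, e.g. 'A'
    predictions: list of option letters produced by the model under test.
    """
    hits = defaultdict(int)
    totals = defaultdict(int)
    for rec, pred in zip(records, predictions):
        key = (rec["dimension"], rec["granularity"])
        totals[key] += 1
        hits[key] += int(pred == rec["answer"])
    return {key: hits[key] / totals[key] for key in totals}

# Toy usage with two hypothetical questions:
records = [
    {"dimension": "frontal_alignment", "granularity": "coarse", "answer": "A"},
    {"dimension": "frontal_alignment", "granularity": "fine", "answer": "C"},
]
scores = score_responses(records, ["A", "B"])
```

Grouping by both dimension and granularity mirrors the benchmark's design, where each of the four dimensions is probed at two levels of difficulty.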

### Out-of-Scope Use

The DORI benchmark is not intended for:
- Training models directly (it is an evaluation benchmark)
- Evaluating general vision capabilities beyond orientation understanding
- Assessing human perception or cognitive abilities
- Commercial applications without proper attribution

## Dataset Structure

DORI consists of multiple-choice questions with a standardized format. Each question includes:

1. A task description specifying the orientation dimension being evaluated
2. Contextual information explaining relevant orientation concepts
3. Step-by-step analysis instructions
4. Multiple-choice options
5. Examples illustrating expected reasoning

The benchmark is organized into four core dimensions with seven specific tasks:

1. **Frontal Alignment**
   - View Parallelism Perception
   - Directional Facing Perception
2. **Rotational Transformation**
   - Single-axis Rotation
   - Compound Rotation
3. **Relative Orientation**
   - Inter-object Direction Perception
   - Viewer-scene Direction Perception
4. **Canonical Orientation**
   - Canonical Orientation Reasoning

Each task has two levels of assessment:
- **Coarse-grained** questions evaluating basic categorical understanding
- **Fine-grained** questions probing precise quantitative estimations

The distribution of tasks in the dataset is:
- Compound Rotation: 26%
- Viewer-Scene Direction: 20%
- Inter-Object Direction: 19%
- View Parallelism: 10%
- Single-Axis Rotation: 9%
- Directional Facing: 9%
- Canonical Orientation: 5%

Major object categories include chairs (15%), cars (14%), cameras (13%), sofas (10%), people (8%), tables (7%), and motorbikes (6%).
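A question following the five-part format above could be assembled as in the sketch below. All wording, field names, and the example content are hypothetical, for illustration only; the card notes that the actual prompts were refined through cycles of human feedback.

```python
def build_prompt(task, context, instructions, options, example):
    """Assemble a DORI-style multiple-choice prompt from its five parts:
    task description, context, step-by-step instructions, options, example."""
    option_lines = "\n".join(
        f"{letter}. {text}" for letter, text in zip("ABCD", options)
    )
    return (
        f"Task: {task}\n"
        f"Context: {context}\n"
        f"Instructions: {instructions}\n"
        f"Example: {example}\n"
        f"Options:\n{option_lines}"
    )

# Hypothetical Directional Facing question:
prompt = build_prompt(
    task="Judge the directional facing of the highlighted car.",
    context="An object faces the camera when its front surface points toward the viewer.",
    instructions="1) Locate the front of the car. 2) Compare it with the camera axis.",
    options=["Facing toward the camera", "Facing away", "Facing left", "Facing right"],
    example="A car showing its headlights to the viewer is facing toward the camera.",
)
```

Keeping the five components in a fixed order is one way to realize the standardized format the card describes.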

## Dataset Creation

### Curation Rationale

DORI was created to address limitations in existing orientation benchmarks, which often:
- Focus only on simple directional judgments without fine-grained assessment
- Do not represent the naturalness or nuances of real-life scenarios
- Present tasks in ambiguous ways
- Fail to systematically evaluate orientation across different frames of reference
- Include too few samples for reliable evaluation

DORI aims to provide a comprehensive, hierarchical evaluation framework specifically targeting orientation understanding, as this capability is fundamental for numerous AI applications including autonomous navigation, augmented reality, and robotic manipulation.

### Source Data

#### Data Collection and Processing

DORI collected data via two primary means:
1. Converting existing 3D information from established datasets into orientation questions
2. Manually annotating samples where needed

The benchmark uses both real-world images (37%) and simulated renders (63%) to ensure comprehensive coverage of visual complexities. For simulated datasets, precise orientation parameters provided ground-truth angular measurements with known accuracy. For real-world images, expert annotation established clear ground-truth values.

Each question was created following a rigorous process involving:
1. Isolating objects with bounding boxes to tackle cluttered scenes
2. Employing standardized orientation terminology with explicit spatial frames of reference
3. Ensuring difficulty progression from simple categorical judgments to precise angular measurements

The prompts were iteratively refined through multiple cycles of human feedback to address ambiguities, clarify terminology, and improve task specificity.
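The first step above (isolating an object with its bounding box) amounts to a plain crop. A minimal sketch, assuming a 2-D array-of-rows image representation and pixel-coordinate boxes; actual preprocessing details are not specified in the card.

```python
def crop_to_bbox(image, bbox):
    """Isolate an object from a cluttered scene by cropping to its
    bounding box. `image` is a 2-D list of pixel values (rows of columns);
    `bbox` is (x_min, y_min, x_max, y_max) in pixel coordinates,
    with the max bounds exclusive."""
    x_min, y_min, x_max, y_max = bbox
    return [row[x_min:x_max] for row in image[y_min:y_max]]

# 4x4 toy "image"; crop the 2x2 region containing the object of interest.
image = [[0, 0, 0, 0],
         [0, 1, 2, 0],
         [0, 3, 4, 0],
         [0, 0, 0, 0]]
crop = crop_to_bbox(image, (1, 1, 3, 3))
```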

#### Who are the source data producers?

The source data comes from 11 established computer vision datasets created by various research groups:
- KITTI (Geiger et al., 2012)
- Cityscapes (Cordts et al., 2016)
- COCO (Lin et al., 2014)
- JTA (Fabbri et al., 2018)
- 3D-FUTURE (Fu et al., 2021)
- Objectron (Ahmadyan et al., 2021)
- ShapeNet (Chang et al., 2015)
- OmniObject3D (Wu et al., 2023)
- NOCS REAL (Wang et al., 2019)
- Get3D (Gao et al., 2022)
- COCO Space SEA (a combination of datasets)

### Annotations

#### Annotation process

For datasets with available 3D information, orientation information was derived algorithmically. For example:
- JTA: Orientation was calculated by analyzing shoulder positions relative to the camera and head angle
- KITTI: Rotation matrices were used to categorize vehicles and pedestrians based on orientation
- 3D-FUTURE: 6-DoF parameters were used to calculate precise rotational adjustments
- COCO: Expert manual labeling was performed for object orientations

The annotation process included rigorous quality control, with multiple human evaluators checking for ambiguities and edge cases. The process created standardized, clear annotations for both coarse- and fine-grained orientation judgments.
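A KITTI-style conversion from rotation parameters to a coarse facing category can be sketched as below. The zero reference, rotation direction, bin boundaries, and category names are all assumptions for illustration; the benchmark's actual binning is defined by its authors.

```python
import math

def yaw_to_facing(yaw_rad):
    """Map an object's yaw angle to a coarse facing category, using
    90-degree bins centered on the four cardinal directions.
    Assumed convention: 0 rad = facing the camera, counter-clockwise
    positive (both are hypothetical choices for this sketch)."""
    deg = math.degrees(yaw_rad) % 360.0
    if deg >= 315 or deg < 45:
        return "toward camera"
    if deg < 135:
        return "facing left"
    if deg < 225:
        return "away from camera"
    return "facing right"
```

Fine-grained questions would instead keep the continuous angle and ask for a precise value, which is where the card reports models struggle most.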
175
  #### Who are the annotators?
176
 
177
+ The annotations were performed by a combination of:
178
+ - Automated conversion from existing 3D parameters (for synthetic datasets)
179
+ - Expert human annotators with experience in computer vision and spatial reasoning (particularly for natural images)
180
+ - Non-expert annotators providing feedback for prompt refinement and disambiguation
181
 

#### Personal and Sensitive Information

The dataset uses established computer vision datasets and does not introduce new personal or sensitive information. The focus is on object orientation rather than identifying individuals or private data.

## Bias, Risks, and Limitations

The DORI benchmark has several limitations:
- Performance may be influenced by the quality of bounding box annotations
- Some objects inherently have more ambiguous orientations than others
- The distribution of objects is not entirely balanced across all categories
- While diverse, the benchmark cannot cover every possible orientation scenario
- Performance on synthetic vs. real images may vary due to domain differences
- The benchmark primarily features static orientation reasoning rather than dynamic manipulation

### Recommendations

Users of the DORI benchmark should:
- Consider results across both coarse- and fine-grained questions for a complete understanding of model capabilities
- Pay attention to performance differences across the four core dimensions to identify specific weaknesses
- Note that orientation understanding is just one component of spatial reasoning
- Be aware that orientation perception in controlled environments may differ from real-world deployment scenarios
- Consider that poor performance on DORI may indicate fundamental limitations in a model's spatial representation capabilities

## Citation

**BibTeX:**

## Glossary

- **Frontal Alignment**: The ability to perceive how an object's front-facing surface is oriented relative to the viewer
- **Rotational Transformation**: Understanding orientation changes through rotation, reflecting requirements for object manipulation
- **Relative Orientation**: Understanding how objects are oriented in relation to each other and with respect to the viewer
- **Canonical Orientation**: The ability to recognize when objects deviate from their expected orientations
- **Coarse-grained questions**: Basic categorical judgments about orientation (e.g., "Is the car facing toward or away from the camera?")
- **Fine-grained questions**: Precise metric judgments about orientation (e.g., "At what angle is the car oriented relative to the camera?")
- **Egocentric reference frame**: Orientation relative to the camera/viewer
- **Allocentric reference frame**: Orientation independent of the viewer's perspective

## More Information

The DORI benchmark represents a significant advancement in the assessment of orientation understanding in MLLMs. Initial evaluations of 15 state-of-the-art MLLMs revealed substantial limitations, with even the best models achieving only 54.2% accuracy on coarse tasks and 33.0% on granular orientation judgments, compared to human performance of 86.6% and 80.9%, respectively.

Performance patterns indicate that current models struggle most with precise angular estimations, multi-axis rotational transformations, and perspective shifts beyond egocentric frames. These findings strongly suggest that future architectural advances must develop specialized mechanisms for continuous geometric representation to bridge this critical gap in machine perception.

## Dataset Card Authors

Anonymous Authors (NeurIPS 2025 submission)

## Dataset Card Contact

For more information, please contact the authors through the NeurIPS 2025 submission system.