Update README as datacard, version 1

README.md CHANGED
@@ -1,40 +1,295 @@

Removed (previous README, 40 mostly blank lines):

```yaml
---
license: cc-by-4.0
- name:
```

Added (new README):

---
license: cc-by-4.0
task_categories:
- object-detection
language:
- en
tags:
- computer-vision
- object-detection
- yolo
- virtual-reality
- vr
- accessibility
- social-vr
pretty_name: DISCOVR - Virtual Reality UI Object Detection Dataset
size_categories:
- 10K<n<100K
dataset_info:
  features:
  - name: image
    dtype: image
  - name: objects
    struct:
    - name: class_id
      list: int64
    - name: center_x
      list: float32
    - name: center_y
      list: float32
    - name: width
      list: float32
    - name: height
      list: float32
  splits:
  - name: train
    num_bytes: 536592914
    num_examples: 15207
  - name: test
    num_bytes: 29938152
    num_examples: 839
  - name: validation
    num_bytes: 59182849
    num_examples: 1645
  download_size: 613839753
  dataset_size: 625713915
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: test
    path: data/test-*
  - split: validation
    path: data/validation-*
---

# DISCOVR: Virtual Reality UI Object Detection Dataset

## Dataset Description

DISCOVR is an object detection dataset for identifying user interface elements and interactive objects in virtual reality (VR) and social VR environments. The dataset contains **17,691 annotated images** across **30 object classes** commonly found in VR applications.

This dataset is designed to support research in VR accessibility, automatic UI analysis, and assistive technologies for virtual environments.

### Dataset Summary

- **Total Images:** 17,691
  - Training: 15,207 images
  - Validation: 1,645 images
  - Test: 839 images
- **Classes:** 30 object categories
- **Format:** YOLOv8
- **License:** CC BY 4.0

## Object Classes

The dataset includes 30 classes of VR UI elements and interactive objects:

| ID | Class Name | Description |
|----|------------|-------------|
| 0 | avatar | User representations (human avatars) |
| 1 | avatar-nonhuman | Non-human avatar representations |
| 2 | button | Interactive buttons |
| 3 | campfire | Campfire objects (social gathering points) |
| 4 | chat box | Text chat interface elements |
| 5 | chat bubble | Speech/thought bubbles |
| 6 | controller | VR controller representations |
| 7 | dashboard | VR OS dashboard |
| 8 | guardian | Boundary/guardian system indicators (blue grid/plus signs) |
| 9 | hand | Hand representations |
| 10 | hud | Heads-up display elements |
| 11 | indicator-mute | Mute status indicators |
| 12 | interactable | Generic interactable objects |
| 13 | locomotion-target | Movement/teleportation targets |
| 14 | menu | Menu interfaces |
| 15 | out of bounds | Out-of-bounds warnings (red circle) |
| 16 | portal | Portal/doorway objects |
| 17 | progress bar | Progress indicators |
| 18 | seat-multiple | Multi-person seating |
| 19 | seat-single | Single-person seating |
| 20 | sign-graphic | Graphical signs |
| 21 | sign-text | Text-based signs |
| 22 | spawner | Object spawning points |
| 23 | table | Tables and surfaces |
| 24 | target | Target/aim points |
| 25 | ui-graphic | Graphical UI elements |
| 26 | ui-text | Text UI elements |
| 27 | watch | Watch/time displays |
| 28 | writing surface | Whiteboards/drawable surfaces |
| 29 | writing utensil | Drawing/writing tools |

## Dataset Structure

```
DISCOVR/
├── train/
│   ├── images/    # 15,207 training images (.jpg)
│   └── labels/    # YOLO format annotations (.txt)
├── validation/
│   ├── images/    # 1,645 validation images
│   └── labels/    # YOLO format annotations
├── test/
│   ├── images/    # 839 test images
│   └── labels/    # YOLO format annotations
└── data.yaml      # Dataset configuration file
```
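
To obtain this on-disk layout locally, the repository can be downloaded with `huggingface_hub`. A minimal sketch; it assumes the repo ships the directory tree shown above alongside the Parquet data files:

```python
from huggingface_hub import snapshot_download

# Download the dataset repository into ./DISCOVR
snapshot_download(
    repo_id="UWMadAbility/DISCOVR",
    repo_type="dataset",
    local_dir="DISCOVR",
)
```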

### Annotation Format

Annotations are in YOLO format with normalized coordinates:

```
<class_id> <center_x> <center_y> <width> <height>
```

All coordinates are normalized to the [0, 1] range relative to the image dimensions.
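
For the on-disk layout, each `labels/*.txt` file holds one line per object. A minimal parsing sketch (the helper name and example path are illustrative, not part of the dataset):

```python
def parse_yolo_label(path):
    # Each line: <class_id> <center_x> <center_y> <width> <height>, all normalized.
    boxes = []
    with open(path) as f:
        for line in f:
            class_id, cx, cy, w, h = line.split()
            boxes.append((int(class_id), float(cx), float(cy), float(w), float(h)))
    return boxes

# e.g. boxes = parse_yolo_label("DISCOVR/train/labels/example.txt")
```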

## Usage

### With Hugging Face Datasets

```python
from datasets import load_dataset

# Load the full dataset
dataset = load_dataset("UWMadAbility/DISCOVR")

# Access individual splits
train_data = dataset['train']
val_data = dataset['validation']
test_data = dataset['test']

# Example: Get first training image and its annotations
sample = train_data[0]
image = sample['image']
objects = sample['objects']

print(f"Number of objects: {len(objects['class_id'])}")
print(f"Class IDs: {objects['class_id']}")
print(f"Bounding boxes: {list(zip(objects['center_x'], objects['center_y'], objects['width'], objects['height']))}")
```
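
To sanity-check a sample, the normalized `(center_x, center_y, width, height)` boxes can be converted to pixel corners and drawn on the image. A minimal sketch with PIL (`draw_boxes` is a name chosen here for illustration, not a dataset API):

```python
from PIL import ImageDraw

def draw_boxes(sample):
    # Convert each normalized (cx, cy, w, h) box to pixel corners and draw it.
    img = sample['image'].copy()
    W, H = img.size
    draw = ImageDraw.Draw(img)
    obj = sample['objects']
    for cx, cy, w, h in zip(obj['center_x'], obj['center_y'], obj['width'], obj['height']):
        draw.rectangle(
            [(cx - w / 2) * W, (cy - h / 2) * H, (cx + w / 2) * W, (cy + h / 2) * H],
            outline="red", width=2,
        )
    return img

draw_boxes(train_data[0]).save("sample_with_boxes.png")
```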

### With YOLOv8/Ultralytics

First, download the dataset and create a `data.yaml` file:

```yaml
path: ./DISCOVR
train: train/images
val: validation/images
test: test/images

nc: 30
names:
  0: avatar
  1: avatar-nonhuman
  2: button
  3: campfire
  4: chat box
  5: chat bubble
  6: controller
  7: dashboard
  8: guardian
  9: hand
  10: hud
  11: indicator-mute
  12: interactable
  13: locomotion-target
  14: menu
  15: out of bounds
  16: portal
  17: progress bar
  18: seat-multiple
  19: seat-single
  20: sign-graphic
  21: sign-text
  22: spawner
  23: table
  24: target
  25: ui-graphic
  26: ui-text
  27: watch
  28: writing surface
  29: writing utensil
```

Then train a model:

```python
from ultralytics import YOLO

# Load a pretrained model
model = YOLO('yolov8n.pt')

# Train the model
results = model.train(
    data='data.yaml',
    epochs=100,
    imgsz=640,
    batch=16
)

# Validate the model
metrics = model.val()

# Make predictions
results = model.predict('path/to/vr_image.jpg')
```

### With Transformers (DETR, etc.)

```python
from datasets import load_dataset
from transformers import AutoImageProcessor, AutoModelForObjectDetection

# Load dataset
dataset = load_dataset("UWMadAbility/DISCOVR")

# Load model and processor
processor = AutoImageProcessor.from_pretrained("facebook/detr-resnet-50")
model = AutoModelForObjectDetection.from_pretrained("facebook/detr-resnet-50")

# Process image
sample = dataset['train'][0]
inputs = processor(images=sample['image'], return_tensors="pt")

# Note: You'll need to convert YOLO format to COCO format for DETR
# YOLO: (center_x, center_y, width, height), normalized
# COCO: (x_min, y_min, width, height), in pixels
```
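
The conversion mentioned in the comments above is a few lines of arithmetic. A minimal sketch (`yolo_to_coco` is an illustrative helper, not a library function):

```python
def yolo_to_coco(box, img_w, img_h):
    # YOLO: normalized (center_x, center_y, width, height)
    # COCO: pixel (x_min, y_min, width, height)
    cx, cy, w, h = box
    return [(cx - w / 2) * img_w, (cy - h / 2) * img_h, w * img_w, h * img_h]

img_w, img_h = sample['image'].size
obj = sample['objects']
coco_boxes = [
    yolo_to_coco(b, img_w, img_h)
    for b in zip(obj['center_x'], obj['center_y'], obj['width'], obj['height'])
]
```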

## Applications

This dataset can be used for:

- **VR Accessibility Research**: Automatically detecting and describing UI elements for users with disabilities
- **UI/UX Analysis**: Analyzing VR interface design patterns
- **Assistive Technologies**: Building screen readers and navigation aids for VR
- **Automatic Testing**: Testing VR applications for UI consistency
- **Content Moderation**: Detecting inappropriate content in social VR spaces
- **User Behavior Research**: Understanding how users interact with VR interfaces

## Citation

If you use this dataset in your research, please cite VRSight, the publication that introduces DISCOVR:

```bibtex
@inproceedings{killough2025vrsight,
  title={VRSight: An AI-Driven Scene Description System to Improve Virtual Reality Accessibility for Blind People},
  author={Killough, Daniel and Feng, Justin and Ching, Zheng Xue and Wang, Daniel and Dyava, Rithvik and Tian, Yapeng and Zhao, Yuhang},
  booktitle={Proceedings of the 38th Annual ACM Symposium on User Interface Software and Technology},
  pages={1--17},
  year={2025}
}
```

## License

This dataset is released under the **Creative Commons Attribution 4.0 International (CC BY 4.0)** license.

You are free to:
- Share — copy and redistribute the material
- Adapt — remix, transform, and build upon the material

Under the following terms:
- Attribution — You must give appropriate credit

## Contact

For questions, issues, or collaborations:
- Main Codebase: https://github.com/MadisonAbilityLab/VRSight
- This Repository: [UWMadAbility/DISCOVR](https://huggingface.co/datasets/UWMadAbility/DISCOVR)
- Organization: UW-Madison Ability Lab

## Acknowledgments

This dataset was created by Daniel K., Justin, Daniel W., ZX, Ricky, Abhinav, and the MadAbility Lab at the University of Wisconsin-Madison to support research in VR accessibility and assistive technologies.