---
license: cc-by-4.0
task_categories:
- object-detection
language:
- en
tags:
- computer-vision
- object-detection
- yolo
- virtual-reality
- vr
- accessibility
- social-vr
pretty_name: DISCOVR - Virtual Reality UI Object Detection Dataset
size_categories:
- 10K<n<100K
dataset_info:
features:
- name: image
dtype: image
- name: objects
struct:
- name: class_id
list: int64
- name: center_x
list: float32
- name: center_y
list: float32
- name: width
list: float32
- name: height
list: float32
splits:
- name: train
num_bytes: 536592914
num_examples: 15207
- name: test
num_bytes: 29938152
num_examples: 839
- name: validation
num_bytes: 59182849
num_examples: 1645
download_size: 613839753
dataset_size: 625713915
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: validation
path: data/validation-*
---
# DIgital Social Context Objects in VR (DISCOVR): A Social Virtual Reality Object Detection Dataset
## Dataset Description
DISCOVR is an object detection dataset for identifying user interface elements and interactive objects in virtual reality (VR) and social VR environments. The dataset contains **17,691 annotated images** across **30 object classes** commonly found in 17 popular social VR applications and VR demos.
This dataset is designed to support research in VR accessibility, automatic UI analysis, and assistive technologies for virtual environments.
### The entire dataset can be downloaded as a single archive at https://huggingface.co/datasets/UWMadAbility/DISCOVR/blob/main/dataset.zip
### Pretrained YOLOv8 weights are available at https://huggingface.co/UWMadAbility/VRSight
### If you use DISCOVR in your work, please cite VRSight, the system for which it was developed:
```bibtex
@inproceedings{killough2025vrsight,
title={VRSight: An AI-Driven Scene Description System to Improve Virtual Reality Accessibility for Blind People},
author={Killough, Daniel and Feng, Justin and Ching, Zheng Xue and Wang, Daniel and Dyava, Rithvik and Tian, Yapeng and Zhao, Yuhang},
booktitle={Proceedings of the 38th Annual ACM Symposium on User Interface Software and Technology},
pages={1--17},
year={2025}
}
```
---
### Dataset Summary
- **Total Images:** 17,691
- Training: 15,207 images
- Validation: 1,645 images
- Test: 839 images
- **Classes:** 30 object categories
- **Format:** YOLOv8
- **License:** CC BY 4.0
## Object Classes
The dataset includes 30 classes of VR UI elements and interactive objects:
| ID | Class Name | Description |
|----|------------|-------------|
| 0 | avatar | User representations (human avatars) |
| 1 | avatar-nonhuman | Non-human avatar representations |
| 2 | button | Interactive buttons |
| 3 | campfire | Campfire objects (social gathering points) |
| 4 | chat box | Text chat interface elements |
| 5 | chat bubble | Speech/thought bubbles |
| 6 | controller | VR controller representations |
| 7 | dashboard | VR OS dashboard |
| 8 | guardian | Boundary/guardian system indicators (blue grid/plus signs) |
| 9 | hand | Hand representations |
| 10 | hud | Heads-up display elements |
| 11 | indicator-mute | Mute status indicators |
| 12 | interactable | Generic interactable objects |
| 13 | locomotion-target | Movement/teleportation targets |
| 14 | menu | Menu interfaces |
| 15 | out of bounds | Out-of-bounds warnings (red circle) |
| 16 | portal | Portal/doorway objects |
| 17 | progress bar | Progress indicators |
| 18 | seat-multiple | Multi-person seating |
| 19 | seat-single | Single-person seating |
| 20 | sign-graphic | Graphical signs |
| 21 | sign-text | Text-based signs |
| 22 | spawner | Object spawning points |
| 23 | table | Tables and surfaces |
| 24 | target | Target/aim points |
| 25 | ui-graphic | Graphical UI elements |
| 26 | ui-text | Text UI elements |
| 27 | watch | Watch/time displays |
| 28 | writing surface | Whiteboards/drawable surfaces |
| 29 | writing utensil | Drawing/writing tools |
## Dataset Structure
```
DISCOVR/
├── train/
│ ├── images/ # 15,207 training images (.jpg)
│ └── labels/ # YOLO format annotations (.txt)
├── validation/
│ ├── images/ # 1,645 validation images
│ └── labels/ # YOLO format annotations
├── test/
│ ├── images/ # 839 test images
│ └── labels/ # YOLO format annotations
└── data.yaml # Dataset configuration file
```
### Annotation Format
Annotations are in YOLO format with normalized coordinates:
```
<class_id> <center_x> <center_y> <width> <height>
```
All coordinates are normalized to [0, 1] range relative to image dimensions.
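For example, a label line can be decoded back into pixel coordinates as follows (the `parse_yolo_line` helper is illustrative, not shipped with the dataset):

```python
def parse_yolo_line(line, img_w, img_h):
    """Parse one YOLO label line into a class id and a pixel-space box.

    Returns (class_id, (x_min, y_min, box_w, box_h)) in pixels.
    """
    parts = line.split()
    class_id = int(parts[0])
    cx, cy, w, h = (float(v) for v in parts[1:5])
    # YOLO stores the box center and size normalized to [0, 1];
    # shift the center by half the size to get the top-left corner.
    x_min = (cx - w / 2) * img_w
    y_min = (cy - h / 2) * img_h
    return class_id, (x_min, y_min, w * img_w, h * img_h)

# Class 2 ("button") centered in a 640x480 frame, 1/4 as wide and tall
cid, box = parse_yolo_line("2 0.5 0.5 0.25 0.25", 640, 480)
# box == (240.0, 180.0, 160.0, 120.0)
```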
---
## Usage
### With Hugging Face Datasets
```python
from datasets import load_dataset
# Load the full dataset
dataset = load_dataset("UWMadAbility/DISCOVR")
# Access individual splits
train_data = dataset['train']
val_data = dataset['validation']
test_data = dataset['test']
# Example: Get first training image and its annotations
sample = train_data[0]
image = sample['image']
objects = sample['objects']
print(f"Number of objects: {len(objects['class_id'])}")
print(f"Class IDs: {objects['class_id']}")
print(f"Bounding boxes: {list(zip(objects['center_x'], objects['center_y'], objects['width'], objects['height']))}")
```
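To sanity-check annotations, the normalized boxes can be rendered onto an image. A minimal sketch using Pillow (the `draw_boxes` helper and its styling are illustrative, not part of the dataset tooling):

```python
from PIL import Image, ImageDraw

def draw_boxes(image, objects):
    """Draw YOLO-format (normalized center/size) boxes onto a copy of a PIL image."""
    img = image.copy()
    draw = ImageDraw.Draw(img)
    w, h = img.size
    for cid, cx, cy, bw, bh in zip(objects['class_id'], objects['center_x'],
                                   objects['center_y'], objects['width'],
                                   objects['height']):
        # Convert normalized center/size to pixel corner coordinates
        x0, y0 = (cx - bw / 2) * w, (cy - bh / 2) * h
        x1, y1 = (cx + bw / 2) * w, (cy + bh / 2) * h
        draw.rectangle([x0, y0, x1, y1], outline="red", width=2)
        draw.text((x0, max(y0 - 12, 0)), str(cid), fill="red")
    return img

# e.g. draw_boxes(sample['image'], sample['objects']).show()
```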
### With YOLOv8/Ultralytics
First, download the dataset and create a `data.yaml` file:
```yaml
path: ./DISCOVR
train: train/images
val: validation/images
test: test/images
nc: 30
names:
0: avatar
1: avatar-nonhuman
2: button
3: campfire
4: chat box
5: chat bubble
6: controller
7: dashboard
8: guardian
9: hand
10: hud
11: indicator-mute
12: interactable
13: locomotion-target
14: menu
15: out of bounds
16: portal
17: progress bar
18: seat-multiple
19: seat-single
20: sign-graphic
21: sign-text
22: spawner
23: table
24: target
25: ui-graphic
26: ui-text
27: watch
28: writing surface
29: writing utensil
```
Then train a model:
```python
from ultralytics import YOLO
# Load a pretrained model
model = YOLO('yolov8n.pt')
# Train the model
results = model.train(
    data='data.yaml',
    epochs=100,
    imgsz=640,
    batch=16
)
# Validate the model
metrics = model.val()
# Make predictions
results = model.predict('path/to/vr_image.jpg')
```
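Note that `load_dataset` returns Arrow-backed splits rather than the image/label directory layout that Ultralytics expects. One way to materialize YOLO-style files from a loaded sample is a small export helper; `write_yolo_sample` below is a sketch assuming the `objects` schema shown earlier:

```python
from pathlib import Path

def write_yolo_sample(sample, stem, images_dir, labels_dir):
    """Write one dataset sample (PIL image + objects dict) as YOLO image/label files."""
    images_dir, labels_dir = Path(images_dir), Path(labels_dir)
    images_dir.mkdir(parents=True, exist_ok=True)
    labels_dir.mkdir(parents=True, exist_ok=True)
    # Image goes to images/<stem>.jpg, one label line per object to labels/<stem>.txt
    sample['image'].save(images_dir / f"{stem}.jpg")
    objs = sample['objects']
    lines = [
        f"{cid} {cx:.6f} {cy:.6f} {w:.6f} {h:.6f}"
        for cid, cx, cy, w, h in zip(objs['class_id'], objs['center_x'],
                                     objs['center_y'], objs['width'],
                                     objs['height'])
    ]
    (labels_dir / f"{stem}.txt").write_text("\n".join(lines) + "\n")

# e.g. for i, sample in enumerate(dataset['train']):
#          write_yolo_sample(sample, f"{i:06d}", "DISCOVR/train/images", "DISCOVR/train/labels")
```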
### With Transformers (DETR, etc.)
```python
from datasets import load_dataset
from transformers import AutoImageProcessor, AutoModelForObjectDetection
# Load dataset
dataset = load_dataset("UWMadAbility/DISCOVR")
# Load model and processor
processor = AutoImageProcessor.from_pretrained("facebook/detr-resnet-50")
model = AutoModelForObjectDetection.from_pretrained("facebook/detr-resnet-50")
# Process image
sample = dataset['train'][0]
inputs = processor(images=sample['image'], return_tensors="pt")
# Note: You'll need to convert YOLO format to COCO format for DETR
# YOLO: (center_x, center_y, width, height) normalized
# COCO: (x_min, y_min, width, height) in pixels
```
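The format conversion noted in the comments above is mechanical. A small helper that maps the dataset's `objects` struct to COCO-style annotations (the `objects_to_coco` name and dict layout are illustrative, matching what DETR-style processors typically consume) might look like:

```python
def objects_to_coco(objects, img_w, img_h):
    """Convert the dataset's `objects` struct to COCO-style annotation dicts."""
    annotations = []
    for cid, cx, cy, w, h in zip(objects['class_id'], objects['center_x'],
                                 objects['center_y'], objects['width'],
                                 objects['height']):
        # COCO boxes are (x_min, y_min, width, height) in pixels
        annotations.append({
            "category_id": cid,
            "bbox": [(cx - w / 2) * img_w, (cy - h / 2) * img_h,
                     w * img_w, h * img_h],
            "area": (w * img_w) * (h * img_h),
        })
    return annotations
```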
---
## Applications
This dataset can be used for:
- **VR Accessibility Research**: Automatically detecting and describing UI elements for users with disabilities
- **UI/UX Analysis**: Analyzing VR interface design patterns
- **Assistive Technologies**: Building screen readers and navigation aids for VR
- **Automatic Testing**: Testing VR applications for UI consistency
- **Content Moderation**: Detecting inappropriate content in social VR spaces
- **User Behavior Research**: Understanding how users interact with VR interfaces
## Citation
If you use this dataset in your research, please cite VRSight, the publication for which DISCOVR was developed:
```bibtex
@inproceedings{killough2025vrsight,
title={VRSight: An AI-Driven Scene Description System to Improve Virtual Reality Accessibility for Blind People},
author={Killough, Daniel and Feng, Justin and Ching, Zheng Xue and Wang, Daniel and Dyava, Rithvik and Tian, Yapeng and Zhao, Yuhang},
booktitle={Proceedings of the 38th Annual ACM Symposium on User Interface Software and Technology},
pages={1--17},
year={2025}
}
```
## License
This dataset is released under the **Creative Commons Attribution 4.0 International (CC BY 4.0)** license.
You are free to:
- Share — copy and redistribute the material
- Adapt — remix, transform, and build upon the material
Under the following terms:
- Attribution — You must give appropriate credit
## Contact
For questions, issues, or collaborations:
- Main Codebase: https://github.com/MadisonAbilityLab/VRSight
- This Repository: [UWMadAbility/DISCOVR](https://huggingface.co/datasets/UWMadAbility/DISCOVR)
- Organization: UW-Madison Ability Lab
## Acknowledgments
This dataset was created by Daniel K., Justin, Daniel W., ZX, Ricky, Abhinav, and the MadAbility Lab at the University of Wisconsin-Madison to support research in VR accessibility and assistive technologies.