---
license: cc-by-4.0
task_categories:
- object-detection
language:
- en
tags:
- computer-vision
- object-detection
- yolo
- virtual-reality
- vr
- accessibility
- social-vr
pretty_name: DISCOVR - Virtual Reality UI Object Detection Dataset
size_categories:
- 10K<n<100K
---

All coordinates are normalized to the [0, 1] range relative to image dimensions.

---

## Usage

### With Hugging Face Datasets

```python
from datasets import load_dataset

# Load the full dataset
dataset = load_dataset("UWMadAbility/DISCOVR")

# Access individual splits
train_data = dataset['train']
val_data = dataset['validation']
test_data = dataset['test']

# Example: get the first training image and its annotations
sample = train_data[0]
image = sample['image']
objects = sample['objects']

print(f"Number of objects: {len(objects['class_id'])}")
print(f"Class IDs: {objects['class_id']}")
print(f"Bounding boxes: {list(zip(objects['center_x'], objects['center_y'], objects['width'], objects['height']))}")
```

### With YOLOv8/Ultralytics

First, download the dataset and create a `data.yaml` file:

```yaml
path: ./DISCOVR
train: train/images
val: validation/images
test: test/images

nc: 30
names:
  0: avatar
  1: avatar-nonhuman
  2: button
  3: campfire
  4: chat box
  5: chat bubble
  6: controller
  7: dashboard
  8: guardian
  9: hand
  10: hud
  11: indicator-mute
  12: interactable
  13: locomotion-target
  14: menu
  15: out of bounds
  16: portal
  17: progress bar
  18: seat-multiple
  19: seat-single
  20: sign-graphic
  21: sign-text
  22: spawner
  23: table
  24: target
  25: ui-graphic
  26: ui-text
  27: watch
  28: writing surface
  29: writing utensil
```

Then train a model:

```python
from ultralytics import YOLO

# Load a pretrained model
model = YOLO('yolov8n.pt')

# Train the model
results = model.train(
    data='data.yaml',
    epochs=100,
    imgsz=640,
    batch=16
)

# Validate the model
metrics = model.val()

# Make predictions
results = model.predict('path/to/vr_image.jpg')
```

### With Transformers (DETR, etc.)
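DETR-style models expect COCO-format boxes, while DISCOVR's annotations use normalized YOLO coordinates. A minimal conversion sketch (the helper name `yolo_to_coco` is illustrative, not part of the dataset or any library):

```python
def yolo_to_coco(box, img_w, img_h):
    """Convert a normalized YOLO box (center_x, center_y, width, height)
    to a COCO box (x_min, y_min, width, height) in pixels."""
    cx, cy, w, h = box
    x_min = (cx - w / 2) * img_w
    y_min = (cy - h / 2) * img_h
    return [x_min, y_min, w * img_w, h * img_h]

# Example: a box centered in a 640x480 image, covering half of each dimension
print(yolo_to_coco((0.5, 0.5, 0.5, 0.5), 640, 480))  # [160.0, 120.0, 320.0, 240.0]
```

Apply this per-object (e.g. over `zip(objects['center_x'], objects['center_y'], objects['width'], objects['height'])`) using each image's pixel dimensions before handing targets to a COCO-style processor.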
```python
from datasets import load_dataset
from transformers import AutoImageProcessor, AutoModelForObjectDetection

# Load the dataset
dataset = load_dataset("UWMadAbility/DISCOVR")

# Load model and processor
processor = AutoImageProcessor.from_pretrained("facebook/detr-resnet-50")
model = AutoModelForObjectDetection.from_pretrained("facebook/detr-resnet-50")

# Process an image
sample = dataset['train'][0]
inputs = processor(images=sample['image'], return_tensors="pt")

# Note: you'll need to convert the YOLO-format annotations to COCO format for DETR
# YOLO: (center_x, center_y, width, height), normalized
# COCO: (x_min, y_min, width, height), in pixels
```

---

## Applications

This dataset can be used for:

- **VR Accessibility Research**: Automatically detecting and describing UI elements for users with disabilities
- **UI/UX Analysis**: Analyzing VR interface design patterns
- **Assistive Technologies**: Building screen readers and navigation aids for VR
- **Automated Testing**: Checking VR applications for UI consistency
- **Content Moderation**: Detecting inappropriate content in social VR spaces
- **User Behavior Research**: Understanding how users interact with VR interfaces

## Citation

If you use this dataset in your research, please cite VRSight, our publication that introduces DISCOVR:

```bibtex
@inproceedings{killough2025vrsight,
  title={VRSight: An AI-Driven Scene Description System to Improve Virtual Reality Accessibility for Blind People},
  author={Killough, Daniel and Feng, Justin and Ching, Zheng Xue and Wang, Daniel and Dyava, Rithvik and Tian, Yapeng and Zhao, Yuhang},
  booktitle={Proceedings of the 38th Annual ACM Symposium on User Interface Software and Technology},
  pages={1--17},
  year={2025}
}
```

## License

This dataset is released under the **Creative Commons Attribution 4.0 International (CC BY 4.0)** license.
You are free to:

- **Share** — copy and redistribute the material
- **Adapt** — remix, transform, and build upon the material

Under the following terms:

- **Attribution** — You must give appropriate credit

## Contact

For questions, issues, or collaborations:

- Main Codebase: https://github.com/MadisonAbilityLab/VRSight
- This Repository: [UWMadAbility/DISCOVR](https://huggingface.co/datasets/UWMadAbility/DISCOVR)
- Organization: UW-Madison Ability Lab

## Acknowledgments

This dataset was created by Daniel K., Justin, Daniel W., ZX, Ricky, Abhinav, and the MadAbility Lab at the University of Wisconsin-Madison to support research in VR accessibility and assistive technologies.