---
license: cc-by-nc-4.0
task_categories:
  - visual-question-answering
  - image-to-text
language:
  - en
tags:
  - blind
  - low-vision
  - accessibility
  - egocentric
  - navigation
  - guidedog
size_categories:
  - 1K<n<10K
gated: manual
extra_gated_heading: Request Access to GuideDog Gold Dataset
extra_gated_prompt: |
  This dataset is released for **non-commercial research purposes only**.

  By requesting access, you agree to:
  1. Use this dataset solely for academic research
  2. Not redistribute the dataset or any derived data
  3. Cite our work in any publications using this dataset
  4. Delete the dataset upon request from the authors

  Access requests are reviewed manually by the dataset authors.
extra_gated_button_content: Submit Access Request
extra_gated_fields:
  Full Name: text
  Affiliation (University/Organization): text
  Country: country
  Email (institutional preferred): text
  Research Purpose:
    type: select
    options:
      - Academic Research
      - Education
      - Non-profit Research
      - Other
  Brief description of intended use: text
  I agree to use this dataset for non-commercial research only: checkbox
  I agree to cite the GuideDog paper in any publications: checkbox
---

# GuideDog Gold Dataset

Human-verified gold-standard annotations for blind and low-vision (BLV) navigation assistance.

## Dataset Description

GuideDog Gold is a curated dataset of egocentric street-view images with human-verified annotations designed to help develop assistive AI systems for blind and low-vision users. Each image contains:

- Scene descriptions with spatial context
- Object detections with clock-direction localization (10-2 o'clock range)
- Depth information for detected objects
- Hazard annotations identifying safety-critical elements
- Gold-standard guidance text following BLV navigation standards
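Clock-direction localization expresses where an object lies relative to the user's heading: 12 o'clock is straight ahead, 3 o'clock is to the right, 9 o'clock to the left. A minimal sketch of that convention (the helper name `clock_to_bearing` is illustrative, not part of the dataset):

```python
def clock_to_bearing(hour: int) -> int:
    """Convert a clock direction (1-12) to a signed bearing in degrees:
    12 o'clock -> 0 (straight ahead), 3 o'clock -> +90 (right),
    9 o'clock -> -90 (left). Each clock hour spans 30 degrees."""
    if not 1 <= hour <= 12:
        raise ValueError("clock direction must be in 1..12")
    return ((hour % 12) * 30 + 180) % 360 - 180

# Under this convention, the 10-2 o'clock annotation range covers the
# -60..+60 degree cone in front of the user.
assert clock_to_bearing(12) == 0
assert clock_to_bearing(10) == -60
assert clock_to_bearing(2) == 60
```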

## Annotation Standards

Annotations follow the GuideDog S1/S2/S3 framework:

- **S1**: Scene description and user position context
- **S2**: Hazards at clock directions with distances
- **S3**: Summary and recommended navigation direction
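The S1/S2/S3 structure can be sketched as a simple template that orders scene context, localized hazards, and a closing recommendation. This is a hypothetical illustration: the field names (`label`, `clock`, `distance_m`) are assumptions for the example, not the dataset's actual schema.

```python
def format_guidance(scene: str, hazards: list[dict], recommendation: str) -> str:
    """Compose a three-part guidance message in S1/S2/S3 order:
    scene context, hazards at clock directions with distances, then a summary."""
    s1 = f"S1: {scene}"
    # Each hazard dict is assumed to carry a label, a clock direction,
    # and a distance in meters (illustrative keys, not the real schema).
    s2_parts = [
        f"{h['label']} at {h['clock']} o'clock, {h['distance_m']} m"
        for h in hazards
    ]
    s2 = "S2: " + ("; ".join(s2_parts) if s2_parts else "no hazards detected")
    s3 = f"S3: {recommendation}"
    return " ".join([s1, s2, s3])

msg = format_guidance(
    "You are on a sidewalk beside a two-lane road.",
    [{"label": "bicycle", "clock": 11, "distance_m": 3}],
    "Keep right and continue straight.",
)
print(msg)
```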

## Intended Use

This dataset is intended for:

- Training and evaluating vision-language models for BLV assistance
- Research on proactive navigation guidance systems
- Benchmarking multimodal models on accessibility tasks

## Ethical Considerations

- All images are sourced from publicly available walking tour videos
- No personally identifiable information is included
- The dataset is provided for research purposes only

## Citation

If you use this dataset, please cite:

```bibtex
@article{guidedog2024,
  title={GuideDog: Proactive Navigation Assistance for Blind and Low-Vision Users},
  author={[Authors]},
  journal={arXiv preprint},
  year={2024}
}
```

## License

This dataset is licensed under CC BY-NC 4.0.

## Contact

For questions or issues, please contact the dataset maintainers through the HuggingFace discussion tab.