---
dataset_info:
  features:
    - name: task_id
      dtype: int32
    - name: annotation_id
      dtype: int32
    - name: image
      dtype: image
    - name: instruction_cn
      dtype: string
    - name: instruction_en
      dtype: string
    - name: sample_id
      dtype: string
    - name: gaze_point
      list: int32
    - name: choices
      struct:
        - name: activity
          dtype: string
        - name: is_same_window
          dtype: string
        - name: place
          dtype: string
        - name: platform
          dtype: string
        - name: scenario
          dtype: string
        - name: task type
          dtype: string
        - name: ui_type
          dtype: string
    - name: target_bbox
      struct:
        - name: height
          dtype: float32
        - name: image_rotation
          dtype: int32
        - name: labels
          list: string
        - name: original_height
          dtype: int32
        - name: original_width
          dtype: int32
        - name: rotation
          dtype: float32
        - name: width
          dtype: float32
        - name: x
          dtype: float32
        - name: 'y'
          dtype: float32
    - name: is_ok
      dtype: bool
    - name: objects
      struct:
        - name: bbox
          list:
            list: float32
            length: 4
        - name: category
          list:
            class_label:
              names:
                '0': target
language:
  - en
  - zh
license: cc-by-4.0
tags:
  - computer-vision
  - visual-grounding
  - xr
  - egocentric
  - gui
  - apple-vision-pro
  - instruction-following
  - multimodal
task_ids:
  - visual-grounding
  - object-detection
pretty_name: EgoXR-GUI - Egocentric XR GUI Grounding Dataset
---

# EgoXR-GUI: Benchmarking GUI Grounding in Physical–Digital Extended Reality

EgoXR-GUI is the first GUI grounding benchmark specific to extended reality (XR). Unlike traditional desktop or mobile GUI benchmarks, EgoXR-GUI evaluates whether multimodal large language models (MLLMs) can reason about virtual interfaces embedded within hybrid digital–physical environments.

## Overview

  • Dataset Size: 1,070 carefully curated examples. (The internal annotation pool was larger; the publicly released benchmark retains exactly 1,070 validated, high-quality target grounding instructions across diverse spatial scenarios.)
  • Platform: Apple Vision Pro and other 3D/XR environments.
  • Task Types:
    1. Direct Grounding: Directly identifying the referenced UI element.
    2. Spatial Grounding: Reasoning about UI elements based on 3D spatial properties.
    3. Semantic Grounding: Reasoning based on the text or icon semantics of the UI elements.
  • Languages Supported: English (instruction_en) and Chinese (instruction_cn); a minimal loading-and-filtering sketch follows below.
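
Examples can be loaded with the Hugging Face `datasets` library. The snippet below is only a sketch: the repository id `anonymous/EgoXR-GUI`, the `train` split name, and the `"semantic"` label string are assumptions for illustration, not values confirmed by this card.

```python
from datasets import load_dataset

# Hypothetical repository id; replace with the actual path of this dataset card.
ds = load_dataset("anonymous/EgoXR-GUI", split="train")

# Keep only the semantic-grounding subset. The "task type" key mirrors the
# `choices` struct documented below; the exact label string is an assumption.
semantic = ds.filter(lambda ex: ex["choices"]["task type"] == "semantic")

print(len(ds), "examples total;", len(semantic), "semantic-grounding examples")
print(semantic[0]["instruction_en"])
```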

## Data Fields

Each example contains the following fields (a small evaluation sketch follows the list):

  • task_id & annotation_id: Unique identifiers for the task and its individual annotation.
  • sample_id: External sample identifier linking the example back to its original data source.
  • image: The egocentric view captured from the XR headset/environment.
  • instruction_en: The grounding prompt in English.
  • instruction_cn: The grounding prompt in Chinese.
  • gaze_point: The tracked eye gaze coordinate [x, y] representing the user's attention.
  • choices: Structured dictionary showing context tags:
    • is_same_window
    • ui_type
    • platform
    • scenario
    • place
    • activity
    • task type
  • target_bbox: The ground-truth target region, containing x, y, width, height, rotation, string labels, and the original image dimensions (original_width, original_height).
  • objects: The target bounding box in the standardized Hugging Face object-detection format, used by the Dataset Viewer for visualization.
  • is_ok: Boolean quality-control flag.
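
For grounding evaluation, a common check is whether a predicted point falls inside the target box. The sketch below applies such a check to target_bbox and gaze_point; it assumes both use the same coordinate system (this card does not state whether target_bbox is in pixels or in percentages of original_width/original_height), and the helper names are hypothetical.

```python
def point_in_bbox(px: float, py: float, bbox: dict) -> bool:
    """Return True if the point (px, py) lies inside target_bbox.

    Assumes the point and the bbox fields (x, y, width, height) share one
    coordinate convention (both pixels or both normalized); verify against a
    few samples before relying on this for evaluation.
    """
    return (bbox["x"] <= px <= bbox["x"] + bbox["width"]
            and bbox["y"] <= py <= bbox["y"] + bbox["height"])


def gaze_hits_target(sample: dict) -> bool:
    """Check whether the recorded gaze point falls inside the target box.

    Field access mirrors the schema above; `sample` is one dataset row.
    """
    gx, gy = sample["gaze_point"]
    return point_in_bbox(gx, gy, sample["target_bbox"])
```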