---
license: cc-by-nc-nd-4.0
task_categories:
- video-classification
- object-detection
- question-answering
language:
- en
size_categories:
- 100
---

![Example from dataset.](examples/colmap_sam3_example.gif)
![Example from dataset.](examples/output.gif)
![Example from dataset.](examples/combined.gif)
![Example from dataset.](examples/track.gif)

## Dataset Description

This dataset investigates the synchronization of gaze and speech, recorded with Aria smart glasses, to determine whether individuals exhibit visual and verbal synchronization when identifying an object. Participants were tasked with identifying food items from a recipe while wearing Aria glasses, which recorded their eye movements and speech in real time. The dataset enables analysis of gaze–speech synchronization and offers a rich resource for studying how people visually and verbally ground references in real environments.

### Key Features

- **Dual perspectives**: Egocentric (first-person via Aria glasses) and exocentric (third-person via GoPro camera) video recordings
- **Gaze tracking**: Eye-tracking data synchronized with video
- **Audio & transcription**: Speech recordings with automatic word-level transcription (WhisperX)
- **Referential expressions**: Natural language references to objects with temporal and spatial grounding
- **Recipe metadata**: Ingredient locations and preparation steps with spatial annotations
- **125 recordings**: 25 participants × 5 recipes
- **Total duration**: 3.7 hours (average recording: 108 seconds)

### Dataset Details

- **Curated by:** KTH Royal Institute of Technology
- **Language(s):** English
- **License:** CC BY-NC-ND 4.0 ([Link](https://creativecommons.org/licenses/by-nc-nd/4.0/))
- **Participants:** 25 individuals (7 men, 18 women)
- **Data Collection Setup:** Participants memorized the ingredients and steps of five recipes and verbally delivered the instructions while wearing Aria glasses

### Direct Use

This dataset is suitable for research in:

- Referential expression grounding
- Gaze and speech synchronization
- Egocentric video understanding
- Multi-modal cooking activity recognition
- Spatial reasoning with language
- Human-robot interaction and multimodal dialogue systems
- Eye-tracking studies in task-based environments

### Out-of-Scope Use

- The dataset is not intended for commercial applications without proper ethical considerations
- Misuse in contexts where privacy-sensitive information might be inferred or manipulated should be avoided

## Dataset Structure

```
data/
  par_01/
    raw/
      rec_01/
        ego_video.mp4                 # Egocentric video (Aria glasses)
        exo_video.mp4                 # Exocentric video (GoPro camera)
        audio.wav                     # Audio recording
        ego_gaze.csv                  # Gaze tracking data
      rec_02/
      ...
    annotations/
      v1/
        rec_01/
          whisperx_transcription.tsv  # ASR word-level transcription
          references.csv              # Referential expressions with gaze fixations
        rec_02/
        ...
  par_02/
  ...
  manifests/
    metadata.parquet                  # Dataset metadata
    metadata.csv                      # CSV version
    recipes.json                      # Recipe details with ingredient locations
    schema.md                         # Data format documentation
```
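For quick orientation, the snippet below resolves the files that belong to a single recording. It is a minimal sketch that assumes only the directory layout shown above (including the `v1` annotations folder); the `recording_paths` helper is illustrative and not part of the released scripts — the provided loader described under "Loading the Dataset" is the more complete entry point.

```python
from pathlib import Path

def recording_paths(root, participant_id, recording_id):
    """Resolve raw and annotation file paths for one recording,
    assuming the directory layout shown above."""
    root = Path(root)
    raw = root / participant_id / "raw" / recording_id
    ann = root / participant_id / "annotations" / "v1" / recording_id
    return {
        "ego_video": raw / "ego_video.mp4",
        "exo_video": raw / "exo_video.mp4",
        "audio": raw / "audio.wav",
        "ego_gaze": raw / "ego_gaze.csv",
        "transcription": ann / "whisperx_transcription.tsv",
        "references": ann / "references.csv",
    }

paths = recording_paths("data", "par_01", "rec_01")
missing = [name for name, path in paths.items() if not path.exists()]
print("Missing files for par_01/rec_01:", missing or "none")
```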
## Data Fields

### Raw Data

**Egocentric Video** (`ego_video.mp4`)
- First-person perspective from Aria glasses
- 30 FPS
- Captures the participant's point of view during cooking

**Exocentric Video** (`exo_video.mp4`)
- Third-person perspective from GoPro camera
- 30 FPS
- Captures the overall scene and participant actions

**Audio** (`audio.wav`)
- Sample rate: 48 kHz
- Format: WAV
- Contains the participant's verbal instructions

**Gaze Data** (`ego_gaze.csv`)
- Real-time eye movement tracking from Aria glasses
- Timestamp-synchronized with video
- Gaze coordinates and fixation data

### Annotations

**Transcription** (`whisperx_transcription.tsv`)
- Word-level automatic speech recognition (WhisperX)
- Timestamps for each word
- Speaker diarization

**References** (`references.csv`)
- Referential expressions (e.g., "the red paprika")
- Temporal alignment with video and speech
- Gaze fixations during utterances
- Object references with spatial grounding

### Metadata

**`metadata.parquet`** - One row per recording with:
- `participant_id`: Participant identifier (par_01 to par_25)
- `recording_id`: Recording identifier (rec_01 to rec_05)
- `recording_uid`: Unique recording ID (par_XX_rec_YY)
- `recipe_id`: Recipe identifier (recipe_01 to recipe_05)
- `duration_sec`: Video duration in seconds
- `ego_fps`, `exo_fps`: Frame rates
- `has_*`: Boolean flags for data availability
- `n_references`: Number of referential expressions
- `notes`: Data quality notes

**`recipes.json`** - Recipe details including:
- Recipe name and preparation steps
- Ingredients with spatial locations
- Surface mapping (table, countertop, cupboard shelf, window surface)
- Location IDs for spatial grounding

## Dataset Statistics

- **Total recordings**: 125
- **Total participants**: 25
- **Recordings per participant**: 5
- **Unique recipes**: 5
- **Average recording duration**: 108 seconds
- **Total dataset duration**: 3.7 hours

## Dataset Creation

### Curation Rationale

The dataset was created to explore how gaze and speech synchronize in referential communication and whether object location influences this synchronization. It provides a rich resource for multimodal grounding research across egocentric and exocentric perspectives.

### Source Data

#### Data Collection and Processing

- **Hardware:** Aria smart glasses, GoPro camera
- **Collection Method:** Participants wore Aria glasses while describing recipe ingredients and steps, allowing real-time capture of gaze and verbal utterances
- **Annotation Process:**
  - Temporal correlation between gaze and speech detected using Python scripts (a rough sketch follows this list)
  - Automatic transcription using WhisperX
  - Referential expressions annotated with gaze fixations
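As a rough illustration of that alignment step, the sketch below pairs gaze samples from `ego_gaze.csv` with word timings from `whisperx_transcription.tsv`. The column names (`timestamp`, `start`, `end`, `word`) and the assumption that both files share a common time base in seconds are illustrative guesses, not the documented schema; consult `data/manifests/schema.md` and `references.csv` for the released annotation format.

```python
import pandas as pd

# Hypothetical column names -- check data/manifests/schema.md for the real ones.
GAZE_TIME_COL = "timestamp"            # gaze sample time, assumed seconds
WORD_START, WORD_END = "start", "end"  # word boundaries from WhisperX, assumed seconds
WORD_COL = "word"

def gaze_during_words(gaze_csv, transcription_tsv):
    """Count the gaze samples recorded while each transcribed word was spoken."""
    gaze = pd.read_csv(gaze_csv)
    words = pd.read_csv(transcription_tsv, sep="\t")
    rows = []
    for _, w in words.iterrows():
        # Select gaze samples that fall inside the word's time span.
        mask = (gaze[GAZE_TIME_COL] >= w[WORD_START]) & (gaze[GAZE_TIME_COL] <= w[WORD_END])
        rows.append({
            "word": w[WORD_COL],
            "start": w[WORD_START],
            "end": w[WORD_END],
            "n_gaze_samples": int(mask.sum()),
        })
    return pd.DataFrame(rows)

aligned = gaze_during_words(
    "data/par_01/raw/rec_01/ego_gaze.csv",
    "data/par_01/annotations/v1/rec_01/whisperx_transcription.tsv",
)
print(aligned.head())
```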
## Loading the Dataset

### Using the metadata

```python
import pandas as pd
import json

# Load metadata
metadata = pd.read_parquet('data/manifests/metadata.parquet')

# Load recipes
with open('data/manifests/recipes.json') as f:
    recipes = json.load(f)

# Filter recordings
recipe_1_recordings = metadata[metadata['recipe_id'] == 'recipe_01']
```

### Using the provided loader script

```python
from scripts.load_dataset import ARIAReferentialDataset

# Initialize dataset
dataset = ARIAReferentialDataset('data')

# Load a specific recording
recording = dataset.load_recording('par_01', 'rec_01')
print(f"Recipe: {recording['recipe']['name']}")
print(f"Duration: {recording['metadata']['duration_sec']:.1f}s")
print(f"Has gaze: {recording['metadata']['has_gaze']}")
print(f"References: {recording['metadata']['n_references']}")

# Access data
gaze_df = recording['gaze']
references_df = recording['references']
```

See `scripts/load_dataset.py` for complete examples.

## Citation

If you use this dataset in your research, please cite:

```bibtex
@misc{deichler2025lookandtell,
  title={Look and Tell: A Dataset for Multimodal Grounding Across Egocentric and Exocentric Views},
  year={2025},
  eprint={2510.22672},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2510.22672},
  note={Presented at NeurIPS 2025 SpaVLE Workshop}
}
```

## License

This dataset is released under the **Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License** (CC BY-NC-ND 4.0).

You are free to:
- **Share** — copy and redistribute the material in any medium or format

Under the following terms:
- **Attribution** — You must give appropriate credit
- **NonCommercial** — You may not use the material for commercial purposes
- **NoDerivatives** — If you remix, transform, or build upon the material, you may not distribute the modified material

## Contact

For questions or issues, please open an issue on this dataset repository or contact the KTH Royal Institute of Technology team.

## Acknowledgments

This work was conducted at KTH Royal Institute of Technology. We thank all participants who contributed their data to this research.