---
license: cc-by-nc-nd-4.0
task_categories:
- video-classification
- object-detection
- question-answering
language:
- en
size_categories:
- n<1K
tags:
- egocentric-vision
- exocentric-vision
- gaze-tracking
- referential-expressions
- cooking
- spatial-reasoning
- gaze-speech-synchronization
- multimodal-grounding
pretty_name: "Look and Tell: KTH-ARIA Referential Dataset"
---
# Look and Tell — A Dataset for Multimodal Grounding Across Egocentric and Exocentric Views
This page hosts the **KTH-ARIA Referential / "Look and Tell"** dataset, introduced in our poster **"Look and Tell: A Dataset for Multimodal Grounding Across Egocentric and Exocentric Views"**, presented at the **NeurIPS 2025 SpaVLE Workshop (SPACE in Vision, Language, and Embodied AI), San Diego**.
<p align="center">
<a href="https://huggingface.co/datasets/annadeichler/KTH-ARIA-referential/resolve/main/examples/look_tell_poster-v3.jpg">
<img src="https://huggingface.co/datasets/annadeichler/KTH-ARIA-referential/resolve/main/examples/look_tell_poster-v3.jpg" width="250">
</a>
</p>




## Dataset Description
This dataset was collected to study whether people synchronize their gaze and speech when referring to objects. Participants identified food items from a recipe while wearing Aria smart glasses, which recorded their eye movements and speech in real time. The result is a rich resource for analyzing gaze–speech synchronization and for studying how people visually and verbally ground references in real environments.
### Key Features
- **Dual perspectives**: Egocentric (first-person via ARIA glasses) and exocentric (third-person via GoPro camera) video recordings
- **Gaze tracking**: Eye-tracking data synchronized with video
- **Audio & transcription**: Speech recordings with automatic word-level transcription (WhisperX)
- **Referential expressions**: Natural language references to objects with temporal and spatial grounding
- **Recipe metadata**: Ingredient locations and preparation steps with spatial annotations
- **125 recordings**: 25 participants × 5 recipes
- **Total duration**: 3.7 hours (average recording: 108 seconds)
### Dataset Details
- **Curated by:** KTH Royal Institute of Technology
- **Language(s):** English
- **License:** CC BY-NC-ND 4.0 ([Link](https://creativecommons.org/licenses/by-nc-nd/4.0/))
- **Participants:** 25 individuals (7 men, 18 women)
- **Data Collection Setup:** Participants memorized the ingredients and steps of five recipes and verbally narrated the steps while wearing ARIA glasses
### Direct Use
This dataset is suitable for research in:
- Referential expression grounding
- Gaze and speech synchronization
- Egocentric video understanding
- Multi-modal cooking activity recognition
- Spatial reasoning with language
- Human-robot interaction and multimodal dialogue systems
- Eye-tracking studies in task-based environments
### Out-of-Scope Use
- The dataset is licensed for non-commercial use only (CC BY-NC-ND 4.0) and must not be used for commercial applications
- It should not be used in contexts where privacy-sensitive information about participants could be inferred or manipulated
## Dataset Structure
```
data/
  par_01/
    raw/
      rec_01/
        ego_video.mp4                 # Egocentric video (ARIA glasses)
        exo_video.mp4                 # Exocentric video (GoPro camera)
        audio.wav                     # Audio recording
        ego_gaze.csv                  # Gaze tracking data
      rec_02/
      ...
    annotations/
      v1/
        rec_01/
          whisperx_transcription.tsv  # ASR word-level transcription
          references.csv              # Referential expressions with gaze fixations
        rec_02/
        ...
  par_02/
  ...
  manifests/
    metadata.parquet                  # Dataset metadata
    metadata.csv                      # CSV version
    recipes.json                      # Recipe details with ingredient locations
    schema.md                         # Data format documentation
```
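Given this layout, recordings can be enumerated directly from the file system. A minimal sketch using `pathlib`, assuming the tree above is checked out locally under `data/` (the glob pattern is illustrative):

```python
from pathlib import Path

root = Path('data')

# Walk every raw recording folder (par_XX/raw/rec_YY)
for rec_dir in sorted(root.glob('par_*/raw/rec_*')):
    participant = rec_dir.parts[1]   # e.g. 'par_01'
    recording = rec_dir.name         # e.g. 'rec_01'
    has_gaze = (rec_dir / 'ego_gaze.csv').exists()
    print(f'{participant}/{recording}: gaze={has_gaze}')
```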
## Data Fields
### Raw Data
**Egocentric Video** (`ego_video.mp4`)

- First-person perspective from ARIA glasses
- 30 FPS
- Captures participant's point of view during cooking

**Exocentric Video** (`exo_video.mp4`)

- Third-person perspective from GoPro camera
- 30 FPS
- Captures overall scene and participant actions

**Audio** (`audio.wav`)

- Sample rate: 48 kHz
- Format: WAV
- Contains participant's verbal instructions

**Gaze Data** (`ego_gaze.csv`)

- Real-time eye movement tracking from ARIA glasses
- Timestamp-synchronized with video
- Gaze coordinates and fixation data
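As a quick check, the gaze CSV can be inspected with pandas; rather than assuming column names, print the schema (the authoritative format is documented in `manifests/schema.md`):

```python
import pandas as pd

# Load the gaze samples for one recording
gaze = pd.read_csv('data/par_01/raw/rec_01/ego_gaze.csv')

# Inspect the actual columns instead of guessing them
print(gaze.columns.tolist())
print(gaze.head())
```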
### Annotations
**Transcription** (`whisperx_transcription.tsv`)

- Word-level automatic speech recognition (WhisperX)
- Timestamps for each word
- Speaker diarization

**References** (`references.csv`)

- Referential expressions (e.g., "the red paprika")
- Temporal alignment with video and speech
- Gaze fixations during utterances
- Object references with spatial grounding
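A minimal sketch of reading both annotation files for one recording (WhisperX emits tab-separated output, hence `sep='\t'`; exact column names should be checked against `manifests/schema.md`):

```python
import pandas as pd

rec = 'data/par_01/annotations/v1/rec_01'

# Word-level ASR output (tab-separated)
words = pd.read_csv(f'{rec}/whisperx_transcription.tsv', sep='\t')

# Referential expressions with their gaze fixations
refs = pd.read_csv(f'{rec}/references.csv')

print(f'{len(words)} words, {len(refs)} referential expressions')
```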
### Metadata
**`metadata.parquet`** - One row per recording with:

- `participant_id`: Participant identifier (par_01 to par_25)
- `recording_id`: Recording identifier (rec_01 to rec_05)
- `recording_uid`: Unique recording ID (par_XX_rec_YY)
- `recipe_id`: Recipe identifier (recipe_01 to recipe_05)
- `duration_sec`: Video duration in seconds
- `ego_fps`, `exo_fps`: Frame rates
- `has_*`: Boolean flags for data availability
- `n_references`: Number of referential expressions
- `notes`: Data quality notes

**`recipes.json`** - Recipe details including:

- Recipe name and preparation steps
- Ingredients with spatial locations
- Surface mapping (table, countertop, cupboard shelf, window surface)
- Location IDs for spatial grounding
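For example, ingredient locations can be looked up directly from `recipes.json`; the access pattern and key names used below (`ingredients`, `name`, `location`) are assumptions for this sketch and should be verified against the file:

```python
import json

with open('data/manifests/recipes.json') as f:
    recipes = json.load(f)

# Assumes a dict keyed by recipe_id; key names are illustrative,
# so check the actual JSON structure before relying on them
for recipe_id, recipe in recipes.items():
    for ingredient in recipe.get('ingredients', []):
        print(recipe_id, ingredient.get('name'), '->', ingredient.get('location'))
```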
## Dataset Statistics
- **Total recordings**: 125
- **Total participants**: 25
- **Recordings per participant**: 5
- **Unique recipes**: 5
- **Average recording duration**: 108 seconds
- **Total dataset duration**: 3.7 hours
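These figures can be reproduced from the metadata table using the fields documented above:

```python
import pandas as pd

meta = pd.read_parquet('data/manifests/metadata.parquet')

print('Recordings:', len(meta))                               # 125
print('Participants:', meta['participant_id'].nunique())      # 25
print('Mean duration: %.0f s' % meta['duration_sec'].mean())  # ~108
print('Total: %.1f h' % (meta['duration_sec'].sum() / 3600))  # ~3.7
```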
## Dataset Creation
### Curation Rationale
The dataset was created to explore how gaze and speech synchronize in referential communication and whether object location influences this synchronization. It provides a rich resource for multimodal grounding research across egocentric and exocentric perspectives.
### Source Data
#### Data Collection and Processing
- **Hardware:** ARIA smart glasses, GoPro camera
- **Collection Method:** Participants wore ARIA glasses while describing recipe ingredients and steps, allowing real-time capture of gaze and verbal utterances
- **Annotation Process:**
  - Temporal correlation between gaze and speech detected using Python scripts
  - Automatic transcription using WhisperX
  - Referential expressions annotated with gaze fixations
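As a rough illustration of this kind of alignment, the sketch below selects the gaze samples that fall within a fixed window around each transcribed word's onset. The window size and the column names (`word`, `start`, `timestamp_sec`) are assumptions for the sketch, not the authors' exact procedure:

```python
import pandas as pd

WINDOW_SEC = 1.0  # illustrative window around each word onset

words = pd.read_csv('data/par_01/annotations/v1/rec_01/whisperx_transcription.tsv', sep='\t')
gaze = pd.read_csv('data/par_01/raw/rec_01/ego_gaze.csv')

# 'word', 'start' (word onset, s) and 'timestamp_sec' are assumed column names
for _, w in words.iterrows():
    near = gaze[gaze['timestamp_sec'].between(w['start'] - WINDOW_SEC,
                                              w['start'] + WINDOW_SEC)]
    print(w['word'], '->', len(near), 'gaze samples in window')
```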
## Loading the Dataset
### Using the metadata
```python
import pandas as pd
import json

# Load metadata
metadata = pd.read_parquet('data/manifests/metadata.parquet')

# Load recipes
with open('data/manifests/recipes.json') as f:
    recipes = json.load(f)

# Filter recordings by recipe
recipe_1_recordings = metadata[metadata['recipe_id'] == 'recipe_01']
```
### Using the provided loader script
```python
from scripts.load_dataset import ARIAReferentialDataset
# Initialize dataset
dataset = ARIAReferentialDataset('data')
# Load a specific recording
recording = dataset.load_recording('par_01', 'rec_01')
print(f"Recipe: {recording['recipe']['name']}")
print(f"Duration: {recording['metadata']['duration_sec']:.1f}s")
print(f"Has gaze: {recording['metadata']['has_gaze']}")
print(f"References: {recording['metadata']['n_references']}")
# Access data
gaze_df = recording['gaze']
references_df = recording['references']
```
See `scripts/load_dataset.py` for complete examples.
## Citation
If you use this dataset in your research, please cite:
```bibtex
@misc{deichler2025lookandtell,
  title={Look and Tell: A Dataset for Multimodal Grounding Across Egocentric and Exocentric Views},
  year={2025},
  eprint={2510.22672},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2510.22672},
  note={Presented at the NeurIPS 2025 SpaVLE Workshop}
}
```
## License
This dataset is released under the **Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License** (CC BY-NC-ND 4.0).
You are free to:

- **Share** — copy and redistribute the material in any medium or format

Under the following terms:

- **Attribution** — You must give appropriate credit
- **NonCommercial** — You may not use the material for commercial purposes
- **NoDerivatives** — If you remix, transform, or build upon the material, you may not distribute the modified material
## Contact
For questions or issues, please open an issue on this dataset repository or contact the KTH Royal Institute of Technology team.
## Acknowledgments
This work was conducted at KTH Royal Institute of Technology. We thank all participants who contributed their data to this research.