EgoTL-DATA
EgoTL Overview
EgoTL is an egocentric benchmark for long-horizon household tasks introduced in "EgoTL: Egocentric Think-Aloud Chains for Long-Horizon Tasks." The project studies how well current vision-language models and world models handle step-by-step reasoning, spatial grounding, navigation, and manipulation in real home environments.
Unlike many egocentric datasets that rely on post-hoc labels or noisy automatic annotations, EgoTL is built around a real-time say-before-act collection protocol. Before taking an action, the participant verbalizes the next goal or intention, producing think-aloud supervision aligned with the visual stream. The project also emphasizes spatial grounding and long-horizon task structure, aiming to support research on embodied reasoning rather than only short-horizon recognition.
According to the official project website, EgoTL is designed to expose failure modes of modern foundation models on long-horizon egocentric reasoning and to support improvement through human-aligned supervision. The benchmark highlights errors such as hallucinated objects, skipped steps, weak spatial grounding, and inconsistent long-horizon planning.
Official project website: https://ego-tl.github.io/
What EgoTL Provides
EgoTL centers on egocentric household tasks with think-aloud chains, task structure, and clip-level grounding. The project website describes the dataset and benchmark as supporting:
- long-horizon household task reasoning
- egocentric navigation and manipulation understanding
- think-aloud chain-of-thought style supervision
- action and scene reasoning under cluttered real environments
- spatially grounded evaluation and long-horizon generation analysis
The official EgoTL-Bench presentation highlights six task dimensions spanning three layers:
- memory-conditioned planning
- scene-aware action reasoning
- next action prediction
- action recognition
- direction recognition
- distance estimation
This Repository
This Hugging Face dataset repository hosts video clips and JSON annotations associated with EgoTL-format data. The uploaded files are organized as:
episodes_json_selected_0000_0020_for_tar/
videos_selected_episodes_0000_0020/
The JSON files include episode-level task descriptions and ordered segment annotations. Each segment contains timing metadata, clip references, action or goal information, outcome tags, and think-aloud text.
Data Format
Example annotation fields include:
- episode_id
- task
- segments
- name
- t0
- t1
- duration
- video_name
- clip_path
- goal
- evidence
- action
- success_event
- txt
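The fields above can be read with standard JSON tooling. The snippet below is a minimal sketch: the record values and the nesting of segment fields under a `segments` list are assumptions for illustration, not the confirmed schema, though the field names follow the list above.

```python
import json

# Hypothetical episode record; field names follow the card's "Data Format"
# list, but the values and exact nesting are illustrative assumptions.
episode_json = """
{
  "episode_id": "episode_0003",
  "task": "prepare a cup of tea",
  "segments": [
    {
      "name": "seg_000",
      "t0": 0.0,
      "t1": 8.4,
      "duration": 8.4,
      "video_name": "episode_0003.mp4",
      "clip_path": "videos_selected_episodes_0000_0020/episode_0003_seg_000.mp4",
      "goal": "walk to the kitchen counter",
      "evidence": "counter visible ahead",
      "action": "navigate",
      "success_event": true,
      "txt": "I'm going to walk over to the counter first."
    }
  ]
}
"""

def segment_clips(episode: dict) -> list[tuple[str, str]]:
    """Return (clip_path, think-aloud text) pairs in temporal order."""
    segments = sorted(episode["segments"], key=lambda s: s["t0"])
    return [(s["clip_path"], s["txt"]) for s in segments]

episode = json.loads(episode_json)
pairs = segment_clips(episode)
```

Sorting by `t0` preserves the ordered, long-horizon task structure the card describes, so downstream models see clips and their aligned think-aloud text in the order they occurred.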
Intended Use
This repository is suitable for work on:
- egocentric video understanding
- long-horizon planning and reasoning
- action recognition and next-step prediction
- alignment between video and think-aloud text
- spatially grounded embodied AI evaluation
Citation
If you use EgoTL or files from this repository, please cite the EgoTL project and paper from the official website:
@misc{liu2026egotlegocentricthinkaloudchains,
  title={EgoTL: Egocentric Think-Aloud Chains for Long-Horizon Tasks},
  author={Lulin Liu and Dayou Li and Yiqing Liang and Sicong Jiang and Hitesh Vijay and Hezhen Hu and Xuhai Xu and Zirui Liu and Srinivas Shakkottai and Manling Li and Zhiwen Fan},
  year={2026},
  eprint={2604.09535},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2604.09535}
}
Acknowledgment
The project description in this card is based on the official EgoTL website: https://ego-tl.github.io/