---
dataset_info:
  features:
    - name: ep_id
      dtype: string
    - name: video
      dtype: string
    - name: question
      dtype: string
    - name: answer
      dtype: string
    - name: task_id
      dtype: string
    - name: high_level_category
      dtype: string
    - name: low_level_category
      dtype: string
    - name: num_interactions
      dtype: int64
  splits:
    - name: train
      num_bytes: 107506980
      num_examples: 79213
    - name: validation
      num_bytes: 9653447
      num_examples: 5870
  download_size: 14758637
  dataset_size: 117160427
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
      - split: validation
        path: data/validation-*
license: apache-2.0
task_categories:
  - question-answering
language:
  - en
tags:
  - robotics
  - embodied-ai
pretty_name: findingdory
size_categories:
  - 10K<n<100K
---

# FindingDory: A Benchmark to Evaluate Memory in Embodied Agents

Karmesh Yadav*, Yusuf Ali*, Gunshi Gupta, Yarin Gal, Zsolt Kira
Current vision-language models (VLMs) struggle with long-term memory in embodied tasks. To address this, we introduce **FindingDory**, a benchmark in Habitat that evaluates memory-based reasoning across 60 long-horizon tasks. In this repo, we release the FindingDory Video Dataset. Each video contains images collected from a robot's egocentric view as it navigates realistic indoor environments and interacts with objects. This dataset was used to train and evaluate the high-level SFT agent in the FindingDory benchmark.

# Usage

```python
from datasets import load_dataset

dataset = load_dataset("yali30/findingdory")
```

A short sketch for inspecting individual episodes is included at the end of this card.

# Dataset Structure

| Field name                | Description                                                                                                          |
| ------------------------- | -------------------------------------------------------------------------------------------------------------------- |
| **ep\_id**                | Episode ID.                                                                                                            |
| **video**                 | Relative path of the video clip.                                                                                       |
| **question**              | Question posed to the agent based on the episode.                                                                      |
| **answer**                | Ground-truth answer, stored as a list of image indices.                                                                |
| **task\_id**              | Identifier indicating which task template the episode belongs to (string).                                             |
| **high\_level\_category** | High-level task category label (options: Single-Goal Spatial Tasks, Single-Goal Temporal Tasks, Multi-Goal Tasks).     |
| **low\_level\_category**  | Fine-grained task category label (e.g., Interaction-Order, Room Visitation).                                           |
| **num\_interactions**     | Number of objects the robot interacts with during experience collection.                                               |

Notes:

* The validation split contains all 60 tasks. The training split contains only 55 tasks because the 5 "Object Attributes" tasks are withheld from training.
* A subsampled version of the dataset (96 frames per episode) is available [here](https://huggingface.co/datasets/yali30/findingdory-subsampled-96).

# 📄 Citation

```bibtex
@article{yadav2025findingdory,
  title   = {FindingDory: A Benchmark to Evaluate Memory in Embodied Agents},
  author  = {Yadav, Karmesh and Ali, Yusuf and Gupta, Gunshi and Gal, Yarin and Kira, Zsolt},
  journal = {arXiv preprint arXiv:2506.15635},
  year    = {2025}
}
```
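# Example: Inspecting an Episode

The snippet below is a minimal sketch for sanity-checking the data after loading. Note that `answer` has dtype `string`: decoding it with `ast.literal_eval` assumes it encodes a Python-style list of image indices, which is an assumption worth verifying on a raw sample before relying on it.

```python
import ast
from collections import Counter

from datasets import load_dataset

dataset = load_dataset("yali30/findingdory")

# Look at one validation episode. Field names follow the table above.
example = dataset["validation"][0]
print(example["ep_id"], "|", example["question"])

# `answer` is stored as a string; the card says it holds a list of image
# indices, so literal_eval is one plausible way to decode it. This is an
# assumption -- print(example["answer"]) first to confirm the format.
answer_indices = ast.literal_eval(example["answer"])
print("ground-truth frame indices:", answer_indices)

# Count training episodes per high-level category (labels come from the
# table above: Single-Goal Spatial/Temporal Tasks and Multi-Goal Tasks).
print(Counter(dataset["train"]["high_level_category"]))
```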