---
license: mit
---
<div align="center">
<h1>
TeleEgo: <br>
Benchmarking Egocentric AI Assistants in the Wild
</h1>
<!-- Project badges -->
<p>
<a href="https://arxiv.org/abs/2510.23981">
<img alt="arXiv" src="https://img.shields.io/badge/ArXiv-2510.23981-b31b1b.svg">
</a>
<a href="https://programmergg.github.io/jrliu.github.io/">
<img alt="Page" src="https://img.shields.io/badge/Project Page-Link-green">
</a>
<a href="https://github.com/TeleAI-UAGI/TeleEgo/">
<img alt="GitHub" src="https://img.shields.io/badge/GitHub-Repository-blue?logo=github">
</a>
</p>
<!-- <img src="assets/teaser.png" alt="Teaser" style="width:80%; max-width:700px;"> -->
📢 **Note**: This project is still under active development, and the benchmark will be continuously updated.
</div>
## 📌 Introduction
**TeleEgo** is a comprehensive **omni benchmark** designed for **multi-person, multi-scene, multi-task, and multimodal long-term memory reasoning** in egocentric video streams.
It reflects realistic personal assistant scenarios in which continuous egocentric video is collected over hours or even days, requiring models to maintain long-term **memory**, demonstrate **understanding**, and perform **cross-memory reasoning**. **Omni** here means that TeleEgo covers the full spectrum of **roles, scenes, tasks, modalities, and memory horizons**, offering all-round evaluation for egocentric AI assistants.
**TeleEgo provides:**
- 🧠 **Omni-scale, diverse egocentric data** from 5 roles across 4 daily scenarios.
- 🎤 **Multi-modal annotations**: video, narration, and speech transcripts.
- ❓ **Fine-grained QA benchmark**: 3 cognitive dimensions, 12 subcategories.
---
## 📊 Dataset Overview
- **Participants**: 5 (gender-balanced)
- **Scenarios**:
- Work & Study
- Lifestyle & Routines
- Social Activities
- Outings & Culture
- **Recording**: 3 days/participant (~14.4 hours each)
- **Modalities**:
- Egocentric video streams
- Speech & conversations
- Narration and event descriptions
---
## Download
```bash
# Extract (only need to specify the first file)
7z x archive.7z.001
# Or extract to a specific directory
7z x archive.7z.001 -o./extracted_data
```
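The archive parts can also be fetched programmatically before extraction. Below is a minimal download sketch using `huggingface_hub`; the `repo_id` shown is a placeholder, so substitute the id displayed at the top of this dataset page.

```python
# Minimal download sketch (assumes `pip install huggingface_hub`).
from huggingface_hub import snapshot_download

# NOTE: repo_id is a placeholder -- use this dataset page's actual id.
local_dir = snapshot_download(
    repo_id="TeleAI-UAGI/TeleEgo",
    repo_type="dataset",
    local_dir="./TeleEgo_raw",
)
print("Downloaded to:", local_dir)
```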
## Dataset Structure
After extraction, the dataset structure is:
```
TeleEgo/
├── merged_P1_A.json     # QA annotations for Participant 1
├── merged_P2_A.json     # QA annotations for Participant 2
├── merged_P3_A.json     # QA annotations for Participant 3
├── merged_P4_A.json     # QA annotations for Participant 4
├── merged_P5_A.json     # QA annotations for Participant 5
├── merged_P1.mp4        # Video stream for Participant 1 (~46GB)
├── merged_P2.mp4        # Video stream for Participant 2 (~35GB)
├── merged_P3.mp4        # Video stream for Participant 3 (~58GB)
├── merged_P4.mp4        # Video stream for Participant 4 (~57GB)
├── merged_P5.mp4        # Video stream for Participant 5 (~38GB)
├── timeline_P1.json     # Temporal annotations for Participant 1
├── timeline_P2.json     # Temporal annotations for Participant 2
├── timeline_P3.json     # Temporal annotations for Participant 3
├── timeline_P4.json     # Temporal annotations for Participant 4
└── timeline_P5.json     # Temporal annotations for Participant 5
```
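A minimal loading sketch for the extracted files follows. It assumes only that the annotation files are standard JSON as listed above; since the exact annotation schema is not specified here, the snippet simply inspects the top-level structure rather than assuming particular fields.

```python
import json

# Load QA annotations and temporal annotations for Participant 1.
with open("TeleEgo/merged_P1_A.json", encoding="utf-8") as f:
    qa = json.load(f)
with open("TeleEgo/timeline_P1.json", encoding="utf-8") as f:
    timeline = json.load(f)

# Inspect the top-level structure instead of assuming a schema.
for name, obj in (("QA", qa), ("timeline", timeline)):
    if isinstance(obj, list):
        print(f"{name}: list of {len(obj)} entries")
    elif isinstance(obj, dict):
        print(f"{name}: dict with keys {list(obj)[:10]}")
```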
## Alternative Download Methods
If you have difficulty accessing Hugging Face, you can also download the dataset from:
**Baidu Netdisk (็พๅบฆ็ฝ็)**
```
Link: https://pan.baidu.com/s/1TSqfjqeaXdP2TWEpiy_3KA?pwd=7wmh
```
The Baidu Netdisk version contains the **uncompressed data files** (MP4 videos and JSON annotations) directly, so no extraction is needed.
## 🧪 Benchmark Tasks
TeleEgo-QA evaluates models along **three main dimensions**:
1. **Memory**
- Short-term / Long-term / Ultra-long Memory
- Entity Tracking
- Temporal Comparison & Interval
2. **Understanding**
- Causal Understanding
- Intent Inference
- Multi-step Reasoning
- Cross-modal Understanding
3. **Cross-Memory Reasoning**
- Cross-temporal Causality
- Cross-entity Relation
- Temporal Chain Understanding
Each QA instance uses one of four question types: **single-choice**, **multi-choice**, **binary**, or **open-ended** (illustrated in the sketch below).
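The official evaluation protocol is defined in the paper; purely to illustrate how the four answer formats differ, here is a hypothetical scoring sketch. The type names and matching rules are assumptions for this example, not the official TeleEgo scorer.

```python
# Illustrative only: NOT the official TeleEgo evaluation protocol.
def score(qtype: str, pred, gold) -> float:
    """Return a score in [0, 1] for one QA instance."""
    if qtype in ("single-choice", "binary"):
        # One canonical answer: normalized exact match.
        return float(str(pred).strip().lower() == str(gold).strip().lower())
    if qtype == "multi-choice":
        # Several correct options: require the exact set of choices.
        return float(set(pred) == set(gold))
    if qtype == "open-ended":
        # Free-form answers are usually graded by a human or LLM judge;
        # simple token overlap stands in for that judge here.
        p, g = set(str(pred).lower().split()), set(str(gold).lower().split())
        return len(p & g) / max(len(g), 1)
    raise ValueError(f"unknown question type: {qtype}")

print(score("multi-choice", ["A", "C"], ["C", "A"]))  # 1.0
```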
<!-- ## Baselines
---
## 🤝 Collaborators
Thanks to these amazing people for contributing to the project:
<a href="https://github.com/rebeccaeexu">
<img src="https://avatars.githubusercontent.com/rebeccaeexu" width="60px" style="border-radius:50%" />
</a>
<a href="https://github.com/DavisWANG0">
<img src="https://avatars.githubusercontent.com/DavisWANG0" width="60px" style="border-radius:50%" />
</a>
<a href="https://github.com/H-oliday">
<img src="https://avatars.githubusercontent.com/H-oliday" width="60px" style="border-radius:50%" />
</a>
<a href="https://github.com/Xiaolong-RRL">
<img src="https://avatars.githubusercontent.com/Xiaolong-RRL" width="60px" style="border-radius:50%" />
</a>
<a href="https://github.com/Programmergg">
<img src="https://avatars.githubusercontent.com/Programmergg" width="60px" style="border-radius:50%" />
</a>
<a href="https://github.com/yiheng-wang-duke">
<img src="https://avatars.githubusercontent.com/yiheng-wang-duke" width="60px" style="border-radius:50%" />
</a>
<a href="https://github.com/cocowy1">
<img src="https://avatars.githubusercontent.com/cocowy1" width="60px" style="border-radius:50%" />
</a>
<a href="https://github.com/chxy95">
<img src="https://avatars.githubusercontent.com/chxy95" width="60px" style="border-radius:50%" />
</a> -->
## 📖 Citation
If you find **TeleEgo** useful in your research, please cite:
```bibtex
@article{yan2025teleego,
title={TeleEgo: Benchmarking Egocentric AI Assistants in the Wild},
author={Yan, Jiaqi and Ren, Ruilong and Liu, Jingren and Xu, Shuning and Wang, Ling and Wang, Yiheng and Wang, Yun and Zhang, Long and Chen, Xiangyu and Sun, Changzhi and others},
journal={arXiv preprint arXiv:2510.23981},
year={2025}
}
```
## 🪪 License
This project is licensed under the **MIT License**.
Dataset usage is restricted under a **research-only license**.
---
## 📬 Contact
If you have any questions, please feel free to reach out: chxy95@gmail.com.
---
<div align="center">
<strong>✨ TeleEgo is an omni benchmark and a step toward building personalized AI assistants with true long-term memory, reasoning, and decision-making in real-world wearable scenarios. ✨</strong>
</div>
<!-- <br/> -->
<!-- <div align="center" style="margin-top: 10px;">
<img src="assets/TeleAI.jpg" alt="TeleAI Logo" width="120px" />
<img src="assets/TeleEgo.png" alt="TeleEgo Logo" width="120px" />
</div>
-->