---
license: mit
---
<div align="center">
<h1>
TeleEgo: <br>
Benchmarking Egocentric AI Assistants in the Wild
</h1>

<!-- Project badges -->
<p>
<!-- <a href="https://huggingface.co/datasets/David0219/TeleEgo">
<img alt="Hugging Face" src="https://img.shields.io/badge/HuggingFace-Dataset-orange">
</a> -->
<a href="https://arxiv.org/abs/2510.23981">
<img alt="arXiv" src="https://img.shields.io/badge/ArXiv-2510.23981-b31b1b.svg">
</a>
<a href="https://programmergg.github.io/jrliu.github.io/">
<img alt="Page" src="https://img.shields.io/badge/Project%20Page-Link-green">
</a>
</p>

<!-- <img src="assets/teaser.png" alt="Teaser" style="width:80%; max-width:700px;"> -->

📢 **Note**: This project is still under active development, and the benchmark will be continuously updated.
</div>



## 📌 Introduction

**TeleEgo** is a comprehensive **omni benchmark** designed for **multi-person, multi-scene, multi-task, and multimodal long-term memory reasoning** in egocentric video streams.
It reflects realistic personal-assistant scenarios in which continuous egocentric video is collected over hours or even days, requiring models to sustain long-term context and to be evaluated on **memory, understanding, and cross-memory reasoning**. **Omni** here means that TeleEgo covers the full spectrum of **roles, scenes, tasks, modalities, and memory horizons**, offering all-round evaluation for egocentric AI assistants.

**TeleEgo provides:**

- 🧠 **Omni-scale, diverse egocentric data** from 5 roles across 4 daily scenarios.
- 🎤 **Multi-modal annotations**: video, narration, and speech transcripts.
- ❓ **Fine-grained QA benchmark**: 3 cognitive dimensions, 12 subcategories.

---

## 📊 Dataset Overview

- **Participants**: 5 (balanced gender)
- **Scenarios**:
  - Work & Study
  - Lifestyle & Routines
  - Social Activities
  - Outings & Culture
- **Recording**: 3 days/participant (~14.4 hours each)
- **Modalities**:
  - Egocentric video streams
  - Speech & conversations
  - Narration and event descriptions

---
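
As a back-of-the-envelope check on the scale implied by these figures (the constants below simply restate the numbers above; the derived totals are illustrative, not official statistics):

```python
# Corpus-scale sketch from the figures stated above (illustrative only).
NUM_PARTICIPANTS = 5
HOURS_PER_PARTICIPANT = 14.4   # recorded over 3 days per participant
RECORDING_DAYS = 3

total_hours = NUM_PARTICIPANTS * HOURS_PER_PARTICIPANT       # whole-corpus footage
hours_per_day = HOURS_PER_PARTICIPANT / RECORDING_DAYS       # per participant per day

print(f"Total footage: ~{total_hours:.0f} h "
      f"(~{hours_per_day:.1f} h per participant per day)")
```

So the full corpus amounts to roughly 72 hours of continuous egocentric recording.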

## 🧪 Benchmark Tasks

TeleEgo-QA evaluates models along **three main dimensions**:

1. **Memory**
   - Short-term / Long-term / Ultra-long Memory
   - Entity Tracking
   - Temporal Comparison & Interval

2. **Understanding**
   - Causal Understanding
   - Intent Inference
   - Multi-step Reasoning
   - Cross-modal Understanding

3. **Cross-Memory Reasoning**
   - Cross-temporal Causality
   - Cross-entity Relation
   - Temporal Chain Understanding

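Written out as a small data structure, one plausible reading of this taxonomy (splitting the first Memory bullet into its three horizons) recovers the 12 subcategories reported in the introduction; the dictionary layout itself is just an illustration, not the dataset's actual schema:

```python
# TeleEgo-QA taxonomy as listed above: 3 dimensions, 12 subcategories.
TAXONOMY = {
    "Memory": [
        "Short-term Memory", "Long-term Memory", "Ultra-long Memory",
        "Entity Tracking", "Temporal Comparison & Interval",
    ],
    "Understanding": [
        "Causal Understanding", "Intent Inference",
        "Multi-step Reasoning", "Cross-modal Understanding",
    ],
    "Cross-Memory Reasoning": [
        "Cross-temporal Causality", "Cross-entity Relation",
        "Temporal Chain Understanding",
    ],
}

num_subcategories = sum(len(subs) for subs in TAXONOMY.values())
print(num_subcategories)  # 12
```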
Each QA instance includes:

- Question type: Single-choice, Multi-choice, Binary, or Open-ended

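As a concrete illustration, a QA instance covering these four question types could be modeled as below. The field names (`question_type`, `options`, `answer`) and the example question are hypothetical, chosen for this sketch; they are not TeleEgo's actual data schema.

```python
# Hypothetical QA-instance layout (illustrative; not the actual TeleEgo schema).
from dataclasses import dataclass, field
from typing import List, Optional

QUESTION_TYPES = {"single-choice", "multi-choice", "binary", "open-ended"}

@dataclass
class QAInstance:
    question: str
    question_type: str                                 # one of QUESTION_TYPES
    options: List[str] = field(default_factory=list)   # empty for open-ended
    answer: Optional[str] = None

    def __post_init__(self) -> None:
        if self.question_type not in QUESTION_TYPES:
            raise ValueError(f"unknown question type: {self.question_type}")

# Example usage (hypothetical content):
qa = QAInstance(
    question="What did the wearer put in the backpack this morning?",
    question_type="single-choice",
    options=["Laptop", "Umbrella", "Notebook", "Water bottle"],
    answer="Laptop",
)
print(qa.question_type)  # single-choice
```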
<!-- ---

---
-->
<!-- ## Baselines
![Baseline 1](assets/res1.png)
![Baseline 2](assets/res2.png)
---

## 🤝 Collaborators

Thanks to these amazing people for contributing to the project:

<a href="https://github.com/rebeccaeexu">
<img src="https://avatars.githubusercontent.com/rebeccaeexu" width="60px" style="border-radius:50%" />
</a>
<a href="https://github.com/DavisWANG0">
<img src="https://avatars.githubusercontent.com/DavisWANG0" width="60px" style="border-radius:50%" />
</a>
<a href="https://github.com/H-oliday">
<img src="https://avatars.githubusercontent.com/H-oliday" width="60px" style="border-radius:50%" />
</a>
<a href="https://github.com/Xiaolong-RRL">
<img src="https://avatars.githubusercontent.com/Xiaolong-RRL" width="60px" style="border-radius:50%" />
</a>
<a href="https://github.com/Programmergg">
<img src="https://avatars.githubusercontent.com/Programmergg" width="60px" style="border-radius:50%" />
</a>
<a href="https://github.com/yiheng-wang-duke">
<img src="https://avatars.githubusercontent.com/yiheng-wang-duke" width="60px" style="border-radius:50%" />
</a>
<a href="https://github.com/cocowy1">
<img src="https://avatars.githubusercontent.com/cocowy1" width="60px" style="border-radius:50%" />
</a>
<a href="https://github.com/chxy95">
<img src="https://avatars.githubusercontent.com/chxy95" width="60px" style="border-radius:50%" />
</a> -->

## 📜 Citation

If you find **TeleEgo** useful in your research, please cite:

```bibtex
@misc{yan2025teleegobenchmarkingegocentricai,
  title={TeleEgo: Benchmarking Egocentric AI Assistants in the Wild},
  author={Jiaqi Yan and Ruilong Ren and Jingren Liu and Shuning Xu and Ling Wang and Yiheng Wang and Yun Wang and Long Zhang and Xiangyu Chen and Changzhi Sun and Jixiang Luo and Dell Zhang and Hao Sun and Chi Zhang and Xuelong Li},
  year={2025},
  eprint={2510.23981},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2510.23981},
}
```

## 🪪 License

This project is licensed under the **MIT License**.
Use of the dataset is restricted under a **research-only license**.

---

<!-- ## References

* EgoLife: Towards Egocentric Life Assistant [\[arXiv:2503.03803\]](https://arxiv.org/abs/2503.03803)
* M3-Agent: Seeing, Listening, Remembering, and Reasoning [\[arXiv:2508.09736\]](https://arxiv.org/abs/2508.09736)
* HourVideo: 1-Hour Video-Language Understanding [\[arXiv:2411.04998\]](https://arxiv.org/abs/2411.04998) -->


## 📬 Contact

If you have any questions, please feel free to reach out: chxy95@gmail.com.

---

<div align="center">

<strong>✨ TeleEgo is an omni benchmark and a step toward building personalized AI assistants with true long-term memory, reasoning, and decision-making in real-world wearable scenarios. ✨</strong>

</div>

<!-- <br/> -->

<!-- <div align="center" style="margin-top: 10px;">
<img src="assets/TeleAI.jpg" alt="TeleAI Logo" width="120px" />
&nbsp;&nbsp;&nbsp;
<img src="assets/TeleEgo.png" alt="TeleEgo Logo" width="120px" />
</div>
-->