luuuulinnnn committed on
Commit 27a803d · verified · 1 Parent(s): 3a0ec93

Restore README section order

Files changed (1): README.md +29 -29
README.md CHANGED
@@ -11,6 +11,35 @@ size_categories:
 
 # EgoTL-DATA
 
+## EgoTL Overview
+
+EgoTL is an egocentric benchmark for long-horizon household tasks introduced in **"EgoTL: Egocentric Think-Aloud Chains for Long-Horizon Tasks."** The project studies how well current vision-language models and world models handle step-by-step reasoning, spatial grounding, navigation, and manipulation in real home environments.
+
+Unlike many egocentric datasets that rely on post-hoc labels or noisy automatic annotations, EgoTL is built around a real-time **say-before-act** collection protocol. Before taking an action, the participant verbalizes the next goal or intention, producing think-aloud supervision aligned with the visual stream. The project also emphasizes spatial grounding and long-horizon task structure, aiming to support research on embodied reasoning rather than only short-horizon recognition.
+
+According to the official project website, EgoTL is designed to expose failure modes of modern foundation models on long-horizon egocentric reasoning and to support improvement through human-aligned supervision. The benchmark highlights errors such as hallucinated objects, skipped steps, weak spatial grounding, and inconsistent long-horizon planning.
+
+Official project website: https://ego-tl.github.io/
+
+## What EgoTL Provides
+
+EgoTL centers on egocentric household tasks with think-aloud chains, task structure, and clip-level grounding. The project website describes the dataset and benchmark as supporting:
+
+- long-horizon household task reasoning
+- egocentric navigation and manipulation understanding
+- think-aloud chain-of-thought style supervision
+- action and scene reasoning under cluttered real environments
+- spatially grounded evaluation and long-horizon generation analysis
+
+The official EgoTL-Bench presentation highlights six task dimensions across three layers, including:
+
+- memory-conditioned planning
+- scene-aware action reasoning
+- next action prediction
+- action recognition
+- direction recognition
+- distance estimation
+
 ## This Repository
 
 This Hugging Face dataset repository hosts video clips and JSON annotations associated with EgoTL-format data. The uploaded files are organized as:
@@ -51,35 +80,6 @@ This repository is suitable for work on:
 - alignment between video and think-aloud text
 - spatially grounded embodied AI evaluation
 
-## EgoTL Overview
-
-EgoTL is an egocentric benchmark for long-horizon household tasks introduced in **"EgoTL: Egocentric Think-Aloud Chains for Long-Horizon Tasks."** The project studies how well current vision-language models and world models handle step-by-step reasoning, spatial grounding, navigation, and manipulation in real home environments.
-
-Unlike many egocentric datasets that rely on post-hoc labels or noisy automatic annotations, EgoTL is built around a real-time **say-before-act** collection protocol. Before taking an action, the participant verbalizes the next goal or intention, producing think-aloud supervision aligned with the visual stream. The project also emphasizes spatial grounding and long-horizon task structure, aiming to support research on embodied reasoning rather than only short-horizon recognition.
-
-According to the official project website, EgoTL is designed to expose failure modes of modern foundation models on long-horizon egocentric reasoning and to support improvement through human-aligned supervision. The benchmark highlights errors such as hallucinated objects, skipped steps, weak spatial grounding, and inconsistent long-horizon planning.
-
-Official project website: https://ego-tl.github.io/
-
-## What EgoTL Provides
-
-EgoTL centers on egocentric household tasks with think-aloud chains, task structure, and clip-level grounding. The project website describes the dataset and benchmark as supporting:
-
-- long-horizon household task reasoning
-- egocentric navigation and manipulation understanding
-- think-aloud chain-of-thought style supervision
-- action and scene reasoning under cluttered real environments
-- spatially grounded evaluation and long-horizon generation analysis
-
-The official EgoTL-Bench presentation highlights six task dimensions across three layers, including:
-
-- memory-conditioned planning
-- scene-aware action reasoning
-- next action prediction
-- action recognition
-- direction recognition
-- distance estimation
-
 ## Citation
 
 If you use EgoTL or files from this repository, please cite the EgoTL project and paper from the official website:
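The README above describes JSON annotations that align video clips with think-aloud text. A minimal sketch of consuming such an annotation is shown below; note that the actual EgoTL schema is not documented in this diff, so the field names (`task`, `steps`, `clip`, `think_aloud`, `action`) are purely illustrative assumptions:

```python
import json

# Hypothetical EgoTL-style annotation. The real field names are NOT shown in
# this README diff; "task", "steps", "clip", "think_aloud", and "action" are
# illustrative placeholders only.
SAMPLE = json.dumps({
    "task": "make tea",
    "steps": [
        {"clip": "clip_0001.mp4",
         "think_aloud": "I need to fill the kettle",
         "action": "pick up kettle"},
        {"clip": "clip_0002.mp4",
         "think_aloud": "Now I turn on the stove",
         "action": "turn stove knob"},
    ],
})

def align_clips_to_text(annotation_json: str) -> list[tuple[str, str]]:
    """Return (clip filename, think-aloud utterance) pairs in step order.

    This mirrors the say-before-act protocol: each utterance precedes the
    action shown in the paired clip.
    """
    ann = json.loads(annotation_json)
    return [(step["clip"], step["think_aloud"]) for step in ann["steps"]]

pairs = align_clips_to_text(SAMPLE)
```

Because the utterance is recorded before the action, pairing in step order preserves the intention-then-action structure that the benchmark evaluates.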