Tevior committed on
Commit
eff1e50
·
verified ·
1 Parent(s): 362201d

Update README, docs (annotation plan + guidelines), scripts, src/data

Browse files
README.md CHANGED
@@ -1,123 +1,214 @@
1
- # TopoSlots Unified Motion Dataset
2
 
3
- Multi-skeleton motion dataset for topology-agnostic motion tokenization research.
4
 
5
- ## Overview
 
 
6
 
7
- | Dataset | Skeleton Type | Joints | Motions | Text | Source |
8
- |---------|:-------------|:------:|:-------:|:----:|--------|
9
- | `humanml3d/` | Human (SMPL) | 22 | 14,449 | 44K texts | AMASS MoCap |
10
- | `lafan1/` | Human (Ubisoft) | 22 | 77 | ✗ | Ubisoft La Forge |
11
- | `100style/` | Human (XSens) | 23 | 810 | style labels | 100Style (Zenodo) |
12
- | `bandai_namco/` | Human (BN) | 22 | 3,053 | ✗ | Bandai Namco Research |
13
- | `cmu_mocap/` | Human (CMU) | 31 | 2,496 | ✗ | CMU MoCap Database |
14
- | `mixamo/` | Human (Mixamo) | 67 | 2,453 | ✗ | Adobe Mixamo |
15
- | `truebones_zoo/` | 73 Animal Species | 9-145 | 1,110 | 888 captions | Truebones Zoo |
16
- | **Total** | **79 skeletons** | **9-145** | **24,448** | | |
17
 
18
- ## Data Format (Scheme C)
 
 
 
 
 
 
 
 
 
 
 
19
 
20
- All datasets use a unified `.npz` format per motion clip:
21
 
22
- ### Slot Token Input (cross-skeleton compatible)
23
- - `local_positions`: `[T, J, 3]` float32 root-relative joint positions (meters)
24
- - `velocities`: `[T, J, 3]` float32 joint velocities (m/s)
 
 
25
 
26
- ### Root Track (separate)
27
- - `root_position`: `[T, 3]` float32 global root position (meters)
28
- - `root_velocity`: `[T, 3]` float32 — root velocity (m/s)
29
 
30
- ### Decoder Ground Truth (skeleton-specific)
31
- - `joint_positions`: `[T, J, 3]` float32 — global joint positions (meters)
32
- - `local_rotations_6d`: `[T, J-1, 6]` float32 — continuous 6D rotation representation
33
- - `accelerations`: `[T, J, 3]` float32 — joint accelerations
34
- - `bone_lengths`: `[T, J]` float32 — per-frame bone lengths (meters)
35
 
36
- ### Auxiliary
37
- - `foot_contact`: `[T, 4]` float32 — [left_heel, left_toe, right_heel, right_toe] binary contact
38
 
39
- ### Metadata
40
- - `num_frames`: int — actual frame count (before padding)
41
- - `fps`: float frames per second (all resampled to 20 fps)
42
- - `skeleton_id`: str — dataset identifier
43
- - `texts`: str text descriptions joined by `|||` (empty if unavailable)
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
44
 
45
- ### Per-Dataset Files
46
  ```
47
- {dataset}/
48
- ├── skeleton.npz # Skeleton graph (joint_names, parent_indices, rest_offsets, adjacency, geodesic_dist)
49
- ├── stats.npz # Normalization statistics (mean/std for positions and velocities)
50
- ├── motions/ # Per-motion .npz files
51
- │ ├── 000000.npz
52
- │ ├── 000001.npz
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
53
  │ └── ...
54
- └── splits/ # Data splits
55
- ├── train.txt
56
- ├── val.txt
57
- ├── test.txt
58
- └── all.txt
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
59
  ```
60
 
61
- ### Truebones Zoo Additional
62
- ```
63
- truebones_zoo/
64
- ├── skeletons/ # Per-species skeleton graphs
65
- │ ├── Dog.npz
66
- │ ├── Cat.npz
67
- │ ├── Horse.npz
68
- │ └── ... (73 species)
69
- └── motions/ # Named as {species}_{index}.npz
70
- ├── Dog_0000.npz # Extra fields: species, source_file
71
- ├── Cat_0004.npz
72
- └── ...
73
- ```
74
 
75
- ## Scale & Units
76
- - All positions in **meters**
77
- - BVH datasets auto-converted from centimeters (÷100)
78
- - CMU MoCap uses custom scale factor (×0.0685)
79
- - Truebones Zoo: natural animal scale preserved (Pigeon 0.13m, Horse 1.50m, Trex 6.06m)
80
- - All motions resampled to **20 fps**
81
- - Max frames: **196** (~10 seconds)
82
-
83
- ## Skeleton Diversity
84
-
85
- | Category | Species Count | Joint Range | Examples |
86
- |----------|:------------:|:-----------:|---------|
87
- | Human (various rigs) | 6 | 22-67 | SMPL, LAFAN1, CMU, Mixamo (with fingers) |
88
- | Quadrupeds | ~25 | 18-79 | Dog, Cat, Horse, Bear, Elephant |
89
- | Flying | ~10 | 9-69 | Eagle, Bat, Buzzard, Pigeon |
90
- | Reptiles/Dinos | ~10 | 25-69 | Trex, Alligator, Crocodile |
91
- | Insects/Arachnids | ~10 | 41-84 | Ant, Spider, Scorpion, Centipede |
92
- | Snakes | ~3 | 19-27 | Anaconda, KingCobra |
93
- | Fantasy | ~5 | 32-143 | Dragon (143j), Mammoth |
94
-
95
- ## Usage (PyTorch)
96
-
97
- ```python
98
- from src.data.unified_dataset import UnifiedMotionDataset, collate_fn
99
- from torch.utils.data import DataLoader
100
-
101
- dataset = UnifiedMotionDataset(
102
- data_dirs=['data/processed/humanml3d', 'data/processed/mixamo', 'data/processed/truebones_zoo'],
103
- split='train',
104
- max_joints=128, # pad all skeletons to this
105
- max_frames=196,
106
- )
107
-
108
- dataloader = DataLoader(dataset, batch_size=8, collate_fn=collate_fn, shuffle=True)
109
- batch = next(iter(dataloader))
110
- # batch['motion_features']: [8, 196, 128, 6] — per-joint position+velocity
111
- # batch['skeleton_features']: [8, 128, 9] — joint graph features
112
- # batch['joint_mask']: [8, 128] — valid joint mask
113
- # batch['num_joints']: [8] — actual joint counts (22, 67, 31, ...)
114
  ```
115
 
116
  ## License
117
- - HumanML3D: See original AMASS license
118
- - LAFAN1: CC BY-NC-ND 4.0
 
 
119
  - 100Style: CC BY 4.0
120
- - Bandai Namco: CC BY-NC 4.0
121
- - CMU MoCap: Free for research
122
- - Mixamo: Adobe Mixamo Terms of Use
123
- - Truebones Zoo: Royalty-free
 
1
+ # TopoSlots Motion Data
2
 
3
+ Unified multi-skeleton 3D motion dataset for **TopoSlots**: topology-agnostic per-slot motion tokenization with foundation alignment.
4
 
5
+ - **Paper target**: NeurIPS 2026 / ICLR 2027
6
+ - **Last updated**: 2026-03-27
7
+ - **Motions**: 24,448 across 7 datasets, 79 skeleton types (6 human + 73 animal species)
8
 
9
+ ---
 
 
 
 
 
 
 
 
 
10
 
11
+ ## Dataset Summary
12
+
13
+ | Dataset | Motions | Joints | Skeleton Type | Text Coverage | Text Quality | Renders |
14
+ |---------|:-------:|:------:|---------------|:------------:|:------------:|:-------:|
15
+ | **humanml3d** | 14,449 | 22 | Human (SMPL) | 100% | High (human multi-caption) | 14,449 |
16
+ | **bandai_namco** | 3,053 | 21 | Human (BN) | 100% | Low (template sentences) | 3,053 |
17
+ | **cmu_mocap** | 2,496 | 31 | Human (CMU) | 92% | Low (CMU index text) | 2,496 |
18
+ | **mixamo** | 2,453 | 67 | Human (Mixamo, full fingers) | **0%** | None (hash filenames) | 2,453 |
19
+ | **truebones_zoo** | 1,110 | 25-143 | 73 animal species | 80% | Medium (auto-generated) | 1,110 |
20
+ | **100style** | 810 | 23 | Human (XSens) | 100% | Low (template sentences) | 810 |
21
+ | **lafan1** | 77 | 22 | Human (Ubisoft) | 100% | Low (action type only) | 77 |
22
+ | **Total** | **24,448** | 9-143 | **79 skeletons** | 88% | | 24,448 |
23
 
24
+ ### Text Annotation Status
25
 
26
+ **Needs annotation** (6 datasets, 9,999 motions):
27
+ - `mixamo`: 2,453 completely missing, filenames are hashes
28
+ - `truebones_zoo`: 222 missing + 888 need review
29
+ - `cmu_mocap`: 195 missing
30
+ - `bandai_namco`, `100style`, `lafan1`: have template text, need upgrade to natural language
31
 
32
+ **Complete** (no annotation needed):
33
+ - `humanml3d`: 14,449 with high-quality human annotations (3-5 captions each)
 
34
 
35
+ ---
 
 
 
 
36
 
37
+ ## Data Format (Scheme C)
 
38
 
39
+ ### Per-motion file: `motions/{id}.npz`
40
+
41
+ | Field | Shape | Description |
42
+ |-------|-------|-------------|
43
+ | `local_positions` | [T, J, 3] | Root-relative joint positions (slot token input) |
44
+ | `velocities` | [T, J, 3] | Joint velocities (slot token input) |
45
+ | `root_position` | [T, 3] | Global root trajectory |
46
+ | `root_velocity` | [T, 3] | Root velocity |
47
+ | `joint_positions` | [T, J, 3] | Global joint positions (FK output) |
48
+ | `local_rotations_6d` | [T, J-1, 6] | 6D rotation representation (decoder GT) |
49
+ | `accelerations` | [T, J, 3] | Joint accelerations |
50
+ | `bone_lengths` | [T, J] | Per-frame bone lengths |
51
+ | `foot_contact` | [T, 4] | Foot contact labels (l_heel, l_toe, r_heel, r_toe) |
52
+ | `num_frames` | scalar | Number of valid frames |
53
+ | `fps` | scalar | Frames per second (20) |
54
+ | `skeleton_id` | str | Dataset identifier |
55
+ | `texts` | str | Text descriptions separated by `\|\|\|` |
56
+ | `source_file` | str | Original BVH/source filename |
57
+ | `species` | str | (Zoo only) Animal species name |
58
+
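A minimal loading sketch for the fields above (`load_motion` is an illustrative helper, not part of `src/`; it assumes only the field names and the 196-frame padding cap described in this README):

```python
import numpy as np

def load_motion(path):
    """Load one Scheme C clip and drop padding beyond `num_frames`."""
    data = np.load(path, allow_pickle=False)
    T = int(data["num_frames"])  # valid frames (clips are padded to 196)
    texts = str(data["texts"])
    return {
        "local_positions": data["local_positions"][:T],  # [T, J, 3]
        "velocities": data["velocities"][:T],            # [T, J, 3]
        "root_position": data["root_position"][:T],      # [T, 3]
        "texts": texts.split("|||") if texts else [],
    }
```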
59
+ ### Per-skeleton file: `skeleton.npz`
60
+
61
+ | Field | Shape | Description |
62
+ |-------|-------|-------------|
63
+ | `joint_names` | [J] | Original joint names from BVH |
64
+ | `canonical_names` | [J] | Standardized English anatomical names (for CLIP) |
65
+ | `parent_indices` | [J] | Parent joint index (-1 for root) |
66
+ | `rest_offsets` | [J, 3] | Rest-pose offsets from parent (meters) |
67
+ | `adjacency` | [J, J] | Undirected adjacency matrix |
68
+ | `geodesic_dist` | [J, J] | Geodesic distance matrix |
69
+ | `bone_lengths` | [J] | Rest-pose bone lengths |
70
+ | `depths` | [J] | Tree depth per joint |
71
+ | `degrees` | [J] | Number of children per joint |
72
+ | `side_tags` | [J] | left/right/center classification |
73
+ | `symmetry_pairs` | [P, 2] | Symmetric joint pair indices |
74
+
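Several of these graph fields are derivable from `parent_indices`, which makes for a cheap consistency check. A sketch (assuming the root is marked with `-1`, as in the table above):

```python
import numpy as np

def adjacency_from_parents(parent_indices):
    """Rebuild the undirected adjacency matrix implied by `parent_indices`."""
    J = len(parent_indices)
    adj = np.zeros((J, J), dtype=np.float32)
    for child, parent in enumerate(parent_indices):
        if parent >= 0:  # root carries -1 and contributes no edge
            adj[child, parent] = adj[parent, child] = 1.0
    return adj
```

For every `skeleton.npz`, the rebuilt matrix should match the stored `adjacency` field exactly.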
75
+ ### Per-dataset files
76
+
77
+ | File | Description |
78
+ |------|-------------|
79
+ | `labels.json` | Structured L1 labels: `{motion_id: {L1_action, L1_style, source_file, ...}}` |
80
+ | `stats.npz` | Normalization statistics (mean/std for positions and velocities) |
81
+ | `splits/{train,val,test,all}.txt` | Data split files (80/10/10) |
82
+
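The 80/10/10 split files can be regenerated with a sketch like the following (`write_splits` is a hypothetical helper shown for illustration; the actual split logic lives in the preprocessing scripts):

```python
import os
import random

def write_splits(motion_ids, out_dir, seed=0):
    """Shuffle ids once, then write splits/{train,val,test,all}.txt (80/10/10)."""
    ids = sorted(motion_ids)
    random.Random(seed).shuffle(ids)
    n = len(ids)
    n_train, n_val = int(0.8 * n), int(0.1 * n)
    splits = {
        "train": ids[:n_train],
        "val": ids[n_train:n_train + n_val],
        "test": ids[n_train + n_val:],
        "all": sorted(ids),
    }
    os.makedirs(out_dir, exist_ok=True)
    for name, subset in splits.items():
        with open(os.path.join(out_dir, f"{name}.txt"), "w") as f:
            f.write("\n".join(subset) + "\n")
    return {k: len(v) for k, v in splits.items()}
```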
83
+ ### Renders (for annotation)
84
+
85
+ | File | Description |
86
+ |------|-------------|
87
+ | `renders/{id}.gif` | Stick figure animation (GIF) |
88
+ | `renders/{id}_overview.png` | Multi-view static overview |
89
+
90
+ ---
91
+
92
+ ## File Structure
93
 
 
94
  ```
95
+ TopoSlots-MotionData/
96
+ ├── README.md
97
+
98
+ ├── humanml3d/ # 14,449 motions, SMPL-22, HIGH-QUALITY TEXT
99
+ │ ├── skeleton.npz
100
+ │ ├── labels.json
101
+ │ ├── stats.npz
102
+ │ ├── splits/
103
+ │ └── motions/*.npz
104
+
105
+ ├── bandai_namco/ # 3,053 motions, 21 joints, NEEDS TEXT UPGRADE
106
+ │ ├── skeleton.npz
107
+ │ ├── labels.json
108
+ │ ├── stats.npz
109
+ │ ├── splits/
110
+ │ ├── motions/*.npz
111
+ │ └── renders/*.gif + *_overview.png
112
+
113
+ ├── cmu_mocap/ # 2,496 motions, 31 joints, NEEDS TEXT UPGRADE
114
+ │ ├── (same structure)
115
+ │ └── renders/
116
+
117
+ ├── mixamo/ # 2,453 motions, 67 joints, NO TEXT
118
+ │ ├── (same structure)
119
+ │ └── renders/
120
+
121
+ ├── truebones_zoo/ # 1,110 motions, 73 species, PARTIAL TEXT
122
+ │ ├── skeleton.npz # representative skeleton
123
+ │ ├── skeletons/*.npz # per-species skeletons
124
+ │ ├── (same structure)
125
+ │ └── renders/
126
+
127
+ ├── 100style/ # 810 motions, 23 joints, NEEDS TEXT UPGRADE
128
  │ └── ...
129
+
130
+ ├── lafan1/ # 77 motions, 22 joints, NEEDS TEXT UPGRADE
131
+ │ └── ...
132
+
133
+ ├── docs/
134
+ │ ├── ANNOTATION_TOOL_PLAN.md # Web annotation tool design + implementation prompt
135
+ │ ├── ANNOTATION_GUIDELINE.md # Annotation specification for annotators (Chinese)
136
+ │ └── PROMPT_DESIGN.md # Text prompt design decisions (L1/L2/L3)
137
+
138
+ ├── scripts/
139
+ │ ├── render_motion.py # Render npz → GIF + overview PNG
140
+ │ ├── preprocess_bvh.py # Generic BVH → Scheme C converter
141
+ │ ├── preprocess_humanml3d.py # HumanML3D → Scheme C converter
142
+ │ └── preprocess_truebones_zoo.py # Zoo species-aware converter
143
+
144
+ └── src/
145
+ └── data/
146
+ ├── skeleton_graph.py # Skeleton topology: adjacency, geodesic, side tags, symmetry
147
+ ├── bvh_parser.py # BVH file parser (handles 6-channel joints)
148
+ ├── canonical_names.py # Human dataset canonical name mapping
149
+ ├── zoo_canonical_names.py # Animal canonical name mapping (rule engine)
150
+ ├── humanml3d_converter.py # SMPL-22 skeleton + 263D feature extraction
151
+ └── unified_dataset.py # PyTorch Dataset (multi-skeleton, variable joints)
152
  ```
153
 
154
+ ---
155
+
156
+ ## Annotation Workflow
157
+
158
+ ### For agents setting up the annotation tool:
159
+
160
+ 1. **Read** `docs/ANNOTATION_TOOL_PLAN.md` — contains the complete design and implementation prompt
161
+ 2. **Read** `docs/ANNOTATION_GUIDELINE.md` — annotation specification for human annotators
162
+ 3. **Data needed**: `{dataset}/renders/` (GIF/PNG) + `{dataset}/motions/` (npz metadata) + `{dataset}/labels.json`
163
+ 4. **Tool**: Flask + SQLite web app, implementation prompt in Section 9 of ANNOTATION_TOOL_PLAN.md
164
+ 5. **Post-processing**: Chinese annotations → LLM batch translate → English → inject into npz `texts` field
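The final injection step can be sketched as follows (illustrative only; `inject_texts` is not an existing script, and it assumes the `|||` separator from the data format above):

```python
import numpy as np

def inject_texts(npz_path, captions):
    """Join translated English captions with '|||' and rewrite the clip's
    `texts` field, leaving every other array unchanged."""
    data = dict(np.load(npz_path, allow_pickle=False))
    data["texts"] = "|||".join(captions)
    np.savez(npz_path, **data)
```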
 
 
165
 
166
+ ### Annotation priority:
167
+ | Priority | Dataset | Count | Task |
168
+ |:--------:|---------|:-----:|------|
169
+ | P0 | mixamo | 2,453 | Annotate from scratch (no existing text) |
170
+ | P1 | truebones_zoo | 1,110 | Fill 222 missing + review 888 existing |
171
+ | P1 | cmu_mocap | 195 | Fill missing entries |
172
+ | P2 | bandai_namco | 3,053 | Upgrade template text to natural language |
173
+ | P2 | 100style | 810 | Upgrade template text |
174
+ | P2 | lafan1 | 77 | Upgrade template text |
175
+
176
+ ---
177
+
178
+ ## Data Processing History
179
+
180
+ ### Bug fixes applied (2026-03-18 ~ 2026-03-27):
181
+ 1. **FK rotation convention**: Fixed intrinsic vs extrinsic Euler angle bug in `preprocess_bvh.py`. All 5 BVH datasets reprocessed.
182
+ 2. **Bandai Namco dummy root**: Removed static `joint_Root` placeholder node (22→21 joints).
183
+ 3. **Bandai Namco per-joint positions**: Fixed parser to read 6-channel BVH (all joints have position+rotation channels).
184
+ 4. **Side tag detection**: Expanded from 2 patterns to 6-priority regex system (99.3% accuracy across 3,563 joints).
185
+ 5. **Canonical name standardization**: 1,193 unique raw names → 916 canonical names across all 79 skeletons.
186
+
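For reference, the priority-ordered side-tag idea in fix 4 looks roughly like this reduced sketch (the real 6-priority system lives in `src/data/skeleton_graph.py`; these patterns are illustrative, not the shipped ones):

```python
import re

# Priority-ordered: whole words first, then single-letter prefixes such as
# "LThigh" / "r_shoulder". Names matching nothing default to "center".
SIDE_PATTERNS = [
    (re.compile(r"(?:^|[_\. ])[Ll]eft"), "left"),
    (re.compile(r"(?:^|[_\. ])[Rr]ight"), "right"),
    (re.compile(r"(?:^|[_\. ])[Ll](?=[A-Z_\. ])"), "left"),
    (re.compile(r"(?:^|[_\. ])[Rr](?=[A-Z_\. ])"), "right"),
]

def side_tag(joint_name):
    for pattern, side in SIDE_PATTERNS:
        if pattern.search(joint_name):
            return side
    return "center"
```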
187
+ ### Skeleton audit results:
188
+ - 0 errors across 79 skeletons
189
+ - Side tag accuracy: 99.3%
190
+ - Canonical name coverage: 100%
191
+ - Geodesic distance validity: 100%
192
+ - Motion data integrity: 100% (no NaN/Inf)
193
+
194
+ ---
195
+
196
+ ## Citation
197
+
198
+ ```bibtex
199
+ @misc{toposlots2026,
200
+ title={TopoSlots: Topology-Agnostic Per-Slot Motion Tokenization with Foundation Alignment},
201
+ year={2026},
202
+ }
 
 
203
  ```
204
 
205
  ## License
206
+
207
+ Research use only. Individual dataset licenses apply:
208
+ - HumanML3D: MIT
209
+ - LAFAN1: CC BY-NC 4.0
210
  - 100Style: CC BY 4.0
211
+ - Bandai Namco: CC BY-NC-ND 4.0
212
+ - CMU MoCap: Public domain
213
+ - Mixamo: Adobe Mixamo terms
214
+ - Truebones Zoo: Commercial license (research use)
docs/ANNOTATION_GUIDELINE.md CHANGED
@@ -1,282 +1,259 @@
1
  # TopoSlots 动作文本标注规范
2
 
3
- ## 1. 背景与目标
4
 
5
- ### 当前标注状况
6
 
7
- | 数据集 | Motions | 已有文本 | 缺失情况 |
8
- |--------|:-------:|:-------:|---------|
9
- | HumanML3D | 14,449 | 44,970 条 | ✅ 完整(多条/动作) |
10
- | Truebones Zoo | 1,110 | 888 条 | ⚠ 部分缺失(222条无文本),且标注来自 VLM 自动生成 |
11
- | LAFAN1 | 77 | 0 | ❌ 完全缺失 |
12
- | 100Style | 810 | 0 | ❌ 只有风格标签(如 "ShieldedRight"),无动作描述 |
13
- | Bandai Namco | 3,053 | 0 | ❌ 完全缺失 |
14
- | CMU MoCap | 2,496 | 0 | ❌ 完全缺失 |
15
- | Mixamo | 2,453 | 0 | ❌ 只有哈希文件名,无语义信息 |
16
 
17
- ### 标注目标
18
 
19
- 为缺失文本的动作补充统一格式的中/英文双语标注,用于:
20
- 1. **文本条件动作生成**(text-to-motion generation)
21
- 2. **文本-动作检索**(text-motion retrieval)
22
- 3. **跨骨架语义对齐**(cross-skeleton semantic alignment)
 
 
 
 
 
23
 
24
- ---
25
 
26
- ## 2. 标注格式规范
 
 
 
 
 
 
27
 
28
- ### 2.1 文件组织
 
 
 
 
 
29
 
30
- 每个动作对应一个 JSON 标注文件,命名与 motion npz 文件一致:
 
 
 
 
 
31
 
 
32
  ```
33
- annotations/
34
- ├── humanml3d/ # 已有标注,转换为统一格式
35
- ├── lafan1/
36
- │ ├── 000000.json
37
- │ └── ...
38
- ├── 100style/
39
- ├── bandai_namco/
40
- ├── cmu_mocap/
41
- ├── mixamo/
42
- └── truebones_zoo/
43
- ├── Dog_0000.json
44
- └── ...
45
  ```
46
 
47
- ### 2.2 JSON Schema
 
 
 
 
48
 
49
- ```json
50
- {
51
- "motion_id": "000000",
52
- "dataset": "lafan1",
53
- "skeleton_type": "human",
54
- "skeleton_id": "lafan1",
55
- "species": "human",
56
- "num_joints": 22,
57
- "num_frames": 196,
58
- "fps": 20,
59
- "duration_sec": 9.8,
60
-
61
- "captions": {
62
- "short": {
63
- "zh": ["一个人向前走了几步然后停下。"],
64
- "en": ["A person walks forward a few steps then stops."]
65
- },
66
- "detailed": {
67
- "zh": ["一个人从静止状态开始,先迈出左脚向前走了三步,步伐平稳,双臂自然摆动,然后逐渐减速停在原地。"],
68
- "en": ["A person starts from a standing position, takes three steps forward beginning with the left foot, walks with steady pace and natural arm swing, then gradually decelerates to a stop."]
69
- }
70
- },
71
 
72
- "labels": {
73
- "action_category": "locomotion",
74
- "action_subcategory": "walk",
75
- "style": null,
76
- "intensity": "normal",
77
- "locomotion_type": "forward",
78
- "has_contact": true,
79
- "contact_type": ["foot_ground"]
80
- },
81
 
82
- "body_parts_involved": ["legs", "arms", "spine"],
83
 
84
- "annotation_source": "manual",
85
- "annotator_id": null,
86
- "annotation_date": null
87
- }
88
  ```
89
 
90
- ### 2.3 字段说明
91
 
92
- #### 必填字段
93
 
94
- | 字段 | 类型 | 说明 | 示例 |
95
- |------|------|------|------|
96
- | `motion_id` | str | 与 npz 文件名对应 | `"000000"`, `"Dog_0000"` |
97
- | `dataset` | str | 数据集来源 | `"lafan1"`, `"truebones_zoo"` |
98
- | `skeleton_type` | str | 骨架大类 | `"human"`, `"quadruped"`, `"flying"`, `"reptile"`, `"insect"`, `"snake"`, `"fantasy"` |
99
- | `species` | str | 具体物种/角色 | `"human"`, `"dog"`, `"eagle"`, `"trex"` |
100
- | `captions.short.en` | list[str] | 英文简短描述(1-2句) | |
101
- | `captions.short.zh` | list[str] | 中文简短描述(1-2句) | |
102
 
103
- #### 推荐字段
104
 
105
- | 字段 | 类型 | 说明 |
106
- |------|------|------|
107
- | `captions.detailed.en` | list[str] | 英文详细描述(包含时序、身体部位、运动细节) |
108
- | `captions.detailed.zh` | list[str] | 中文详细描述 |
109
- | `labels.action_category` | str | 动作大类(见下表) |
110
- | `labels.action_subcategory` | str | 动作子类 |
111
- | `labels.style` | str | 风格标签(如 "angry", "sneaky") |
112
- | `body_parts_involved` | list[str] | 参与的身体部位 |
113
 
114
- ---
115
 
116
- ## 3. 标注内容指南
 
 
 
 
 
 
 
 
117
 
118
- ### 3.1 动作类别体系
119
 
120
- #### 人类动作类别
121
 
122
- | 大类 `action_category` | 子类 `action_subcategory` 示例 |
123
- |----------------------|------------------------------|
124
- | `locomotion` | walk, run, jog, sprint, crawl, sidestep, backward_walk |
125
- | `upper_body` | wave, point, reach, grab, throw, push, pull, clap |
126
- | `full_body` | jump, squat, lunge, stretch, bend, twist, turn |
127
- | `dance` | ballet, hip_hop, freestyle, waltz |
128
- | `sport` | kick, punch, swing, block, dodge |
129
- | `daily_activity` | sit_down, stand_up, pick_up, put_down, open_door |
130
- | `interaction` | handshake, hug, fight, carry |
131
- | `transition` | idle, t_pose, rest, fall, get_up |
132
 
133
- #### 动物动作类别
 
 
 
 
 
 
134
 
135
- | 大类 `action_category` | 子类示例 |
136
- |----------------------|---------|
137
- | `locomotion` | walk, run, gallop, trot, slither, fly, swim, hop, crawl |
138
- | `combat` | attack, bite, claw, charge, headbutt, sting |
139
- | `idle` | stand, sit, lie_down, sleep, breathe, look_around |
140
- | `expression` | roar, bark, hiss, chirp, howl, growl |
141
- | `interaction` | eat, drink, dig, scratch, groom, play |
142
- | `special` | fly_takeoff, fly_land, dive, jump, roll, shake |
143
 
144
- ### 3.2 描述撰写规范
145
 
146
- #### 简短描述 (short)
147
- - **1-2 句话**,概括动作核心语义
148
- - 必须包含:**动作主体** + **核心动作** + **方向/方式**(如有)
149
- - 人类动作:以 "一个人..." / "A person..." 开头
150
- 动物动作:以 "一只[动物]..." / "A [animal]..." 开头
151
 
152
- **好的示例**:
153
- ```
154
- en: "A person walks forward steadily then turns left."
155
- zh: "一个人平稳地向前走然后左转。"
 
 
156
 
157
- en: "A dog leaps forward and snaps its jaws in an attack."
158
- zh: "一只狗向前扑跳并张嘴撕咬进行攻击。"
159
- ```
160
 
161
- **不好的示例**:
162
  ```
163
- "Walking." 太简短,缺少主体和细节
164
- ❌ "The motion data shows a bipedal character performing locomotion in the XZ plane." — 太技术化
165
- ❌ "一段动画" — 无信息量
166
  ```
167
 
168
- #### 详细描述 (detailed)
169
- - **2-4 句话**,包含时序信息和身体部位细节
170
- - 描述顺序:**起始姿态 → 主要动作 → 结束状态**
171
- - 包含:速度、幅度、哪些身体部位参与、是否有接触
172
 
173
- **好的示例**:
174
- ```
175
- en: "A person starts from a standing position with arms at their sides. They bend their knees
176
- and lower into a deep squat, keeping their back straight. After holding briefly, they push
177
- upward explosively into a small jump, landing softly on both feet."
178
 
179
- zh: "一个人从自然站立姿势开始,双臂自然下垂。然后弯曲膝盖向下蹲至深蹲位置,保持背部挺直。
180
- 短暂停顿后,爆发性地向上推起做一个小跳跃,双脚轻柔落地。"
181
- ```
182
 
183
- ### 3.3 多条标注要求
184
 
185
- 每个动作 **至少 2 条** 不同的简短描述,要求:
186
- - 用词不同但语义相同(如 "走路" vs "行走","walk" vs "stroll")
187
- - 不同粒度的关注点(如一条描述整体动作,一条描述细节)
188
 
189
- 参考 HumanML3D 的做法:每个动作有 3-5 条不同标注者的描述。
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
190
 
191
- ### 3.4 跨骨架一致性
192
 
193
- **关键原则**:相同语义的动作,在不同骨架上的描述应该可以匹配。
 
 
 
 
 
194
 
195
- 例如,"向前走" 的描述应在以下情况通用:
196
- - HumanML3D (22 joints, SMPL) 的人类行走
197
- - LAFAN1 (22 joints, Ubisoft) 的人类行走
198
- - Dog (55 joints, Truebones) 的狗行走
199
- - Horse (79 joints, Truebones) 的马行走
200
 
201
- 描述应聚焦于 **语义动作**("向前走")而非 **关节/骨骼细节**("left_knee flexion 45°")。
202
 
203
- ---
 
 
 
 
 
 
 
 
 
204
 
205
- ## 4. 标注流程
206
 
207
- ### 4.1 已有标注转换
 
 
 
 
 
 
 
208
 
209
- | 来源 | 转换方式 |
210
- |------|---------|
211
- | HumanML3D texts | 解析 `text#tokens#start#end` 格式 → JSON |
212
- | Truebones Zoo captions | 已有 JSON,提取 `short.original` → `captions.short.en`,缺 zh |
213
- | 100Style style labels | 文件名解析(如 `Angry_FW.bvh` → action=walk, style=angry) |
214
 
215
- ### 4.2 缺失标注补充
216
 
217
- **推荐流程**:
218
 
219
- 1. **VLM 自动标注**(初稿)
220
- - 渲染动作骨架视频(stick figure)
221
- - 使用 VLM(如 Qwen2.5-VL、GPT-4o)生成描述
222
- - NECromancer 就是用 Qwen2.5-VL 做的自动标注
223
 
224
- 2. **人工审核**(修正)
225
- - 检查自动标注的准确性
226
- - 补充中文翻译
227
- - 修正错误描述(特别是动物动作)
228
 
229
- 3. **质量控制**
230
- - 每条标注由至少 1 人审核
231
- - 不确定的动作标记为 `"annotation_source": "auto_vlm_unverified"`
232
 
233
- ### 4.3 标注优先级
234
 
235
- | 优先级 | 数据集 | 理由 |
236
- |:------:|--------|------|
237
- | P0 | Truebones Zoo(缺失 222 条) | 论文核心卖点——多物种,且部分已有标注 |
238
- | P1 | LAFAN1 (77 条) | 高质量人类动画,数量少 |
239
- | P1 | 100Style (810 条) | 已有风格标签,只需扩展为句子 |
240
- | P2 | Bandai Namco (3,053 条) | 数量较多,可先自动标注 |
241
- | P2 | CMU MoCap (2,496 条) | 数量较多,CMU 有原始动作分类 |
242
- | P3 | Mixamo (2,453 条) | 文件名是哈希,需从原始 Mixamo 网站恢复动作名 |
243
 
244
- ---
245
 
246
- ## 5. 与相关工作的对比
247
-
248
- | 数据集 | 标注格式 | 标注方式 | 语言 | 每动作条数 | 物种数 |
249
- |--------|---------|---------|------|:---------:|:-----:|
250
- | **HumanML3D** | `text#tokens#start#end` 纯文本 | 人工 (AMT) | 英文 | 3-5 | 1 (人) |
251
- | **BABEL** | 动作标签 + 帧级标注 | 人工 | 英文 | 2-3 | 1 (人) |
252
- | **Motion-X** | 语义标签 + 身体/手/脸描述 | Vicuna 1.5 增强 | 英文 | 多级 | 1 (人) |
253
- | **NECromancer UvU** | 自由文本 | VLM (Qwen2.5-VL) | 英文 | 1 | 多种 |
254
- | **T2M4LVO Zoo** | JSON: 4长度 × 7变体 × 7实体 | VLM + 变换 | 英文 | **196** | 68 |
255
- | **AniMo4D** (CVPR 2025) | 文本描述 + 物种属性 | 人工 | 英文 | 2.4 | 114 |
256
- | **AnimalML3D** | 文本描述 | 人工 | 英文 | 3 | 36 |
257
- | **TopoSlots (本项目)** | JSON: short/detailed × en/zh + 结构化标签 | VLM + 人工审核 | **中英双语** | ≥ 2 | 79 |
258
-
259
- ### 我们的改进点
260
- 1. **中英双语**:方便中文团队使用,也支持多语言条件生成和检索
261
- 2. **结构化标签**:`action_category` + `subcategory` + `style` 便于分类分析和条件生成
262
- 3. **骨架元信息**:`skeleton_type` + `species` 支持跨物种检索和过滤
263
- 4. **标注来源追踪**:`annotation_source` + `quality_score` 区分人工/自动标注质量
264
- 5. **分词保留**:兼容 HumanML3D 的 `word/POS` 格式,支持检索评估
265
-
266
- ### 注意事项
267
- - **NECromancer 的 VLM 标注流程不透明**(未公开 prompt、渲染方式、质量评估),审稿人可能质疑。我们应明确记录标注流程。
268
- - **T2M4LVO 的 196 条/动作过于冗余**,我们采用 ≥2 条精选描述 + 结构化标签的平衡方案。
269
- - **AniMo4D 是动物标注竞争对手**(114 种,185K 描述),但它基于 SMAL 模板不是 BVH,且不含人类。
270
 
271
  ---
272
 
273
- ## 6. 验证标准
274
-
275
- 标注完成后,需验证:
276
 
277
- - [ ] 每动作至少 2 条 `short.en` 描述
278
- - [ ] 每个动作至少 1 `short.zh` 描述
279
- - [ ] `action_category` 字段填充率 > 95%
280
- - [ ] 动物动作的 `species` 字段全部正确
281
- - [ ] 随机抽样 100 条,人工检查描述与动作的一致性 > 90%
282
- - [ ] 文本-动作检索实验(TMR 评估)验证标注质量
 
1
  # TopoSlots 动作文本标注规范
2
 
3
+ > 更新: 2026-03-19
4
 
5
+ ## 1. 当前数据状况
6
 
7
+ ### 1.1 总览
 
 
 
 
 
 
 
 
8
 
9
+ 7 个数据集,24,448 条动作,79 种骨架(6 人类 + 73 动物)。
10
 
11
+ | 数据集 | Motions | 骨架 | 已有文本 | 覆盖率 | 文本质量 | 需要标注 |
12
+ |--------|:-------:|:----:|:-------:|:------:|---------|:--------:|
13
+ | HumanML3D | 14,449 | 22j 人类 | 14,449 | **100%** | 高——人工多条标注 | 0 |
14
+ | Bandai Namco | 3,053 | 21j 人类 | 3,053 | **100%** | **低**——模板句,L1级 | **需升级** |
15
+ | CMU MoCap | 2,496 | 31j 人类 | 2,301 | 92% | **低**——CMU官方索引直接拼接 | **195 + 升级** |
16
+ | Mixamo | 2,453 | 67j 人类 | 0 | **0%** | 无 | **2,453** |
17
+ | Truebones Zoo | 1,110 | 25~143j 动物×73种 | 888 | 80% | 中——自动生成 species+action | **222 + 审校** |
18
+ | 100Style | 810 | 23j 人类 | 810 | **100%** | **低**——模板句,L1级 | **需升级** |
19
+ | LAFAN1 | 77 | 22j 人类 | 77 | **100%** | **低**——仅动作类型 | **需升级** |
20
 
21
+ ### 1.2 现有文本样例及问题
22
 
23
+ **HumanML3D (OK, 不需要动)**:
24
+ ```
25
+ "a man kicks something or someone with his left leg.|||the standing person kicks
26
+ with their left foot before going back to standing position.|||a person kicks with
27
+ their left leg.|||a person standing kicks their left foot forward."
28
+ ```
29
+ - 多条标注(3-5 条),自然语言,L2-L3 级别
30
 
31
+ **Bandai Namco (需升级)**:
32
+ ```
33
+ "A person performs a active bow." ← 语法不通,模板生硬
34
+ "A person performs a angry bow." ← 缺冠词、无细节
35
+ "A person performs walk turn left." ← 不自然
36
+ ```
37
 
38
+ **CMU MoCap (需升级)**:
39
+ ```
40
+ "A person performs: playground - forward jumps, turn around." ← 原始索引直接拼接
41
+ "A person performs: walk." ← 太简单
42
+ ```
43
+ 195 条完全无文本(CMU 索引中缺失或标注为 "Unknown" 的条目)
44
 
45
+ **100Style (需升级)**:
46
  ```
47
+ "A person does a backward run in a aeroplane style." ← 模板句,aeroplane style 含义不明
48
+ "A person stands idle in a monk style." ← style 语义对人来说不直观
 
 
 
 
 
 
 
 
 
 
49
  ```
50
 
51
+ **LAFAN1 (需升级)**:
52
+ ```
53
+ "A person performs aiming." ← 极度简单,无任何细节
54
+ "A person performs fight."
55
+ ```
56
 
57
+ **Mixamo (完全缺失)**:
58
+ - 文件名是哈希值(如 `00041fd3325430d72c5a947e1171de3b.bvh`),无任何语义信息
59
+ - **必须通过观看 GIF 渲染来标注**
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
60
 
61
+ **Truebones Zoo (部分缺失)**:
62
+ ```
63
+ "An alligator sways its head and wags its tail.|||An ambush predator sways its
64
+ head and wags its tail.|||An animal sways its head and wags its tail.|||"
65
+ ```
66
+ - 已有 888/1110 条,自动生成,质量中等
67
+ - 缺失 222 条(主要是 Idle、TPOSE 等难以描述的姿态)
 
 
68
 
69
+ ### 1.3 文本存储位置
70
 
71
+ 文本**存在 motion npz 文件的 `texts` 字段里**:
72
+ ```python
73
+ data = np.load("data/processed/{dataset}/motions/{id}.npz")
74
+ texts = str(data["texts"]) # 多条文本用 "|||" 分隔
75
  ```
76
 
77
+ 另有结构化标签存在 `data/processed/{dataset}/labels.json`。
78
 
79
+ ### 1.4 可视化文件位置
80
 
81
+ 每条 motion 都有渲染好的 GIF 和静态图,用于标注时参考:
82
+ ```
83
+ /scratch/ts1v23/workspace/motion_representation_study/data/processed/{dataset}/renders/
84
+ {id}.gif ← stick figure 动画
85
+ {id}_overview.png ← 多视角静态截图
86
+ ```
 
 
87
 
88
+ ---
89
 
90
+ ## 2. 标注任务
 
 
 
 
 
 
 
91
 
92
+ ### 2.1 任务分级
93
 
94
+ | 优先级 | 任务 | 数据集 | 条数 | 方式 |
95
+ |:------:|------|--------|:----:|------|
96
+ | **P0** | 从零标注 | Mixamo | 2,453 | 看 GIF 写描述 |
97
+ | **P1** | 补缺 + 审校 | Truebones Zoo | 222 缺失 + 888 审校 | 看 GIF 补写/修正 |
98
+ | **P1** | 补缺 | CMU MoCap | 195 | 看 GIF 写描述 |
99
+ | **P2** | 升级文本 | Bandai Namco | 3,053 | 看 GIF,替换模板句 |
100
+ | **P2** | 升级文本 | 100Style | 810 | 看 GIF,替换模板句 |
101
+ | **P2** | 升级文本 | LAFAN1 | 77 | 看 GIF,替换模板句 |
102
+ | - | 不需要 | HumanML3D | 0 | 已有高质量标注 |
103
 
104
+ ### 2.2 P0/P1: 从零标注或补缺
105
 
106
+ 打开对应的 GIF 文件,写 **1-2 句英文描述**。
107
 
108
+ **描述原则**:
109
+ - 写 **"做什么"**,不写 **"怎么做"**
110
+ - "A person..." "A [animal]..." 开头
111
+ - 包含:动作主体 + 核心动作 + 方向/速度/风格(如适用)
112
+ - **不要提及关节名、骨骼数量、骨架结构**
 
 
 
 
 
113
 
114
+ **好的描述**:
115
+ ```
116
+ A person jogs forward and then slows to a walk.
117
+ A person throws a punch with their right hand then steps back.
118
+ A dog runs forward and leaps over an obstacle.
119
+ An eagle spreads its wings and takes off from the ground.
120
+ ```
121
 
122
+ **差的描述**:
123
+ ```
124
+ "Walking." → 太短,缺主体
125
+ "The skeleton moves forward." → 提到了骨架
126
+ "Left knee bends 45 degrees." → 关节级细节
127
+ "A 67-joint character performs motion." → 技术信息
128
+ "一段动画" → 无信息量
129
+ ```
130
 
131
+ ### 2.3 P2: 升级已有模板文本
132
 
133
+ 已有模板句需要**替换为自然语言描述**,不是在模板句上修改。
 
 
 
 
134
 
135
+ | 原始模板 | → 升级后 |
136
+ |---------|---------|
137
+ | `A person performs a active bow.` | `A person bows energetically with a wide arm gesture.` |
138
+ | `A person performs walk turn left.` | `A person walks forward and makes a left turn.` |
139
+ | `A person does a backward run in a aeroplane style.` | `A person runs backward with both arms extended out to the sides like airplane wings.` |
140
+ | `A person performs aiming.` | `A person holds a steady aiming pose, looking forward with arms raised as if holding a rifle.` |
141
 
142
+ ### 2.4 多条描述
 
 
143
 
144
+ 每条动作**至少写 2 条不同描述**,用 `|||` 分隔:
145
  ```
146
+ A person walks forward briskly.|||A person takes quick steps in the forward direction.
 
 
147
  ```
148
 
149
+ 用词不同但语义相同。参考 HumanML3D 的风格——同一动作由不同人描述。
 
 
 
150
 
151
+ ---
 
 
 
 
152
 
153
+ ## 3. 标注格式
 
 
154
 
155
+ ### 3.1 交付格式
156
 
157
+ 每个数据集提交一个 JSON 文件:
 
 
158
 
159
+ ```json
160
+ {
161
+ "000000": {
162
+ "texts_en": [
163
+ "A person bows politely with a slight forward lean.",
164
+ "A person performs a respectful bow, bending at the waist."
165
+ ],
166
+ "action_category": "gesture",
167
+ "style": "active",
168
+ "notes": ""
169
+ },
170
+ "000001": {
171
+ "texts_en": [
172
+ "A person bows with an aggressive, exaggerated motion.",
173
+ "A person angrily bends forward in a forceful bow."
174
+ ],
175
+ "action_category": "gesture",
176
+ "style": "angry",
177
+ "notes": ""
178
+ }
179
+ }
180
+ ```
181
 
182
+ 字段说明:
183
 
184
+ | 字段 | 必填 | 说明 |
185
+ |------|:----:|------|
186
+ | `texts_en` | ✅ | 英文描述列表,至少 2 条 |
187
+ | `action_category` | ✅ | 动作大类(见下表) |
188
+ | `style` | 选填 | 风格标签(如 angry, sneaky, tired) |
189
+ | `notes` | 选填 | 标注备注(如 "动作不清晰"、"可能是两个动作拼接") |
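交付前可以用一个简单脚本自检(示意代码,仅依据上表的字段约定,非现有工具):

```python
import json

REQUIRED_FIELDS = ("texts_en", "action_category")

def validate_annotations(path):
    """逐条检查交付 JSON:必填字段齐全,且每条动作至少 2 条英文描述。"""
    with open(path, encoding="utf-8") as f:
        annotations = json.load(f)
    problems = []
    for motion_id, entry in annotations.items():
        for field in REQUIRED_FIELDS:
            if field not in entry:
                problems.append(f"{motion_id}: 缺少 {field}")
        if len(entry.get("texts_en", [])) < 2:
            problems.append(f"{motion_id}: texts_en 少于 2 条")
    return problems
```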
190
 
191
+ ### 3.2 动作类别
 
 
 
 
192
 
193
+ #### 人类
194
 
195
+ | `action_category` | 包含 |
196
+ |-------------------|------|
197
+ | `locomotion` | walk, run, jog, sprint, crawl, sidestep, backward walk, skip, hop |
198
+ | `upper_body` | wave, point, reach, grab, throw, push, pull, clap, salute |
199
+ | `full_body` | jump, squat, lunge, stretch, bend, twist, turn, roll, cartwheel |
200
+ | `dance` | ballet, hip hop, freestyle, waltz, spin |
201
+ | `combat` | kick, punch, slash, block, dodge, stab |
202
+ | `daily` | sit down, stand up, pick up, put down, drink, eat |
203
+ | `gesture` | bow, nod, shrug, beckon, wave goodbye |
204
+ | `idle` | stand, t-pose, rest pose, breathe |
205
 
206
+ #### 动物
207
 
208
+ | `action_category` | 包含 |
209
+ |-------------------|------|
210
+ | `locomotion` | walk, run, gallop, trot, slither, fly, swim, hop, crawl |
211
+ | `combat` | attack, bite, claw, charge, headbutt, sting |
212
+ | `idle` | stand, sit, lie down, sleep, breathe, look around |
213
+ | `vocalization` | roar, bark, hiss, chirp, howl |
214
+ | `interaction` | eat, drink, dig, scratch, groom, play |
215
+ | `aerial` | takeoff, land, dive, soar, hover |
216
 
217
+ ---
 
 
 
 
218
 
219
+ ## 4. 标注工具
220
 
221
+ ### 4.1 查看动作
222
 
223
+ GIF 和概览图在:
224
+ ```
225
+ /scratch/ts1v23/workspace/motion_representation_study/data/processed/{dataset}/renders/
226
+ ```
227
 
228
+ 用任何图片查看器打开 `.gif` 即可预览动作。如果 GIF 不够清晰,可以看 `_overview.png` 的多视角静态图。
 
 
 
229
 
230
+ ### 4.2 按数据集分配
 
 
231
 
232
+ 建议按数据集拆分给不同标注者:
233
 
234
+ | 标注者 | 数据集 | 条数 |
235
+ |--------|--------|:----:|
236
+ | A | Mixamo (前半) | ~1,200 |
237
+ | B | Mixamo (后半) | ~1,200 |
238
+ | C | Bandai Namco | 3,053 |
239
+ | D | CMU MoCap (195 缺失) + 100Style + LAFAN1 | ~1,082 |
240
+ | E | Truebones Zoo (222 缺失 + 888 审校) | ~1,110 |
 
241
 
242
+ ### 4.3 审校 Truebones Zoo
243
 
244
+ 对已有的 888 条 Zoo 文本,审校任务是:
245
+ 1. 打开 GIF
246
+ 2. 对比已有文本,判断是否准确
247
+ 3. 如果描述有误或太泛(如 "An animal moves"),**重写**
248
+ 4. 如果描述正确,跳过
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
---

## 5. Quality Criteria

- [ ] At least 2 English descriptions per motion
- [ ] Descriptions must start with "A person..." / "A [animal]..."
- [ ] Descriptions must not mention skeletons / joints / technical details
- [ ] `action_category` filled in for every clip
- [ ] Motion/description mismatch rate < 5% (spot checks)
- [ ] Idle / T-pose clips are annotated too (e.g. "A person stands still in a neutral pose.")
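The textual rules in this checklist can be spot-checked automatically. A minimal sketch, assuming a banned-word list and a starting-phrase rule derived from the criteria above (both are assumptions, not an existing script in this repo):

```python
# Spot-check one candidate English description against the checklist rules.
BANNED_TERMS = ("skeleton", "joint", "bone", "frame")  # assumed list of technical words


def check_description(text: str) -> list:
    """Return a list of rule violations for one description (empty list = OK)."""
    issues = []
    if not (text.startswith("A ") or text.startswith("An ")):
        issues.append("must start with 'A person...' / 'A [animal]...'")
    lowered = text.lower()
    for term in BANNED_TERMS:
        if term in lowered:
            issues.append(f"technical term '{term}' is not allowed")
    if not text.rstrip().endswith("."):
        issues.append("should be a full sentence ending with a period")
    return issues
```

A reviewer could run this over a random sample of submitted descriptions to estimate the mismatch rate.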
docs/ANNOTATION_TOOL_PLAN.md ADDED
@@ -0,0 +1,401 @@
# Motion Annotation Web Platform: Implementation Plan

> Revised per Codex (GPT) review feedback
> 2026-03-27

---

## 1. Overall Architecture

```
Annotator (browser) ──HTTP──▶ Flask app (port 8080)
                                  │
                                  ├── SQLite (annotations.db)
                                  ├── /data/renders/{dataset}/{id}.gif
                                  └── /data/renders/{dataset}/{id}_overview.png
```

**Stack**: Flask + Jinja2 + vanilla JS + SQLite
**Deployment**: `python app.py --port 8080 --host 0.0.0.0`; annotators connect directly to the internal-network IP from their browsers

---

## 2. Database Schema

### motions table (read-only, imported from npz at init time)

```sql
CREATE TABLE motions (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    dataset TEXT NOT NULL,            -- 'bandai_namco', 'mixamo', ...
    motion_id TEXT NOT NULL,          -- '000042', 'Dog_0001'
    source_file TEXT,                 -- original BVH filename
    num_frames INTEGER,
    fps REAL,
    num_joints INTEGER,
    species TEXT,                     -- Zoo dataset only
    existing_en TEXT,                 -- existing English text (read-only display)
    action_category_auto TEXT,        -- auto-extracted L1 (from labels.json)
    gif_path TEXT,                    -- relative path
    overview_path TEXT,               -- relative path
    UNIQUE(dataset, motion_id)
);
```
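`init_db.py` can populate this table straight from the per-clip metadata. A minimal sketch of the insert path, assuming the schema above (columns trimmed) and with a plain dict standing in for fields read from an npz; the `labels.json` lookup is omitted:

```python
import sqlite3

# Trimmed-down version of the motions table from the schema above.
MOTIONS_DDL = """
CREATE TABLE motions (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    dataset TEXT NOT NULL,
    motion_id TEXT NOT NULL,
    num_frames INTEGER,
    fps REAL,
    num_joints INTEGER,
    existing_en TEXT,
    gif_path TEXT,
    UNIQUE(dataset, motion_id)
)
"""


def insert_motion(conn, dataset, motion_id, meta):
    """Insert one clip's metadata; `meta` mimics fields read from the npz."""
    conn.execute(
        "INSERT OR IGNORE INTO motions "
        "(dataset, motion_id, num_frames, fps, num_joints, existing_en, gif_path) "
        "VALUES (?, ?, ?, ?, ?, ?, ?)",
        (dataset, motion_id, meta.get("num_frames"), meta.get("fps"),
         meta.get("num_joints"), meta.get("texts", ""),
         f"{dataset}/{motion_id}.gif"),
    )


conn = sqlite3.connect(":memory:")
conn.execute(MOTIONS_DDL)
insert_motion(conn, "bandai_namco", "000042",
              {"num_frames": 240, "fps": 20.0, "num_joints": 22, "texts": ""})
count = conn.execute("SELECT COUNT(*) FROM motions").fetchone()[0]
```

`INSERT OR IGNORE` makes re-running the init script idempotent thanks to the `UNIQUE(dataset, motion_id)` constraint.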

### annotations table (written by annotators)

```sql
CREATE TABLE annotations (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    dataset TEXT NOT NULL,
    motion_id TEXT NOT NULL,
    annotator TEXT NOT NULL DEFAULT '',
    L1_zh TEXT DEFAULT '',            -- action label (2-6 characters)
    L2_zh TEXT DEFAULT '',            -- short description (1 sentence, 12-30 characters)
    L3_zh TEXT DEFAULT '',            -- detailed description (2-3 sentences, complex motions only)
    action_category TEXT DEFAULT '',  -- category chosen by the annotator
    style TEXT DEFAULT '',            -- style tag (optional)
    species_override TEXT DEFAULT '', -- species correction (Zoo only)
    notes TEXT DEFAULT '',            -- remarks
    status TEXT DEFAULT 'unassigned', -- unassigned/in_progress/submitted/reviewed/needs_revision/skipped
    flag TEXT DEFAULT '',             -- uncertain/bad_render/ambiguous (anomaly flag)
    version INTEGER DEFAULT 1,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    UNIQUE(dataset, motion_id)
);
```

### annotation_history table (change log)

```sql
CREATE TABLE annotation_history (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    dataset TEXT NOT NULL,
    motion_id TEXT NOT NULL,
    annotator TEXT,
    L1_zh TEXT,
    L2_zh TEXT,
    L3_zh TEXT,
    action_category TEXT,
    status TEXT,
    version INTEGER,
    saved_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);
```
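The intended save path writes both tables together: upsert the live annotation with a bumped `version`, then append a snapshot to `annotation_history`. A minimal sketch with trimmed columns, assuming SQLite >= 3.24 for the upsert syntax:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE annotations (
    dataset TEXT NOT NULL, motion_id TEXT NOT NULL,
    L1_zh TEXT DEFAULT '', status TEXT DEFAULT 'in_progress',
    version INTEGER DEFAULT 1,
    UNIQUE(dataset, motion_id)
);
CREATE TABLE annotation_history (
    dataset TEXT, motion_id TEXT, L1_zh TEXT, status TEXT, version INTEGER
);
""")


def save_annotation(conn, dataset, motion_id, l1, status="in_progress"):
    """Upsert the live row (version+1 on update) and append a history snapshot."""
    with conn:  # one transaction for both writes
        conn.execute(
            "INSERT INTO annotations (dataset, motion_id, L1_zh, status) "
            "VALUES (?, ?, ?, ?) "
            "ON CONFLICT(dataset, motion_id) DO UPDATE SET "
            "L1_zh = excluded.L1_zh, status = excluded.status, "
            "version = annotations.version + 1",
            (dataset, motion_id, l1, status),
        )
        row = conn.execute(
            "SELECT L1_zh, status, version FROM annotations "
            "WHERE dataset = ? AND motion_id = ?", (dataset, motion_id),
        ).fetchone()
        conn.execute(
            "INSERT INTO annotation_history VALUES (?, ?, ?, ?, ?)",
            (dataset, motion_id, *row),
        )


save_annotation(conn, "mixamo", "0001", "走路")
save_annotation(conn, "mixamo", "0001", "快走")
version = conn.execute("SELECT version FROM annotations").fetchone()[0]
history = conn.execute("SELECT COUNT(*) FROM annotation_history").fetchone()[0]
```

After two saves of the same clip the live row is at version 2 and the history table holds two snapshots, which is what the auto-save endpoint relies on.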

---

## 3. Page Design

### 3.1 Home `/`

6 dataset cards (HumanML3D excluded), each showing:
- Dataset name (display alias)
- Total count / annotated count / completion progress bar
- Priority badge (P0/P1/P2)

Dataset display names:
| dataset_id | Display name | Priority |
|---|---|---|
| mixamo | Mixamo human animation | P0 |
| truebones_zoo | Animal motions | P1 |
| cmu_mocap | CMU motion capture | P1 |
| bandai_namco | Bandai Namco | P2 |
| 100style | 100 styles | P2 |
| lafan1 | LAFAN1 | P2 |

### 3.2 Annotation List Page `/dataset/<dataset_id>`

- Paginated table (50 rows per page)
- Columns: index | GIF thumbnail (small) | Motion ID | existing-annotation status | L1 | L2 | status badge | actions
- Filter bar at the top: all / unannotated / annotated / submitted / needs revision / skipped
- **"Annotate next" button** (jumps to the first unannotated clip)

### 3.3 Annotation Page `/annotate/<dataset_id>/<motion_id>` (core)

```
┌──────────────────────────────────────────────────────────────┐
│ ◀ Prev     [000042 / 3053]     Next ▶   [Jump to unannotated]│
├────────────────────────┬─────────────────────────────────────┤
│ ┌──────────────────┐   │ 📝 Annotation form                  │
│ │  GIF animation   │   │                                     │
│ │  (play/pause)    │   │ L1 action label * (2-6 chars)       │
│ └──────────────────┘   │   e.g. walk, jump, attack           │
│                        │                                     │
│ ┌──────────────────┐   │ L2 short description * (1 sentence) │
│ │ Multi-view still │   │   e.g. "A person walks forward a    │
│ │ (overview.png)   │   │   few steps, then stops."           │
│ └──────────────────┘   │                                     │
│ ──────────────────── │ L3 detailed description (optional,  │
│ ▶ Show reference text  │   recommended for complex motions)  │
│   (collapsed)          │                                     │
│   "A person performs   │ Action category   [dropdown]        │
│    walk turn left."    │ Style / species   [optional]        │
│                        │ Notes             [optional]        │
│                        │                                     │
│                        │ Anomaly flags: ☐unclear ☐ambiguous  │
│                        │                ☐bad render          │
│                        │                                     │
│                        │ [💾 Save] [Save & next ▶] [Skip]    │
├────────────────────────┴─────────────────────────────────────┤
│ 📖 Annotation guide (collapsible)                            │
│ L1: 2-6 char label | L2: 1 sentence, 12-30 chars | L3: 2-3   │
│ sentences with timing details                                │
│ ✅ Good: "A person walks forward briskly"                    │
│ ❌ Bad: "walking", "joint motion"                            │
└──────────────────────────────────────────────────────────────┘
```

**Key interactions**:
- The existing English template sentence is **collapsed by default** (to avoid anchoring bias)
- Auto-save: the draft is saved via AJAX after 2 seconds of inactivity
- An explicit "Submit this clip" button (status goes from in_progress → submitted)
- Keyboard: `Ctrl+S` save, `Ctrl+→` next, `Ctrl+←` previous
- GIF play/pause: frame-level control via the libgif-js library (no ffmpeg-to-mp4 conversion needed)

### 3.4 Progress/Stats Page `/stats`

- Completion-rate bar chart per dataset
- Contribution stats per annotator
- Pie chart of the action_category distribution

---

## 4. API Design

```
GET  /                          → home page
GET  /dataset/<ds>              → annotation list (supports ?page=&filter=)
GET  /annotate/<ds>/<mid>       → annotation page
POST /api/save/<ds>/<mid>       → save annotation (auto-save / manual save)
POST /api/submit/<ds>/<mid>     → submit annotation (status→submitted)
POST /api/skip/<ds>/<mid>       → skip (status→skipped, flag required)
GET  /api/next_unannotated/<ds> → next unannotated motion_id
GET  /api/stats                 → JSON statistics
GET  /renders/<ds>/<filename>   → static files (GIF/PNG)
```
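The `/api/submit` handler should reject incomplete annotations before flipping the status. A minimal sketch of that validation logic, with thresholds taken from the submit rules in this plan (L1 of 2-6 characters, L2 of at least 10 characters, category required); the function name is an assumption:

```python
def validate_submission(l1: str, l2: str, category: str) -> list:
    """Return the list of validation errors for a submit request (empty = OK)."""
    errors = []
    if not 2 <= len(l1.strip()) <= 6:
        errors.append("L1 must be 2-6 characters")
    if len(l2.strip()) < 10:
        errors.append("L2 must be at least 10 characters")
    if not category:
        errors.append("action_category is required")
    return errors
```

A Flask route would call this and return HTTP 400 with the error list whenever it is non-empty, leaving the status untouched.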

---

## 5. State Machine

```
unassigned     ──(open annotation page)──▶ in_progress
in_progress    ──(submit)──▶ submitted
in_progress    ──(skip)──▶ skipped
submitted      ──(review passed)──▶ reviewed
submitted      ──(sent back)──▶ needs_revision
needs_revision ──(resubmit)──▶ submitted
skipped        ──(reopen)──▶ in_progress
```
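The diagram above maps directly onto a transition table that the save/submit/skip/review endpoints can share. A minimal sketch:

```python
# Allowed status transitions, mirroring the state machine above.
TRANSITIONS = {
    "unassigned":     {"in_progress"},
    "in_progress":    {"submitted", "skipped"},
    "submitted":      {"reviewed", "needs_revision"},
    "needs_revision": {"submitted"},
    "skipped":        {"in_progress"},
    "reviewed":       set(),  # terminal state
}


def can_transition(old: str, new: str) -> bool:
    """True if the state machine permits moving from `old` to `new`."""
    return new in TRANSITIONS.get(old, set())
```

Centralizing the table keeps the endpoints from drifting apart when a state is added later.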

---

## 6. Post-processing Pipeline

```
1. Export after the annotation freeze:
   python scripts/export_annotations.py --db annotations.db --output annotations_zh.json

2. Batch LLM translation:
   python scripts/translate_annotations.py \
       --input annotations_zh.json \
       --output annotations_en.json \
       --model qwen2.5-72b \
       --terminology docs/terminology.json

3. Inject into npz:
   python scripts/inject_texts.py \
       --annotations annotations_en.json \
       --data_dir data/processed/
```

**Translation script responsibilities**:
- Direct Chinese → English translation
- Generate a second paraphrase (different wording, same meaning)
- Quality checks: length, banned technical words (joints / skeleton / frame count), format consistency
- Record `translation_model`, `prompt_version`, `translated_at`

**Terminology glossary** (`docs/terminology.json`):
```json
{
  "走路": "walk", "跑步": "run", "跳跃": "jump",
  "左手": "left hand", "右脚": "right foot",
  "鳄鱼": "alligator", "老鹰": "eagle", ...
}
```
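One of the quality checks can verify that glossary terms found in the Chinese source actually surface in the English translation. A minimal sketch, with a tiny inline dict standing in for `docs/terminology.json`:

```python
# A few entries standing in for docs/terminology.json.
TERMINOLOGY = {"走路": "walk", "跑步": "run", "跳跃": "jump"}


def check_terminology(zh_text: str, en_text: str) -> list:
    """Return (zh, en) pairs whose required English term is missing from the translation."""
    en_lower = en_text.lower()
    return [
        (zh, en) for zh, en in TERMINOLOGY.items()
        if zh in zh_text and en not in en_lower
    ]
```

Pairs returned by this check would be queued for manual review rather than auto-corrected, since substring matching is deliberately loose.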

---

## 7. File Layout

```
annotation_tool/
├── app.py                        # Flask entry point
├── db.py                         # SQLite data layer
├── init_db.py                    # initialize the DB from npz/labels.json
├── requirements.txt              # flask
├── annotations.db                # SQLite database (created at runtime)
├── templates/
│   ├── base.html                 # base template (navbar, CSS/JS includes)
│   ├── index.html                # home page (dataset cards)
│   ├── dataset.html              # annotation list page
│   ├── annotate.html             # annotation page (core)
│   └── stats.html                # stats page
├── static/
│   ├── css/
│   │   └── style.css             # global styles
│   ├── js/
│   │   ├── annotate.js           # annotation-page logic (auto-save, keyboard, GIF control)
│   │   └── libgif.js             # GIF frame-control library
│   └── img/
│       └── logo.png              # optional
└── scripts/
    ├── export_annotations.py     # export annotations to JSON
    ├── translate_annotations.py  # batch LLM translation
    └── inject_texts.py           # inject into npz
```

---

## 8. Data Migration

The annotation tool needs the following data (copied from the deep-learning machine to the annotation machine):

```bash
# Files to copy (~23G of renders + a small amount of npz metadata)
rsync -avP data/processed/*/renders/ target_machine:/path/to/data/renders/
rsync -avP data/processed/*/motions/ target_machine:/path/to/data/motions/   # for reading texts/metadata
rsync -avP data/processed/*/labels.json target_machine:/path/to/data/labels/
rsync -avP data/processed/*/skeleton.npz target_machine:/path/to/data/skeletons/
```

Alternatively, copy only the renders (GIF/PNG) + labels.json; init_db.py needs nothing more to initialize.

---

## 9. Implementation Prompt for the Annotation Machine

Give the following prompt directly to Claude Code on the annotation machine:

```
Please implement a web platform for annotating 3D motions with text.

## Stack
Flask + Jinja2 + vanilla JS + SQLite; no React/Vue or other frontend frameworks.

## Data locations
Rendered motion GIFs: {DATA_ROOT}/renders/{dataset}/{id}.gif and {id}_overview.png
Motion metadata: {DATA_ROOT}/motions/{dataset}/{id}.npz (numpy; contains texts/source_file/num_frames etc.)
Labels: {DATA_ROOT}/labels/{dataset}/labels.json (JSON, motion_id → {L1_action, source_file, ...})

6 datasets to handle (humanml3d excluded):
- bandai_namco: 3053 clips, human
- cmu_mocap: 2496 clips, human
- mixamo: 2453 clips, human
- truebones_zoo: 1110 clips, animals (73 species)
- 100style: 810 clips, human
- lafan1: 77 clips, human

## Database
SQLite, three tables:

motions table (read-only, imported from npz/labels at init):
- dataset, motion_id, source_file, num_frames, fps, num_joints, species, existing_en, action_category_auto, gif_path, overview_path

annotations table (written by annotators):
- dataset, motion_id, annotator, L1_zh, L2_zh, L3_zh, action_category, style, species_override, notes, status (unassigned/in_progress/submitted/reviewed/needs_revision/skipped), flag (uncertain/bad_render/ambiguous), version, created_at, updated_at

annotation_history table (snapshot on every save):
- dataset, motion_id, annotator, L1_zh, L2_zh, L3_zh, action_category, status, version, saved_at

## Pages (UI entirely in Chinese)

### Home /
6 dataset cards showing name / clip count / annotated count / completion progress bar / priority (P0/P1/P2)

### List page /dataset/<ds>
Paginated table (50 per page), columns: index | thumbnail | ID | L1 | L2 | status badge | actions
Top filters: all / unannotated / annotated / submitted / needs revision / skipped
Key button: "Annotate next" (jumps to the first unannotated clip)

### Annotation page /annotate/<ds>/<mid> (core page)
Left: GIF animation (pause/play via libgif-js or SuperGif) + multi-view overview image
Right-hand form:
- L1 action label, required (placeholder: "2-6 characters, e.g. walk, jump")
- L2 short description, required (placeholder: "1 sentence of 12-30 characters describing the subject and core action")
- L3 detailed description, optional (placeholder: "2-3 sentences with timing and body-part details")
- Action category dropdown, required: locomotion/upper_body/full_body/dance/combat/daily/gesture/idle/vocalization/aerial/interaction
- Style tag, optional
- Species, optional (shown only for truebones_zoo, pre-filled)
- Notes, optional
- Anomaly flags, checkboxes: unclear / ambiguous motion / bad render
- Existing English reference text collapsed by default (click to expand, to avoid anchoring bias)
- Buttons: [Save draft] [Submit this clip] [Save & next] [Skip]
- Navigation: previous / next / jump to unannotated
- Auto-save: AJAX draft save 2 seconds after typing stops (does not change status)
- Keyboard: Ctrl+S save, Ctrl+Enter submit & next, Ctrl+→ next, Ctrl+← previous
- Current progress at the top: "clip 42 / 3053"
- Collapsible annotation guide at the bottom (with good/bad examples)

### Stats page /stats
Per-dataset completion rate, per-annotator contributions

## API
POST /api/save/<ds>/<mid> - save draft (called by auto-save)
POST /api/submit/<ds>/<mid> - submit (status→submitted; validate L1+L2+category)
POST /api/skip/<ds>/<mid> - skip (flag required)
GET /api/next/<ds>?status=unassigned - next motion_id with the given status
GET /api/stats - JSON statistics
GET /renders/<path> - static file serving

## State transitions
unassigned → in_progress (automatically when the annotation page is opened)
in_progress → submitted (submit)
in_progress → skipped (skip)
submitted → reviewed (review passed, reserved)
submitted → needs_revision (sent back, reserved)
needs_revision → submitted (resubmit)

## Save mechanism
- Every save writes the annotations table (version+1) and appends a snapshot to annotation_history
- Submission validation: L1_zh non-empty and 2-6 characters, L2_zh non-empty and >= 10 characters, action_category non-empty

## Init script init_db.py
Walk motions/*.npz and labels.json for the 6 datasets and populate the motions table.
Read existing_en from the npz texts field.
Read action_category_auto from labels.json.
Check that the gif/png exist in the renders directory.

## Styling
- Chinese UI, simple and practical
- Annotation page split into two columns: fixed-width left column (GIF), form on the right
- Responsive, supports 1920px-wide screens
- Status badges color-coded (unannotated gray / in progress blue / submitted green / needs revision orange / skipped gray)
- Fixed-height GIF area that does not scroll with the form

## Notes
- Do not install React/Vue/npm; pure Flask + Jinja + vanilla JS
- libgif-js can be pulled from a CDN or inlined (for GIF pause/play/frame control)
- If libgif-js proves too complex, simplify: clicking the GIF pauses it (swap in overview.png); clicking again resumes (restore the GIF src)
- The SQLite file lives at annotation_tool/annotations.db
- Launch command: python annotation_tool/app.py --port 8080 --data-root /path/to/data
```

---

## 10. Next Steps

1. Copy renders + labels.json + motions to the annotation machine
2. Have Claude Code implement the tool there using the prompt above
3. Run `python init_db.py` to initialize the database
4. Launch `python app.py --port 8080`
5. Assign annotator accounts (plain usernames are enough)
6. After annotation finishes, run the export → translate → inject pipeline
7. Copy the translated results back to the deep-learning machine and inject them into the npz files
scripts/preprocess_bvh.py CHANGED
@@ -35,45 +35,26 @@ def euler_to_6d_rotation(euler_angles: np.ndarray, order: str = 'ZYX') -> np.nda
     """
     Convert Euler angles (degrees) to continuous 6D rotation representation.
 
+    Uses scipy for correct BVH intrinsic Euler convention.
+
     Args:
         euler_angles: [..., 3] Euler angles in degrees
-        order: rotation order string (e.g., 'ZYX')
+        order: rotation order string (e.g., 'ZYX') — intrinsic convention
 
     Returns:
         [..., 6] continuous 6D rotation (first two columns of rotation matrix)
     """
-    rad = np.radians(euler_angles)
-    shape = rad.shape[:-1]
-
-    # Build rotation matrices from Euler angles
-    c = np.cos(rad)
-    s = np.sin(rad)
-
-    # Map order to axis indices
-    axis_map = {'X': 0, 'Y': 1, 'Z': 2}
-    axes = [axis_map[ch] for ch in order.upper()]
-
-    # Elementary rotation matrices
-    def rot_matrix(axis, cos_a, sin_a):
-        R = np.zeros(shape + (3, 3), dtype=np.float64)
-        R[..., axis, axis] = 1.0
-        other = [i for i in range(3) if i != axis]
-        R[..., other[0], other[0]] = cos_a
-        R[..., other[0], other[1]] = -sin_a
-        R[..., other[1], other[0]] = sin_a
-        R[..., other[1], other[1]] = cos_a
-        return R
-
-    R0 = rot_matrix(axes[0], c[..., 0], s[..., 0])
-    R1 = rot_matrix(axes[1], c[..., 1], s[..., 1])
-    R2 = rot_matrix(axes[2], c[..., 2], s[..., 2])
-
-    # Combined rotation: R = R0 @ R1 @ R2
-    R = np.einsum('...ij,...jk->...ik', R0, np.einsum('...ij,...jk->...ik', R1, R2))
+    from scipy.spatial.transform import Rotation
+
+    orig_shape = euler_angles.shape[:-1]
+    flat = euler_angles.reshape(-1, 3)
+
+    # BVH uses intrinsic rotations → scipy uppercase order
+    R = Rotation.from_euler(order.upper(), flat, degrees=True).as_matrix()  # [N, 3, 3]
 
     # Extract first two columns → 6D representation
-    rot_6d = np.concatenate([R[..., :, 0], R[..., :, 1]], axis=-1)
-    return rot_6d.astype(np.float32)
+    rot_6d = np.concatenate([R[:, :, 0], R[:, :, 1]], axis=-1)  # [N, 6]
+    return rot_6d.reshape(orig_shape + (6,)).astype(np.float32)
 
 
 def forward_kinematics(
@@ -82,48 +63,38 @@ def forward_kinematics(
     offsets: np.ndarray,
     parents: list[int],
     rotation_order: str = 'ZYX',
+    local_translations: np.ndarray = None,
 ) -> np.ndarray:
     """
     Compute global joint positions from local rotations + skeleton offsets via FK.
 
+    Uses scipy.spatial.transform.Rotation for correct BVH intrinsic Euler convention.
+    Verified against Blender's BVH FK (< 0.01mm error).
+
     Args:
-        rotations: [T, J, 3] Euler angles in degrees
+        rotations: [T, J, 3] Euler angles in degrees (columns match rotation_order)
        root_positions: [T, 3]
-        offsets: [J, 3] rest-pose offsets from parent
+        offsets: [J, 3] rest-pose offsets from parent (used when local_translations is None)
        parents: [J] parent indices
-        rotation_order: Euler rotation order
+        rotation_order: Euler rotation order (e.g., 'ZYX') — intrinsic convention
+        local_translations: [T, J, 3] optional per-frame local translations
+            (for BVH files where all joints have position channels)
 
     Returns:
         [T, J, 3] global joint positions
     """
+    from scipy.spatial.transform import Rotation
+
     T, J, _ = rotations.shape
-    rad = np.radians(rotations)
 
     positions = np.zeros((T, J, 3), dtype=np.float64)
    global_rotmats = np.zeros((T, J, 3, 3), dtype=np.float64)
 
     for j in range(J):
-        # Build local rotation matrix
-        c = np.cos(rad[:, j])
-        s = np.sin(rad[:, j])
-
-        axis_map = {'X': 0, 'Y': 1, 'Z': 2}
-        axes = [axis_map[ch] for ch in rotation_order.upper()]
-
-        def _rot(axis, cos_a, sin_a):
-            R = np.zeros((T, 3, 3), dtype=np.float64)
-            R[:, axis, axis] = 1.0
-            other = [i for i in range(3) if i != axis]
-            R[:, other[0], other[0]] = cos_a
-            R[:, other[0], other[1]] = -sin_a
-            R[:, other[1], other[0]] = sin_a
-            R[:, other[1], other[1]] = cos_a
-            return R
-
-        R0 = _rot(axes[0], c[:, 0], s[:, 0])
-        R1 = _rot(axes[1], c[:, 1], s[:, 1])
-        R2 = _rot(axes[2], c[:, 2], s[:, 2])
-        local_rot = np.einsum('tij,tjk->tik', R0, np.einsum('tij,tjk->tik', R1, R2))
+        # Build local rotation matrix using scipy (intrinsic Euler)
+        local_rot = Rotation.from_euler(
+            rotation_order.upper(), rotations[:, j], degrees=True
+        ).as_matrix()  # [T, 3, 3]
 
         p = parents[j]
         if p < 0:
@@ -133,10 +104,17 @@ def forward_kinematics(
         global_rotmats[:, j] = np.einsum(
             'tij,tjk->tik', global_rotmats[:, p], local_rot
         )
-        offset = offsets[j]  # [3]
-        positions[:, j] = positions[:, p] + np.einsum(
-            'tij,j->ti', global_rotmats[:, p], offset
-        )
+        # Use per-frame translations if available, otherwise static offsets
+        if local_translations is not None:
+            offset = local_translations[:, j, :]  # [T, 3]
+            positions[:, j] = positions[:, p] + np.einsum(
+                'tij,tj->ti', global_rotmats[:, p], offset
+            )
+        else:
+            offset = offsets[j]  # [3]
+            positions[:, j] = positions[:, p] + np.einsum(
+                'tij,j->ti', global_rotmats[:, p], offset
+            )
 
     return positions.astype(np.float32)
 
@@ -161,20 +139,60 @@ def process_bvh_file(
     offsets = bvh.skeleton.rest_offsets
     rotations = bvh.rotations
     root_pos = bvh.root_positions
+    local_trans = bvh.local_translations  # [T, J, 3] or None
 
     # Remove end sites if requested
     if do_remove_end_sites:
         joint_names, parent_indices, offsets, rotations = remove_end_sites(
             joint_names, parent_indices, offsets, rotations
         )
+        # Also filter local_translations if present
+        if local_trans is not None:
+            keep_mask = [not name.endswith('_end') for name in bvh.skeleton.joint_names]
+            keep_indices = [i for i, k in enumerate(keep_mask) if k]
+            local_trans = local_trans[:, keep_indices, :]
+
+    # Remove dummy root: a static root joint whose only child is the real root (e.g. Hips).
+    if len(joint_names) > 1 and parent_indices[0] == -1:
+        children_of_root = [j for j in range(len(joint_names)) if parent_indices[j] == 0]
+        if len(children_of_root) == 1:
+            root_rot_range = rotations[:, 0].max(axis=0) - rotations[:, 0].min(axis=0)
+            root_is_static = np.all(root_rot_range < 1.0)  # <1 degree range = static
+            if root_is_static:
+                old_root_name = joint_names[0]
+                child_idx = children_of_root[0]
+                # Use per-frame position of child as new root_pos if available
+                if local_trans is not None:
+                    root_pos = local_trans[:, child_idx, :].copy()
+                    local_trans = local_trans[:, 1:, :]
+                else:
+                    root_pos = root_pos + offsets[child_idx]
+                # Remove joint 0
+                joint_names = joint_names[1:]
+                offsets = offsets[1:]
+                rotations = rotations[:, 1:]
+                # Remap parent indices
+                new_parents = []
+                for p in parent_indices[1:]:
+                    if p <= 0:
+                        new_parents.append(-1)
+                    else:
+                        new_parents.append(p - 1)
+                parent_indices = new_parents
+                print(f"  Removed dummy root '{old_root_name}' → new root '{joint_names[0]}'")
 
     J = len(joint_names)
 
     # Resample to target FPS
     if abs(bvh.fps - target_fps) > 0.5:
-        rotations, root_pos = resample_motion(
-            rotations, root_pos, bvh.fps, target_fps
-        )
+        if local_trans is not None:
+            rotations, root_pos, local_trans = resample_motion(
+                rotations, root_pos, bvh.fps, target_fps, local_trans
+            )
+        else:
+            rotations, root_pos = resample_motion(
+                rotations, root_pos, bvh.fps, target_fps
+            )
 
     T = rotations.shape[0]
     if T < min_frames:
@@ -182,6 +200,8 @@ def process_bvh_file(
     if T > max_frames:
         rotations = rotations[:max_frames]
         root_pos = root_pos[:max_frames]
+        if local_trans is not None:
+            local_trans = local_trans[:max_frames]
         T = max_frames
 
     # Build skeleton graph
@@ -193,7 +213,8 @@ def process_bvh_file(
 
     # Forward kinematics → global joint positions
     joint_positions = forward_kinematics(
-        rotations, root_pos, offsets, parent_indices, bvh.rotation_order
+        rotations, root_pos, offsets, parent_indices, bvh.rotation_order,
+        local_translations=local_trans,
     )
 
     # Scale normalization to meters
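The 6D representation produced above (the first two rotation-matrix columns) is decodable back to a full matrix with Gram-Schmidt plus a cross product, which is what makes it a continuous target for the decoder. A minimal pure-NumPy sanity check with hypothetical angles, independent of the scipy path in the diff:

```python
import numpy as np


def rot6d_to_matrix(d6: np.ndarray) -> np.ndarray:
    """Recover a rotation matrix from its first-two-columns 6D encoding."""
    a1, a2 = d6[:3], d6[3:]
    b1 = a1 / np.linalg.norm(a1)
    b2 = a2 - (b1 @ a2) * b1          # Gram-Schmidt: remove the b1 component
    b2 = b2 / np.linalg.norm(b2)
    b3 = np.cross(b1, b2)             # third column from right-handedness
    return np.stack([b1, b2, b3], axis=-1)


# Build an intrinsic ZYX rotation from hypothetical angles and round-trip it.
cz, sz = np.cos(0.3), np.sin(0.3)
cy, sy = np.cos(-0.5), np.sin(-0.5)
cx, sx = np.cos(1.1), np.sin(1.1)
Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
R = Rz @ Ry @ Rx
d6 = np.concatenate([R[:, 0], R[:, 1]])
R_rec = rot6d_to_matrix(d6)
```

Since the encoded columns are already orthonormal, the recovery is exact; for noisy decoder outputs the same function re-orthonormalizes them.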
scripts/render_motion.py ADDED
@@ -0,0 +1,348 @@
"""
Unified motion visualization: npz → stick figure video/gif.

Supports all datasets (human + animal, any joint count).
Outputs: MP4 video or GIF for human/VLM review.

Usage:
    # Render single motion
    python scripts/render_motion.py --input data/processed/humanml3d/motions/000001.npz --output results/videos/

    # Batch render dataset
    python scripts/render_motion.py --dataset humanml3d --num 20 --output results/videos/humanml3d/

    # Render with text overlay
    python scripts/render_motion.py --input ... --output ... --show_text --show_skeleton_info
"""

import sys
import os
import argparse
from pathlib import Path
import numpy as np
import matplotlib
matplotlib.use('Agg')
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from matplotlib.animation import FuncAnimation, PillowWriter
import matplotlib.patches as mpatches

project_root = Path(__file__).parent.parent
sys.path.insert(0, str(project_root))

from src.data.skeleton_graph import SkeletonGraph

SIDE_COLORS = {'left': '#e74c3c', 'right': '#3498db', 'center': '#555555'}


def load_motion_and_skeleton(npz_path: Path, dataset_path: Path = None):
    """Load motion data and its skeleton."""
    d = dict(np.load(npz_path, allow_pickle=True))
    T = int(d['num_frames'])

    # Find skeleton
    skel_data = None
    if dataset_path:
        # Try per-species skeleton for Zoo
        species = str(d.get('species', ''))
        if species:
            sp_path = dataset_path / 'skeletons' / f'{species}.npz'
            if sp_path.exists():
                skel_data = dict(np.load(sp_path, allow_pickle=True))
        if skel_data is None:
            main_skel = dataset_path / 'skeleton.npz'
            if main_skel.exists():
                skel_data = dict(np.load(main_skel, allow_pickle=True))

    if skel_data is None:
        # Infer from parent directory
        parent = npz_path.parent.parent
        main_skel = parent / 'skeleton.npz'
        if main_skel.exists():
            skel_data = dict(np.load(main_skel, allow_pickle=True))

    skeleton = SkeletonGraph.from_dict(skel_data) if skel_data else None
    canon = [str(n) for n in skel_data.get('canonical_names', [])] if skel_data else []

    return d, T, skeleton, canon


def render_motion_to_gif(
    npz_path: Path,
    output_path: Path,
    dataset_path: Path = None,
    fps: int = 20,
    show_text: bool = True,
    show_skeleton_info: bool = True,
    figsize: tuple = (10, 8),
    frame_skip: int = 1,
    view_angles: tuple = (25, -60),
):
    """Render a single motion npz to an animated GIF."""
    d, T, skeleton, canon = load_motion_and_skeleton(npz_path, dataset_path)

    joint_positions = d['joint_positions'][:T]  # [T, J, 3]
    J = joint_positions.shape[1]
    parents = skeleton.parent_indices if skeleton else [-1] + [0] * (J - 1)
    side_tags = skeleton.side_tags if skeleton else ['center'] * J

    # Subsample frames
    frames = range(0, T, frame_skip)
    n_frames = len(list(frames))

    # Compute scene bounds (from all frames)
    all_pos = joint_positions.reshape(-1, 3)
    center = all_pos.mean(axis=0)
    span = max(all_pos.max(axis=0) - all_pos.min(axis=0)) / 2 + 0.1

    # Metadata
    texts = str(d.get('texts', ''))
    first_text = texts.split('|||')[0] if texts else ''
    species = str(d.get('species', ''))
    skeleton_id = str(d.get('skeleton_id', ''))
    motion_id = npz_path.stem

    # Create figure
    fig = plt.figure(figsize=figsize)
    ax = fig.add_subplot(111, projection='3d')

    def update(frame_idx):
        ax.clear()
        fi = list(frames)[frame_idx]
        pos = joint_positions[fi]

        # Draw bones
        for j in range(J):
            p = parents[j]
            if p >= 0 and p < J:
                ax.plot3D(
                    [pos[j, 0], pos[p, 0]],
                    [pos[j, 2], pos[p, 2]],
                    [pos[j, 1], pos[p, 1]],
                    color='#bdc3c7', linewidth=2, zorder=1,
                )

        # Draw joints (colored by side)
        for j in range(J):
            color = SIDE_COLORS.get(side_tags[j] if j < len(side_tags) else 'center', '#555')
            ax.scatter3D(
                [pos[j, 0]], [pos[j, 2]], [pos[j, 1]],
                color=color, s=25, zorder=3, edgecolors='black', linewidths=0.3,
            )

        # Draw ground plane
        ground_y = joint_positions[:, :, 1].min() - 0.02
        gx = np.array([center[0] - span, center[0] + span])
        gz = np.array([center[2] - span, center[2] + span])
        gx, gz = np.meshgrid(gx, gz)
        gy = np.full_like(gx, ground_y)
        ax.plot_surface(gx, gz, gy, alpha=0.1, color='green')

        # Draw root trajectory (faded)
        traj = joint_positions[:fi + 1, 0, :]
        if len(traj) > 1:
            ax.plot3D(traj[:, 0], traj[:, 2], np.full(len(traj), ground_y),
                      color='blue', alpha=0.3, linewidth=1)

        # Axes
        ax.set_xlim(center[0] - span, center[0] + span)
        ax.set_ylim(center[2] - span, center[2] + span)
        ax.set_zlim(center[1] - span, center[1] + span)
        ax.set_xlabel('X')
        ax.set_ylabel('Z')
        ax.set_zlabel('Y')
        ax.view_init(elev=view_angles[0], azim=view_angles[1])

        # Title
        title_parts = []
        if show_skeleton_info:
            info = f'{skeleton_id}'
            if species:
                info = f'{species}'
            title_parts.append(f'{info} ({J}j)')
        title_parts.append(f'frame {fi}/{T}')
        if show_text and first_text:
            # Truncate text
            txt = first_text[:60] + ('...' if len(first_text) > 60 else '')
            title_parts.append(f'"{txt}"')
        ax.set_title('\n'.join(title_parts), fontsize=9)

    anim = FuncAnimation(fig, update, frames=n_frames, interval=1000 // (fps // frame_skip))

    # Save
    output_path.parent.mkdir(parents=True, exist_ok=True)
    suffix = output_path.suffix.lower()
    if suffix == '.gif':
        anim.save(str(output_path), writer=PillowWriter(fps=fps // frame_skip))
    elif suffix == '.mp4':
        try:
            anim.save(str(output_path), writer='ffmpeg', fps=fps // frame_skip)
        except Exception:
            # Fallback to gif
            gif_path = output_path.with_suffix('.gif')
            anim.save(str(gif_path), writer=PillowWriter(fps=fps // frame_skip))
            output_path = gif_path
    else:
        anim.save(str(output_path), writer=PillowWriter(fps=fps // frame_skip))

    plt.close(fig)
    return output_path


def render_multi_view(
    npz_path: Path,
    output_path: Path,
    dataset_path: Path = None,
    n_frames: int = 8,
):
    """Render a multi-frame overview image (static, for quick review)."""
    d, T, skeleton, canon = load_motion_and_skeleton(npz_path, dataset_path)

    joint_positions = d['joint_positions'][:T]
    J = joint_positions.shape[1]
    parents = skeleton.parent_indices if skeleton else [-1] + [0] * (J - 1)
    side_tags = skeleton.side_tags if skeleton else ['center'] * J

    frame_indices = np.linspace(0, T - 1, n_frames, dtype=int)
    colors = plt.cm.viridis(np.linspace(0, 1, n_frames))

    texts = str(d.get('texts', ''))
    first_text = texts.split('|||')[0][:80] if texts else ''
    species = str(d.get('species', ''))
    skeleton_id = str(d.get('skeleton_id', ''))

    fig = plt.figure(figsize=(16, 6))

    # Left: multi-frame overlay
    ax1 = fig.add_subplot(121, projection='3d')
    for idx, fi in enumerate(frame_indices):
        pos = joint_positions[fi]
        alpha = 0.3 + 0.7 * (idx / max(n_frames - 1, 1))
        for j in range(J):
            p = parents[j]
            if p >= 0 and p < J:
                ax1.plot3D([pos[j, 0], pos[p, 0]], [pos[j, 2], pos[p, 2]],
                           [pos[j, 1], pos[p, 1]], color=colors[idx], alpha=alpha, linewidth=1.5)

    all_pos = joint_positions.reshape(-1, 3)
    mid = all_pos.mean(axis=0)
    sp = max(all_pos.max(axis=0) - all_pos.min(axis=0)) / 2 + 0.1
    ax1.set_xlim(mid[0] - sp, mid[0] + sp)
    ax1.set_ylim(mid[2] - sp, mid[2] + sp)
    ax1.set_zlim(mid[1] - sp, mid[1] + sp)
    ax1.set_title(f'{species or skeleton_id} ({J}j, {T} frames)\n{first_text}', fontsize=9)
    ax1.view_init(25, -60)

    # Right: trajectory top view + velocity
    ax2 = fig.add_subplot(222)
    root = d['root_position'][:T]
    ax2.plot(root[:, 0], root[:, 2], 'b-', linewidth=1)
    ax2.plot(root[0, 0], root[0, 2], 'go', ms=6, label='start')
    ax2.plot(root[-1, 0], root[-1, 2], 'ro', ms=6, label='end')
    ax2.set_title('Root trajectory (top)', fontsize=8)
    ax2.set_aspect('equal')
    ax2.legend(fontsize=7)
    ax2.grid(True, alpha=0.3)

    ax3 = fig.add_subplot(224)
    vel = d['velocities'][:T]
    mean_vel = np.linalg.norm(vel, axis=-1).mean(axis=1)
    ax3.plot(mean_vel, 'b-', alpha=0.7, linewidth=1)
    fc = d['foot_contact'][:T]
    if fc.shape[1] >= 4:
        ax3.fill_between(range(T), 0, fc[:, :2].max(axis=1) * mean_vel.max() * 0.3,
                         alpha=0.2, color='red', label='L foot')
        ax3.fill_between(range(T), 0, fc[:, 2:].max(axis=1) * mean_vel.max() * 0.3,
                         alpha=0.2, color='green', label='R foot')
    ax3.set_title('Velocity + foot contact', fontsize=8)
    ax3.legend(fontsize=7)
    ax3.grid(True, alpha=0.3)
258
+
259
+ plt.tight_layout()
260
+ output_path.parent.mkdir(parents=True, exist_ok=True)
261
+ plt.savefig(output_path, dpi=120, bbox_inches='tight')
262
+ plt.close()
263
+ return output_path
264
+
265
+
266
+ def batch_render(
267
+ dataset_id: str,
268
+ output_dir: Path,
269
+ num: int = 20,
270
+ mode: str = 'overview', # 'overview' | 'gif' | 'both'
271
+ frame_skip: int = 2,
272
+ ):
273
+ """Batch render samples from a dataset."""
274
+ base = project_root / 'data' / 'processed' / dataset_id
275
+ mdir = base / 'motions'
276
+ files = sorted(os.listdir(mdir))
277
+
278
+ # Select diverse samples (spread across dataset)
279
+ indices = np.linspace(0, len(files) - 1, min(num, len(files)), dtype=int)
280
+ selected = [files[i] for i in indices]
281
+
282
+ output_dir.mkdir(parents=True, exist_ok=True)
283
+ rendered = 0
284
+
285
+ for f in selected:
286
+ npz_path = mdir / f
287
+ name = f.replace('.npz', '')
288
+
289
+ if mode in ('overview', 'both'):
290
+ out = output_dir / f'{name}_overview.png'
291
+ render_multi_view(npz_path, out, base)
292
+ rendered += 1
293
+
294
+ if mode in ('gif', 'both'):
295
+ out = output_dir / f'{name}.gif'
296
+ render_motion_to_gif(npz_path, out, base, frame_skip=frame_skip)
297
+ rendered += 1
298
+
299
+ return rendered
300
+
301
+
302
+ def main():
303
+ parser = argparse.ArgumentParser(description='Render motion visualizations')
304
+ parser.add_argument('--input', type=str, help='Single npz file to render')
305
+ parser.add_argument('--dataset', type=str, help='Dataset ID for batch render')
306
+ parser.add_argument('--output', type=str, default='results/videos/', help='Output directory')
307
+ parser.add_argument('--num', type=int, default=20, help='Number of samples for batch')
308
+ parser.add_argument('--mode', choices=['overview', 'gif', 'both'], default='both')
309
+ parser.add_argument('--frame_skip', type=int, default=2, help='Frame skip for GIF (1=all frames)')
310
+ parser.add_argument('--show_text', action='store_true', default=True)
311
+ parser.add_argument('--show_skeleton_info', action='store_true', default=True)
312
+ args = parser.parse_args()
313
+
314
+ output = Path(args.output)
315
+
316
+ if args.input:
317
+ npz_path = Path(args.input)
318
+ ds_path = npz_path.parent.parent
319
+ print(f'Rendering: {npz_path.name}')
320
+
321
+ if args.mode in ('overview', 'both'):
322
+ out = output / f'{npz_path.stem}_overview.png'
323
+ render_multi_view(npz_path, out, ds_path)
324
+ print(f' Overview: {out}')
325
+
326
+ if args.mode in ('gif', 'both'):
327
+ out = output / f'{npz_path.stem}.gif'
328
+ render_motion_to_gif(npz_path, out, ds_path, frame_skip=args.frame_skip)
329
+ print(f' GIF: {out}')
330
+
331
+ elif args.dataset:
332
+ print(f'Batch rendering: {args.dataset} ({args.num} samples, mode={args.mode})')
333
+ n = batch_render(args.dataset, output / args.dataset, args.num, args.mode, args.frame_skip)
334
+ print(f' Rendered: {n} files → {output / args.dataset}')
335
+
336
+ else:
337
+ # Render all datasets
338
+ for ds in ['humanml3d', 'lafan1', '100style', 'bandai_namco', 'cmu_mocap', 'mixamo', 'truebones_zoo']:
339
+ ds_path = project_root / 'data' / 'processed' / ds
340
+ if not ds_path.exists():
341
+ continue
342
+ print(f'Rendering {ds}...')
343
+ n = batch_render(ds, output / ds, min(args.num, 10), args.mode, args.frame_skip)
344
+ print(f' {n} files')
345
+
346
+
347
+ if __name__ == '__main__':
348
+ main()
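The `frame_skip` arithmetic in the script keeps GIF playback close to real time: skipping every second frame halves the saved frame rate, so the per-frame interval doubles. A standalone sketch of that relationship (hypothetical helper, mirroring the `interval=1000 // (fps // frame_skip)` expression above, not part of the committed script):

```python
def gif_timing(fps: int, frame_skip: int) -> tuple[int, int]:
    """Return (effective saved fps, per-frame interval in ms) for a GIF
    rendered from a source at `fps` with every `frame_skip`-th frame kept."""
    eff_fps = fps // frame_skip          # frames actually written per second
    interval_ms = 1000 // eff_fps        # display time per written frame
    return eff_fps, interval_ms

print(gif_timing(20, 2))  # 20 fps source, keep every 2nd frame -> (10, 100)
```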
src/__init__.py ADDED
File without changes
src/data/__init__.py CHANGED
@@ -1,2 +0,0 @@
-from .skeleton_graph import SkeletonGraph, normalize_joint_name
-from .bvh_parser import parse_bvh, BVHData, resample_motion, remove_end_sites
src/data/bvh_parser.py CHANGED
@@ -23,6 +23,7 @@ class BVHData:
     fps: float
     num_frames: int
     rotation_order: str  # e.g., 'ZYX'
+    local_translations: np.ndarray | None = None  # [T, J, 3] per-frame local translations (if available)
 
 
 def parse_bvh(filepath: str | Path) -> BVHData:
@@ -54,15 +55,16 @@ def parse_bvh(filepath: str | Path) -> BVHData:
     fps = 1.0 / frame_time if frame_time > 0 else 30.0
 
     # Extract rotations and root positions from channel data
-    rotations, root_positions = _extract_motion_channels(
-        motion_data, channels_per_joint, len(joint_names)
+    rest_offsets_arr = np.array(rest_offsets, dtype=np.float32)
+    rotations, root_positions, local_translations = _extract_motion_channels(
+        motion_data, channels_per_joint, len(joint_names), rest_offsets_arr
     )
 
     # Build skeleton graph
     skeleton = SkeletonGraph(
         joint_names=joint_names,
         parent_indices=parent_indices,
-        rest_offsets=np.array(rest_offsets, dtype=np.float32),
+        rest_offsets=rest_offsets_arr,
     )
 
     return BVHData(
@@ -72,6 +74,7 @@ def parse_bvh(filepath: str | Path) -> BVHData:
         fps=fps,
         num_frames=num_frames,
         rotation_order=rotation_order,
+        local_translations=local_translations,
     )
 
 
@@ -172,12 +175,24 @@ def _extract_motion_channels(
     motion_data: np.ndarray,
     channels_per_joint: list[tuple[int, bool]],
     num_joints: int,
-) -> tuple[np.ndarray, np.ndarray]:
-    """Extract per-joint rotations and root positions from flat channel data."""
+    rest_offsets: np.ndarray = None,
+) -> tuple[np.ndarray, np.ndarray, np.ndarray | None]:
+    """Extract per-joint rotations, root positions, and local translations.
+
+    Returns:
+        rotations: [T, J, 3] Euler angles
+        root_positions: [T, 3]
+        local_translations: [T, J, 3] or None — per-frame local translations
+            for joints that have position channels. None if only root has positions.
+    """
     T = len(motion_data)
     rotations = np.zeros((T, num_joints, 3), dtype=np.float32)
     root_positions = np.zeros((T, 3), dtype=np.float32)
 
+    # Track which non-root joints have position channels
+    has_per_joint_positions = False
+    joint_has_pos = [False] * num_joints
+
     col = 0
     joint_idx = 0
 
@@ -188,12 +203,15 @@ def _extract_motion_channels(
 
         if has_position and joint_idx == 0:
             # Root joint: extract position (3) + rotation (3)
-            root_positions = motion_data[:, col:col + 3]
+            root_positions = motion_data[:, col:col + 3].copy()
             rotations[:, 0, :] = motion_data[:, col + 3:col + 6]
+            joint_has_pos[0] = True
             col += num_ch
         elif has_position:
-            # Non-root with position (rare but possible)
+            # Non-root with position channels
             rotations[:, joint_idx, :] = motion_data[:, col + 3:col + 6]
+            joint_has_pos[joint_idx] = True
+            has_per_joint_positions = True
             col += num_ch
         else:
             # Rotation only
@@ -202,7 +220,27 @@ def _extract_motion_channels(
 
         joint_idx += 1
 
-    return rotations, root_positions
+    # Build per-joint local translations if any non-root joint has position channels
+    local_translations = None
+    if has_per_joint_positions:
+        local_translations = np.zeros((T, num_joints, 3), dtype=np.float32)
+        # Fill with rest_offsets as default
+        if rest_offsets is not None:
+            for j in range(num_joints):
+                local_translations[:, j, :] = rest_offsets[j]
+        # Overwrite with per-frame position data where available
+        col = 0
+        joint_idx = 0
+        for num_ch, has_position in channels_per_joint:
+            if num_ch == 0:
+                joint_idx += 1
+                continue
+            if has_position and joint_idx > 0:
+                local_translations[:, joint_idx, :] = motion_data[:, col:col + 3]
+            col += num_ch
+            joint_idx += 1
+
+    return rotations, root_positions, local_translations
 
 
 def resample_motion(
@@ -210,9 +248,12 @@ def resample_motion(
     root_positions: np.ndarray,
     source_fps: float,
     target_fps: float = 20.0,
-) -> tuple[np.ndarray, np.ndarray]:
+    local_translations: np.ndarray = None,
+) -> tuple[np.ndarray, np.ndarray] | tuple[np.ndarray, np.ndarray, np.ndarray]:
     """Resample motion to target FPS via linear interpolation."""
     if abs(source_fps - target_fps) < 0.5:
+        if local_translations is not None:
+            return rotations, root_positions, local_translations
         return rotations, root_positions
 
     T_src = len(rotations)
@@ -234,6 +275,14 @@ def resample_motion(
     for d in range(3):
         new_pos[:, d] = np.interp(tgt_times, src_times, root_positions[:, d])
 
+    # Interpolate local translations if present [T, J, 3]
+    if local_translations is not None:
+        new_trans = np.zeros((T_tgt, J, 3), dtype=np.float32)
+        for j in range(J):
+            for d in range(3):
+                new_trans[:, j, d] = np.interp(tgt_times, src_times, local_translations[:, j, d])
+        return new_rots, new_pos, new_trans
+
     return new_rots, new_pos
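The resampling added here is plain per-channel linear interpolation with `np.interp`. A minimal standalone sketch of the same idea (shapes and the target-frame-count formula are assumptions for illustration, not the module code):

```python
import numpy as np

def resample_channels(x: np.ndarray, source_fps: float, target_fps: float) -> np.ndarray:
    """Linearly resample [T, ...] motion data from source_fps to target_fps,
    channel by channel, in the spirit of the np.interp loops in resample_motion."""
    T_src = len(x)
    duration = (T_src - 1) / source_fps                 # clip length in seconds
    T_tgt = int(round(duration * target_fps)) + 1       # assumed frame-count formula
    src_times = np.arange(T_src) / source_fps
    tgt_times = np.arange(T_tgt) / target_fps
    flat = x.reshape(T_src, -1)
    out = np.stack(
        [np.interp(tgt_times, src_times, flat[:, c]) for c in range(flat.shape[1])],
        axis=1,
    )
    return out.reshape((T_tgt,) + x.shape[1:]).astype(np.float32)

# 9 frames at 40 fps, values equal to the frame index, downsampled to 20 fps
motion = np.arange(9, dtype=np.float32).reshape(9, 1, 1)
down = resample_channels(motion, 40.0, 20.0)
print(down.shape)  # (5, 1, 1)
```

Because the signal is linear in time, the downsampled values land exactly on every second source frame (0, 2, 4, 6, 8), which makes the interpolation easy to sanity-check.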