# TopoSlots Text Prompt Design

## Core Question

Our model takes **text + target_skeleton** as a dual-condition input. How should the text prompt be designed?

```
Pipeline: text prompt + skeleton graph → SlotAssignment → MaskGIT → Decoder → motion
               ↑               ↑
     describes the action   determines the skeleton
```

## Design Decisions

### Decision 1: Should topology information go into the prompt?

**Conclusion: no.** Reasons:

1. **The architecture already separates them**: skeleton topology enters through the skeleton graph (SkeletonEncoder + SlotAssignment), so the text does not need to encode topology a second time
2. **Cross-skeleton generalization**: if the prompt contains "a dog walks", the model becomes tied to the dog skeleton, whereas "walks forward" can be reused on any quadruped or biped
3. **Usability**: at inference, users can freely combine any text with any skeleton, e.g. apply "walks forward" to a previously unseen skeleton
4. **Annotation efficiency**: prompts do not have to be rewritten for every skeleton

**One exception**: the species name may appear as an **optional prefix tag**, used for semantic alignment during training and omittable at inference.

```
Training:  "[dog] walks forward and sits down"   ← species tag in brackets is optional
Inference: "walks forward and sits down"         ← pure action description
Inference: "[cat] walks forward and sits down"   ← a species hint may also be added
```
### Decision 2: Granularity of action descriptions

Three levels; **L1 is required, L2 is encouraged, L3 is optional**:

| Level | Format | Example | Annotation cost | Use |
|------|------|------|:--------:|------|
| **L1: action tag** | 1-3 words | `walk`, `run`, `attack`, `idle` | very low | classification, retrieval, baseline conditioning |
| **L2: short description** | one sentence, 10-20 words | `walks forward slowly then stops` | medium | main training condition |
| **L3: detailed description** | 2-3 sentences | `The creature begins walking at a steady pace, gradually slowing down. It pauses briefly, shifts weight, then stops.` | high | fine-grained control, evaluation |

**L2 is the core level**: it balances annotation cost against semantic richness.

### Decision 3: Descriptions should be skeleton-agnostic

**Key principle: describe WHAT is done, not HOW it is done.**

| ✅ Good (skeleton-agnostic) | ❌ Bad (skeleton-specific) |
|---|---|
| `walks forward` | `moves left leg then right leg alternately` |
| `attacks with its mouth` | `opens jaw joint 15 degrees then closes` |
| `flies in a circle` | `flaps left wing up 45 degrees, right wing follows` |
| `stands idle, looking around` | `rotates head joint on spine3` |
| `runs and jumps over obstacle` | `extends both hind legs while fore legs tuck` |

**Why**:
- "walks forward" is meaningful for a 22-joint human, a 55-joint dog, and a 71-joint spider alike
- "moves left leg then right leg" assumes a bipedal structure; what about a spider with 8 legs?
- the model's slot assignment + decoder decide on their own which joints participate in a "walk"

### Decision 4: A species-general action vocabulary

A standardized action vocabulary shared across species:

```
== Locomotion ==
walk, run, sprint, trot, gallop, crawl, slither, swim, fly, hover, glide

== Posture ==
stand, sit, lie down, crouch, squat, rear up, perch

== Interaction ==
attack, bite, claw, kick, charge, ram, pounce, grab

== Mood / state ==
idle, alert, sleep, eat, drink, shake, scratch, groom

== Transitions ==
turn left/right, start, stop, accelerate, decelerate, transition
```
These words are meaningful for humans and animals alike (a human can also "crouch", "charge", or "idle").
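To make the vocabulary usable during L1 extraction, free-form labels need to be mapped onto these canonical words. A minimal sketch (the helper name `normalize_l1` and the substring-matching strategy are assumptions, not part of the design above):

```python
# Hypothetical helper: map a free-form label or filename fragment onto the
# shared action vocabulary, so "Walking"/"happy_walk" both yield the L1 tag "walk".
ACTION_VOCAB = {
    "locomotion":  ["walk", "run", "sprint", "trot", "gallop", "crawl",
                    "slither", "swim", "fly", "hover", "glide"],
    "posture":     ["stand", "sit", "lie down", "crouch", "squat", "rear up", "perch"],
    "interaction": ["attack", "bite", "claw", "kick", "charge", "ram", "pounce", "grab"],
    "state":       ["idle", "alert", "sleep", "eat", "drink", "shake", "scratch", "groom"],
    "transition":  ["turn left", "turn right", "start", "stop",
                    "accelerate", "decelerate", "transition"],
}

# Flat lookup, longest entries first so "lie down" wins over shorter substrings.
_ALL_WORDS = sorted((w for ws in ACTION_VOCAB.values() for w in ws),
                    key=len, reverse=True)

def normalize_l1(text):
    """Return the first vocabulary word found inside a free-form label, else None."""
    t = text.lower().replace("_", " ").replace("-", " ")
    for word in _ALL_WORDS:
        if word in t:  # substring match also catches "walking", "running"
            return word
    return None
```

Substring matching is deliberately loose (a sketch, not production code); a real pipeline would likely add lemmatization and an exclusion list for false positives.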
---

## Annotation Spec (updated)

### Label format for each motion clip

```json
{
  "motion_id": "Dog_0001",
  "labels": {
    "L1_action": "walk",
    "L1_action_secondary": "turn",
    "L2_short": "walks forward then turns right",
    "L3_detailed": "The creature walks forward at a moderate pace for several steps, then smoothly turns to the right while maintaining balance.",
    "species_tag": "dog",
    "species_category": "quadruped"
  }
}
```
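Since only L1 is required (per Decision 2), a schema check can be very small. A minimal sketch, assuming the field names from the JSON example above (the function `validate_labels` is hypothetical):

```python
# Hypothetical validator for one label record: L1_action is the only
# required field; everything else in the schema above is optional.
REQUIRED = ["L1_action"]
OPTIONAL = ["L1_action_secondary", "L2_short", "L3_detailed",
            "species_tag", "species_category"]

def validate_labels(record):
    """Return a list of problems; an empty list means the record is usable."""
    errors = []
    labels = record.get("labels", {})
    for key in REQUIRED:
        if not labels.get(key):
            errors.append(f"missing required field: {key}")
    for key, value in labels.items():
        if key not in REQUIRED + OPTIONAL:
            errors.append(f"unknown field: {key}")
        elif not isinstance(value, str):
            errors.append(f"{key} must be a string")
    return errors
```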
### Annotation priorities

| Priority | Dataset | Current state | What is needed |
|:------:|--------|---------|-----------|
| **P0** | HumanML3D | 100% L2 text | add L1 tags (extractable from the text automatically) |
| **P1** | Truebones Zoo | 80% L2 text | fill in the remaining 20%; add L1 tags |
| **P2** | LAFAN1 | 0% | extract L1 from filenames (`aiming`, `dance`, `fight`...); generate L2 with a VLM |
| **P2** | 100Style | 0% | extract L1 + style tags from filenames (`happy_walk`, `tired_run`...) |
| **P3** | Bandai Namco | 0% | JSON metadata is available for extraction; fill in L2 with a VLM |
| **P3** | CMU MoCap | 0% | extract L1 from directory names (subject/action indices) |
| **P4** | Mixamo | 0% | map filename hashes back to the original animation names |

### Low-cost bulk annotation plan

**Phase 1: automatic L1 extraction** (zero manual cost)
```
LAFAN1 filename:  "aiming1_subject1.bvh" → L1="aim"
100Style:         "Happy_FW.bvh"         → L1="walk", style="happy"
Bandai Namco:     JSON metadata          → L1 directly available
CMU:              directory structure    → L1 categories
```
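The first two filename conventions above can be sketched as simple parsers. This is a sketch under assumptions: the regex, the `CANONICAL` map, and the reading of `FW` as "forward walk" in 100Style names are illustrative, not a confirmed spec:

```python
import re

# Hypothetical parsers for the filename conventions listed above.
LAFAN1_RE = re.compile(r"^([a-zA-Z]+?)\d*_subject\d+\.bvh$")

# Strip trailing take numbers, then canonicalize the verb ("aiming" → "aim").
CANONICAL = {"aiming": "aim", "dance": "dance", "fight": "fight"}

STYLE100_CODE = {"FW": "walk"}  # assumed: FW = forward walk in 100Style names

def l1_from_lafan1(filename):
    """'aiming1_subject1.bvh' → 'aim'; returns None if the name doesn't match."""
    m = LAFAN1_RE.match(filename)
    if not m:
        return None
    stem = m.group(1).lower()
    return CANONICAL.get(stem, stem)

def l1_from_100style(filename):
    """'Happy_FW.bvh' → (action='walk', style='happy')."""
    stem = filename.rsplit(".", 1)[0]
    style, _, code = stem.partition("_")
    return STYLE100_CODE.get(code), style.lower()
```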
**Phase 2: VLM-generated L2 short descriptions**
- render the skeletal animation as a stick-figure video
- feed it to a VLM (GPT-4o / Qwen2.5-VL) to generate the L2 description
- template prompt: `"Describe the motion of this creature in one sentence. Focus on WHAT it does, not HOW its body parts move."`

**Phase 3: human review + correction** (reviewers only check VLM output; nothing is written from scratch)

---

## Text Conditioning Strategy at Training Time

```python
import random

def sample_text_condition(sample):
    """Randomly pick a conditioning level per sample, so the model learns
    multi-granularity control. When all levels exist, the thresholds follow
    the CFG schedule below: 10% unconditional, 20% L3, 60% L2, 10% L1."""
    r = random.random()
    if r < 0.10:
        text = ""  # unconditional (CFG dropout)
    elif r < 0.30 and sample.get('L3_detailed'):
        text = sample['L3_detailed']
    elif r < 0.90 and sample.get('L2_short'):
        text = sample['L2_short']
    elif sample.get('L1_action'):
        text = sample['L1_action']
    else:
        text = ""  # no labels available → fall back to unconditional

    # Optional: prepend the species tag
    if text and random.random() < 0.5 and sample.get('species_tag'):
        text = f"[{sample['species_tag']}] {text}"
    return text
```
### Classifier-Free Guidance Design

```
10% probability: text = ""          (fully unconditional → learns an unconditional motion prior)
10% probability: text = L1 only     (minimal condition → learns coarse control)
60% probability: text = L2 short    (main condition)
20% probability: text = L3 detailed (fine condition)

The skeleton is always provided (the skeleton condition is never dropped out).
```
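At sampling time, the text dropout above pays off via standard classifier-free guidance: the model is run twice per step, with and without the text, and the two logit sets are blended. A minimal sketch (the `model` callable, its signature, and the default `scale` are hypothetical):

```python
# Sketch of classifier-free guidance at inference. Only the text condition is
# dropped for the unconditional pass; the skeleton is always provided,
# matching the training schedule above.
def cfg_logits(model, tokens, skeleton, text, scale=3.0):
    """guided = uncond + scale * (cond - uncond), applied elementwise."""
    cond = model(tokens, skeleton, text)    # conditional pass
    uncond = model(tokens, skeleton, "")    # unconditional pass (text dropped)
    return [u + scale * (c - u) for c, u in zip(cond, uncond)]
```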
---

## Comparison with Competing Methods

| Method | Text design | Skeleton condition | Cross-skeleton generalization |
|------|---------|---------|:---------:|
| **HumanML3D** | free text, contains "a person" | none (fixed SMPL) | ✗ |
| **NECromancer** | VLM-generated text | separate skeleton input | ✓ (tokenizer) |
| **T2M4LVO** | multi-granularity text incl. species name | none | ✗ |
| **AnyTop** | no text condition | separate skeleton input | ✓ |
| **TopoSlots (ours)** | **L1+L2+L3 multi-granularity, skeleton-agnostic, optional species tag** | **separate skeleton input** | **✓ (generation)** |

**Our advantages**:
1. Text and skeleton are fully decoupled → "walk" can be applied to any skeleton
2. Multi-granularity conditioning → supports coarse-to-fine control
3. Optional species tag → weak semantic supervision during training, flexible combinations at inference

---

## Summary: Annotation Workload Estimate

| Stage | Effort | Output |
|------|--------|------|
| L1 automatic extraction | ~1 day (scripting) | 100% L1 tags for all datasets |
| L2 VLM generation | ~2 days (GPU) | ~9K L2 descriptions for the datasets missing them |
| L2 human review | ~3-5 days | quality assurance |
| L3 detailed descriptions | optional | only HumanML3D has them |

**Conclusion: L1 can be automated at zero cost, L2 is VLM-generated and human-reviewed, and L3 is not needed for now. Total manual effort: about 3-5 days.**