Lemon-03 committed on
Commit f08e6dd · verified · 1 Parent(s): 4d2e3a5

Update README.md

Files changed (1):
  1. README.md +119 -58

README.md CHANGED
@@ -1,48 +1,48 @@
  ---
  datasets:
- - lerobot/pusht
  library_name: lerobot
  license: apache-2.0
  model_name: diffusion
  pipeline_tag: robotics
  tags:
- - lerobot
- - robotics
- - diffusion
- - pusht
- - imitation-learning
- - benchmark
  ---
 
- # 🦾 Diffusion Policy for Push-T (200k Steps)
 
  [![LeRobot](https://img.shields.io/badge/Library-LeRobot-yellow)](https://github.com/huggingface/lerobot)
- [![Task](https://img.shields.io/badge/Task-Push--T-blue)](https://huggingface.co/datasets/lerobot/pusht)
  [![UESTC](https://img.shields.io/badge/Author-UESTC_Graduate-red)](https://www.uestc.edu.cn/)
  [![License](https://img.shields.io/badge/License-Apache_2.0-green)](https://www.apache.org/licenses/LICENSE-2.0)
 
- > **Summary:** This model demonstrates the capabilities of **Diffusion Policy** on the precision-demanding **Push-T** task. It was trained using the [LeRobot](https://github.com/huggingface/lerobot) framework as part of a thesis research project benchmarking Imitation Learning algorithms.
 
- - **🧩 Task**: Push-T (Simulated)
  - **🧠 Algorithm**: [Diffusion Policy](https://huggingface.co/papers/2303.04137) (DDPM)
- - **🔄 Training Steps**: 200,000 (Fine-tuned via Resume)
  - **🎓 Author**: Graduate Student, **UESTC** (University of Electronic Science and Technology of China)
 
  ---
 
  ## 🔬 Benchmark Results (vs ACT)
 
- Compared to the ACT baseline (which achieved **0%** success rate in our controlled experiments), this Diffusion Policy model demonstrates significantly better control precision and trajectory stability.
 
  ### 📊 Evaluation Metrics (50 Episodes)
 
  | Metric | Value | Comparison to ACT Baseline | Status |
  | :--- | :---: | :--- | :---: |
- | **Success Rate** | **14.0%** | **Significant Improvement** (ACT: 0%) | 🏆 |
- | **Avg Max Reward** | **0.81** | **+58% Higher Precision** (ACT: ~0.51) | 📈 |
- | **Avg Sum Reward** | **130.46** | **+147% More Stable** (ACT: ~52.7) | ✅ |
 
- > **Note:** The Push-T environment requires **>95% target coverage** for success. An average max reward of `0.81` indicates the policy consistently moves the block very close to the target position, proving strong manipulation capabilities despite the strict success threshold.
 
  ---
 
 
@@ -51,42 +51,105 @@ Compared to the ACT baseline (which achieved **0%** success rate in our controll
  | Parameter | Description |
  | :--- | :--- |
  | **Architecture** | ResNet18 (Vision Backbone) + U-Net (Diffusion Head) |
  | **Prediction Horizon** | 16 steps |
  | **Observation History** | 2 steps |
  | **Action Steps** | 8 steps |
 
- - **Training Strategy**:
-   - Phase 1: Initial training (100,000 steps) -> Model: `Lemon-03/DP_PushT_test`
-   - Phase 2: Resume/Fine-tuning (+100,000 steps) -> Model: `Lemon-03/DP_PushT_test_Resume`
-   - **Total**: 200,000 steps
-
  ---
 
- ## 🔧 Training Configuration (Reference)
-
- For reproducibility, here are the key parameters used during the training session:
-
- - **Batch Size**: 64
- - **Optimizer**: AdamW (`lr=1e-4`)
- - **Scheduler**: Cosine with warmup
- - **Vision**: ResNet18 with random crop (84x84)
- - **Precision**: Mixed Precision (AMP) enabled
-
- #### Original Training Command (My Resume Mode)
-
- ```bash
- python -m lerobot.scripts.lerobot_train \
-   --policy.type diffusion \
-   --env.type pusht \
-   --dataset.repo_id lerobot/pusht \
-   --wandb.enable true \
-   --eval.batch_size 8 \
-   --job_name DP_PushT_Resume \
-   --policy.repo_id Lemon-03/DP_PushT_test_Resume \
-   --policy.pretrained_path outputs/train/2025-12-02/14-33-35_DP_PushT/checkpoints/last/pretrained_model \
-   --steps 100000
- ```
- ---
 
  ## 🚀 Evaluate (My Evaluation Mode)
 
@@ -95,11 +158,11 @@ Run the following command in your terminal to evaluate the model for 50 episodes
  ```bash
  python -m lerobot.scripts.lerobot_eval \
    --policy.type diffusion \
-   --policy.pretrained_path outputs/train/2025-12-04/14-47-37_DP_PushT_Resume/checkpoints/last/pretrained_model \
    --eval.n_episodes 50 \
-   --eval.batch_size 10 \
-   --env.type pusht \
-   --env.task PushT-v0
  ```
 
  To evaluate this model locally, run the following command:
@@ -107,11 +170,9 @@ To evaluate this model locally, run the following command:
  ```bash
  python -m lerobot.scripts.lerobot_eval \
    --policy.type diffusion \
-   --policy.pretrained_path Lemon-03/DP_PushT_test_Resume \
    --eval.n_episodes 50 \
-   --eval.batch_size 10 \
-   --env.type pusht \
-   --env.task PushT-v0
- ```
-
- -----

  ---
  datasets:
+ - lerobot/aloha_sim_insertion_human
  library_name: lerobot
  license: apache-2.0
  model_name: diffusion
  pipeline_tag: robotics
  tags:
+ - lerobot
+ - robotics
+ - diffusion
+ - aloha
+ - imitation-learning
+ - benchmark
  ---
 
+ # 🦾 Diffusion Policy for Aloha Insertion (200k Steps)
 
  [![LeRobot](https://img.shields.io/badge/Library-LeRobot-yellow)](https://github.com/huggingface/lerobot)
+ [![Task](https://img.shields.io/badge/Task-Aloha_Insertion-blue)](https://huggingface.co/datasets/lerobot/aloha_sim_insertion_human)
  [![UESTC](https://img.shields.io/badge/Author-UESTC_Graduate-red)](https://www.uestc.edu.cn/)
  [![License](https://img.shields.io/badge/License-Apache_2.0-green)](https://www.apache.org/licenses/LICENSE-2.0)
 
+ > **Summary:** This model is a benchmark experiment for **Diffusion Policy** on the challenging **Aloha Insertion** task (simulated). It was trained with the [LeRobot](https://github.com/huggingface/lerobot) framework to evaluate the algorithm's performance on complex, high-dimensional 3D manipulation tasks against baseline methods.
 
+ - **🧩 Task**: Aloha Insertion (Simulated, 3D)
  - **🧠 Algorithm**: [Diffusion Policy](https://huggingface.co/papers/2303.04137) (DDPM)
+ - **🔄 Training Steps**: 200,000
  - **🎓 Author**: Graduate Student, **UESTC** (University of Electronic Science and Technology of China)
 
  ---
 
  ## 🔬 Benchmark Results (vs ACT)
 
+ This experiment highlights how difficult the Aloha Insertion task is for generative policies under tight compute constraints (batch size 8). While the ACT baseline achieved a **2%** success rate (1/50), the Diffusion Policy learned stable trajectories but struggled with the final insertion alignment.
 
  ### 📊 Evaluation Metrics (50 Episodes)
 
  | Metric | Value | Comparison to ACT Baseline | Status |
  | :--- | :---: | :--- | :---: |
+ | **Success Rate** | **0.0%** | **Slightly Lower** (ACT: 2.0%) | 📉 |
+ | **Avg Max Reward** | **0.10** | **Partial Success** (Grasping achieved) | 🚧 |
+ | **Avg Sum Reward** | **8.20** | **Stable Trajectories** | ✅ |
 
+ > **Note:** The Aloha Insertion task involves high-dimensional inputs (3 cameras) and precise 3D spatial reasoning. The results indicate that under a low batch-size constraint (batch size 8), ACT's deterministic policy may converge faster than Diffusion Policy, which likely requires longer training or larger batches for this specific domain.
 
  ---
 
  | Parameter | Description |
  | :--- | :--- |
  | **Architecture** | ResNet18 (Vision Backbone) + U-Net (Diffusion Head) |
+ | **Input** | 3 Camera Views (Top, Left, Right) |
  | **Prediction Horizon** | 16 steps |
  | **Observation History** | 2 steps |
  | **Action Steps** | 8 steps |
 
  ---
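The three prediction parameters in the table interact as receding-horizon control: from the last 2 observations the policy predicts a 16-step action chunk but executes only the first 8 before replanning. A minimal pure-Python sketch of that loop, where the hypothetical `predict_chunk` stub stands in for the real diffusion sampler:

```python
from collections import deque

HORIZON, N_OBS_STEPS, N_ACTION_STEPS = 16, 2, 8  # values from the table above

def predict_chunk(obs_history):
    # Hypothetical stand-in for the diffusion sampler: a real policy would
    # denoise a (HORIZON x action_dim) trajectory conditioned on obs_history.
    latest = obs_history[-1]  # here an "observation" is just its timestep id
    return [latest + k for k in range(HORIZON)]

def rollout(n_env_steps):
    obs_history = deque(maxlen=N_OBS_STEPS)  # keep only the last 2 observations
    executed, chunk = [], []
    for t in range(n_env_steps):
        obs_history.append(t)                # observe the environment each step
        if not chunk:                        # replan only when the chunk is spent
            chunk = predict_chunk(list(obs_history))[:N_ACTION_STEPS]
        executed.append(chunk.pop(0))        # execute one action per env step
    return executed

# 24 env steps -> the policy replans 3 times, executing 8 actions per plan
print(rollout(24))
```

Executing 8 of 16 predicted steps trades open-loop smoothness against reactivity: longer chunks commit to stale observations, while shorter ones pay the diffusion sampling cost more often.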
 
+ ## 🔧 Training Configuration
+
+ For reproducibility, here are the key parameters used during the training session.
+
+ - **Source**: Configuration adapted from [CSCSX/LeRobotTutorial-CN](https://github.com/CSCSX/LeRobotTutorial-CN).
+ - **Batch Size**: 8 (Limited by 8GB VRAM)
+ - **Optimizer**: AdamW (`lr=1e-4`)
+ - **Scheduler**: Cosine with warmup
+ - **Vision**: ResNet18 with GroupNorm (Cropped to 420x560)
+
+ <details>
+ <summary>📄 <strong>Click to view the full <code>diffusion_aloha.yaml</code> used for training</strong></summary>
+
+ ```yaml
+ # @package _global_
+
+ # Random seed
+ seed: 100000
+ job_name: Diffusion-Aloha-Insertion
+
+ # Training parameters
+ steps: 200000      # the source config uses 200k steps (Aloha is hard to train)
+ eval_freq: 20000   # evaluate a bit more often to make progress easier to track
+ save_freq: 20000
+ log_freq: 200
+ batch_size: 8      # ⚠️ Key: Aloha needs a small batch, otherwise 8 GB of VRAM is not enough
+
+ # Dataset
+ dataset:
+   repo_id: lerobot/aloha_sim_insertion_human
+
+ # Evaluation settings
+ eval:
+   n_episodes: 50
+   batch_size: 8    # keep consistent with training
+
+ # Environment settings
+ env:
+   type: aloha
+   task: AlohaInsertion-v0
+   fps: 50
+
+ # Policy configuration
+ policy:
+   type: diffusion
+
+   # --- Vision processing ---
+   vision_backbone: resnet18
+   # Aloha images are rectangular, so a task-specific crop size is used
+   crop_shape: [420, 560]
+   crop_is_random: true
+   pretrained_backbone_weights: null  # the source config loads no pretrained weights
+   use_group_norm: true
+   spatial_softmax_num_keypoints: 32
+
+   # --- Core diffusion architecture (U-Net) ---
+   down_dims: [512, 1024, 2048]
+   kernel_size: 5
+   n_groups: 8
+   diffusion_step_embed_dim: 128
+   use_film_scale_modulation: true
+
+   # --- Action prediction parameters ---
+   n_action_steps: 8
+   n_obs_steps: 2
+   horizon: 16
+
+   # --- Noise scheduler (DDPM) ---
+   noise_scheduler_type: DDPM
+   num_train_timesteps: 100
+   num_inference_steps: 100
+   beta_schedule: squaredcos_cap_v2
+   beta_start: 0.0001
+   beta_end: 0.02
+   prediction_type: epsilon
+   clip_sample: true
+   clip_sample_range: 1.0
+
+   # --- Optimizer ---
+   optimizer_lr: 1e-4
+   optimizer_weight_decay: 1e-6
+   # grad_clip_norm: 10
+
+   scheduler_name: cosine
+   scheduler_warmup_steps: 500
+
+   use_amp: true
+ ```
+
+ </details>
+
+ ---
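For intuition on the `beta_schedule: squaredcos_cap_v2` entry in the config above, the sketch below reproduces the cosine noise schedule in pure Python, following the standard squared-cosine formula from the improved-DDPM line of work. The `max_beta = 0.999` cap is an assumption matching common implementations, and under this schedule the `beta_start`/`beta_end` values are typically ignored:

```python
import math

def cosine_betas(num_train_timesteps=100, max_beta=0.999):
    """'squaredcos_cap_v2' sketch: alpha_bar(t) follows a squared cosine and
    each beta_i = 1 - alpha_bar(t_{i+1}) / alpha_bar(t_i), capped at max_beta."""
    def alpha_bar(t):
        return math.cos((t + 0.008) / 1.008 * math.pi / 2) ** 2
    betas = []
    for i in range(num_train_timesteps):
        t1 = i / num_train_timesteps
        t2 = (i + 1) / num_train_timesteps
        betas.append(min(1.0 - alpha_bar(t2) / alpha_bar(t1), max_beta))
    return betas

betas = cosine_betas(100)  # num_train_timesteps: 100 in the config
# Early steps add almost no noise; the final steps hit the max_beta cap.
```

Compared to a linear `beta_start`/`beta_end` ramp, the cosine schedule destroys information more gradually in early timesteps, which the improved-DDPM paper found helpful for image-conditioned models.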
 
 
  ## 🚀 Evaluate (My Evaluation Mode)
 
  ```bash
  python -m lerobot.scripts.lerobot_eval \
    --policy.type diffusion \
+   --policy.pretrained_path Lemon-03/DP_Aloha_Insertion_test \
    --eval.n_episodes 50 \
+   --eval.batch_size 8 \
+   --env.type aloha \
+   --env.task AlohaInsertion-v0
  ```
 
  To evaluate this model locally, run the following command:
 
  ```bash
  python -m lerobot.scripts.lerobot_eval \
    --policy.type diffusion \
+   --policy.pretrained_path Lemon-03/DP_Aloha_Insertion_test \
    --eval.n_episodes 50 \
+   --eval.batch_size 8 \
+   --env.type aloha \
+   --env.task AlohaInsertion-v0
+ ```