---
license: mit
datasets:
- lerobot/pusht_image
tags:
- lerobot
- pusht
- diffusion
---

# Model Card for Mini Diffusion Policy / PushT

We add a few lines to Diffusion Policy to introduce an extra level-2 minibatch (as per [Mini Diffuser (ICRA 2026)](https://arxiv.org/abs/2505.09430)), trained for the `PushT` environment from [gym-pusht](https://github.com/huggingface/gym-pusht). This yields an effective batch size tens of times larger per gradient step and cuts training time by at least 60% while reaching similar training results.
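The level-2 minibatch idea can be illustrated with a minimal sketch (not the actual LeRobot/Mini Diffuser implementation; `expensive_encode` and the toy denoiser are placeholders): each observation is encoded once, and the encoding is then reused for `level2_batch_size` independent noise/timestep samples, so the denoising loss sees many more samples per gradient step at little extra cost.

```python
import numpy as np

rng = np.random.default_rng(0)

def expensive_encode(obs):
    # Stand-in for the vision encoder: the costly per-observation computation
    # that Mini Diffuser avoids repeating for every noise sample.
    return obs.mean(axis=-1, keepdims=True) + obs

def two_level_diffusion_loss(cond, actions, level2_batch_size, rng):
    """Level 1: a batch of B observations (encoded once).
    Level 2: K noised action samples sharing each observation's encoding."""
    B, _ = cond.shape
    K = level2_batch_size
    # Replicate the conditioning without re-running the encoder.
    cond_rep = np.repeat(cond, K, axis=0)      # (B*K, D)
    act_rep = np.repeat(actions, K, axis=0)    # (B*K, A)
    noise = rng.standard_normal(act_rep.shape)
    t = rng.uniform(size=(B * K, 1))           # independent diffusion steps
    noisy = np.sqrt(1.0 - t) * act_rep + np.sqrt(t) * noise
    # Toy denoiser standing in for the policy network.
    pred = noisy - np.sqrt(1.0 - t) * cond_rep[:, : act_rep.shape[1]]
    return np.mean((pred - noise) ** 2)

obs = rng.standard_normal((32, 8))     # level-1 batch of 32 observations
actions = rng.standard_normal((32, 4))
cond = expensive_encode(obs)           # encoder runs once per observation
loss = two_level_diffusion_loss(cond, actions, level2_batch_size=8, rng=rng)
# Effective samples per gradient step: 32 * 8 = 256, matching the
# --batch_size=32 --policy.level2_batch_size=8 configuration below.
```

The ratio between the effective and the level-1 batch size is what the "tens of equivalent batch size per gradient step" claim refers to.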

## How to Get Started with the Model

See the [LeRobot library](https://github.com/huggingface/lerobot) for instructions on how to load and evaluate this model.

## Training Details

Trained with a forked [LeRobot@7bd533a](https://github.com/utomm/lerobot/tree/minidp-0.4.2).

The model was trained using [LeRobot's training script](https://github.com/huggingface/lerobot/blob/main/lerobot/scripts/train.py) and with the [pusht](https://huggingface.co/datasets/lerobot/pusht) dataset, using this command:

```bash
lerobot-train --policy.type=minidiffusion  --dataset.repo_id=lerobot/pusht_image\
 --env.type=pusht --seed=100000 --batch_size=32 --log_freq=200 --wandb.disable_artifact=true\
 --steps=100000  --eval_freq=10000 --save_freq=10000 --wandb.enable=true --policy.repo_id=id\
 --wandb.project=minidp --policy.push_to_hub=false --policy.level2_batch_size=8 --job_name=minidp-32-8
```


The training curves, and comparisons with the original Diffusion Policy, can be found at https://api.wandb.ai/links/hu2240877635/defcr4wu
<iframe src="https://wandb.ai/hu2240877635/minidp/reports/Accelerating-Diffusion-Policy-Training-with-MiniDP--VmlldzoxNjI1NzQ0OA" style="border:none;height:1024px;width:100%"></iframe>

The current model corresponds to the checkpoint at 90k steps.