Robotics · LeRobot · Safetensors · act

Beable committed
Commit a0737a8 · verified · 1 parent: 78bac52

Upload policy weights, train config and readme

Files changed (3):
  1. README.md +62 -0
  2. model.safetensors +1 -1
  3. train_config.json +3 -3
README.md ADDED
---
datasets: Beable/SOARM100-ep67joy
library_name: lerobot
license: apache-2.0
model_name: act
pipeline_tag: robotics
tags:
- robotics
- act
- lerobot
---

# Model Card for act

<!-- Provide a quick summary of what the model is/does. -->

[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high success rates.
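As a rough illustration of the chunking idea only — a toy sketch with 1-D actions, not LeRobot's implementation — each timestep the policy predicts the next k actions, and all chunks that cover a given timestep are blended with exponentially decaying weights (the scheme the ACT paper calls temporal ensembling; the constant `m` here is an assumed value):

```python
import math

def temporal_ensemble(chunks, t, k, m=0.1):
    """Blend every chunk's prediction for timestep t.

    chunks[i] is the length-k list of (1-D) actions predicted at
    timestep i, covering timesteps i .. i+k-1.
    """
    # collect each overlapping chunk's prediction for timestep t
    preds = [chunk[t - i] for i, chunk in enumerate(chunks) if i <= t < i + k]
    # exponential weights, oldest prediction first: w_j = exp(-m * j)
    weights = [math.exp(-m * j) for j in range(len(preds))]
    return sum(p * w for p, w in zip(preds, weights)) / sum(weights)

# two overlapping chunks of length 3; both cover timestep 1
chunks = [[1.0, 1.0, 1.0], [2.0, 2.0, 2.0]]
action = temporal_ensemble(chunks, t=1, k=3)  # weighted mix of 1.0 and 2.0
```

Because several overlapping predictions vote on each executed action, the policy's output is smoother and more robust than executing one chunk open-loop.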
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).

---

## How to Get Started with the Model

For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is the short version of how to train and run inference/evaluation:

### Train from scratch

```bash
lerobot-train \
  --dataset.repo_id=${HF_USER}/<dataset> \
  --policy.type=act \
  --output_dir=outputs/train/<desired_policy_repo_id> \
  --job_name=lerobot_training \
  --policy.device=cuda \
  --policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
  --wandb.enable=true
```

_Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._

### Evaluate the policy/run inference

```bash
lerobot-record \
  --robot.type=so100_follower \
  --dataset.repo_id=<hf_user>/eval_<dataset> \
  --policy.path=<hf_user>/<desired_policy_repo_id> \
  --episodes=10
```

Prefix the dataset repo with **eval\_** and point `--policy.path` at a local or Hub checkpoint.

---

## Model Details

- **License:** apache-2.0
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:22ad49b561bd680fd91d2cb0fe7f08f892f026713c74174b9561f193699a5163
+ oid sha256:6e577e76ffbdd508b4706ec2367b1acf986b572fa42f09f8b86ca3ea78944990
  size 315819360
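For context, the `model.safetensors` entry in this diff is a Git LFS pointer file, not the weights themselves: the repository stores only the spec version, the SHA-256 `oid` of the real file, and its `size` in bytes. A minimal parser (a sketch, using the oid and size from this commit):

```python
def parse_lfs_pointer(text):
    """Split a Git LFS pointer file into its key/value fields."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

pointer = (
    "version https://git-lfs.github.com/spec/v1\n"
    "oid sha256:6e577e76ffbdd508b4706ec2367b1acf986b572fa42f09f8b86ca3ea78944990\n"
    "size 315819360\n"
)
info = parse_lfs_pointer(pointer)
# info["oid"] identifies the actual weights file by its SHA-256;
# info["size"] is its length in bytes (here ~316 MB)
```

This is why the diff shows only the `oid` line changing: the weights file was replaced by a new one of identical size.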
train_config.json CHANGED
@@ -128,17 +128,17 @@
    "optimizer_weight_decay": 0.0002,
    "optimizer_lr_backbone": 3e-06
  },
- "output_dir": "outputs/train/2025-08-24/23-01-30_act",
+ "output_dir": "outputs/train/2025-08-26/03-22-08_act",
  "job_name": "act",
  "resume": false,
  "seed": 1000,
  "num_workers": 4,
  "batch_size": 16,
- "steps": 80000,
+ "steps": 50000,
  "eval_freq": 2000,
  "log_freq": 100,
  "save_checkpoint": true,
- "save_freq": 80000,
+ "save_freq": 50000,
  "use_policy_training_preset": true,
  "optimizer": {
    "type": "adamw",
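One consequence of the updated values: checkpoints are written every `save_freq` steps, so with `steps` and `save_freq` both at 50000 only the final checkpoint is saved. A quick sanity check (hypothetical helper, assuming saves land on multiples of `save_freq` plus a final save at the last step):

```python
def checkpoint_steps(steps, save_freq):
    """Steps at which a checkpoint would be written, assuming one save
    every save_freq steps plus a final save at the last step."""
    marks = set(range(save_freq, steps + 1, save_freq))
    marks.add(steps)
    return sorted(marks)

checkpoint_steps(50000, 50000)  # -> [50000]: a single, final checkpoint
```

Keeping `save_freq` equal to `steps` minimizes disk usage but leaves no intermediate checkpoints to roll back to; a smaller `save_freq` trades disk space for recoverability.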