Robotics · LeRobot · Safetensors · act

Basr88 committed · verified · commit 9f49c5e · 1 parent: 9a6f8b6

Upload policy weights, train config and readme

Files changed (3):
  1. README.md +62 -0
  2. model.safetensors +1 -1
  3. train_config.json +3 -2
README.md ADDED
@@ -0,0 +1,62 @@
+ ---
+ datasets: Basr88/Yellow-HighRez-Dataset
+ library_name: lerobot
+ license: apache-2.0
+ model_name: act
+ pipeline_tag: robotics
+ tags:
+ - lerobot
+ - act
+ - robotics
+ ---
+
+ # Model Card for act
+
+ <!-- Provide a quick summary of what the model is/does. -->
+
+ [Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high success rates.
+
+ This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
+ See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).
+
+ ---
+
+ ## How to Get Started with the Model
+
+ For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
+ Below is the short version of how to train and run inference/eval:
+
+ ### Train from scratch
+
+ ```bash
+ lerobot-train \
+   --dataset.repo_id=${HF_USER}/<dataset> \
+   --policy.type=act \
+   --output_dir=outputs/train/<desired_policy_repo_id> \
+   --job_name=lerobot_training \
+   --policy.device=cuda \
+   --policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
+   --wandb.enable=true
+ ```
+
+ _Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._
+
+ ### Evaluate the policy/run inference
+
+ ```bash
+ lerobot-record \
+   --robot.type=so100_follower \
+   --dataset.repo_id=<hf_user>/eval_<dataset> \
+   --policy.path=<hf_user>/<desired_policy_repo_id> \
+   --episodes=10
+ ```
+
+ Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or Hub checkpoint.
+
+ ---
+
+ ## Model Details
+
+ - **License:** apache-2.0
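The action-chunking idea the README references can be sketched in a few lines: each step the policy predicts a chunk of future actions, and overlapping predictions for the same timestep are blended with exponential weights (temporal ensembling). A minimal NumPy sketch, assuming chunks keyed by the step they were predicted at; the function name, chunk layout, and decay value `m` are illustrative, not LeRobot's API:

```python
import numpy as np

def temporal_ensemble(chunks, t, m=0.1):
    """Blend every chunk prediction that covers timestep t.

    chunks: dict mapping prediction step s -> array (chunk_len, action_dim);
            a chunk predicted at step s covers steps s .. s + chunk_len - 1.
    m:      decay rate for the exponential weights w_i = exp(-m * i),
            with i = 0 for the oldest prediction (the convention the
            ACT paper describes for temporal ensembling).
    """
    # Collect covering predictions, ordered oldest-first by step s.
    covering = sorted(
        (s, chunk[t - s]) for s, chunk in chunks.items()
        if 0 <= t - s < len(chunk)
    )
    preds = np.stack([a for _, a in covering])
    w = np.exp(-m * np.arange(len(covering)))
    w /= w.sum()
    return (preds * w[:, None]).sum(axis=0)

# Two overlapping length-3 chunks for a 1-D action:
chunks = {
    0: np.array([[1.0], [2.0], [3.0]]),  # predicted at step 0
    1: np.array([[2.5], [3.5], [4.5]]),  # predicted at step 1
}
action = temporal_ensemble(chunks, t=1)  # blends 2.0 (old) and 2.5 (new)
```

With `m = 0` this reduces to a plain average; larger `m` biases the result toward the oldest prediction, which smooths the executed trajectory.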
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:45d519e1fccf5a5609efcc97da459a977307d41ef6fa5bf9b059d8b0c9e29fce
+ oid sha256:5423ad7ceb2b7e738854f3d509961939574e3eee7eebe025e8fa043c91c13812
  size 206699736
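A downloaded `model.safetensors` can be verified against the new pointer, since a git-lfs `oid sha256:<hex>` is simply the SHA-256 digest of the full file contents. A minimal sketch (the helper name is mine):

```python
import hashlib

def lfs_sha256(path, chunk_size=1 << 20):
    """SHA-256 of a file's contents, streamed in 1 MiB chunks.

    The hex digest should match the `oid sha256:` value in the
    git-lfs pointer for the same file.
    """
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk_size):
            h.update(block)
    return h.hexdigest()
```

Streaming keeps memory flat regardless of file size, which matters for a ~200 MB weights file.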
train_config.json CHANGED
@@ -150,15 +150,16 @@
   "optimizer_weight_decay": 0.0001,
   "optimizer_lr_backbone": 1e-05
  },
- "output_dir": "outputs/ACT_trainingRez",
+ "output_dir": "/content/drive/MyDrive/highrez1",
  "job_name": "ACT_training",
  "resume": false,
  "seed": 1000,
  "num_workers": 4,
  "batch_size": 16,
- "steps": 100000,
+ "steps": 50000,
  "eval_freq": 20000,
  "log_freq": 200,
+ "tolerance_s": 0.0001,
  "save_checkpoint": true,
  "save_freq": 20000,
  "use_policy_training_preset": true,
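The `train_config.json` change boils down to overriding three keys. A minimal sketch of applying such overrides to a loaded config dict; the helper and the base dict shown are hypothetical, but the override values are the ones from this diff:

```python
import json

# The three keys this commit changes or adds.
overrides = {
    "output_dir": "/content/drive/MyDrive/highrez1",
    "steps": 50000,
    "tolerance_s": 0.0001,
}

def patch_config(config: dict, overrides: dict) -> dict:
    """Return a copy of the config with the override keys applied,
    leaving the original dict untouched."""
    patched = dict(config)
    patched.update(overrides)
    return patched

# Stand-in for json.load(open("train_config.json")):
base = {"job_name": "ACT_training", "steps": 100000,
        "output_dir": "outputs/ACT_trainingRez"}
patched = patch_config(base, overrides)
print(json.dumps(patched, indent=2))
```

Copy-then-update keeps the original config available for comparison, mirroring how the diff shows both the old and new values.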