---
datasets: arclabmit/xarm7_beavrsim_shellgame_dataset
library_name: lerobot
license: apache-2.0
pipeline_tag: robotics
tags:
- robotics
- lerobot
- act
model-index:
- name: xarm7_act_beavrsim_shellgame_model
  results:
  - task:
      type: robotics
      name: Robotic Manipulation
    dataset:
      name: beavr_sim
      type: simulation
    metrics:
    - type: success_rate
      value: 10.3
      name: Success Rate
    - type: reward
      value: -0.05
      name: Avg Reward
---

# Model Card for act

<!-- Provide a quick summary of what the model is/does. -->

[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high success rates.
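
In LeRobot, the chunking behaviour is controlled by two ACT hyperparameters: `chunk_size` (how many future actions are predicted per forward pass) and `n_action_steps` (how many of those actions are executed before re-predicting). The sketch below is a minimal illustration, assuming the import path of recent LeRobot releases (older versions use `lerobot.common.policies.act.configuration_act`) and the default field names of `ACTConfig`.

```python
# Minimal sketch of ACT's chunking hyperparameters (assumed import path; adjust
# to your installed LeRobot version).
from lerobot.policies.act.configuration_act import ACTConfig

cfg = ACTConfig()
print("chunk_size:", cfg.chunk_size)          # future actions predicted per forward pass
print("n_action_steps:", cfg.n_action_steps)  # predicted actions executed before re-predicting
```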

This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).

---

## How to Get Started with the Model

For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is a short overview of how to train and run inference/evaluation:

### Train from scratch

```bash
lerobot-train \
  --dataset.repo_id=${HF_USER}/<dataset> \
  --policy.type=act \
  --output_dir=outputs/train/<desired_policy_repo_id> \
  --job_name=lerobot_training \
  --policy.device=cuda \
  --policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
  --wandb.enable=true
```

_Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._

### Evaluate the policy / run inference

```bash
lerobot-record \
  --robot.type=so100_follower \
  --dataset.repo_id=<hf_user>/eval_<dataset> \
  --policy.path=<hf_user>/<desired_policy_repo_id> \
  --episodes=10
```

Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or Hub checkpoint.
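
If you prefer to run the policy directly from Python rather than through `lerobot-record`, the sketch below is a minimal starting point. It is not the exact pipeline used for the results reported on this card; it assumes recent LeRobot import paths, and the observation keys and tensor shapes (`observation.state`, `observation.images.top`, 7-dim state, 480x640 RGB) are placeholders to be replaced with the features of the training dataset.

```python
# Minimal inference sketch (assumed import path and placeholder observation
# keys/shapes -- adapt both to your LeRobot version and dataset).
import torch
from lerobot.policies.act.modeling_act import ACTPolicy

device = "cuda" if torch.cuda.is_available() else "cpu"

policy = ACTPolicy.from_pretrained("<hf_user>/<desired_policy_repo_id>")
policy.to(device)
policy.eval()
policy.reset()  # clear the internal action queue before a new episode

# In a real rollout, these tensors come from the robot or simulator at each step.
observation = {
    "observation.state": torch.zeros(1, 7, device=device),                 # placeholder proprioceptive state
    "observation.images.top": torch.zeros(1, 3, 480, 640, device=device),  # placeholder RGB frame in [0, 1]
}

with torch.no_grad():
    action = policy.select_action(observation)  # next action from the current predicted chunk

print(action.shape)  # (1, action_dim)
```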

---

## Model Details

- **License:** apache-2.0

<!-- EVAL_RESULTS_START -->
## Evaluation Results

*Evaluated on 2026-02-05 09:45*

| Metric | Value |
| :--- | :--- |
| **Success Rate** | 10.3% |
| **Average Reward** | -0.050 |
| **Max Reward (Avg)** | 1.030 |
| **Episodes** | 1000 |
| **Eval Speed** | 2.47 s/ep |
| **Seed** | 26 |

> [!TIP]
> Detailed per-episode results can be found in [eval/eval_info.json](./eval/eval_info.json).

<!-- EVAL_RESULTS_END -->