
pi0_lora_cola

Pi0 weights LoRA-fine-tuned on the Cola dataset (Franka arm, joint-space actions). This is a deploy-only checkpoint: it contains model parameters and assets only, with no training state.

Model

  • Base: OpenPI Pi0 (LoRA: gemma_2b_lora + gemma_300m_lora)
  • Data: nokaikai/cola_lerobot_v2 (7 joints + 1 gripper, absolute joint targets)
  • Use: load for inference; inputs are a camera image, the current 8-dim state, and a text prompt; output is an 8-dim action (7 joints + 1 gripper)
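
As a concrete illustration of the interface above, here is a minimal sketch of the expected input/output shapes. The dict keys and the 224×224 image size are illustrative assumptions, not openpi's exact observation schema:

```python
import numpy as np

# Illustrative observation for one inference step (key names and image size
# are assumptions, not openpi's exact schema): an RGB camera frame, the
# current robot state (7 joint angles + 1 gripper), and a language prompt.
observation = {
    "image": np.zeros((224, 224, 3), dtype=np.uint8),  # camera frame
    "state": np.zeros(8, dtype=np.float32),            # 7 joints + 1 gripper
    "prompt": "Pick up the left coca-cola",
}

# The policy returns absolute joint targets of the same dimensionality;
# a dummy action stands in here to show the expected shape.
action = np.zeros(8, dtype=np.float32)                 # 7 joints + 1 gripper

assert observation["state"].shape == action.shape == (8,)
```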

Usage (openpi)

# Install openpi and download this repo, or use repo_id directly
pip install huggingface_hub
huggingface-cli download nokaikai/pi0_lora_cola --local-dir ./pi0_lora_cola

# Point checkpoint to the step directory (e.g. 2999)
python pi0_deploy.py \
  --checkpoint_dir ./pi0_lora_cola/cola_experiment/2999 \
  --config_name pi0_cola_low_mem \
  --prompt "Pick up the left coca-cola"

Loading from code (after clone or snapshot_download):

from huggingface_hub import snapshot_download
from openpi.policies import policy_config
from openpi.training import config as openpi_config

path = snapshot_download(repo_id="nokaikai/pi0_lora_cola")
checkpoint_dir = f"{path}/cola_experiment/2999"
config = openpi_config.get_config("pi0_cola_low_mem")  # config must be registered in your openpi install
policy = policy_config.create_trained_policy(config, checkpoint_dir)

Layout

cola_experiment/
  2999/
    params/          # Model parameters (JAX/Orbax)
    assets/          # norm_stats, etc.
    _CHECKPOINT_METADATA
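
To sanity-check that a downloaded snapshot matches this layout, a small helper like the following (hypothetical, not part of openpi) can verify the expected entries are present:

```python
from pathlib import Path

def is_deploy_checkpoint(step_dir: str) -> bool:
    """Hypothetical helper: returns True if `step_dir` looks like a
    deploy-only openpi checkpoint, i.e. params/ and assets/ exist and
    there is no train_state/ directory."""
    step = Path(step_dir)
    return (
        (step / "params").is_dir()
        and (step / "assets").is_dir()
        and not (step / "train_state").exists()
    )
```

For example, `is_deploy_checkpoint("./pi0_lora_cola/cola_experiment/2999")` should return True after the `huggingface-cli download` step above.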

License & Credits

  • Weights trained with OpenPI and Cola data; for research/personal use.
  • Cola dataset: nokaikai/cola_lerobot_v2 on Hugging Face.