# GR00T-N1.6-3B — LIBERO All-4 Finetuned

Fine-tuned checkpoint of nvidia/GR00T-N1.6-3B on all four LIBERO task suites (Spatial, Object, Goal, LIBERO-10) with the Franka Panda embodiment.

## Evaluation Results

Evaluated with 20 episodes per task across all 4 LIBERO suites (10 tasks × 20 episodes = 200 episodes per suite).

| Suite | Success Rate | Episodes |
|---|---|---|
| LIBERO-Spatial | 96.0% | 192 / 200 |
| LIBERO-Object | 100.0% | 200 / 200 |
| LIBERO-Goal | 98.0% | 196 / 200 |
| LIBERO-10 | 97.5% | 195 / 200 |
| **Average** | **97.9%** | **783 / 800** |
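The average row pools episode counts across all 800 episodes rather than averaging the four suite percentages (with equal episode counts per suite, the two agree). The arithmetic can be checked directly:

```python
# Quick check of the table: per-suite rates and the pooled average.
suites = {
    "LIBERO-Spatial": (192, 200),
    "LIBERO-Object":  (200, 200),
    "LIBERO-Goal":    (196, 200),
    "LIBERO-10":      (195, 200),
}

for name, (successes, episodes) in suites.items():
    print(f"{name}: {100 * successes / episodes:.1f}%")

pooled_succ = sum(s for s, _ in suites.values())
pooled_total = sum(t for _, t in suites.values())
print(f"Average: {pooled_succ}/{pooled_total} = {100 * pooled_succ / pooled_total:.3f}%")
# 783/800 = 97.875%, reported as 97.9% in the table
```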

## Model Details

| Parameter | Value |
|---|---|
| Architecture | GR00T-N1.6 (Gr00tN1d6) |
| Vision Backbone | nvidia/Eagle-Block2A-2B-v2 |
| Backbone Embedding Dim | 2048 |
| Hidden Size | 1024 |
| Diffusion Layers | 32 |
| Diffusion Attention Heads | 32 |
| Action Horizon | 50 |
| Inference Timesteps | 4 |
| Max Action Dim | 128 |
| Model Dtype | bfloat16 |
| Embodiment | libero_panda (ID: 2) |
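An action horizon of 50 means each policy call predicts a chunk of 50 consecutive actions; a controller typically executes part of the chunk before re-querying the model. A minimal receding-horizon sketch of that loop (the `predict_chunk` stub, the 7-dim action, and the replan interval are illustrative assumptions, not the GR00T API):

```python
ACTION_HORIZON = 50  # actions predicted per inference call (see table above)
REPLAN_EVERY = 10    # illustrative: execute 10 actions, then re-plan

def predict_chunk(obs):
    """Stand-in for a policy call; returns ACTION_HORIZON 7-dim actions."""
    return [[0.0] * 7 for _ in range(ACTION_HORIZON)]

def run_episode(obs, steps):
    """Count policy calls needed to execute `steps` environment steps."""
    calls, executed = 0, 0
    while executed < steps:
        chunk = predict_chunk(obs)
        calls += 1
        executed += min(REPLAN_EVERY, steps - executed)
    return calls

print(run_episode(obs=None, steps=100))  # 10 calls for a 100-step episode
```

Executing fewer steps than the full horizon trades extra inference calls for fresher observations at each replan.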

## Training Details

| Hyperparameter | Value |
|---|---|
| Base Checkpoint | nvidia/GR00T-N1.6-3B |
| Training Steps | 40,000 |
| Global Batch Size | 16 |
| Learning Rate | 1e-4 |
| LR Scheduler | Cosine |
| Warmup Ratio | 5% |
| Weight Decay | 1e-5 |
| Tuned Modules | Diffusion model, projector, top 4 LLM layers, VLLN |
| Frozen Modules | Visual encoder, remaining LLM layers |
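The tuned/frozen split above can be expressed as a name-based predicate over parameters (in PyTorch one would set `requires_grad` from it). The module paths and layer count below are hypothetical stand-ins, not the actual GR00T-N1.6 parameter names:

```python
NUM_LLM_LAYERS = 12  # illustrative layer count; only the top 4 are tuned

def is_trainable(name: str) -> bool:
    """Decide whether a parameter (by name) receives gradients."""
    if name.startswith(("diffusion.", "projector.", "vlln.")):
        return True  # diffusion model, projector, VLLN: tuned
    if name.startswith("llm.layers."):
        layer = int(name.split(".")[2])
        return layer >= NUM_LLM_LAYERS - 4  # top 4 LLM layers: tuned
    return False  # visual encoder and lower LLM layers: frozen

params = (
    ["vision.encoder.weight", "projector.weight", "diffusion.block0.weight"]
    + [f"llm.layers.{i}.weight" for i in range(NUM_LLM_LAYERS)]
)
trainable = [p for p in params if is_trainable(p)]
print(trainable)
```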

## Usage

```python
from gr00t.model.policy import Gr00tPolicy

policy = Gr00tPolicy(
    model_path="0xAnkitSingh/GR00T-N1.6-3B_LIBERO",
    embodiment_tag="LIBERO_PANDA",
    use_sim_policy_wrapper=True,
)
```

Or serve via the inference server:

```bash
uv run python gr00t/eval/run_gr00t_server.py \
    --model-path 0xAnkitSingh/GR00T-N1.6-3B_LIBERO \
    --embodiment-tag LIBERO_PANDA \
    --use-sim-policy-wrapper \
    --port 5556
```