# GENESIS -- ANIMA Module

Part of the ANIMA Perception Suite by Robot Flow Labs.
## Architecture

`TorchBCPolicy` -- a behavioral-cloning MLP for 7-DoF robot manipulation.
| Parameter | Value |
|---|---|
| Observation dim | 7 (joint states) |
| Action dim | 7 (7-DoF actions) |
| Hidden layers | [256, 256, 128] |
| Activation | ReLU |
| Dropout | 0.1 |
| Parameters | 101,639 |
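The parameter count in the table can be verified from the layer sizes alone (dropout and ReLU add no parameters). A minimal sketch, assuming a plain fully connected stack `7 -> 256 -> 256 -> 128 -> 7` with biases:

```python
def mlp_param_count(dims):
    """Total weights + biases of a fully connected stack with the given layer dims."""
    return sum(d_in * d_out + d_out for d_in, d_out in zip(dims, dims[1:]))

# 7-dim observation -> hidden [256, 256, 128] -> 7-dim action
total = mlp_param_count([7, 256, 256, 128, 7])
print(total)  # 101639, matching the table
```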
## Training
| Setting | Value |
|---|---|
| Dataset | smol-libero (HuggingFace LeRobot, 13,021 samples) |
| Split | 90/5/5 (train/val/test) |
| Optimizer | AdamW (lr=3e-4, wd=1e-4) |
| Scheduler | Cosine annealing + 5% warmup |
| Precision | bf16 |
| Hardware | NVIDIA L4 (23.7 GB) |
| Epochs | 193 (early stopped, patience=10) |
| Best val_loss | 0.4628 (epoch 183) |
| Test loss | 0.4219 |
| Training time | 21 seconds |
| Seed | 42 |
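The learning-rate schedule from the table (cosine annealing with 5% warmup, base lr 3e-4) can be sketched in a few lines. This is an illustration, not the training code; the exact warmup shape (linear here) is an assumption:

```python
import math

def lr_at(step, total_steps, base_lr=3e-4, warmup_frac=0.05):
    """Linear warmup over the first 5% of steps, then cosine decay to zero.

    base_lr and warmup_frac mirror the training table; the linear warmup
    shape is an assumption.
    """
    warmup_steps = max(1, int(total_steps * warmup_frac))
    if step < warmup_steps:
        return base_lr * (step + 1) / warmup_steps
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return base_lr * 0.5 * (1.0 + math.cos(math.pi * progress))
```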
## Exported Formats

| Format | File | Use Case |
|---|---|---|
| PyTorch (.pth) | `pytorch/genesis_bc_v1.pth` | Training, fine-tuning |
| SafeTensors | `pytorch/genesis_bc_v1.safetensors` | Fast, safe loading |
| ONNX | `onnx/genesis_bc_v1.onnx` | Cross-platform inference |
| TensorRT FP16 | `tensorrt/genesis_bc_v1_fp16.trt` | Edge deployment (Jetson/L4) |
| TensorRT FP32 | `tensorrt/genesis_bc_v1_fp32.trt` | Full-precision inference |
## Usage

```python
import torch

from genesis.torch_policy import TorchBCPolicy

# Load the policy and its checkpoint metadata
model, ckpt = TorchBCPolicy.from_checkpoint("pytorch/genesis_bc_v1.pth")

# Predict an action from a joint-state observation
obs = torch.randn(1, 7)   # 7-DoF joint state
action = model(obs)       # 7-DoF action output
```
## Additional Files

- `checkpoints/best.pth` -- Full training checkpoint (model + optimizer + scheduler, for resume)
- `configs/training.yaml` -- Complete training configuration (reproducibility)
- `logs/training_history.json` -- Per-epoch loss curves
- `logs/norm_stats.json` -- Normalization statistics for inference
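At inference time, raw joint states should be standardized with the statistics in `logs/norm_stats.json` before being passed to the policy. A minimal sketch, assuming a per-dimension mean/std schema (the actual key names in the JSON file may differ):

```python
# In practice: stats = json.load(open("logs/norm_stats.json"))
# Hypothetical schema -- the real keys may differ:
stats = {"obs_mean": [0.0] * 7, "obs_std": [1.0] * 7}

def normalize_obs(obs, stats, eps=1e-8):
    """Standardize a raw 7-DoF joint state before feeding it to the policy."""
    return [(x - m) / (s + eps)
            for x, m, s in zip(obs, stats["obs_mean"], stats["obs_std"])]
```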
## License
Apache 2.0 -- Robot Flow Labs / AIFLOW LABS LIMITED