pepijn223 committed
Commit ecab36b · verified · 1 Parent(s): 69b3407

Update README.md

Files changed (1):
  1. README.md +46 -49
README.md CHANGED
@@ -1,69 +1,66 @@
- # π₀.₅ - Base

- This is a PyTorch version of the PI0.5 `pi05_base` model, converted from the original JAX/Flax implementation.

- ## Model Details

- - **Architecture**: PI0.5 (Vision-Language-Action model with discrete state input)
- - **Model Type**: PI0.5
- - **Domain**: Base model (general purpose)
- - **Precision**: 32-bit floating point (fp32)
- - **Vision Model**: PaliGemma (gemma_2b)
- - **Action Expert**: gemma_300m

- ## Key Features

- - **Discrete State Input**: Uses discrete language tokens for state representation
- - **Flow Matching**: Uses adaRMSNorm for timestep injection in the action expert
- - **Enhanced Action Modeling**: Improved action prediction via a flow-matching approach

- ## Conversion Details

- This model was converted from JAX to PyTorch using the OpenPI conversion script:

- ```bash
- python examples/convert_jax_model_to_pytorch.py \
-     --checkpoint_dir /pi05_base \
-     --config_name pi05_base \
-     --output_path /pi05_base/pytorch/fp32/ \
-     --precision float32
- ```

- ## Usage

- ```python
- from openpi.models_pytorch.pi0_pytorch import PI0Pytorch
- import torch
-
- # Load the model
- model = PI0Pytorch.from_pretrained("pepijn223/pi05_base_fp32")
-
- # The model expects inputs in the format:
- # - images: torch.Tensor of shape [batch, height, width, channels]
- # - text: tokenized text prompts
- # - proprioceptive_state: robot state information (if applicable)
- ```

- ## Model Architecture

- The model consists of:
- 1. **Vision Encoder**: PaliGemma-based vision processing
- 2. **Language Encoder**: Text prompt understanding
- 3. **Action Expert**: Specialized network for action prediction
- 4. **Integration Layer**: Combines multimodal information for action output

- ## Training Data

- This model was trained on robotics datasets appropriate for its domain:
- - **DROID models**: Trained on diverse robot manipulation data
- - **LIBERO models**: Trained on diverse tabletop manipulation scenarios
- - **Base models**: Trained on general robotics datasets

- ## Limitations

- - Model performance depends on the similarity between deployment and training environments
- - May require domain-specific fine-tuning for optimal performance
- - Action space must match the trained action dimension (32)
+ # π₀.₅ (Pi05) Policy

+ π₀.₅ is a **Vision-Language-Action model with open-world generalization** from Physical Intelligence. The LeRobot implementation is adapted from their open-source [OpenPI](https://github.com/Physical-Intelligence/openpi) repository.

+ ## Model Overview

+ π₀.₅ represents a significant evolution from π₀, developed by [Physical Intelligence](https://www.physicalintelligence.company/blog/pi05) to address a major challenge in robotics: **open-world generalization**. While robots can perform impressive tasks in controlled environments, π₀.₅ is designed to generalize to entirely new environments and situations that were never seen during training.

+ ### The Generalization Challenge

+ As Physical Intelligence explains, the fundamental challenge isn't agility or dexterity but generalization: the ability to correctly perform tasks in new settings with new objects. Consider a robot cleaning different homes: each home has different objects in different places. Generalization must occur at multiple levels:

+ - **Physical Level**: Understanding how to pick up a spoon (by the handle) or a plate (by the edge), even with unseen objects in cluttered environments
+ - **Semantic Level**: Understanding task semantics, such as where clothes and shoes belong (the laundry hamper, not the bed) and which tools are appropriate for cleaning spills
+ - **Environmental Level**: Adapting to "messy" real-world environments like homes, grocery stores, offices, and hospitals

+ ### Co-Training on Heterogeneous Data

+ The breakthrough innovation in π₀.₅ is **co-training on heterogeneous data sources**. The model learns from:

+ 1. **Multimodal Web Data**: Image captioning, visual question answering, object detection
+ 2. **Verbal Instructions**: Humans coaching robots through complex tasks step by step
+ 3. **Subtask Commands**: High-level semantic behavior labels (e.g., "pick up the pillow" for an unmade bed)
+ 4. **Cross-Embodiment Robot Data**: Data from various robot platforms with different capabilities
+ 5. **Multi-Environment Data**: Static robots deployed across many different homes
+ 6. **Mobile Manipulation Data**: ~400 hours of mobile robot demonstrations

+ This diverse training mixture creates a "curriculum" that enables generalization across the physical, visual, and semantic levels simultaneously.
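+ To make the mechanism concrete, here is a minimal PyTorch sketch of weighted mixture sampling, the generic pattern behind this kind of co-training. The placeholder datasets and sampling ratios are illustrative assumptions, not the actual OpenPI data sources or weights.

+ ```python
+ import torch
+ from torch.utils.data import (ConcatDataset, DataLoader, TensorDataset,
+                               WeightedRandomSampler)
+
+ # Placeholder stand-ins for three mixture components (e.g. web data,
+ # cross-embodiment robot data, mobile manipulation data).
+ web = TensorDataset(torch.zeros(1000, 8))
+ robot = TensorDataset(torch.ones(500, 8))
+ mobile = TensorDataset(torch.full((100, 8), 2.0))
+ mixture = ConcatDataset([web, robot, mobile])
+
+ # Illustrative per-source ratios: every example inherits its source's
+ # weight, so batches follow these proportions regardless of dataset size.
+ ratios = [0.5, 0.3, 0.2]
+ weights = torch.cat([
+     torch.full((len(ds),), r / len(ds))
+     for ds, r in zip([web, robot, mobile], ratios)
+ ])
+
+ sampler = WeightedRandomSampler(weights, num_samples=len(mixture), replacement=True)
+ loader = DataLoader(mixture, batch_size=32, sampler=sampler)
+ ```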
 
 
+ ## Training

+ Here's a complete training command for finetuning the base π₀.₅ model on your own dataset:

+ ```bash
+ python src/lerobot/scripts/train.py \
+     --dataset.repo_id=your_dataset \
+     --policy.type=pi05 \
+     --output_dir=./outputs/pi05_training \
+     --job_name=pi05_training \
+     --policy.repo_id=your_repo_id \
+     --policy.pretrained_path=pepijn223/pi05_base \
+     --policy.compile_model=true \
+     --policy.gradient_checkpointing=true \
+     --wandb.enable=true \
+     --policy.dtype=bfloat16 \
+     --steps=3000 \
+     --policy.scheduler_decay_steps=3000 \
+     --policy.device=cuda \
+     --batch_size=32
+ ```
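+ After training, the resulting checkpoint can be loaded for a quick smoke test. Below is a minimal sketch reusing the `PI0Pytorch` loading pattern from the OpenPI PyTorch port (see the conversion section that follows); whether `from_pretrained` is the right entry point for your checkpoint format is an assumption worth verifying:

+ ```python
+ from openpi.models_pytorch.pi0_pytorch import PI0Pytorch
+
+ # "your_repo_id" is the placeholder target repo from the training
+ # command above; a local checkpoint directory should also work.
+ model = PI0Pytorch.from_pretrained("your_repo_id")
+ model.eval()  # inference mode
+
+ # Expected inputs (per the OpenPI port): images [batch, H, W, C],
+ # tokenized text prompts, and robot proprioceptive state.
+ ```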

+ ## Conversion Details

+ This model was converted from JAX to PyTorch using the OpenPI conversion script:

+ ```bash
+ python examples/convert_jax_model_to_pytorch.py \
+     --checkpoint_dir /pi05_base \
+     --config_name pi05_base \
+     --output_path /pi05_base/pytorch/fp32/ \
+     --precision float32
+ ```
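+ A quick way to verify a conversion is to load the exported directory and check the parameter dtype. This is a sanity-check sketch, assuming `from_pretrained` accepts the local output path used above:

+ ```python
+ from openpi.models_pytorch.pi0_pytorch import PI0Pytorch
+
+ # Load from the conversion output path and confirm the weights were
+ # saved at the requested precision.
+ model = PI0Pytorch.from_pretrained("/pi05_base/pytorch/fp32/")
+ print(next(model.parameters()).dtype)  # expected: torch.float32
+ ```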

  ## Citation