Instructions for using physical-intelligence/fast with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries: Transformers
- Notebooks: Google Colab, Kaggle

How to use physical-intelligence/fast with Transformers:

```python
# Load model directly
from transformers import AutoModel

model = AutoModel.from_pretrained("physical-intelligence/fast", dtype="auto")
```
# FAST: Efficient Action Tokenization for Vision-Language-Action Models
This is the official repo for the [FAST action tokenizer](https://www.pi.website/research/fast).
The action tokenizer maps any sequence of robot actions into a sequence of dense, discrete **action tokens** for training autoregressive VLA models.
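To make the idea of mapping continuous action chunks to discrete tokens concrete, here is a toy sketch. It is not the actual FAST implementation (which pairs a discrete cosine transform with byte-pair encoding); it only illustrates the transform-and-quantize step with NumPy, and every function name here is hypothetical.

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II basis: row j is the j-th cosine basis vector over n timesteps.
    i = np.arange(n)
    m = np.cos(np.pi * (2 * i[None, :] + 1) * np.arange(n)[:, None] / (2 * n))
    m *= np.sqrt(2.0 / n)
    m[0] *= np.sqrt(0.5)
    return m

def tokenize(actions, step=0.01):
    # actions: (horizon, action_dim) chunk of continuous robot actions.
    coeffs = dct_matrix(actions.shape[0]) @ actions   # per-dimension frequency coefficients
    return np.round(coeffs / step).astype(np.int64)   # uniform quantization -> discrete tokens

def detokenize(tokens, step=0.01):
    # The DCT matrix is orthonormal, so its inverse is its transpose.
    return dct_matrix(tokens.shape[0]).T @ (tokens * step)

# Smooth 16-step, 2-DoF action chunk: discretize, then reconstruct.
chunk = np.sin(np.linspace(0, np.pi, 16))[:, None] * np.array([[0.5, -0.2]])
tokens = tokenize(chunk)
recon = detokenize(tokens)
```

Because smooth action trajectories concentrate their energy in a few low-frequency coefficients, the quantized representation reconstructs them closely; FAST's byte-pair encoding step then compresses the resulting token stream further.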