Instructions to use allenai/MolmoAct2-FAST-Tokenizer with libraries, inference providers, notebooks, and local apps. Follow these links to get started.

- Libraries
  - Transformers
- Notebooks
  - Google Colab
  - Kaggle

How to use allenai/MolmoAct2-FAST-Tokenizer with Transformers:

```python
# Load model directly
from transformers import AutoModel

model = AutoModel.from_pretrained("allenai/MolmoAct2-FAST-Tokenizer", dtype="auto")
```
Processor configuration (252 Bytes, commit d26997e):

```json
{
  "action_dim": null,
  "auto_map": {
    "AutoProcessor": "processing_action_tokenizer.UniversalActionProcessor"
  },
  "min_token": -55,
  "processor_class": "UniversalActionProcessor",
  "scale": 10,
  "time_horizon": null,
  "vocab_size": 2048
}
```
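The config fields can be read back programmatically. Below is a minimal sketch that parses the configuration shown above and pulls out the values the custom processor is built from; the reading of `min_token` and `scale` as quantization parameters, and the note that loading this processor would need `trust_remote_code=True`, are assumptions inferred from the `auto_map` entry, not documented behavior.

```python
import json

# The processor configuration from the repository, embedded for illustration.
config_text = """
{
  "action_dim": null,
  "auto_map": {
    "AutoProcessor": "processing_action_tokenizer.UniversalActionProcessor"
  },
  "min_token": -55,
  "processor_class": "UniversalActionProcessor",
  "scale": 10,
  "time_horizon": null,
  "vocab_size": 2048
}
"""

config = json.loads(config_text)

# auto_map routes AutoProcessor to the repo's own UniversalActionProcessor class,
# so loading it presumably requires trust_remote_code=True (assumption).
print(config["auto_map"]["AutoProcessor"])  # processing_action_tokenizer.UniversalActionProcessor

# 2048 discrete action tokens; min_token and scale look like
# quantization parameters for mapping continuous actions to tokens (assumption).
print(config["vocab_size"])                 # 2048
print(config["min_token"], config["scale"]) # -55 10

# action_dim and time_horizon are left null here, i.e. not fixed by this config.
print(config["action_dim"], config["time_horizon"])  # None None
```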