Instructions to use aho-tai/PixtralEncoderDecoder-v0 with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use aho-tai/PixtralEncoderDecoder-v0 with Transformers:
```python
# Load model directly. The repo's config.json registers custom classes via
# "auto_map", so the model must be loaded through AutoModel with
# trust_remote_code=True; VisionPixtralEncoderDecoder is not importable
# from transformers itself.
from transformers import AutoModel

model = AutoModel.from_pretrained(
    "aho-tai/PixtralEncoderDecoder-v0",
    dtype="auto",
    trust_remote_code=True,
)
```

- Notebooks
- Google Colab
- Kaggle
Update config.json
config.json (+3 -1):

```diff
@@ -4,7 +4,9 @@
   ],
   "auto_map": {
     "AutoConfig": "configuration.VisionPixtralEncoderDecoderConfig",
-    "AutoModel": "modeling.VisionPixtralEncoderDecoder"
+    "AutoModel": "modeling.VisionPixtralEncoderDecoder",
+    "AutoConfig": "configuration.PixtralVisionModelBatch",
+    "AutoModel": "modeling.PixtralVisionModelBatch"
   },
   "decoder": {
     "_attn_implementation_autoset": true,
```
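Note that the added lines leave `auto_map` with repeated `"AutoConfig"` and `"AutoModel"` keys. A minimal sketch of why that matters, assuming the file is read with a standard JSON parser (Transformers parses `config.json` with Python's `json` module): for a duplicated key, only the last value survives, so the earlier `VisionPixtralEncoderDecoder` mappings are silently discarded.

```python
import json

# Reproduce the auto_map object as it appears after this commit,
# with the duplicated "AutoConfig" / "AutoModel" keys.
auto_map = json.loads("""
{
  "AutoConfig": "configuration.VisionPixtralEncoderDecoderConfig",
  "AutoModel": "modeling.VisionPixtralEncoderDecoder",
  "AutoConfig": "configuration.PixtralVisionModelBatch",
  "AutoModel": "modeling.PixtralVisionModelBatch"
}
""")

# Python's json module keeps only the LAST value for each repeated key,
# so the two original mappings are gone after parsing.
print(auto_map)
# {'AutoConfig': 'configuration.PixtralVisionModelBatch',
#  'AutoModel': 'modeling.PixtralVisionModelBatch'}
```

In other words, if the intent was to expose both class pairs, they would need distinct `auto_map` entry-point names rather than repeated keys.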