Update README.md
README.md
@@ -249,6 +249,13 @@ Pixtral-Large-Instruct-2411 is a 124B multimodal model built on top of Mistral L
 
 For more details about this model please refer to the [Pixtral Large blog post](https://mistral.ai/news/pixtral-large/) and the [Pixtral 12B blog post](https://mistral.ai/news/pixtral-12b/).
 
+
+> [!IMPORTANT]
+> ❗
+> The Transformers implementation is not yet working; please use the vLLM implementation
+> as shown below while the Transformers implementation is being corrected (see [here](https://huggingface.co/mistralai/Pixtral-Large-Instruct-2411/discussions/3#673b8bfe55cb1761c8d50ce2)).
+
+
 ## Key features
 - Frontier-class multimodal performance
 - State-of-the-art on MathVista, DocVQA, VQAv2