Devstral-Small-2-24B-Instruct-2512-GGUF
This repository provides a GGUF conversion of
mistralai/Devstral-Small-2-24B-Instruct-2512.
The model is intended for local inference using the llama.cpp ecosystem and compatible tools.
Model Details
- Base model: mistralai/Devstral-Small-2-24B-Instruct-2512
- Parameters: 24B
- Format: GGUF (F16)
- Task: Text Generation
- Fine-tuning: None (direct conversion)
Relationship to the base model
This model is a format conversion only.
No additional training, fine-tuning, or alignment steps were applied.
All weights originate from the original model published by Mistral AI.
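Because this is a format conversion only, a quick sanity check after download is to inspect the file header rather than the weights. The sketch below is a minimal, hypothetical helper (stdlib only, not part of llama.cpp) that parses the fixed GGUF header: the ASCII magic `GGUF`, followed by a little-endian uint32 version, uint64 tensor count, and uint64 metadata key/value count.

```python
import struct

def read_gguf_header(data: bytes) -> dict:
    """Parse the fixed-size GGUF header from the first bytes of a file.

    Layout: 4-byte magic b"GGUF", then little-endian
    uint32 version, uint64 tensor count, uint64 metadata KV count.
    """
    magic = data[:4]
    if magic != b"GGUF":
        raise ValueError(f"not a GGUF file (magic={magic!r})")
    version, n_tensors, n_kv = struct.unpack_from("<IQQ", data, 4)
    return {"version": version, "tensors": n_tensors, "metadata_kv": n_kv}

# Synthetic header for illustration (version 3, 2 tensors, 5 KV pairs);
# in practice you would pass the first 24 bytes of the downloaded .gguf file.
header = b"GGUF" + struct.pack("<IQQ", 3, 2, 5)
print(read_gguf_header(header))
```

In real use, read the first 24 bytes of the `.gguf` file (`open(path, "rb").read(24)`) and pass them in; a wrong magic usually indicates a truncated or corrupted download.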
Usage (llama.cpp)
./llama-cli \
  -m Devstral-Small-2-24B-Instruct-2512.gguf \
  -p "Your prompt here"
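For sizing hardware, an F16 GGUF needs roughly 2 bytes per parameter for the weights alone (the KV cache and activations add more on top). A back-of-the-envelope sketch, treating the advertised 24B parameter count as exact:

```python
def f16_footprint_gib(n_params: float) -> float:
    """Approximate weight memory for F16 storage (2 bytes/parameter), in GiB."""
    return n_params * 2 / 1024**3

# Weights-only estimate for a 24B-parameter model in F16:
print(round(f16_footprint_gib(24e9), 1))  # → 44.7
```

Quantized variants (e.g. produced with llama.cpp's `llama-quantize`) shrink this substantially, at some cost in output quality.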
Model tree
- Upstream base model: mistralai/Mistral-Small-3.1-24B-Base-2503