# Devstral-Small-2-24B TextOnly FP8 (Training)
Training-compatible variant of levara/Devstral-Small-2-24B-TextOnly-FP8 with Mistral/transformers-convention FP8 scale names.
Weight values are byte-for-byte identical to the serving checkpoint. Only the safetensors key names differ:
| This repo (training) | Serving repo (vLLM) |
|---|---|
| `activation_scale` | `input_scale` |
| `weight_scale_inv` | `weight_scale` |
## Why two repos?
vLLM's TransformersForCausalLM backend registers FP8 parameters as input_scale/weight_scale and errors on other names. Transformers 5 and Unsloth expect activation_scale/weight_scale_inv. Neither tolerates the other's names.
Using this repo for LoRA training ensures the adapter trains against the exact FP8 weights used at serving time, with no dequantize/re-quantize mismatch.
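Since the weights are identical, moving between the two conventions is a pure key-name transformation. A minimal sketch of the suffix rename (the `to_serving_key` helper and suffix map here are illustrative, not shipped with either repo):

```python
# Suffix mapping from the training-convention scale names (this repo)
# to the vLLM serving-convention names. Weight tensors are untouched.
TRAIN_TO_SERVE = {
    "activation_scale": "input_scale",
    "weight_scale_inv": "weight_scale",
}

def to_serving_key(key: str) -> str:
    """Rename one safetensors key from the training convention to the
    serving convention; keys without an FP8 scale suffix pass through."""
    for train_suffix, serve_suffix in TRAIN_TO_SERVE.items():
        if key.endswith(train_suffix):
            return key[: -len(train_suffix)] + serve_suffix
    return key

# Example with a typical per-layer scale key:
print(to_serving_key("model.layers.0.self_attn.q_proj.weight_scale_inv"))
# -> model.layers.0.self_attn.q_proj.weight_scale
```

The reverse direction is the same idea with the mapping inverted.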
## Usage with Unsloth
```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    "levara/Devstral-Small-2-24B-TextOnly-FP8-Training",
    max_seq_length=8192,
    load_in_4bit=False,
)
model = FastLanguageModel.get_peft_model(model, r=16, target_modules=[
    "q_proj", "k_proj", "v_proj", "o_proj",
    "gate_proj", "up_proj", "down_proj",
])
```
## Serving

For vLLM serving, use the companion checkpoint `levara/Devstral-Small-2-24B-TextOnly-FP8`:
```bash
vllm serve levara/Devstral-Small-2-24B-TextOnly-FP8 \
  --tensor-parallel-size 2 \
  --max-model-len 32768 \
  --enable-lora \
  --lora-modules my-adapter=path/to/adapter
```
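Once the server is up, the adapter is addressable by the name given to `--lora-modules`. A request sketch against vLLM's OpenAI-compatible completions endpoint (host, port, and prompt are placeholders):

```shell
curl http://localhost:8000/v1/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "my-adapter",
        "prompt": "def fibonacci(n):",
        "max_tokens": 64
      }'
```

Passing the base model name instead of `my-adapter` serves the unadapted checkpoint from the same process.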
## Model Details
| Property | Value |
|---|---|
| Architecture | Ministral3ForCausalLM |
| Parameters | 23.57B |
| Quantization | FP8 W8A8 static (float8_e4m3fn) |
| Layers | 40 |
| Hidden size | 5120 |
| Context length | 393K tokens (YaRN RoPE) |
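Static W8A8 FP8 means each tensor is stored as 8-bit floats plus one precomputed scale. A toy sketch of the scale arithmetic in pure Python, simulating only the float8_e4m3fn dynamic range of ±448 (not its reduced mantissa precision):

```python
E4M3_MAX = 448.0  # largest finite value representable in float8_e4m3fn

def static_scale(weights):
    """Per-tensor static scale: map the tensor's max magnitude onto the FP8 range."""
    return max(abs(w) for w in weights) / E4M3_MAX

def quantize(weights, scale):
    """Divide by the scale, then clamp to the FP8 range (mantissa rounding ignored here)."""
    return [max(-E4M3_MAX, min(E4M3_MAX, w / scale)) for w in weights]

def dequantize(q, scale):
    """Multiply back by the scale to recover the original magnitudes."""
    return [x * scale for x in q]

w = [0.5, -2.0, 1.25]
s = static_scale(w)
roundtrip = dequantize(quantize(w, s), s)
assert all(abs(a - b) < 1e-9 for a, b in zip(roundtrip, w))
```

In the real checkpoint the activation scales (`activation_scale`/`input_scale`) are likewise fixed at quantization time, which is what "static" refers to in the table above.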
## Base model

mistralai/Mistral-Small-3.1-24B-Base-2503