---
language:
- en
- fr
- de
- es
- it
- pt
- zh
- ja
- ru
- ko
base_model:
- mistralai/Mistral-Small-24B-Instruct-2501
pipeline_tag: text-generation
tags:
- mistral
- mistral-small
- fp8
- vllm
- conversational
- text-generation-inference
- compressed-tensors
license: apache-2.0
license_name: apache-2.0
name: RedHatAI/Mistral-Small-24B-Instruct-2501-FP8-dynamic
description: FP8-Quantized variant of Mistral-Small-24B-Instruct-2501.
readme: https://huggingface.co/RedHatAI/Mistral-Small-24B-Instruct-2501-FP8-dynamic/main/README.md
tasks:
- text-to-text
provider: Red Hat
license_link: https://www.apache.org/licenses/LICENSE-2.0
validated_on:
- RHOAI 2.20
- RHAIIS 3.0
- RHELAI 1.5
---
<h1 style="display: flex; align-items: center; gap: 10px; margin: 0;">
Mistral-Small-24B-Instruct-2501-FP8-dynamic
<img src="https://www.redhat.com/rhdc/managed-files/Catalog-Validated_model_0.png" alt="Model Icon" width="40" style="margin: 0; padding: 0;" />
</h1>
<a href="https://www.redhat.com/en/products/ai/validated-models" target="_blank" style="margin: 0; padding: 0;">
<img src="https://www.redhat.com/rhdc/managed-files/Validated_badge-Dark.png" alt="Validated Badge" width="250" style="margin: 0; padding: 0;" />
</a>
## Model Overview
- **Model Architecture:** Mistral-Small-24B-Instruct-2501
- **Input:** Text
- **Output:** Text
- **Model Optimizations:**
- **Weight quantization:** FP8
- **Activation quantization:** FP8
- **Release Date:** 3/1/2025
- **Version:** 1.0
- **Validated on:** RHOAI 2.20, RHAIIS 3.0, RHELAI 1.5
- **Model Developers:** Neural Magic
Quantized version of [Mistral-Small-24B-Instruct-2501](https://huggingface.co/mistralai/Mistral-Small-24B-Instruct-2501).
It achieves an average score of 78.88 on the [OpenLLM](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard) benchmark (version 1), whereas the unquantized model achieves 79.45.
### Model Optimizations
This model was obtained by quantizing the weights and activations to FP8 data type, ready for inference with vLLM.
This optimization reduces the number of bits per parameter from 16 to 8, cutting disk size and GPU memory requirements by approximately 50%. Only the weights and activations of the linear operators within transformer blocks are quantized.
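As a rough, back-of-the-envelope illustration of that saving (a sketch only; the 24B parameter count is approximate, and real GPU usage also includes the KV cache, activations, and layers left unquantized):
```python
# Approximate weight storage for a ~24B-parameter model (illustrative numbers only).
num_params = 24e9

bf16_gb = num_params * 2 / 1e9  # 2 bytes per parameter in BF16 -> ~48 GB
fp8_gb = num_params * 1 / 1e9   # 1 byte per parameter in FP8   -> ~24 GB

print(f"BF16 weights: ~{bf16_gb:.0f} GB, FP8 weights: ~{fp8_gb:.0f} GB")
```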
## Deployment
### Use with vLLM
1. Initialize vLLM server:
```bash
vllm serve RedHatAI/Mistral-Small-24B-Instruct-2501-FP8-dynamic --tensor_parallel_size 1 --tokenizer_mode mistral
```
2. Send requests to the server:
```python
from openai import OpenAI
# Modify OpenAI's API key and API base to use vLLM's API server.
openai_api_key = "EMPTY"
openai_api_base = "http://<your-server-host>:8000/v1"
client = OpenAI(
api_key=openai_api_key,
base_url=openai_api_base,
)
model = "RedHatAI/Mistral-Small-24B-Instruct-2501-FP8-dynamic"
messages = [
{"role": "user", "content": "Explain quantum mechanics clearly and concisely."},
]
outputs = client.chat.completions.create(
model=model,
messages=messages,
)
generated_text = outputs.choices[0].message.content
print(generated_text)
```
<details>
<summary>Deploy on <strong>Red Hat AI Inference Server</strong></summary>
```bash
podman run --rm -it --device nvidia.com/gpu=all -p 8000:8000 \
--ipc=host \
--env "HUGGING_FACE_HUB_TOKEN=$HF_TOKEN" \
--env "HF_HUB_OFFLINE=0" -v ~/.cache/vllm:/home/vllm/.cache \
--name=vllm \
registry.access.redhat.com/rhaiis/rh-vllm-cuda \
vllm serve \
--tensor-parallel-size 8 \
--max-model-len 32768 \
--enforce-eager --model RedHatAI/Mistral-Small-24B-Instruct-2501-FP8-dynamic
```
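Once the container reports that the server is ready, a quick sanity check against the OpenAI-compatible API (assuming the default mapped port 8000 on the local host) is to list the served models:
```bash
# The response should include the served model ID
curl http://localhost:8000/v1/models
```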
See [Red Hat AI Inference Server documentation](https://docs.redhat.com/en/documentation/red_hat_ai_inference_server/) for more details.
</details>
<details>
<summary>Deploy on <strong>Red Hat Enterprise Linux AI</strong></summary>
```bash
# Download model from Red Hat Registry via docker
# Note: This downloads the model to ~/.cache/instructlab/models unless --model-dir is specified.
ilab model download --repository docker://registry.redhat.io/rhelai1/mistral-small-24b-instruct-2501-fp8-dynamic:1.5
```
```bash
# Serve model via ilab
ilab model serve --model-path ~/.cache/instructlab/models/mistral-small-24b-instruct-2501-fp8-dynamic --gpu 1 -- --trust-remote-code
# Chat with model
ilab model chat --model ~/.cache/instructlab/models/mistral-small-24b-instruct-2501-fp8-dynamic
```
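The served endpoint can also be queried directly instead of using the interactive chat (a sketch assuming the InstructLab default of `127.0.0.1:8000`; check your serve configuration for the actual host and port):
```bash
# List the models exposed by `ilab model serve` on its OpenAI-compatible API
curl http://127.0.0.1:8000/v1/models
```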
See [Red Hat Enterprise Linux AI documentation](https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_ai/1.4) for more details.
</details>
<details>
<summary>Deploy on <strong>Red Hat OpenShift AI</strong></summary>
```yaml
# Setting up vllm server with ServingRuntime
# Save as: vllm-servingruntime.yaml
apiVersion: serving.kserve.io/v1alpha1
kind: ServingRuntime
metadata:
name: vllm-cuda-runtime # OPTIONAL CHANGE: set a unique name
annotations:
openshift.io/display-name: vLLM NVIDIA GPU ServingRuntime for KServe
opendatahub.io/recommended-accelerators: '["nvidia.com/gpu"]'
labels:
opendatahub.io/dashboard: 'true'
spec:
annotations:
prometheus.io/port: '8080'
prometheus.io/path: '/metrics'
multiModel: false
supportedModelFormats:
- autoSelect: true
name: vLLM
containers:
- name: kserve-container
image: quay.io/modh/vllm:rhoai-2.20-cuda # CHANGE if needed. If AMD: quay.io/modh/vllm:rhoai-2.20-rocm
command:
- python
- -m
- vllm.entrypoints.openai.api_server
args:
- "--port=8080"
- "--model=/mnt/models"
- "--served-model-name={{.Name}}"
env:
- name: HF_HOME
value: /tmp/hf_home
ports:
- containerPort: 8080
protocol: TCP
```
```yaml
# Attach model to vllm server. This is an NVIDIA template
# Save as: inferenceservice.yaml
apiVersion: serving.kserve.io/v1beta1
kind: InferenceService
metadata:
annotations:
openshift.io/display-name: mistral-small-24b-instruct-2501-fp8-dynamic # OPTIONAL CHANGE
serving.kserve.io/deploymentMode: RawDeployment
name: mistral-small-24b-instruct-2501-fp8-dynamic # specify model name. This value will be used to invoke the model in the payload
labels:
opendatahub.io/dashboard: 'true'
spec:
predictor:
maxReplicas: 1
minReplicas: 1
model:
args:
- "--tokenizer-mode=mistral"
- "--config-format=mistral"
- "--load-format=mistral"
- "--tool-call-parser=mistral"
- "--enable-auto-tool-choice"
- "--limit-mm-per-prompt=image=10"
- "--max-model-len=16384"
- "--uvicorn-log-level=debug"
- "--trust-remote-code"
modelFormat:
name: vLLM
name: ''
resources:
limits:
cpu: '2' # this is model specific
memory: 8Gi # this is model specific
nvidia.com/gpu: '1' # this is accelerator specific
requests: # same comment for this block
cpu: '1'
memory: 4Gi
nvidia.com/gpu: '1'
runtime: vllm-cuda-runtime # must match the ServingRuntime name above
storageUri: oci://registry.redhat.io/rhelai1/modelcar-mistral-small-24b-instruct-2501-fp8-dynamic:1.5
tolerations:
- effect: NoSchedule
key: nvidia.com/gpu
operator: Exists
```
```bash
# make sure first to be in the project where you want to deploy the model
# oc project <project-name>
# apply both resources to run model
# Apply the ServingRuntime
oc apply -f vllm-servingruntime.yaml
# Apply the InferenceService
oc apply -f inferenceservice.yaml
```
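Before sending requests, it can help to confirm the InferenceService has come up and its pod is ready (standard `oc` status checks; the names below match the manifests above):
```bash
# Check that the InferenceService reports READY and shows its URL
oc get inferenceservice mistral-small-24b-instruct-2501-fp8-dynamic
# Check that the predictor pod is running
oc get pods -l serving.kserve.io/inferenceservice=mistral-small-24b-instruct-2501-fp8-dynamic
```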
```bash
# Replace <inference-service-name> and <domain> below:
# - Run `oc get inferenceservice` to find your URL if unsure.
# Call the server using curl:
curl https://<inference-service-name>-predictor-default.<domain>/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{
"model": "mistral-small-24b-instruct-2501-fp8-dynamic",
"stream": true,
"stream_options": {
"include_usage": true
},
"max_tokens": 1,
"messages": [
{
"role": "user",
"content": "How can a bee fly when its wings are so small?"
}
]
}'
```
See [Red Hat OpenShift AI documentation](https://docs.redhat.com/en/documentation/red_hat_openshift_ai/2025) for more details.
</details>
## Creation
This model was created with [llm-compressor](https://github.com/vllm-project/llm-compressor) by running the code snippet below.
```python
import argparse
from transformers import AutoModelForCausalLM, AutoTokenizer
from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor.transformers import oneshot
import os
def main():
parser = argparse.ArgumentParser(description='Quantize a transformer model to FP8')
parser.add_argument('--model_id', type=str, required=True,
help='The model ID from HuggingFace (e.g., "meta-llama/Meta-Llama-3-8B-Instruct")')
parser.add_argument('--save_path', type=str, default='.',
help='Custom path to save the quantized model. If not provided, will use model_name-FP8-dynamic')
args = parser.parse_args()
# Load model
model = AutoModelForCausalLM.from_pretrained(
args.model_id, device_map="auto", torch_dtype="auto", trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(args.model_id)
# Configure the quantization algorithm and scheme
recipe = QuantizationModifier(
targets="Linear", scheme="FP8_DYNAMIC", ignore=["lm_head"]
)
# Apply quantization
oneshot(model=model, recipe=recipe)
save_path = os.path.join(args.save_path, args.model_id.split("/")[1] + "-FP8-dynamic")
os.makedirs(save_path, exist_ok=True)
# Save to disk in compressed-tensors format
model.save_pretrained(save_path)
tokenizer.save_pretrained(save_path)
print(f"Model and tokenizer saved to: {save_path}")
if __name__ == "__main__":
main()
```
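For example, saving the snippet above as a script (the filename `quantize_fp8.py` is illustrative) and pointing it at the base model produces the `-FP8-dynamic` output directory:
```bash
python quantize_fp8.py \
  --model_id mistralai/Mistral-Small-24B-Instruct-2501 \
  --save_path ./quantized
# -> ./quantized/Mistral-Small-24B-Instruct-2501-FP8-dynamic
```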
## Evaluation
The model was evaluated on OpenLLM Leaderboard [V1](https://huggingface.co/spaces/open-llm-leaderboard-old/open_llm_leaderboard) and [V2](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/), using the following commands:
OpenLLM Leaderboard V1:
```bash
lm_eval \
--model vllm \
--model_args pretrained="neuralmagic/Mistral-Small-24B-Instruct-2501-FP8-Dynamic",dtype=auto,add_bos_token=True,max_model_len=4096,tensor_parallel_size=1,gpu_memory_utilization=0.8,enable_chunked_prefill=True,trust_remote_code=True \
--tasks openllm \
--write_out \
--batch_size auto \
--output_path output_dir \
--show_config
```
OpenLLM Leaderboard V2:
```bash
lm_eval \
--model vllm \
--model_args pretrained="neuralmagic/Mistral-Small-24B-Instruct-2501-FP8-Dynamic",dtype=auto,add_bos_token=False,max_model_len=4096,tensor_parallel_size=1,gpu_memory_utilization=0.8,enable_chunked_prefill=True,trust_remote_code=True \
--apply_chat_template \
--fewshot_as_multiturn \
--tasks leaderboard \
--write_out \
--batch_size auto \
--output_path output_dir \
--show_config
```
### Accuracy
#### OpenLLM Leaderboard V1 evaluation scores
| Metric | mistralai/Mistral-Small-24B-Instruct-2501 | nm-testing/Mistral-Small-24B-Instruct-2501-FP8-dynamic |
|-----------------------------------------|:---------------------------------:|:-------------------------------------------:|
| ARC-Challenge (Acc-Norm, 25-shot) | 72.18 | 71.76 |
| GSM8K (Strict-Match, 5-shot) | 90.14 | 89.01 |
| HellaSwag (Acc-Norm, 10-shot) | 85.05 | 84.65 |
| MMLU (Acc, 5-shot) | 80.69 | 80.55 |
| TruthfulQA (MC2, 0-shot) | 65.55 | 64.85 |
| Winogrande (Acc, 5-shot) | 83.11 | 82.48 |
| **Average Score** | **79.45** | **78.88** |
| **Recovery (%)** | **100.00** | **99.28** |
#### OpenLLM Leaderboard V2 evaluation scores
| Metric | mistralai/Mistral-Small-24B-Instruct-2501 | nm-testing/Mistral-Small-24B-Instruct-2501-FP8-dynamic |
|---------------------------------------------------------|:---------------------------------:|:-------------------------------------------:|
| IFEval (Inst-and-Prompt Level Strict Acc, 0-shot) | 73.27 | 73.53 |
| BBH (Acc-Norm, 3-shot) | 45.18 | 44.39 |
| MMLU-Pro (Acc, 5-shot) | 38.83 | 37.28 |
| **Average Score** | **52.42** | **51.73** |
| **Recovery (%)** | **100.00** | **98.68** |
| Math-Hard (Exact-Match, 4-shot) | 6.35 | 2.99 |
| GPQA (Acc-Norm, 0-shot) | 8.29 | 6.97 |
| MUSR (Acc-Norm, 0-shot) | 7.84 | 8.04 |
Results on Math-Hard, GPQA, and MUSR are not considered in the accuracy recovery calculation because the unquantized model scores close to random-prediction accuracy on these tasks (6.35, 8.29, 7.84), which does not provide a reliable baseline for recovery.
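The recovery percentages reported above are simply the quantized average divided by the unquantized average; a minimal sketch of that calculation using the V1 and V2 averages from the tables:
```python
def recovery(quantized_avg: float, baseline_avg: float) -> float:
    """Accuracy recovery of the quantized model relative to the unquantized baseline, in percent."""
    return 100.0 * quantized_avg / baseline_avg

print(f"OpenLLM V1 recovery: {recovery(78.88, 79.45):.2f}%")  # ~99.28%
print(f"OpenLLM V2 recovery: {recovery(51.73, 52.42):.2f}%")  # ~98.68%
```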