Files changed (1)
  1. README.md +166 -2
README.md CHANGED
@@ -37,7 +37,14 @@ tags:
- int8
---

- # Mistral-Small-3.1-24B-Instruct-2503-quantized.w8a8
+ <h1 style="display: flex; align-items: center; gap: 10px; margin: 0;">
+ Mistral-Small-3.1-24B-Instruct-2503-quantized.w8a8
+ <img src="https://www.redhat.com/rhdc/managed-files/Catalog-Validated_model_0.png" alt="Model Icon" width="40" style="margin: 0; padding: 0;" />
+ </h1>
+
+ <a href="https://www.redhat.com/en/products/ai/validated-models" target="_blank" style="margin: 0; padding: 0;">
+ <img src="https://www.redhat.com/rhdc/managed-files/Validated_badge-Dark.png" alt="Validated Badge" width="250" style="margin: 0; padding: 0;" />
+ </a>

## Model Overview
- **Model Architecture:** Mistral3ForConditionalGeneration
@@ -79,7 +86,7 @@ This model can be deployed efficiently using the [vLLM](https://docs.vllm.ai/en/
from vllm import LLM, SamplingParams
from transformers import AutoProcessor

- model_id = "RedHatAI/Mistral-Small-3.1-24B-Instruct-2503-FP8-dynamic"
+ model_id = "RedHatAI/Mistral-Small-3.1-24B-Instruct-2503-quantized.w8a8"
number_gpus = 1

sampling_params = SamplingParams(temperature=0.7, top_p=0.8, max_tokens=256)
@@ -99,6 +106,163 @@ print(generated_text)
 
vLLM also supports OpenAI-compatible serving. See the [documentation](https://docs.vllm.ai/en/latest/) for more details.
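
For a quick local check of that OpenAI-compatible mode, a minimal sketch along these lines should work (assuming `vllm serve` on its default port 8000; the prompt is illustrative):

```bash
# Start an OpenAI-compatible server for this model
vllm serve RedHatAI/Mistral-Small-3.1-24B-Instruct-2503-quantized.w8a8

# Query it with a standard chat-completions request
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "RedHatAI/Mistral-Small-3.1-24B-Instruct-2503-quantized.w8a8",
    "messages": [{"role": "user", "content": "What is INT8 (w8a8) quantization?"}],
    "max_tokens": 128
  }'
```
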
+ <details>
+ <summary>Deploy on <strong>Red Hat AI Inference Server</strong></summary>
+
+ ```bash
+ $ podman run --rm -it --device nvidia.com/gpu=all -p 8000:8000 \
+   --ipc=host \
+   --env "HUGGING_FACE_HUB_TOKEN=$HF_TOKEN" \
+   --env "HF_HUB_OFFLINE=0" -v ~/.cache/vllm:/home/vllm/.cache \
+   --name=vllm \
+   registry.access.redhat.com/rhaiis/rh-vllm-cuda \
+   vllm serve \
+   --tensor-parallel-size 8 \
+   --max-model-len 32768 \
+   --enforce-eager --model RedHatAI/Mistral-Small-3.1-24B-Instruct-2503-quantized.w8a8
+ ```
+ See [Red Hat AI Inference Server documentation](https://docs.redhat.com/en/documentation/red_hat_ai_inference_server/) for more details.
+ </details>
+
+ <details>
+ <summary>Deploy on <strong>Red Hat Enterprise Linux AI</strong></summary>
+
+ ```bash
+ # Download model from Red Hat Registry via docker
+ # Note: This downloads the model to ~/.cache/instructlab/models unless --model-dir is specified.
+ ilab model download --repository docker://registry.redhat.io/rhelai1/mistral-small-3-1-24b-instruct-2503-quantized-w8a8:1.5
+ ```
+
+ ```bash
+ # Serve model via ilab
+ ilab model serve --model-path ~/.cache/instructlab/models/mistral-small-3-1-24b-instruct-2503-quantized-w8a8
+
+ # Chat with model
+ ilab model chat --model ~/.cache/instructlab/models/mistral-small-3-1-24b-instruct-2503-quantized-w8a8
+ ```
+ See [Red Hat Enterprise Linux AI documentation](https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_ai/1.4) for more details.
+ </details>
+
+ <details>
+ <summary>Deploy on <strong>Red Hat OpenShift AI</strong></summary>
+
+ ```yaml
+ # Setting up vllm server with ServingRuntime
+ # Save as: vllm-servingruntime.yaml
+ apiVersion: serving.kserve.io/v1alpha1
+ kind: ServingRuntime
+ metadata:
+   name: vllm-cuda-runtime # OPTIONAL CHANGE: set a unique name
+   annotations:
+     openshift.io/display-name: vLLM NVIDIA GPU ServingRuntime for KServe
+     opendatahub.io/recommended-accelerators: '["nvidia.com/gpu"]'
+   labels:
+     opendatahub.io/dashboard: 'true'
+ spec:
+   annotations:
+     prometheus.io/port: '8080'
+     prometheus.io/path: '/metrics'
+   multiModel: false
+   supportedModelFormats:
+     - autoSelect: true
+       name: vLLM
+   containers:
+     - name: kserve-container
+       image: quay.io/modh/vllm:rhoai-2.20-cuda # CHANGE if needed. If AMD: quay.io/modh/vllm:rhoai-2.20-rocm
+       command:
+         - python
+         - -m
+         - vllm.entrypoints.openai.api_server
+       args:
+         - "--port=8080"
+         - "--model=/mnt/models"
+         - "--served-model-name={{.Name}}"
+       env:
+         - name: HF_HOME
+           value: /tmp/hf_home
+       ports:
+         - containerPort: 8080
+           protocol: TCP
+ ```
+
+ ```yaml
+ # Attach model to vllm server. This is an NVIDIA template
+ # Save as: inferenceservice.yaml
+ apiVersion: serving.kserve.io/v1beta1
+ kind: InferenceService
+ metadata:
+   annotations:
+     openshift.io/display-name: mistral-small-3-1-24b-instruct-2503-quantized-w8a8 # OPTIONAL CHANGE
+     serving.kserve.io/deploymentMode: RawDeployment
+   name: mistral-small-3-1-24b-instruct-2503-quantized-w8a8 # specify model name. This value will be used to invoke the model in the payload
+   labels:
+     opendatahub.io/dashboard: 'true'
+ spec:
+   predictor:
+     maxReplicas: 1
+     minReplicas: 1
+     model:
+       modelFormat:
+         name: vLLM
+       name: ''
+       resources:
+         limits:
+           cpu: '2' # this is model specific
+           memory: 8Gi # this is model specific
+           nvidia.com/gpu: '1' # this is accelerator specific
+         requests: # same comment for this block
+           cpu: '1'
+           memory: 4Gi
+           nvidia.com/gpu: '1'
+       runtime: vllm-cuda-runtime # must match the ServingRuntime name above
+       storageUri: oci://registry.redhat.io/rhelai1/modelcar-mistral-small-3-1-24b-instruct-2503-quantized-w8a8:1.5
+     tolerations:
+       - effect: NoSchedule
+         key: nvidia.com/gpu
+         operator: Exists
+ ```
+
+ ```bash
+ # make sure first to be in the project where you want to deploy the model
+ # oc project <project-name>
+
+ # apply both resources to run the model
+
+ # Apply the ServingRuntime
+ oc apply -f vllm-servingruntime.yaml
+
+ # Apply the InferenceService
+ oc apply -f inferenceservice.yaml
+ ```
+
+ ```bash
+ # Replace <inference-service-name> and <cluster-ingress-domain> below:
+ # - Run `oc get inferenceservice` to find your URL if unsure.
+
+ # Call the server using curl:
+ curl https://<inference-service-name>-predictor-default.<cluster-ingress-domain>/v1/chat/completions \
+   -H "Content-Type: application/json" \
+   -d '{
+     "model": "mistral-small-3-1-24b-instruct-2503-quantized-w8a8",
+     "stream": true,
+     "stream_options": {
+       "include_usage": true
+     },
+     "max_tokens": 1,
+     "messages": [
+       {
+         "role": "user",
+         "content": "How can a bee fly when its wings are so small?"
+       }
+     ]
+   }'
+ ```
+
+ See [Red Hat OpenShift AI documentation](https://docs.redhat.com/en/documentation/red_hat_openshift_ai/2025) for more details.
+ </details>
+
+

## Creation

<details>