---
license: apache-2.0
license_name: apache-2.0
license_link: https://www.apache.org/licenses/LICENSE-2.0
name: RedHatAI/Apertus-8B-Instruct-2509-FP8-dynamic
description: >-
  A language model that supports over 1000 languages and long context, uses
  only fully compliant and open training data, and achieves performance
  comparable to models trained behind closed doors.
readme: >-
  https://huggingface.co/RedHatAI/Apertus-8B-Instruct-2509-FP8-dynamic/resolve/main/README.md
pipeline_tag: text-generation
library_name: transformers
tags:
  - multilingual
  - compliant
  - swiss-ai
  - apertus
  - fp8
  - vllm
  - compressed-tensors
  - llm-compressor
tasks:
  - text-to-text
  - text-generation
  - tool-calling
provider: Swiss AI
validated_on:
  - RHOAI 3.0
  - RHAIIS 3.2.5
base_model:
  - swiss-ai/Apertus-8B-Instruct-2509
---

# Apertus-8B-Instruct-2509-FP8-dynamic

## Model Overview

- **Model Architecture:** ApertusForCausalLM
  - **Input:** Text
  - **Output:** Text
- **Model Optimizations:**
  - **Weight quantization:** FP8
  - **Activation quantization:** FP8
- **Release Date:** 9/18/2025
- **Version:** 1.0
- **Model Developers:** Red Hat

Quantized version of [swiss-ai/Apertus-8B-Instruct-2509](https://huggingface.co/swiss-ai/Apertus-8B-Instruct-2509).

## Model Optimizations

This model was obtained by quantizing the weights and activations of [swiss-ai/Apertus-8B-Instruct-2509](https://huggingface.co/swiss-ai/Apertus-8B-Instruct-2509) to the FP8 data type. This optimization reduces the number of bits per parameter from 16 to 8, cutting disk size and GPU memory requirements by approximately 50%. Only the weights and activations of the linear operators within transformer blocks are quantized.
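To confirm the scheme a checkpoint actually ships, you can read its quantization config without downloading any weights; a minimal sketch, assuming the compressed-tensors recipe is recorded in the checkpoint's `config.json` (the convention for these quantized releases):

```python
from transformers import AutoConfig

# Download only the config, not the weights.
config = AutoConfig.from_pretrained("RedHatAI/Apertus-8B-Instruct-2509-FP8-dynamic")

# For compressed-tensors checkpoints this is a dict describing the FP8
# weight/activation scheme and the modules left unquantized (e.g. lm_head).
print(config.quantization_config)
```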

## Deployment

### Use with vLLM

1. Initialize the vLLM server:

```shell
vllm serve RedHatAI/Apertus-8B-Instruct-2509-FP8-dynamic
```

2. Send requests to the server:

```python
from openai import OpenAI

# Modify OpenAI's API key and API base to use vLLM's API server.
openai_api_key = "EMPTY"
openai_api_base = "http://<your-server-host>:8000/v1"

client = OpenAI(
    api_key=openai_api_key,
    base_url=openai_api_base,
)

model = "RedHatAI/Apertus-8B-Instruct-2509-FP8-dynamic"

messages = [
    {"role": "user", "content": "Give me a short introduction to large language models."},
]

outputs = client.chat.completions.create(
    model=model,
    messages=messages,
)

generated_text = outputs.choices[0].message.content
print(generated_text)
```
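For quick experiments without a server, vLLM's offline Python API can load the same checkpoint; a minimal sketch (the sampling parameters are illustrative, not tuned):

```python
from vllm import LLM, SamplingParams

# Load the quantized checkpoint directly; vLLM picks up the
# compressed-tensors FP8 config from the model repository.
llm = LLM(model="RedHatAI/Apertus-8B-Instruct-2509-FP8-dynamic")

sampling_params = SamplingParams(temperature=0.7, max_tokens=256)

messages = [
    {"role": "user", "content": "Give me a short introduction to large language models."},
]

# LLM.chat applies the model's chat template before generating.
outputs = llm.chat(messages, sampling_params)
print(outputs[0].outputs[0].text)
```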
### Deploy on Red Hat AI Inference Server

```shell
podman run --rm -it --device nvidia.com/gpu=all -p 8000:8000 \
  --ipc=host \
  --env "HUGGING_FACE_HUB_TOKEN=$HF_TOKEN" \
  --env "HF_HUB_OFFLINE=0" \
  -v ~/.cache/vllm:/home/vllm/.cache \
  --name=vllm \
  registry.access.redhat.com/rhaiis/rh-vllm-cuda \
  vllm serve \
  --tensor-parallel-size 8 \
  --max-model-len 32768 \
  --enforce-eager --model RedHatAI/Apertus-8B-Instruct-2509-FP8-dynamic
```
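Once the container is running, the server exposes the same OpenAI-compatible API on port 8000; a quick sanity check, assuming the server is reachable at localhost:

```python
from openai import OpenAI

# List the served models; a successful call confirms the endpoint is up
# and shows the model name to use in requests.
client = OpenAI(api_key="EMPTY", base_url="http://localhost:8000/v1")
for model in client.models.list():
    print(model.id)
```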
### Deploy on Red Hat OpenShift AI

```yaml
# Setting up vllm server with ServingRuntime
# Save as: vllm-servingruntime.yaml
apiVersion: serving.kserve.io/v1alpha1
kind: ServingRuntime
metadata:
  name: vllm-cuda-runtime # OPTIONAL CHANGE: set a unique name
  annotations:
    openshift.io/display-name: vLLM NVIDIA GPU ServingRuntime for KServe
    opendatahub.io/recommended-accelerators: '["nvidia.com/gpu"]'
  labels:
    opendatahub.io/dashboard: 'true'
spec:
  annotations:
    prometheus.io/port: '8080'
    prometheus.io/path: '/metrics'
  multiModel: false
  supportedModelFormats:
    - autoSelect: true
      name: vLLM
  containers:
    - name: kserve-container
      image: quay.io/modh/vllm:rhoai-3.0-cuda # CHANGE if needed. If AMD: quay.io/modh/vllm:rhoai-3.0-rocm
      command:
        - python
        - -m
        - vllm.entrypoints.openai.api_server
      args:
        - "--port=8080"
        - "--model=/mnt/models"
        - "--served-model-name={{.Name}}"
      env:
        - name: HF_HOME
          value: /tmp/hf_home
      ports:
        - containerPort: 8080
          protocol: TCP
```
```yaml
# Attach model to vllm server. This is an NVIDIA template
# Save as: inferenceservice.yaml
apiVersion: serving.kserve.io/v1beta1
kind: InferenceService
metadata:
  annotations:
    openshift.io/display-name: Apertus-8B-Instruct-2509-FP8-dynamic # OPTIONAL CHANGE
    serving.kserve.io/deploymentMode: RawDeployment
  name: Apertus-8B-Instruct-2509-FP8-dynamic # specify model name. This value will be used to invoke the model in the payload
  labels:
    opendatahub.io/dashboard: 'true'
spec:
  predictor:
    maxReplicas: 1
    minReplicas: 1
    model:
      modelFormat:
        name: vLLM
      name: ''
      resources:
        limits:
          cpu: '2'            # this is model specific
          memory: 8Gi         # this is model specific
          nvidia.com/gpu: '1' # this is accelerator specific
        requests:             # same comment for this block
          cpu: '1'
          memory: 4Gi
          nvidia.com/gpu: '1'
      runtime: vllm-cuda-runtime # must match the ServingRuntime name above
      storageUri: oci://registry.redhat.io/rhai/modelcar-apertus-8b-instruct-2509-fp8-dynamic:3.0
    tolerations:
    - effect: NoSchedule
      key: nvidia.com/gpu
      operator: Exists
```
```shell
# Make sure first to be in the project where you want to deploy the model
# oc project <project-name>

# Apply both resources to run the model

# Apply the ServingRuntime
oc apply -f vllm-servingruntime.yaml

# Apply the InferenceService
oc apply -f inferenceservice.yaml
```

```shell
# Replace <inference-service-name> and <cluster-ingress-domain> below:
# - Run `oc get inferenceservice` to find your URL if unsure.

# Call the server using curl:
curl https://<inference-service-name>-predictor-default.<cluster-ingress-domain>/v1/chat/completions \
        -H "Content-Type: application/json" \
        -d '{
    "model": "Apertus-8B-Instruct-2509-FP8-dynamic",
    "stream": true,
    "stream_options": {
        "include_usage": true
    },
    "max_tokens": 1,
    "messages": [
        {
            "role": "user",
            "content": "How can a bee fly when its wings are so small?"
        }
    ]
}'
```

See the Red Hat OpenShift AI documentation for more details.

## Creation

This model was created with llm-compressor by running the code snippet below.

**Model Creation Code**

```python
from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor.transformers import oneshot
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load model
model_stub = "swiss-ai/Apertus-8B-Instruct-2509"
model_name = model_stub.split("/")[-1]

model = AutoModelForCausalLM.from_pretrained(model_stub, dtype="auto")

tokenizer = AutoTokenizer.from_pretrained(model_stub)

# Configure the quantization algorithm and scheme:
# dynamic FP8 for all Linear layers, keeping lm_head in full precision
recipe = QuantizationModifier(
    ignore=["lm_head"],
    targets="Linear",
    scheme="FP8_dynamic",
)

# Apply quantization
oneshot(
    model=model,
    recipe=recipe,
)

# Save to disk in compressed-tensors format
save_path = model_name + "-FP8-dynamic"
model.save_pretrained(save_path)
tokenizer.save_pretrained(save_path)
print(f"Model and tokenizer saved to: {save_path}")
```

## Evaluation

The model was evaluated on the OpenLLM Leaderboard V1 tasks using the following command:

**Evaluation Commands**

OpenLLM Leaderboard V1:

```shell
lm_eval \
  --model vllm \
  --model_args pretrained="RedHatAI/Apertus-8B-Instruct-2509-FP8-dynamic",dtype=auto,add_bos_token=True,max_model_len=4096,tensor_parallel_size=1,gpu_memory_utilization=0.6,enable_chunked_prefill=True \
  --tasks openllm \
  --write_out \
  --batch_size auto \
  --output_path output_dir \
  --show_config
```

### Accuracy

| Category | Metric | swiss-ai/Apertus-8B-Instruct-2509 | RedHatAI/Apertus-8B-Instruct-2509-FP8-dynamic | Recovery (%) |
| --- | --- | --- | --- | --- |
| OpenLLM V1 | ARC-Challenge (Acc-Norm, 25-shot) | 65.02 | 65.59 | 101.4 |
| | GSM8K (Strict-Match, 5-shot) | 58.07 | 55.50 | 95.6 |
| | HellaSwag (Acc-Norm, 10-shot) | 80.87 | 81.06 | 100.2 |
| | MMLU (Acc, 5-shot) | 61.97 | 61.86 | 99.8 |
| | TruthfulQA (MC2, 0-shot) | 58.14 | 58.18 | 100.1 |
| | Winogrande (Acc, 5-shot) | 75.14 | 75.45 | 100.4 |
| | **Average Score** | **66.54** | **66.33** | **99.7** |
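Recovery is the quantized model's score expressed as a percentage of the unquantized baseline, e.g. for GSM8K:

```python
baseline, quantized = 58.07, 55.50  # GSM8K scores from the table above
recovery = 100 * quantized / baseline
print(f"Recovery: {recovery:.1f}%")  # Recovery: 95.6%
```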