gemma-4-31B-it-FP8-block-8192

Model Overview

  • Model Architecture: google/gemma-4-31B-it
    • Input: Text / Image
    • Output: Text
  • Model Optimizations:
    • Weight quantization: FP8
    • Activation quantization: FP8
  • Release Date: 2026-04-04
  • Version: 1.0
  • Model Developers: RedHatAI

This model is a quantized version of google/gemma-4-31B-it. It was evaluated on several tasks to assess its quality in comparison to the unquantized model.

Default limits for this repo:

  • Context window: 8192 tokens
  • Max output tokens: 8192 tokens

Generated output still shares the same 8192-token total context budget with the prompt, so the full 8192 output tokens are only available when the prompt is short enough.
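The shared-budget rule above can be sketched as a small helper (the constants mirror the limits listed for this repo; the function name is illustrative, not part of any API):

```python
CONTEXT_WINDOW = 8192  # total token budget shared by prompt and output
MAX_OUTPUT = 8192      # cap on generated tokens

def available_output_tokens(prompt_tokens: int) -> int:
    """Tokens left for generation once the prompt is counted against the context window."""
    return max(0, min(MAX_OUTPUT, CONTEXT_WINDOW - prompt_tokens))

print(available_output_tokens(0))     # empty prompt: the full output budget
print(available_output_tokens(2000))  # a 2000-token prompt leaves 6192 tokens
```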

Model Optimizations

This model was obtained by quantizing the weights and activations of google/gemma-4-31B-it to FP8 data type, ready for inference with vLLM. This optimization reduces the number of bits per parameter from 16 to 8, reducing the disk size and GPU memory requirements by approximately 50%.
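The ~50% figure follows from simple arithmetic on the parameter count. A back-of-envelope sketch (weights only; the real footprint also includes KV cache, activations, and the unquantized layers):

```python
params = 31e9          # 31B parameters
bf16_bytes = params * 2  # 16 bits (2 bytes) per parameter
fp8_bytes = params * 1   # 8 bits (1 byte) per parameter

print(f"BF16 weights: ~{bf16_bytes / 1e9:.0f} GB")  # ~62 GB
print(f"FP8 weights:  ~{fp8_bytes / 1e9:.0f} GB")   # ~31 GB
```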

Only the weights and activations of the linear operators within transformer blocks are quantized, using LLM Compressor. The vision tower, embedding, and output head layers are kept in their original precision.

Deployment

Use with vLLM

This model can be deployed using vLLM. For detailed instructions including multi-GPU deployment, multimodal inference, thinking mode, function calling, and benchmarking, see the Gemma 4 vLLM usage guide.

  1. Start the vLLM server:
vllm serve RedHatAI/gemma-4-31B-it-FP8-block-8192 --max-model-len 8192

To enable thinking/reasoning and tool calling:

vllm serve RedHatAI/gemma-4-31B-it-FP8-block-8192 \
  --max-model-len 8192 \
  --reasoning-parser gemma4 \
  --tool-call-parser gemma4 \
  --enable-auto-tool-choice

Tip: For text-only workloads, pass --limit-mm-per-prompt image=0 to skip vision encoder memory allocation. Set --gpu-memory-utilization 0.90 to maximize KV cache capacity.

  2. Send requests to the server:
from openai import OpenAI

openai_api_key = "EMPTY"
openai_api_base = "http://<your-server-host>:8000/v1"

client = OpenAI(
    api_key=openai_api_key,
    base_url=openai_api_base,
)

model = "RedHatAI/gemma-4-31B-it-FP8-block-8192"

messages = [
    {"role": "user", "content": "Explain quantum mechanics clearly and concisely."},
]

outputs = client.chat.completions.create(
    model=model,
    messages=messages,
    max_tokens=8192,
)

generated_text = outputs.choices[0].message.content
print(generated_text)

Creation

This model was created by applying data-free FP8 block quantization with LLM Compressor, as presented in the code snippet below.

from llmcompressor import model_free_ptq

MODEL_ID = "google/gemma-4-31B-it"
SAVE_DIR = MODEL_ID.split("/")[1] + "-FP8-block"

model_free_ptq(
    model_stub=MODEL_ID,
    save_directory=SAVE_DIR,
    scheme="FP8_BLOCK",
    ignore=["re:.*vision.*", "lm_head", "re:.*embed_tokens.*"],
    max_workers=8,
    device="cuda:0",
)
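The `ignore` list above mixes exact module names with `re:`-prefixed regex patterns. A rough sketch of the matching intent (the module names are hypothetical, and whether LLM Compressor anchors the match exactly this way is an assumption):

```python
import re

# patterns from the quantization recipe above
IGNORE = [r"re:.*vision.*", "lm_head", r"re:.*embed_tokens.*"]

def is_ignored(module_name: str) -> bool:
    """True if a module would be excluded from quantization by the ignore list."""
    for pattern in IGNORE:
        if pattern.startswith("re:"):
            if re.fullmatch(pattern[3:], module_name):
                return True
        elif module_name == pattern:
            return True
    return False

# hypothetical module names for illustration
print(is_ignored("model.vision_tower.blocks.0.attn.qkv"))  # True: matches .*vision.*
print(is_ignored("model.embed_tokens"))                    # True: matches .*embed_tokens.*
print(is_ignored("model.layers.0.mlp.gate_proj"))          # False: quantized
```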

Evaluation

This model was evaluated on GSM8k-Platinum, MMLU-CoT, MMLU-Pro, and IFEval using lm-evaluation-harness, served with vLLM (OpenAI-compatible API). All evaluations were performed with thinking turned off.

Accuracy

| Category | Benchmark | google/gemma-4-31B-it | RedHatAI/gemma-4-31B-it-FP8-block-8192 | Recovery |
|---|---|---|---|---|
| Instruction Following | GSM8k-Platinum (5-shot, strict-match) | 97.60 | 97.82 | 100.2% |
| | MMLU-CoT (5-shot, strict-match) | 90.53 | 90.70 | 100.2% |
| | MMLU-Pro (5-shot, custom-extract) | 85.03 | 84.92 | 99.9% |
| | IFEval (0-shot, prompt-level strict) | 91.07 | 91.31 | 100.3% |
| | IFEval (0-shot, inst-level strict) | 93.76 | 93.84 | 100.1% |
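The Recovery column is the quantized score expressed as a percentage of the baseline score. A minimal sketch, checked against the GSM8k-Platinum row:

```python
def recovery(baseline: float, quantized: float) -> float:
    """Quantized score relative to the unquantized baseline, in percent."""
    return round(100 * quantized / baseline, 1)

print(recovery(97.60, 97.82))  # GSM8k-Platinum row -> 100.2
print(recovery(85.03, 84.92))  # MMLU-Pro row -> 99.9
```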

Reproduction

The results were obtained using the commands below. Each benchmark was run 3 times with different random seeds (42, 1234, 4158), and the scores were averaged.
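The per-seed averaging can be sketched as follows (the per-seed scores shown are hypothetical, for illustration only):

```python
from statistics import mean

SEEDS = (42, 1234, 4158)  # seeds used for the three repetitions

def averaged_score(scores_by_seed: dict[int, float]) -> float:
    """Average the per-seed scores reported for one benchmark."""
    assert set(scores_by_seed) == set(SEEDS), "expected one score per seed"
    return round(mean(scores_by_seed.values()), 2)

# hypothetical per-seed scores for one benchmark
print(averaged_score({42: 97.7, 1234: 97.9, 4158: 97.86}))
```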

vLLM server:

vllm serve RedHatAI/gemma-4-31B-it-FP8-block-8192 --max-model-len 8192

GSM8k-Platinum (lm-eval, 5-shot, 3 repetitions)

lm_eval --model local-chat-completions \
  --tasks gsm8k_platinum_cot_llama \
  --model_args "model=RedHatAI/gemma-4-31B-it-FP8-block-8192,max_length=8192,base_url=http://0.0.0.0:8000/v1/chat/completions,num_concurrent=128,max_retries=3,tokenized_requests=False,tokenizer_backend=None,timeout=2400" \
  --num_fewshot 5 \
  --apply_chat_template \
  --fewshot_as_multiturn \
  --output_path results_gsm8k_platinum.json \
  --seed 1234 \
  --gen_kwargs "do_sample=True,temperature=1.0,top_p=0.95,top_k=64,max_gen_toks=8192,seed=1234"

MMLU-CoT (lm-eval, 5-shot, 3 repetitions)

lm_eval --model local-chat-completions \
  --tasks mmlu_cot_llama \
  --model_args "model=RedHatAI/gemma-4-31B-it-FP8-block-8192,max_length=8192,base_url=http://0.0.0.0:8000/v1/chat/completions,num_concurrent=128,max_retries=3,tokenized_requests=False,tokenizer_backend=None,timeout=2400" \
  --num_fewshot 5 \
  --apply_chat_template \
  --fewshot_as_multiturn \
  --output_path results_mmlu_cot.json \
  --seed 1234 \
  --gen_kwargs "do_sample=True,temperature=1.0,top_p=0.95,top_k=64,max_gen_toks=8192,seed=1234"

MMLU-Pro (lm-eval, 5-shot, 3 repetitions)

lm_eval --model local-chat-completions \
  --tasks mmlu_pro_chat \
  --model_args "model=RedHatAI/gemma-4-31B-it-FP8-block-8192,max_length=8192,base_url=http://0.0.0.0:8000/v1/chat/completions,num_concurrent=128,max_retries=3,tokenized_requests=False,tokenizer_backend=None,timeout=2400" \
  --num_fewshot 5 \
  --apply_chat_template \
  --fewshot_as_multiturn \
  --output_path results_mmlu_pro.json \
  --seed 1234 \
  --gen_kwargs "do_sample=True,temperature=1.0,top_p=0.95,top_k=64,max_gen_toks=8192,seed=1234"

IFEval (lm-eval, 0-shot, 3 repetitions)

lm_eval --model local-chat-completions \
  --tasks ifeval \
  --model_args "model=RedHatAI/gemma-4-31B-it-FP8-block-8192,max_length=8192,base_url=http://0.0.0.0:8000/v1/chat/completions,num_concurrent=128,max_retries=3,tokenized_requests=False,tokenizer_backend=None,timeout=2400" \
  --apply_chat_template \
  --fewshot_as_multiturn \
  --output_path results_ifeval.json \
  --seed 1234 \
  --gen_kwargs "do_sample=True,temperature=1.0,top_p=0.95,top_k=64,max_gen_toks=8192,seed=1234"