# SERA-32B-GA-FP8
FP8 quantization of allenai/SERA-32B-GA, produced with llmcompressor and validated with vLLM.
## Quantization Details
| Parameter | Value |
|---|---|
| Method | FP8 (W8A8) via llmcompressor oneshot |
| Targets | All Linear layers except lm_head |
| Calibration dataset | allenai/Sera-4.5A-Lite-T2 |
| Calibration samples | 512 |
| Calibration sequence length | 2048 tokens |
| llmcompressor version | 0.9.0.2 |
| Hardware | AWS g6e.4xlarge (NVIDIA L40S, 48 GB VRAM) |
| Model size (uploaded) | ~31.7 GB (7 safetensors shards) |
The quantization pipeline processes one decoder layer at a time (pipeline="sequential") and offloads activations to CPU between layers, allowing 32B+ models to be quantized on a single GPU without OOM.
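A minimal sketch of what such a run can look like with llmcompressor's `oneshot` API. This is an illustration assembled from the parameters in the table above, not the exact script used to produce this checkpoint; argument names and the exact quantization scheme string may differ across llmcompressor versions.

```python
from llmcompressor import oneshot
from llmcompressor.modifiers.quantization import QuantizationModifier

# Static FP8 (W8A8) on all Linear layers, keeping lm_head in BF16.
recipe = QuantizationModifier(
    targets="Linear",
    scheme="FP8",
    ignore=["lm_head"],
)

oneshot(
    model="allenai/SERA-32B-GA",
    dataset="allenai/Sera-4.5A-Lite-T2",  # calibration dataset from the table above
    recipe=recipe,
    max_seq_length=2048,
    num_calibration_samples=512,
    pipeline="sequential",  # one decoder layer at a time, activations offloaded to CPU
    output_dir="SERA-32B-GA-FP8",
)
```

Running this requires a GPU with enough memory to hold one decoder layer plus calibration activations; the sequential pipeline is what keeps peak usage within a single 48 GB card.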
## GPU Stats

- 1x NVIDIA L40S (g6e.4xlarge, as in the table above)
- Total time: ~1 hour
## Usage
```python
from vllm import LLM, SamplingParams

llm = LLM(model="ikarabulut-dev/SERA-32B-GA-FP8", max_model_len=16384)
params = SamplingParams(temperature=0.7, max_tokens=512)

# Chat-style messages go through llm.chat(), which applies the model's chat template.
outputs = llm.chat(
    [{"role": "user", "content": "Explain quantum entanglement simply."}],
    params,
)
print(outputs[0].outputs[0].text)
```
Note: This model was validated with `--max-model-len 16384`. Attempting a larger context window on a single 48 GB GPU may OOM.
## Validation
After quantization, the model was loaded into vLLM and a test chat-completion request was sent. The server became healthy in ~120 seconds and produced a well-formed thinking-style response; validation passed.
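This check can be reproduced against a vLLM OpenAI-compatible server started with `vllm serve`. The sketch below uses only the standard library; the base URL and polling interval are assumptions, not values from the original validation run.

```python
import json
import time
import urllib.request

BASE = "http://localhost:8000"  # assumed local `vllm serve` endpoint

# Poll the /health endpoint until the server reports ready.
start = time.time()
while True:
    try:
        urllib.request.urlopen(f"{BASE}/health", timeout=5)
        break
    except OSError:
        time.sleep(5)
print(f"healthy after {time.time() - start:.0f}s")

# Send a single chat completion as a smoke test.
body = json.dumps({
    "model": "ikarabulut-dev/SERA-32B-GA-FP8",
    "messages": [{"role": "user", "content": "Say hello."}],
    "max_tokens": 64,
}).encode()
req = urllib.request.Request(
    f"{BASE}/v1/chat/completions",
    data=body,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    reply = json.load(resp)
print(reply["choices"][0]["message"]["content"])
```

A well-formed response from this request (non-empty `content`, valid JSON) is the same pass criterion described above.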
## Limitations
- Quality degradation relative to the BF16 base model has not been formally benchmarked. FP8 quantization with 512 calibration samples is generally low-loss for instruction-tuned models, but edge cases may differ.
- Maximum recommended context length is 16,384 tokens on a single L40S GPU.
- The `lm_head` layer is kept in BF16 (not quantized) to preserve the output distribution.
## Related
- Base model: allenai/SERA-32B-GA
- Quantization tooling: vllm-project/llm-compressor