# SERA-14B-FP8
FP8 quantization of allenai/SERA-14B, produced with llmcompressor and validated with vLLM.
## Quantization Details
| Parameter | Value |
|---|---|
| Method | FP8 (W8A8) via llmcompressor oneshot |
| Targets | All Linear layers except lm_head |
| Calibration dataset | allenai/Sera-4.5A-Lite-T2 |
| Calibration samples | 512 |
| Calibration sequence length | 2048 tokens |
| llmcompressor version | 0.9.0.2 |
| Hardware | Local GPU (RTX 5080, 16 GB VRAM) |
| Model size (uploaded) | ~16.2 GB (4 safetensors shards) |
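The settings in the table map onto an llmcompressor `oneshot` call roughly like the sketch below. This is not the exact script used (it was not published); it assumes llmcompressor's `QuantizationModifier` with the static `FP8` scheme, which uses the calibration set to fit activation scales, with model and dataset names taken from the table.

```python
from llmcompressor import oneshot
from llmcompressor.modifiers.quantization import QuantizationModifier

# Quantize weights and activations of all Linear layers to FP8,
# leaving lm_head in BF16 as noted in the Limitations section.
recipe = QuantizationModifier(
    targets="Linear",
    scheme="FP8",
    ignore=["lm_head"],
)

oneshot(
    model="allenai/SERA-14B",
    dataset="allenai/Sera-4.5A-Lite-T2",
    recipe=recipe,
    max_seq_length=2048,
    num_calibration_samples=512,
    output_dir="SERA-14B-FP8",
)
```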
## GPU Stats
- 1x RTX 5080
- Total time: 1 hr
## Usage
```python
from vllm import LLM, SamplingParams

llm = LLM(model="bluetrace/SERA-14B-FP8", max_model_len=16384)
params = SamplingParams(temperature=0.7, max_tokens=512)

# Chat-format messages go through llm.chat(), which applies the model's
# chat template; llm.generate() expects plain prompt strings instead.
outputs = llm.chat(
    [{"role": "user", "content": "Explain quantum entanglement simply."}],
    params,
)
print(outputs[0].outputs[0].text)
```
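For serving over HTTP rather than offline inference, the checkpoint can also be launched with vLLM's OpenAI-compatible server; a minimal invocation matching the context length used above would be:

```shell
# Starts an OpenAI-compatible server on the default port 8000.
vllm serve bluetrace/SERA-14B-FP8 --max-model-len 16384
```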
## Validation
After quantization, the model was loaded into vLLM and a test chat-completion request was sent to confirm that the checkpoint loads and generates coherent output.
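A request of that shape can be reproduced against a running vLLM OpenAI-compatible server (e.g. one started with `vllm serve bluetrace/SERA-14B-FP8`, listening on the default port 8000 — an assumption, adjust host/port to your setup):

```shell
# Sends a single chat completion to the local vLLM server.
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "bluetrace/SERA-14B-FP8",
        "messages": [{"role": "user", "content": "Explain quantum entanglement simply."}]
      }'
```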
## Limitations
- Quality degradation relative to the BF16 base model has not been formally benchmarked. FP8 quantization with 512 calibration samples is generally low-loss for instruction-tuned models, but edge cases may differ.
- Maximum recommended context length is 16,384 tokens on a single L40S GPU.
- The `lm_head` layer is kept in BF16 (not quantized) to preserve the output distribution.
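The "generally low-loss" claim above comes down to FP8 E4M3 arithmetic: with 3 mantissa bits, per-element relative rounding error is bounded by about 1/16 (6.25%), and the tensor's largest element round-trips exactly under per-tensor symmetric scaling. The toy sketch below illustrates the math only — it is not llmcompressor's implementation, and it ignores NaN/inf and subnormal edge cases.

```python
import math

E4M3_MAX = 448.0  # largest finite value representable in FP8 E4M3

def round_to_e4m3(x: float) -> float:
    """Round to the nearest E4M3-representable value (normals only)."""
    if x == 0.0:
        return 0.0
    sign = -1.0 if x < 0 else 1.0
    mag = min(abs(x), E4M3_MAX)
    exp = max(math.floor(math.log2(mag)), -6)  # -6 = min normal exponent
    step = 2.0 ** (exp - 3)                    # spacing with 3 mantissa bits
    return sign * round(mag / step) * step

def fp8_round_trip(weights):
    """Per-tensor symmetric quantize -> dequantize, as in FP8 W8A8."""
    scale = max(abs(w) for w in weights) / E4M3_MAX
    return [round_to_e4m3(w / scale) * scale for w in weights]

deq = fp8_round_trip([1.0, -0.3, 0.007, 2.5])
```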
## Related
- Base model: allenai/SERA-14B
- Quantization tooling: vllm-project/llm-compressor