# granite-34b-code-instruct-8k-FP8

FP8 quantized version of IBM's Granite 34B Code model for efficient inference.

This is an FP8 (E4M3) quantized version of ibm-granite/granite-34b-code-instruct-8k in the compressed_tensors format, quantized by TevunahAi on enterprise-grade hardware.
## Recommended Usage: vLLM (Required)

For a 34B model, vLLM is essential for practical deployment. FP8 quantization makes this flagship model accessible on high-end consumer GPUs.

### Quick Start with vLLM

```bash
pip install vllm
```
Python API:

```python
from vllm import LLM, SamplingParams

# vLLM auto-detects FP8 from the model config
llm = LLM(model="TevunahAi/granite-34b-code-instruct-8k-FP8", dtype="auto")

# Generate
prompt = "Write a Python function to calculate fibonacci numbers:"
sampling_params = SamplingParams(temperature=0.7, max_tokens=256)
outputs = llm.generate([prompt], sampling_params)

for output in outputs:
    print(output.outputs[0].text)
```
OpenAI-Compatible API Server:

```bash
vllm serve TevunahAi/granite-34b-code-instruct-8k-FP8 \
    --dtype auto \
    --max-model-len 8192
```
Then use it with the OpenAI client:

```python
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",
    api_key="token-abc123",  # dummy key; the server only checks it if configured to
)

response = client.chat.completions.create(
    model="TevunahAi/granite-34b-code-instruct-8k-FP8",
    messages=[
        {"role": "user", "content": "Write a Python function to calculate fibonacci numbers"}
    ],
    temperature=0.7,
    max_tokens=256,
)
print(response.choices[0].message.content)
```
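For interactive use (editor plugins, chat UIs), the same endpoint supports token streaming through the standard OpenAI client; a minimal sketch, reusing the `client` configured above:

```python
# Stream tokens as they are generated (same server and client setup as above)
stream = client.chat.completions.create(
    model="TevunahAi/granite-34b-code-instruct-8k-FP8",
    messages=[{"role": "user", "content": "Write a Python function to calculate fibonacci numbers"}],
    temperature=0.7,
    max_tokens=256,
    stream=True,
)
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:  # the final chunk may carry no content
        print(delta, end="", flush=True)
```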
### vLLM Benefits

- Weights, activations, and KV cache in FP8 (the KV cache is opt-in; see the sketch after this list)
- ~34GB VRAM (roughly 50% less than the ~68GB needed in BF16)
- Single high-end GPU deployment (H100, RTX 6000 Ada, A100 80GB)
- Native FP8 tensor core acceleration
- Production-grade performance
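The FP8 KV cache is not automatic; it is enabled through vLLM's `kv_cache_dtype` option. A minimal sketch (whether it pays off depends on context length and batch size):

```python
from vllm import LLM

# Weights/activations come FP8 from the checkpoint; the KV cache is opt-in.
llm = LLM(
    model="TevunahAi/granite-34b-code-instruct-8k-FP8",
    dtype="auto",
    kv_cache_dtype="fp8",  # default is "auto" (model dtype); "fp8" halves KV cache memory
    max_model_len=8192,
)
```

The server equivalent is the `--kv-cache-dtype fp8` flag on `vllm serve`.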
## Transformers: Not Practical

At 34B parameters, transformers decompresses the weights to BF16, requiring ~68GB+ of VRAM and a multi-GPU setup or a data-center GPU. This is not recommended for deployment.

Transformers Example (Multi-GPU Required):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Requires multi-GPU or a single 80GB+ GPU
model = AutoModelForCausalLM.from_pretrained(
    "TevunahAi/granite-34b-code-instruct-8k-FP8",
    device_map="auto",  # distributes layers across available GPUs
    torch_dtype="auto",
    low_cpu_mem_usage=True,
)
tokenizer = AutoTokenizer.from_pretrained("TevunahAi/granite-34b-code-instruct-8k-FP8")

# Generate
prompt = "Write a Python function to calculate fibonacci numbers:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
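If the weights must be split across cards, `max_memory` caps what `device_map="auto"` assigns to each device; a sketch assuming two hypothetical 40GB GPUs:

```python
from transformers import AutoModelForCausalLM

# Leave headroom on each GPU for activations and the KV cache
# (the per-device budgets here are illustrative; adjust to your hardware)
model = AutoModelForCausalLM.from_pretrained(
    "TevunahAi/granite-34b-code-instruct-8k-FP8",
    device_map="auto",
    torch_dtype="auto",
    max_memory={0: "38GiB", 1: "38GiB", "cpu": "64GiB"},
)
```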
Requirements:

```bash
pip install "torch>=2.1.0" "transformers>=4.40.0" accelerate compressed-tensors
```
System Requirements:

- ~68GB+ VRAM (weights decompressed to BF16)
- Multi-GPU setup or a single A100 80GB / H100 80GB
- Not practical for most deployments

Critical: use vLLM instead. Transformers is only viable for research and testing on multi-GPU setups.
## Quantization Details
| Property | Value |
|---|---|
| Base Model | ibm-granite/granite-34b-code-instruct-8k |
| Quantization Method | FP8 E4M3 weight-only |
| Framework | llm-compressor + compressed_tensors |
| Calibration Dataset | open_platypus (512 samples) |
| Storage Size | ~34GB (sharded safetensors) |
| VRAM (vLLM) | ~34GB |
| VRAM (Transformers) | ~68GB+ (decompressed to BF16) |
| Target Hardware | NVIDIA H100, A100 80GB, RTX 6000 Ada |
| Quantization Time | 31.0 minutes |
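For reference, an FP8 oneshot run with llm-compressor generally has the shape below. This is a hedged sketch based on llm-compressor's documented API, not the exact recipe used for this checkpoint (the `oneshot` import path also varies across versions):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from llmcompressor import oneshot
from llmcompressor.modifiers.quantization import QuantizationModifier

MODEL_ID = "ibm-granite/granite-34b-code-instruct-8k"

model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype="auto", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)

# FP8 E4M3 on all Linear layers; keep the output head in higher precision
recipe = QuantizationModifier(targets="Linear", scheme="FP8", ignore=["lm_head"])

# Calibrate on open_platypus, matching the table above
oneshot(
    model=model,
    dataset="open_platypus",
    recipe=recipe,
    num_calibration_samples=512,
)

model.save_pretrained("granite-34b-code-instruct-8k-FP8", save_compressed=True)
tokenizer.save_pretrained("granite-34b-code-instruct-8k-FP8")
```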
### Quantization Infrastructure
Professional hardware ensures consistent, high-quality quantization:
- CPUs: Dual Intel Xeon Max 9480 (112 cores / 224 threads, 128GB HBM2e)
- GPU: NVIDIA RTX 5000 Ada Generation (32GB VRAM, native FP8 support)
- Memory: 256GB DDR5 + 128GB HBM2e = 384GB total system memory
- Software Stack: Ubuntu 25.10 | Python 3.12 | PyTorch 2.8 | CUDA 13.0 | llm-compressor
### Performance Notes

Optimal HBM2e Utilization:

- This 34B model is ideally sized for the dual Xeon Max architecture
- The entire run fits the CPU/HBM2e processing path for maximum efficiency
- Better per-parameter throughput than the 20B model (0.91 min/B vs 1.1 min/B)
- Quantization is, counterintuitively, faster than for smaller models thanks to the pure HBM2e workflow
- A sweet spot for this hardware infrastructure
## Why FP8 for 34B Models?

With vLLM/TensorRT-LLM:

- Enables single-GPU deployment (~34GB vs ~68GB in BF16)
- 50% memory reduction across weights, activations, and KV cache
- Faster inference via native FP8 tensor cores
- Makes a flagship model accessible on high-end consumer/prosumer GPUs
- Minimal quality loss for code generation tasks

Without FP8:

- BF16 requires ~68GB VRAM (H100 80GB or multi-GPU)
- Limited deployment options
- Higher infrastructure costs

FP8 quantization moves this 34B model from "data center only" to "high-end workstation deployable".
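The headline numbers follow directly from bytes per parameter; a quick sanity check:

```python
# Weights-only footprint for a 34B-parameter model
# (KV cache and activations come on top of this)
params = 34e9
for name, bytes_per_param in [("BF16", 2), ("FP8", 1)]:
    print(f"{name}: ~{params * bytes_per_param / 1e9:.0f} GB")
# BF16: ~68 GB
# FP8: ~34 GB
```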
## Model Files

This model is sharded across multiple safetensors files, all of which are required for inference. The compressed format enables efficient storage and faster downloads.
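To pre-fetch all shards before first launch (useful for slow links or air-gapped staging), the whole repository can be downloaded with `huggingface_hub`; a minimal sketch:

```python
from huggingface_hub import snapshot_download

# Downloads every safetensors shard plus config/tokenizer files to the local cache
local_path = snapshot_download("TevunahAi/granite-34b-code-instruct-8k-FP8")
print(local_path)  # this path can be passed to vLLM's model= argument
```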
## Granite Code Model Family

IBM's Granite Code models are trained specifically for enterprise code generation. The 34B version is the flagship tier:
| Model | VRAM (vLLM) | Quality | Use Case |
|---|---|---|---|
| 8B-FP8 | ~8GB | Good | Fast iteration, prototyping |
| 20B-FP8 | ~20GB | Better | Complex tasks, better reasoning |
| 34B-FP8 | ~34GB | Best | Flagship performance, production |
34B Benefits:

- State-of-the-art code quality within the Granite family
- Superior reasoning and complex problem solving
- Enterprise-grade completions for mission-critical applications
- Best context understanding across the model family
- 8K context window for larger codebases
## Quality Assurance

- Professional calibration: 512 diverse samples from open_platypus
- Validation: tested on code generation benchmarks
- Format: standard compressed_tensors for broad compatibility
- Optimization: hardware-optimized quantization workflow
## Original Model
This quantization is based on ibm-granite/granite-34b-code-instruct-8k by IBM.
For comprehensive information about the following, please refer to the original model card:

- Model architecture and training methodology
- Supported programming languages
- Evaluation benchmarks and results
- Ethical considerations and responsible AI guidelines
## Hardware Requirements
Minimum (vLLM):
- GPU: NVIDIA A100 40GB or RTX 6000 Ada (48GB)
- VRAM: 34GB minimum, 40GB+ recommended
- CUDA: 11.8 or newer
Recommended (vLLM):
- GPU: NVIDIA H100 (80GB) / A100 80GB / RTX 6000 Ada (48GB)
- VRAM: 40GB+
- CUDA: 12.0+
Transformers:
- GPU: Multi-GPU setup (2x A100 40GB) or single A100/H100 80GB
- VRAM: 68GB+ total
- Not recommended - use vLLM instead
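Native FP8 tensor cores require compute capability 8.9+ (Ada Lovelace / Hopper); a quick check before deploying:

```python
import torch

# FP8 tensor cores need compute capability 8.9+ (e.g. RTX 6000 Ada, H100).
# Ampere cards (8.0/8.6, e.g. A100) can still serve the model, but FP8
# matmuls fall back to higher-precision kernels.
major, minor = torch.cuda.get_device_capability(0)
print(f"Compute capability: {major}.{minor}")
print("Native FP8 tensor cores:", (major, minor) >= (8, 9))
```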
## Additional Resources

- vLLM Documentation: https://docs.vllm.ai
- TensorRT-LLM: https://github.com/NVIDIA/TensorRT-LLM
- TevunahAi Models: https://huggingface.co/TevunahAi
- llm-compressor: https://github.com/vllm-project/llm-compressor
- IBM Granite: https://github.com/ibm-granite
## License

This model inherits the Apache 2.0 license from the original Granite model.
## Acknowledgments
- Original Model: IBM Granite team
- Quantization Framework: Neural Magic's llm-compressor
- Quantized by: TevunahAi
## Citation

If you use this model, please cite the original Granite work:

```bibtex
@misc{granite2024,
  title={Granite Code Models},
  author={IBM Research},
  year={2024},
  url={https://huggingface.co/ibm-granite/granite-34b-code-instruct-8k}
}
```
Professional AI Model Quantization by TevunahAi
Making flagship models accessible through enterprise-grade quantization