---
tags:
- fp8
- vllm
pipeline_tag: text-generation
base_model: sarvamai/sarvam-30b
---
# sarvam-30b-FP8-dynamic
## Model Overview
- **Model Architecture:** sarvamai/sarvam-30b
- **Input:** Text
- **Output:** Text
- **Model Optimizations:**
  - **Weight quantization:** FP8
  - **Activation quantization:** FP8
- **Out-of-scope:** Use in any manner that violates applicable laws or regulations (including trade compliance laws).
- **Version:** 1.0
- **Model Developers:** RedHatAI
This model is a quantized version of [sarvamai/sarvam-30b](https://huggingface.co/sarvamai/sarvam-30b).
It was evaluated on several tasks to assess its quality in comparison to the unquantized model.
### Model Optimizations
This model was obtained by quantizing the weights and activations of [sarvamai/sarvam-30b](https://huggingface.co/sarvamai/sarvam-30b) to FP8 data type, ready for inference with vLLM.
Only the weights and activations of the linear operators within transformer blocks are quantized, using [LLM Compressor](https://github.com/vllm-project/llm-compressor).
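The `FP8_DYNAMIC` scheme stores one static scale per weight output channel and computes activation scales per token on the fly, so no calibration data is needed. The sketch below illustrates only the underlying arithmetic (the constant and helper names here are ours, not LLM Compressor's implementation):
```python
import torch

FP8_E4M3_MAX = 448.0  # largest magnitude representable in float8_e4m3fn

def quantize_weight_per_channel(w: torch.Tensor):
    # One static scale per output channel (row of the linear weight).
    scale = w.abs().amax(dim=1, keepdim=True) / FP8_E4M3_MAX
    w_fp8 = (w / scale).clamp(-FP8_E4M3_MAX, FP8_E4M3_MAX).to(torch.float8_e4m3fn)
    return w_fp8, scale

def quantize_activation_per_token(x: torch.Tensor):
    # Scales come from each token's max magnitude at runtime, which is
    # what makes the scheme "dynamic" (no calibration pass required).
    scale = x.abs().amax(dim=-1, keepdim=True) / FP8_E4M3_MAX
    x_fp8 = (x / scale).clamp(-FP8_E4M3_MAX, FP8_E4M3_MAX).to(torch.float8_e4m3fn)
    return x_fp8, scale

w = torch.randn(4096, 4096)
x = torch.randn(2, 4096)  # two tokens
w_fp8, w_scale = quantize_weight_per_channel(w)
x_fp8, x_scale = quantize_activation_per_token(x)

# Dequantize and matmul in fp32 to check the approximation error.
y_ref = x @ w.T
y_fp8 = (x_fp8.to(torch.float32) * x_scale) @ (w_fp8.to(torch.float32) * w_scale).T
print((y_ref - y_fp8).abs().max())
```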
## Deployment
### Use with vLLM
This model can be deployed efficiently using the [vLLM](https://docs.vllm.ai/en/latest/) backend.
1. Install vLLM from main:
```
uv pip install -U git+https://github.com/vllm-project/vllm.git \
--extra-index-url https://wheels.vllm.ai/nightly \
--no-deps \
--no-cache
```
2. Run using vLLM
```python
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer

model_id = "RedHatAI/sarvam-30b-FP8-dynamic"
number_gpus = 1

sampling_params = SamplingParams(temperature=0.6, top_p=0.9, max_tokens=256)

tokenizer = AutoTokenizer.from_pretrained(model_id)
messages = [
    {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
    {"role": "user", "content": "Who are you?"},
]
# Render the chat template into a single prompt string for vLLM.
prompts = tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)

llm = LLM(model=model_id, tensor_parallel_size=number_gpus)
outputs = llm.generate(prompts, sampling_params)

generated_text = outputs[0].outputs[0].text
print(generated_text)
```
vLLM also supports OpenAI-compatible serving. See the [documentation](https://docs.vllm.ai/en/latest/) for more details.
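For example, after launching a server with `vllm serve RedHatAI/sarvam-30b-FP8-dynamic`, any OpenAI client can talk to it (the port and sampling values below are illustrative):
```python
from openai import OpenAI

# vLLM serves an OpenAI-compatible API on port 8000 by default.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="RedHatAI/sarvam-30b-FP8-dynamic",
    messages=[{"role": "user", "content": "Who are you?"}],
    temperature=0.6,
    max_tokens=256,
)
print(response.choices[0].message.content)
```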
## Creation
This model was created by applying [LLM Compressor](https://github.com/vllm-project/llm-compressor), as presented in the code snippet below.
<details>
<summary>Creation details</summary>
Install llm-compressor from source:
```
uv pip install git+https://github.com/vllm-project/llm-compressor.git
uv pip install --upgrade torchvision --break-system-packages --no-cache
```
```python
from compressed_tensors.offload import dispatch_model
from transformers import AutoModelForCausalLM, AutoTokenizer
from llmcompressor import oneshot
from llmcompressor.modifiers.quantization import QuantizationModifier
MODEL_ID = "sarvamai/sarvam-30b"
# Load model.
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, dtype="auto", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
# Configure the quantization algorithm and scheme.
# In this case, we:
# * quantize the weights to fp8 with per-channel scales via PTQ
# * quantize the activations to fp8 with dynamic per-token scales
recipe = QuantizationModifier(
targets="Linear", scheme="FP8_DYNAMIC", ignore=["lm_head"]
)
# Apply quantization.
oneshot(model=model, recipe=recipe)
# Confirm generations of the quantized model look sane.
print("========== SAMPLE GENERATION ==============")
dispatch_model(model)
input_ids = tokenizer("Hello my name is", return_tensors="pt").input_ids.to(
model.device
)
output = model.generate(input_ids, max_new_tokens=20)
print(tokenizer.decode(output[0]))
print("==========================================")
# Save to disk in compressed-tensors format.
SAVE_DIR = MODEL_ID.rstrip("/").split("/")[-1] + "-FP8-Dynamic"
model.save_pretrained(SAVE_DIR)
tokenizer.save_pretrained(SAVE_DIR)
```
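As an optional sanity check (a sketch only; we are assuming the standard compressed-tensors layout), the saved `config.json` should record the scheme under a `quantization_config` key:
```python
import json

SAVE_DIR = "sarvam-30b-FP8-Dynamic"  # produced by the script above
with open(f"{SAVE_DIR}/config.json") as f:
    config = json.load(f)
# compressed-tensors records the scheme (FP8 weights/activations, ignored
# modules such as lm_head) under this key.
print(json.dumps(config.get("quantization_config", {}), indent=2))
```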
</details>
## Evaluation
This model was evaluated on well-known text benchmarks using [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness).
```
lm_eval \
--model vllm \
--model_args pretrained="RedHatAI/sarvam-30b-FP8-dynamic",dtype=auto,add_bos_token=True,max_model_len=16384,tensor_parallel_size=2,gpu_memory_utilization=0.8,enable_chunked_prefill=True,trust_remote_code=True \
--tasks openllm \
--write_out \
--batch_size auto \
--show_config
```
### Accuracy
| Benchmark | sarvamai/sarvam-30b | RedHatAI/sarvam-30b-FP8-dynamic | Recovery (%) |
|---|---|---|---|
| BBH (exact_match) | 63.32 | 62.95 | 99.42% |
| GSM8K (strict-match) | 72.33 | 72.40 | 100.10% |
| GSM8K (flexible-extract) | 69.67 | 70.81 | 101.63% |
| IFEval (inst_level_strict_acc) | 34.17 | 31.65 | 92.63% |
| MMLU-Pro (exact_match) | 45.69 | 45.81 | 100.25% |
| ARC-Challenge (acc) | 58.28 | 57.76 | 99.12% |
| HellaSwag (acc) | 53.98 | 53.98 | 100.00% |
| MMLU (acc) | 66.20 | 66.15 | 99.92% |
| TruthfulQA MC2 (acc) | 50.34 | 50.58 | 100.48% |
| Winogrande (acc) | 61.09 | 61.17 | 100.13% |
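Recovery is the quantized model's score as a percentage of the baseline score; for instance:
```python
def recovery(quantized: float, baseline: float) -> float:
    # Score of the quantized model relative to the unquantized baseline.
    return 100.0 * quantized / baseline

print(f"{recovery(62.95, 63.32):.2f}%")  # BBH -> 99.42%
```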