krishnateja95 committed · verified · Commit cec01a0 · Parent(s): 1739e6a

Update Readme.md

---
tags:
- fp8
- vllm
pipeline_tag: text-generation
base_model: sarvamai/sarvam-105b
---

# sarvam-105b-FP8-dynamic

## Model Overview
- **Model Architecture:** sarvamai/sarvam-105b
- **Input:** Text
- **Output:** Text
- **Model Optimizations:**
  - **Weight quantization:** FP8
  - **Activation quantization:** FP8
- **Out-of-scope:** Use in any manner that violates applicable laws or regulations (including trade compliance laws).
- **Version:** 1.0
- **Model Developers:** RedHatAI

This model is a quantized version of [sarvamai/sarvam-105b](https://huggingface.co/sarvamai/sarvam-105b).
It was evaluated on several tasks to assess its quality in comparison to the unquantized model.

### Model Optimizations

This model was obtained by quantizing the weights and activations of [sarvamai/sarvam-105b](https://huggingface.co/sarvamai/sarvam-105b) to the FP8 data type, ready for inference with vLLM.

Only the weights and activations of the linear operators within transformer blocks are quantized, using [LLM Compressor](https://github.com/vllm-project/llm-compressor).
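The "dynamic" part of this scheme means each activation row (token) gets its own scale, computed at runtime from the row's absolute maximum. A minimal pure-Python sketch of that scaling step (an illustration, not the actual kernel; `quantize_row` is our name, and real kernels additionally round to the nearest representable E4M3 value):

```python
# Largest finite value representable in the FP8 E4M3 format.
FP8_E4M3_MAX = 448.0

def quantize_row(row):
    """Scale one token's activations into the representable FP8 range.

    Dynamic per-token quantization: the scale is derived from this row's
    own absolute maximum, so no calibration data is needed.
    """
    amax = max(abs(v) for v in row)
    scale = amax / FP8_E4M3_MAX if amax > 0 else 1.0
    # Rounding to E4M3 is omitted; the scaling alone bounds values
    # to [-448, 448], the representable FP8 range.
    return [v / scale for v in row], scale

row = [0.5, -2.0, 3.5, 896.0]
q, scale = quantize_row(row)
print(scale)   # 2.0
print(max(q))  # 448.0
```

Weights use static per-channel scales instead, since they are fixed at quantization time.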

## Deployment

### Use with vLLM

This model can be deployed efficiently using the [vLLM](https://docs.vllm.ai/en/latest/) backend.

1. Install vLLM from main:
```
uv pip install -U git+https://github.com/vllm-project/vllm.git \
    --extra-index-url https://wheels.vllm.ai/nightly \
    --no-deps \
    --no-cache
```

2. Run using vLLM:
```python
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer

model_id = "RedHatAI/sarvam-105b-FP8-dynamic"
number_gpus = 1

sampling_params = SamplingParams(temperature=0.6, top_p=0.9, max_tokens=256)

tokenizer = AutoTokenizer.from_pretrained(model_id)

messages = [
    {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
    {"role": "user", "content": "Who are you?"},
]

prompts = tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)

llm = LLM(model=model_id, tensor_parallel_size=number_gpus)

outputs = llm.generate(prompts, sampling_params)

generated_text = outputs[0].outputs[0].text
print(generated_text)
```

vLLM also supports OpenAI-compatible serving. See the [documentation](https://docs.vllm.ai/en/latest/) for more details.
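As a sketch of the OpenAI-compatible route (assuming a server started with `vllm serve RedHatAI/sarvam-105b-FP8-dynamic` on vLLM's default port 8000; `build_request` is our helper name), a chat-completions request can be built with the standard library:

```python
import json
import urllib.request

# Chat-completions request body in the OpenAI-compatible format,
# mirroring the offline example above.
payload = {
    "model": "RedHatAI/sarvam-105b-FP8-dynamic",
    "messages": [
        {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
        {"role": "user", "content": "Who are you?"},
    ],
    "temperature": 0.6,
    "top_p": 0.9,
    "max_tokens": 256,
}

def build_request(base_url: str = "http://localhost:8000") -> urllib.request.Request:
    """Build the POST request; sending it requires a running vLLM server."""
    return urllib.request.Request(
        url=f"{base_url}/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_request()
# urllib.request.urlopen(req) would send it to a running server.
print(req.full_url)  # http://localhost:8000/v1/chat/completions
```

The same request body works with the official `openai` client pointed at the server's base URL.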

## Creation

This model was created by applying [LLM Compressor](https://github.com/vllm-project/llm-compressor), as presented in the code snippet below.

<details>
<summary>Creation details</summary>

Install llm-compressor from source:
```
uv pip install git+https://github.com/vllm-project/llm-compressor.git
uv pip install --upgrade torchvision --break-system-packages --no-cache
```

```python
from compressed_tensors.offload import dispatch_model
from transformers import AutoModelForCausalLM, AutoTokenizer

from llmcompressor import oneshot
from llmcompressor.modifiers.quantization import QuantizationModifier

MODEL_ID = "sarvamai/sarvam-105b"

# Load model.
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, dtype="auto", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)

# Configure the quantization algorithm and scheme.
# In this case, we:
#   * quantize the weights to FP8 with per-channel scales via PTQ
#   * quantize the activations to FP8 with dynamic per-token scales
recipe = QuantizationModifier(
    targets="Linear", scheme="FP8_DYNAMIC", ignore=["lm_head"]
)

# Apply quantization.
oneshot(model=model, recipe=recipe)

# Confirm generations of the quantized model look sane.
print("========== SAMPLE GENERATION ==============")
dispatch_model(model)
input_ids = tokenizer("Hello my name is", return_tensors="pt").input_ids.to(
    model.device
)
output = model.generate(input_ids, max_new_tokens=20)
print(tokenizer.decode(output[0]))
print("==========================================")

# Save to disk in compressed-tensors format.
SAVE_DIR = MODEL_ID.rstrip("/").split("/")[-1] + "-FP8-dynamic"
model.save_pretrained(SAVE_DIR)
tokenizer.save_pretrained(SAVE_DIR)
```

</details>

## Evaluation

This model was evaluated on well-known text benchmarks using [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness).

```
lm_eval \
  --model vllm \
  --model_args pretrained="RedHatAI/sarvam-105b-FP8-dynamic",dtype=auto,add_bos_token=True,max_model_len=16384,tensor_parallel_size=2,gpu_memory_utilization=0.8,enable_chunked_prefill=True,trust_remote_code=True \
  --tasks openllm \
  --write_out \
  --batch_size auto \
  --show_config
```

### Accuracy

| Benchmark | sarvamai/sarvam-105b | RedHatAI/sarvam-105b-FP8-dynamic | Recovery (%) |
|---|---|---|---|
| BBH (exact_match) | 80.86 | 79.93 | 98.84% |
| GSM8K (strict-match) | 84.38 | 85.37 | 101.17% |
| GSM8K (flexible-extract) | 84.61 | 85.90 | 101.52% |
| IFEval (inst_level_strict_acc) | 50.84 | 51.08 | 100.47% |
| MMLU-Pro (exact_match) | 57.40 | 57.25 | 99.74% |
| ARC-Challenge (acc) | 65.70 | 66.72 | 101.56% |
| HellaSwag (acc) | 63.57 | 63.52 | 99.92% |
| MMLU (acc) | 77.59 | 77.56 | 99.96% |
| TruthfulQA MC2 (acc) | 51.21 | 51.64 | 100.85% |
| Winogrande (acc) | 76.32 | 76.40 | 100.10% |
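The Recovery column is simply the quantized score expressed as a percentage of the baseline score. A one-line sketch (the `recovery` helper name is ours; tiny discrepancies against the published column can arise because the displayed scores are themselves rounded to two decimals):

```python
def recovery(baseline: float, quantized: float) -> float:
    """Quantized-model score as a percentage of the baseline score."""
    return round(quantized / baseline * 100, 2)

# Reproduce two rows of the table above.
print(recovery(84.38, 85.37))  # 101.17 -- GSM8K (strict-match)
print(recovery(77.59, 77.56))  # 99.96  -- MMLU (acc)
```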