WilhelmT committed · Commit d8ad33a · verified · 1 Parent(s): df0fb1e

Update README.md

Files changed (1): README.md (+185 -5)
README.md CHANGED
---
license: other
license_name: embedl-models-community-licence-1.0
license_link: https://github.com/embedl/embedl-models/blob/main/LICENSE
base_model:
- meta-llama/Llama-3.2-1B-Instruct
tags:
- text-generation-inference
---

# Llama-3.2-1B-Instruct-FlashHead-W4A16

**An optimized version of Llama-3.2-1B-Instruct that combines W4A16 quantization with FlashHead, Embedl's efficient replacement for the language-model head, reducing model size while preserving accuracy.**
Designed for **low-latency inference** on **NVIDIA RTX GPUs**, leveraging:

- FlashHead
- Quantization (W4A16)
- Custom vLLM generation via `embedl-models`

FlashHead matches the baseline **Llama-3.2-1B** within rounding on standard evaluations (MMLU-Pro, HellaSwag, GSM8K, etc.) and, combined with quantization, achieves **H200-level latency** on **RTX Ada** GPUs.

---

## Model Details

| **Field** | **Value** |
|------------|------------|
| **Base Model** | Llama-3.2-1B-Instruct |
| **Input / Output** | Text → Text |
| **Release Date** | 2025-12-08 |
| **Version** | 1.0 |
| **Optimizations** | FlashHead LM head, W4A16 quantization |
| **Developers** | Embedl |
| **Licenses** | Upstream: Meta Llama 3.2 License. Built with Llama. <br>Optimized components: Embedl Models Community Licence v1.0 *(no redistribution)* |
| **Intended Use** | Text generation, reasoning, assistant-style interaction, and general-purpose NLP on NVIDIA RTX GPUs |

---

## Optimizations

- **FlashHead LM head** - a lightweight replacement for the dense LM head that significantly improves throughput.
- **Mixed-precision quantization (W4A16)** - 4-bit weights with 16-bit activations, balancing memory footprint and accuracy (see the generic sketch after this list).
- **Custom runtime integration** - compatible with **vLLM (0.10.2)** via the `embedl-models` package.

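To make the W4A16 notation concrete, here is a generic, illustrative sketch of 4-bit weights paired with 16-bit activations. It uses per-channel symmetric quantization purely as a common example; this card does not describe Embedl's actual quantization scheme or FlashHead internals, so nothing below should be read as the real implementation.

```python
# Generic illustration of W4A16: weights stored as 4-bit integers,
# activations kept in 16-bit floats. NOT Embedl's actual scheme.
import torch

def quantize_w4(w: torch.Tensor):
    # Per-output-channel symmetric scale mapping weights into the int4 range [-8, 7].
    scale = w.abs().amax(dim=1, keepdim=True) / 7.0
    q = torch.clamp(torch.round(w / scale), -8, 7).to(torch.int8)  # int4 values, int8 storage
    return q, scale

def dequantize_fp16(q: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    return (q.to(torch.float32) * scale).to(torch.float16)

w = torch.randn(4096, 2048)                    # a dense weight matrix
q, scale = quantize_w4(w)                      # W4: 4-bit weight storage
w_hat = dequantize_fp16(q, scale)              # dequantized back to fp16 for the matmul
x = torch.randn(1, 2048, dtype=torch.float16)  # A16: fp16 activations
# Matmul shown in fp32 for CPU portability; on GPU this would stay in fp16.
y = (x.float() @ w_hat.float().T).to(torch.float16)
print("max abs weight error:", (w.to(torch.float16) - w_hat).abs().max().item())
```
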
---

## Performance

### Token Generation Speed (RTX 3500 Ada, batch size = 1)

| **Precision** | **Tokens/sec** | **Speedup vs BF16** |
|----------------|----------------|----------------------|
| BF16 baseline | 130 | 1.0× |
| **FlashHead (Embedl)** | **163** | **1.25×** |
| W4A16 baseline | 278 | 2.14× |
| **FlashHead W4A16 (Embedl)** | **485** | **3.73×** |

FlashHead improves end-to-end generation speed by **1.75×** over the state-of-the-art W4A16 baseline (485 vs. 278 tokens/sec) while maintaining full accuracy parity.

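As a rough way to check tokens/sec on your own hardware, the sketch below times a single greedy generation through the same `embedl.models.vllm.LLM` entry point used in the usage examples. Results will vary with GPU, driver, and prompt; this is not the benchmarking methodology behind the table above.

```python
# Rough single-prompt throughput check (batch size 1, greedy decoding).
# Numbers vary by GPU/driver; not the methodology used for the table above.
import time

from vllm import SamplingParams
from embedl.models.vllm import LLM

model_id = "embedl/Llama-3.2-1B-Instruct-FlashHead-W4A16"
llm = LLM(model=model_id, trust_remote_code=True)
sampling = SamplingParams(max_tokens=256, temperature=0.0)

start = time.perf_counter()
out = llm.generate(["Explain speculative decoding in one paragraph."], sampling)
elapsed = time.perf_counter() - start

n_tokens = len(out[0].outputs[0].token_ids)
print(f"{n_tokens} tokens in {elapsed:.2f}s -> {n_tokens / elapsed:.1f} tokens/sec")
```
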
---

## Accuracy (Parity with Baseline)

| **Method** | **MMLU-Pro** | **HellaSwag** | **IFEval** | **BoolQ** | **BBH** | **TruthfulQA** | **GSM8K** |
|-------------|---------------|----------------|--------------|-------------|-------------|----------------|--------------|
| **Baseline** | 0.18 | 0.59 | 0.45 | 0.69 | 0.38 | 0.36 | 0.46 |
| **FlashHead** | 0.18 | 0.59 | 0.45 | 0.69 | 0.38 | 0.36 | 0.46 |

FlashHead matches the baseline across all evaluated benchmarks to the reported precision (two decimal places).

---

## Installation

```bash
pip install embedl-models
```

The `embedl-models` package is required; it provides the optimized FlashHead implementation and the quantized-model runtime.

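A quick, optional way to verify the installation is to import the package and check the pinned vLLM version. The module path `embedl.models.vllm` follows the usage examples below; anything beyond that is an assumption.

```python
# Optional post-install sanity check: both imports should succeed, and the
# vLLM version should match the pinned 0.10.2 noted under Limitations.
import vllm
import embedl.models.vllm  # provided by the embedl-models package

print("vLLM version:", vllm.__version__)  # expected: 0.10.2
```
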
---

## Usage Examples

### vLLM Inference

```python
from vllm import SamplingParams
from embedl.models.vllm import LLM  # FlashHead-enabled drop-in for vllm.LLM

model_id = "embedl/Llama-3.2-1B-Instruct-FlashHead-W4A16"

# Greedy decoding, up to 128 new tokens.
sampling = SamplingParams(max_tokens=128, temperature=0.0)
llm = LLM(model=model_id, trust_remote_code=True)

prompt = "Write a haiku about coffee."
output = llm.generate([prompt], sampling)
print(output[0].outputs[0].text)
```
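
Since this is an instruct-tuned model, prompts generally benefit from the Llama 3.2 chat template. A minimal sketch, assuming the repository ships a tokenizer with a chat template (as upstream Llama-3.2-1B-Instruct does):

```python
# Hedged sketch: format a chat message with the model's chat template
# before generating. Assumes the repo's tokenizer includes a chat template.
from transformers import AutoTokenizer
from vllm import SamplingParams
from embedl.models.vllm import LLM

model_id = "embedl/Llama-3.2-1B-Instruct-FlashHead-W4A16"
tokenizer = AutoTokenizer.from_pretrained(model_id)

messages = [{"role": "user", "content": "Write a haiku about coffee."}]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

llm = LLM(model=model_id, trust_remote_code=True)
output = llm.generate([prompt], SamplingParams(max_tokens=128, temperature=0.0))
print(output[0].outputs[0].text)
```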

---

### Interactive REPL Example

The `run_repl()` coroutine launches an **interactive, streaming chat interface** using the vLLM backend with FlashHead enabled.
It maintains an in-memory chat history and supports simple commands such as `/exit` to quit and `/reset` to clear context.

```python
import asyncio

from embedl.models.vllm.demo import run_repl

model_id = "embedl/Llama-3.2-1B-Instruct-FlashHead-W4A16"
asyncio.run(
    run_repl(
        model=model_id,
    )
)
```
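
Note that `asyncio.run()` cannot be called from inside an already running event loop (for example, in a Jupyter notebook); in that case, `await run_repl(model=model_id)` directly.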

---

## ⚠️ Important Warning: Hugging Face Transformers Support

> **FlashHead is currently not applied when using the Hugging Face `transformers` pipeline.**
> Generation through `transformers` will fall back to the standard dense LM head, **disabling FlashHead acceleration**.
>
> For now, **we strongly recommend using the vLLM integration** (`embedl.models.vllm.LLM`) to ensure FlashHead is active and optimized for low-latency inference.
>
> Full support for the Hugging Face `transformers` pipeline with FlashHead integration will be released **in the coming days**.

---

## Limitations

- Requires **vLLM 0.10.2** (pinned dependency)
- Optimized for **batch size = 1** (real-time generation)
- Currently optimized for **NVIDIA RTX GPUs**

---

## Roadmap

Planned improvements:

- Hugging Face `transformers` generation
- Advanced mixed-precision quantization
- vLLM CLI benchmarking for detailed latency evaluation
- `lm-eval-harness` integration for detailed accuracy evaluation
- Upstream support in **Transformers** and **vLLM**
- Compatibility with **GGUF**, **MLC**, **llama.cpp**, **Ollama**, etc.
- Broader model coverage (larger models, VLMs, VLAs)

---

## License

- **Upstream:** Meta Llama 3.2 License
- **Optimized Components:** Embedl Models Community Licence v1.0 *(no redistribution)*

---

## Contact

**Enterprise & Commercial Inquiries**
[sales@embedl.com](mailto:sales@embedl.com)

**Technical Issues & Early Access**
[https://github.com/embedl/embedl-models](https://github.com/embedl/embedl-models)

**More Information & Model Releases**
[https://embedl.com](https://embedl.com)

---

### Partner & Developer Opportunities

If you are evaluating on-device inference, building products on SLMs, or exploring custom model optimization, reach out for:

- Embedl SDK - AI optimization tools & profiling
- Embedl HUB - benchmarking platform
- Engineering support for on-prem/edge deployments
- Migration guidance (Llama / Qwen / Gemma)
- Early access & partner co-marketing opportunities

Contact: [sales@embedl.com](mailto:sales@embedl.com)