---
license: mit
language:
- en
pipeline_tag: text-generation
base_model:
- deepseek-ai/DeepSeek-R1-Distill-Qwen-14B
tags:
- chat
library_name: transformers
---
# Model Overview
- **Model Optimizations:**
  - **Weight quantization:** FP8
  - **Activation quantization:** FP8
- **Release Date:** 1/28/2025
Quantized version of [deepseek-ai/DeepSeek-R1-Distill-Qwen-14B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-14B) to FP8 data type, ready for inference with SGLang >= 0.3 or vLLM >= 0.5.2.
This optimization reduces the number of bits per parameter from 16 to 8, cutting disk size and GPU memory requirements by approximately 50%. Only the weights and activations of the linear operators within transformer blocks are quantized; the `lm_head` is left in its original precision.
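As a rough sanity check on that ~50% figure, a back-of-envelope sketch (illustrative only; the parameter count is approximate and the unquantized `lm_head` is ignored):

```python
num_params = 14.7e9              # ~14.7B parameters in the base model (approximate)
bf16_gb = num_params * 2 / 1e9   # 2 bytes/param at 16-bit -> ~29 GB
fp8_gb = num_params * 1 / 1e9    # 1 byte/param at 8-bit   -> ~15 GB
print(f"BF16 ~{bf16_gb:.0f} GB, FP8 ~{fp8_gb:.0f} GB")
```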
## Deployment
### Use with SGLang
```bash
python -m sglang.launch_server \
  --model-path JamAndTeaStudios/DeepSeek-R1-Distill-Qwen-14B-FP8-Dynamic \
  --port 30000 --host 0.0.0.0
```
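### Use with vLLM

The model is also advertised as compatible with vLLM >= 0.5.2. A minimal offline-inference sketch using vLLM's standard `LLM` API (untested here; vLLM should detect the compressed-tensors quantization config from the checkpoint):

```python
from vllm import LLM, SamplingParams

llm = LLM(model="JamAndTeaStudios/DeepSeek-R1-Distill-Qwen-14B-FP8-Dynamic")
sampling = SamplingParams(temperature=0.6, max_tokens=256)

outputs = llm.generate(["Hello, my name is"], sampling)
print(outputs[0].outputs[0].text)
```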
## Creation
This model was created with [llm-compressor](https://github.com/vllm-project/llm-compressor) by running the code snippet below.
<details>
<summary>Model Creation Code</summary>

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor.transformers import oneshot
MODEL_ID = "google/gemma-2-27b-it"
# 1) Load model.
model = AutoModelForCausalLM.from_pretrained(
MODEL_ID, device_map="auto", torch_dtype="auto"
)
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
# 2) Configure the quantization algorithm and scheme.
# In this case, we:
# * quantize the weights to FP8 with static per-channel scales via PTQ
# * quantize the activations to FP8 with dynamic per-token scales
#   (see the toy sketch after this details block)
recipe = QuantizationModifier(
targets="Linear", scheme="FP8_DYNAMIC", ignore=["lm_head"]
)
# 3) Apply quantization and save in compressed-tensors format.
OUTPUT_DIR = MODEL_ID.split("/")[1] + "-FP8-Dynamic"
oneshot(
model=model,
recipe=recipe,
tokenizer=tokenizer,
output_dir=OUTPUT_DIR,
)
# Confirm generations of the quantized model look sane.
print("========== SAMPLE GENERATION ==============")
input_ids = tokenizer("Hello my name is", return_tensors="pt").input_ids.to("cuda")
output = model.generate(input_ids, max_new_tokens=20)
print(tokenizer.decode(output[0]))
print("==========================================")
```

</details>
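For intuition about the `FP8_DYNAMIC` scheme above: weights get one static scale per output channel, while activations get one scale per token computed at runtime from that token's max magnitude. A toy sketch of the per-token side (not llm-compressor's actual kernel; assumes PyTorch >= 2.1 for the `float8_e4m3fn` dtype, whose largest representable magnitude is 448):

```python
import torch

def fp8_dynamic_quantize(x: torch.Tensor):
    """Toy per-token FP8 (E4M3) quantization: one scale per row."""
    FP8_MAX = 448.0  # largest finite value in float8_e4m3fn
    scale = x.abs().amax(dim=-1, keepdim=True) / FP8_MAX
    scale = scale.clamp(min=1e-12)            # guard against all-zero rows
    q = (x / scale).to(torch.float8_e4m3fn)   # cast saturates to the FP8 range
    return q, scale

x = torch.randn(4, 16)                        # 4 "tokens", hidden size 16
q, scale = fp8_dynamic_quantize(x)
dequant = q.to(torch.float32) * scale
print("max abs error:", (x - dequant).abs().max().item())
```

Because the activation scales are recomputed per token at inference time, no calibration data is needed, which is why the `oneshot` call above runs without a dataset.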
## Evaluation
TBA
## Play Retail Mage

[Retail Mage (Steam)](https://store.steampowered.com/app/3224380/Retail_Mage/) is an immersive sim that uses online LLM inference in almost every gameplay feature!
### Reviews

> “A true to life experience detailing how customer service really works.”
>
> 10/10 – kpolupo

> “I enjoyed how many things were flammable in the store.”
>
> 5/5 – mr_srsbsns

> “I've only known that talking little crow plushie in MageMart for a day and a half but if anything happened to him I would petrify everyone in this store and then myself.”
>
> 7/7 – neondenki