---
library_name: transformers
license: apache-2.0
pipeline_tag: text-generation
base_model:
- Qwen/Qwen2.5-Coder-7B-Instruct
tags:
- llmcompressor
- quantized
- FP8
---

# Qwen2.5-Coder-7B-Instruct-FP8-dynamic

## Model Overview
- **Model Architecture:** Qwen2ForCausalLM
  - **Input:** Text
  - **Output:** Text
- **Model Optimizations:**
  - **Activation quantization:** FP8
  - **Weight quantization:** FP8
- **Release Date:** 09/06/2025
- **Version:** 1.0
- **Model Developers:** duydq12 (enhanced by RedHatAI)

### Model Optimizations

This model was obtained by quantizing the activations and weights of [Qwen2.5-Coder-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-7B-Instruct) to the FP8 data type.
This optimization reduces the number of bits used to represent weights and activations from 16 to 8, cutting GPU memory requirements by approximately 50% and increasing matrix-multiply compute throughput by approximately 2x.
Weight quantization also reduces disk size requirements by approximately 50%.

Only the weights and activations of the linear operators within transformer blocks are quantized.
Weights are quantized with a symmetric static per-channel scheme, whereas activations are quantized with a symmetric dynamic per-token scheme.
The [llm-compressor](https://github.com/vllm-project/llm-compressor) library is used for quantization.
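
To make the two schemes concrete, the sketch below reproduces the arithmetic in plain PyTorch: one static scale per weight output channel, one dynamic scale per activation token. This is a numerical illustration only, not the fused FP8 kernels vLLM actually executes, and it assumes a PyTorch build with the `torch.float8_e4m3fn` dtype (2.1+).

```python
import torch

# E4M3 is the FP8 format used for weights and activations here;
# its maximum representable magnitude is 448.
FP8_MAX = torch.finfo(torch.float8_e4m3fn).max

def quantize_weight_per_channel(w: torch.Tensor):
    """Symmetric static per-channel scheme: one scale per output
    channel (row of the Linear weight), computed once offline and
    stored with the checkpoint."""
    scale = w.abs().amax(dim=1, keepdim=True) / FP8_MAX
    w_fp8 = (w / scale).clamp(-FP8_MAX, FP8_MAX).to(torch.float8_e4m3fn)
    return w_fp8, scale

def quantize_activation_per_token(x: torch.Tensor):
    """Symmetric dynamic per-token scheme: one scale per token,
    recomputed at every forward pass, so no calibration data is
    needed ("dynamic")."""
    scale = x.abs().amax(dim=-1, keepdim=True) / FP8_MAX
    x_fp8 = (x / scale).clamp(-FP8_MAX, FP8_MAX).to(torch.float8_e4m3fn)
    return x_fp8, scale

w = torch.randn(16, 32)  # Linear weight: [out_features, in_features]
x = torch.randn(4, 32)   # activations:   [tokens, in_features]
w_fp8, w_scale = quantize_weight_per_channel(w)
x_fp8, x_scale = quantize_activation_per_token(x)

# Dequantize and compare against the full-precision matmul.
y_ref = x @ w.t()
y_fp8 = (x_fp8.float() * x_scale) @ (w_fp8.float() * w_scale).t()
print(f"max abs error: {(y_ref - y_fp8).abs().max():.4f}")
```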

## Deployment

This model can be deployed efficiently using the [vLLM](https://docs.vllm.ai/en/latest/) backend, as shown in the example below.

```python
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer

model_id = "duydq12/Qwen2.5-Coder-7B-Instruct-FP8-dynamic"
number_gpus = 1

sampling_params = SamplingParams(temperature=0.6, top_p=0.95, top_k=20, min_p=0, max_tokens=256)

tokenizer = AutoTokenizer.from_pretrained(model_id)

messages = [{"role": "user", "content": "Give me a short introduction to large language model."}]

prompts = tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)

llm = LLM(model=model_id, tensor_parallel_size=number_gpus)

outputs = llm.generate(prompts, sampling_params)

generated_text = outputs[0].outputs[0].text
print(generated_text)
```

vLLM also supports OpenAI-compatible serving. See the [documentation](https://docs.vllm.ai/en/latest/) for more details.
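
For example, after starting a server with `vllm serve duydq12/Qwen2.5-Coder-7B-Instruct-FP8-dynamic`, any OpenAI client can query it. A minimal sketch, assuming the server is running on its default port 8000 and the `openai` Python package is installed:

```python
from openai import OpenAI

# vLLM's OpenAI-compatible server does not require a real API key by default.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="duydq12/Qwen2.5-Coder-7B-Instruct-FP8-dynamic",
    messages=[{"role": "user", "content": "Write a Python function that reverses a string."}],
    temperature=0.6,
    max_tokens=256,
)
print(response.choices[0].message.content)
```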

## Creation

<details>
<summary>Creation details</summary>
This model was created with [llm-compressor](https://github.com/vllm-project/llm-compressor) by running the code snippet below.

```python
from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor.transformers import oneshot
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load model and tokenizer
model_stub = "Qwen/Qwen2.5-Coder-7B-Instruct"
model_name = model_stub.split("/")[-1]

model = AutoModelForCausalLM.from_pretrained(model_stub, torch_dtype="auto", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_stub)

# Configure the quantization algorithm and scheme
recipe = QuantizationModifier(
    ignore=["lm_head"],
    targets="Linear",
    scheme="FP8_DYNAMIC",
)

# Apply quantization
oneshot(
    model=model,
    recipe=recipe,
)

# Save to disk in compressed-tensors format
save_path = model_name + "-FP8-dynamic"
model.save_pretrained(save_path)
tokenizer.save_pretrained(save_path)
print(f"Model and tokenizer saved to: {save_path}")
```
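
As a quick sanity check, llm-compressor records the applied scheme in the checkpoint's `config.json` under `quantization_config`. A minimal sketch for inspecting it (the path assumes the `save_path` produced by the snippet above):

```python
import json
import os

save_path = "Qwen2.5-Coder-7B-Instruct-FP8-dynamic"  # from the snippet above

with open(os.path.join(save_path, "config.json")) as f:
    config = json.load(f)

# Prints the compressed-tensors quantization config written by llm-compressor.
print(json.dumps(config.get("quantization_config", {}), indent=2))
```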
</details>

## Evaluation

Evaluation results are private.

### Accuracy

Accuracy results are private.