Instructions for using QuantFactory/Qwen2.5-7B-Instruct-1M-GGUF with libraries, inference providers, notebooks, and local apps. Follow the sections below to get started.
- Libraries
- Transformers
How to use QuantFactory/Qwen2.5-7B-Instruct-1M-GGUF with Transformers:
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="QuantFactory/Qwen2.5-7B-Instruct-1M-GGUF")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)

# Load model directly
from transformers import AutoModel

model = AutoModel.from_pretrained("QuantFactory/Qwen2.5-7B-Instruct-1M-GGUF", dtype="auto")
- llama-cpp-python
How to use QuantFactory/Qwen2.5-7B-Instruct-1M-GGUF with llama-cpp-python:
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="QuantFactory/Qwen2.5-7B-Instruct-1M-GGUF",
    filename="Qwen2.5-7B-Instruct-1M.Q2_K.gguf",
)
llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "What is the capital of France?"}
    ]
)
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- llama.cpp
How to use QuantFactory/Qwen2.5-7B-Instruct-1M-GGUF with llama.cpp:
Install from brew
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf QuantFactory/Qwen2.5-7B-Instruct-1M-GGUF:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf QuantFactory/Qwen2.5-7B-Instruct-1M-GGUF:Q4_K_M
Install from WinGet (Windows)
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf QuantFactory/Qwen2.5-7B-Instruct-1M-GGUF:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf QuantFactory/Qwen2.5-7B-Instruct-1M-GGUF:Q4_K_M
Use pre-built binary
# Download pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf QuantFactory/Qwen2.5-7B-Instruct-1M-GGUF:Q4_K_M

# Run inference directly in the terminal:
./llama-cli -hf QuantFactory/Qwen2.5-7B-Instruct-1M-GGUF:Q4_K_M
Build from source code
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf QuantFactory/Qwen2.5-7B-Instruct-1M-GGUF:Q4_K_M

# Run inference directly in the terminal:
./build/bin/llama-cli -hf QuantFactory/Qwen2.5-7B-Instruct-1M-GGUF:Q4_K_M
Use Docker
docker model run hf.co/QuantFactory/Qwen2.5-7B-Instruct-1M-GGUF:Q4_K_M
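Whichever install option you choose, llama-server exposes an OpenAI-compatible API, by default on port 8080. Below is a minimal sketch of calling it from Python with the requests library; the model field is illustrative, since llama-server serves the single model it was started with.

# Query the local llama-server (default port 8080) via its OpenAI-compatible API.
import requests

resp = requests.post(
    "http://localhost:8080/v1/chat/completions",
    json={
        "model": "QuantFactory/Qwen2.5-7B-Instruct-1M-GGUF:Q4_K_M",
        "messages": [{"role": "user", "content": "Who are you?"}],
    },
)
print(resp.json()["choices"][0]["message"]["content"])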
- LM Studio
- Jan
- vLLM
How to use QuantFactory/Qwen2.5-7B-Instruct-1M-GGUF with vLLM:
Install from pip and serve model
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "QuantFactory/Qwen2.5-7B-Instruct-1M-GGUF"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "QuantFactory/Qwen2.5-7B-Instruct-1M-GGUF",
        "messages": [
            {"role": "user", "content": "What is the capital of France?"}
        ]
    }'
Use Docker
docker model run hf.co/QuantFactory/Qwen2.5-7B-Instruct-1M-GGUF:Q4_K_M
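The same request can be made from Python; here is a sketch using the requests library against the server started above (assumes the default port 8000).

# Python equivalent of the curl call above (vLLM's OpenAI-compatible API).
import requests

resp = requests.post(
    "http://localhost:8000/v1/chat/completions",
    json={
        "model": "QuantFactory/Qwen2.5-7B-Instruct-1M-GGUF",
        "messages": [{"role": "user", "content": "What is the capital of France?"}],
    },
)
print(resp.json()["choices"][0]["message"]["content"])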
- SGLang
How to use QuantFactory/Qwen2.5-7B-Instruct-1M-GGUF with SGLang:
Install from pip and serve model
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
    --model-path "QuantFactory/Qwen2.5-7B-Instruct-1M-GGUF" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "QuantFactory/Qwen2.5-7B-Instruct-1M-GGUF",
        "messages": [
            {"role": "user", "content": "What is the capital of France?"}
        ]
    }'
Use Docker images
docker run --gpus all \
    --shm-size 32g \
    -p 30000:30000 \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    --env "HF_TOKEN=<secret>" \
    --ipc=host \
    lmsysorg/sglang:latest \
    python3 -m sglang.launch_server \
    --model-path "QuantFactory/Qwen2.5-7B-Instruct-1M-GGUF" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "QuantFactory/Qwen2.5-7B-Instruct-1M-GGUF",
        "messages": [
            {"role": "user", "content": "What is the capital of France?"}
        ]
    }'
- Ollama
How to use QuantFactory/Qwen2.5-7B-Instruct-1M-GGUF with Ollama:
ollama run hf.co/QuantFactory/Qwen2.5-7B-Instruct-1M-GGUF:Q4_K_M
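Once the model has been pulled, Ollama also exposes a local REST API on port 11434. Below is a minimal sketch using Python's requests library; the payload shape follows Ollama's /api/chat endpoint.

# Call the model through Ollama's local REST API (default port 11434).
import requests

resp = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "hf.co/QuantFactory/Qwen2.5-7B-Instruct-1M-GGUF:Q4_K_M",
        "messages": [{"role": "user", "content": "What is the capital of France?"}],
        "stream": False,
    },
)
print(resp.json()["message"]["content"])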
- Unsloth Studio
How to use QuantFactory/Qwen2.5-7B-Instruct-1M-GGUF with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
curl -fsSL https://unsloth.ai/install.sh | sh

# Run Unsloth Studio:
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for QuantFactory/Qwen2.5-7B-Instruct-1M-GGUF to start chatting
Install Unsloth Studio (Windows)
irm https://unsloth.ai/install.ps1 | iex

# Run Unsloth Studio:
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for QuantFactory/Qwen2.5-7B-Instruct-1M-GGUF to start chatting
Use Hugging Face Spaces for Unsloth
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for QuantFactory/Qwen2.5-7B-Instruct-1M-GGUF to start chatting
- Pi
How to use QuantFactory/Qwen2.5-7B-Instruct-1M-GGUF with Pi:
Start the llama.cpp server
# Install llama.cpp:
brew install llama.cpp

# Start a local OpenAI-compatible server:
llama-server -hf QuantFactory/Qwen2.5-7B-Instruct-1M-GGUF:Q4_K_M
Configure the model in Pi
# Install Pi:
npm install -g @mariozechner/pi-coding-agent

# Add to ~/.pi/agent/models.json:
{
    "providers": {
        "llama-cpp": {
            "baseUrl": "http://localhost:8080/v1",
            "api": "openai-completions",
            "apiKey": "none",
            "models": [
                { "id": "QuantFactory/Qwen2.5-7B-Instruct-1M-GGUF:Q4_K_M" }
            ]
        }
    }
}
Run Pi
# Start Pi in your project directory:
pi
- Hermes Agent
How to use QuantFactory/Qwen2.5-7B-Instruct-1M-GGUF with Hermes Agent:
Start the llama.cpp server
# Install llama.cpp:
brew install llama.cpp

# Start a local OpenAI-compatible server:
llama-server -hf QuantFactory/Qwen2.5-7B-Instruct-1M-GGUF:Q4_K_M
Configure Hermes
# Install Hermes:
curl -fsSL https://hermes-agent.nousresearch.com/install.sh | bash
hermes setup

# Point Hermes at the local server:
hermes config set model.provider custom
hermes config set model.base_url http://127.0.0.1:8080/v1
hermes config set model.default QuantFactory/Qwen2.5-7B-Instruct-1M-GGUF:Q4_K_M
Run Hermes
hermes
- Docker Model Runner
How to use QuantFactory/Qwen2.5-7B-Instruct-1M-GGUF with Docker Model Runner:
docker model run hf.co/QuantFactory/Qwen2.5-7B-Instruct-1M-GGUF:Q4_K_M
- Lemonade
How to use QuantFactory/Qwen2.5-7B-Instruct-1M-GGUF with Lemonade:
Pull the model
# Download Lemonade from https://lemonade-server.ai/
lemonade pull QuantFactory/Qwen2.5-7B-Instruct-1M-GGUF:Q4_K_M
Run and chat with the model
lemonade run user.Qwen2.5-7B-Instruct-1M-GGUF-Q4_K_M
List all available models
lemonade list
QuantFactory/Qwen2.5-7B-Instruct-1M-GGUF
This is a quantized version of Qwen/Qwen2.5-7B-Instruct-1M, created using llama.cpp.
Original Model Card
Qwen2.5-7B-Instruct-1M
Introduction
Qwen2.5-1M is the long-context version of the Qwen2.5 series models, supporting a context length of up to 1M tokens. Compared to the Qwen2.5 128K version, Qwen2.5-1M demonstrates significantly improved performance in handling long-context tasks while maintaining its capability in short tasks.
The model has the following features:
- Type: Causal Language Models
- Training Stage: Pretraining & Post-training
- Architecture: transformers with RoPE, SwiGLU, RMSNorm, and Attention QKV bias
- Number of Parameters: 7.61B
- Number of Parameters (Non-Embedding): 6.53B
- Number of Layers: 28
- Number of Attention Heads (GQA): 28 for Q and 4 for KV
- Context Length: Full 1,010,000 tokens, with generation up to 8,192 tokens
- We recommend deploying with our custom vLLM, which introduces sparse attention and length extrapolation methods to ensure efficiency and accuracy for long-context tasks. For specific guidance, refer to the Processing Ultra Long Texts section below.
- You can also use the previous framework that supports Qwen2.5 for inference, but accuracy degradation may occur for sequences exceeding 262,144 tokens.
For more details, please refer to our blog, GitHub, Technical Report, and Documentation.
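As a quick way to confirm the architecture numbers listed above, you can inspect the model config without downloading the weights. A minimal sketch (assumes transformers is installed and the Hugging Face Hub is reachable):

# Load only the config to check the architecture details listed above.
from transformers import AutoConfig

config = AutoConfig.from_pretrained("Qwen/Qwen2.5-7B-Instruct-1M")
print(config.num_hidden_layers)    # expected: 28 layers
print(config.num_attention_heads)  # expected: 28 query heads
print(config.num_key_value_heads)  # expected: 4 KV heads (GQA)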
Requirements
The code for Qwen2.5 is included in the latest Hugging Face transformers, and we advise you to use the latest version of transformers.
With transformers<4.37.0, you will encounter the following error:
KeyError: 'qwen2'
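A minimal sketch of a version guard you can run before loading the model (the packaging library ships as a transformers dependency):

# Fail fast if the installed transformers predates Qwen2 support.
from packaging import version
import transformers

if version.parse(transformers.__version__) < version.parse("4.37.0"):
    raise RuntimeError(
        f"transformers {transformers.__version__} is too old for Qwen2.5; "
        "run: pip install -U transformers"
    )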
Quickstart
The following code snippet uses apply_chat_template to show you how to load the tokenizer and model and how to generate content.
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "Qwen/Qwen2.5-7B-Instruct-1M"
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
prompt = "Give me a short introduction to large language model."
messages = [
{"role": "system", "content": "You are Qwen, created by Alibaba Cloud. You are a helpful assistant."},
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
generated_ids = model.generate(
**model_inputs,
max_new_tokens=512
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
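The decoded answer ends up in response; print it to display the result. If you prefer to see tokens as they are produced, a small optional variant (not part of the original Quickstart) uses transformers' TextStreamer with the same model, tokenizer, and model_inputs:

# Print the final answer, then optionally stream a fresh generation token-by-token.
print(response)

from transformers import TextStreamer

streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
model.generate(**model_inputs, max_new_tokens=512, streamer=streamer)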
Processing Ultra Long Texts
To enhance processing accuracy and efficiency for long sequences, we have developed an advanced inference framework based on vLLM, incorporating sparse attention and length extrapolation. This approach significantly improves model generation performance for sequences exceeding 256K tokens and achieves a 3 to 7 times speedup for sequences up to 1M tokens.
Here we provide step-by-step instructions for deploying the Qwen2.5-1M models with our framework.
1. System Preparation
To achieve the best performance, we recommend using GPUs with Ampere or Hopper architecture, which support optimized kernels.
Ensure your system meets the following requirements:
- CUDA Version: 12.1 or 12.3
- Python Version: >=3.9 and <=3.12
VRAM Requirements:
- For processing 1 million-token sequences:
- Qwen2.5-7B-Instruct-1M: At least 120GB VRAM (total across GPUs).
- Qwen2.5-14B-Instruct-1M: At least 320GB VRAM (total across GPUs).
If your GPUs do not have sufficient VRAM, you can still use Qwen2.5-1M for shorter tasks.
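As a rough sanity check on these figures, the KV cache dominates memory at the full context length. Below is a back-of-the-envelope sketch using the architecture numbers above; head_dim = 128 is an assumption (hidden size 3584 divided by 28 attention heads), and the official figures are higher because they must also cover weights, activations, and framework overhead.

# Back-of-the-envelope KV-cache estimate for Qwen2.5-7B-Instruct-1M.
num_layers = 28        # from the model card
num_kv_heads = 4       # GQA KV heads
head_dim = 128         # assumed: hidden size 3584 / 28 attention heads
bytes_per_elem = 2     # bf16
context_len = 1_010_000

# K and V caches per token: 2 * layers * kv_heads * head_dim * bytes
kv_gb = 2 * num_layers * num_kv_heads * head_dim * bytes_per_elem * context_len / 1e9
weights_gb = 7.61e9 * bytes_per_elem / 1e9

print(f"KV cache at full context: ~{kv_gb:.0f} GB")      # ~58 GB
print(f"Model weights (bf16):     ~{weights_gb:.0f} GB")  # ~15 GB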
2. Install Dependencies
For now, you need to clone the vLLM repository from our custom branch and install it manually. We are working on getting our branch merged into the main vLLM project.
git clone -b dev/dual-chunk-attn git@github.com:QwenLM/vllm.git
cd vllm
pip install -e . -v
3. Launch vLLM
vLLM supports offline inference as well as launching an OpenAI-compatible server.
Example of Offline Inference
from transformers import AutoTokenizer
from vllm import LLM, SamplingParams
# Initialize the tokenizer
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-7B-Instruct-1M")
# Pass the default decoding hyperparameters of Qwen2.5-7B-Instruct
# max_tokens is the maximum length for generation.
sampling_params = SamplingParams(temperature=0.7, top_p=0.8, repetition_penalty=1.05, max_tokens=512)
# Input the model name or path. See below for parameter explanation (after the example of openai-like server).
llm = LLM(model="Qwen/Qwen2.5-7B-Instruct-1M",
tensor_parallel_size=4,
max_model_len=1010000,
enable_chunked_prefill=True,
max_num_batched_tokens=131072,
enforce_eager=True,
# quantization="fp8", # Enabling FP8 quantization for model weights can reduce memory usage.
)
# Prepare your prompts
prompt = "Tell me something about large language models."
messages = [
{"role": "system", "content": "You are Qwen, created by Alibaba Cloud. You are a helpful assistant."},
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
# generate outputs
outputs = llm.generate([text], sampling_params)
# Print the outputs.
for output in outputs:
prompt = output.prompt
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
Example of OpenAI-like Server
vllm serve Qwen/Qwen2.5-7B-Instruct-1M \
--tensor-parallel-size 4 \
--max-model-len 1010000 \
--enable-chunked-prefill --max-num-batched-tokens 131072 \
--enforce-eager \
--max-num-seqs 1
# --quantization fp8 # Enabling FP8 quantization for model weights can reduce memory usage.
Then you can use curl or python to interact with the deployed model.
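For example, with the openai Python client pointed at the local server (a sketch; vLLM does not validate the API key, so any placeholder works):

# Query the OpenAI-compatible vLLM server with the openai client.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
completion = client.chat.completions.create(
    model="Qwen/Qwen2.5-7B-Instruct-1M",
    messages=[
        {"role": "system", "content": "You are Qwen, created by Alibaba Cloud. You are a helpful assistant."},
        {"role": "user", "content": "Tell me something about large language models."},
    ],
    temperature=0.7,
    top_p=0.8,
    max_tokens=512,
)
print(completion.choices[0].message.content)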
Parameter Explanations:
- --tensor-parallel-size: Set to the number of GPUs you are using. Max 4 GPUs for the 7B model, and 8 GPUs for the 14B model.
- --max-model-len: Defines the maximum input sequence length. Reduce this value if you encounter Out of Memory issues.
- --max-num-batched-tokens: Sets the chunk size in Chunked Prefill. A smaller value reduces activation memory usage but may slow down inference. We recommend 131072 for optimal performance.
- --max-num-seqs: Limits the number of concurrent sequences processed.
You can also refer to our Documentation for usage of vLLM.
Troubleshooting:
- Encountering the error: "The model's max sequence length (xxxxx) is larger than the maximum number of tokens that can be stored in the KV cache."
  The VRAM reserved for the KV cache is insufficient. Consider reducing max_model_len or increasing tensor_parallel_size. Alternatively, you can reduce max_num_batched_tokens, although this may significantly slow down inference.
- Encountering the error: "torch.OutOfMemoryError: CUDA out of memory."
  The VRAM reserved for activation weights is insufficient. You can try setting gpu_memory_utilization to 0.85 or lower, but be aware that this might reduce the VRAM available for the KV cache.
- Encountering the error: "Input prompt (xxxxx tokens) + lookahead slots (0) is too long and exceeds the capacity of the block manager."
  The input is too lengthy. Consider using a shorter sequence or increasing max_model_len.
Evaluation & Performance
Detailed evaluation results are reported in this 📑 blog and our technical report.
Citation
If you find our work helpful, feel free to give us a cite.
@misc{qwen2.5-1m,
title = {Qwen2.5-1M: Deploy Your Own Qwen with Context Length up to 1M Tokens},
url = {https://qwenlm.github.io/blog/qwen2.5-1m/},
author = {Qwen Team},
month = {January},
year = {2025}
}
@article{qwen2.5,
title={Qwen2.5-1M Technical Report},
author={An Yang and Bowen Yu and Chengyuan Li and Dayiheng Liu and Fei Huang and Haoyan Huang and Jiandong Jiang and Jianhong Tu and Jianwei Zhang and Jingren Zhou and Junyang Lin and Kai Dang and Kexin Yang and Le Yu and Mei Li and Minmin Sun and Qin Zhu and Rui Men and Tao He and Weijia Xu and Wenbiao Yin and Wenyuan Yu and Xiafei Qiu and Xingzhang Ren and Xinlong Yang and Yong Li and Zhiying Xu and Zipeng Zhang},
journal={arXiv preprint arXiv:2501.15383},
year={2025}
}
Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit
Model tree for QuantFactory/Qwen2.5-7B-Instruct-1M-GGUF
Base model: Qwen/Qwen2.5-7B