Instructions for using YOYO-AI/QwQ-coder-32B with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use YOYO-AI/QwQ-coder-32B with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="YOYO-AI/QwQ-coder-32B")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("YOYO-AI/QwQ-coder-32B")
model = AutoModelForCausalLM.from_pretrained("YOYO-AI/QwQ-coder-32B")
messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```
For GPUs with limited memory, see the quantized-loading sketch after the notebook links below.
- Notebooks
- Google Colab
- Kaggle
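A 32B-parameter model needs roughly 64 GB of memory in bf16, so the Transformers snippets above may not fit on a single consumer GPU. Below is a minimal sketch of 4-bit loading with `bitsandbytes`; the quantization settings are illustrative assumptions, not values recommended by the model authors.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Illustrative 4-bit quantization config; requires the bitsandbytes package.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained("YOYO-AI/QwQ-coder-32B")
model = AutoModelForCausalLM.from_pretrained(
    "YOYO-AI/QwQ-coder-32B",
    quantization_config=bnb_config,
    device_map="auto",  # requires accelerate; spreads layers across devices
)
```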
- Local Apps
- vLLM
How to use YOYO-AI/QwQ-coder-32B with vLLM:
Install from pip and serve the model
```bash
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "YOYO-AI/QwQ-coder-32B"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "YOYO-AI/QwQ-coder-32B",
        "messages": [
            {
                "role": "user",
                "content": "What is the capital of France?"
            }
        ]
    }'
```
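Because vLLM serves an OpenAI-compatible API, you can also call the local server from Python with the official `openai` client instead of curl. A minimal sketch, assuming the server started above is listening on port 8000 (vLLM does not check the API key by default, so any placeholder works):
```python
from openai import OpenAI

# Point the OpenAI client at the local vLLM server started above.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

completion = client.chat.completions.create(
    model="YOYO-AI/QwQ-coder-32B",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(completion.choices[0].message.content)
```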
- SGLang
How to use YOYO-AI/QwQ-coder-32B with SGLang:
Install from pip and serve the model
```bash
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
    --model-path "YOYO-AI/QwQ-coder-32B" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "YOYO-AI/QwQ-coder-32B",
        "messages": [
            {
                "role": "user",
                "content": "What is the capital of France?"
            }
        ]
    }'
```
Use Docker images
```bash
docker run --gpus all \
    --shm-size 32g \
    -p 30000:30000 \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    --env "HF_TOKEN=<secret>" \
    --ipc=host \
    lmsysorg/sglang:latest \
    python3 -m sglang.launch_server \
    --model-path "YOYO-AI/QwQ-coder-32B" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "YOYO-AI/QwQ-coder-32B",
        "messages": [
            {
                "role": "user",
                "content": "What is the capital of France?"
            }
        ]
    }'
```
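Since the SGLang server exposes the same OpenAI-compatible `/v1/chat/completions` route shown in the curl examples, it can also be called from Python. A minimal sketch using the `requests` library, with the port and model name matching the server commands above:
```python
import requests

# Assumes the SGLang server started above is listening on port 30000.
response = requests.post(
    "http://localhost:30000/v1/chat/completions",
    json={
        "model": "YOYO-AI/QwQ-coder-32B",
        "messages": [
            {"role": "user", "content": "What is the capital of France?"}
        ],
    },
)
print(response.json()["choices"][0]["message"]["content"])
```
- Docker Model Runner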
How to use YOYO-AI/QwQ-coder-32B with Docker Model Runner:
```bash
docker model run hf.co/YOYO-AI/QwQ-coder-32B
```
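Docker Model Runner also exposes an OpenAI-compatible endpoint. The sketch below is hedged: the host port (12434), the `/engines/v1` path, and the enable command follow Docker's documentation for TCP host access, but they are assumptions that may differ in your setup.
```python
from openai import OpenAI

# Assumed Docker Model Runner endpoint; enable TCP host access first, e.g.:
#   docker desktop enable model-runner --tcp 12434
client = OpenAI(base_url="http://localhost:12434/engines/v1", api_key="docker")

completion = client.chat.completions.create(
    model="hf.co/YOYO-AI/QwQ-coder-32B",  # assumed to match the pulled model name
    messages=[{"role": "user", "content": "Who are you?"}],
)
print(completion.choices[0].message.content)
```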
Short context? Why?
Both parent models have a longer context, up to 128k, but this one is only 32k, which is really disappointing. Is it possible to fix that?
Just re-checked: Qwen says that both QwQ and Coder-Instruct have 128k context. But you mentioned just Coder (no Instruct?). Actually, the non-Instruct version also has 128k, so this really shouldn't be a problem.
@d00mus The problem has been solved. Thank you for your feedback. The context has been changed to 128K!
@mradermacher Now that the context-length problem has been solved, could you please provide the quantized version of the model again? Thank you so much for your help!
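For anyone wanting to confirm the fix discussed above, the configured context window can be checked directly from the model config without downloading the weights. A minimal sketch; `max_position_embeddings` is the standard field for Qwen2-style models, and 131072 corresponds to 128K:
```python
from transformers import AutoConfig

# Fetches only config.json, not the model weights.
config = AutoConfig.from_pretrained("YOYO-AI/QwQ-coder-32B")
print(config.max_position_embeddings)  # should print 131072 (128K) after the fix
```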