Instructions for using Qwen/Qwen2.5-Coder-32B-Instruct with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use Qwen/Qwen2.5-Coder-32B-Instruct with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="Qwen/Qwen2.5-Coder-32B-Instruct")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-Coder-32B-Instruct")
model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-Coder-32B-Instruct")
messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```
- Inference
- HuggingChat
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use Qwen/Qwen2.5-Coder-32B-Instruct with vLLM:
Install from pip and serve the model:
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "Qwen/Qwen2.5-Coder-32B-Instruct"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
	-H "Content-Type: application/json" \
	--data '{
		"model": "Qwen/Qwen2.5-Coder-32B-Instruct",
		"messages": [
			{
				"role": "user",
				"content": "What is the capital of France?"
			}
		]
	}'
```
Use Docker
```shell
docker model run hf.co/Qwen/Qwen2.5-Coder-32B-Instruct
```
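The same request shown with curl above can be made from Python with only the standard library. A minimal sketch, assuming the vLLM server from the previous step is running on localhost:8000; the network call itself is left commented so the payload construction stands on its own:

```python
import json
import urllib.request

def build_chat_request(model, user_content):
    """Build the JSON body expected by the OpenAI-compatible
    /v1/chat/completions endpoint."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_content}],
    }

payload = build_chat_request(
    "Qwen/Qwen2.5-Coder-32B-Instruct",
    "What is the capital of France?",
)
body = json.dumps(payload).encode("utf-8")

req = urllib.request.Request(
    "http://localhost:8000/v1/chat/completions",
    data=body,
    headers={"Content-Type": "application/json"},
)
# Requires the vLLM server above to be running:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```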
- SGLang
How to use Qwen/Qwen2.5-Coder-32B-Instruct with SGLang:
Install from pip and serve the model:
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
	--model-path "Qwen/Qwen2.5-Coder-32B-Instruct" \
	--host 0.0.0.0 \
	--port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
	-H "Content-Type: application/json" \
	--data '{
		"model": "Qwen/Qwen2.5-Coder-32B-Instruct",
		"messages": [
			{
				"role": "user",
				"content": "What is the capital of France?"
			}
		]
	}'
```
Use Docker images

```shell
docker run --gpus all \
	--shm-size 32g \
	-p 30000:30000 \
	-v ~/.cache/huggingface:/root/.cache/huggingface \
	--env "HF_TOKEN=<secret>" \
	--ipc=host \
	lmsysorg/sglang:latest \
	python3 -m sglang.launch_server \
	--model-path "Qwen/Qwen2.5-Coder-32B-Instruct" \
	--host 0.0.0.0 \
	--port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
	-H "Content-Type: application/json" \
	--data '{
		"model": "Qwen/Qwen2.5-Coder-32B-Instruct",
		"messages": [
			{
				"role": "user",
				"content": "What is the capital of France?"
			}
		]
	}'
```
- Docker Model Runner
How to use Qwen/Qwen2.5-Coder-32B-Instruct with Docker Model Runner:
```shell
docker model run hf.co/Qwen/Qwen2.5-Coder-32B-Instruct
```
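Both the vLLM and SGLang servers above expose the same OpenAI-compatible chat-completions API, so the response handling is identical for either. A minimal sketch of pulling the assistant's reply out of a response body; the sample JSON here is hand-written to match that schema, not actual server output:

```python
import json

# Hand-written sample following the OpenAI chat-completions response schema
# (not actual server output), showing where the answer lives in the JSON.
sample = json.dumps({
    "id": "chatcmpl-0",
    "object": "chat.completion",
    "model": "Qwen/Qwen2.5-Coder-32B-Instruct",
    "choices": [
        {
            "index": 0,
            "message": {
                "role": "assistant",
                "content": "The capital of France is Paris.",
            },
            "finish_reason": "stop",
        }
    ],
})

def extract_reply(raw):
    """Pull the assistant message out of a chat-completions response body."""
    return json.loads(raw)["choices"][0]["message"]["content"]

print(extract_reply(sample))
```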
It's a code chat/agent, not an AI coder.
This is a chat agent, and not a very good one. It can't code anything beyond what a child could do, like a calendar app. It's funny that it takes so much space and compute power while adding basically no value. It's not even uncensored: it refuses to help with certain tasks if it deems them inappropriate. We are still in the stone age when it comes to AI coders.
Seems to do fine for me at spitting out Python. Though I have found that it's not drastically better (for my use case) than the 14B, which flies on my hardware (I don't lease cloud time). So I'm saving it for cases where the 14B isn't quite doing the job.

I can't comment much on censorship, since all I do is code, not ask how to make drugs or something silly. That said, there are third-party versions where people have worked to remove the guardrails, and you could always do that yourself if it's that important to you to have an unbiased code-oriented LLM.
It wasn't about drugs or anything evil; I asked about reverse engineering. The point is, the world is still far from having a decent coder AI, at least in the open-source space.
It's not the model, it's just you. The model is fantastic lol. Setting aside its amazing scores on aider-bench (which you probably have no clue about), it literally one-shotted its own interface for github.gg :)
I tend to agree. Performance aside, given my limited hardware (which is why the 14B is now my go-to), it's doing wonderfully for me.
