Instructions for using OpenPipe/Deductive-Reasoning-Qwen-32B with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
  - Transformers
How to use OpenPipe/Deductive-Reasoning-Qwen-32B with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="OpenPipe/Deductive-Reasoning-Qwen-32B")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("OpenPipe/Deductive-Reasoning-Qwen-32B")
model = AutoModelForCausalLM.from_pretrained("OpenPipe/Deductive-Reasoning-Qwen-32B")
messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```

- Inference
- Notebooks
  - Google Colab
  - Kaggle
- Local Apps
  - vLLM
How to use OpenPipe/Deductive-Reasoning-Qwen-32B with vLLM:
Install from pip and serve the model:
```sh
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "OpenPipe/Deductive-Reasoning-Qwen-32B"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "OpenPipe/Deductive-Reasoning-Qwen-32B",
        "messages": [
            {
                "role": "user",
                "content": "What is the capital of France?"
            }
        ]
    }'
```
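Since the server speaks the OpenAI-compatible API, you can also call it from the official `openai` Python client instead of curl. A minimal sketch, assuming the server is running as above (the `api_key` value is a placeholder; vLLM only validates it if the server was started with `--api-key`):

```python
from openai import OpenAI

# Point the client at the local vLLM server instead of api.openai.com.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="OpenPipe/Deductive-Reasoning-Qwen-32B",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)
```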
Use Docker:

```sh
docker model run hf.co/OpenPipe/Deductive-Reasoning-Qwen-32B
```
  - SGLang
How to use OpenPipe/Deductive-Reasoning-Qwen-32B with SGLang:
Install from pip and serve the model:
```sh
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
    --model-path "OpenPipe/Deductive-Reasoning-Qwen-32B" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "OpenPipe/Deductive-Reasoning-Qwen-32B",
        "messages": [
            {
                "role": "user",
                "content": "What is the capital of France?"
            }
        ]
    }'
```

Use Docker images:
```sh
docker run --gpus all \
    --shm-size 32g \
    -p 30000:30000 \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    --env "HF_TOKEN=<secret>" \
    --ipc=host \
    lmsysorg/sglang:latest \
    python3 -m sglang.launch_server \
    --model-path "OpenPipe/Deductive-Reasoning-Qwen-32B" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "OpenPipe/Deductive-Reasoning-Qwen-32B",
        "messages": [
            {
                "role": "user",
                "content": "What is the capital of France?"
            }
        ]
    }'
```

  - Docker Model Runner
How to use OpenPipe/Deductive-Reasoning-Qwen-32B with Docker Model Runner:
```sh
docker model run hf.co/OpenPipe/Deductive-Reasoning-Qwen-32B
```
llama.cpp Fixes (to GGUF), and System Prompt to Invoke "Thinking"
To repair the model repo for use with llama.cpp / to create GGUFs:
- Rename all model safetensors files, removing the "ft-" prefix.
- Fix "model.safetensors.index.json": remove the "ft-" prefix from all entries (a simple search/replace in Notepad works).
The 14B model likely has the same issue (?), since it uses the same file format. A scripted version of both steps is sketched below.
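The same two fixes, as a hedged Python sketch: it assumes a local clone of the model repo, shard filenames carrying the stray "ft-" prefix, and the standard `weight_map` layout inside `model.safetensors.index.json`.

```python
import json
from pathlib import Path

repo = Path(".")  # path to your local clone of the model repo

# Step 1: strip the "ft-" prefix from every "ft-*.safetensors" shard filename.
for shard in repo.glob("ft-*.safetensors"):
    shard.rename(repo / shard.name.removeprefix("ft-"))

# Step 2: rewrite the index so every weight_map entry points at the renamed files.
index_path = repo / "model.safetensors.index.json"
index = json.loads(index_path.read_text())
index["weight_map"] = {
    tensor: filename.removeprefix("ft-")
    for tensor, filename in index["weight_map"].items()
}
index_path.write_text(json.dumps(index, indent=2))
```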
Operation:
Tested the Q2_K quant in LM Studio with this system prompt:
You are a deep thinking AI, you may use extremely long chains of thought to deeply consider the problem and deliberate with yourself via systematic reasoning processes to help come to a correct solution prior to answering. You should enclose your thoughts and internal monologue inside <think> </think> tags, and then provide your solution or response to the problem.
Seemed to work well with the model (using the Jinja template).
Higher temperatures seemed to invoke more reasoning.
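When driving the model programmatically, a small helper can split the reply into the monologue and the final answer. A minimal sketch, assuming the system prompt above so the model wraps its reasoning in `<think>` tags (the `split_thinking` helper is hypothetical, not part of any library):

```python
import re

def split_thinking(reply: str) -> tuple[str, str]:
    """Return (thoughts, answer) from a reply that may contain a <think> block."""
    match = re.search(r"<think>(.*?)</think>", reply, flags=re.DOTALL)
    if match is None:
        return "", reply.strip()  # no thinking block emitted
    return match.group(1).strip(), reply[match.end():].strip()

thoughts, answer = split_thinking("<think>2 + 2 = 4.</think>The answer is 4.")
print(answer)  # -> The answer is 4.
```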
OpenPipe:
This issue will severely limit the reach of your model.
Quanters like mradermacher will not pick it up, and likewise "GGUF my repo" will crash and burn.
After it (and the 14B?) is fixed, submit a ticket at mradermacher's repo to auto-quant the model to GGUF.
They will create the 32B in GGUF and GGUF-imatrix formats (and the 14B too).