Instructions for using prithivMLmods/Qwen-7B-Distill-Reasoner with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use prithivMLmods/Qwen-7B-Distill-Reasoner with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="prithivMLmods/Qwen-7B-Distill-Reasoner")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("prithivMLmods/Qwen-7B-Distill-Reasoner")
model = AutoModelForCausalLM.from_pretrained("prithivMLmods/Qwen-7B-Distill-Reasoner")

messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```
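The pipeline call returns a list with one result dict per input; for chat-style input, recent transformers versions put the full conversation, with the assistant's reply appended, under "generated_text". A minimal sketch of reading the reply back (the generation settings here are illustrative assumptions, not recommendations from the model card):

```python
# Illustrative only: sampling settings are arbitrary examples.
result = pipe(messages, max_new_tokens=256, do_sample=True, temperature=0.7)

# For chat-format input, "generated_text" is the conversation with the
# assistant's reply appended as the last message.
print(result[0]["generated_text"][-1]["content"])
```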
- Inference Providers
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use prithivMLmods/Qwen-7B-Distill-Reasoner with vLLM:
Install from pip and serve the model:
```bash
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "prithivMLmods/Qwen-7B-Distill-Reasoner"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "prithivMLmods/Qwen-7B-Distill-Reasoner",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```
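Once the server is up, any OpenAI-compatible client can talk to it. A minimal sketch using the openai Python package; the base_url, the "EMPTY" api_key, and the local port are assumptions for a default local vLLM deployment, not part of the original instructions:

```python
# Minimal sketch: query the local vLLM server through its
# OpenAI-compatible API. Assumes `pip install openai` and that the
# server started above is listening on localhost:8000.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="prithivMLmods/Qwen-7B-Distill-Reasoner",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)
```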
- SGLang
How to use prithivMLmods/Qwen-7B-Distill-Reasoner with SGLang:
Install from pip and serve the model:
```bash
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "prithivMLmods/Qwen-7B-Distill-Reasoner" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "prithivMLmods/Qwen-7B-Distill-Reasoner",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```
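SGLang serves the same OpenAI-compatible /v1/chat/completions route, so the curl call above translates directly to Python. A minimal sketch using the requests package (an assumption; any HTTP client works):

```python
# Minimal sketch: call the local SGLang server over its
# OpenAI-compatible REST API. Assumes `pip install requests` and the
# server from the previous block listening on localhost:30000.
import requests

payload = {
    "model": "prithivMLmods/Qwen-7B-Distill-Reasoner",
    "messages": [{"role": "user", "content": "What is the capital of France?"}],
}
resp = requests.post("http://localhost:30000/v1/chat/completions", json=payload)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```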
Use Docker images

```bash
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "prithivMLmods/Qwen-7B-Distill-Reasoner" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "prithivMLmods/Qwen-7B-Distill-Reasoner",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```

- Docker Model Runner
How to use prithivMLmods/Qwen-7B-Distill-Reasoner with Docker Model Runner:
```bash
docker model run hf.co/prithivMLmods/Qwen-7B-Distill-Reasoner
```
Qwen-7B-Distill-Reasoner
Qwen-7B-Distill-Reasoner is based on DeepSeek-AI/DeepSeek-R1-Distill-Qwen-7B, a Qwen model distilled by DeepSeek-AI. It has been fine-tuned on long chain-of-thought reasoning data and specialized datasets, focusing on chain-of-thought (CoT) reasoning for problem-solving. The model is optimized for tasks requiring logical reasoning, detailed explanations, and multi-step problem-solving, making it well suited for applications such as instruction-following, text generation, and complex reasoning tasks.
Quickstart with Transformers
The following code snippet uses apply_chat_template to show how to load the tokenizer and model and how to generate content.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "prithivMLmods/Qwen-7B-Distill-Reasoner"

# Load the model and tokenizer; device_map="auto" places weights on the
# available GPU(s) and torch_dtype="auto" picks the checkpoint's dtype.
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "Give me a short introduction to large language models."
messages = [
    {"role": "system", "content": "You are Qwen, created by Alibaba Cloud. You are a helpful assistant."},
    {"role": "user", "content": prompt}
]

# Render the chat template into a plain prompt string.
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512
)
# Strip the prompt tokens so only the newly generated text remains.
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
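For interactive use you may prefer tokens printed as they are generated rather than all at once. A minimal sketch using transformers' built-in TextStreamer, reusing the model, tokenizer, and model_inputs from the quickstart above:

```python
from transformers import TextStreamer

# Stream tokens to stdout as they are produced, reusing `model`,
# `tokenizer`, and `model_inputs` from the quickstart above.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
_ = model.generate(**model_inputs, max_new_tokens=512, streamer=streamer)
```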
Intended Use:
- Instruction-Following: The model excels in understanding and executing detailed instructions, making it ideal for automation systems, virtual assistants, and educational tools.
- Text Generation: It can produce coherent, logically structured, and contextually relevant text for use in content creation, summarization, and report writing.
- Complex Reasoning Tasks: With its fine-tuning for chain-of-thought reasoning, the model is well-suited for multi-step problem-solving, logical deduction, and question-answering tasks.
- Research and Development: It can support researchers and developers in exploring advancements in logical reasoning and fine-tuning methodologies.
- Educational Applications: The model can assist in teaching logical reasoning and problem-solving by generating step-by-step solutions.
Limitations:
- Domain-Specific Knowledge: While fine-tuned on reasoning datasets, the model may lack deep expertise in highly specialized or technical domains.
- Hallucination: Like many large language models, it can generate incorrect or fabricated information, especially when reasoning beyond its training data.
- Bias in Training Data: The model's outputs may reflect biases present in the datasets it was fine-tuned on, which could limit its objectivity in certain contexts.
- Performance on Non-Reasoning Tasks: The model is optimized for chain-of-thought reasoning and may underperform on tasks that require simpler, less structured responses.
- Resource-Intensive: Running the model efficiently requires significant computational resources, which may limit accessibility for smaller-scale deployments.
- Dependence on Input Quality: The model’s performance heavily depends on the clarity and quality of the input provided. Ambiguous or poorly structured prompts may yield suboptimal results.
Open LLM Leaderboard Evaluation Results
Detailed results can be found here! Summarized results can be found here!
| Metric | Value (%) |
|---|---|
| Average | 18.43 |
| IFEval (0-Shot) | 33.96 |
| BBH (3-Shot) | 22.18 |
| MATH Lvl 5 (4-Shot) | 21.15 |
| GPQA (0-shot) | 10.29 |
| MuSR (0-shot) | 2.78 |
| MMLU-PRO (5-shot) | 20.20 |
Model tree for prithivMLmods/Qwen-7B-Distill-Reasoner
Base model: deepseek-ai/DeepSeek-R1-Distill-Qwen-7B
Evaluation results
- IFEval (0-Shot): 33.96 averaged accuracy (Open LLM Leaderboard)
- BBH (3-Shot): 22.18 normalized accuracy, test set (Open LLM Leaderboard)
- MATH Lvl 5 (4-Shot): 21.15 exact match, test set (Open LLM Leaderboard)
- GPQA (0-shot): 10.29 acc_norm (Open LLM Leaderboard)
- MuSR (0-shot): 2.78 acc_norm (Open LLM Leaderboard)
- MMLU-PRO (5-shot): 20.20 accuracy, test set (Open LLM Leaderboard)