Instructions to use qihoo360/Light-R1-32B-DS with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use qihoo360/Light-R1-32B-DS with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="qihoo360/Light-R1-32B-DS")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("qihoo360/Light-R1-32B-DS")
model = AutoModelForCausalLM.from_pretrained("qihoo360/Light-R1-32B-DS")
messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```

- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use qihoo360/Light-R1-32B-DS with vLLM:
Install from pip and serve model
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "qihoo360/Light-R1-32B-DS"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "qihoo360/Light-R1-32B-DS",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```

Use Docker
```shell
docker model run hf.co/qihoo360/Light-R1-32B-DS
```
- SGLang
How to use qihoo360/Light-R1-32B-DS with SGLang:
Install from pip and serve model
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "qihoo360/Light-R1-32B-DS" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "qihoo360/Light-R1-32B-DS",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```

Use Docker images
```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "qihoo360/Light-R1-32B-DS" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "qihoo360/Light-R1-32B-DS",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```

- Docker Model Runner
How to use qihoo360/Light-R1-32B-DS with Docker Model Runner:
```shell
docker model run hf.co/qihoo360/Light-R1-32B-DS
```
Light-R1-32B-DS: near-SOTA 32B Math Model with Only 3K Data
Paper: https://huggingface.co/papers/2503.10460
| Model | Trained From | Release Date | AIME24 | AIME25 | GPQA |
|---|---|---|---|---|---|
| DeepSeek-R1-Distill-Qwen-32B | Qwen2.5-32B | 2025-01-20 | 72.6 | 54.9 | 62.1 |
| TinyR1-32B-Preview | DeepSeek-R1-Distill-Qwen-32B | 2025-02-25 | 77.1 | 65.9 | 65.0 |
| Light-R1-32B-DS (ours) 🤗 | DeepSeek-R1-Distill-Qwen-32B | 2025-03-12 | 78.1 | 65.9 | 68.0 |
| Light-R1-32B (ours) 🤗 | Qwen2.5-32B-Instruct | 2025-03-04 | 76.6 | 64.6 | 61.8 |
| QwQ-32B | N/A | 2025-03-06 | 78.5 | 69.3 | 67.7 |
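The "near-SOTA" claim can be checked with quick arithmetic on the table above. The sketch below averages each model's AIME24/25 scores (values copied verbatim from the table); the ranking it prints is derived, not stated in the card:

```python
# Average AIME24/25 scores from the table above (values copied verbatim).
scores = {
    "DeepSeek-R1-Distill-Qwen-32B": (72.6, 54.9),
    "TinyR1-32B-Preview": (77.1, 65.9),
    "Light-R1-32B-DS": (78.1, 65.9),
    "Light-R1-32B": (76.6, 64.6),
    "QwQ-32B": (78.5, 69.3),
}
averages = {name: round(sum(pair) / 2, 2) for name, pair in scores.items()}
for name, avg in sorted(averages.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {avg}")
```

On these numbers Light-R1-32B-DS (72.0 average) trails only QwQ-32B (73.9), consistent with "near-SOTA".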
Light-R1-32B-DS is a near-SOTA 32B math model, scoring 78.1 on AIME24 and 65.9 on AIME25.
Starting from DeepSeek-R1-Distill-Qwen-32B, Light-R1-32B-DS was further trained with only the 3K SFT examples we have open-sourced, demonstrating the strong applicability of the released data.
We are excited to release this model along with the technical report.
Usage
Usage is the same as for DeepSeek-R1-Distill-Qwen-32B.
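Like DeepSeek-R1-Distill-Qwen-32B, the model emits its chain of thought before the final answer, with the reasoning typically terminated by a `</think>` tag. A minimal sketch for separating the two (the helper name and the sample string are illustrative, not part of the released code):

```python
def split_reasoning(text: str, marker: str = "</think>"):
    """Split R1-style output into (reasoning, answer).

    If the marker is absent, treat the whole text as the answer.
    """
    if marker in text:
        reasoning, _, answer = text.partition(marker)
        return reasoning.strip(), answer.strip()
    return "", text.strip()

# Fabricated sample output for illustration only.
sample = "Okay, the user asks who I am.</think>I am an AI assistant."
reasoning, answer = split_reasoning(sample)
print(answer)  # I am an AI assistant.
```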
Data Decontamination
We carefully evaluated data contamination in several open-sourced datasets. While some contamination may be inevitable during pre-training, it is unacceptable for post-trained models to be compared on contaminated benchmarks. MATH-500 is somewhat compromised: tens of its questions appear in training data either verbatim or with only the numbers changed. AIME 24 and 25 remain intact, but special attention is required when incorporating AIME data up to 2023.
Light-R1 performed thorough decontamination using exact matching (with digits excluded) and N-gram (N=32) matching.
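The two checks above can be sketched as follows. This is a toy illustration, not the actual Light-R1 pipeline: the `normalize` rules, helper names, and sample questions are assumptions, and the digit-stripped exact match is what catches "only the numbers changed" near-duplicates:

```python
import re

def normalize(text: str) -> str:
    # Exact matching "excluding digits": strip digits, lowercase, collapse spaces.
    text = re.sub(r"\d+", "", text.lower())
    return re.sub(r"\s+", " ", text).strip()

def ngrams(text: str, n: int):
    words = text.split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def is_contaminated(sample: str, benchmark: list[str], n: int = 32) -> bool:
    # Check 1: exact match after removing digits
    # (catches questions that differ only in their numbers).
    norm_bench = {normalize(b) for b in benchmark}
    if normalize(sample) in norm_bench:
        return True
    # Check 2: any shared word N-gram with a benchmark question.
    grams = ngrams(sample.lower(), n)
    return any(grams & ngrams(b.lower(), n) for b in benchmark)

benchmark = ["What is 12 + 34?"]
print(is_contaminated("What is 56 + 78?", benchmark))  # True: digits-stripped exact match
```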
Citation
```
@misc{lightr1proj,
      title={Light-R1: Curriculum SFT, DPO and RL for Long COT from Scratch and Beyond},
      author={Liang Wen and Yunke Cai and Fenrui Xiao and Xin He and Qi An and Zhenyu Duan and Yimin Du and Junchen Liu and Lifu Tang and Xiaowei Lv and Haosheng Zou and Yongchao Deng and Shousheng Jia and Xiangzheng Zhang},
      year={2025},
      url={https://github.com/Qihoo360/Light-R1},
}
```
Model tree for qihoo360/Light-R1-32B-DS
- Base model: deepseek-ai/DeepSeek-R1-Distill-Qwen-32B