Instructions for using qihoo360/Light-R1-14B-DS with libraries, notebooks, and local apps.
- Libraries
- Transformers
How to use qihoo360/Light-R1-14B-DS with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="qihoo360/Light-R1-14B-DS")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("qihoo360/Light-R1-14B-DS")
model = AutoModelForCausalLM.from_pretrained("qihoo360/Light-R1-14B-DS")
messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```

- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use qihoo360/Light-R1-14B-DS with vLLM:
Install from pip and serve model
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "qihoo360/Light-R1-14B-DS"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "qihoo360/Light-R1-14B-DS",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```

Use Docker
```shell
docker model run hf.co/qihoo360/Light-R1-14B-DS
```
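The same OpenAI-compatible endpoint can also be called from Python. A minimal stdlib-only sketch, assuming a vLLM server is already running on localhost:8000 as started above (the `chat` helper name is ours, not part of vLLM):

```python
import json
import urllib.request

# OpenAI-compatible chat-completions payload, mirroring the curl example.
payload = {
    "model": "qihoo360/Light-R1-14B-DS",
    "messages": [{"role": "user", "content": "What is the capital of France?"}],
}

def chat(url="http://localhost:8000/v1/chat/completions"):
    """POST the payload to a running vLLM server and return the reply text."""
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

The same payload shape works against the SGLang server below; only the port changes.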
- SGLang
How to use qihoo360/Light-R1-14B-DS with SGLang:
Install from pip and serve model
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "qihoo360/Light-R1-14B-DS" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "qihoo360/Light-R1-14B-DS",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```

Use Docker images
```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "qihoo360/Light-R1-14B-DS" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "qihoo360/Light-R1-14B-DS",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```

- Docker Model Runner
How to use qihoo360/Light-R1-14B-DS with Docker Model Runner:
```shell
docker model run hf.co/qihoo360/Light-R1-14B-DS
```
Light-R1-14B-DS: SOTA 14B Math Model with RL
| Model | Trained From | Release Date | AIME24 | AIME25 | GPQA |
|---|---|---|---|---|---|
| OpenThinker-32B | Qwen2.5-32B-Instruct | 25.2.12 | 66.0 | 50.9 | 61.6 |
| DeepSeek-R1-Distill-Qwen-14B | Qwen2.5-14B | 25.1.20 | 69.7 | 50.2 | 59.1 |
| Light-R1-14B-DS (ours) 🤗 | DeepSeek-R1-Distill-Qwen-14B | 25.3.12 | 74.0 | 60.2 | 61.7 |
| Light-R1-32B (ours) 🤗 | Qwen2.5-32B-Instruct | 25.3.4 | 76.6 | 64.6 | 61.8 |
We introduce Light-R1-14B-DS, the first open-source model to successfully apply RL to an already long-COT finetuned model of similar size under a light compute budget. Light-R1-14B-DS is also the state-of-the-art 14B math model, with AIME24 & AIME25 scores of 74.0 and 60.2, outperforming many 32B models.
Recent RL works have successfully applied RL to base models (usually carrying -zero in their names), to 1.5B models (where response length interestingly first decreases and then increases), or to QwQ-32B, presumably at prohibitively heavy compute cost.
Light-R1-14B-DS marks one step further in reproducing and democratizing DeepSeek-R1. During RL training we finally observed the expected behavior: a simultaneous increase in response length and reward score on an already long-COT finetuned model (see the wandb log).
Originating from DeepSeek-R1-Distill-Qwen-14B, Light-R1-14B-DS underwent our long-COT RL post-training and achieved a new state of the art among 14B math models: 74.0 and 60.2 on AIME24 & AIME25 respectively. Light-R1-14B-DS also performs well on GPQA without any GPQA-specific training. We are excited to release this model along with the technical report, and will continue to refine our long-COT RL post-training.
Usage
Same as DeepSeek-R1-Distill-Qwen-14B.
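A sketch of what that guidance implies in practice. The sampling values below are DeepSeek-R1's published recommendations for its distills (temperature around 0.6, top-p 0.95, no system prompt, final answer in `\boxed{}` for math), assumed to carry over to this model rather than stated in this card:

```python
# Sampling settings carried over from DeepSeek-R1's usage recommendations;
# these values come from the base model's guidance, not from this card.
sampling = {"temperature": 0.6, "top_p": 0.95, "do_sample": True, "max_new_tokens": 32768}

# No system prompt: put all instructions directly in the user turn,
# and for math problems ask for the final answer in \boxed{}.
messages = [
    {
        "role": "user",
        "content": "Solve x^2 - 5x + 6 = 0. Put your final answer in \\boxed{}.",
    }
]

# These can then be passed to the Transformers example above, e.g.:
# outputs = model.generate(**inputs, **sampling)
```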
Data Decontamination
We carefully evaluated data contamination in several open-sourced datasets. While some contamination may be inevitable during pre-training, it is unacceptable in post-training when comparing on benchmarks. MATH-500 is somewhat compromised, with tens of questions that are identical to training data or differ only in their numbers. AIME 24 and 25 remain intact, but special attention is needed when incorporating AIME data up to 2023.
Light-R1 performed thorough decontamination with exact matching (excluding digits) and N-gram (N=32) matching.
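The two checks can be sketched as follows. This is a simplified illustration of that logic, not the project's actual implementation, and the function names are ours: exact matching after dropping digits catches "only numbers changed" duplicates, and any shared word-level 32-gram flags near-verbatim overlap:

```python
import re

def normalize(text: str) -> str:
    """Lowercase, drop digits, and collapse whitespace for exact matching."""
    return re.sub(r"\s+", " ", re.sub(r"\d", "", text.lower())).strip()

def ngrams(text: str, n: int = 32):
    """Set of word-level n-grams of a text (empty if the text is shorter than n words)."""
    words = text.split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def is_contaminated(train_item: str, benchmark_item: str, n: int = 32) -> bool:
    """Flag a training item that exact-matches a benchmark question
    (digits excluded) or shares any word-level n-gram with it."""
    if normalize(train_item) == normalize(benchmark_item):
        return True
    return bool(ngrams(train_item, n) & ngrams(benchmark_item, n))
```

For example, `is_contaminated("Find x if 2x = 10.", "Find x if 3x = 10.")` is true: the two questions differ only in a number, so they collide under digit-excluded exact matching.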
Citation
@misc{lightr1proj,
title={Light-R1: Curriculum SFT, DPO and RL for Long COT from Scratch and Beyond},
author={Liang Wen and Yunke Cai and Fenrui Xiao and Xin He and Qi An and Zhenyu Duan and Yimin Du and Junchen Liu and Lifu Tang and Xiaowei Lv and Haosheng Zou and Yongchao Deng and Shousheng Jia and Xiangzheng Zhang},
year={2025},
eprint={},
archivePrefix={},
url={https://github.com/Qihoo360/Light-R1},
}