DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models
Paper: arXiv 2402.03300
How to use divelab/DAPO_E2H-countdown-gaussian_0p5_0p5 with Transformers:
# Use a pipeline as a high-level helper
from transformers import pipeline
pipe = pipeline("text-generation", model="divelab/DAPO_E2H-countdown-gaussian_0p5_0p5")
messages = [
{"role": "user", "content": "Who are you?"},
]
pipe(messages)

# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("divelab/DAPO_E2H-countdown-gaussian_0p5_0p5")
model = AutoModelForCausalLM.from_pretrained("divelab/DAPO_E2H-countdown-gaussian_0p5_0p5")
messages = [
{"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
tokenize=True,
return_dict=True,
return_tensors="pt",
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))

How to use divelab/DAPO_E2H-countdown-gaussian_0p5_0p5 with vLLM:
# Install vLLM from pip:
pip install vllm
# Start the vLLM server:
vllm serve "divelab/DAPO_E2H-countdown-gaussian_0p5_0p5"
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
-H "Content-Type: application/json" \
--data '{
"model": "divelab/DAPO_E2H-countdown-gaussian_0p5_0p5",
"messages": [
{
"role": "user",
"content": "What is the capital of France?"
}
]
}'
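The same request can be issued from Python. The sketch below mirrors the curl call using only the standard library; it assumes the `vllm serve` command above is running on its default port 8000, and the network call itself is left commented out:

```python
import json
import urllib.request

# Chat-completions payload mirroring the curl call above; the model name
# and example prompt are copied from this card.
payload = {
    "model": "divelab/DAPO_E2H-countdown-gaussian_0p5_0p5",
    "messages": [
        {"role": "user", "content": "What is the capital of France?"},
    ],
}
body = json.dumps(payload).encode("utf-8")

# OpenAI-compatible endpoint exposed by `vllm serve` (default port 8000).
request = urllib.request.Request(
    "http://localhost:8000/v1/chat/completions",
    data=body,
    headers={"Content-Type": "application/json"},
)

def chat(req: urllib.request.Request) -> str:
    """Send the request and return the first choice's message content."""
    with urllib.request.urlopen(req) as resp:
        data = json.loads(resp.read().decode("utf-8"))
    return data["choices"][0]["message"]["content"]

# print(chat(request))  # uncomment once the server is up
```

Because the server speaks the OpenAI-compatible API, the official `openai` client can be pointed at the same base URL instead.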
How to use divelab/DAPO_E2H-countdown-gaussian_0p5_0p5 with SGLang:
# Install SGLang from pip:
pip install sglang
# Start the SGLang server:
python3 -m sglang.launch_server \
--model-path "divelab/DAPO_E2H-countdown-gaussian_0p5_0p5" \
--host 0.0.0.0 \
--port 30000
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
-H "Content-Type: application/json" \
--data '{
"model": "divelab/DAPO_E2H-countdown-gaussian_0p5_0p5",
"messages": [
{
"role": "user",
"content": "What is the capital of France?"
}
]
}'

# Or, start the SGLang server with Docker:
docker run --gpus all \
--shm-size 32g \
-p 30000:30000 \
-v ~/.cache/huggingface:/root/.cache/huggingface \
--env "HF_TOKEN=<secret>" \
--ipc=host \
lmsysorg/sglang:latest \
python3 -m sglang.launch_server \
--model-path "divelab/DAPO_E2H-countdown-gaussian_0p5_0p5" \
--host 0.0.0.0 \
--port 30000
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
-H "Content-Type: application/json" \
--data '{
"model": "divelab/DAPO_E2H-countdown-gaussian_0p5_0p5",
"messages": [
{
"role": "user",
"content": "What is the capital of France?"
}
]
}'

How to use divelab/DAPO_E2H-countdown-gaussian_0p5_0p5 with Docker Model Runner:
docker model run hf.co/divelab/DAPO_E2H-countdown-gaussian_0p5_0p5
This model is a fine-tuned version of Qwen/Qwen2.5-1.5B-Instruct on the Countdown dataset. It was trained using E2H (easy-to-hard curriculum reinforcement learning) on top of TRL.
Quick start:
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="divelab/DAPO_E2H-countdown-gaussian_0p5_0p5", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
This model was trained with GRPO, a method introduced in DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models.
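As a minimal sketch of the group-relative advantage at the heart of GRPO (the full objective also includes a clipped policy ratio and a KL penalty): each prompt gets a group of sampled completions, and every completion's reward is normalized by the group's mean and standard deviation instead of a learned value function:

```python
from statistics import mean, pstdev

def group_relative_advantages(rewards: list[float], eps: float = 1e-8) -> list[float]:
    """Normalize each completion's reward by its group's mean and
    standard deviation, yielding per-sample advantages without a critic."""
    mu = mean(rewards)
    sigma = pstdev(rewards)
    return [(r - mu) / (sigma + eps) for r in rewards]

# One prompt, four sampled completions: two correct (reward 1), two not.
advantages = group_relative_advantages([1.0, 0.0, 1.0, 0.0])
```

Correct completions receive positive advantages and incorrect ones negative, with the group advantages summing to zero.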
Cite E2H as:
@inproceedings{parashar2026curriculum,
title = {Curriculum Reinforcement Learning from Easy to Hard Tasks Improves {LLM} Reasoning},
author = {Parashar, Shubham and Gui, Shurui and Li, Xiner and Ling, Hongyi and Vemuri, Sushil and Olson, Blake and Li, Eric and Zhang, Yu and Caverlee, James and Kalathil, Dileep and Ji, Shuiwang},
booktitle = {The Fourteenth International Conference on Learning Representations},
year = {2026},
url = {https://openreview.net/forum?id=KJvHnl3kUv}
}