---
datasets:
- agentica-org/DeepScaleR-Preview-Dataset
language:
- en
metrics:
- accuracy
base_model:
- deepseek-ai/DeepSeek-R1-Distill-Qwen-7B
---
# Model Overview
<div align="center">
<span style="font-family: default; font-size: 1.5em;">DLER-R1-7B</span>
<div>
🚀 An ultra-efficient reasoning model for cutting-edge research and development 🌟
</div>
</div>

[![Paper](https://img.shields.io/badge/ArXiv-Paper-brown)](https://www.arxiv.org/abs/2510.15110)
[![Code](https://img.shields.io/badge/GitHub-Link-blue)](https://github.com/NVlabs/DLER)
[![Model](https://img.shields.io/badge/HuggingFace-Model-yellow)](https://huggingface.co/collections/nvidia/reasoning-efficiency-research)
[![Website](https://img.shields.io/badge/Web-Page-orange)](https://nvlabs.github.io/DLER/)
![Comparison between DeepSeek-R1-7B and DLER-R1-7B](./asset/latency_7b.png)

### Description:
DLER-R1-7B is an ultra-efficient 7B open-weight reasoning model designed for challenging tasks such as mathematics, programming, and scientific problem-solving. It is trained with the DLER algorithm on agentica-org/DeepScaleR-Preview-Dataset. Compared to DeepSeek-R1-Distill-Qwen-7B, it achieves substantial efficiency gains, cutting the average response length by roughly 70% across diverse mathematical benchmarks while improving accuracy.

This model is for research and development only.

### Evaluation Results:
| Model            | MATH  | Length | AIME  | Length | AMC   | Length | Minerva | Length | Olympiad | Length | Total Avg Length |
|------------------|-------|--------|-------|--------|-------|--------|---------|--------|----------|--------|------------------|
| DeepSeek-R1-7B   | 93.60 | 3999   | 55.40 | 13241  | 82.90 | 7461   | 49.79   | 5199   | 58.21    | 8837   | 7747             |
| **DLER-R1-7B**   | **94.21 (+0.61%)** | **1634 (-60%)** | **55.62 (+0.22%)** | **3230 (-76%)** | **84.41 (+1.51%)** | **2512 (-67%)** | **53.88 (+4.09%)** | **2058 (-61%)** | **60.48 (+2.27%)** | **2592 (-71%)** | **2405 (-69%)** |

Accuracy is reported in %; "Length" is the average response length in tokens.

### Environment Setup

```bash
# torch is also required by the inference snippet below
pip install transformers==4.51.3 torch
```
### Inference:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

# Prefer a GPU when one is available.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Load the model weights and tokenizer from the Hugging Face Hub.
model = AutoModelForCausalLM.from_pretrained('nvidia/DLER-R1-7B-Research').to(device)
tokenizer = AutoTokenizer.from_pretrained('nvidia/DLER-R1-7B-Research')

# The appended instruction asks for step-by-step reasoning with the final
# answer inside \boxed{}.
messages = [
  {"role": "user", "content": "Convert the point $(0,3)$ in rectangular coordinates to polar coordinates.  Enter your answer in the form $(r,\\theta),$ where $r > 0$ and $0 \\le \\theta < 2 \\pi.$"+" Let's think step by step and output the final answer within \\boxed{}."},
]

# Apply the chat template and move the prompt tokens to the model's device.
tokenized_chat = tokenizer.apply_chat_template(
  messages,
  tokenize=True,
  add_generation_prompt=True,
  return_tensors="pt"
).to(model.device)

# Generate until EOS or the max_new_tokens budget is reached.
outputs = model.generate(
  tokenized_chat,
  max_new_tokens=10000,
  eos_token_id=tokenizer.eos_token_id
)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
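
Since the model's main selling point is shorter reasoning traces, it is easy to check the response length directly. A minimal sketch continuing from the snippet above (`prompt_len` and `generated` are illustrative names, not part of the original example):

```python
# outputs[0] holds the prompt followed by the newly generated tokens,
# so slicing off the prompt isolates the model's response.
prompt_len = tokenized_chat.shape[-1]
generated = outputs[0][prompt_len:]
print(f"Response length: {generated.shape[-1]} tokens")
print(tokenizer.decode(generated, skip_special_tokens=True))
```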

### License/Terms of Use
NSCLv1


## Citation
If you find our model helpful, please cite the following [paper](https://arxiv.org/abs/2510.15110):

```bibtex
@article{liu2025dler,
  title={DLER: Doing Length pEnalty Right-Incentivizing More Intelligence per Token via Reinforcement Learning},
  author={Liu, Shih-Yang and Dong, Xin and Lu, Ximing and Diao, Shizhe and Liu, Mingjie and Chen, Min-Hung and Yin, Hongxu and Wang, Yu-Chiang Frank and Cheng, Kwang-Ting and Choi, Yejin and others},
  journal={arXiv preprint arXiv:2510.15110},
  year={2025}
}
```