# Model Overview

<div align="center">
<span style="font-family: default; font-size: 1.5em;">DLER-R1-1.5B</span>
<div>
🚀 The leading efficient reasoning model for cutting-edge research and development 🌟
</div>
</div>



### Description:

DLER-Qwen-R1-1.5B is an ultra-efficient 1.5B open-weight reasoning model designed for challenging tasks such as mathematics, programming, and scientific problem-solving. It is trained with the DLER algorithm on agentica-org/DeepScaleR-Preview-Dataset. Compared to DeepSeek's 1.5B model, DLER-Qwen-R1-1.5B achieves substantial efficiency gains, reducing the average response length by nearly 80% across diverse mathematical benchmarks while also improving accuracy.