## Model Highlights
JT-Math-8B-Thinking achieves its cutting-edge performance on complex mathematical challenges through a rigorous, multi-stage training methodology. Starting from the robust JT-Math-8B-Base model, our pipeline first applied Supervised Fine-Tuning (SFT), training on a high-quality, bilingual dataset of intricate math problems with a 32,768-token context window. Subsequently, an advanced Reinforcement Learning (RL) phase, built around a multi-stage curriculum of progressively harder problems, further honed its reasoning abilities.
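To make the curriculum idea concrete, the sketch below shows one simple way a difficulty-staged RL curriculum can be organized: problems are sorted by a difficulty score and split into stages trained in order of increasing difficulty. This is purely illustrative; the function and data here are hypothetical and not part of the JT-Math training code.

```python
def build_curriculum(problems, num_stages=3):
    """Split (prompt, difficulty) pairs into stages of increasing difficulty.

    Illustrative only: the actual JT-Math RL curriculum construction
    is not published in this README.
    """
    # Order problems from easiest to hardest by their difficulty score.
    ordered = sorted(problems, key=lambda p: p[1])
    stage_size = len(ordered) // num_stages
    stages = []
    for s in range(num_stages):
        start = s * stage_size
        # The last stage absorbs any remainder from uneven division.
        end = len(ordered) if s == num_stages - 1 else start + stage_size
        stages.append([prompt for prompt, _ in ordered[start:end]])
    return stages

# Toy example: six problems with made-up difficulty scores in [0, 1].
problems = [("p1", 0.2), ("p2", 0.9), ("p3", 0.5),
            ("p4", 0.7), ("p5", 0.1), ("p6", 0.4)]
stages = build_curriculum(problems)
```

Each stage would then serve as the problem pool for one phase of RL training, with later stages drawing from harder buckets.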