## Model Details
JT-Math-8B-Thinking achieves its cutting-edge performance on complex mathematical challenges through a rigorous, multi-stage training methodology. Starting with the robust JT-Math-8B-Base model, our pipeline first implemented Supervised Fine-Tuning (SFT). This involved training on a high-quality, bilingual dataset of intricate math problems, capitalizing on the model's impressive native 32,768-token context window. Subsequently, an advanced Reinforcement Learning (RL) phase, incorporating a multi-stage curriculum of progressively harder problems, further honed its reasoning abilities.
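The RL phase above relies on a curriculum of progressively harder problems. As a rough illustration only (the actual JT-Math pipeline is not public beyond the technical report), the stage construction could be sketched like this, assuming a hypothetical per-problem `difficulty` score:

```python
import math

def build_curriculum(problems, num_stages):
    """Split problems into progressive stages, easiest first.

    `problems` is a list of dicts with a hypothetical "difficulty" key;
    this is an illustrative stand-in, not the actual JT-Math scoring.
    """
    ordered = sorted(problems, key=lambda p: p["difficulty"])
    size = math.ceil(len(ordered) / num_stages)
    # Contiguous chunks of the sorted list: each later stage is harder.
    return [ordered[i:i + size] for i in range(0, len(ordered), size)]
```

In a multi-stage RL setup, each returned chunk would feed one training stage, so the policy only sees the hardest problems after mastering easier ones.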