---
license: apache-2.0
---
# JT-Math-8B-Base



<p align="center">
    <a href="<PAPER_LINK_PLACEHOLDER>" target="_blank">
        <img src="https://img.shields.io/badge/Paper-ArXiv-red">
    </a>
    <a href="https://huggingface.co/JT-LM/JT-Math-8B-Base" target="_blank">
        <img src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Models-blue">
    </a>
    <a href="./LICENSE" target="_blank">
        <img alt="License" src="https://img.shields.io/badge/License-Apache%202.0-yellow.svg">
    </a>
</p>



We are excited to introduce JT-Math-8B-Base: an 8-billion-parameter foundation model engineered for mathematical reasoning and the cornerstone of the JT-Math family.
JT-Math-8B-Base was continually pre-trained from JT-Coder-8B-Base on an additional 210 billion tokens of high-quality mathematical and general-domain data.
With a native 32,768-token context window, it provides a robust, scalable, and reproducible foundation for downstream fine-tuning, enabling researchers and developers to advance the frontier of math-centric AI applications. Technical details, training recipes, and reproducibility notes are available in our technical report.
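As a quick sanity check of the context window, you can read it from the published config (a minimal sketch; we assume the standard `max_position_embeddings` field, which a custom architecture may expose under a different name):

```python
from transformers import AutoConfig

# Load only the config (no weights) and print the advertised context length.
# Assumes the standard `max_position_embeddings` field; adjust the attribute
# name if the custom architecture exposes it differently.
config = AutoConfig.from_pretrained("JT-LM/JT-Math-8B-Base", trust_remote_code=True)
print(config.max_position_embeddings)  # expected: 32768
```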





## Model Downloads

We release the following models to support a wide range of applications.

| Model Name      | Context Length | Download                                          | Notes                                                        |
| --------------- | ------ | ------------------------------------------------- | ------------------------------------------------------------ |
| JT-Math-8B-Base | 32K    | [🤗](https://huggingface.co/JT-LM/JT-Math-8B-Base/tree/main) | The base model. Continually pre-trained from JT-Coder-8B-Base. |





## Evaluation Results

| Model                       | GSM8K | MATH  | CMath (zh) | Average |
| --------------------------- | ----- | ----- | ---------- | ------- |
| Qwen2.5-Base-72B            | 91.5  | 62.12 | 84.5       | 79.4    |
| Llama-3.1-Base-405B         | 89.0  | 53.8  | 77.4       | 73.4    |
| DeepSeek-Math-Base-7B       | 64.2  | 36.2  | 71.7       | 57.4    |
| DeepSeek-Coder-V2-Lite-Base | 68.3  | 38.1  | 77.8       | 61.4    |
| InternLM2-Math-Base-20B     | 68.2  | 30.4  | 65.9       | 54.8    |
| Qwen2.5-Math-7B             | 91.6  | 55.4  | 85.0       | 77.3    |
| *JT-Math-8B-Base*           | 88.0  | 58.1  | 90.0       | *78.7*  |







## How to Get Started

We provide a basic example of how to run inference with the `JT-Math-8B-Base` model.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Repo ID as published on Hugging Face.
model_name = "JT-LM/JT-Math-8B-Base"

tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",    # load in the checkpoint's native precision
    device_map="auto",     # place the model on available devices automatically
    trust_remote_code=True,
)

# A GSM8K-style word problem, wrapped in a simple Question/Answer prompt format.
prompt = "Janet’s ducks lay 16 eggs per day. She eats three for breakfast every morning and bakes muffins for her friends every day with four. She sells the remainder at the farmers' market daily for $2 per fresh duck egg. How much in dollars does she make every day at the farmers' market?"
text = f"Question:\n{prompt}\nAnswer:\n"
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

# Greedy decoding, with plenty of room for a step-by-step solution.
gen_kwargs = {
    "do_sample": False,
    "max_new_tokens": 8192,
}
generated_ids = model.generate(
    **model_inputs,
    **gen_kwargs
)
# Strip the prompt tokens so only the newly generated answer is decoded.
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()

response = tokenizer.decode(output_ids, skip_special_tokens=True)
print("response:", response)
```
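With greedy decoding, a correct completion should reason that 16 - 3 - 4 = 9 eggs are sold at $2 each, giving $18 per day.

Because this is a base model rather than an instruction-tuned one, few-shot prompting often yields more reliable completions than the zero-shot format above. Below is a minimal sketch that continues from the example above (the exemplar question and answer are illustrative only, not drawn from any benchmark):

```python
# Few-shot variant: prepend one worked Question/Answer exemplar so the base model
# continues in the same pattern. Reuses `prompt`, `tokenizer`, `model`, and
# `gen_kwargs` from the example above.
few_shot = (
    "Question:\nA bag holds 5 red marbles and 7 blue marbles. How many marbles are in the bag?\n"
    "Answer:\n5 + 7 = 12. The answer is 12.\n\n"
)
text = few_shot + f"Question:\n{prompt}\nAnswer:\n"
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(**model_inputs, **gen_kwargs)
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()
print(tokenizer.decode(output_ids, skip_special_tokens=True))
```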





## Citation



If you find our work useful, please consider citing our paper:

```latex
@article{jiutian-math2025,
  title={Jiutian Math: A Multi-Stage Framework for Advanced Mathematical Reasoning in Large Language Models},
  author={Authors},
  journal={arXiv preprint arXiv:xxxx.xxxxx},
  year={2025}
}
```