---
license: apache-2.0
---

# JT-Math-8B-Thinking

<p align="center">
  <a href="<PAPER_LINK_PLACEHOLDER>" target="_blank">
    <img src="https://img.shields.io/badge/Paper-ArXiv-red">
  </a>
  <a href="https://huggingface.co/JT-LM/JT-Math-8B-Thinking" target="_blank">
    <img src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Models-blue">
  </a>
  <a href="./LICENSE" target="_blank">
    <img alt="License" src="https://img.shields.io/badge/License-Apache%202.0-yellow.svg">
  </a>
</p>

We are excited to present JT-Math-8B-Thinking, a powerful 8-billion-parameter model from the JT-Math series, engineered specifically for advanced mathematical reasoning and complex problem solving. It is fine-tuned on a carefully curated bilingual (English and Chinese) dataset, ensuring strong performance on mathematical tasks in both languages, and it supports a 32,768-token context window, allowing it to process and reason over extensive and intricate problem descriptions.

JT-Math-8B-Thinking has been meticulously optimized to tackle difficult mathematical challenges, achieving state-of-the-art (SOTA) performance on key reasoning benchmarks among models of a similar parameter class. Its development follows a multi-stage training pipeline designed to maximize its reasoning capabilities.

For full transparency and reproducibility, please refer to our technical report, which details our training recipe and pipeline.

**Figure 1: Performance of JT-Math-8B-Thinking on math reasoning benchmarks.**

## Model Details

The performance of **JT-Math-8B-Thinking** stems from a meticulous, multi-stage training approach aimed at tackling complex mathematical challenges with state-of-the-art accuracy. Building on the **JT-Math-8B-Base** model, the pipeline began with **Supervised Fine-Tuning (SFT)** on a high-quality, bilingual dataset of intricate math problems. This SFT phase leveraged the model's native **32,768-token context window**, enabling it to comprehend lengthy premises, multi-step instructions, and problems with extensive background information from the start.

Following SFT, an advanced **Reinforcement Learning (RL)** phase further refined the model's reasoning capabilities. The RL process employed a multi-stage curriculum that gradually introduced problems of increasing difficulty, and it was specifically engineered to boost the model's focus and accuracy across the entire 32K context window, preserving the coherence and precision of even the longest reasoning chains.
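
The staged-difficulty idea can be pictured with a small sketch. This is a hypothetical illustration only, not the JT-Math training code; it assumes problems have been pre-bucketed by estimated difficulty and elides the actual RL update:

```python
import random

# Hypothetical curriculum schedule: each RL stage widens the pool of
# difficulties the policy trains on (1 = easiest). Illustration only.
problem_buckets = {
    1: ["two-step word problem ..."],
    2: ["competition-style algebra problem ..."],
    3: ["olympiad-level problem ..."],
}

stages = [
    {"difficulties": [1], "steps": 1_000},
    {"difficulties": [1, 2], "steps": 1_000},
    {"difficulties": [1, 2, 3], "steps": 2_000},
]

for stage in stages:
    pool = [p for d in stage["difficulties"] for p in problem_buckets[d]]
    for _ in range(stage["steps"]):
        problem = random.choice(pool)
        # Roll out the policy on `problem`, score the final answer,
        # and apply the RL update here (omitted).
```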

## Model Downloads

We release the following models to support a wide range of applications.

| Model Name          | Context Length | Download                                                         | Notes                                                                                       |
| ------------------- | -------------- | ---------------------------------------------------------------- | ------------------------------------------------------------------------------------------- |
| JT-Math-8B-Thinking | 32K            | [🤗](https://huggingface.co/JT-LM/JT-Math-8B-Thinking/tree/main) | Optimized with SFT on a 32K context window and RL based on multi-stage curriculum learning. |
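
To fetch the weights ahead of time (for example, onto a machine that will run offline), the standard `huggingface_hub` downloader can be used; the repo id below matches the link in the table:

```python
from huggingface_hub import snapshot_download

# Download the full model repository into the local Hugging Face cache
# and print the resulting directory.
local_dir = snapshot_download(repo_id="JT-LM/JT-Math-8B-Thinking")
print(local_dir)
```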

## Evaluation Results

JT-Math-8B-Thinking achieves competitive performance among open-source models in the ~8B class on mathematical reasoning benchmarks.

## How to Get Started

The following example shows how to use `JT-Math-8B-Thinking` with the `transformers` library to solve a math problem and separate the model's reasoning trace from its final answer.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Use the same repo id as the Hugging Face links above.
model_name = "JT-LM/JT-Math-8B-Thinking"

tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto",
    trust_remote_code=True,
)

# A classic grade-school word problem; the expected answer is
# (16 - 3 - 4) * $2 = $18.
prompt = "Janet’s ducks lay 16 eggs per day. She eats three for breakfast every morning and bakes muffins for her friends every day with four. She sells the remainder at the farmers' market daily for $2 per fresh duck egg. How much in dollars does she make every day at the farmers' market?"
messages = [
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

gen_kwargs = {
    "do_sample": True,
    "temperature": 0.6,
    "max_new_tokens": 32768,
}
generated_ids = model.generate(
    **model_inputs,
    **gen_kwargs,
)
# Keep only the newly generated tokens (drop the prompt).
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()

raw_content = tokenizer.decode(output_ids, skip_special_tokens=True)
# The model emits its reasoning trace before a closing </think> tag,
# followed by the final answer.
if "</think>" in raw_content:
    thinking_content = raw_content.rsplit("</think>", 1)[0].strip("\n")
    content = raw_content.rsplit("</think>", 1)[1].strip("\n")
else:
    thinking_content = raw_content
    content = ""

print("raw content:", raw_content)
print("thinking content:", thinking_content)
print("content:", content)
```
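
For interactive use, tokens can be streamed to the console as they are generated. A minimal variant of the call above using `transformers`' built-in `TextStreamer`, reusing `model`, `tokenizer`, `model_inputs`, and `gen_kwargs` from the example:

```python
from transformers import TextStreamer

# Prints decoded tokens to stdout as they are produced, skipping the
# prompt and special tokens.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
model.generate(**model_inputs, streamer=streamer, **gen_kwargs)
```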

## Citation

If you use JT-Math-8B-Thinking in your research, please cite our work:

```latex
@article{jiutian-math2025,
  title={JIUTIAN MATH: A MULTI-STAGE FRAMEWORK FOR ADVANCED MATHEMATICAL REASONING IN LARGE LANGUAGE MODELS},
  author={Authors},
  journal={arXiv preprint arXiv:xxxx.xxxxx},
  year={2025}
}
```