---
library_name: transformers
license: other
base_model: Qwen2.5-Math-7B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: SocraticLM
  results: []
language:
- en
- zh
pipeline_tag: text-generation
---

## Model description

This model is a fine-tuned version of [Qwen2.5-Math-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Math-7B-Instruct) on the [SocraTeach dataset](https://github.com/Ljyustc/SocraticLM).

It is an implementation of [SocraticLM](https://github.com/Ljyustc/SocraticLM).

## Intended uses & limitations

[SocraticLM](https://github.com/Ljyustc/SocraticLM) is designed for educational purposes: it provides Socratic-style guidance to students who have difficulty learning to solve mathematical problems.
[SocraticLM](https://github.com/Ljyustc/SocraticLM) can also solve mathematical problems on its own.

The model mainly supports English and Chinese.

## How to use

For Hugging Face Transformers:
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the tokenizer and model from the Hub.
tokenizer = AutoTokenizer.from_pretrained("CogBase-USTC/SocraticLM")
model = AutoModelForCausalLM.from_pretrained(
    "CogBase-USTC/SocraticLM",
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True
)

### Math Problem Solving ###
messages = [
    {"role": "system", "content": "Please analyse and solve the following problem step by step."},
    {"role": "user", "content": "Natalia sold clips to 48 of her friends in April, and then she sold half as many clips in May. How many clips did Natalia sell altogether in April and May?"},
]

### Socratic-style Guidance ###
# messages = [
#     {"role": "system", "content": "You are a Socratic teacher, please guide me to solve the [Problem] with heuristic questions based on the following information. \n"},
#     {"role": "user", "content": "[Problem] Debelyn, Christel, and Andrena collect dolls. Debelyn had 20 dolls before she gave Andrena 2 dolls. Christel had 24 dolls before giving Andrena 5 dolls. After all the gifts, Andrena now has 2 more dolls than Christel, how many more dolls does andrena have now than Debelyn? [Answer] 3 [Analysis] Debelyn had 20 - 2 = 18 dolls left after giving out 2 dolls to Christel. Christel had 24 + 2 = 26 dolls after receiving 2 dolls from Debelyn. Christel had 24 - 5 = 19 dolls after giving Andrena 5 dolls. So, Andrena has 19 +2 = 21 dolls now. Therefore, Andrena has 21 - 18 = 3 more dolls than Debelyn."},
# ]

# Apply the chat template and generate a response.
prompt = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)

inputs = tokenizer.encode(prompt, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=4096)
print(tokenizer.decode(outputs[0]))
```
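
The example prints the full decoded sequence, prompt included. If you only want the model's reply, an optional variant (continuing directly from the code above) is to slice off the prompt tokens before decoding:

```python
# Optional: decode only the newly generated tokens, dropping the prompt.
reply = tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True)
print(reply)
```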

For vLLM:
```python
from vllm import LLM, SamplingParams

llm = LLM(model=r'CogBase-USTC/SocraticLM',
          tokenizer=r'CogBase-USTC/SocraticLM',
          trust_remote_code=True,
          tensor_parallel_size=1,
          gpu_memory_utilization=0.99,
          enable_chunked_prefill=True,
          max_num_batched_tokens=512,
          max_num_seqs=128)
sampling_params = SamplingParams(temperature=0, max_tokens=4096, seed=42)


def print_outputs(outputs):
    # Print the generated text of each request, followed by a separator.
    for output in outputs:
        generated_text = output.outputs[0].text
        print(f"Generated text: {generated_text!r}")
    print("-" * 80)


print("=" * 80)

### Math Problem Solving ###
conversation = [
    {
        "role": "system",
        "content": "Please analyse and solve the following problem step by step."
    },
    {
        "role": "user",
        "content": "Natalia sold clips to 48 of her friends in April, and then she sold half as many clips in May. How many clips did Natalia sell altogether in April and May?"
    },
]

### Socratic-style Guidance ###
# conversation = [
#     {
#         "role": "system",
#         "content": "You are a Socratic teacher, please guide me to solve the [Problem] with heuristic questions based on the following information. \n"
#     },
#     {
#         "role": "user",
#         "content": "[Problem] Debelyn, Christel, and Andrena collect dolls. Debelyn had 20 dolls before she gave Andrena 2 dolls. Christel had 24 dolls before giving Andrena 5 dolls. After all the gifts, Andrena now has 2 more dolls than Christel, how many more dolls does andrena have now than Debelyn? [Answer] 3 [Analysis] Debelyn had 20 - 2 = 18 dolls left after giving out 2 dolls to Christel. Christel had 24 + 2 = 26 dolls after receiving 2 dolls from Debelyn. Christel had 24 - 5 = 19 dolls after giving Andrena 5 dolls. So, Andrena has 19 +2 = 21 dolls now. Therefore, Andrena has 21 - 18 = 3 more dolls than Debelyn."
#     },
# ]

outputs = llm.chat(conversation,
                   sampling_params=sampling_params,
                   use_tqdm=False)
print_outputs(outputs)
```

### Training hyperparameters

The following hyperparameters were used during training (a sketch of how they map to code follows the list):
- learning_rate: 1e-05
- train_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- total_train_batch_size: 32
- total_eval_batch_size: 32
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 20
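
For reference only, a minimal sketch of how these hyperparameters would look as Hugging Face `TrainingArguments`; the actual run used LLaMA-Factory, and `output_dir` is a placeholder:

```python
from transformers import TrainingArguments

# Illustrative sketch only: mirrors the hyperparameters listed above.
# The actual training run used LLaMA-Factory; output_dir is a placeholder.
training_args = TrainingArguments(
    output_dir="./socraticlm-sft",    # placeholder path
    learning_rate=1e-5,
    per_device_train_batch_size=8,    # 8 per device x 4 GPUs = 32 total
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    warmup_steps=20,
)
```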

### Framework versions

- Transformers 4.46.2
- PyTorch 2.5.1+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3