Improve language tag

#1
by lbourdois - opened
Files changed (1)
  1. README.md +56 -44
README.md CHANGED
````diff
@@ -1,45 +1,57 @@
- ---
- library_name: transformers
- tags:
- - trl
- - sft
- datasets:
- - qwedsacf/grade-school-math-instructions
- language:
- - en
- base_model:
- - Qwen/Qwen2.5-3B
- ---
-
- <img src="https://huggingface.co/entfane/math-professor-3B/resolve/main/math-professor-image.png" width="300" height="300"/>
-
-
- # Math Professor 3B
-
- This model is a math instruction fine-tuned version of Qwen2.5-3B model.
-
- ### Fine-tuning dataset
-
- Model was fine-tuned on [qwedsacf/grade-school-math-instructions](https://huggingface.co/datasets/qwedsacf/grade-school-math-instructions) instruction dataset.
-
- ### Inference
-
- ```python
- !pip install transformers accelerate
-
- from transformers import AutoTokenizer, AutoModelForCausalLM
-
- model_name = "entfane/math-professor-3B"
-
- tokenizer = AutoTokenizer.from_pretrained(model_name)
- model = AutoModelForCausalLM.from_pretrained(model_name)
-
- messages = [
-     {"role": "user", "content": "What's the derivative of 2x^2?"}
- ]
-
- input = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
- encoded_input = tokenizer(input, return_tensors = "pt").to(model.device)
- output = model.generate(**encoded_input, max_new_tokens=1024)
- print(tokenizer.decode(output[0], skip_special_tokens=False))
+ ---
+ library_name: transformers
+ tags:
+ - trl
+ - sft
+ datasets:
+ - qwedsacf/grade-school-math-instructions
+ language:
+ - zho
+ - eng
+ - fra
+ - spa
+ - por
+ - deu
+ - ita
+ - rus
+ - jpn
+ - kor
+ - vie
+ - tha
+ - ara
+ base_model:
+ - Qwen/Qwen2.5-3B
+ ---
+
+ <img src="https://huggingface.co/entfane/math-professor-3B/resolve/main/math-professor-image.png" width="300" height="300"/>
+
+
+ # Math Professor 3B
+
+ This model is a math instruction fine-tuned version of Qwen2.5-3B model.
+
+ ### Fine-tuning dataset
+
+ Model was fine-tuned on [qwedsacf/grade-school-math-instructions](https://huggingface.co/datasets/qwedsacf/grade-school-math-instructions) instruction dataset.
+
+ ### Inference
+
+ ```python
+ !pip install transformers accelerate
+
+ from transformers import AutoTokenizer, AutoModelForCausalLM
+
+ model_name = "entfane/math-professor-3B"
+
+ tokenizer = AutoTokenizer.from_pretrained(model_name)
+ model = AutoModelForCausalLM.from_pretrained(model_name)
+
+ messages = [
+     {"role": "user", "content": "What's the derivative of 2x^2?"}
+ ]
+
+ input = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
+ encoded_input = tokenizer(input, return_tensors = "pt").to(model.device)
+ output = model.generate(**encoded_input, max_new_tokens=1024)
+ print(tokenizer.decode(output[0], skip_special_tokens=False))
  ```
````
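
The change replaces the single ISO 639-1 tag `en` with thirteen ISO 639-3 codes, presumably to reflect the multilingual coverage of the Qwen2.5 base model. As a minimal sketch of what downstream tooling would now see, the snippet below extracts the `language:` list from the updated front matter with a hand-rolled parser (the `FRONT_MATTER` excerpt is taken from the PR; the parsing code is illustrative only and is not the Hub's actual metadata loader, which uses a full YAML parser):

```python
# Front-matter excerpt from the PR's updated README.md (language block only).
FRONT_MATTER = """\
---
library_name: transformers
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
---
"""

def parse_languages(card_text: str) -> list[str]:
    """Return the codes listed under `language:` in the YAML front matter."""
    langs: list[str] = []
    in_langs = False
    for line in card_text.splitlines():
        if line.strip() == "language:":
            in_langs = True          # start of the language list
        elif in_langs and line.startswith("- "):
            langs.append(line[2:].strip())
        elif in_langs:
            in_langs = False         # list ends at the next top-level key
    return langs

print(parse_languages(FRONT_MATTER))
# → ['zho', 'eng', 'fra', 'spa', 'por', 'deu', 'ita', 'rus', 'jpn', 'kor', 'vie', 'tha', 'ara']
```

Note that the Hub also accepts the two-letter ISO 639-1 forms (`zh`, `en`, `fr`, …); the three-letter codes used here are equally valid metadata.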