anuj0456 committed on
Commit f9909b3 · verified · 1 Parent(s): 40e0f0c

Update README.md

Files changed (1)
  1. README.md +49 -49
README.md CHANGED
@@ -1,49 +1,49 @@
- ---
- language:
- - en
- license: apache-2.0
- tags:
- - math
- - reasoning
- - mathematics
- - causal-lm
- - text-generation
- library_name: transformers
- pipeline_tag: text-generation
- model_name: Math-A1-2B
- ---
-
- # 🐟 Math-A1-2B
-
- **Math-A1-2B** is a 2B-parameter language model by **Kitefish**, focused on mathematical reasoning, symbolic understanding, and structured problem solving.
-
- This is an early release and part of our ongoing effort to build strong, efficient models for reasoning-heavy tasks.
-
- ---
-
- ## ✨ What this model is good at
-
- - Basic to intermediate **math problem solving**
- - **Step-by-step reasoning** for equations and word problems
- - Understanding **mathematical symbols and structure**
- - Educational and experimentation use cases
-
- ---
-
- ## 🚀 Quick start
-
- ```python
- from transformers import AutoTokenizer, AutoModelForCausalLM
-
- tokenizer = AutoTokenizer.from_pretrained("kitefish/math-a1-2B")
- model = AutoModelForCausalLM.from_pretrained(
-     "kitefish/math-a1-2B",
-     torch_dtype="auto",
-     device_map="auto"
- )
-
- prompt = "Solve: 2x + 5 = 13"
- inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
-
- outputs = model.generate(**inputs, max_new_tokens=100)
- print(tokenizer.decode(outputs[0], skip_special_tokens=True))
 
+ ---
+ language:
+ - en
+ license: apache-2.0
+ tags:
+ - math
+ - reasoning
+ - mathematics
+ - causal-lm
+ - text-generation
+ library_name: transformers
+ pipeline_tag: text-generation
+ model_name: Minnow-Math-2B
+ ---
+
+ # 🐟 Minnow-Math-2B
+
+ **Minnow-Math-2B** is a 2B-parameter language model by **Kitefish**, focused on mathematical reasoning, symbolic understanding, and structured problem solving.
+
+ This is an early release and part of our ongoing effort to build strong, efficient models for reasoning-heavy tasks.
+
+ ---
+
+ ## ✨ What this model is good at
+
+ - Basic to intermediate **math problem solving**
+ - **Step-by-step reasoning** for equations and word problems
+ - Understanding **mathematical symbols and structure**
+ - Educational and experimentation use cases
+
+ ---
+
+ ## 🚀 Quick start
+
+ ```python
+ from transformers import AutoTokenizer, AutoModelForCausalLM
+
+ tokenizer = AutoTokenizer.from_pretrained("kitefish/Minnow-Math-2B")
+ model = AutoModelForCausalLM.from_pretrained(
+     "kitefish/Minnow-Math-2B",
+     torch_dtype="auto",
+     device_map="auto"
+ )
+
+ prompt = "Solve: 2x + 5 = 13"
+ inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
+
+ outputs = model.generate(**inputs, max_new_tokens=100)
+ print(tokenizer.decode(outputs[0], skip_special_tokens=True))
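The quick-start prompt in the card asks the model to solve `2x + 5 = 13`. As an aside (not part of the committed README), a tiny exact solver like the sketch below can sanity-check a model's final answer against the true solution; the function name `solve_linear` is hypothetical, introduced only for this illustration:

```python
from fractions import Fraction

def solve_linear(a, b, c):
    """Solve a*x + b = c exactly for x, returning a Fraction."""
    if a == 0:
        raise ValueError("not a linear equation in x")
    return Fraction(c - b, a)

# The quick-start prompt: 2x + 5 = 13
x = solve_linear(2, 5, 13)
print(x)  # prints 4, since 2*4 + 5 = 13
```

Because `Fraction` keeps the arithmetic exact, the check also works for equations whose solutions are not integers (e.g. `2x + 5 = 14` gives `9/2`).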