anuj0456 committed
Commit c9f8a72 · 1 Parent(s): 3b7f247

README.md updated

Files changed (2):

1. .gitignore +1 -0
2. README.md +32 -1
.gitignore ADDED

The new rule ignores macOS AppleDouble metadata files (the `._*` companion files macOS creates alongside regular files on some filesystems):

```diff
@@ -0,0 +1 @@
+._*
```
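The `._*` rule uses standard gitignore glob syntax. Its effect can be illustrated with Python's `fnmatch`, which follows similar shell-style matching (an approximation of gitignore semantics, for illustration only; the file names are made up):

```python
from fnmatch import fnmatch

# "._*" matches the AppleDouble companion files macOS creates
# alongside regular files, but not the regular files themselves.
names = ["._photo.jpg", "._config", "photo.jpg", "config"]
ignored = [n for n in names if fnmatch(n, "._*")]
print(ignored)  # → ['._photo.jpg', '._config']
```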
README.md CHANGED

````diff
@@ -1,3 +1,34 @@
+# 🐟 Math-A1-2B
+
+**Math-A1-2B** is a 2B-parameter language model by **Kitefish**, focused on mathematical reasoning, symbolic understanding, and structured problem solving.
+
+This is an early release and part of our ongoing effort to build strong, efficient models for reasoning-heavy tasks.
+
 ---
-license: mit
+
+## ✨ What this model is good at
+
+- Basic to intermediate **math problem solving**
+- **Step-by-step reasoning** for equations and word problems
+- Understanding **mathematical symbols and structure**
+- Educational and experimentation use cases
+
 ---
+
+## 🚀 Quick start
+
+```python
+from transformers import AutoTokenizer, AutoModelForCausalLM
+
+tokenizer = AutoTokenizer.from_pretrained("kitefish/math-a1-2B")
+model = AutoModelForCausalLM.from_pretrained(
+    "kitefish/math-a1-2B",
+    torch_dtype="auto",
+    device_map="auto"
+)
+
+prompt = "Solve: 2x + 5 = 13"
+inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
+
+outputs = model.generate(**inputs, max_new_tokens=100)
+print(tokenizer.decode(outputs[0], skip_special_tokens=True))
+```
````
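As an aside, the Quick start prompt has a closed-form answer that is handy for eyeballing the model's output (plain Python, no model required; illustrative only):

```python
# The Quick start prompt asks the model to solve 2x + 5 = 13.
# A linear equation a*x + b = c has the solution x = (c - b) / a.
a, b, c = 2, 5, 13
x = (c - b) / a
print(f"x = {x}")  # → x = 4.0
```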