---
language:
- en
license: apache-2.0
tags:
- math
- reasoning
- mathematics
- causal-lm
- text-generation
library_name: transformers
pipeline_tag: text-generation
model_name: Math-A1-2B
---
# Math-A1-2B

Math-A1-2B is a 2B-parameter language model by Kitefish, focused on mathematical reasoning, symbolic understanding, and structured problem solving.
This is an early release and part of our ongoing effort to build strong, efficient models for reasoning-heavy tasks.
## ✨ What this model is good at
- Basic to intermediate math problem solving
- Step-by-step reasoning for equations and word problems
- Understanding mathematical symbols and structure
- Educational and experimentation use cases
## Quick start
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the tokenizer and model; device_map="auto" places weights on GPU if one is available
tokenizer = AutoTokenizer.from_pretrained("kitefish/math-a1-2B")
model = AutoModelForCausalLM.from_pretrained(
    "kitefish/math-a1-2B",
    torch_dtype="auto",
    device_map="auto",
)

# Simple equation-solving prompt
prompt = "Solve: 2x + 5 = 13"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
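
For word problems, you can prompt the model to reason step by step. The sketch below reuses the tokenizer and model loaded above; the prompt wording and the sampling parameters (`temperature`, `top_p`) are illustrative choices for experimentation, not values recommended by this model card.

```python
# Word-problem prompt that asks for step-by-step reasoning.
# The "Let's solve this step by step." suffix is an illustrative prompting choice,
# not a format the model requires.
prompt = (
    "A train travels 120 km in 2 hours. At the same speed, "
    "how far does it travel in 5 hours? Let's solve this step by step."
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Example sampling settings; adjust or use greedy decoding as needed.
outputs = model.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```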