---
license: apache-2.0
base_model: Qwen/Qwen2.5-Math-7B-Instruct
tags:
- math
- ib-mathematics
- qwen2
- fine-tuned
- education
- ontology
- chain-of-thought
language:
- en
pipeline_tag: text-generation
---
# IB-Math-Ontology-7B
A fine-tune of Qwen2.5-Math-7B-Instruct for IB Mathematics Analysis and Approaches (AA), trained for ontology-based Chain-of-Thought reasoning.
## Features
- 🎯 IB Math AA Specialized: Trained on 1,332 ontology-based examples
- 💭 Chain-of-Thought: Uses `<think>` tags for step-by-step reasoning (see the parsing sketch under Usage)
- 📚 Curriculum-Aligned: Covers all 5 IB Math AA topics
- ⚠️ Pitfall Awareness: Warns about common student mistakes
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the fine-tuned model and tokenizer
model = AutoModelForCausalLM.from_pretrained(
    "ongilLabs/IB-Math-Ontology-7B", torch_dtype="auto", device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("ongilLabs/IB-Math-Ontology-7B")

prompt = "Find the derivative of f(x) = x³ - 2x² + 5x [6 marks]"
messages = [
    {"role": "system", "content": "You are an expert IB Mathematics AA tutor. Think step-by-step and explain concepts clearly."},
    {"role": "user", "content": prompt},
]

# Format the conversation with the model's chat template and generate
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
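Since the model emits its reasoning inside `<think>` tags, you may want to separate the reasoning trace from the final answer. Below is a minimal sketch, assuming the output wraps reasoning in a single `<think>...</think>` span; the helper name `split_reasoning` is ours, not part of this card. If `<think>` is registered as a special token in your tokenizer build, decode with `skip_special_tokens=False` so the tags survive.

```python
import re

def split_reasoning(completion: str) -> tuple[str, str]:
    """Split a completion into (reasoning, answer).

    Assumes a single <think>...</think> span, as described in Features.
    """
    match = re.search(r"<think>(.*?)</think>", completion, flags=re.DOTALL)
    if match is None:
        # No reasoning tags found; treat the whole completion as the answer.
        return "", completion.strip()
    reasoning = match.group(1).strip()
    answer = completion[match.end():].strip()
    return reasoning, answer

# Example usage with the generation snippet above:
# reasoning, answer = split_reasoning(tokenizer.decode(outputs[0], skip_special_tokens=False))
```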
## Training Details
- Base Model: Qwen2.5-Math-7B-Instruct
- Method: LoRA (r=64, alpha=128)
- Dataset: 1,332 IB Math Ontology examples with CoT
- Hardware: NVIDIA A100 (80GB)
- Epochs: 3
- Precision: BF16
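For reference, a LoRA setup matching the hyperparameters above might look like the following with 🤗 PEFT. This is a sketch under assumptions: the card does not specify target modules, dropout, or other training hyperparameters, so those values are illustrative only.

```python
import torch
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

# Base model in BF16, matching the precision listed above
base = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-Math-7B-Instruct", torch_dtype=torch.bfloat16
)

lora_config = LoraConfig(
    r=64,            # rank, per the card
    lora_alpha=128,  # alpha, per the card
    # Target modules are NOT stated in the card; adapting the attention and
    # MLP projections is a common choice for Qwen2-style models (assumption).
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    lora_dropout=0.05,  # illustrative, not from the card
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # shows the small fraction of LoRA weights
```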