GanitLLM-0.6B_SFT


Highlights

GanitLLM-0.6B_SFT is our smallest Bengali mathematical reasoning model, trained with Supervised Fine-Tuning (SFT) on the GANIT dataset. Its small size makes it well suited to resource-constrained deployments. Key improvements over the base Qwen3-0.6B model:

  • +20.00 accuracy on the Bn-MGSM benchmark (8.40 → 28.40)
  • +39.20 accuracy on the Bn-MSVAMP benchmark (12.20 → 51.40)
  • 88.60% of generated reasoning in Bengali (vs. 12.43% for the base model)
  • 79.2% shorter solutions on average (1265 → 263 words)

Note: This is the SFT-only checkpoint. For best results, use one of the RL-enhanced versions: GanitLLM-0.6B_SFT_CGRPO or GanitLLM-0.6B_SFT_GRPO.

Model Overview

Property        Value
Model Type      Causal Language Model
Base Model      Qwen/Qwen3-0.6B
Parameters      0.6B
Training        Supervised Fine-Tuning (SFT)
Context Length  4,096 tokens
Languages       Bengali, English

Training Details

This model was trained with a single-stage pipeline:

  1. Supervised Fine-Tuning (SFT): Trained on GANIT-SFT (~11k examples) to ground reasoning in Bengali
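
The exact training configuration is not published on this card. The following is a minimal sketch of the SFT stage using TRL's SFTTrainer; the dataset identifier, column layout, and all hyperparameters are illustrative assumptions:

# Minimal SFT sketch. The dataset id and the hyperparameters below are
# assumptions -- the actual training setup is not specified on this card.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

dataset = load_dataset("dipta007/GANIT-SFT", split="train")  # hypothetical id

config = SFTConfig(
    output_dir="ganitllm-0.6b-sft",
    per_device_train_batch_size=4,
    num_train_epochs=3,
    learning_rate=2e-5,
)

trainer = SFTTrainer(
    model="Qwen/Qwen3-0.6B",  # base model named in the overview above
    args=config,
    train_dataset=dataset,
)
trainer.train()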

Training Data

  • Dataset: GANIT-SFT (11,023 examples)
  • Format: Bengali math problems with chain-of-thought reasoning
  • Structure: reasoning wrapped in <think> </think> tags, final answer in <answer> </answer> tags
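
For illustration, a single training example under this structure might look like the following; the field names and exact wording are assumptions, only the <think>/<answer> layout comes from the dataset description above:

# Illustrative GANIT-SFT-style record (field names are assumptions).
# Question: "A shop has 12 apples. If 5 apples are sold, how many remain?"
# Reasoning: "The shop has 12 apples in total. After selling 5, 12 - 5 = 7 remain."
example = {
    "question": "একটি দোকানে ১২টি আপেল আছে। যদি ৫টি আপেল বিক্রি হয়, তাহলে কতটি আপেল বাকি থাকবে?",
    "response": (
        "<think>দোকানে মোট ১২টি আপেল আছে। ৫টি বিক্রি হলে বাকি থাকে ১২ - ৫ = ৭টি।</think>"
        "<answer>৭</answer>"
    ),
}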

Quickstart

from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "dipta007/GanitLLM-0.6B_SFT"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",  # loads the BF16 weights as stored
    device_map="auto"    # place the model on the available GPU/CPU
)

# "A shop has 12 apples. If 5 apples are sold, how many apples will remain?"
problem = "একটি দোকানে ১২টি আপেল আছে। যদি ৫টি আপেল বিক্রি হয়, তাহলে কতটি আপেল বাকি থাকবে?"

prompt = f"""A conversation takes place between the user and the assistant. The user asks a question, and the assistant solves the problem. Please reason step by step in Bengali, and put your final answer in the <answer> </answer> tags.

Question: {problem}"""

messages = [{"role": "user", "content": prompt}]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

# temperature has no effect unless sampling is enabled
generated_ids = model.generate(**model_inputs, max_new_tokens=2048, do_sample=True, temperature=0.7)
# keep only the newly generated tokens, dropping the prompt
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()
response = tokenizer.decode(output_ids, skip_special_tokens=True)
print(response)
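
Since the final answer is wrapped in <answer> </answer> tags, it can be pulled out with a small regex helper. This is a convenience sketch, not part of the model's API:

import re

def extract_answer(text: str) -> str | None:
    """Return the contents of the last <answer>...</answer> span, if any."""
    matches = re.findall(r"<answer>(.*?)</answer>", text, flags=re.DOTALL)
    return matches[-1].strip() if matches else None

print(extract_answer(response))  # e.g. "৭" (7) for the problem above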

Using vLLM

vllm serve dipta007/GanitLLM-0.6B_SFT --max-model-len 4096
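
vllm serve exposes an OpenAI-compatible endpoint (by default at http://localhost:8000/v1), so the server can be queried with the openai client. A minimal sketch reusing the prompt template from the Quickstart; the host, port, and sampling settings are assumptions:

from openai import OpenAI

# Point the client at the local vLLM server; the API key is unused locally.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

completion = client.chat.completions.create(
    model="dipta007/GanitLLM-0.6B_SFT",
    messages=[{"role": "user", "content": prompt}],  # prompt built as in the Quickstart
    max_tokens=2048,
    temperature=0.7,
)
print(completion.choices[0].message.content)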

Performance

Model               Bn-MGSM (acc.)  Bn-MSVAMP (acc.)  Avg. Words  Bengali %
Qwen3-0.6B (base)   8.40            12.20             1265        12.43%
GanitLLM-0.6B_SFT   28.40           51.40             263         88.60%

Related Models

Model                           Parameters  Training     Link
GanitLLM-0.6B_SFT_CGRPO         0.6B        SFT + CGRPO  Link
GanitLLM-0.6B_SFT_GRPO          0.6B        SFT + GRPO   Link
GanitLLM-0.6B_SFT (this model)  0.6B        SFT          Link
GanitLLM-0.6B_CGRPO             0.6B        CGRPO        Link

Citation

Will be updated.

License

This model is released under the Apache 2.0 License.
