GanitLLM-1.7B_SFT


Highlights

GanitLLM-1.7B_SFT is a Bengali mathematical reasoning model trained with Supervised Fine-Tuning on the GANIT dataset. This model serves as the foundation for further RL training (GRPO/CGRPO). Key improvements over the base Qwen3-1.7B model:

  • +33.60 points accuracy on the Bn-MGSM benchmark (15.20 → 48.80)
  • +50.50 points accuracy on the Bn-MSVAMP benchmark (14.10 → 64.60)
  • 87.79% Bengali reasoning, vs. 19.64% for the base model
  • 77.5% fewer words per generated solution (1,124 → 253 on average)

Note: This is the SFT-only checkpoint. For best results, use the RL-enhanced versions: GanitLLM-1.7B_SFT_CGRPO or GanitLLM-1.7B_SFT_GRPO.

Model Overview

| Property | Value |
|---|---|
| Model Type | Causal Language Model |
| Base Model | Qwen/Qwen3-1.7B |
| Parameters | 1.7B |
| Training | Supervised Fine-Tuning |
| Context Length | 4,096 tokens |
| Language | Bengali, English |

Training Details

This model was trained with a single-stage pipeline:

  1. Supervised Fine-Tuning (SFT): Trained on GANIT-SFT (~11k examples) to ground reasoning in Bengali
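
The exact training hyperparameters are not reproduced on this card, but the single-stage recipe can be sketched with the TRL library. In the sketch below, the dataset file, epoch count, batch size, and learning rate are illustrative placeholders, not the values used for this checkpoint:

# Illustrative single-stage SFT sketch using TRL; hyperparameters and the
# dataset file are placeholders, not the actual GANIT-SFT training setup.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Placeholder: a JSONL file of Bengali problems with <think>/<answer> targets
train_dataset = load_dataset("json", data_files="ganit_sft.jsonl", split="train")

trainer = SFTTrainer(
    model="Qwen/Qwen3-1.7B",        # base model being fine-tuned
    train_dataset=train_dataset,
    args=SFTConfig(
        output_dir="GanitLLM-1.7B_SFT",
        num_train_epochs=3,              # illustrative
        per_device_train_batch_size=4,   # illustrative
        learning_rate=2e-5,              # illustrative
    ),
)
trainer.train()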

Training Data

  • Dataset: GANIT-SFT (11,023 examples)
  • Format: Bengali math problems with chain-of-thought reasoning
  • Structure: <think> tags for reasoning, <answer> tags for the final answer (a sketch of one example follows below)
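
For illustration, a single training example in this structure might look like the following sketch (the field names are hypothetical, not the dataset's actual schema):

# Hypothetical shape of one GANIT-SFT example; field names are illustrative
example = {
    # "A shop has 12 apples. If 5 apples are sold, how many apples will remain?"
    "problem": "একটি দোকানে ১২টি আপেল আছে। যদি ৫টি আপেল বিক্রি হয়, তাহলে কতটি আপেল বাকি থাকবে?",
    # Bengali chain-of-thought inside <think>, final answer inside <answer>
    "solution": "<think>দোকানে মোট আপেল ১২টি। ৫টি বিক্রি হলে বাকি থাকে ১২ - ৫ = ৭টি।</think><answer>৭</answer>",
}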

Quickstart

from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "dipta007/GanitLLM-1.7B_SFT"

# Load the tokenizer and model; device_map="auto" places weights on available devices
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)

# "A shop has 12 apples. If 5 apples are sold, how many apples will remain?"
problem = "একটি দোকানে ১২টি আপেল আছে। যদি ৫টি আপেল বিক্রি হয়, তাহলে কতটি আপেল বাকি থাকবে?"

prompt = f"""A conversation takes place between the user and the assistant. The user asks a question, and the assistant solves the problem. Please reason step by step in Bengali, and put your final answer in the <answer> </answer> tags.

Question: {problem}"""

messages = [{"role": "user", "content": prompt}]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=2048,
    do_sample=True,  # sampling must be enabled for temperature to take effect
    temperature=0.7,
)
# Decode only the newly generated tokens, skipping the echoed prompt
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()
response = tokenizer.decode(output_ids, skip_special_tokens=True)
print(response)
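
Since the model puts its final answer inside <answer> </answer> tags (matching the training format), the result can be pulled out with a small helper; extract_answer below is our own convenience function, not part of the model's API:

import re

def extract_answer(response: str):
    """Return the contents of the last <answer> ... </answer> pair, or None."""
    matches = re.findall(r"<answer>(.*?)</answer>", response, flags=re.DOTALL)
    return matches[-1].strip() if matches else None

print(extract_answer(response))  # e.g. "৭" (7) for the example problem above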

Using vLLM

vllm serve dipta007/GanitLLM-1.7B_SFT --max-model-len 4096
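
The vLLM server exposes an OpenAI-compatible API (by default at http://localhost:8000/v1), so it can be queried with the standard openai client. A minimal sketch, assuming the default host and port:

from openai import OpenAI

# vLLM's OpenAI-compatible server; the key is a dummy value required by the client
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

completion = client.chat.completions.create(
    model="dipta007/GanitLLM-1.7B_SFT",
    messages=[{"role": "user", "content": prompt}],  # same prompt format as in the Quickstart
    max_tokens=2048,
    temperature=0.7,
)
print(completion.choices[0].message.content)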

Performance

| Model | Bn-MGSM | Bn-MSVAMP | Avg. Words | Bengali % |
|---|---|---|---|---|
| Qwen3-1.7B (base) | 15.20 | 14.10 | 1124 | 19.64% |
| GanitLLM-1.7B_SFT | 48.80 | 64.60 | 253 | 87.79% |

Related Models

| Model | Parameters | Training | Link |
|---|---|---|---|
| GanitLLM-1.7B_SFT_CGRPO | 1.7B | SFT + CGRPO | Link |
| GanitLLM-1.7B_SFT_GRPO | 1.7B | SFT + GRPO | Link |
| GanitLLM-1.7B_SFT (this model) | 1.7B | SFT | Link |
| GanitLLM-1.7B_CGRPO | 1.7B | CGRPO | Link |

Citation

To be updated.

License

This model is released under the Apache 2.0 License.
