---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen3-0.6B
pipeline_tag: text-generation
language:
- bn
- en
tags:
- math
- bengali
- reasoning
- grpo
datasets:
- dipta007/Ganit
---

# GanitLLM-0.6B_SFT_GRPO

[![Paper](https://img.shields.io/badge/arXiv-Paper-red)](https://arxiv.org/)
[![Dataset](https://img.shields.io/badge/HuggingFace-Dataset-yellow)](https://huggingface.co/datasets/dipta007/Ganit)
[![Models](https://img.shields.io/badge/HuggingFace-Models-orange)](https://huggingface.co/collections/dipta007/ganitllm)

## Highlights

**GanitLLM-0.6B_SFT_GRPO** is our smallest Bengali mathematical reasoning model, trained with SFT followed by standard GRPO, which makes it well suited to resource-constrained deployments. Key improvements over the base Qwen3-0.6B model:

- **+24.0 accuracy points** on the Bn-MGSM benchmark (8.4 → 32.4)
- **+40.3 accuracy points** on the Bn-MSVAMP benchmark (12.2 → 52.5)
- **88.45% Bengali reasoning** (vs. 12.43% for the base model)
- **80.6% shorter solutions** (1265 → 246 words on average)

## Model Overview

| Property | Value |
|----------|-------|
| **Model Type** | Causal Language Model |
| **Base Model** | Qwen/Qwen3-0.6B |
| **Parameters** | 0.6B |
| **Training** | SFT + GRPO |
| **Context Length** | 4,096 tokens |
| **Languages** | Bengali, English |

## Training Details

This model was trained using a two-stage pipeline, sketched in code below:

1. **Supervised Fine-Tuning (SFT)**: trained on GANIT-SFT (~11k examples) to ground reasoning in Bengali
2. **GRPO**: standard reinforcement learning with random sampling on GANIT-RLVR (~7.3k examples)

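The card does not include the training code itself; as a rough illustration, the SFT stage could be run with TRL's `SFTTrainer` along these lines. The dataset split and output directory names are assumptions, not the authors' exact setup. The GRPO stage is sketched after the reward functions below.

```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Stage 1 sketch: SFT on the GANIT-SFT data. The exact dataset layout,
# split names, and hyperparameters are not stated in this card.
dataset = load_dataset("dipta007/Ganit", split="train")  # split name assumed

trainer = SFTTrainer(
    model="Qwen/Qwen3-0.6B",                  # base model from the card
    args=SFTConfig(output_dir="ganit-0.6b-sft"),
    train_dataset=dataset,
)
trainer.train()
```
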
### Reward Functions

- **Format Reward**: validates the `<think>` and `<answer>` tag structure
- **Correctness Reward**: +2.0 for a Bengali answer match, +1.0 for an English match
- **Bengali Reasoning Reward**: ensures >80% Bengali text in the reasoning

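The reward implementations are not published in this card; below is a minimal sketch of one plausible reading of the three bullets above, written in the per-completion signature style that TRL's `GRPOTrainer` accepts (each function returns a list of floats). The function names, the +1.0 weights for the format and Bengali-reasoning rewards, and the `answer` dataset column are assumptions.

```python
import re

# <think> reasoning followed by an <answer> final answer, as trained.
FORMAT_RE = re.compile(r"<think>(.*?)</think>\s*<answer>(.*?)</answer>", re.DOTALL)
BENGALI_CHAR_RE = re.compile(r"[\u0980-\u09FF]")       # Bengali Unicode block
BN_TO_EN = str.maketrans("০১২৩৪৫৬৭৮৯", "0123456789")   # Bengali to ASCII digits

def format_reward(completions, **kwargs):
    """+1.0 when the completion follows the <think>/<answer> structure (weight assumed)."""
    return [1.0 if FORMAT_RE.search(c) else 0.0 for c in completions]

def correctness_reward(completions, answer, **kwargs):
    """+2.0 for a Bengali-digit answer match, +1.0 for an English-digit match."""
    rewards = []
    for completion, gold in zip(completions, answer):
        match = FORMAT_RE.search(completion)
        pred = match.group(2).strip() if match else ""
        if pred and pred == gold:                         # exact Bengali match
            rewards.append(2.0)
        elif pred and pred.translate(BN_TO_EN) == gold.translate(BN_TO_EN):
            rewards.append(1.0)                           # match via English digits
        else:
            rewards.append(0.0)
    return rewards

def bengali_reasoning_reward(completions, **kwargs):
    """+1.0 when >80% of non-space reasoning characters are Bengali (weight assumed)."""
    rewards = []
    for completion in completions:
        match = FORMAT_RE.search(completion)
        reasoning = match.group(1) if match else completion
        chars = [ch for ch in reasoning if not ch.isspace()]
        bengali = sum(1 for ch in chars if BENGALI_CHAR_RE.match(ch))
        rewards.append(1.0 if chars and bengali / len(chars) > 0.8 else 0.0)
    return rewards

# Hypothetical wiring into TRL's GRPOTrainer (dataset and column names assumed):
# from trl import GRPOConfig, GRPOTrainer
# trainer = GRPOTrainer(
#     model="ganit-0.6b-sft",   # the stage-1 SFT checkpoint
#     reward_funcs=[format_reward, correctness_reward, bengali_reasoning_reward],
#     args=GRPOConfig(output_dir="ganit-0.6b-grpo"),
#     train_dataset=rlvr_dataset,  # GANIT-RLVR prompts with gold answers
# )
```
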
## Quickstart

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "dipta007/GanitLLM-0.6B_SFT_GRPO"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)

# "A shop has 12 apples. If 5 apples are sold, how many apples will be left?"
problem = "একটি দোকানে ১২টি আপেল আছে। যদি ৫টি আপেল বিক্রি হয়, তাহলে কতটি আপেল বাকি থাকবে?"

# The model expects the training-time instruction format: reason step by step
# in Bengali and wrap the final answer in <answer> </answer> tags.
prompt = f"""A conversation takes place between the user and the assistant. The user asks a question, and the assistant solves the problem. Please reason step by step in Bengali, and put your final answer in the <answer> </answer> tags.

Question: {problem}"""

messages = [{"role": "user", "content": prompt}]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=2048,
    do_sample=True,      # required for temperature to take effect
    temperature=0.7,
)
# Strip the prompt tokens so only the generated continuation is decoded.
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()
response = tokenizer.decode(output_ids, skip_special_tokens=True)
print(response)
```

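Because the model emits its final answer inside `<answer>` tags, a small helper can pull it out of the `response` produced above. This is a convenience sketch, not part of the model's API:

```python
import re

def extract_answer(response: str) -> str | None:
    """Return the text inside the first <answer>...</answer> pair, if present."""
    match = re.search(r"<answer>(.*?)</answer>", response, re.DOTALL)
    return match.group(1).strip() if match else None

print(extract_answer(response))  # e.g. "৭" (12 - 5 = 7 apples remain)
```
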
### Using vLLM

```bash
vllm serve dipta007/GanitLLM-0.6B_SFT_GRPO --max-model-len 4096
```

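The server speaks vLLM's OpenAI-compatible API. A minimal client call is sketched below, assuming the default local address; for best results, reuse the full instruction format from the Quickstart:

```python
from openai import OpenAI

# vLLM's OpenAI-compatible endpoint; the placeholder API key is a vLLM convention.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

# Same instruction format as in the Quickstart; the question asks "15 + 27 = ?".
prompt = """A conversation takes place between the user and the assistant. The user asks a question, and the assistant solves the problem. Please reason step by step in Bengali, and put your final answer in the <answer> </answer> tags.

Question: ১৫ + ২৭ = ?"""

completion = client.chat.completions.create(
    model="dipta007/GanitLLM-0.6B_SFT_GRPO",
    messages=[{"role": "user", "content": prompt}],
    max_tokens=2048,
    temperature=0.7,
)
print(completion.choices[0].message.content)
```
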
## Performance

| Model | Bn-MGSM | Bn-MSVAMP | Avg. Words | Bengali % |
|-------|---------|-----------|------------|-----------|
| Qwen3-0.6B (base) | 8.40 | 12.20 | 1265 | 12.43% |
| **GanitLLM-0.6B_SFT_GRPO** | **32.40** | **52.50** | **246** | **88.45%** |

## Related Models

| Model | Parameters | Training | Link |
|-------|------------|----------|------|
| GanitLLM-4B_SFT_GRPO | 4B | SFT + GRPO | [Link](https://huggingface.co/dipta007/GanitLLM-4B_SFT_GRPO) |
| GanitLLM-1.7B_SFT_GRPO | 1.7B | SFT + GRPO | [Link](https://huggingface.co/dipta007/GanitLLM-1.7B_SFT_GRPO) |
| GanitLLM-0.6B_SFT_CGRPO | 0.6B | SFT + CGRPO | [Link](https://huggingface.co/dipta007/GanitLLM-0.6B_SFT_CGRPO) |
| **GanitLLM-0.6B_SFT_GRPO** (this model) | 0.6B | SFT + GRPO | [Link](https://huggingface.co/dipta007/GanitLLM-0.6B_SFT_GRPO) |
| GanitLLM-0.6B_CGRPO | 0.6B | CGRPO | [Link](https://huggingface.co/dipta007/GanitLLM-0.6B_CGRPO) |

## Citation

```bibtex
% Citation will be updated upon paper release.
```

## License

This model is released under the Apache 2.0 License.