---
license: cc-by-4.0
language:
- bn
- en
library_name: transformers
pipeline_tag: text-generation
tags:
- code
- bangla
- bengali
- code-generation
- nlp
- low-resource
datasets:
- md-nishat-008/Bangla-Code-Instruct
base_model:
- md-nishat-008/TigerLLM-9B-it
---
<div align="center">
<img src="https://img.shields.io/badge/🐯_TigerCoder-9B-orange?style=for-the-badge" alt="TigerCoder-9B"/>
<h1 style="color: #2e8b57;">🐯 TigerCoder: A Novel Suite of LLMs for Code Generation in Bangla</h1>
<h3>Accepted at LREC 2026</h3>
<h4>Nishat Raihan, Antonios Anastasopoulos, Marcos Zampieri</h4>
<h5>George Mason University, Fairfax, VA, USA</h5>
<br/>
<table>
<tr>
<td>
<a href="https://arxiv.org/abs/2509.09101">
<img src="https://img.shields.io/badge/arXiv-2509.09101-b31b1b?style=for-the-badge&logo=arxiv" alt="arXiv"/>
</a>
</td>
<td>
<a href="https://arxiv.org/pdf/2509.09101">
<img src="https://img.shields.io/badge/Paper-Read_PDF-blue?style=for-the-badge&logo=adobeacrobatreader" alt="Read PDF"/>
</a>
</td>
<td>
<a href="mailto:mraihan2@gmu.edu">
<img src="https://img.shields.io/badge/Email-Contact_Us-green?style=for-the-badge&logo=gmail" alt="Contact Us"/>
</a>
</td>
</tr>
</table>
<table>
<tr>
<td>
<a href="https://huggingface.co/md-nishat-008/TigerCoder-1B">
<img src="https://img.shields.io/badge/🤗_HuggingFace-TigerCoder--1B-yellow?style=for-the-badge" alt="TigerCoder-1B"/>
</a>
</td>
<td>
<a href="https://huggingface.co/md-nishat-008/TigerCoder-9B">
<img src="https://img.shields.io/badge/🤗_HuggingFace-TigerCoder--9B-yellow?style=for-the-badge" alt="TigerCoder-9B"/>
</a>
</td>
</tr>
</table>
<br/>
<p><b>The first dedicated family of Code LLMs for Bangla, achieving 11-18% absolute Pass@K gains over the strongest prior baselines.</b></p>
</div>
---
> **⚠️ Note:** Model weights will be released after the LREC 2026 conference. Stay tuned!
## Overview
Despite being the 5th most spoken language globally (242M+ native speakers), Bangla remains severely underrepresented in code generation. **TigerCoder** addresses this gap by introducing the first dedicated Bangla Code LLM family, available in 1B and 9B parameter variants.
This model card is for **TigerCoder-9B**, the instruction-tuned 9B-parameter variant, fine-tuned on **300K Bangla instruction-code pairs** from the Bangla-Code-Instruct dataset. TigerCoder-9B pushes the frontier of Bangla code generation to **0.82 Pass@1 on MBPP-Bangla**, with absolute gains of 11-18% across Pass@K over the strongest prior baselines (Gemma-3 27B and TigerLLM-9B), at a third of the size of Gemma-3 27B.
## Key Contributions
1. **Bangla-Code-Instruct**: A comprehensive 300K instruction-code dataset comprising three subsets: Self-Instruct (SI, 100K), Synthetic (Syn, 100K), and Translated+Filtered (TE, 100K); a loading sketch follows this list.
2. **MBPP-Bangla**: A 974-problem benchmark with expert-validated Bangla programming tasks across 5 programming languages (Python, C++, Java, JavaScript, Ruby).
3. **TigerCoder Model Family**: Specialized Bangla Code LLMs (1B and 9B) that set a new state-of-the-art for Bangla code generation.
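The instruction data is hosted on the Hugging Face Hub. Assuming the standard `datasets` layout with a `train` split (an untested sketch; the split and field names are assumptions, not confirmed by this release):

```python
from datasets import load_dataset

# Hypothetical loading sketch; the split name is assumed, not confirmed.
ds = load_dataset("md-nishat-008/Bangla-Code-Instruct", split="train")
print(ds)     # inspect features and row count
print(ds[0])  # one Bangla instruction-code pair
```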
## Performance
### Python (Pass@K on Bangla Prompts)
| Model | mHumanEval P@1 | mHumanEval P@10 | mHumanEval P@100 | MBPP P@1 | MBPP P@10 | MBPP P@100 |
|:---|:---:|:---:|:---:|:---:|:---:|:---:|
| GPT-3.5 | 0.56 | 0.56 | 0.59 | 0.60 | 0.62 | 0.62 |
| Gemini-Flash 2.5 | 0.58 | 0.61 | 0.62 | 0.62 | 0.62 | 0.70 |
| Gemma-3 (27B) | 0.64 | 0.65 | 0.69 | 0.69 | 0.70 | 0.70 |
| TigerLLM (9B) | 0.63 | 0.69 | 0.72 | 0.61 | 0.68 | 0.73 |
| TigerCoder (1B) | 0.69 | 0.73 | 0.77 | 0.74 | 0.74 | 0.81 |
| **TigerCoder (9B)** | **0.75** | **0.80** | **0.84** | **0.82** | **0.84** | **0.91** |
### Improvements over Strongest Prior Baseline (Δ)
| Model | mHumanEval P@1 | mHumanEval P@10 | mHumanEval P@100 | MBPP P@1 | MBPP P@10 | MBPP P@100 |
|:---|:---:|:---:|:---:|:---:|:---:|:---:|
| TigerCoder (1B) | +0.05 | +0.04 | +0.05 | +0.05 | +0.04 | +0.08 |
| **TigerCoder (9B)** | **+0.11** | **+0.11** | **+0.12** | **+0.13** | **+0.14** | **+0.18** |
### Multi-Language Performance (TigerCoder-9B, Pass@1 on Bangla Prompts)
| Language | mHumanEval P@1 | MBPP P@1 |
|:---|:---:|:---:|
| Python | 0.75 | 0.82 |
| C++ | 0.67 | 0.72 |
| Java | 0.62 | 0.67 |
| JavaScript | 0.57 | 0.62 |
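Pass@K is conventionally computed with the unbiased estimator of Chen et al. (2021). Assuming the same convention here (the evaluation harness is not part of this release), a minimal sketch of that estimator, where `n` completions are sampled per problem and `c` of them pass the unit tests:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased Pass@k estimator (Chen et al., 2021).

    n: completions sampled per problem
    c: completions that pass all unit tests
    k: evaluation budget (k <= n)
    """
    if n - c < k:
        return 1.0  # every size-k subset contains at least one passing completion
    # 1 minus the probability that a random size-k subset contains no passing completion
    return 1.0 - comb(n - c, k) / comb(n, k)

# Example: 200 samples per problem, 30 of which pass
print(pass_at_k(200, 30, 1))   # ~0.15 (the raw per-sample pass rate)
print(pass_at_k(200, 30, 10))  # ~0.81
```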
## Usage
### Quickstart
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_name = "md-nishat-008/TigerCoder-9B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype=torch.bfloat16, device_map="auto"
)

# Bangla coding prompt: "Write a function that computes the factorial of a number."
chat = [{"role": "user", "content": "একটি ফাংশন লিখুন যা একটি সংখ্যার ফ্যাক্টরিয়াল গণনা করে।"}]
inputs = tokenizer.apply_chat_template(
    chat, tokenize=True, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

with torch.no_grad():
    outputs = model.generate(
        inputs,
        max_new_tokens=512,
        do_sample=True,
        temperature=0.7,
        top_p=0.95,
        pad_token_id=tokenizer.eos_token_id,
        eos_token_id=tokenizer.eos_token_id,
    )

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
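If the full bfloat16 checkpoint does not fit in GPU memory, 4-bit quantized loading via `bitsandbytes` should work as with any `transformers` causal LM. This is an untested sketch, not an officially supported configuration:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
import torch

model_name = "md-nishat-008/TigerCoder-9B"

# NF4 quantization with bfloat16 compute; requires the bitsandbytes package.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=bnb_config,
    device_map="auto",
)
```

Generation then proceeds exactly as in the quickstart above.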
## Training Details
| Hyperparameter | Value |
|:---|:---|
| Base Model | TigerLLM-9B-it |
| Training Data | Bangla-Code-Instruct (300K examples) |
| Max Sequence Length | 2048 |
| Batch Size (Train / Eval) | 32 |
| Gradient Accumulation Steps | 8 |
| Epochs | 3 |
| Learning Rate | 1 × 10⁻⁶ |
| Weight Decay | 0.04 |
| Warm-up | 15% of total steps |
| Optimizer | AdamW |
| LR Scheduler | Cosine |
| Precision | BF16 |
| Hardware | NVIDIA A100 (40GB) |
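For reproduction attempts, the table maps naturally onto Hugging Face `TrainingArguments`. The sketch below is a hypothetical reconstruction, not the authors' released script; whether the batch size of 32 is per-device or global is not specified, and the effective batch size works out to 32 × 8 = 256 sequences per optimizer step:

```python
from transformers import TrainingArguments

# Hypothetical reconstruction of the reported setup; not the authors' script.
training_args = TrainingArguments(
    output_dir="tigercoder-9b-sft",
    per_device_train_batch_size=32,  # assumed per-device
    per_device_eval_batch_size=32,
    gradient_accumulation_steps=8,   # effective batch size: 32 * 8 = 256
    num_train_epochs=3,
    learning_rate=1e-6,
    weight_decay=0.04,
    warmup_ratio=0.15,               # "15%" read as a ratio of total steps
    optim="adamw_torch",
    lr_scheduler_type="cosine",
    bf16=True,
)
```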
## Datasets
The **Bangla-Code-Instruct** dataset (300K total) consists of three complementary subsets:
| Subset | Size | Method | Prompt Origin | Code Origin |
|:---|:---:|:---|:---|:---|
| SI (Self-Instruct) | 100K | 5000 expert seeds + GPT-4o expansion | Semi-Natural | Synthetic |
| Syn (Synthetic) | 100K | GPT-4o + Claude 3.5 generation | Synthetic | Synthetic |
| TE (Translated) | 100K | NLLB-200 MT from Evol-Instruct | Translated | Natural (Source) |
All code in the SI and Syn subsets is validated via syntax checking (`ast.parse`) and execution testing (Python 3.13.0, 10 s timeout, 16 GB memory cap).
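The validation harness itself is not part of this release; a minimal sketch consistent with the reported checks (POSIX-only, since it uses the `resource` module to enforce the 16 GB memory cap) might look like this:

```python
import ast
import resource
import subprocess
import sys

def syntax_ok(code: str) -> bool:
    """Stage 1: reject samples that do not parse as valid Python."""
    try:
        ast.parse(code)
        return True
    except SyntaxError:
        return False

def _cap_memory() -> None:
    # Cap the child's address space at 16 GB, per the reported sandbox.
    limit = 16 * 2**30
    resource.setrlimit(resource.RLIMIT_AS, (limit, limit))

def executes_ok(script_path: str, timeout_s: float = 10.0) -> bool:
    """Stage 2: run a candidate solution under the reported 10 s timeout."""
    try:
        result = subprocess.run(
            [sys.executable, script_path],
            timeout=timeout_s,
            preexec_fn=_cap_memory,  # POSIX only
            capture_output=True,
        )
    except subprocess.TimeoutExpired:
        return False
    return result.returncode == 0
```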
## Key Findings
1. **LLMs exhibit a notable performance drop when coding prompts are in Bangla rather than English.** Most models lose 20-50+ percentage points.
2. **Bangla → English machine translation does not help.** Translated prompts perform similarly or worse than native Bangla prompts due to mistranslation of code-specific keywords (e.g., "অক্ষর" (Character) → "Letter", "চলক" (Variable) → "Clever", "স্ট্রিং" (String) → "Rope").
3. **High-quality, targeted data beats scale.** TigerCoder-1B surpasses models 27x its size, and TigerCoder-9B widens the lead to 11-18%, confirming that curated, domain-specific data outweighs model scale for low-resource code generation.
## Limitations
- TigerCoder is optimized primarily for Bangla code generation tasks. Performance on general NLU or non-code tasks may not match general-purpose models.
- The training data is synthetically generated and/or machine-translated, which may introduce biases or artifacts.
- Evaluation is currently limited to MBPP-Bangla and mHumanEval-Bangla; performance on real-world, production-level coding tasks has not been benchmarked.
## Ethics Statement
We adhere to the ethical guidelines outlined in the LREC 2026 CFP. Our benchmark creation involved careful translation and verification by qualified native speakers. We promote transparency through the open-source release of our models, datasets, and benchmark. We encourage responsible downstream use and community scrutiny.
---
## Citation
If you find our work helpful, please consider citing our paper:
```bibtex
@article{raihan2025tigercoder,
  title={TigerCoder: A Novel Suite of LLMs for Code Generation in Bangla},
  author={Raihan, Nishat and Anastasopoulos, Antonios and Zampieri, Marcos},
  journal={arXiv preprint arXiv:2509.09101},
  year={2025}
}
```
You may also find our related work useful:
```bibtex
@inproceedings{raihan-zampieri-2025-tigerllm,
  title = "{T}iger{LLM} - A Family of {B}angla Large Language Models",
  author = "Raihan, Nishat and Zampieri, Marcos",
  booktitle = "Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)",
  month = jul,
  year = "2025",
  address = "Vienna, Austria",
  publisher = "Association for Computational Linguistics",
  url = "https://aclanthology.org/2025.acl-short.69/",
  doi = "10.18653/v1/2025.acl-short.69",
  pages = "887--896",
  ISBN = "979-8-89176-252-7"
}
```
```bibtex
@inproceedings{raihan-etal-2025-mhumaneval,
  title = "m{H}uman{E}val - A Multilingual Benchmark to Evaluate Large Language Models for Code Generation",
  author = "Raihan, Nishat and Anastasopoulos, Antonios and Zampieri, Marcos",
  booktitle = "Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)",
  year = "2025"
}
``` |