

🐯 MBPP-Bangla: A Benchmark for Evaluating Bangla Code Generation

Accepted at LREC 2026

Nishat Raihan, Antonios Anastasopoulos, Marcos Zampieri

George Mason University, Fairfax, VA, USA


The first expert-validated, multi-language Bangla code generation benchmark with 974 problems across 5 programming languages.


⚠️ Note: The benchmark will be released after the LREC 2026 conference. Stay tuned!

Overview

MBPP-Bangla is a comprehensive evaluation benchmark for Bangla code generation, derived from the Mostly Basic Python Programs (MBPP) dataset (Austin et al., 2021). It comprises 974 programming problems, each presented as a Bangla natural language prompt paired with canonical reference solutions in five programming languages: Python, Java, JavaScript, Ruby, and C++. Every problem is assigned to one of five topical classes, enabling controlled, category-wise evaluation.

MBPP-Bangla was designed to complement mHumanEval-Bangla. While mHumanEval-Bangla features docstring-style completions, MBPP-Bangla uses conversational Bangla prompts, stressing a model's ability to comprehend varied natural language specifications and generate syntactically valid code across multiple target languages. Using both benchmarks jointly provides a comprehensive assessment of Bangla code generation capability.

Benchmark Statistics

| Property | Value |
|---|---|
| Total Problems | 974 |
| Source | MBPP (Austin et al., 2021) |
| Prompt Language | Bangla (human-translated) |
| Programming Languages | Python, Java, JavaScript, Ruby, C++ |
| Difficulty | Basic to intermediate |
| Topical Classes | 5 (String, Math, Data Structures, Algorithms, File-I/O) |

Curation Process

MBPP-Bangla was constructed through a rigorous five-step pipeline ensuring linguistic fidelity, technical correctness, and cross-language consistency:

| Step | Description | Details |
|---|---|---|
| 1. Extraction | Extract all 974 basic-to-intermediate problems from MBPP | Covers five topical classes |
| 2. Translation | Two native Bangla speakers independently translate every English prompt into Bangla | Each translator has certified English proficiency (TOEFL iBT > 100) |
| 3. Multi-Language Solutions | Curate or port five reference solutions per task | Python, Java, JavaScript, Ruby, C++ |
| 4. Expert Verification | A third reviewer manually verifies every item | Fluent in Bangla and all five programming languages |
| 5. Release | Package validated records into the benchmark | Each record includes: task ID, Bangla prompt, five reference solutions, test cases, topic label |

Annotator Qualifications

| Role | Count | Qualifications |
|---|---|---|
| Translators | 2 | Native Bangla speakers, TOEFL iBT > 100, worked independently |
| Verifier | 1 | Native Bangla speaker, proficient in all 5 programming languages |

Verification Scope

The expert verifier checked every item across three dimensions:

  1. Linguistic Fidelity: Bangla prompt accurately captures the intent of the original English prompt
  2. Technical Correctness: All five reference solutions produce correct outputs for the provided test cases
  3. Cross-Language Consistency: Solutions across all five programming languages are semantically equivalent
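The technical-correctness check (dimension 2) can be mechanized for the Python solutions by executing each reference program against its MBPP-style assert statements. A minimal illustrative harness is sketched below; it is an assumption for exposition, not the project's actual verification tooling:

```python
def passes_tests(solution_src: str, test_cases: list[str]) -> bool:
    """Run a candidate Python solution against MBPP-style assert
    statements; it passes only if every assertion holds."""
    namespace: dict = {}
    try:
        exec(solution_src, namespace)   # defines the function under test
        for test in test_cases:
            exec(test, namespace)       # raises AssertionError on failure
    except Exception:
        return False
    return True

# Toy example with a hypothetical task
src = "def is_even(n):\n    return n % 2 == 0"
print(passes_tests(src, ["assert is_even(4)", "assert not is_even(3)"]))  # → True
```

A production harness would additionally sandbox execution and enforce timeouts, since generated code is untrusted.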

Output Components

Each record in MBPP-Bangla contains the following fields:

| Field | Description |
|---|---|
| Task ID | Unique identifier for each problem |
| Bangla Prompt | Human-translated Bangla natural language description |
| Python Solution | Reference solution in Python |
| Java Solution | Reference solution in Java |
| JavaScript Solution | Reference solution in JavaScript |
| Ruby Solution | Reference solution in Ruby |
| C++ Solution | Reference solution in C++ |
| Test Cases | Original MBPP test cases for automated evaluation |
| Topic Label | One of: String, Math, Data Structures, Algorithms, File-I/O |
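A record with these fields might look as follows. The key names, prompt, and solution here are hypothetical placeholders for illustration; the released benchmark's exact schema may differ:

```python
# Hypothetical MBPP-Bangla record layout (field names are illustrative)
record = {
    "task_id": 101,
    "bangla_prompt": "...",          # human-translated Bangla description
    "solutions": {
        "python": "def max_of_two(a, b):\n    return a if a > b else b",
        "java": "...",
        "javascript": "...",
        "ruby": "...",
        "cpp": "...",
    },
    "test_cases": [
        "assert max_of_two(3, 7) == 7",
        "assert max_of_two(-1, -5) == -1",
    ],
    "topic": "Math",                 # one of the five topical classes
}
```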

Evaluation Metric

We use Pass@K as the evaluation metric. For a task with n generated programs, m of which pass all tests:

$$\text{Pass@K} = 1 - \frac{\binom{n-m}{K}}{\binom{n}{K}}, \qquad 1 \leq K \leq n$$

We report three variants: Pass@1 (single-attempt quality), Pass@10 (practical shortlist), and Pass@100 (upper bound under heavy sampling).
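The estimator above follows directly from binomial coefficients: it is the probability that a random size-K subset of the n generations contains at least one of the m passing programs. A minimal sketch (not the official evaluation script):

```python
import math

def pass_at_k(n: int, m: int, k: int) -> float:
    """Pass@K: probability that at least one of K programs sampled
    (without replacement) from n generations passes all tests,
    given that m of the n generations pass."""
    if n - m < k:
        return 1.0  # every size-K sample must contain a passing program
    return 1.0 - math.comb(n - m, k) / math.comb(n, k)

# e.g. 200 samples per task, 50 of which pass
print(pass_at_k(200, 50, 1))    # → 0.25 (equals m / n for K = 1)
print(pass_at_k(200, 50, 10))
```

Note that Pass@1 with this estimator reduces to the simple pass rate m/n, while larger K rewards models whose sampling distribution occasionally finds a correct program.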

Benchmark Results (Python, Bangla Prompts)

| Model | Pass@1 | Pass@10 | Pass@100 |
|---|---|---|---|
| GPT-3.5 | 0.60 | 0.62 | 0.62 |
| Gemini-Flash 2.5 | 0.62 | 0.62 | 0.70 |
| GPT-4o-mini | 0.51 | 0.53 | 0.54 |
| LLaMA-3.2 (11B) | 0.22 | 0.22 | 0.30 |
| Gemma-3 (27B) | 0.69 | 0.70 | 0.70 |
| Phi-4 (7B) | 0.09 | 0.15 | 0.20 |
| TigerLLM (1B) | 0.65 | 0.68 | 0.71 |
| TigerLLM (9B) | 0.61 | 0.68 | 0.73 |
| TigerCoder (1B) | 0.74 | 0.74 | 0.81 |
| TigerCoder (9B) | 0.82 | 0.84 | 0.91 |

Comparison with mHumanEval-Bangla

| Feature | MBPP-Bangla | mHumanEval-Bangla |
|---|---|---|
| Size | 974 problems | 164 problems |
| Prompt Style | Conversational Bangla | Docstring-style completion |
| Programming Languages | 5 (Python, Java, JS, Ruby, C++) | 5 (Python, Java, JS, Ruby, C++) |
| Translation | Human (independent, dual-annotator) | Human |
| Topical Classes | 5 categories | N/A |
| Statistical Robustness | Higher (larger size) | Lower (smaller size) |
| Complementary Strength | Varied NL specifications | Traditional code completion |

As the comparison shows, the two benchmarks stress complementary aspects of Bangla code generation and are best used together.

Intended Use

  • Evaluating LLMs on Bangla code generation across multiple programming languages
  • Benchmarking multilingual code generation performance
  • Comparing model capabilities on conversational vs. docstring-style Bangla prompts (when paired with mHumanEval-Bangla)
  • Studying the impact of natural language on code generation quality across low-resource languages
  • Category-wise evaluation using topical class labels
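Category-wise evaluation amounts to grouping per-task outcomes by the topic label before computing pass rates. A small sketch with hypothetical outcome data:

```python
from collections import defaultdict

# Hypothetical per-task outcomes: (topic_label, passed_all_tests)
results = [
    ("Math", True), ("Math", False),
    ("String", True), ("String", True),
    ("Algorithms", False),
]

by_topic = defaultdict(lambda: [0, 0])   # topic -> [passes, total]
for topic, passed in results:
    by_topic[topic][0] += int(passed)
    by_topic[topic][1] += 1

# Print per-topic pass rates, e.g. "Math: 1/2 = 0.50"
for topic, (p, t) in sorted(by_topic.items()):
    print(f"{topic}: {p}/{t} = {p / t:.2f}")
```

Breakdowns like this can reveal whether a model's Bangla comprehension degrades on particular problem categories (e.g. File-I/O vs. String tasks).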

Limitations

  • The benchmark is derived from MBPP, which covers basic-to-intermediate difficulty. It does not include advanced or production-level programming tasks.
  • Reference solutions were curated or ported, not independently authored from scratch for each language, which may introduce stylistic uniformity.
  • The benchmark evaluates functional correctness via test cases only. Code quality aspects such as readability, efficiency, and best practices are not assessed.
  • While translations were independently produced and expert-verified, subtle nuances in Bangla phrasing may still vary from the original English intent.

Ethics Statement

We adhere to the ethical guidelines outlined in the LREC 2026 CFP. The benchmark was constructed through careful human translation and expert verification by qualified native speakers. We promote transparency through open-source release and encourage responsible downstream use and community scrutiny.


Citation

If you find our work helpful, please consider citing our paper:

@article{raihan2025tigercoder,
  title={{T}iger{C}oder: A Novel Suite of {LLM}s for Code Generation in {B}angla},
  author={Raihan, Nishat and Anastasopoulos, Antonios and Zampieri, Marcos},
  journal={arXiv preprint arXiv:2509.09101},
  year={2025}
}

You may also find our related work useful:

@inproceedings{raihan-zampieri-2025-tigerllm,
    title = "{T}iger{LLM} - A Family of {B}angla Large Language Models",
    author = "Raihan, Nishat and
      Zampieri, Marcos",
    booktitle = "Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)",
    month = jul,
    year = "2025",
    address = "Vienna, Austria",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2025.acl-short.69/",
    doi = "10.18653/v1/2025.acl-short.69",
    pages = "887--896",
    ISBN = "979-8-89176-252-7"
}
@inproceedings{raihan-etal-2025-mhumaneval,
    title = "m{H}uman{E}val - A Multilingual Benchmark to Evaluate Large Language Models for Code Generation",
    author = "Raihan, Nishat and
      Anastasopoulos, Antonios and
      Zampieri, Marcos",
    booktitle = "Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)",
    year = "2025",
}
