---
configs:
  - config_name: CA
    data_files:
      - split: test
        path:
          - Task1(CA).csv
  - config_name: DS
    data_files:
      - split: test
        path:
          - Task2(DS).csv
  - config_name: CQ
    data_files:
      - split: test
        path:
          - Task3(CQ).csv
  - config_name: FiB
    data_files:
      - split: test
        path:
          - Task4(FiB).csv
  - config_name: QNLI
    data_files:
      - split: test
        path:
          - Task5(QNLI).csv
  - config_name: AWP
    data_files:
      - split: test
        path:
          - Task6(AWP).csv
license: mit
language:
  - bn
pretty_name: BenNumEval
size_categories:
  - 1K<n<10K
---

# BenNumEval: A Benchmark to Assess LLMs’ Numerical Reasoning Capabilities in Bengali

BenNumEval is a novel benchmark designed to evaluate the numerical reasoning abilities of Large Language Models (LLMs) in the Bengali language. It introduces six diverse task categories and a high-quality dataset containing over 3,200 examples derived from educational and real-world sources.

## 📁 Dataset Overview

BenNumEval includes 3,255 curated examples divided into six task types:

| Task Type | Description | Examples |
|---|---|---|
| Commonsense + Arithmetic (CA) | Problems combining arithmetic with common-sense knowledge | 410 |
| Domain-Specific (DS) | Problems requiring domain knowledge (e.g., physics, chemistry, CS) | 705 |
| Commonsense + Quantitative (CQ) | Simple comparisons based on everyday logic | 400 |
| Fill-in-the-Blanks (FiB) | Arithmetic word problems in fill-in-the-blank style | 665 |
| Quantitative NLI (QNLI) | Natural language inference involving numerical understanding | 425 |
| Arithmetic Word Problems (AWP) | Real-world word problems requiring arithmetic reasoning | 650 |

## 💻 Code Snippet to Download the Dataset

Install the `datasets` library if you haven't already:

```bash
pip install datasets
```

Then load a subset:

```python
from datasets import load_dataset

# Load the Task1 (CA) subset; use DS, CQ, FiB, QNLI, or AWP for the other tasks.
dataset = load_dataset("ka05ar/BenNumEval", "CA")
```
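To evaluate a model across the full benchmark, you can load all six subsets in one pass. A minimal sketch (the config names and example counts come from the dataset card above; the helper name `load_all_subsets` is just illustrative):

```python
# Mapping of BenNumEval config names to their example counts (from the dataset card).
TASK_CONFIGS = {
    "CA": 410,    # Commonsense + Arithmetic
    "DS": 705,    # Domain-Specific
    "CQ": 400,    # Commonsense + Quantitative
    "FiB": 665,   # Fill-in-the-Blanks
    "QNLI": 425,  # Quantitative NLI
    "AWP": 650,   # Arithmetic Word Problems
}

def load_all_subsets():
    """Download the test split of every subset; requires network access."""
    # Imported here so the mapping above stays usable without `datasets` installed.
    from datasets import load_dataset
    return {
        name: load_dataset("ka05ar/BenNumEval", name, split="test")
        for name in TASK_CONFIGS
    }
```

The six subsets together should give you the full 3,255 examples.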

## 📜 Citation

If you use BenNumEval in your work, please cite:

```bibtex
@inproceedings{ahmed2025bennumeval,
  title={BenNumEval: A Benchmark to Assess LLMs’ Numerical Reasoning Capabilities in Bengali},
  author={Ahmed, Kawsar and Osama, Md and Sharif, Omar and Hossain, Eftekhar and Hoque, Mohammed Moshiul},
  booktitle={Findings of the Association for Computational Linguistics: ACL 2025},
  pages={17782--17799},
  year={2025}
}
```