
ProverbEval: Benchmark for Evaluating LLMs on Low-Resource Proverbs

This dataset accompanies the paper:
"ProverbEval: Exploring LLM Evaluation Challenges for Low-resource Language Understanding"
arXiv:2411.05049v3

Dataset Summary

ProverbEval is a culturally grounded evaluation benchmark designed to assess the language understanding abilities of large language models (LLMs) in low-resource settings. It consists of tasks based on proverbs in five languages:

  • Amharic
  • Afaan Oromo
  • Tigrinya
  • Ge’ez
  • English

The benchmark focuses on three tasks:

  • Task 1: Multiple Choice Meaning Matching
  • Task 2: Fill-in-the-Blank
  • Task 3: Proverb Generation

Supported Tasks and Formats

Each task is formatted to support multilingual and cross-lingual evaluation:

Task 1: Multiple Choice Meaning Matching

  • Input: A proverb in a given language.
  • Output: Select one correct meaning from four choices.
  • Format: Multiple-choice question with optional language variants for choices (native or English).
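As a sketch of how a Task 1 item can be rendered into a model prompt (the record fields below are illustrative assumptions, not the dataset's actual column names):

```python
# Hypothetical Task 1 record; field names and values are illustrative only.
record = {
    "language": "eng",
    "proverb": "A bird in the hand is worth two in the bush.",
    "choices": ["meaning A", "meaning B", "meaning C", "meaning D"],
    "answer": 1,  # index of the correct meaning
}

def build_mc_prompt(rec):
    """Render a multiple-choice meaning-matching prompt from a record."""
    letters = "ABCD"
    lines = [f"Proverb: {rec['proverb']}",
             "Which option best captures its meaning?"]
    lines += [f"{letters[i]}. {c}" for i, c in enumerate(rec["choices"])]
    lines.append("Answer:")
    return "\n".join(lines)

print(build_mc_prompt(record))
```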

Task 2: Fill-in-the-Blank

  • Input: A proverb with one word removed and four candidate words.
  • Output: Select the most suitable word to complete the proverb.
  • Format: Cloze-style multiple-choice.
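A Task 2 item can be rendered similarly; again, the blank marker and field names here are assumptions for illustration:

```python
# Hypothetical Task 2 record; the blank marker "____" is an assumption.
record = {
    "proverb": "A bird in the ____ is worth two in the bush.",
    "choices": ["hand", "sky", "nest", "cage"],
    "answer": "hand",
}

def build_cloze_prompt(rec):
    """Render a cloze-style multiple-choice prompt from a record."""
    options = ", ".join(rec["choices"])
    return (f"Complete the proverb: {rec['proverb']}\n"
            f"Candidates: {options}\n"
            f"Answer:")

print(build_cloze_prompt(record))
```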

Task 3: Generation

  • Input: A detailed description of a proverb in English or the native language.
  • Output: The matching proverb generated in the target language.
  • Format: Text generation.
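For Task 3, exact string match is one simple (if strict) way to score generated proverbs; Unicode normalization matters for Ge'ez-script languages. This is an illustrative scorer, not the paper's official metric:

```python
import unicodedata

def normalize(text):
    """NFC-normalize and collapse whitespace so superficial differences don't count."""
    return " ".join(unicodedata.normalize("NFC", text).split())

def exact_match(prediction, target):
    """True if the generated proverb matches the reference after normalization."""
    return normalize(prediction) == normalize(target)

print(exact_match("A stitch in  time", "A stitch in time"))
```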

Languages and Statistics


| Language     | Task 1 | Task 2 | Task 3 |
|--------------|--------|--------|--------|
| Amharic      | 483    | 494    | 484    |
| Afaan Oromo  | 502    | 493    | 502    |
| Tigrinya     | 380    | 503    | 380    |
| Ge’ez        | 434    | 429    | 434    |
| English      | 437    | 462    | 437    |

Note: The dataset contains test sets only. Few-shot examples are also included for Task 2.

Data Structure

Each example includes the following fields (depending on the task):

  • language
  • task
  • prompt
  • choices or description
  • answer or target_proverb
  • prompt_type (native or English)
  • choice_language (native or English)

Usage

You can load the dataset directly using the datasets library:

from datasets import load_dataset

# Load the full dataset
dataset = load_dataset("israel/ProverbEval")
Repository Structure

The repository is organized by language code:

.
β”œβ”€β”€ amh
β”‚   β”œβ”€β”€ amharic-fill_test.csv
β”‚   β”œβ”€β”€ amh_english_test_1.csv
β”‚   β”œβ”€β”€ amh_english_test_2.csv
β”‚   β”œβ”€β”€ amh_english_test_3.csv
β”‚   β”œβ”€β”€ amh_fill_1.csv
β”‚   β”œβ”€β”€ amh_fill_2.csv
β”‚   β”œβ”€β”€ amh_fill_3.csv
β”‚   β”œβ”€β”€ amh_meaining_generation_english.csv
β”‚   β”œβ”€β”€ amh_meaining_generation_native.csv
β”‚   β”œβ”€β”€ amh_native_test_1.csv
β”‚   β”œβ”€β”€ amh_native_test_2.csv
β”‚   └── amh_native_test_3.csv
β”œβ”€β”€ eng
β”‚   β”œβ”€β”€ eng_fill_test.csv
β”‚   β”œβ”€β”€ eng_meaining_generation_native.csv
β”‚   β”œβ”€β”€ eng_native_test_1.csv
β”‚   β”œβ”€β”€ eng_native_test_2.csv
β”‚   └── eng_native_test_3.csv
β”œβ”€β”€ geez
β”‚   β”œβ”€β”€ geez_english_test_1.csv
β”‚   β”œβ”€β”€ geez_english_test_2.csv
β”‚   β”œβ”€β”€ geez_english_test_3.csv
β”‚   β”œβ”€β”€ geez_fill_1.csv
β”‚   β”œβ”€β”€ geez_fill_2.csv
β”‚   β”œβ”€β”€ geez_fill_3.csv
β”‚   └── geez_meaining_generation_english.csv
β”œβ”€β”€ orm
β”‚   β”œβ”€β”€ orm_english_test_1.csv
β”‚   β”œβ”€β”€ orm_english_test_2.csv
β”‚   β”œβ”€β”€ orm_english_test_3.csv
β”‚   β”œβ”€β”€ orm_fill_1.csv
β”‚   β”œβ”€β”€ orm_fill_2.csv
β”‚   β”œβ”€β”€ orm_fill_3.csv
β”‚   β”œβ”€β”€ orm_meaining_generation_english.csv
β”‚   β”œβ”€β”€ orm_meaining_generation_native.csv
β”‚   β”œβ”€β”€ orm_native_test_1.csv
β”‚   β”œβ”€β”€ orm_native_test_2.csv
β”‚   β”œβ”€β”€ orm_native_test_3.csv
β”‚   └── oromo_fill_test.csv
└── tir
    β”œβ”€β”€ tir_fill_1.csv
    β”œβ”€β”€ tir_fill_2.csv
    └── tir_fill_3.csv

Citation

@article{azime2024proverbeval,
  title={ProverbEval: Exploring LLM Evaluation Challenges for Low-resource Language Understanding},
  author={Azime, Israel Abebe and Tonja, Atnafu Lambebo and Belay, Tadesse Destaw and Chanie, Yonas and Balcha, Bontu Fufa and Abadi, Negasi Haile and Ademtew, Henok Biadglign and Nerea, Mulubrhan Abebe and Yadeta, Debela Desalegn and Geremew, Derartu Dagne and others},
  journal={arXiv preprint arXiv:2411.05049},
  year={2024}
}