# ProverbEval: A Benchmark for Evaluating LLMs on Low-Resource Proverbs
This dataset accompanies the paper "ProverbEval: Exploring LLM Evaluation Challenges for Low-resource Language Understanding" (arXiv:2411.05049v3).
## Dataset Summary
ProverbEval is a culturally grounded evaluation benchmark designed to assess the language understanding abilities of large language models (LLMs) in low-resource settings. It consists of tasks based on proverbs in five languages:
- Amharic
- Afaan Oromo
- Tigrinya
- Ge'ez
- English
The benchmark focuses on three tasks:
- Task 1: Multiple-Choice Meaning Matching
- Task 2: Fill-in-the-Blank
- Task 3: Proverb Generation
## Supported Tasks and Formats
Each task is formatted to support multilingual and cross-lingual evaluation:
### Task 1: Multiple-Choice Meaning Matching
- Input: A proverb in a given language.
- Output: Select one correct meaning from four choices.
- Format: Multiple-choice question with optional language variants for choices (native or English).
### Task 2: Fill-in-the-Blank
- Input: A proverb with one word removed and four candidate words.
- Output: Select the most suitable word to complete the proverb.
- Format: Cloze-style multiple-choice.
### Task 3: Proverb Generation
- Input: A detailed description of a proverb in English or the native language.
- Output: The matching proverb generated in the target language.
- Format: Text generation.
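To make the multiple-choice format concrete, the sketch below assembles a Task 1 prompt from a proverb and its four candidate meanings. It is a minimal, hypothetical formatter: the `build_task1_prompt` helper, the sample record, and its field names are illustrative assumptions, not code or data from the dataset itself.

```python
# Sketch: build a lettered multiple-choice prompt for Task 1 (meaning matching).
# The record layout here is hypothetical; adapt field names to the actual CSVs.

LETTERS = ["A", "B", "C", "D"]

def build_task1_prompt(proverb: str, choices: list[str]) -> str:
    """Format a proverb and four candidate meanings as a lettered MCQ."""
    lines = [f"Proverb: {proverb}", "Which option best captures its meaning?"]
    for letter, choice in zip(LETTERS, choices):
        lines.append(f"{letter}. {choice}")
    lines.append("Answer:")
    return "\n".join(lines)

# Hypothetical English example record.
example = {
    "proverb": "A stitch in time saves nine.",
    "choices": [
        "Acting early prevents bigger problems later.",
        "Sewing is a valuable skill.",
        "Nine is a lucky number.",
        "Time heals all wounds.",
    ],
}
print(build_task1_prompt(example["proverb"], example["choices"]))
```

The same pattern applies to Task 2 by substituting the blanked proverb for the question line and the candidate words for the meanings.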
## Languages and Statistics
| Language | Task 1 | Task 2 | Task 3 |
|--------------|--------|--------|--------|
| Amharic | 483 | 494 | 484 |
| Afaan Oromo | 502 | 493 | 502 |
| Tigrinya | 380 | 503 | 380 |
| Ge'ez | 434 | 429 | 434 |
| English | 437 | 462 | 437 |
Note: The dataset focuses on test sets only. Few-shot examples are also included for Task 2.
## Data Structure
Each example includes the following fields (depending on the task):
- `language`
- `task`
- `prompt`
- `choices` or `description`
- `answer` or `target_proverb`
- `prompt_type` (native or English)
- `choice_language` (native or English)
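Given these fields, scoring the two multiple-choice tasks reduces to comparing predicted choice labels against the gold `answer` field. The sketch below is a minimal accuracy function; the sample records and prediction values are hypothetical, and only the field names follow the list above.

```python
# Sketch: score multiple-choice predictions against gold answers.
# Records mimic the card's fields (language, task, answer); values are made up.

def accuracy(records: list[dict], predictions: list[str]) -> float:
    """Fraction of predictions matching the gold `answer` field (case-insensitive)."""
    correct = sum(
        pred.strip().upper() == rec["answer"].strip().upper()
        for rec, pred in zip(records, predictions)
    )
    return correct / len(records)

# Hypothetical evaluation records and model predictions.
records = [
    {"language": "amh", "task": "task1", "answer": "A"},
    {"language": "orm", "task": "task2", "answer": "C"},
    {"language": "tir", "task": "task1", "answer": "B"},
]
preds = ["A", "B", "B"]
print(accuracy(records, preds))  # 2 of 3 correct
```

For Task 3 (generation), exact-match scoring against `target_proverb` is usually too strict; the paper's evaluation setup should be consulted for the metrics actually used.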
## Usage
You can load the dataset directly with the `datasets` library:

```python
from datasets import load_dataset

# Load the full dataset
dataset = load_dataset("israel/ProverbEval")
```
The repository files are organized as follows:

```text
.
├── amh
│   ├── amharic-fill_test.csv
│   ├── amh_english_test_1.csv
│   ├── amh_english_test_2.csv
│   ├── amh_english_test_3.csv
│   ├── amh_fill_1.csv
│   ├── amh_fill_2.csv
│   ├── amh_fill_3.csv
│   ├── amh_meaining_generation_english.csv
│   ├── amh_meaining_generation_native.csv
│   ├── amh_native_test_1.csv
│   ├── amh_native_test_2.csv
│   └── amh_native_test_3.csv
├── eng
│   ├── eng_fill_test.csv
│   ├── eng_meaining_generation_native.csv
│   ├── eng_native_test_1.csv
│   ├── eng_native_test_2.csv
│   └── eng_native_test_3.csv
├── geez
│   ├── geez_english_test_1.csv
│   ├── geez_english_test_2.csv
│   ├── geez_english_test_3.csv
│   ├── geez_fill_1.csv
│   ├── geez_fill_2.csv
│   ├── geez_fill_3.csv
│   └── geez_meaining_generation_english.csv
├── orm
│   ├── orm_english_test_1.csv
│   ├── orm_english_test_2.csv
│   ├── orm_english_test_3.csv
│   ├── orm_fill_1.csv
│   ├── orm_fill_2.csv
│   ├── orm_fill_3.csv
│   ├── orm_meaining_generation_english.csv
│   ├── orm_meaining_generation_native.csv
│   ├── orm_native_test_1.csv
│   ├── orm_native_test_2.csv
│   ├── orm_native_test_3.csv
│   └── oromo_fill_test.csv
└── tir
    ├── tir_fill_1.csv
    ├── tir_fill_2.csv
    └── tir_fill_3.csv
```
## Citation

```bibtex
@article{azime2024proverbeval,
  title={ProverbEval: Exploring LLM Evaluation Challenges for Low-resource Language Understanding},
  author={Azime, Israel Abebe and Tonja, Atnafu Lambebo and Belay, Tadesse Destaw and Chanie, Yonas and Balcha, Bontu Fufa and Abadi, Negasi Haile and Ademtew, Henok Biadglign and Nerea, Mulubrhan Abebe and Yadeta, Debela Desalegn and Geremew, Derartu Dagne and others},
  journal={arXiv preprint arXiv:2411.05049},
  year={2024}
}
```