---
license: cc-by-4.0
configs:
- config_name: Arab-West
  data_files:
  - split: ar
    path: arab-west/ar/*.jsonl
  - split: en
    path: arab-west/en/*.jsonl
- config_name: Asia-West
  data_files:
  - split: ko
    path: asia-west/ko/*.jsonl
  - split: en
    path: asia-west/en/*.jsonl
- config_name: South America-West
  data_files:
  - split: es
    path: south_america-west/es/*.jsonl
  - split: en
    path: south_america-west/en/*.jsonl
---
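Each split above points to JSON Lines files. A minimal sketch of reading such a file with only the standard library; the field names (`predicate`, `sub_label`, `obj_label`) are assumptions about the record schema, not confirmed by this card:

```python
import json

def read_triples(path):
    """Yield one record (dict) per non-empty line of a JSONL file."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:  # skip blank lines
                yield json.loads(line)

# Hypothetical record, mirroring the assumed schema above.
sample = '{"predicate": "P36", "sub_label": "Egypt", "obj_label": "Cairo"}'
record = json.loads(sample)
print(record["predicate"], record["sub_label"], record["obj_label"])
```

The same files can of course also be loaded through the `configs` mapping above with the `datasets` library, selecting a configuration name and a split.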
# DLAMA-v1
A benchmark of culturally diverse factual triples curated from Wikidata and Wikipedia. Each configuration pairs facts from a non-Western region (Arab, Asian, or South American countries) with facts from Western countries.
| Predicate | Template |
|---|---|
| P17 (Country) | [X] is located in [Y] . |
| P19 (Place of birth) | [X] was born in [Y] . |
| P20 (Place of death) | [X] died in [Y] . |
| P27 (Country of citizenship) | [X] is [Y] citizen . |
| P30 (Continent) | [X] is located in [Y] . |
| P36 (Capital) | The capital of [X] is [Y] . |
| P37 (Official language) | The official language of [X] is [Y] . |
| P47 (Shares border with) | [X] shares border with [Y] . |
| P103 (Native language) | The native language of [X] is [Y] . |
| P106 (Occupation) | [X] is a [Y] by profession . |
| P136 (Genre) | [X] plays [Y] music . |
| P190 (Sister city) | [X] and [Y] are twin cities . |
| P264 (Record label) | [X] is represented by music label [Y] . |
| P364 (Original language of work) | The original language of [X] is [Y] . |
| P449 (Original network) | [X] was originally aired on [Y] . |
| P495 (Country of origin) | [X] was created in [Y] . |
| P530 (Diplomatic relation) | [X] maintains diplomatic relations with [Y] . |
| P1303 (Instrument) | [X] plays [Y] . |
| P1376 (Capital of) | [X] is the capital of [Y] . |
| P1412 (Languages spoken or published) | [X] used to communicate in [Y] . |
Note: For each triple, [X] is the subject (sub) and [Y] is the object (obj).
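A probing prompt is built by substituting the subject for [X] and the object for [Y] in the predicate's template. A minimal sketch (template strings copied from the table above; the helper name is illustrative):

```python
# Templates copied from the table above, keyed by Wikidata predicate ID
# (three shown for brevity).
TEMPLATES = {
    "P17": "[X] is located in [Y] .",
    "P36": "The capital of [X] is [Y] .",
    "P37": "The official language of [X] is [Y] .",
}

def fill_template(predicate, sub, obj):
    """Replace the [X]/[Y] placeholders with the triple's subject and object."""
    return TEMPLATES[predicate].replace("[X]", sub).replace("[Y]", obj)

print(fill_template("P36", "Egypt", "Cairo"))
# -> The capital of Egypt is Cairo .
```

To probe a masked language model, the object slot would instead be replaced with the model's mask token and the model scored on recovering the correct object.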
## Citation
If you find the benchmark useful, please cite the following paper:
```bibtex
@inproceedings{keleg-magdy-2023-dlama,
    title = "{DLAMA}: A Framework for Curating Culturally Diverse Facts for Probing the Knowledge of Pretrained Language Models",
    author = "Keleg, Amr and
      Magdy, Walid",
    editor = "Rogers, Anna and
      Boyd-Graber, Jordan and
      Okazaki, Naoaki",
    booktitle = "Findings of the Association for Computational Linguistics: ACL 2023",
    month = jul,
    year = "2023",
    address = "Toronto, Canada",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2023.findings-acl.389/",
    doi = "10.18653/v1/2023.findings-acl.389",
    pages = "6245--6266",
    abstract = "A few benchmarking datasets have been released to evaluate the factual knowledge of pretrained language models. These benchmarks (e.g., LAMA, and ParaRel) are mainly developed in English and later are translated to form new multilingual versions (e.g., mLAMA, and mParaRel). Results on these multilingual benchmarks suggest that using English prompts to recall the facts from multilingual models usually yields significantly better and more consistent performance than using non-English prompts. Our analysis shows that mLAMA is biased toward facts from Western countries, which might affect the fairness of probing models. We propose a new framework for curating factual triples from Wikidata that are culturally diverse. A new benchmark DLAMA-v1 is built of factual triples from three pairs of contrasting cultures having a total of 78,259 triples from 20 relation predicates. The three pairs comprise facts representing the (Arab and Western), (Asian and Western), and (South American and Western) countries respectively. Having a more balanced benchmark (DLAMA-v1) supports that mBERT performs better on Western facts than non-Western ones, while monolingual Arabic, English, and Korean models tend to perform better on their culturally proximate facts. Moreover, both monolingual and multilingual models tend to make a prediction that is culturally or geographically relevant to the correct label, even if the prediction is wrong."
}
```