---
license: mit
dataset_info:
  features:
    - name: id
      dtype: string
    - name: entity
      dtype: string
    - name: answer
      list: string
    - name: type
      dtype: string
    - name: sub-type
      dtype: string
    - name: prompts
      struct:
        - name: English
          list: string
        - name: German
          list: string
        - name: Hindi
          list: string
        - name: Japanese
          list: string
        - name: Mandarin
          list: string
        - name: Russian
          list: string
        - name: Spanish
          list: string
    - name: prompt_ans
      struct:
        - name: English
          list: string
        - name: German
          list: string
        - name: Hindi
          list: string
        - name: Japanese
          list: string
        - name: Mandarin
          list: string
        - name: Russian
          list: string
        - name: Spanish
          list: string
    - name: translate_entity
      struct:
        - name: English
          dtype: string
        - name: German
          dtype: string
        - name: Hindi
          dtype: string
        - name: Japanese
          dtype: string
        - name: Mandarin
          dtype: string
        - name: Russian
          dtype: string
        - name: Spanish
          dtype: string
    - name: topic
      dtype: string
    - name: country
      dtype: string
  splits:
    - name: test
      num_bytes: 10138515
      num_examples: 1820
  download_size: 1772881
  dataset_size: 10138515
configs:
  - config_name: default
    data_files:
      - split: test
        path: data/test-*
task_categories:
  - question-answering
language:
  - en
  - de
  - es
  - ja
  - hi
  - zh
  - ru
tags:
  - multilingual
  - question-answering
  - cultural-literacy
  - benchmark
pretty_name: XNationQA
size_categories:
  - 10K<n<100K
---

# XNationQA

This is the official dataset for the EMNLP 2025 paper: "Do You Know About My Nation? Investigating Multilingual Language Models' Cultural Literacy Through Factual Knowledge".

## Abstract

Most multilingual question-answering benchmarks, while covering a diverse pool of languages, do not factor in regional diversity in the information they capture and tend to be Western-centric. This introduces a significant gap in fairly evaluating multilingual models' comprehension of factual information from diverse geographical locations. To address this, we introduce XNationQA for investigating the cultural literacy of multilingual LLMs. XNationQA encompasses a total of 49,280 questions on the geography, culture, and history of nine countries, presented in seven languages. We benchmark eight standard multilingual LLMs on XNationQA and evaluate them using two novel transference metrics. Our analyses uncover a considerable discrepancy in the models' accessibility to culturally specific facts across languages. Notably, we often find that a model demonstrates greater knowledge of cultural information in English than in the dominant language of the respective culture. The models exhibit better performance in Western languages, although this does not necessarily translate to being more literate for Western countries, which is counterintuitive. Furthermore, we observe that models have a very limited ability to transfer knowledge across languages, particularly evident in open-source models.

## 🚀 How to Use

You can load the dataset with the Hugging Face `datasets` library:

```python
from datasets import load_dataset

# Load the dataset from the Hub
dataset = load_dataset("anwoy/XNationQA")

# This dataset contains only a 'test' split
test_data = dataset['test']

# Inspect an example
print(test_data[0])

# {
#  'id': 'india_0',
#  'entity': 'Campbell Bay National Park',
#  'answer': ['Andaman and Nicobar Islands'],
#  'type': 'location',
#  'sub-type': 'state',
#  'prompts': {
#    'Hindi': [...],
#    'English': [...],
#    'Spanish': [...],
#    'Mandarin': [...],
#    'Japanese': [...],
#    'Russian': [...],
#    'German': [...]
#  },
#  'prompt_ans': {
#    'Hindi': [...],
#    'English': [...],
#    ...
#  },
#  'translate_entity': {
#    'Hindi': 'कैंपबेल बे राष्ट्रीय उद्यान',
#    'English': 'Campbell Bay National Park',
#    ...
#  },
#  'topic': 'national_park_qa',
#  'country': 'india'
# }
```
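Each record stores its prompts and gold answers per language, so a common preprocessing step is flattening a record into `(language, prompt, gold_answers)` triples before querying a model. The sketch below is only illustrative: it uses a toy record whose field names mirror the schema above, but whose prompt text is invented; real records come from `load_dataset("anwoy/XNationQA")["test"]`.

```python
# Minimal sketch: flatten one XNationQA-style record into per-language
# (language, prompt, gold_answers) evaluation items. The toy record below
# only mimics the dataset schema; its prompt strings are made up.

LANGUAGES = ["English", "German", "Hindi", "Japanese", "Mandarin", "Russian", "Spanish"]

record = {
    "id": "india_0",
    "entity": "Campbell Bay National Park",
    "answer": ["Andaman and Nicobar Islands"],
    "prompts": {
        lang: [f"[{lang}] In which state is Campbell Bay National Park?"]
        for lang in LANGUAGES
    },
    "country": "india",
}

def flatten(record):
    """Yield (language, prompt, gold_answers) for every prompt in a record."""
    for lang, prompts in record["prompts"].items():
        for prompt in prompts:
            yield lang, prompt, record["answer"]

items = list(flatten(record))
print(len(items))  # one prompt per language here -> 7
```

The gold answers are shared across languages, so each triple reuses the record-level `answer` list; per-language reference answers, when needed, live in the `prompt_ans` field instead.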

## 📜 Citation

If you use XNationQA in your work, please cite our original paper:

```bibtex
@inproceedings{tanwar-etal-2025-know,
    title = "Do You Know About My Nation? Investigating Multilingual Language Models' Cultural Literacy Through Factual Knowledge",
    author = "Tanwar, Eshaan  and
      Chatterjee, Anwoy  and
      Saxon, Michael  and
      Albalak, Alon  and
      Wang, William Yang  and
      Chakraborty, Tanmoy",
    editor = "Christodoulopoulos, Christos  and
      Chakraborty, Tanmoy  and
      Rose, Carolyn  and
      Peng, Violet",
    booktitle = "Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing",
    month = nov,
    year = "2025",
    address = "Suzhou, China",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2025.emnlp-main.756/",
    pages = "14967--14990",
    ISBN = "979-8-89176-332-6"
}
```