Datasets:
Dataset size discrepancy between the repository and the paper
In the paper, you mention that the dataset contains 52.6k entries. On Hugging Face, however, there are approximately 314k entries (308k multiple-choice questions and 8k short-answer questions).
Could you please clarify the reason for this difference? Is the full dataset available?
For short-answer questions, the discrepancy comes from how the dataset is represented on Hugging Face. As shown in Table 1 of the paper, for each non-English-speaking country (i.e., all except US/GB) we include 500 questions in English and 500 in the local language, for 1,000 questions per country. On Hugging Face, each entry stores the local-language question under Question and its English version under Translation, so the preview counts only 500 entries per country (~8k entries in total), even though this corresponds to the 15k short-answer questions reported in the paper.
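The row-vs-question accounting can be sketched as follows. This is a toy illustration, not the actual dataset: each Hugging Face row for a non-English-speaking country carries both the local-language question (Question) and its English version (Translation), so counting questions means counting two per such row. The field names come from the answer above; the sample rows are made up.

```python
# Toy rows mimicking the described layout: one row per local-language
# question, with its English counterpart stored in "Translation".
rows = [
    {"country": "FR", "Question": "Question en français", "Translation": "Question in English"},
    {"country": "FR", "Question": "Autre question", "Translation": "Another question"},
]

def question_count(rows):
    # Each row with a non-empty "Translation" represents two questions:
    # the local-language one and its English version.
    return sum(2 if r.get("Translation") else 1 for r in rows)

print(len(rows), question_count(rows))  # 2 rows, but 4 questions
```

Scaled up, the same two-to-one ratio turns ~8k previewed rows into the ~15k short-answer questions reported in the paper (US/GB rows, being English-only, count once).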
For MCQs, the dataset was updated after the paper submission: the number of options was unified to four, and additional validation was performed. The numbers in the paper therefore reflect an earlier version; the latest and complete version is the one currently available on Hugging Face.