bzantium committed · Commit 8972c91 · verified · 1 Parent(s): 685996b

Upload folder using huggingface_hub

Files changed (2):
  1. README.md (+56 -0)
  2. kosimpleqa.jsonl (+0 -0)
README.md ADDED
@@ -0,0 +1,56 @@
# KoSimpleQA

KoSimpleQA (Korean SimpleQA) is a 1,000-prompt benchmark for evaluating factuality in large language models (LLMs), with a focus on **Korean cultural knowledge**. The benchmark is designed to be challenging yet easy to grade: it consists of short, fact-seeking questions with unambiguous answers. Rather than simply translating an existing English benchmark, KoSimpleQA provides culturally grounded evaluation, assessing not just linguistic competence in Korean but also understanding of Korean cultural context.

## Motivation

Existing factuality benchmarks such as SimpleQA focus primarily on English and Chinese, with questions rooted in Anglophone cultural contexts. Simply translating these benchmarks into Korean is insufficient: translated questions do not meaningfully assess models trained on Korean data. Evaluating LLMs in Korean requires both **linguistic competence** and the **cultural knowledge** associated with the Korean language community.

## Key Results

- Even the strongest model evaluated achieves only **33.7%** correct answers, underscoring how challenging the benchmark is.
- Performance rankings on KoSimpleQA **diverge substantially** from those on English SimpleQA, highlighting the distinct cultural dimension it captures.
- Analysis of reasoning LLMs shows that engaging reasoning capabilities helps models elicit their latent knowledge and abstain when uncertain.

## Data Format

The dataset is provided in JSONL format (one JSON object per line) with the following fields; a loading sketch follows the table:

| Field | Description |
|-------|-------------|
| `topic` | Category of the question (e.g., art, science, history, sports) |
| `problem` | The Korean fact-seeking question |
| `answer` | The unambiguous correct answer |
| `urls` | Comma-separated reference URLs for answer verification |
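
Because the file is plain JSONL, the standard library is enough to read it. A minimal sketch, assuming the file sits in the working directory under the name used in this commit:

```python
import json

# KoSimpleQA is plain JSONL: one object per line, with the fields
# listed in the table above (topic, problem, answer, urls).
with open("kosimpleqa.jsonl", encoding="utf-8") as f:
    records = [json.loads(line) for line in f if line.strip()]

print(len(records), "questions")  # expected: 1000
sample = records[0]
print(sample["topic"], "|", sample["problem"], "->", sample["answer"])
```

The same file also loads through the Hugging Face `datasets` library via `load_dataset("json", data_files="kosimpleqa.jsonl")`.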

### Category Distribution

The benchmark covers diverse topics, including art, science, history, sports, geography, politics, economy, and society.

## Example

```json
{
  "topic": "art",
  "problem": "봉준호 감독의 '기생충'은 대한민국 몇 번째 천만영화인가요?",
  "answer": "26번째",
  "urls": "https://www.newspim.com/news/view/20190722000045,https://www.khan.co.kr/article/201907221353001"
}
```

(Translation: the question asks which number ten-million-admission film in South Korea Bong Joon-ho's *Parasite* was; the answer is "the 26th.")
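
The short, unambiguous answers are what make the benchmark easy to grade. As a rough illustration of how per-question grades can be aggregated, here is a sketch assuming a SimpleQA-style three-way label (`correct` / `incorrect` / `not_attempted`); the label scheme and the sample list are assumptions for illustration, and the paper's actual grading protocol may differ:

```python
from collections import Counter

# Hypothetical per-question grades. In practice a grader would compare
# each model response against the reference `answer` field; the
# three-way scheme here is borrowed from English SimpleQA.
grades = ["correct", "incorrect", "not_attempted", "correct"]

counts = Counter(grades)
total = len(grades)
attempted = total - counts["not_attempted"]

overall = counts["correct"] / total                          # correct over all questions
given_attempted = counts["correct"] / attempted if attempted else 0.0
abstention = counts["not_attempted"] / total                 # how often the model abstains

print(f"correct (overall):   {overall:.1%}")
print(f"correct | attempted: {given_attempted:.1%}")
print(f"abstention rate:     {abstention:.1%}")
```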

## Citation

```bibtex
@article{ko2025kosimpleqa,
  title={KoSimpleQA: A Korean Factuality Benchmark with an Analysis of Reasoning LLMs},
  author={Ko, Donghyeon and Jin, Yeguk and Chae, Kyubyung and Lee, Byungwook and Jo, Chansong and In, Sookyo and Lee, Jaehong and Kim, Taesup and Kwak, Donghyun},
  journal={arXiv preprint arXiv:2510.18368},
  year={2025}
}
```

## Resources

- Paper: https://arxiv.org/abs/2510.18368
- Original Dataset: https://anonymous.4open.science/r/KoSimpleQA-62EB
kosimpleqa.jsonl ADDED
The diff for this file is too large to render.