sackoh committed on
Commit 79e14d1 · verified · 1 Parent(s): 4c7d1b2

Create README.md

Files changed (1): README.md (+122 −0)
---
license: mit
task_categories:
- text-generation
language:
- ko
size_categories:
- n<1K
dataset_info:
  features:
  - name: id
    dtype: string
  - name: subset
    dtype: string
  - name: category
    dtype: string
  - name: input
    dtype: string
  - name: decomposed_questions
    sequence:
    - dtype: string
  - name: question_label
    sequence:
    - dtype: string
  - name: ref
    dtype: string
---

# KoInFoBench

KoInFoBench is a specialized evaluation dataset designed to assess how well Large Language Models (LLMs) follow Korean instructions.

Inspired by the [InFoBench](https://huggingface.co/datasets/kqsong/InFoBench) dataset, we extend its concept by focusing on the nuances and features of the Korean language.

🖥️ Code to reproduce the results or to evaluate your own LLMs is available at [https://github.com/KIFAI/KoInFoBench](https://github.com/KIFAI/KoInFoBench)

📄 The paper is in preparation and will be released soon!

## Dataset Overview

### Usage
```python
from datasets import load_dataset

dataset = load_dataset('kifai/KoInFoBench')
```

### Example
```python
{
  'id': '19',
  'subset': 'input_intensive_set',
  'category': '구글캘린더',
  'instruction': '다음은 해외 콘서트 참가 확정에 대한 영문으로 작성된 이메일입니다. 한국시간(KST) 기준으로 참가 확정된 날짜, 콘서트 날짜와 시간을 "년-월-일 시간" 형식으로 작성하고 한국시간 기준으로 참가 확정일로부터 콘서트 날짜까지 몇 일 남았는지 계산하여 국문으로 정답을 함께 작성합니다.',
  'input': 'Email: We are pleased to inform you that your concert ticket purchase has been successfully confirmed at approximately 11am GMT today (26 March 2024). The concert you have been eagerly awaiting is scheduled to take place on 17 September 2024, starting at 6 PM UTC+2. Please mark your calendar and prepare to join us for an unforgettable evening of live music and entertainment. Your ticket grants you access to a night filled with exceptional performances, engaging visuals, and the vibrant energy of live music. We recommend arriving early to enjoy the full experience, including pre-concert activities and amenities.',
  'decomposed_questions': [
    '답변은 해외 콘서트 참가 일정에 대한 내용이 포함되어 있습니까?',
    '답변으로 작성된 모든 일정은 한국시간(KST) 기준으로 작성되었습니까?',
    '콘서트 참가가 확정된 날짜 그리고 콘서트 날짜와 시간 2개의 일정을 모두 포함합니까?',
    '날짜와 시간이 "년-월-일 시간" 형식으로 올바르게 작성되었습니까?',
    '콘서트 확정일로부터 콘서트까지 남은 기간은 콘서트 시작일을 포함할 경우 177일, 미포함인 경우 176일입니다. 남은 기간을 176일 혹은 177일로 계산하였습니까?'
  ],
  'question_label': [
    'Format',
    'Format, Content',
    'Format',
    'Format',
    'Number'
  ],
  'ref': ''
}
```

### Fields
- **id**: unique identifier for each entry in the dataset
- **subset**: either `input_intensive_set` or `instruction_intensive_set`, where "intensive" indicates whether the entry focuses on evaluating Korean-specific inputs or detailed instruction following
- **category**: the category to which each entry belongs. For example, '구글캘린더' (Google Calendar) indicates that the entry relates to tasks associated with Google Calendar
- **instruction**: a string containing the instruction to follow
- **input**: a string containing context information; it can be empty
- **decomposed_questions**: a list of questions that decompose the entry's task. Each question is used to evaluate an LLM's response
- **question_label**: a list of labels identifying the type of each decomposed question. A label can cover multiple aspects, such as Format, Content, Number, Linguistic, and Style
- **ref**: a string for references or additional information; it can be empty
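
Each decomposed question acts as a binary check against a model's response. As a minimal sketch of how such a check might be posed to an LLM judge (the template wording and function name below are illustrative assumptions, not the exact prompt used by KoInFoBench):

```python
# Illustrative sketch: turning one decomposed question into a yes/no judge
# prompt. The template wording is an assumption for illustration only.

def build_judge_prompt(instruction: str, response: str, question: str) -> str:
    """Format a single decomposed question as a binary evaluation prompt."""
    return (
        f"Instruction: {instruction}\n"
        f"Model response: {response}\n"
        f"Question: {question}\n"
        "Answer strictly with YES or NO."
    )

# Hypothetical entry, not taken from the dataset.
prompt = build_judge_prompt(
    "Summarize the email in Korean.",
    "요약: ...",
    "Is the answer written in Korean?",
)
print(prompt)
```

One prompt is issued per decomposed question, so an entry with five questions yields five independent yes/no judgments.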

## Evaluation Result

### DRFR
Decomposed Requirements Following Ratio (DRFR) is the metric used to evaluate how accurately LLMs follow the instruction/input.
It is the average accuracy across answers to the decomposed questions for each instruction.
The following table summarizes model performance on our dataset.

| Model | H_DRFR | A_DRFR | Alignment |
|------------------------------|--------|--------|-----------|
| **claude-3-opus-20240229** | 0.854 | 0.850 | 0.867 |
| **gemini** | 0.773 | 0.811 | 0.833 |
| **gpt-3.5-turbo-0125** | 0.678 | 0.734 | 0.824 |
| **gpt-4-turbo-preview** | 0.824 | 0.824 | 0.828 |
| **gpt-4-turbo-2024-04-09** | 0.850 | 0.880 | 0.867 |
| **hpx003** | 0.691 | 0.738 | 0.833 |

- `H_DRFR`: the accuracy of model responses as evaluated by a human expert
- `A_DRFR`: the accuracy of model responses as evaluated automatically by GPT-4, employing the LLM-as-a-judge approach
- `Alignment`: the degree of agreement between the human and automated evaluations
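
Under these definitions, DRFR is simply the mean of the binary per-question judgments pooled over all entries, and Alignment is the agreement rate between the human and automated judgments on the same questions. A small self-contained sketch (the judgment values are made up for illustration):

```python
# Sketch of the DRFR and Alignment computations, assuming one binary
# judgment per decomposed question (1 = requirement satisfied, 0 = not).
# The judgment values below are fabricated for illustration.

def drfr(judgments):
    """Average accuracy over all decomposed-question judgments, pooled."""
    flat = [j for entry in judgments for j in entry]
    return sum(flat) / len(flat)

def alignment(human, auto):
    """Fraction of decomposed questions where human and automated
    judgments agree."""
    h = [j for entry in human for j in entry]
    a = [j for entry in auto for j in entry]
    return sum(int(x == y) for x, y in zip(h, a)) / len(h)

# Two entries with 5 and 3 decomposed questions respectively.
human_judgments = [[1, 1, 0, 1, 1], [1, 0, 1]]
auto_judgments = [[1, 1, 1, 1, 1], [1, 0, 1]]

print(drfr(human_judgments))                       # H_DRFR: 0.75
print(drfr(auto_judgments))                        # A_DRFR: 0.875
print(alignment(human_judgments, auto_judgments))  # Alignment: 0.875
```

Pooling over all questions (rather than averaging per-entry means) weights entries by how many decomposed questions they contain.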

## Additional Information

### License Information

This dataset is released under the [MIT License](https://github.com/KIFAI/KoInfoBench/blob/main/LICENSE).

### Citation Information
```
@article{,
  title={KoInFoBench},
  author={Sungwoo Oh and Sungjun Kown and Donggyu Kim},
  year={2024},
  eprint={},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```