# KoInFoBench

KoInFoBench is a specialized evaluation dataset designed to assess the performance of Large Language Models (LLMs) on Korean instruction-following capabilities.<br>
The current version of `KoInFoBench` consists of 60 instruction sets and 233 questions.

Inspired by the [InFoBench](https://huggingface.co/datasets/kqsong/InFoBench) dataset, we extend their concept by focusing on the nuances and features of the Korean language.

- 🖥️ Code to reproduce the results or evaluate your own LLMs is available at [https://github.com/KIFAI/KoInFoBench](https://github.com/KIFAI/KoInFoBench)
- 📄 The paper is in preparation and will be released soon!

### 🚀 Update

- **2024.05.18**: added results for `gpt-4o-2024-05-13`, `claude-3-sonnet-20240229`, and `solar-1-mini-chat`
## Dataset Overview

### Usage
The following is a summary of the model performance on our dataset.

| Model | H_DRFR | A_DRFR | Alignment |
|------------------------------ |-------- |--------|-----------|
| **claude-3-opus-20240229** | **0.854** | 0.850 | 87% |
| **gpt-4-turbo-2024-04-09** | 0.850 | 0.880 | 87% |
| **gpt-4o-2024-05-13** | 0.850 | 0.863 | 89% |
| **gpt-4-0125-preview** | 0.824 | 0.824 | 83% |
| **claude-3-sonnet-20240229** | 0.790 | 0.828 | 84% |
| **gemini-1.5-pro** | 0.773 | 0.811 | 83% |
| **meta-llama/Meta-Llama-3-70B-Instruct** | 0.747 | 0.863 | 84% |
| **hpx003** | 0.691 | 0.738 | 83% |
| **gpt-3.5-turbo-0125** | 0.678 | 0.734 | 82% |
| **solar-1-mini-chat** | 0.614 | 0.695 | 79% |
| **yanolja/EEVE-Korean-Instruct-10.8B-v1.0** | 0.597 | 0.730 | 79% |

- `H_DRFR`: the accuracy of model responses as evaluated by a human expert
- `A_DRFR`: the accuracy of model responses as automatically evaluated by GPT-4, employing its capability as an LLM-as-a-judge
- `Alignment`: the degree of agreement, or consistency, between the human and automated evaluations
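As a toy sketch of how these metrics relate (for illustration only, not the project's actual evaluation code), suppose each decomposed question receives a boolean "followed / not followed" decision from both the human expert and the GPT-4 judge:

```python
# Toy sketch of the metric definitions above; not the project's
# actual evaluation code. Each list holds one True/False decision per
# decomposed question: did the response follow that requirement?
def drfr(decisions):
    """Decomposed Requirements Following Ratio: fraction of requirements met."""
    return sum(decisions) / len(decisions)

def alignment(human, auto):
    """Fraction of questions where human and automated judgments agree."""
    return sum(h == a for h, a in zip(human, auto)) / len(human)

# Hypothetical labels for five decomposed questions.
human = [True, True, False, True, False]
auto = [True, True, True, True, False]

print(f"H_DRFR: {drfr(human):.3f}")                # 0.600
print(f"A_DRFR: {drfr(auto):.3f}")                 # 0.800
print(f"Alignment: {alignment(human, auto):.0%}")  # 80%
```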

> Please note that the evaluation results of the LLMs presented in the above table may vary between runs due to randomness in model outputs.

## Additional Information