Update README.md
README.md
@@ -74,7 +74,7 @@ This repository hosts the **Korean Canonical Legal Benchmark (KCL)** datasets.
 KCL is designed to **disentangle knowledge coverage from evidence-grounded reasoning**.

 KCL supports two complementary evaluation axes:
-1. **Knowledge Coverage**: performance without extra context
+1. **Knowledge Coverage**: performance without extra context.
 2. **Evidence-Grounded Reasoning**: performance **with per-question supporting precedents** provided in-context.

 For essay questions, KCL further offers **instance-level rubrics** to enable **LLM-as-a-Judge** automated scoring.
@@ -90,7 +90,7 @@ For more information, please refer to our paper

 - **KCL-Essay** (open-ended generation)
 - 169 questions, 550 supporting precedents, 2,739 instance-level rubrics.
-- **KCL-MCQA** (five-choice
+- **KCL-MCQA** (five-choice question answering)
 - 283 questions, 1,103 supporting precedents.

 ## Usage
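As a rough illustration of the two evaluation axes described in the updated text, here is a minimal prompt-construction sketch. The repo id `your-org/KCL-MCQA`, the `test` split, and the field names `question`, `choices`, and `precedents` are assumptions for illustration only; the actual KCL dataset card defines the real identifiers and schema.

```python
from datasets import load_dataset

# Hypothetical repo id, split, and field names -- check the KCL dataset card for the real ones.
mcqa = load_dataset("your-org/KCL-MCQA", split="test")

def build_prompt(example: dict, with_precedents: bool) -> str:
    """Build a prompt for one of KCL's two evaluation axes.

    with_precedents=False -> Knowledge Coverage (no extra context).
    with_precedents=True  -> Evidence-Grounded Reasoning (per-question
                             supporting precedents provided in-context).
    """
    parts = []
    if with_precedents:
        # 'precedents' is an assumed field holding the supporting precedent texts.
        parts.append("Supporting precedents:\n" + "\n\n".join(example["precedents"]))
    parts.append("Question:\n" + example["question"])
    # Five-choice MCQA: 'choices' is assumed to be a list of five option strings.
    parts.append("Choices:\n" + "\n".join(f"{i + 1}. {c}" for i, c in enumerate(example["choices"])))
    parts.append("Answer with the number of the single best choice.")
    return "\n\n".join(parts)

# Same question, scored once without context and once with its supporting precedents.
closed_book_prompt = build_prompt(mcqa[0], with_precedents=False)
open_book_prompt = build_prompt(mcqa[0], with_precedents=True)
```

Comparing a model's accuracy on the two prompt variants separates what it already knows from what it can do when the relevant precedents are supplied, which is the disentanglement the benchmark targets.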
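Similarly, a sketch of how the instance-level rubrics could drive LLM-as-a-Judge scoring of essay answers; the rubric representation (one checkable criterion per string) and the prompt wording below are assumptions, not the released evaluation code.

```python
def build_judge_prompt(question: str, answer: str, rubrics: list[str]) -> str:
    # Each rubric item is assumed to be a single checkable criterion string.
    rubric_block = "\n".join(f"- {r}" for r in rubrics)
    return (
        "You are grading an answer to a Korean legal essay question.\n\n"
        f"Question:\n{question}\n\n"
        f"Candidate answer:\n{answer}\n\n"
        "For each rubric item below, reply 'met' or 'not met' with a one-line justification:\n"
        f"{rubric_block}"
    )
```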