Modalities: Text
Formats: parquet
Languages: Korean
Size: < 1K
Tags: legal

Commit 55eba9d (verified) by doolayer · 1 Parent(s): e48cfa1

Update README.md

Files changed (1): README.md (+2 -2)
README.md CHANGED
@@ -74,7 +74,7 @@ This repository hosts the **Korean Canonical Legal Benchmark (KCL)** datasets.
 KCL is designed to **disentangle knowledge coverage from evidence-grounded reasoning**.
 
 KCL supports two complementary evaluation axes:
-1. **Knowledge Coverage**: performance without extra context (`vanilla` setting).
+1. **Knowledge Coverage**: performance without extra context.
 2. **Evidence-Grounded Reasoning**: performance **with per-question supporting precedents** provided in-context.
 
 For essay questions, KCL further offers **instance-level rubrics** to enable **LLM-as-a-Judge** automated scoring.
@@ -90,7 +90,7 @@ For more information, please refer to our paper
 
 - **KCL-Essay** (open-ended generation)
   - 169 questions, 550 supporting precedents, 2,739 instance-level rubrics.
-- **KCL-MCQA** (five-choice multiple-choice)
+- **KCL-MCQA** (five-choice question answering)
   - 283 questions, 1,103 supporting precedents.
 
 ## Usage
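
Since the card lists parquet files and two subsets (KCL-Essay and KCL-MCQA), a minimal loading sketch with the Hugging Face `datasets` library might look like the snippet below. The repository ID, config names, and split name are assumptions for illustration only; they are not taken from this commit.

```python
# Minimal loading sketch; repo ID, config names, and split are hypothetical placeholders.
from datasets import load_dataset

REPO_ID = "doolayer/KCL"  # hypothetical dataset path; substitute the actual repository ID

# The README describes two subsets; the config names below are guesses.
essay = load_dataset(REPO_ID, name="KCL-Essay", split="test")
mcqa = load_dataset(REPO_ID, name="KCL-MCQA", split="test")

print(len(essay), "essay questions")  # README reports 169
print(len(mcqa), "MCQA questions")    # README reports 283
```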