Modalities: Text · Formats: parquet · Languages: Korean · Size: < 1K · Tags: legal
Commit 6962065 (verified) · parent: fd2cb3e
doolayer committed: Update README.md
Files changed (1): README.md (+78 -3)
README.md CHANGED
@@ -65,9 +65,84 @@ size_categories:
 
 # KCL
 
- This repository contains the **Korean Canonical Legal Benchmark (KCL)** dataset
-
- [![Github](https://img.shields.io/badge/GitHub-KCL-blue?style=flat&logo=github)](https://github.com/lbox-kr/kcl) [![Paper](https://img.shields.io/badge/arXiv-1234.1234-red?style=flat&logo=arxiv&logoColor=red)](https://arxiv.org/abs/1234.1234)
-
- Our benchmark dataset is licensed under the xx License.
+ This repository hosts the **Korean Canonical Legal Benchmark (KCL)** datasets.
+
+ Evaluation code repository: [![Github](https://img.shields.io/badge/GitHub-KCL-blue?style=flat&logo=github)](https://github.com/lbox-kr/kcl)
+
+ For more information, please refer to our paper: [![Paper](https://img.shields.io/badge/arXiv-1234.1234-red?style=flat&logo=arxiv&logoColor=red)](https://arxiv.org/abs/1234.1234)
+
+ ## Why KCL?
+
+ KCL is designed to **disentangle knowledge coverage from evidence-grounded reasoning**.
+
+ KCL supports two complementary evaluation axes, illustrated in the sketch after this list:
+ 1. **Knowledge Coverage**: performance without extra context (the `vanilla` setting).
+ 2. **Evidence-Grounded Reasoning**: performance **with per-question supporting precedents** provided in-context.
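+
+ As a concrete illustration of the two settings, the sketch below builds a vanilla and a with-precedent prompt for one MCQA item. This is a minimal sketch under assumptions: `build_prompt` is our own hypothetical helper (not part of the KCL release), and we assume `supporting_precedents` holds precedent texts; see the GitHub repository for the official evaluation code.
+
+ ```python
+ from datasets import load_dataset
+
+ example = load_dataset("lbox/kcl", "kcl_mcqa", split="test")[0]
+
+ def build_prompt(ex, with_precedents=False):
+     """Hypothetical helper: render one KCL item as a single prompt string."""
+     parts = []
+     if with_precedents:
+         # Evidence-grounded setting: prepend the question-aligned precedents.
+         # Assumes each element of `supporting_precedents` is a precedent text.
+         parts += [f"[Precedent]\n{p}" for p in ex["supporting_precedents"]]
+     parts.append(f"[Question]\n{ex['question']}")
+     return "\n\n".join(parts)
+
+ vanilla_prompt = build_prompt(example)                         # knowledge coverage
+ grounded_prompt = build_prompt(example, with_precedents=True)  # evidence-grounded reasoning
+ ```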
+
+ For essay questions, KCL further offers **instance-level rubrics** to enable automated **LLM-as-a-Judge** scoring.
+
+ ### Intended Uses
+ - Separating knowledge from reasoning by comparing the vanilla and with-precedent settings.
+ - Legal RAG research using question-aligned gold precedents to establish retriever/reader upper bounds.
+ - Fine-grained feedback via rubric-level diagnostics on essay outputs.
+
+ ## Components
+
+ - **KCL-Essay** (open-ended generation)
+   - 169 questions, 550 supporting precedents, 2,739 instance-level rubrics.
+ - **KCL-MCQA** (five-choice multiple-choice)
+   - 283 questions, 1,103 supporting precedents.
+
+ ## Usage
+
+ ```python
+ from datasets import load_dataset
+
+ # Essay subset: open-ended questions with instance-level rubrics
+ kcl_essay = load_dataset("lbox/kcl", "kcl_essay", split="test")
+
+ # MCQA subset: five-choice multiple-choice questions
+ kcl_mcqa = load_dataset("lbox/kcl", "kcl_mcqa", split="test")
+ ```
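+
+ Both subsets are standard `datasets.Dataset` objects, so a quick sanity check on one record looks like this (field names are documented in the Dataset Fields sections below):
+
+ ```python
+ row = kcl_mcqa[0]                          # a single MCQA item as a dict
+ print(row["question"][:200])               # the prompt text, truncated
+ print(row["label"])                        # gold option letter, 'A'-'E'
+ print(len(row["supporting_precedents"]))   # assumes a list of aligned precedents
+ ```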
+
+ ## KCL-Essay
+
+ ### Dataset Fields
+
+ - `meta`: Metadata such as exam year, subject, and question id.
+ - `question`: The full prompt presented to models.
+ - `rubrics`: Instance-level grading rubrics for automated evaluation.
+ - `score`: The original point value assigned in the official bar exam (reflecting difficulty).
+ - `supporting_precedents`: Question-aligned court decisions required to solve the problem.
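+
+ The rubrics are what make automated essay scoring possible. Below is a minimal LLM-as-a-Judge sketch under assumptions: `judge` is a placeholder for any LLM call returning text, each element of `rubrics` is treated as one criterion, and the unweighted pass-count is our simplification; the official scoring protocol lives in the GitHub repository.
+
+ ```python
+ def rubric_score(example, model_answer, judge):
+     """Fraction of instance-level rubrics satisfied by `model_answer`.
+
+     `judge(prompt) -> str` is a stand-in for any LLM client call.
+     """
+     hits = 0
+     for rubric in example["rubrics"]:
+         prompt = (
+             "You are grading a Korean bar-exam essay answer.\n"
+             f"Criterion: {rubric}\n"
+             f"Answer: {model_answer}\n"
+             "Does the answer satisfy this criterion? Reply YES or NO."
+         )
+         if judge(prompt).strip().upper().startswith("YES"):
+             hits += 1
+     # Unweighted fraction; the official protocol may weight criteria,
+     # e.g. using the exam's `score` field.
+     return hits / len(example["rubrics"])
+ ```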
+
+ ### Results
+
+ ![Screenshot 2025-10-20 5.20.30 PM](https://cdn-uploads.huggingface.co/production/uploads/6364b581a53b71b7a1b62364/fEb_RSiHVCGT6v0V7A13B.png)
+
+ ## KCL-MCQA
+
+ ### Dataset Fields
+
+ - `meta`: Metadata about the source exam item.
+ - `question`: The full prompt presented to models.
+ - `A`–`E`: Five answer options.
+ - `label`: The gold answer option letter (one of 'A'|'B'|'C'|'D'|'E').
+ - `supporting_precedents`: Question-aligned court decisions required to solve the problem.
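+
+ Since the gold answer is a single letter in `label`, MCQA scoring reduces to letter matching. A minimal sketch, where `predict` stands in for your model call (not part of the KCL release):
+
+ ```python
+ import re
+
+ def extract_choice(text):
+     # Grab the first standalone A-E letter in the model output.
+     m = re.search(r"\b([A-E])\b", text)
+     return m.group(1) if m else None
+
+ def accuracy(dataset, predict):
+     # `predict(example) -> str` is a placeholder for your model.
+     correct = sum(extract_choice(predict(ex)) == ex["label"] for ex in dataset)
+     return correct / len(dataset)
+ ```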
+
+ ### Results
+
+ ![Screenshot 2025-10-20 5.20.24 PM](https://cdn-uploads.huggingface.co/production/uploads/6364b581a53b71b7a1b62364/OmiTG5Tv6pN2PRtiBhspy.png)
+
+ ## Citation
+
+ ```bibtex
+ @misc{kcl,
+   title  = {Korean Canonical Legal Benchmark: Toward Knowledge-Independent Evaluation of LLMs' Legal Reasoning Capabilities},
+   author = {Hongseok Oh and Wonseok Hwang and Kyoung-Woon On},
+   year   = {2025}
+ }
+ ```
+
+ ## License
+
+ Our benchmark dataset is licensed under the xx License.
  The problems we processed were sourced from the [Korean Bar Exam](https://www.moj.go.kr/moj/405/subview.do), and they are released under the [KOGL Type 1](https://www.kogl.or.kr/info/licenseType1.do) license.