Dataset card metadata — Modalities: Text · Formats: parquet · Languages: Korean · Size: < 1K · Tags: legal
doolayer committed (verified) · Commit db32ea7 · 1 parent: 6962065

Update README.md

Files changed (1): README.md (+13 −13)
README.md CHANGED

@@ -67,9 +67,7 @@ size_categories:
 
 This repository hosts the **Korean Canonical Legal Benchmark (KCL)** datasets.
 
-Evaluation Code Repository [![Github](https://img.shields.io/badge/GitHub-KCL-blue?style=flat&logo=github)](https://github.com/lbox-kr/kcl)
-
-For more information, please refer to our paper [![Paper](https://img.shields.io/badge/arXiv-1234.1234-red?style=flat&logo=arxiv&logoColor=red)](https://arxiv.org/abs/1234.1234)
+[![Github](https://img.shields.io/badge/GitHub-KCL-blue?style=flat&logo=github)](https://github.com/lbox-kr/kcl) [![Paper](https://img.shields.io/badge/arXiv-1234.1234-red?style=flat&logo=arxiv&logoColor=red)](https://arxiv.org/abs/1234.1234)
 
 ## Why KCL?
 
@@ -81,6 +79,8 @@ KCL supports two complementary evaluation axes:
 
 For essay questions, KCL further offers **instance-level rubrics** to enable **LLM-as-a-Judge** automated scoring.
 
+For more information, please refer to our paper
+
 #### Intended Uses
 - Separating knowledge vs. reasoning by comparing vanilla and with-precedent settings.
 - Legal RAG research using question-aligned gold precedents to establish retriever/reader upper bounds.
@@ -109,11 +109,11 @@ kcl_mcqa = load_dataset("lbox/kcl", "kcl_mcqa", split="test")
 
 ## Dataset Fields
 
-meta: Metadata such as exam year, subject, and question id.
-question: The full prompt presented to models.
-rubrics: Instance-level grading rubrics for automated evaluation.
-score: The original point value assigned in the official bar exam (reflecting difficulty).
-supporting\_precedents: Question-aligned court decisions required to solve the problem.
+- meta: Metadata such as exam year, subject, and question id.
+- question: The full prompt presented to models.
+- rubrics: Instance-level grading rubrics for automated evaluation.
+- score: The original point value assigned in the official bar exam (reflecting difficulty).
+- supporting\_precedents: Question-aligned court decisions required to solve the problem.
 
 #### Results
 
@@ -123,11 +123,11 @@ supporting\_precedents: Question-aligned court decisions required to solve the p
 
 ### Dataset Fields
 
-meta: Metadata about the source exam item.
-question: The full prompt presented to models.
-A–E: Five answer options.
-label: The gold answer option letter (one of 'A'|'B'|'C'|'D'|'E').
-supporting\_precedents: Question-aligned court decisions required to solve the problem.
+- meta: Metadata about the source exam item.
+- question: The full prompt presented to models.
+- A–E: Five answer options.
+- label: The gold answer option letter (one of 'A'|'B'|'C'|'D'|'E').
+- supporting\_precedents: Question-aligned court decisions required to solve the problem.
 
 #### Results
 
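For reference, the MCQA schema described in the diff can be sketched as a plain Python record check. The field names (`meta`, `question`, `A`–`E`, `label`, `supporting_precedents`) follow the README; the example values and the `validate_mcqa_record` helper are hypothetical illustrations, not drawn from the actual dataset.

```python
# Sketch of the kcl_mcqa record layout from the README's "Dataset Fields"
# section. Field names match the documented schema; values are invented.

VALID_LABELS = {"A", "B", "C", "D", "E"}

REQUIRED_FIELDS = {
    "meta", "question", "A", "B", "C", "D", "E",
    "label", "supporting_precedents",
}

def validate_mcqa_record(record: dict) -> bool:
    """Return True if the record exposes all documented MCQA fields
    and its gold label is one of the five option letters."""
    if not REQUIRED_FIELDS.issubset(record):
        return False
    return record["label"] in VALID_LABELS

# Hypothetical example record shaped like the documented schema.
example = {
    "meta": {"exam_year": 2023, "subject": "civil law", "question_id": "q-001"},
    "question": "Which of the following statements is correct?",
    "A": "Option A text",
    "B": "Option B text",
    "C": "Option C text",
    "D": "Option D text",
    "E": "Option E text",
    "label": "C",
    "supporting_precedents": ["(hypothetical) court decision text"],
}

print(validate_mcqa_record(example))  # prints True
```

In practice the test split would be loaded with `load_dataset("lbox/kcl", "kcl_mcqa", split="test")` as shown in the diff context, and a check like this can guard downstream evaluation code against schema drift.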
133