Commit a463840 by servantez (1 parent: 29c26aa)
Add dataset card

Files changed (1): README.md (+81 −1)
---
pretty_name: OpenExempt
language:
- en
license:
- cc-by-4.0
task_categories:
- question-answering
- text-generation
tags:
- legal
- law
- bankruptcy
- reasoning
source_datasets:
- original
multilinguality:
- monolingual
dataset_info:
  splits:
  - name: test
    num_examples: 9300
  - name: validation
    num_examples: 465
  features:
  - name: id
    dtype: string
  - name: prompt
    dtype: string
  - name: solution
    dtype: string
  - name: config
    dtype: string
  - name: case
    dtype: string
---

# Dataset Card for OpenExempt
OpenExempt: A Diagnostic Benchmark for Legal Reasoning and a Framework for Creating Custom Benchmarks on Demand.

- **Paper:**
- **Repository:** https://github.com/servantez/OpenExempt
- **License:** [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/)

## OpenExempt Overview
OpenExempt is a framework and benchmark for diagnostic evaluation of legal reasoning capabilities in language models. The [OpenExempt Framework](https://github.com/servantez/OpenExempt) can create complex legal reasoning tasks on demand, where each task scenario is dynamically shaped by the user through configuration settings. OpenExempt computes gold solutions for each task using expert-crafted symbolic representations of relevant U.S. federal and state statutes. Using this framework, we construct the OpenExempt Benchmark, a diagnostic benchmark with 9,765 samples across nine evaluation suites, designed to carefully probe model capabilities through controlled task variation.

## Dataset Summary
The OpenExempt Benchmark provides diagnostic evaluation of legal reasoning in language models. It comprises 9,765 task instances (9,300 test and 465 validation) organized into nine evaluation suites, each pairing a factual scenario and the relevant statutes with a gold solution computed by the OpenExempt Framework.

### Languages
All OpenExempt tasks are in English.

### Dataset Structure
OpenExempt is organized into 9 evaluation suites (3 competency suites and 6 diagnostic suites):

**Competency Suites**. These suites evaluate core legal reasoning abilities at increasing levels of difficulty:

- `basic_competency`: 1,050 samples (1,000 test, 50 validation)
- `intermediate_competency`: 1,470 samples (1,400 test, 70 validation)
- `advanced_competency`: 1,470 samples (1,400 test, 70 validation)

**Diagnostic Suites**. These suites are designed to probe specific dimensions of reasoning, robustness, and error propagation:

- `temporal_reasoning`: 525 samples (500 test, 25 validation)
- `reasoning_decomposition`: 1,470 samples (1,400 test, 70 validation)
- `asset_scaling`: 1,680 samples (1,600 test, 80 validation)
- `distractor_robustness`: 525 samples (500 test, 25 validation)
- `sycophancy_robustness`: 525 samples (500 test, 25 validation)
- `obfuscation_robustness`: 525 samples (500 test, 25 validation)

The `baseline_robustness` suite contains tasks without obfuscating statements and serves as a direct point of comparison for the robustness suites.
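
If the suites are exposed as named dataset configurations on the Hub, each one can be loaded individually. The sketch below is illustrative only: it assumes the repository id `servantez/OpenExempt` and that the suite names above double as configuration names, neither of which this card confirms.

```python
from datasets import load_dataset

# Minimal loading sketch. The repository id and config name are assumptions
# and may need to be adjusted to match the actual Hub layout.
suite = load_dataset("servantez/OpenExempt", "basic_competency")

print(suite)                                   # expected splits: test and validation
print(suite["validation"][0]["prompt"][:300])  # peek at one task prompt
```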

### Data Fields
OpenExempt examples contain the following data fields:

- `id`: A unique identifier for the task instance.
- `prompt`: The natural-language task prompt presented to the model, including the factual scenario, instructions, and relevant statutes.
- `solution`: The gold solution for the task, expressed as a string (often containing structured content).
- `config`: The configuration parameters used to construct the example, expressed as a string.
- `case`: The case details for the example, expressed as a string.
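
Because `solution`, `config`, and `case` are stored as strings that may carry structured content, a small helper can attempt to decode them. The JSON encoding assumed below is a guess for illustration, not something the card guarantees; the repository id and config name are likewise assumptions.

```python
import json

from datasets import load_dataset


def maybe_parse(value: str):
    """Return a decoded object when the string is valid JSON, else the raw string."""
    try:
        return json.loads(value)
    except (json.JSONDecodeError, TypeError):
        return value


# Assumed repository id and config name, as in the loading sketch above.
suite = load_dataset("servantez/OpenExempt", "basic_competency")
example = suite["test"][0]

solution = maybe_parse(example["solution"])
config = maybe_parse(example["config"])
case = maybe_parse(example["case"])
print(example["id"], type(solution), type(config), type(case))
```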