---
pretty_name: OpenExempt
language:
- en
license:
- cc-by-4.0
task_categories:
- question-answering
- text-generation
tags:
- legal
- law
- bankruptcy
- reasoning
source_datasets:
- original
multilinguality:
- monolingual
dataset_info:
  splits:
  - name: test
    num_examples: 9300
  - name: validation
    num_examples: 465
  features:
  - name: id
    dtype: string
  - name: prompt
    dtype: string
  - name: solution
    dtype: string
  - name: config
    dtype: string
  - name: case
    dtype: string
---
# Dataset Card for OpenExempt

OpenExempt: A Diagnostic Benchmark for Legal Reasoning and a Framework for Creating Custom Benchmarks on Demand.

- **Repository:** https://github.com/servantez/OpenExempt
- **License:** CC BY 4.0
## OpenExempt Overview
OpenExempt is a framework and benchmark for diagnostic evaluation of legal reasoning capabilities in language models. The OpenExempt Framework is capable of creating complex legal reasoning tasks on demand, where each task scenario is dynamically shaped by the user through configuration settings. OpenExempt computes gold solutions for each task using expert-crafted symbolic representations of relevant U.S. federal and state statutes. Using this framework, we construct the OpenExempt Benchmark, a diagnostic benchmark with 9,765 samples across nine evaluation suites, designed to carefully probe model capabilities through controlled task variation.
## Dataset Summary

The OpenExempt Benchmark provides diagnostic evaluation of legal reasoning in language models. It comprises 9,765 samples organized into evaluation suites that probe model capabilities through controlled task variation, with gold solutions computed from expert-crafted symbolic representations of relevant U.S. federal and state statutes.
## Languages
All OpenExempt tasks are in English.
## Dataset Structure

OpenExempt is organized into nine evaluation suites (three competency suites and six diagnostic suites):
**Competency Suites.** These suites evaluate core legal reasoning abilities at increasing levels of difficulty:

- `basic_competency`: 1,050 samples (1,000 test, 50 validation)
- `intermediate_competency`: 1,470 samples (1,400 test, 70 validation)
- `advanced_competency`: 1,470 samples (1,400 test, 70 validation)
**Diagnostic Suites.** These suites are designed to probe specific dimensions of reasoning, robustness, and error propagation:

- `temporal_reasoning`: 525 samples (500 test, 25 validation)
- `reasoning_decomposition`: 1,470 samples (1,400 test, 70 validation)
- `asset_scaling`: 1,680 samples (1,600 test, 80 validation)
- `distractor_robustness`: 525 samples (500 test, 25 validation)
- `sycophancy_robustness`: 525 samples (500 test, 25 validation)
- `obfuscation_robustness`: 525 samples (500 test, 25 validation)

The `baseline_robustness` suite (525 samples: 500 test, 25 validation) contains tasks without obfuscating statements and serves as a direct point of comparison for the robustness suites.
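The per-suite counts can be cross-checked against the benchmark's split totals (9,300 test, 465 validation). A minimal sketch of that arithmetic, assuming `baseline_robustness` has 500 test and 25 validation samples (inferred here as the remainder needed to reach the published totals; the card does not state its split sizes directly):

```python
# Cross-check the per-suite sample counts against the published split totals.
suites = {
    # suite name: (test samples, validation samples)
    "basic_competency": (1000, 50),
    "intermediate_competency": (1400, 70),
    "advanced_competency": (1400, 70),
    "temporal_reasoning": (500, 25),
    "reasoning_decomposition": (1400, 70),
    "asset_scaling": (1600, 80),
    "distractor_robustness": (500, 25),
    "sycophancy_robustness": (500, 25),
    "obfuscation_robustness": (500, 25),
    "baseline_robustness": (500, 25),  # inferred from the totals, not stated on the card
}

test_total = sum(t for t, _ in suites.values())
val_total = sum(v for _, v in suites.values())

assert test_total == 9300               # matches the test split size
assert val_total == 465                 # matches the validation split size
assert test_total + val_total == 9765   # matches the benchmark total
print(test_total, val_total, test_total + val_total)
```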
## Data Fields
OpenExempt examples contain the following data fields:
- `id`: A unique identifier for the task instance.
- `prompt`: The natural-language task prompt presented to the model, including the factual scenario, instructions, and relevant statutes.
- `solution`: The gold solution for the task, expressed as a string (often containing structured content).
- `config`: The configuration parameters used to construct the example, expressed as a string.
- `case`: The case details for the example, expressed as a string.
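A minimal sketch of what a single record could look like under this schema. All field values below are invented placeholders, and the JSON serialization of the structured fields is an assumption for illustration only; the card specifies only that each field is a string:

```python
import json

# Hypothetical OpenExempt record; every value below is an invented placeholder
# illustrating the five string-typed fields, not real benchmark content.
example = {
    "id": "basic_competency-test-0001",
    "prompt": "Debtor owns a vehicle worth $4,000. Which exemptions apply?",
    "solution": json.dumps({"exempt_assets": ["vehicle"], "total_exempt": 4000}),
    "config": json.dumps({"suite": "basic_competency", "jurisdiction": "federal"}),
    "case": json.dumps({"assets": [{"type": "vehicle", "value": 4000}]}),
}

# Every field is stored as a string; structured content (if serialized as JSON,
# as assumed here) can be parsed back into Python objects.
assert all(isinstance(v, str) for v in example.values())
solution = json.loads(example["solution"])
print(sorted(example.keys()))
```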
## Citation Information

```bibtex
@misc{servantez2026openexemptdiagnosticbenchmarklegal,
      title={OpenExempt: A Diagnostic Benchmark for Legal Reasoning and a Framework for Creating Custom Benchmarks on Demand},
      author={Sergio Servantez and Sarah B. Lawsky and Rajiv Jain and Daniel W. Linna Jr. and Kristian Hammond},
      year={2026},
      eprint={2601.13183},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2601.13183},
}
```