
Dataset Card for OpenExempt

OpenExempt: A Diagnostic Benchmark for Legal Reasoning and a Framework for Creating Custom Benchmarks on Demand.

OpenExempt Overview

OpenExempt is a framework and benchmark for diagnostic evaluation of legal reasoning capabilities in language models. The OpenExempt Framework is capable of creating complex legal reasoning tasks on demand, where each task scenario is dynamically shaped by the user through configuration settings. OpenExempt computes gold solutions for each task using expert-crafted symbolic representations of relevant U.S. federal and state statutes. Using this framework, we construct the OpenExempt Benchmark, a diagnostic benchmark with 9,765 samples across nine evaluation suites, designed to carefully probe model capabilities through controlled task variation.

Dataset Summary

The OpenExempt Benchmark provides diagnostic evaluation of legal reasoning in language models.

Languages

All OpenExempt tasks are in English.

Dataset Structure

OpenExempt is organized into nine evaluation suites (three competency suites and six diagnostic suites):

Competency Suites. These suites evaluate core legal reasoning abilities at increasing levels of difficulty:

  • basic_competency: 1,050 samples (1,000 test, 50 validation)
  • intermediate_competency: 1,470 samples (1,400 test, 70 validation)
  • advanced_competency: 1,470 samples (1,400 test, 70 validation)

Diagnostic Suites. These suites are designed to probe specific dimensions of reasoning, robustness, and error propagation:

  • temporal_reasoning: 525 samples (500 test, 25 validation)
  • reasoning_decomposition: 1,470 samples (1,400 test, 70 validation)
  • asset_scaling: 1,680 samples (1,600 test, 80 validation)
  • distractor_robustness: 525 samples (500 test, 25 validation)
  • sycophancy_robustness: 525 samples (500 test, 25 validation)
  • obfuscation_robustness: 525 samples (500 test, 25 validation)

The baseline_robustness suite contains tasks without obfuscating statements and serves as a direct point of comparison for the robustness suites.
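For quick reference when selecting a suite, the split sizes listed above can be captured in a small lookup table. A minimal sketch, with counts transcribed from this card; the commented-out loading call is a hypothetical usage of the standard `datasets` library, and the exact config names on the Hub may differ:

```python
# Split sizes (test, validation) per suite, transcribed from the list above.
SUITE_SPLITS = {
    "basic_competency": (1000, 50),
    "intermediate_competency": (1400, 70),
    "advanced_competency": (1400, 70),
    "temporal_reasoning": (500, 25),
    "reasoning_decomposition": (1400, 70),
    "asset_scaling": (1600, 80),
    "distractor_robustness": (500, 25),
    "sycophancy_robustness": (500, 25),
    "obfuscation_robustness": (500, 25),
}

def suite_total(name: str) -> int:
    """Total sample count (test + validation) for a suite."""
    test, val = SUITE_SPLITS[name]
    return test + val

if __name__ == "__main__":
    # Hypothetical loading call; verify the config name on the Hub first.
    # from datasets import load_dataset
    # ds = load_dataset("SergioServantez/OpenExempt", "basic_competency")
    print(suite_total("asset_scaling"))  # 1680
```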

Data Fields

OpenExempt examples contain the following data fields:

  • id: A unique identifier for the task instance.
  • prompt: The natural-language task prompt presented to the model, including the factual scenario, instructions, and relevant statutes.
  • solution: The gold solution for the task, expressed as a string (often containing structured content).
  • config: The configuration parameters used to construct the example, expressed as a string.
  • case: The case details for the example, expressed as a string.
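Because `solution`, `config`, and `case` are stored as strings that may contain structured content, downstream code typically needs to parse them before use. A minimal sketch, assuming (hypothetically) that structured fields are JSON-encoded; the example record below is invented for illustration and is not drawn from the dataset:

```python
import json

def parse_structured(field: str):
    """Try to decode a string field as JSON; fall back to the raw string."""
    try:
        return json.loads(field)
    except (json.JSONDecodeError, TypeError):
        return field

# Invented example record mirroring the documented fields.
record = {
    "id": "example-0001",
    "prompt": "Given the facts and statutes below, determine the exemption...",
    "solution": '{"exempt_amount": 12500}',
    "config": '{"jurisdiction": "federal", "difficulty": "basic"}',
    "case": "Debtor owns a vehicle valued at $4,000...",
}

solution = parse_structured(record["solution"])
config = parse_structured(record["config"])
print(solution["exempt_amount"], config["jurisdiction"])  # 12500 federal
```

Note that `parse_structured` leaves plain-text fields such as `case` untouched, so it can be applied uniformly across all string fields.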

Citation Information

@misc{servantez2026openexemptdiagnosticbenchmarklegal,
      title={OpenExempt: A Diagnostic Benchmark for Legal Reasoning and a Framework for Creating Custom Benchmarks on Demand}, 
      author={Sergio Servantez and Sarah B. Lawsky and Rajiv Jain and Daniel W. Linna Jr. and Kristian Hammond},
      year={2026},
      eprint={2601.13183},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2601.13183}, 
}