Minotaur-Brief
A pre-registered benchmark for long-form federal-court advocacy across five doctrinal architectures (Title IX deliberate indifference, Title IX retaliation, ADA Title II / Rehabilitation Act Section 504 accommodation, Pennsylvania common-law fraud, Pennsylvania common-law civil conspiracy) and two procedural postures (offensive partial-summary-judgment motion on Counts I–III; opposition to defendants' summary-judgment motion on Counts IV–V), at four context-noise levels including a cross-count condition.
Source: Mullins v. Duquesne Univ., No. 2:25-cv-01366 (W.D. Pa.). Released materials apply a role-based pseudonymization protocol to third-party individuals named in the docket; see §5.1 of the paper.
Contents
Prompts (prompts/)
- prompts/tasks.json — five (count, posture) tasks with required_elements, relevant_cases, relevant_statutes, parties_context, defendant_context, doc_type, and sumf fields.
- prompts/ground_truth.json — 119 (element, exhibit) mappings across 22 elements and 5 counts, plus designated cross-count pairings at L4.
- prompts/exhibit_pool.json — 38 verified documentary exhibits.
- prompts/context_ladder.json — four context-noise level definitions (L1_focused, L2_exhibit_noise, L3_full_noise, L4_cross_count).
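As a minimal sketch of how the task records might be loaded and checked, assuming the field names listed above (the validation helper is my own illustration, not part of the released scoring code):

```python
import json

# Expected top-level fields for each task record in prompts/tasks.json,
# taken from the field list on this card.
EXPECTED_FIELDS = {
    "required_elements", "relevant_cases", "relevant_statutes",
    "parties_context", "defendant_context", "doc_type", "sumf",
}

def missing_fields(task: dict) -> set:
    """Return any expected fields absent from a single task record."""
    return EXPECTED_FIELDS - task.keys()

def load_tasks(path: str = "prompts/tasks.json") -> list:
    """Load the five (count, posture) task records from disk."""
    with open(path) as f:
        return json.load(f)
```

A quick sanity pass over the loaded records would then be `[missing_fields(t) for t in load_tasks()]`, which should yield five empty sets.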
Rubrics and provenance (rubrics/)
- rubrics/PREREGISTRATION.md — frozen rubrics, scoring scales, pitfall-to-score rules, version history (frozen 2026-05-07 prior to main-run launch).
- rubrics/doctrinal_rubric.json — Layer 2 doctrinal element rubric on a 0/1/2 schema (Claude Haiku 4.5).
- rubrics/writing_rubric_v2.md — Layer 2 writing-quality rubric, five 1–5 axes (Claude Sonnet 4.6), grounded in Garner (2014), Aldisert (2017), Edwards (2018).
- rubrics/GROUND_TRUTH_PROVENANCE.md — per-(element, exhibit) provenance covering all 22 elements and 119 mappings.
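The two rubric scales (0/1/2 doctrinal elements, five 1–5 writing axes) can be range-checked with a small helper; this is an illustrative sketch under those stated scales only, not code from the repository:

```python
def scores_on_scale(doctrinal: dict, writing: dict) -> bool:
    """Check that doctrinal element scores use the 0/1/2 schema and
    that exactly five writing axes fall on the 1-5 scale."""
    ok_doctrinal = all(s in (0, 1, 2) for s in doctrinal.values())
    ok_writing = len(writing) == 5 and all(
        isinstance(s, int) and 1 <= s <= 5 for s in writing.values()
    )
    return ok_doctrinal and ok_writing
```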
Scoring code (root)
- gates.py — Layer 1 deterministic gates (element-presence, citation-existence, exhibit precision/recall).
- scorer.py — orchestration and Layer 2/3 invocation.
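The exhibit precision/recall gate reduces to the standard set-overlap definitions; a minimal sketch (my own illustration of the metric, and the released gates.py may differ in detail):

```python
def exhibit_precision_recall(cited: set, gold: set) -> tuple:
    """Precision and recall of exhibits cited in a brief against the
    ground-truth exhibit set for a task:
      precision = |cited & gold| / |cited|
      recall    = |cited & gold| / |gold|
    Empty denominators score 0.0."""
    hits = len(cited & gold)
    precision = hits / len(cited) if cited else 0.0
    recall = hits / len(gold) if gold else 0.0
    return precision, recall
```

For example, citing exhibits {Ex1, Ex2} against a gold set of {Ex1, Ex3} yields precision 0.5 and recall 0.5.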
Metadata
croissant.json— Croissant metadata.
License
CC BY 4.0 on original contributions (prompts, rubrics, ground-truth maps, scoring code, structured JSON). Underlying federal judicial opinions are public-domain works of the U.S. government and are not covered by this license.
Code
Scoring code at https://github.com/dntstoplaughing-max/minotaur-brief.
Citation
Mullins, D. M. (2026). Minotaur-Brief: A Pre-Registered Benchmark for Long-Form Federal-Court Advocacy Across Doctrines and Procedural Postures. NeurIPS 2026 Datasets and Benchmarks Track (under review).
Disclaimer
Minotaur-Brief measures textual properties of long-form legal advocacy. It does not certify legal correctness or constitute legal advice. The underlying matter remains active at the time of release. See §10 of the paper for the conflict-of-interest disclosure.