---
license: mit
---

## Validate Your Authority: Precedent Treatment Dataset and Benchmarks

This repository contains data and code for analyzing how later judicial opinions treat earlier precedents, with a focus on negative and limiting treatments. It includes a filtered dataset suitable for model training and evaluation, along with scripts to reproduce the experiments.

### Background: Paul Hellyer’s Study and Annotations

Paul D. Hellyer’s empirical study examined the reliability of three U.S. citator systems (Shepard’s, KeyCite, and BCite) in reporting negative and limiting treatments of precedents. Hellyer manually read a stratified sample of citing opinions to determine the ground-truth treatment of the cited (seed) opinion, then compared his judgments to each citator’s labels.

Key points about Hellyer’s annotations used in this project:
- Hellyer read each citing opinion in context and assigned a treatment judgment based on the legal effect on the seed case (e.g., overruled, reversed, distinguished, questioned, not followed).
- He recorded both the citator-provided label and his own “correct” judgment, which allows measuring precision and recall per category as well as overall accuracy.
- The annotations cover both strong negative treatments (e.g., Overruled, Reversed) and softer limiting signals (e.g., Distinguished, Declined to extend, Not followed), along with meta signals such as Disagreement recognized.

For details, see Hellyer’s paper: [Hellyer, Paul D., empirical evaluation of citators](https://scholarship.law.wm.edu/cgi/viewcontent.cgi?article=1130&context=libpubs).

### This Paper: Validate Your Authority — Benchmarking LLMs on Multi‑Label Precedent Treatment Classification

This work introduces a benchmark that frames precedent treatment classification as a multi‑label problem spanning both strong negative and limiting signals. We evaluate multiple LLMs (closed and open) with zero‑shot and few‑shot prompting, and report performance on both fine‑grained and high‑level labels. The dataset originates from Hellyer’s annotations and the associated opinion texts, curated into a machine‑learning‑ready format. (Link forthcoming.)

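For reference, multi‑label predictions over treatment codes can be scored with micro‑averaged F1, as in the sketch below. This is a minimal illustration, not the paper's actual evaluation code, and the example label sets are invented:

```python
# Minimal sketch of micro-averaged F1 over per-example treatment-label sets.
# Illustration only; the benchmark's real evaluation code may differ, and the
# example gold/pred labels below are invented.

def micro_f1(gold: list[set[str]], pred: list[set[str]]) -> float:
    """Micro-averaged F1 over per-example label sets."""
    tp = sum(len(g & p) for g, p in zip(gold, pred))  # correctly predicted labels
    fp = sum(len(p - g) for g, p in zip(gold, pred))  # spurious labels
    fn = sum(len(g - p) for g, p in zip(gold, pred))  # missed labels
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

gold = [{"D"}, {"O", "OR"}, {"NF"}]
pred = [{"D"}, {"O"}, {"Q"}]
print(round(micro_f1(gold, pred), 3))  # → 0.571
```

Micro‑averaging pools true/false positives across all pairs, so frequent codes dominate the score; per‑category precision/recall (as in Hellyer's comparison) complements it.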
### Dataset

Primary file for sharing and experiments:
- `SOURCE_hf.csv`

Included columns:
- `seed_case`: citation string of the cited opinion (the precedent).
- `citing_case`: citation string of the citing opinion.
- `hellyer_correct_label`: Hellyer’s ground‑truth fine‑grained treatment label for this pair.
- `seed_case_name`: human‑readable name of the cited opinion.
- `html_cleaned`: cleaned citing‑opinion text segment(s) used for labeling and modeling.
- `high_level_label`: coarser roll‑up of the treatment (e.g., Negative vs. Limiting/Other), used for high‑level evaluation.

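A quick way to inspect the file is with Python's standard `csv` module. The sketch below parses a tiny inline sample that mimics the column layout; the row values are invented placeholders, not real dataset entries:

```python
import csv
import io

# Inline sample mimicking the SOURCE_hf.csv column layout. The row values
# are invented placeholders for illustration, not real dataset entries.
sample = io.StringIO(
    "seed_case,citing_case,hellyer_correct_label,seed_case_name,html_cleaned,high_level_label\n"
    '"123 F.3d 456","789 F.3d 12",D,"Doe v. Roe","The court distinguished Doe v. Roe ...",Limiting\n'
)

rows = list(csv.DictReader(sample))
print(rows[0]["hellyer_correct_label"])  # fine-grained treatment code
print(rows[0]["high_level_label"])       # coarse roll-up
```

When reading the real file, note that the `html_cleaned` fields can be very long, so the CSV field size limit may need to be raised (see the next section).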
The original full table (`SOURCE.csv`) contains additional audit and tracking fields (citator‑provided labels, notes, etc.). The helper script `scripts/filter_source_csv.py` generates the reduced file and raises the CSV field size limit to handle long HTML fields.

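The filtering step can be sketched roughly as follows. This is a simplified reconstruction assuming the column names listed above; the actual `scripts/filter_source_csv.py` in this repo may differ in details:

```python
import csv
import sys

# Simplified sketch of what a filter script like scripts/filter_source_csv.py
# might do; the actual script in this repo may differ.

KEEP = ["seed_case", "citing_case", "hellyer_correct_label",
        "seed_case_name", "html_cleaned", "high_level_label"]

def filter_rows(rows, keep=KEEP):
    """Project each row dict down to the shared columns."""
    return [{k: row.get(k, "") for k in keep} for row in rows]

def filter_csv(src_path: str, dst_path: str) -> None:
    # Long HTML fields can exceed the default CSV field size limit.
    csv.field_size_limit(sys.maxsize)
    with open(src_path, newline="", encoding="utf-8") as src, \
         open(dst_path, "w", newline="", encoding="utf-8") as dst:
        writer = csv.DictWriter(dst, fieldnames=KEEP, extrasaction="ignore")
        writer.writeheader()
        writer.writerows(filter_rows(csv.DictReader(src)))
```

Dropping the audit/tracking fields keeps the shared file small and avoids redistributing citator‑provided labels alongside the ground truth.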
### Abbreviations (Fine‑Grained Treatment Signals)

The following codes appear in Hellyer’s labels and/or the citator labels:
- **ACAN**: Among conflicting authorities noted in
- **AR**: Abrogation recognized by
- **C**: Criticized by
- **CID**: Called into doubt by
- **D**: Distinguished by
- **DAS**: Disapproved as stated in
- **DE**: Declined to extend by
- **DR**: Disagreement recognized by
- **DW**: Disagreed with by
- **IOR**: Implied overruling recognized by
- **LHR**: Limitation of holding recognized by
- **NF**: Not followed by
- **O**: Overruled
- **OR**: Overruling recognized by
- **Q**: Questioned by
- **R**: Reversed by

Note: Different citators sometimes use overlapping or slightly different taxonomies; Hellyer’s ground‑truth label reflects the most accurate treatment, determined by reading the opinion itself.

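For convenience, the codes above can be collected into a lookup table, as in the sketch below (a small helper for working with the labels; the repo itself may not ship such a mapping):

```python
# Lookup table for the fine-grained treatment codes listed above.
TREATMENT_CODES = {
    "ACAN": "Among conflicting authorities noted in",
    "AR": "Abrogation recognized by",
    "C": "Criticized by",
    "CID": "Called into doubt by",
    "D": "Distinguished by",
    "DAS": "Disapproved as stated in",
    "DE": "Declined to extend by",
    "DR": "Disagreement recognized by",
    "DW": "Disagreed with by",
    "IOR": "Implied overruling recognized by",
    "LHR": "Limitation of holding recognized by",
    "NF": "Not followed by",
    "O": "Overruled",
    "OR": "Overruling recognized by",
    "Q": "Questioned by",
    "R": "Reversed by",
}

print(TREATMENT_CODES["DE"])  # → Declined to extend by
```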
### Getting Started

1. Use `SOURCE_hf.csv` for training and evaluation; it contains only the fields needed for modeling and sharing.
2. If you need to regenerate the reduced CSV from `SOURCE.csv`, run:

```bash
python3 scripts/filter_source_csv.py
```

### Citation

If you use this dataset or code, please cite:
- Paul D. Hellyer’s empirical evaluation of citators (see the link above).
- Validate Your Authority: Benchmarking LLMs on Multi‑Label Precedent Treatment Classification (link forthcoming).