---
license: mit
task_categories:
- text-classification
- structure-prediction
language:
- en
tags:
- legal
- echr
- annotated-corpus
size_categories:
- n<1K
---

# ECHR Annotated Corpus

A corpus of 289 European Court of Human Rights judgments (English) with
hierarchical structural annotations produced via an AI-assisted 4-task pipeline.

## Contents

Each case includes:

| File | Description |
|------|-------------|
| `meta.json` | Case metadata: docname, date, respondent, doctype, ECLI, importance |
| `paragraphs.json` | Raw paragraphs extracted from HUDOC HTML |
| `html.html` | Original HUDOC HTML |
| `state.json` | Annotation state and per-task costs |
| `task1/` | L1 heading detection (suggestions, decisions, final) |
| `task2/` | Quote and numbered-paragraph detection |
| `task3/` | Sub-heading classification |
| `task4/` | 5-segment mapping (preamble, facts, law, conclusion, post-conclusion) |

The annotation pipeline produces a unified hierarchy:

```
segments[] → headings[] (L1-L5) → numbered paragraphs[] → quotes[]
```

Task internals are preserved for reproducibility: anyone can rebuild the final
hierarchy or re-run downstream analysis.
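
As a sketch of how the levels nest, the hierarchy can be walked with plain loops. The field names below (`segments`, `headings`, `paragraphs`, `quotes`, `name`, `text`, `number`) are illustrative assumptions, not the corpus's exact keys — see the task schemas for those.

```python
# Illustrative walk over a nested hierarchy. All field names here are
# assumptions for the sketch; consult the task schemas for the real keys.

def iter_quotes(doc):
    """Yield (segment name, heading text, paragraph number, quote) tuples."""
    for segment in doc["segments"]:
        for heading in segment.get("headings", []):
            for para in heading.get("paragraphs", []):
                for quote in para.get("quotes", []):
                    yield segment["name"], heading["text"], para["number"], quote

# Tiny synthetic document in the assumed shape
doc = {
    "segments": [{
        "name": "law",
        "headings": [{
            "text": "I. ALLEGED VIOLATION OF ARTICLE 6",
            "paragraphs": [{"number": 42, "quotes": ["..."]}],
        }],
    }]
}
rows = list(iter_quotes(doc))
```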

## Annotation methodology

1. **L1 heading detection** — regex + AI for canonical sections (INTRODUCTION,
   THE FACTS, THE LAW, FOR THESE REASONS, etc.)
2. **Quote detection** — numbered-paragraph spine + between-spine classification
3. **Sub-heading classification** — Claude Haiku assigns L2-L6 levels per L1
4. **Segment mapping** — deterministic 5-segment mapping from L1 headings

Each task has explicit `suggestions` (AI/deterministic) → `decisions` (human
review) → `final` (committed) provenance.
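
Because every stage is preserved, simple audits are possible. A minimal sketch, assuming `suggestions.json` and `final.json` are lists of objects with `index` and `label` fields (the real task schemas may differ), that measures how often human review changed an AI suggestion:

```python
import json
import tempfile
from pathlib import Path

def override_rate(task_dir):
    """Fraction of shared items whose label changed between suggestions and final."""
    # Assumed file shape: [{"index": int, "label": str}, ...]
    load = lambda name: {x["index"]: x["label"]
                         for x in json.loads((Path(task_dir) / name).read_text())}
    sugg, final = load("suggestions.json"), load("final.json")
    shared = sugg.keys() & final.keys()
    return sum(sugg[i] != final[i] for i in shared) / len(shared) if shared else 0.0

# Demo on synthetic data: one of two labels was overridden in review
d = Path(tempfile.mkdtemp())
(d / "suggestions.json").write_text(json.dumps(
    [{"index": 0, "label": "H1"}, {"index": 1, "label": "H2"}]))
(d / "final.json").write_text(json.dumps(
    [{"index": 0, "label": "H1"}, {"index": 1, "label": "H3"}]))
rate = override_rate(d)  # 0.5
```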

## Loading the dataset

```bash
git clone <project-repo> echr-project
cd echr-project
pip install -r requirements.txt

# Pull the corpus from HuggingFace
python scripts/bootstrap.py

# Tell the apps where to read from
export ECHR_DATA_DIR=$(pwd)/data

# Start the viewer
cd experiments/viewer && python server.py
# Open http://127.0.0.1:5092
```
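
After bootstrapping, the per-case files can also be read directly with the standard library. A sketch, assuming one sub-directory per case under `ECHR_DATA_DIR`, each containing the files listed above (the exact on-disk layout may differ):

```python
import json
import os
import tempfile
from pathlib import Path

def load_cases(data_dir=None):
    """Yield (meta, paragraphs) for every case directory under data_dir."""
    root = Path(data_dir or os.environ.get("ECHR_DATA_DIR", "data"))
    for case_dir in sorted(p for p in root.iterdir() if p.is_dir()):
        meta = json.loads((case_dir / "meta.json").read_text())
        paragraphs = json.loads((case_dir / "paragraphs.json").read_text())
        yield meta, paragraphs

# Demo on a synthetic one-case tree
root = Path(tempfile.mkdtemp())
case = root / "001-000001"
case.mkdir()
(case / "meta.json").write_text(json.dumps({"itemid": "001-000001"}))
(case / "paragraphs.json").write_text(json.dumps(
    [{"index": 0, "tag": "p", "text": "PROCEDURE", "char_count": 9}]))
cases = list(load_cases(root))
```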

## Schema

### `meta.json`

```json
{
  "itemid": "001-249367",
  "docname": "CASE OF MAKKI v. DENMARK",
  "judgementdate": "2025-12-15",
  "respondent": "DNK",
  "doctypebranch": "CHAMBER",
  "importance": "2",
  "ecli": "ECLI:CE:ECHR:2025:1215JUD003161818"
}
```

### `paragraphs.json`

Array of paragraph objects:

```json
[
  {"index": 0, "tag": "p", "text": "...", "char_count": 134},
  ...
]
```
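
For example, quick corpus statistics over such an array (sample values below are made up for the sketch):

```python
# Quick statistics over a paragraphs.json-style array (fields as above).
paragraphs = [
    {"index": 0, "tag": "p", "text": "PROCEDURE", "char_count": 9},
    {"index": 1, "tag": "p", "text": "1.  The case originated in ...", "char_count": 30},
]
total_chars = sum(p["char_count"] for p in paragraphs)             # 39
longest = max(paragraphs, key=lambda p: p["char_count"])["index"]  # 1
```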

### `task<N>/final.json`

Task-specific schemas. See the project repository for details.

## Source

Judgments retrieved from HUDOC, the official ECHR case-law database. Original
judgment texts are public-domain works of the European Court of Human Rights.

## License

MIT covers the annotations and processing code.
ECHR judgments themselves are public-domain works of the European Court of
Human Rights.

## Acknowledgements

Annotation produced via a Claude-assisted pipeline (Anthropic) with human review.