---
license: mit
task_categories:
  - text-classification
  - structure-prediction
language:
  - en
tags:
  - legal
  - echr
  - annotated-corpus
size_categories:
  - n<1K
---

# ECHR Annotated Corpus

A corpus of 289 European Court of Human Rights judgments (English) with
hierarchical structural annotations produced via an AI-assisted 4-task pipeline.

## Contents

Each case includes:

| File | Description |
|------|-------------|
| `meta.json` | Case metadata: docname, date, respondent, doctype, ECLI, importance |
| `paragraphs.json` | Raw paragraphs extracted from HUDOC HTML |
| `html.html` | Original HUDOC HTML |
| `state.json` | Annotation state and per-task costs |
| `task1/` | L1 heading detection (suggestions, decisions, final) |
| `task2/` | Quote and numbered-paragraph detection |
| `task3/` | Sub-heading classification |
| `task4/` | 5-segment mapping (preamble, facts, law, conclusion, post-conclusion) |

The annotation pipeline produces a unified hierarchy:

```
segments[] → headings[] (L1-L5) → numbered paragraphs[] → quotes[]
```

Task internals are preserved for reproducibility — anyone can rebuild the final
hierarchy or re-run downstream analysis.

## Annotation methodology

1. **L1 heading detection** — regex + AI for canonical sections (INTRODUCTION,
   THE FACTS, THE LAW, FOR THESE REASONS, etc.)
2. **Quote detection** — numbered-paragraph spine + between-spine classification
3. **Sub-heading classification** — Claude Haiku assigns L2-L6 levels per L1
4. **Segment mapping** — deterministic 5-segment mapping from L1 headings

Each task has explicit `suggestions` (AI/deterministic) → `decisions` (human
review) → `final` (committed) provenance.
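
The provenance flow can be audited per case; a minimal sketch, assuming each `task<N>/` directory holds `suggestions.json`, `decisions.json`, and `final.json` (filenames inferred from the task descriptions above, not guaranteed):

```python
from pathlib import Path

# Assumed per-task layout: task<N>/{suggestions,decisions,final}.json,
# matching the suggestions -> decisions -> final provenance described above.
STAGES = ("suggestions", "decisions", "final")

def provenance_status(case_dir):
    """Report which provenance stage files exist for each task of one case."""
    status = {}
    for task_dir in sorted(Path(case_dir).glob("task*")):
        status[task_dir.name] = {
            stage: (task_dir / f"{stage}.json").is_file() for stage in STAGES
        }
    return status
```

A missing `final.json` for any task would indicate an uncommitted annotation.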

## Loading the dataset

```bash
git clone <project-repo> echr-project
cd echr-project
pip install -r requirements.txt

# Pull the corpus from HuggingFace
python scripts/bootstrap.py

# Tell the apps where to read from
export ECHR_DATA_DIR=$(pwd)/data

# Start the viewer
cd experiments/viewer && python server.py
# Open http://127.0.0.1:5092
```
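
Once `ECHR_DATA_DIR` is set, the cases can be enumerated from Python; a sketch assuming one sub-directory per case containing `meta.json` (the on-disk layout is an assumption, not stated above):

```python
import os
from pathlib import Path

def list_cases(data_dir=None):
    """Return case directories under the corpus root.

    Assumes one sub-directory per case, each containing meta.json;
    this layout is an assumption about the bootstrapped corpus.
    """
    root = Path(data_dir or os.environ["ECHR_DATA_DIR"])
    return sorted(p for p in root.iterdir() if (p / "meta.json").exists())
```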

## Schema

### `meta.json`

```json
{
  "itemid": "001-249367",
  "docname": "CASE OF MAKKI v. DENMARK",
  "judgementdate": "2025-12-15",
  "respondent": "DNK",
  "doctypebranch": "CHAMBER",
  "importance": "2",
  "ecli": "ECLI:CE:ECHR:2025:1215JUD003161818"
}
```
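
The metadata can be read into a small typed record; the field names below are taken directly from the schema above (if real files carry extra keys, they would need filtering first):

```python
import json
from dataclasses import dataclass

@dataclass
class CaseMeta:
    # Fields mirror the meta.json schema shown above.
    itemid: str
    docname: str
    judgementdate: str
    respondent: str
    doctypebranch: str
    importance: str
    ecli: str

def load_meta(path):
    """Parse one meta.json file into a CaseMeta record."""
    with open(path, encoding="utf-8") as f:
        return CaseMeta(**json.load(f))
```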

### `paragraphs.json`

Array of paragraph objects:

```json
[
  {"index": 0, "tag": "p", "text": "...", "char_count": 134},
  ...
]
```
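
A quick sanity check over the paragraph array, using only the fields shown above:

```python
import json

def paragraph_stats(path):
    """Summarise a paragraphs.json file: paragraph count and total characters."""
    with open(path, encoding="utf-8") as f:
        paragraphs = json.load(f)
    return {
        "count": len(paragraphs),
        "total_chars": sum(p["char_count"] for p in paragraphs),
    }
```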

### `task<N>/final.json`

Task-specific schemas. See the project repository for details.

## Source

Judgments retrieved from HUDOC — the official ECHR case-law database. Original
judgment texts are public-domain works of the European Court of Human Rights.

## License

The annotations and processing code are released under the MIT license. The
judgment texts themselves are public-domain works of the European Court of
Human Rights.

## Acknowledgements

Annotation produced via a Claude-assisted pipeline (Anthropic) with human review.