# EP Registry Scraper

A Python-based web scraper that extracts text and Eurovoc metadata from European Parliament (EP) documents through the EU Publications Office API.

## Overview

This tool automates the collection of EU legislative documents from CELLAR, the EU Publications Office's common repository of documents and metadata. It extracts document text in multiple formats (PDF, DOCX, XHTML) and enriches each document with Eurovoc thesaurus concept labels and their corresponding unique identifiers.

## Features

- **SPARQL-based document discovery**: Queries the EU Publications Office SPARQL endpoint to find documents by date range
- **Multi-format text extraction**: Supports PDF, DOCX, DOC, and XHTML document formats
- **Eurovoc enrichment**: Automatically adds Eurovoc concept labels and IDs to documents
- **Parallel processing**: Uses multiprocessing for faster document text extraction
- **Caching**: Joblib-based caching to avoid redundant API calls and document downloads
- **Robust error handling**: Comprehensive logging of failures with detailed error messages
- **Compressed output**: Handles gzip-compressed JSONL files

## Directory Structure

```
ep_registry_scrapper/
├── README.md                    # This file
├── requirements.txt             # Python dependencies
├── update.py                    # Main scraper script
├── add_id.py                    # Post-processing script to add Eurovoc IDs
├── get_eurovoc_dict.py          # Utility to fetch and cache Eurovoc data
├── process_all.sh               # Bash wrapper for complete pipeline
└── query.j2                     # Jinja2 template for SPARQL queries (required)
```

## Installation

### Prerequisites

- Python 3.7+
- pip or conda

### Setup

1. Clone or download the repository
2. Install dependencies:

```bash
pip install -r requirements.txt
```

3. Note: You'll need a `query.j2` template file that defines the SPARQL query for document discovery. It should be a Jinja2 template with `{{ start }}` and `{{ end }}` date variables.
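As a rough illustration only (the actual query shape and predicates depend on your needs; the `cdm:` property below is an assumption, not the shipped template), a minimal `query.j2` might look like:

```sparql
PREFIX cdm: <http://publications.europa.eu/ontology/cdm#>
PREFIX xsd: <http://www.w3.org/2001/XMLSchema#>

SELECT DISTINCT ?work ?date
WHERE {
  ?work cdm:work_date_document ?date .
  FILTER(?date >= "{{ start }}"^^xsd:date && ?date <= "{{ end }}"^^xsd:date)
}
```

`update.py` renders the template with concrete `start` and `end` dates before sending it to the SPARQL endpoint.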

## Usage

### 1. Extract Documents and Text

Extract documents from CELLAR and their text content:

```bash
python update.py <output_prefix> <num_days>
```

Arguments:
- `output_prefix`: Prefix for output JSONL files (organized by month)
- `num_days`: Number of days to look back from today

Example:
```bash
python update.py output_ 30
```

This will create files like `output_2025-11.jsonl` for the last 30 days.
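The month partitioning can be pictured with a small sketch. `monthly_output_paths` is a hypothetical helper, not part of `update.py`; it only illustrates how a look-back window maps to one file per calendar month:

```python
from datetime import date, timedelta

def monthly_output_paths(prefix, num_days, today):
    """Map the last `num_days` days to one month-partitioned JSONL path
    each, e.g. 'output_2025-11.jsonl'. Illustrative only; update.py's
    actual naming logic may differ in detail."""
    days = (today - timedelta(days=i) for i in range(num_days))
    return {f"{prefix}{d:%Y-%m}.jsonl" for d in days}

# A 30-day window starting 2025-11-20 spans two months, hence two files:
print(sorted(monthly_output_paths("output_", 30, date(2025, 11, 20))))
# -> ['output_2025-10.jsonl', 'output_2025-11.jsonl']
```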

### 2. Fetch Eurovoc Dictionary

Generate a pickled dictionary mapping Eurovoc terms to their IDs:

```bash
python get_eurovoc_dict.py
```

Output: `eurovoc_dict.pkl`
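The pickle is a plain term-to-ID dictionary, so downstream code can load it with the standard library alone. The sketch below round-trips a toy dictionary of the same shape; the entries are illustrative stand-ins, not real Eurovoc data:

```python
import os
import pickle
import tempfile

# Toy dictionary in the shape get_eurovoc_dict.py is described as
# producing: Eurovoc term label -> concept ID (stand-in values).
toy = {"data protection": "1234", "renewable energy": "5678"}

path = os.path.join(tempfile.mkdtemp(), "eurovoc_dict.pkl")
with open(path, "wb") as f:
    pickle.dump(toy, f)

# Loading the real eurovoc_dict.pkl works the same way:
with open(path, "rb") as f:
    eurovoc_dict = pickle.load(f)

print(eurovoc_dict.get("data protection"))  # -> 1234 for the toy data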

### 3. Add Eurovoc IDs to Documents

Enrich existing JSONL files with Eurovoc concept IDs:

```bash
python add_id.py <input_file> <output_file>
```

Arguments:
- `input_file`: Input JSONL file (can be `/dev/stdin` for piping)
- `output_file`: Output JSONL file

Example:
```bash
python add_id.py documents.jsonl documents_with_ids.jsonl
```

### 4. Batch-Add IDs to Multiple JSONL Files

If you need to add Eurovoc IDs to multiple JSONL files, use a shell loop:

```bash
for file in output_*.jsonl; do
  python add_id.py "$file" "${file%.jsonl}_with_ids.jsonl"
done
```

### 5. Generate Eurovoc Metrics

Create language distribution barplots and per-language document length boxplots:

```bash
python analyse.py
```

## Complete Pipeline

The **main workflow** is simply:

```bash
python update.py output_ 30
```

This extracts documents and text for the last 30 days.

**If Eurovoc IDs were not appended during extraction** (or you want to update an already generated file), you can enrich the documents afterwards:

1. Fetch the Eurovoc dictionary (one-time setup):
   ```bash
   python get_eurovoc_dict.py
   ```

2. Add IDs to your JSONL files:
   ```bash
   python add_id.py documents.jsonl documents_with_ids.jsonl
   ```

3. Or use the bash wrapper to automate the entire pipeline:
   ```bash
   bash process_all.sh
   ```

This ensures all steps are executed in the correct order.

Periodically re-running the ID step on all documents is recommended, as Eurovoc labels and IDs can change over time.

## Output Format

The scraper produces JSONL (JSON Lines) files with the following structure per document:

```json
{
  "url": "https://publications.europa.eu/...",
  "title": "Document Title",
  "date": "2025-11-20",
  "lang": "en",
  "formats": ["pdf", "xhtml"],
  "eurovoc_concepts": ["term1", "term2", ...],
  "eurovoc_concepts_ids": ["id1", "id2", ...],
  "text": "Extracted document text..."
}
```
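Because the files may be gzip-compressed JSONL (see Features), consumers can read them with the standard library. This sketch round-trips one record of the structure shown above through an in-memory gzip stream; the field values are illustrative:

```python
import gzip
import io
import json

# One record mirroring the documented output structure (values illustrative).
record = {
    "url": "https://publications.europa.eu/example",
    "title": "Document Title",
    "date": "2025-11-20",
    "lang": "en",
    "formats": ["pdf", "xhtml"],
    "eurovoc_concepts": ["term1"],
    "eurovoc_concepts_ids": ["id1"],
    "text": "Extracted document text...",
}

# Write gzip-compressed JSONL: one JSON object per line.
buf = io.BytesIO()
with gzip.open(buf, "wt", encoding="utf-8") as f:
    f.write(json.dumps(record) + "\n")

# Read it back; for an on-disk file, pass the path instead of the buffer.
buf.seek(0)
with gzip.open(buf, "rt", encoding="utf-8") as f:
    docs = [json.loads(line) for line in f]

print(docs[0]["lang"])  # -> en
```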

## Logging

All operations are logged to `collect.log` in the script directory. This includes:
- Document processing status
- Missing Eurovoc terms
- Download and parsing failures
- API errors

Monitor logs for issues:
```bash
tail -f collect.log
```

## API Dependencies

- **EU Publications Office SPARQL Endpoint**: `https://publications.europa.eu/webapi/rdf/sparql`
- **Eurovoc Dataset API**: `http://publications.europa.eu/resource/dataset/eurovoc`

## Performance Considerations

- **Caching**: Downloads are cached for 2 hours to avoid redundant API calls
- **Parallel Processing**: Uses 16 worker processes for text extraction
- **Rate Limiting**: Consider adding delays if processing large date ranges
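One way to add such delays is a thin throttling wrapper around the download calls. `throttled` below is a hypothetical helper, not something the scraper ships; it simply enforces a minimum interval between consecutive calls:

```python
import time

def throttled(calls, min_interval=1.0, sleep=time.sleep, clock=time.monotonic):
    """Yield the results of zero-argument callables, sleeping so that
    consecutive calls start at least `min_interval` seconds apart.
    `sleep` and `clock` are injectable to keep the helper testable."""
    last = None
    for call in calls:
        now = clock()
        if last is not None and now - last < min_interval:
            sleep(min_interval - (now - last))
            now = clock()
        last = now
        yield call()

# Usage sketch: wrap the per-document fetches before consuming them.
results = list(throttled([lambda: "doc1", lambda: "doc2"], min_interval=0.01))
print(results)  # -> ['doc1', 'doc2']
```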

## Dependencies

See `requirements.txt` for the complete list. Key packages:
- `requests`: HTTP client for API calls
- `beautifulsoup4`: HTML/XML parsing
- `pdfminer.six`: PDF text extraction
- `docx2txt`: Microsoft Word document extraction
- `jinja2`: Template rendering for SPARQL queries
- `joblib`: Caching and parallel processing
- `tqdm`: Progress bars
- `xmltodict`: XML parsing