# EUR-Lex-Triples: A Legal Relation Extraction Dataset from European Legislation
The EUR-Lex-Sum dataset (Aumiller et al., 2022) annotated with relation triples.
## Relation Extraction Baselines
`Code/RE-Baselines` contains the code used to run the RE baselines: fine-tuning and inference. Results of the baseline models for relation extraction:
| Model | Precision | Recall | F1-Score |
|---|---|---|---|
| Legal-BERT | 0.64 | 0.59 | 0.60 |
| BERT | 0.58 | 0.52 | 0.54 |
| REBEL-Large | 0.88 | 0.75 | 0.80 |
| Mistral 7B Zero-Shot | 0.38 | 0.30 | 0.33 |
| Mistral 7B In-Context | 0.42 | 0.36 | 0.38 |
| Mistral 7B Fine-Tuning | 0.84 | 0.69 | 0.75 |
| Zephyr 7B Zero-Shot | 0.40 | 0.36 | 0.37 |
| Zephyr 7B In-Context | 0.52 | 0.44 | 0.47 |
| Zephyr 7B Fine-Tuning | 0.85 | 0.61 | 0.71 |
| Llama 2 13B Zero-Shot | 0.31 | 0.25 | 0.27 |
| Llama 2 13B In-Context | 0.33 | 0.29 | 0.30 |
| Llama 2 13B Fine-Tuning | 0.82 | 0.61 | 0.69 |
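Scores like the ones above are commonly computed by exact-matching predicted triples against gold triples. A minimal sketch of that convention (an assumption here; the paper may use a different matching scheme, e.g. partial or normalized matching):

```python
def triple_scores(gold, pred):
    """Exact-match precision, recall, and F1 between gold and predicted triple sets."""
    if not gold and not pred:
        return 1.0, 1.0, 1.0
    tp = len(set(gold) & set(pred))  # triples predicted exactly as annotated
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Hypothetical example triples, not taken from the dataset:
gold = {("EU", "signed", "agreement"), ("agreement", "covers", "fisheries")}
pred = {("EU", "signed", "agreement"), ("EU", "amended", "regulation")}
p, r, f1 = triple_scores(gold, pred)  # one true positive out of two on each side
```

Averaging these per-document scores over the corpus would give table-style figures; whether the reported numbers are micro- or macro-averaged is not stated here.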
## Citation
EUR-Lex-Triples: A Legal Relation Extraction Dataset from European Legislation. Paper accepted at TPDL 2025.
## Licence
The editorial content of the EUR-Lex website, the summaries of EU legislation, and the consolidated texts are owned by the EU and licensed under the Creative Commons Attribution 4.0 International licence (CC BY 4.0), as stated on the official EUR-Lex website. Any derived data artifacts remain licensed under CC BY 4.0.
EUR-Lex-Triples consists of 1504 annotated documents. All documents come from the English part of the EUR-Lex-Sum dataset.
`Filtered_Annotated_Documents` contains JSON files with, for each document, its summary, the annotated paragraphs, and, for each paragraph, the triples derived from the annotations.
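One way to consume those files is with the standard library. The field names below (`summary_annotations`, `text`, `triples`) are assumptions inferred from the structure described above, not a confirmed schema:

```python
import json
from pathlib import Path

def iter_triples(path):
    """Yield (paragraph_id, paragraph_text, triples) from one annotated document.

    Assumed layout: a top-level "summary_annotations" object mapping paragraph
    ids to {"text": ..., "triples": [...]} entries.
    """
    doc = json.loads(Path(path).read_text(encoding="utf-8"))
    for para_id, para in doc.get("summary_annotations", {}).items():
        yield para_id, para.get("text", ""), para.get("triples", [])
```

Loading the raw JSON files directly also sidesteps schema-inference issues that can arise when streaming documents whose per-paragraph annotation keys differ from file to file.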