|
|
--- |
|
|
|
|
|
viewer: true |
|
|
|
|
|
annotations_creators: |
|
|
- expert-generated |
|
|
|
|
|
language_creators: |
|
|
- crowdsourced |
|
|
|
|
|
language: |
|
|
{%- set languages = [] -%} |
|
|
{%- for name,metadata in data.items()|sort(attribute='1.dirname') -%} |
|
|
{{ languages.append(metadata.lcode)|default("", True)}} |
|
|
{%- endfor -%} |
|
|
{%- for language in languages|unique %} |
|
|
- {{ language if language != 'no' else "'no'" }} |
|
|
{%- endfor %} |
|
|
|
|
|
license: |
|
|
- apache-2.0 |
|
|
|
|
|
multilinguality: |
|
|
- multilingual |
|
|
|
|
|
size_categories: |
|
|
- '1K<n<10K' |
|
|
|
|
|
source_datasets: |
|
|
- original |
|
|
|
|
|
task_categories: |
|
|
- token-classification |
|
|
|
|
|
task_ids: |
|
|
- parsing |
|
|
- part-of-speech |
|
|
- lemmatization |
|
|
|
|
|
paperswithcode_id: universal-dependencies |
|
|
pretty_name: Universal Dependencies Treebank |
|
|
|
|
|
tags: |
|
|
- text |
|
|
- constituency-parsing |
|
|
- dependency-parsing |
|
|
- part-of-speech-tagging |
|
|
|
|
|
configs: |
|
|
{%- for name,metadata in data.items()|sort(attribute='1.dirname') %} |
|
|
{%- if not metadata.blocked %} |
|
|
- config_name: {{ name }} |
|
|
data_files: |
|
|
{%- set ns = namespace(dataset_size=0) -%} |
|
|
{%- for fileset_split_name,fileset_split_data in metadata.splits.items() %} |
|
|
- split: {{ fileset_split_name }} |
|
|
path: parquet/{{ name }}/{{ fileset_split_name }}.parquet |
|
|
{%- endfor %} |
|
|
{%- if name == 'en_ewt' %} |
|
|
default: true |
|
|
{%- endif %} |
|
|
{%- endif %} |
|
|
{%- endfor %} |
|
|
--- |
|
|
|
|
|
|
|
|
|
|
|
**Version 2.0.0** introduces significant improvements and breaking changes: |
|
|
- **Parquet Format:** faster loading with Hugging Face `datasets` >= 4.0.0
|
|
- **MWT Support:** New `mwt` field provides structured multi-word token information |
|
|
- **Enhanced Security:** No more `trust_remote_code=True` required |
|
|
- **Separate Versioning:** Loader version (2.0.0) distinct from UD data version (2.17) |
|
|
|
|
|
**Breaking Changes:** |
|
|
- Token sequences now exclude MWT surface forms (matches UD guidelines) |
|
|
- Requires `datasets>=4.0.0` for Parquet support (a quick version check is sketched below)
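
As a quick sanity check before upgrading your code, you can verify that the installed `datasets` library meets this requirement. A minimal sketch (it uses the `packaging` helper, which is common but not guaranteed to be installed in your environment):

```python
import datasets
from packaging.version import Version

# v2.0 of this dataset ships Parquet files, which require datasets >= 4.0.0.
if Version(datasets.__version__) < Version("4.0.0"):
    raise RuntimeError(
        f"datasets {datasets.__version__} is too old for the Parquet files; "
        "please upgrade, e.g. pip install -U 'datasets>=4.0.0'"
    )
```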
|
|
|
|
|
- **Migration Guide:** See [MIGRATION.md](MIGRATION.md) for detailed upgrade instructions |
|
|
- **Changelog:** See [CHANGELOG.md](CHANGELOG.md) for complete release notes |
|
|
|
|
|
|
|
|
|
|
|
## Table of Contents

- [Dataset Description](#dataset-description)
|
|
- [Dataset Summary](#dataset-summary) |
|
|
- [Data Quality & Fidelity](#data-quality--fidelity) |
|
|
- [Usage](#usage) |
|
|
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) |
|
|
- [Dataset Structure](#dataset-structure) |
|
|
- [Data Instances](#data-instances) |
|
|
- [Data Fields](#data-fields) |
|
|
- [Data Splits](#data-splits) |
|
|
- [Dataset Creation](#dataset-creation) |
|
|
- [Curation Rationale](#curation-rationale) |
|
|
- [Source Data](#source-data) |
|
|
- [Annotations](#annotations) |
|
|
- [Personal and Sensitive Information](#personal-and-sensitive-information) |
|
|
- [Considerations for Using the Data](#considerations-for-using-the-data) |
|
|
- [Social Impact of Dataset](#social-impact-of-dataset) |
|
|
- [Discussion of Biases](#discussion-of-biases) |
|
|
- [Other Known Limitations](#other-known-limitations) |
|
|
- [Additional Information](#additional-information) |
|
|
- [Dataset Curators](#dataset-curators) |
|
|
- [Licensing Information](#licensing-information) |
|
|
- [Citation Information](#citation-information) |
|
|
- [Contributions](#contributions) |
|
|
|
|
|
|
|
|
|
|
|
## Dataset Description

- **Homepage:** [Universal Dependencies](https://universaldependencies.org)
|
|
- **Repository:** [Universal Dependencies](https://github.com/UniversalDependencies) |
|
|
- **Paper:** [Universal Dependencies v2: An Evergrowing Multilingual Treebank Collection](https://arxiv.org/abs/2004.10643) |
|
|
- **Leaderboard:** |
|
|
- **Point of Contact:** [appliedlinguisticsdevs@eurac.edu](mailto:appliedlinguisticsdevs@eurac.edu) |
|
|
- **Point of Contact:** [IAL Homepage](https://www.eurac.edu/linguistics) |
|
|
|
|
|
|
|
|
|
|
|
### Dataset Summary

{{ description }}
|
|
|
|
|
This is a (temporary) fork of |
|
|
[/universal-dependencies/universal_dependencies](https://huggingface.co/datasets/universal-dependencies/universal_dependencies). |
|
|
|
|
|
|
|
|
|
|
|
### Data Quality & Fidelity

This dataset achieves **100% fidelity** for linguistic data (tokens,
|
|
annotations, dependencies) and **very high (~98%) fidelity** for metadata. The |
|
|
Parquet files can be perfectly reconstructed back to the original CoNLL-U |
|
|
format with: |
|
|
- All linguistic annotations preserved exactly |
|
|
- Multi-word tokens (MWTs) and empty nodes fully supported |
|
|
- Duplicate metadata keys preserved (1,323 sentences across 14 treebanks) |
|
|
- Enhanced dependencies and rare annotation edge cases handled correctly |
|
|
|
|
|
Recent improvements include fixes for: |
|
|
- Double equals parsing in FEATS/MISC fields (e.g., `Gloss==POSS`) |
|
|
- Empty nodes with ID < 1 (e.g., `0.1` for pro-drop subjects) |
|
|
- Empty metadata values and keys without values |
|
|
- Raw field parsing to bypass library bugs |
|
|
|
|
|
For technical details, see [CONLLU_PARSING.md](https://github.com/bot-zen/ud-hf-parquet-tools/blob/main/CONLLU_PARSING.md) in the ud-hf-parquet-tools repository. |
|
|
|
|
|
|
|
|
|
|
|
### Usage

```python
|
|
from datasets import load_dataset |
|
|
|
|
|
|
|
|
# Load the train split of the English EWT treebank, pinned to a specific UD release
ds = load_dataset("commul/universal_dependencies", "en_ewt", revision="{{ ud_ver }}", split="train")
|
|
|
|
|
|
|
|
sentence = ds[0] |
|
|
print(f"Sentence ID: {sentence['sent_id']}") |
|
|
print(f"Text: {sentence['text']}") |
|
|
print(f"Tokens: {sentence['tokens']}") |
|
|
|
|
|
# NOTE: as of v2.0, universal_dependencies.py is no longer shipped with the
# dataset itself, so the helper functions used below must be obtained
# separately (TODO: make them available again).

# Parse optional fields using helper functions
|
|
from universal_dependencies import parse_feats, parse_misc |
|
|
|
|
|
for i, token in enumerate(sentence['tokens']): |
|
|
feats = parse_feats(sentence['feats'][i]) # Returns dict or {} |
|
|
misc = parse_misc(sentence['misc'][i]) # Returns dict or {} |
|
|
print(f"{token}: UPOS={sentence['upos'][i]}, feats={feats}, misc={misc}") |
|
|
|
|
|
|
|
|
# Round-trip: convert the dataset back to CoNLL-U
from universal_dependencies import write_conllu
|
|
|
|
|
|
|
|
write_conllu(ds) |
|
|
|
|
|
|
|
|
write_conllu(ds, "output.conllu") |
|
|
|
|
|
|
|
|
# Write into an in-memory text buffer
import io
|
|
buffer = io.StringIO() |
|
|
write_conllu(ds, buffer) |
|
|
conllu_text = buffer.getvalue() |
|
|
``` |
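
If the helper module is not available in your environment, the FEATS and MISC strings can be unpacked with a few lines of plain Python. A minimal sketch (the `parse_ud_keyvals` name is ours, not part of the dataset); it splits on `|` and only on the first `=`, so values that themselves contain `=` (such as the `Gloss==POSS` case mentioned above) stay intact:

```python
def parse_ud_keyvals(value):
    """Parse a FEATS/MISC string such as 'Mood=Ind|Tense=Past' into a dict.

    Returns {} for None, '' or '_' (the CoNLL-U placeholder for 'no value').
    """
    if value in (None, "", "_"):
        return {}
    # partition() splits on the first '=' only, so 'Gloss==POSS' -> ('Gloss', '=POSS')
    pairs = (item.partition("=") for item in value.split("|"))
    return {key: val for key, _, val in pairs}

# Example: print the parsed features next to each token of the sentence above
for token, feats in zip(sentence["tokens"], sentence["feats"]):
    print(token, parse_ud_keyvals(feats))
```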
|
|
|
|
|
|
|
|
|
|
|
### Supported Tasks and Leaderboards

[More Information Needed]
|
|
|
|
|
|
|
|
|
|
|
## Dataset Structure

All files use a revised version of [the CoNLL-X
|
|
format](http://anthology.aclweb.org/W/W06/W06-2920.pdf) called CoNLL-U. |
|
|
Annotations are encoded in plain text files (UTF-8, [normalized to |
|
|
NFC](http://unicode.org/reports/tr15/), using only the LF character as line |
|
|
break, including an LF character at the end of file). |
|
|
|
|
|
* [Revision (r{{ ud_ver }}) specific documentation](https://github.com/UniversalDependencies/docs/blob/r{{ ud_ver }}/format.md) |
|
|
* [Latest UD CoNLL-U Format documentation](https://universaldependencies.org/format.html) |
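
For orientation: every non-comment line of a CoNLL-U file has ten tab-separated columns (ID, FORM, LEMMA, UPOS, XPOS, FEATS, HEAD, DEPREL, DEPS, MISC), with `_` marking an empty value and `#` lines carrying sentence-level metadata. A schematic example (constructed for illustration, not taken from any treebank):

```
# sent_id = example-1
# text = Dogs bark.
1	Dogs	dog	NOUN	NNS	Number=Plur	2	nsubj	2:nsubj	_
2	bark	bark	VERB	VBP	Mood=Ind|Tense=Pres	0	root	0:root	SpaceAfter=No
3	.	.	PUNCT	.	_	2	punct	2:punct	_
```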
|
|
|
|
|
|
|
|
|
|
|
### Data Instances

This dataset has {{ data.items()|length }} configurations (treebanks).
|
|
```python |
|
|
from datasets import get_dataset_config_names, load_dataset |
|
|
|
|
|
|
|
|
configs = get_dataset_config_names("commul/universal_dependencies", revision="{{ ud_ver }}") |
|
|
print(f"Available treebanks: {len(configs)}") |
|
|
|
|
|
# Example configurations: |
|
|
# ['af_afribooms', |
|
|
# 'akk_pisandub', |
|
|
# 'aqz_tudet', |
|
|
# 'sq_tsa', |
|
|
# 'gsw_uzh', |
|
|
# 'am_att', |
|
|
# ... |
|
|
# ] |
|
|
|
|
|
# Omit the revision to get the configurations of the latest release
latest_configs = get_dataset_config_names("commul/universal_dependencies")
|
|
|
|
|
|
|
|
# Load all splits of a single treebank
dataset = load_dataset("commul/universal_dependencies", "en_ewt")
|
|
print(dataset)
# Prints a DatasetDict mapping each of the treebank's splits to a Dataset
# with the fields described below.
``` |
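
Configuration names follow the `<language code>_<treebank code>` pattern visible above (e.g. `en_ewt`, `fr_gsd`), so a simple prefix filter picks out all treebanks of one language. A small sketch reusing the `configs` list from the snippet above:

```python
# All English treebanks share the "en_" prefix, e.g. "en_ewt"
english_configs = [c for c in configs if c.startswith("en_")]
print(english_configs)
```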
|
|
|
|
|
|
|
|
|
|
|
### Data Fields

Each example in the dataset contains the following fields:
|
|
|
|
|
- **sent_id** (string): Sentence ID from the CoNLL-U file metadata |
|
|
- **text** (string): Full sentence text (surface form) |
|
|
- **tokens** (list of strings): Syntactic word forms (MWT surface forms excluded) |
|
|
- **lemmas** (list of strings): Lemmas for each syntactic word |
|
|
- **upos** (list of strings): Universal POS tags |
|
|
- **xpos** (list of strings): Language-specific POS tags |
|
|
- **feats** (list of strings): Morphological features in UD format |
|
|
- **head** (list of strings): Head indices for dependency relations |
|
|
- **deprel** (list of strings): Dependency relation labels |
|
|
- **deps** (list of strings): Enhanced dependency graph |
|
|
- **misc** (list of strings): Miscellaneous annotations |
|
|
- **mwt** (list of dicts): Multi-Word Token information (NEW in v2.0) |
|
|
- **id** (string): Token range (e.g., "1-2") |
|
|
- **form** (string): Surface form (e.g., "don't") |
|
|
- **misc** (string): MWT-specific metadata |
|
|
- **empty_nodes** (list of dicts): Empty Node Token information (NEW in v2.0) |
|
|
- **comments** (list of strings): All comments including duplicates, empty values, and original ordering (NEW in v2.0) |
|
|
|
|
|
**Example:** |
|
|
|
|
|
```python |
|
|
from datasets import load_dataset |
|
|
|
|
|
dataset = load_dataset("commul/universal_dependencies", "en_ewt", split="train") |
|
|
print(dataset[0]) |
|
|
|
|
|
|
|
|
# Output:
{
|
|
'sent_id': 'weblog-juancole.com_juancole_20051126063000_ENG_20051126_063000-0001', |
|
|
'text': 'Al-Zaman : American forces killed Shaikh Abdullah al-Ani, the preacher at the mosque in the town of Qaim, near the Syrian border.', |
|
|
'comments': [ |
|
|
'newdoc id = weblog-juancole.com_juancole_20051126063000_ENG_20051126_063000', |
|
|
'__SENT_ID__', |
|
|
'newpar id = weblog-juancole.com_juancole_20051126063000_ENG_20051126_063000-p0001', |
|
|
'__TEXT__' |
|
|
], |
|
|
'tokens': ['Al', '-', 'Zaman', ':', 'American', 'forces', 'killed', 'Shaikh', 'Abdullah', 'al', '-', 'Ani', ',', 'the', 'preacher', 'at', 'the', 'mosque', 'in', 'the', 'town', 'of', 'Qaim', ',', 'near', 'the', 'Syrian', 'border', '.'], |
|
|
'lemmas': ['Al', '-', 'Zaman', ':', 'American', 'force', 'kill', 'Shaikh', 'Abdullah', 'al', '-', 'Ani', ',', 'the', 'preacher', 'at', 'the', 'mosque', 'in', 'the', 'town', 'of', 'Qaim', ',', 'near', 'the', 'Syrian', 'border', '.'], |
|
|
'upos': ['PROPN', 'PUNCT', 'PROPN', 'PUNCT', 'ADJ', 'NOUN', 'VERB', 'PROPN', 'PROPN', 'PROPN', 'PUNCT', 'PROPN', 'PUNCT', 'DET', 'NOUN', 'ADP', 'DET', 'NOUN', 'ADP', 'DET', 'NOUN', 'ADP', 'PROPN', 'PUNCT', 'ADP', 'DET', 'ADJ', 'NOUN', 'PUNCT'],
|
|
'xpos': ['NNP', 'HYPH', 'NNP', ':', 'JJ', 'NNS', 'VBD', 'NNP', 'NNP', 'NNP', 'HYPH', 'NNP', ',', 'DT', 'NN', 'IN', 'DT', 'NN', 'IN', 'DT', 'NN', 'IN', 'NNP', ',', 'IN', 'DT', 'JJ', 'NN', '.'], |
|
|
'feats': ['Number=Sing', None, 'Number=Sing', None, 'Degree=Pos', 'Number=Plur', 'Mood=Ind|Number=Plur|Person=3|Tense=Past|VerbForm=Fin', 'Number=Sing', 'Number=Sing', 'Number=Sing', None, 'Number=Sing', None, 'Definite=Def|PronType=Art', 'Number=Sing', None, 'Definite=Def|PronType=Art', 'Number=Sing', None, 'Definite=Def|PronType=Art', 'Number=Sing', None, 'Number=Sing', None, None, 'Definite=Def|PronType=Art', 'Degree=Pos', 'Number=Sing', None], |
|
|
'head': ['0', '3', '1', '7', '6', '7', '1', '7', '8', '8', '12', '8', '15', '15', '8', '18', '18', '15', '21', '21', '18', '23', '21', '28', '28', '28', '28', '21', '1'], |
|
|
'deprel': ['root', 'punct', 'flat', 'punct', 'amod', 'nsubj', 'parataxis', 'obj', 'flat', 'flat', 'punct', 'flat', 'punct', 'det', 'appos', 'case', 'det', 'nmod', 'case', 'det', 'nmod', 'case', 'nmod', 'punct', 'case', 'det', 'amod', 'nmod', 'punct'], |
|
|
'deps': ['0:root', '3:punct', '1:flat', '7:punct', '6:amod', '7:nsubj', '1:parataxis', '7:obj', '8:flat', '8:flat', '12:punct', '8:flat', '15:punct', '15:det', '8:appos', '18:case', '18:det', '15:nmod:at', '21:case', '21:det', '18:nmod:in', '23:case', '21:nmod:of', '28:punct', '28:case', '28:det', '28:amod', '21:nmod:near', '1:punct'], |
|
|
'misc': ['SpaceAfter=No', 'SpaceAfter=No', None, None, None, None, None, None, None, 'SpaceAfter=No', 'SpaceAfter=No', 'SpaceAfter=No', None, None, None, None, None, None, None, None, None, None, 'SpaceAfter=No', None, None, None, None, 'SpaceAfter=No', None], |
|
|
'mwt': [], |
|
|
'empty_nodes': [] |
|
|
} |
|
|
``` |
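
To see how `head` and `deprel` fit together, the sketch below prints each token with its relation and its governing word. It assumes heads are plain numeric strings with `'0'` denoting the artificial root, as in the example above:

```python
sent = dataset[0]
for i, (tok, head, rel) in enumerate(
        zip(sent["tokens"], sent["head"], sent["deprel"]), start=1):
    # head indices are 1-based strings; '0' points to the artificial root node
    governor = "ROOT" if head == "0" else sent["tokens"][int(head) - 1]
    print(f"{i:>2}  {tok:<12} --{rel}--> {governor}")
```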
|
|
|
|
|
**MWT Example (French):**
|
|
|
|
|
```python |
|
|
dataset = load_dataset("commul/universal_dependencies", "fr_gsd", split="train") |
|
|
|
|
|
# Find the first sentence that contains a multi-word token
example = next(ex for ex in dataset if ex['mwt'])
|
|
print(example['mwt']) |
|
|
|
|
|
|
|
|
# Output:
[{'id': '8-9', 'form': 'des', 'feats': None, 'misc': None}]
|
|
|
|
|
``` |
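
Because `tokens` holds only syntactic words since v2.0, the surface token sequence has to be rebuilt from `tokens` plus `mwt` when you need it. A minimal sketch (the `surface_tokens` helper is ours; it assumes MWT ids are simple ranges such as `'8-9'` over 1-based syntactic word indices, as in the example above):

```python
def surface_tokens(sentence):
    """Substitute each MWT surface form (e.g. French 'des') for the
    syntactic words it covers (e.g. 'de' + 'les')."""
    tokens = list(sentence["tokens"])
    # Apply MWTs from right to left so that earlier indices stay valid
    for mwt in sorted(sentence["mwt"],
                      key=lambda m: int(m["id"].split("-")[0]),
                      reverse=True):
        start, end = (int(i) for i in mwt["id"].split("-"))
        tokens[start - 1:end] = [mwt["form"]]
    return tokens

print(surface_tokens(example))
```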
|
|
|
|
|
|
|
|
|
|
|
### Data Splits

The file `metadata.json` stores additional information about the data, for example the available splits:
|
|
|
|
|
```python |
|
|
from huggingface_hub import hf_hub_download |
|
|
import json |
|
|
|
|
|
md = hf_hub_download(repo_id="commul/universal_dependencies", filename="metadata.json", repo_type="dataset") |
|
|
|
|
|
with open(md, "r", encoding="utf-8") as f: |
|
|
metadata = json.load(f) |
|
|
|
|
|
# List the available splits for each treebank
print({name: list(info['splits']) for name, info in metadata.items()})
|
|
``` |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
## Dataset Creation

### Curation Rationale

[More Information Needed]
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
### Source Data

#### Initial Data Collection and Normalization

[More Information Needed]
|
|
|
|
|
|
|
|
|
|
|
#### Who are the source language producers?

[More Information Needed]
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
### Annotations

#### Annotation process

[More Information Needed]
|
|
|
|
|
|
|
|
|
|
|
#### Who are the annotators?

[More Information Needed]
|
|
|
|
|
|
|
|
|
|
|
### Personal and Sensitive Information

[More Information Needed]
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]
|
|
|
|
|
|
|
|
|
|
|
### Discussion of Biases

[More Information Needed]
|
|
|
|
|
|
|
|
|
|
|
### Other Known Limitations

[More Information Needed]
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
## Additional Information

### Dataset Curators

[More Information Needed]
|
|
|
|
|
|
|
|
### Licensing Information

Universal Dependencies is a collection of linguistic data and tools. Each of
|
|
the treebanks has its own license terms and you (the "User") are responsible |
|
|
for complying with the license terms applicable to those parts of UD which you |
|
|
use. |
|
|
|
|
|
Details about the License Terms: |
|
|
* https://lindat.mff.cuni.cz/repository/xmlui/page/license-ud-{{ ud_ver }} |
|
|
|
|
|
The tools in `./tools/` are licensed under the [Apache-2.0](https://www.apache.org/licenses/LICENSE-2.0) license.
|
|
|
|
|
|
|
|
|
|
|
### Citation Information

```bibtex
|
|
{{ citation }} |
|
|
``` |
|
|
|
|
|
|
|
|
|
|
|
### Contributions

Thanks to [universal-dependencies](https://huggingface.co/universal-dependencies) for [the original version of this dataset](https://huggingface.co/datasets/universal-dependencies/universal_dependencies).
|
|
|