iiegn and Claude Sonnet 4.5 committed
Commit c420a38 · verified · 1 Parent(s): 3c8a787

Update to modern HuggingFace dataset card format


- Change from dataset_info to configs format in README template
- Use data_files with parquet paths instead of features schema
- Add uv index configuration to pyproject.toml for PyPI and TestPyPI
- Set en_ewt as default config
- Simplify README structure (removed redundant feature definitions)
- Add TODO note about helper functions post v2.0

This aligns with HuggingFace's current dataset card standards
and the new parquet-based distribution model.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>

Files changed (3)
  1. pyproject.toml +10 -0
  2. tools/README-2.17 +0 -0
  3. tools/templates/README.tmpl +53 -79
pyproject.toml CHANGED
@@ -53,3 +53,13 @@ quote-style = "double"
 indent-style = "space"
 skip-magic-trailing-comma = false
 line-ending = "auto"
+
+[[tool.uv.index]]
+name = "PyPI"
+url = "https://pypi.org/simple/"
+default = true
+
+[[tool.uv.index]]
+name = "TestPyPI"
+url = "https://test.pypi.org/simple/"
+explicit = true
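A note on the two index entries: per uv's documented index semantics, `default = true` makes PyPI the primary resolution index, while `explicit = true` means TestPyPI is only consulted for packages that opt in via `tool.uv.sources`. A minimal sketch for inspecting the parsed configuration (assumes Python 3.11+ for `tomllib` and that `pyproject.toml` is in the working directory):

```python
# Minimal sketch: read the [[tool.uv.index]] entries added above.
import tomllib

with open("pyproject.toml", "rb") as f:
    indexes = tomllib.load(f)["tool"]["uv"]["index"]

for idx in indexes:
    # "default" marks the primary index; "explicit" restricts an index
    # to packages explicitly pinned to it via [tool.uv.sources].
    print(idx["name"], idx["url"],
          "default" if idx.get("default") else "",
          "explicit" if idx.get("explicit") else "")
```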
tools/README-2.17 CHANGED
The diff for this file is too large to render. See raw diff
 
tools/templates/README.tmpl CHANGED
@@ -46,74 +46,24 @@ tags:
   - dependency-parsing
   - part-of-speech-tagging

-dataset_info:
+configs:
 {%- for name,metadata in data.items()|sort(attribute='1.dirname') %}
 - config_name: {{ name }}
-  features:
-  - name: idx
-    dtype: string
-  - name: text
-    dtype: string
-  - name: tokens
-    sequence: string
-  - name: lemmas
-    sequence: string
-  - name: upos
-    sequence:
-      class_label:
-        names:
-          '0': NOUN
-          '1': PUNCT
-          '2': ADP
-          '3': NUM
-          '4': SYM
-          '5': SCONJ
-          '6': ADJ
-          '7': PART
-          '8': DET
-          '9': CCONJ
-          '10': PROPN
-          '11': PRON
-          '12': X
-          '13': _
-          '14': ADV
-          '15': INTJ
-          '16': VERB
-          '17': AUX
-  - name: xpos
-    sequence: string
-  - name: feats
-    sequence: string
-  - name: head
-    sequence: string
-  - name: deprel
-    sequence: string
-  - name: deps
-    sequence: string
-  - name: misc
-    sequence: string
-  splits:
+  data_files:
 {%- set ns = namespace(dataset_size=0) -%}
 {%- for fileset_split_name,fileset_split_data in metadata.splits.items() %}
-  - name: {{ fileset_split_name }}
-    num_bytes: {{ fileset_split_data.num_bytes }}{%- set ns.dataset_size = ns.dataset_size + fileset_split_data.num_bytes %}
-    num_examples: {{ fileset_split_data.num_sentences }}
+  - split: {{ fileset_split_name }}
+    path: parquet/{{ name }}/{{ fileset_split_name }}.parquet
 {%- endfor %}
-  dataset_size: {{ ns.dataset_size }}
-{%- endfor %}
-
-config_names:
-{%- for name,metadata in data.items()|sort(attribute='1.dirname') %}
-- {{ name }}
+{%- if name == 'en_ewt' %}
+  default: true
+{%- endif %}
 {%- endfor %}
 ---

-# Dataset Card for Universal Dependencies Treebank
-
-## What's New in v2.0
+## Dataset Card (v2.0) for Universal Dependencies Treebank

 **Version 2.0.0** introduces significant improvements and breaking changes:
-
 - **Parquet Format:** faster loading with HuggingFace datasets >=4.0.0
 - **MWT Support:** New `mwt` field provides structured multi-word token information
 - **Enhanced Security:** No more `trust_remote_code=True` required
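The `configs`/`data_files` block above is what lets `datasets` resolve each treebank directly from the parquet files, with no loading script. A minimal usage sketch (the repo id is taken from the card's own examples; split names vary per treebank):

```python
from datasets import load_dataset

# en_ewt is the config the template marks as default: true; naming it
# explicitly also works. No trust_remote_code is needed under v2.0.
dataset = load_dataset("commul/universal_dependencies", "en_ewt")
print(dataset["train"][0]["text"])
```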
@@ -201,6 +151,9 @@ print(f"Sentence ID: {sentence['sent_id']}")
 print(f"Text: {sentence['text']}")
 print(f"Tokens: {sentence['tokens']}")

+## TODO: Make helper functions available
+## post v2.0 universal_dependencies.py is not part of the Dataset any longer!
+##
 # Parse optional fields using helper functions
 from universal_dependencies import parse_feats, parse_misc
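Since the TODO notes that `universal_dependencies.py` no longer ships with the dataset after v2.0, the import above will fail until the helpers are published separately. A local stand-in for `parse_feats` is easy to write from the CoNLL-U format alone; a hedged sketch (the eventual helper's exact behavior may differ):

```python
def parse_feats(feats):
    """Parse a CoNLL-U FEATS string such as 'Definite=Def|PronType=Art'
    into a dict. Treats None and '_' (no features) as empty."""
    if not feats or feats == "_":
        return {}
    return dict(pair.split("=", 1) for pair in feats.split("|"))

print(parse_feats("Definite=Def|PronType=Art"))
# {'Definite': 'Def', 'PronType': 'Art'}
```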
 
@@ -270,15 +223,15 @@ print(dataset)
 # Output:
 # DatasetDict({
 #     train: Dataset({
-#         features: ['idx', 'text', 'tokens', 'lemmas', 'upos', 'xpos', 'feats', 'head', 'deprel', 'deps', 'misc', 'mwt'],
-#         num_rows: 12543
+#         features: ['sent_id', 'text', 'comments', 'tokens', 'lemmas', 'upos', 'xpos', 'feats', 'head', 'deprel', 'deps', 'misc', 'mwt', 'empty_nodes'],
+#         num_rows: 12544
 #     })
-#     validation: Dataset({
-#         features: ['idx', 'text', 'tokens', 'lemmas', 'upos', 'xpos', 'feats', 'head', 'deprel', 'deps', 'misc', 'mwt'],
+#     dev: Dataset({
+#         features: ['sent_id', 'text', 'comments', 'tokens', 'lemmas', 'upos', 'xpos', 'feats', 'head', 'deprel', 'deps', 'misc', 'mwt', 'empty_nodes'],
 #         num_rows: 2001
 #     })
 #     test: Dataset({
-#         features: ['idx', 'text', 'tokens', 'lemmas', 'upos', 'xpos', 'feats', 'head', 'deprel', 'deps', 'misc', 'mwt'],
+#         features: ['sent_id', 'text', 'comments', 'tokens', 'lemmas', 'upos', 'xpos', 'feats', 'head', 'deprel', 'deps', 'misc', 'mwt', 'empty_nodes'],
 #         num_rows: 2077
 #     })
 # })
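Note the split rename in this hunk: v1.x exposed `validation`, v2.0 exposes `dev` (matching the CoNLL-U file naming). Code indexing the old name needs a one-line update, sketched here assuming `dataset` is the `DatasetDict` loaded above:

```python
# v1.x: dataset["validation"]; v2.0: dataset["dev"]
for split in ("train", "dev", "test"):
    print(split, dataset[split].num_rows)
```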
@@ -288,7 +241,7 @@ print(dataset)

 Each example in the dataset contains the following fields:

-- **idx** (string): Sentence ID from the CoNLL-U file metadata
+- **sent_id** (string): Sentence ID from the CoNLL-U file metadata
 - **text** (string): Full sentence text (surface form)
 - **tokens** (list of strings): Syntactic word forms (MWT surface forms excluded)
 - **lemmas** (list of strings): Lemmas for each syntactic word
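The field list also renames `idx` to `sent_id`, so per-example access changes accordingly; a one-line migration sketch (assuming the `dataset` loaded above):

```python
example = dataset["train"][0]
sent_id = example["sent_id"]  # v1.x code read example["idx"]
```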
@@ -303,6 +256,8 @@ Each example in the dataset contains the following fields:
   - **id** (string): Token range (e.g., "1-2")
   - **form** (string): Surface form (e.g., "don't")
   - **misc** (string): MWT-specific metadata
+- **empty_nodes** (list of dicts): Empty node token information (NEW in v2.0)
+- **comments** (list of strings): All comments, preserving duplicates, empty values, and original ordering (NEW in v2.0)

 **Example:**
 
@@ -314,18 +269,25 @@ print(dataset[0])

 # Output:
 {
-    'idx': 'weblog-blogspot.com_nominations_20041117172713_ENG_20041117_172713-0001',
-    'text': 'From the AP comes this story:',
-    'tokens': ['From', 'the', 'AP', 'comes', 'this', 'story', ':'],
-    'lemmas': ['from', 'the', 'AP', 'come', 'this', 'story', ':'],
-    'upos': ['ADP', 'DET', 'PROPN', 'VERB', 'DET', 'NOUN', 'PUNCT'],
-    'xpos': ['IN', 'DT', 'NNP', 'VBZ', 'DT', 'NN', ':'],
-    'feats': ['_', 'Definite=Def|PronType=Art', 'Number=Sing', 'Mood=Ind|Number=Sing|Person=3|Tense=Pres|VerbForm=Fin', 'Number=Sing|PronType=Dem', 'Number=Sing', '_'],
-    'head': ['4', '3', '4', '0', '6', '4', '4'],
-    'deprel': ['case', 'det', 'obl', 'root', 'det', 'nsubj', 'punct'],
-    'deps': ['_', '_', '_', '_', '_', '_', '_'],
-    'misc': ['_', '_', '_', '_', '_', '_', 'SpaceAfter=No'],
-    'mwt': []  # No MWTs in this sentence
+    'sent_id': 'weblog-juancole.com_juancole_20051126063000_ENG_20051126_063000-0001',
+    'text': 'Al-Zaman : American forces killed Shaikh Abdullah al-Ani, the preacher at the mosque in the town of Qaim, near the Syrian border.',
+    'comments': [
+        'newdoc id = weblog-juancole.com_juancole_20051126063000_ENG_20051126_063000',
+        '__SENT_ID__',
+        'newpar id = weblog-juancole.com_juancole_20051126063000_ENG_20051126_063000-p0001',
+        '__TEXT__'
+    ],
+    'tokens': ['Al', '-', 'Zaman', ':', 'American', 'forces', 'killed', 'Shaikh', 'Abdullah', 'al', '-', 'Ani', ',', 'the', 'preacher', 'at', 'the', 'mosque', 'in', 'the', 'town', 'of', 'Qaim', ',', 'near', 'the', 'Syrian', 'border', '.'],
+    'lemmas': ['Al', '-', 'Zaman', ':', 'American', 'force', 'kill', 'Shaikh', 'Abdullah', 'al', '-', 'Ani', ',', 'the', 'preacher', 'at', 'the', 'mosque', 'in', 'the', 'town', 'of', 'Qaim', ',', 'near', 'the', 'Syrian', 'border', '.'],
+    'upos': [10, 1, 10, 1, 6, 0, 16, 10, 10, 10, 1, 10, 1, 8, 0, 2, 8, 0, 2, 8, 0, 2, 10, 1, 2, 8, 6, 0, 1],
+    'xpos': ['NNP', 'HYPH', 'NNP', ':', 'JJ', 'NNS', 'VBD', 'NNP', 'NNP', 'NNP', 'HYPH', 'NNP', ',', 'DT', 'NN', 'IN', 'DT', 'NN', 'IN', 'DT', 'NN', 'IN', 'NNP', ',', 'IN', 'DT', 'JJ', 'NN', '.'],
+    'feats': ['Number=Sing', None, 'Number=Sing', None, 'Degree=Pos', 'Number=Plur', 'Mood=Ind|Number=Plur|Person=3|Tense=Past|VerbForm=Fin', 'Number=Sing', 'Number=Sing', 'Number=Sing', None, 'Number=Sing', None, 'Definite=Def|PronType=Art', 'Number=Sing', None, 'Definite=Def|PronType=Art', 'Number=Sing', None, 'Definite=Def|PronType=Art', 'Number=Sing', None, 'Number=Sing', None, None, 'Definite=Def|PronType=Art', 'Degree=Pos', 'Number=Sing', None],
+    'head': ['0', '3', '1', '7', '6', '7', '1', '7', '8', '8', '12', '8', '15', '15', '8', '18', '18', '15', '21', '21', '18', '23', '21', '28', '28', '28', '28', '21', '1'],
+    'deprel': ['root', 'punct', 'flat', 'punct', 'amod', 'nsubj', 'parataxis', 'obj', 'flat', 'flat', 'punct', 'flat', 'punct', 'det', 'appos', 'case', 'det', 'nmod', 'case', 'det', 'nmod', 'case', 'nmod', 'punct', 'case', 'det', 'amod', 'nmod', 'punct'],
+    'deps': ['0:root', '3:punct', '1:flat', '7:punct', '6:amod', '7:nsubj', '1:parataxis', '7:obj', '8:flat', '8:flat', '12:punct', '8:flat', '15:punct', '15:det', '8:appos', '18:case', '18:det', '15:nmod:at', '21:case', '21:det', '18:nmod:in', '23:case', '21:nmod:of', '28:punct', '28:case', '28:det', '28:amod', '21:nmod:near', '1:punct'],
+    'misc': ['SpaceAfter=No', 'SpaceAfter=No', None, None, None, None, None, None, None, 'SpaceAfter=No', 'SpaceAfter=No', 'SpaceAfter=No', None, None, None, None, None, None, None, None, None, None, 'SpaceAfter=No', None, None, None, None, 'SpaceAfter=No', None],
+    'mwt': [],
+    'empty_nodes': []
 }
 ```
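Two things are worth noting in the new output: `upos` now arrives as a list of integers, and the new `comments`/`mwt`/`empty_nodes` fields are plain Python structures. If the parquet schema still carries a `ClassLabel`, `features['upos'].feature.int2str()` recovers the tag names; otherwise the v1.x label list (removed from the card above, and consistent with the integers in this very example) works directly. A sketch, assuming `dataset` is the split indexed by `dataset[0]` above:

```python
# Label list reproduced from the removed v1.x schema; index 10 = PROPN, etc.
UPOS_NAMES = ["NOUN", "PUNCT", "ADP", "NUM", "SYM", "SCONJ", "ADJ", "PART",
              "DET", "CCONJ", "PROPN", "PRON", "X", "_", "ADV", "INTJ",
              "VERB", "AUX"]

example = dataset[0]
print([UPOS_NAMES[i] for i in example["upos"][:5]])
# ['PROPN', 'PUNCT', 'PROPN', 'PUNCT', 'ADJ']  -> Al - Zaman : American
print(example["comments"])     # raw CoNLL-U comment lines, order preserved
print(example["empty_nodes"])  # [] when the sentence has no empty nodes
```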
 
@@ -338,13 +300,25 @@ example = [ex for ex in dataset if ex['mwt']][0]
 print(example['mwt'])

 # Output:
-[{'id': '2-3', 'form': 'des', 'misc': ''}]
-# This means tokens[1:3] = ['de', 'les'] are combined as MWT surface form "des"
+[{'id': '8-9', 'form': 'des', 'feats': None, 'misc': None}]
+# This means example['tokens'][7:9] = ['de', 'les'] are combined as MWT surface form "des"
 ```

 ### Data Splits

-[More Information Needed]
+The file `metadata.json` stores additional information about the data, for example, available splits:
+
+```python
+from huggingface_hub import hf_hub_download
+import json
+
+md = hf_hub_download(repo_id="commul/universal_dependencies", filename="metadata.json", repo_type="dataset")
+
+with open(md, "r", encoding="utf-8") as f:
+    metadata = json.load(f)
+
+[metadata[key]['splits'].keys() for key in metadata]
+```

 ## Dataset Creation
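The corrected MWT output above (`'8-9'` covering `example['tokens'][7:9]`) implies the `id` field is a 1-based inclusive range over the syntactic words. Under that reading, the surface token sequence can be reconstructed by collapsing each covered span back to its MWT form; a hedged sketch:

```python
def surface_tokens(tokens, mwt):
    """Collapse syntactic words covered by an MWT into its surface form.
    Assumes each mwt 'id' is a 1-based inclusive range like '8-9'."""
    spans = {}
    for m in mwt:
        start, end = (int(i) for i in m["id"].split("-"))
        spans[start] = (end, m["form"])
    out, i = [], 1
    while i <= len(tokens):
        if i in spans:
            end, form = spans[i]
            out.append(form)  # e.g. 'de' + 'les' -> 'des'
            i = end + 1
        else:
            out.append(tokens[i - 1])
            i += 1
    return out

print(surface_tokens(["de", "les"], [{"id": "1-2", "form": "des"}]))
# ['des']
```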
 
 