iiegn committed
Commit d0de5f4 · verified · 1 Parent(s): dc9f48f

Revert README.md to 541812f0 and update with tools/ content

Files changed (3):
  1. README.md +0 -0
  2. tools/README.md +0 -0
  3. tools/templates/README.tmpl +110 -16
README.md CHANGED
The diff for this file is too large to render. See raw diff
 
tools/README.md CHANGED
The diff for this file is too large to render. See raw diff
 
tools/templates/README.tmpl CHANGED
@@ -110,6 +110,23 @@ config_names:
 
 # Dataset Card for Universal Dependencies Treebank
 
+## What's New in v2.0
+
+**Version 2.0.0** introduces significant improvements and breaking changes:
+
+- **Parquet Format:** faster loading with HuggingFace datasets >=4.0.0
+- **MWT Support:** New `mwt` field provides structured multi-word token information
+- **Enhanced Security:** No more `trust_remote_code=True` required
+- **Separate Versioning:** Loader version (2.0.0) distinct from UD data version (2.17)
+
+**Breaking Changes:**
+- Token sequences now exclude MWT surface forms (matches UD guidelines)
+- Requires `datasets>=4.0.0` for Parquet support
+
+- **Migration Guide:** See [MIGRATION.md](MIGRATION.md) for detailed upgrade instructions
+- **Changelog:** See [CHANGELOG.md](CHANGELOG.md) for complete release notes
+
+
 ## Table of Contents
 - [Dataset Description](#dataset-description)
 - [Dataset Summary](#dataset-summary)
@@ -176,7 +193,7 @@ For technical details, see [`tools/CONLLU_PARSING_ISSUES.md`](https://huggingfac
 from datasets import load_dataset
 
 # Load a specific treebank
-ds = load_dataset("commul/universal_dependencies", "en_ewt", split="train", trust_remote_code=True)
+ds = load_dataset("commul/universal_dependencies", "en_ewt", revision="{{ ud_ver }}", split="train")
 
 # Access sentence data
 sentence = ds[0]
@@ -225,28 +242,105 @@ break, including an LF character at the end of file).
 
 ### Data Instances
 
-This dataset has {{ data.items()|length }} configurations.
+This dataset has {{ data.items()|length }} configurations (treebanks).
 ```python
-from datasets import get_dataset_config_names
-
-# Get the revision specific configurations
-get_dataset_config_names("commul/universal_dependencies", revision="{{ ud_ver }}", trust_remote_code=True) # 179
-['af_afribooms',
- 'akk_pisandub',
- 'aqz_tudet',
- 'sq_tsa',
- 'gsw_uzh',
- 'am_att',
- ...
-]
+from datasets import get_dataset_config_names, load_dataset
+
+# Get all available treebank configurations for revision="{{ ud_ver }}"
+configs = get_dataset_config_names("commul/universal_dependencies", revision="{{ ud_ver }}")
+print(f"Available treebanks: {len(configs)}")
+
+# Example configurations:
+# ['af_afribooms',
+#  'akk_pisandub',
+#  'aqz_tudet',
+#  'sq_tsa',
+#  'gsw_uzh',
+#  'am_att',
+#  ...
+# ]
 
 # Get the latest configurations
-get_dataset_config_names("commul/universal_dependencies", trust_remote_code=True)
+get_dataset_config_names("commul/universal_dependencies")
+
+# Load a specific treebank
+dataset = load_dataset("commul/universal_dependencies", "en_ewt")
+print(dataset)
+
+# Output:
+# DatasetDict({
+#     train: Dataset({
+#         features: ['idx', 'text', 'tokens', 'lemmas', 'upos', 'xpos', 'feats', 'head', 'deprel', 'deps', 'misc', 'mwt'],
+#         num_rows: 12543
+#     })
+#     validation: Dataset({
+#         features: ['idx', 'text', 'tokens', 'lemmas', 'upos', 'xpos', 'feats', 'head', 'deprel', 'deps', 'misc', 'mwt'],
+#         num_rows: 2001
+#     })
+#     test: Dataset({
+#         features: ['idx', 'text', 'tokens', 'lemmas', 'upos', 'xpos', 'feats', 'head', 'deprel', 'deps', 'misc', 'mwt'],
+#         num_rows: 2077
+#     })
+# })
 ```
 
 ### Data Fields
 
-[More Information Needed]
+Each example in the dataset contains the following fields:
+
+- **idx** (string): Sentence ID from the CoNLL-U file metadata
+- **text** (string): Full sentence text (surface form)
+- **tokens** (list of strings): Syntactic word forms (MWT surface forms excluded)
+- **lemmas** (list of strings): Lemmas for each syntactic word
+- **upos** (list of strings): Universal POS tags
+- **xpos** (list of strings): Language-specific POS tags
+- **feats** (list of strings): Morphological features in UD format
+- **head** (list of strings): Head indices for dependency relations
+- **deprel** (list of strings): Dependency relation labels
+- **deps** (list of strings): Enhanced dependency graph
+- **misc** (list of strings): Miscellaneous annotations
+- **mwt** (list of dicts): Multi-Word Token information (NEW in v2.0)
+  - **id** (string): Token range (e.g., "1-2")
+  - **form** (string): Surface form (e.g., "don't")
+  - **misc** (string): MWT-specific metadata
+
+**Example:**
+
+```python
+from datasets import load_dataset
+
+dataset = load_dataset("commul/universal_dependencies", "en_ewt", split="train")
+print(dataset[0])
+
+# Output:
+# {
+#     'idx': 'weblog-blogspot.com_nominations_20041117172713_ENG_20041117_172713-0001',
+#     'text': 'From the AP comes this story:',
+#     'tokens': ['From', 'the', 'AP', 'comes', 'this', 'story', ':'],
+#     'lemmas': ['from', 'the', 'AP', 'come', 'this', 'story', ':'],
+#     'upos': ['ADP', 'DET', 'PROPN', 'VERB', 'DET', 'NOUN', 'PUNCT'],
+#     'xpos': ['IN', 'DT', 'NNP', 'VBZ', 'DT', 'NN', ':'],
+#     'feats': ['_', 'Definite=Def|PronType=Art', 'Number=Sing', 'Mood=Ind|Number=Sing|Person=3|Tense=Pres|VerbForm=Fin', 'Number=Sing|PronType=Dem', 'Number=Sing', '_'],
+#     'head': ['4', '3', '4', '0', '6', '4', '4'],
+#     'deprel': ['case', 'det', 'obl', 'root', 'det', 'nsubj', 'punct'],
+#     'deps': ['_', '_', '_', '_', '_', '_', '_'],
+#     'misc': ['_', '_', '_', '_', '_', '_', 'SpaceAfter=No'],
+#     'mwt': []  # No MWTs in this sentence
+# }
+```
+
+**MWT Example (French):**
+
+```python
+dataset = load_dataset("commul/universal_dependencies", "fr_gsd", split="train")
+# Find sentence with MWT
+example = [ex for ex in dataset if ex['mwt']][0]
+print(example['mwt'])
+
+# Output:
+# [{'id': '2-3', 'form': 'des', 'misc': ''}]
+# This means tokens[1:3] = ['de', 'les'] are combined as MWT surface form "des"
+```
 
 ### Data Splits
 
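
The `mwt` field documented in the template above pairs a 1-based token range with the surface form it replaces. A minimal sketch of how a consumer might splice MWT surface forms back into the token sequence; the example sentence is hand-constructed here to mirror the documented schema rather than fetched from the Hub:

```python
# Sketch: rebuild a surface-token sequence from `tokens` + `mwt`,
# per the schema in the template diff above. The example dict is
# hand-built (not loaded from the Hub) purely for illustration.

def surface_tokens(example):
    """Replace each MWT's syntactic words with its surface form."""
    tokens = list(example["tokens"])
    # Process MWTs right-to-left so earlier list indices stay valid.
    for mwt in sorted(example["mwt"],
                      key=lambda m: int(m["id"].split("-")[0]),
                      reverse=True):
        start, end = (int(i) for i in mwt["id"].split("-"))  # 1-based, inclusive
        tokens[start - 1:end] = [mwt["form"]]
    return tokens

# French "des" = "de" + "les" (cf. the fr_gsd example in the diff)
example = {
    "tokens": ["Il", "parle", "de", "les", "enfants"],
    "mwt": [{"id": "3-4", "form": "des", "misc": ""}],
}
print(surface_tokens(example))  # ['Il', 'parle', 'des', 'enfants']
```

Processing ranges right-to-left is the one design point worth noting: splicing a two-word range down to one word shifts every later index, so iterating from the end keeps the remaining `id` ranges valid.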