iiegn committed · Commit 6d68609 (verified) · Parent(s): 749caf5

Release v2.0.2 - Framework patch release


**Added:**
- Updated `ud-hf-parquet-tools` to pinned tag v1.1.0
- Added `jinja2>=3.1.6` for `uv run` support

**Changed:**
- Removed shebang `--script` usage (use `uv run` with dependencies in `pyproject.toml`)
- Removed ar_nyuad configuration from dataset card
- Removed ja_bccwj configuration from dataset card
- Both treebanks moved to blocked status (license restrictions)
- `02_generate_metadata.py` now writes formatted (sorted, indented) JSON output

**Fixed:**
- Corrected dataset card configs by excluding blocked treebanks
- Updated README-2.17 metadata to reflect the current dataset configuration

See CHANGELOG.md for full details.
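
With `--script` dropped, the remaining `-S` in the shebang still does the heavy lifting; a small demonstration of why it is needed (using `echo` as a stand-in for `uv`, since running the real tools requires this repo checked out):

```shell
# The new shebang line in the tools is:
#   #!/usr/bin/env -S uv run
# The kernel passes everything after the interpreter path as ONE argument;
# env's -S flag splits that single argument into separate words, so the
# two-word command "uv run" works. Demonstration with echo standing in for uv:
env -S 'echo uv run' tools/03_generate_README.py
```

The tools can then be invoked either as `uv run tools/<name>.py`, or directly via the executable bit, with dependencies resolved from `pyproject.toml` rather than per-file inline metadata.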

CHANGELOG.md CHANGED
```diff
@@ -5,6 +5,32 @@ All notable changes to the Universal Dependencies HuggingFace dataset loader wil
 The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
 and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
 
+## [2.0.2] - 2026-02-02
+
+### Added
+
+- **Dependency Update**
+  - Updated `ud-hf-parquet-tools` to pinned tag version v1.1.0 in pyproject.toml
+  - Added `jinja2>=3.1.6` for `uv run` support
+
+### Changed
+
+- **Drop uv script usage**
+  - Removed shebang --script usage (use `uv run` with dependencies in `pyproject.toml`)
+
+- **Treebank Updates**
+  - Removed `ar_nyuad` configuration (treebank moved to blocked status)
+  - Removed `ja_bccwj` configuration (treebank moved to blocked status)
+
+- **Metadata Generation**
+  - `02_generate_metadata.py` now uses output formatting
+
+### Fixed
+
+- **Updated README Metadata**
+  - Corrected dataset card configs by excluding blocked treebanks
+  - `tools/README-2.17` now reflects current dataset configuration
+
 ## [2.0.1] - 2026-01-14
 
 ### Added
```
pyproject.toml CHANGED
```diff
@@ -1,6 +1,6 @@
 [project]
 name = "universal-dependencies"
-version = "2.0.1"
+version = "2.0.2"
 description = "UD Dependencies Data Set"
 readme = "README.md"
 requires-python = ">=3.12"
@@ -11,6 +11,8 @@ dependencies = [
     "conllu>=5.0.0",
     "pytest>=7.0.0",
     "python-dotenv>=1.0.0",
+    "ud-hf-parquet-tools",
+    "jinja2>=3.1.6",
 ]
 
 [build-system]
@@ -63,3 +65,6 @@ default = true
 name = "TestPyPI"
 url = "https://test.pypi.org/simple/"
 explicit = true
+
+[tool.uv.sources]
+ud-hf-parquet-tools = { git = "https://github.com/bot-zen/ud-hf-parquet-tools", tag = "v1.1.0" }
```
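
A `[tool.uv.sources]` table overrides where uv fetches a declared dependency: the bare name stays in `dependencies`, and the source entry pins it to an immutable git tag. The general shape (package name and URL below are placeholders, not from this repo):

```toml
[project]
dependencies = [
    "some-package",  # placeholder; no version constraint, the source below pins it
]

[tool.uv.sources]
# Resolve this package from a git tag instead of a registry release.
some-package = { git = "https://github.com/example/some-package", tag = "v1.0.0" }
```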
tools/00_fetch_ud_clarin-dspace_metadata.py CHANGED
```diff
@@ -1,10 +1,4 @@
-#!/usr/bin/env -S uv run --script
-#
-# /// script
-# requires-python = ">=3.12"
-# dependencies = [
-# ]
-# ///
+#!/usr/bin/env -S uv run
 """
 Download UD metadata for the different releases from the lindat clarin-dspace
 repository into `./etc/`:
```
tools/02_generate_metadata.py CHANGED
```diff
@@ -1,12 +1,4 @@
-#!/usr/bin/env -S uv run --script
-#
-# /// script
-# requires-python = ">=3.12"
-# dependencies = [
-#     "pyyaml",
-#     "load-dotenv",
-# ]
-# ///
+#!/usr/bin/env -S uv run
 """
 Collect relevant metadata from local UD directories:
 - extracting the '# Summary' from the beginning and machine readable
@@ -288,7 +280,8 @@ if __name__ == '__main__':
     output_fn = f"metadata-{UD_VER}.json"
     if args.override or not open(output_fn, 'r').read():
         with open(output_fn, 'w') as fh:
-            json.dump(results, fh, ensure_ascii=False)
+            json.dump(results, fh, ensure_ascii=False, sort_keys=True,
+                      indent=4, separators=(',', ': '))
         print(f"{output_fn} written")
     else:
         logging.info(f"Output {output_fn} already exists: Not overriding.")
```
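
The extra `json.dump` arguments make the metadata file deterministic and diff-friendly; a minimal sketch of their effect (sample data, not the real metadata):

```python
import json

# Same formatting arguments now used in 02_generate_metadata.py:
# sort_keys gives a stable key order across runs, indent=4 pretty-prints,
# and the explicit separators pin the delimiter style.
sample = {"b": 1, "a": [1, 2]}
formatted = json.dumps(sample, ensure_ascii=False, sort_keys=True,
                       indent=4, separators=(',', ': '))
print(formatted)  # keys emitted in sorted order: "a" before "b"
```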
tools/03_generate_README.py CHANGED
```diff
@@ -1,12 +1,4 @@
-#!/usr/bin/env -S uv run --script
-#
-# /// script
-# requires-python = ">=3.12"
-# dependencies = [
-#     "Jinja2",
-#     "load-dotenv",
-# ]
-# ///
+#!/usr/bin/env -S uv run
 """
 Generate README-{UD_VER} from templates/README.tmpl using metadata-{UD_VER}.json.
 """
```
tools/04_generate_parquet.py CHANGED
```diff
@@ -1,12 +1,4 @@
-#!/usr/bin/env -S uv run --script
-#
-# /// script
-# requires-python = ">=3.12"
-# dependencies = [
-#     "ud-hf-parquet-tools",
-#     "python-dotenv",
-# ]
-# ///
+#!/usr/bin/env -S uv run
 """
 Generate Parquet files from Universal Dependencies CoNLL-U data.
 
```
tools/05_validate_parquet.py CHANGED
```diff
@@ -1,12 +1,4 @@
-#!/usr/bin/env -S uv run --script
-#
-# /// script
-# requires-python = ">=3.12"
-# dependencies = [
-#     "ud-hf-parquet-tools",
-#     "python-dotenv",
-# ]
-# ///
+#!/usr/bin/env -S uv run
 """
 Validate Parquet files by comparing with original CoNLL-U data.
 
```
tools/README-2.17 CHANGED
```diff
@@ -309,14 +309,6 @@ configs:
   data_files:
   - split: test
     path: parquet/apu_ufpa/test.parquet
-- config_name: ar_nyuad
-  data_files:
-  - split: train
-    path: parquet/ar_nyuad/train.parquet
-  - split: dev
-    path: parquet/ar_nyuad/dev.parquet
-  - split: test
-    path: parquet/ar_nyuad/test.parquet
 - config_name: ar_padt
   data_files:
   - split: train
@@ -1164,14 +1156,6 @@ configs:
     path: parquet/it_vit/dev.parquet
   - split: test
     path: parquet/it_vit/test.parquet
-- config_name: ja_bccwj
-  data_files:
-  - split: train
-    path: parquet/ja_bccwj/train.parquet
-  - split: dev
-    path: parquet/ja_bccwj/dev.parquet
-  - split: test
-    path: parquet/ja_bccwj/test.parquet
 - config_name: ja_bccwjluw
   data_files:
   - split: train
```
tools/metadata-2.17.json CHANGED
The diff for this file is too large to render. See raw diff