rain1024 committed
Commit 15886ba · verified · 1 Parent(s): 1fb832e
Files changed (2)
  1. docs/UVW-2026.md +166 -0
  2. docs/underthesea_integration.md +83 -0
docs/UVW-2026.md ADDED
@@ -0,0 +1,166 @@
+ # UVW 2026: Underthesea Vietnamese Wikipedia Dataset
+
+ > Vietnamese Wikipedia corpus for NLP research and applications
+
+ ## Overview
+
+ UVW 2026 (Underthesea Vietnamese Wikipedia) is a cleaned and processed dataset of Vietnamese Wikipedia articles. It is designed to support a range of Vietnamese NLP tasks, including:
+
+ - Language modeling
+ - Text generation
+ - Pretraining for downstream tasks
+ - Text classification
+ - Information retrieval
+
+ ## Dataset Statistics
+
+ | Metric | Value |
+ |--------|-------|
+ | Source | Vietnamese Wikipedia |
+ | Language | Vietnamese (vi) |
+ | License | CC BY-SA 4.0 |
+ | Year | 2026 |
+
+ ## Data Format
+
+ ### JSONL Format
+
+ Each line contains a JSON object with the following fields:
+
+ ```json
+ {
+   "id": "Việt_Nam",
+   "title": "Việt Nam",
+   "content": "Việt Nam, tên chính thức là Cộng hòa Xã hội chủ nghĩa Việt Nam...",
+   "num_chars": 45000,
+   "num_sentences": 500
+ }
+ ```
+
+ ### Plaintext Format
+
+ Individual text files in the `plaintext/` directory, one file per article.
+
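+ For example, the plaintext files can be iterated with the standard library alone. A minimal sketch; the `data/processed/plaintext/` location and the `.txt` extension are assumptions based on the build layout, not guaranteed by this README:
+
+ ```python
+ from pathlib import Path
+
+ # Walk the one-file-per-article plaintext directory
+ # (path and extension are assumptions; adjust to the actual layout)
+ for path in sorted(Path("data/processed/plaintext").glob("*.txt")):
+     text = path.read_text(encoding="utf-8")
+     print(path.stem, len(text))
+ ```
+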
+ ## Data Splits
+
+ | Split | Ratio |
+ |-------|-------|
+ | train | 80% |
+ | dev | 10% |
+ | test | 10% |
+
+ (On the HuggingFace Hub, the `dev` split is exposed as `validation`; see the usage example below.)
+
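+ The splitting logic itself lives in `scripts/create_splits.py` (see the build pipeline below). One way to realize a stable 80/10/10 split is hash-based assignment, sketched here as an illustration rather than the project's actual method:
+
+ ```python
+ import hashlib
+
+ def assign_split(article_id: str) -> str:
+     """Deterministic 80/10/10 assignment, stable across runs and machines."""
+     bucket = int(hashlib.md5(article_id.encode("utf-8")).hexdigest(), 16) % 100
+     if bucket < 80:
+         return "train"
+     if bucket < 90:
+         return "dev"
+     return "test"
+
+ print(assign_split("Việt_Nam"))  # e.g. "train"
+ ```
+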
+ ## Usage
+
+ ### With Underthesea
+
+ ```python
+ from underthesea import DataFetcher
+
+ # Download and load dataset
+ corpus = DataFetcher.load_corpus("UVW_2026")
+ ```
+
+ ### With HuggingFace Datasets
+
+ ```python
+ from datasets import load_dataset
+
+ # Load from HuggingFace Hub
+ dataset = load_dataset("undertheseanlp/UVW-2026")
+
+ # Access splits
+ train_data = dataset["train"]
+ val_data = dataset["validation"]
+ test_data = dataset["test"]
+ ```
+
+ ### Direct Usage
+
+ ```python
+ import json
+
+ # Load JSONL, one article per line
+ with open("data/processed/uvw_2026.jsonl", "r", encoding="utf-8") as f:
+     for line in f:
+         article = json.loads(line)
+         print(article["title"])
+ ```
+
+ ## Building from Source
+
+ ### Prerequisites
+
+ ```bash
+ pip install requests tqdm datasets
+ ```
+
+ ### Build Pipeline
+
+ ```bash
+ # Run complete pipeline
+ python scripts/build_dataset.py
+
+ # Or run individual steps:
+ python scripts/download_wikipedia.py   # Download Wikipedia dump
+ python scripts/extract_articles.py     # Extract and clean articles
+ python scripts/create_splits.py        # Create train/dev/test splits
+ python scripts/prepare_huggingface.py  # Prepare for HuggingFace
+ python scripts/analyze_dataset.py      # Generate statistics
+ ```
+
+ ## Processing Details
+
+ ### Included Articles
+
+ - Main namespace articles
+ - Articles with at least 100 characters of content
+
+ ### Excluded Content
+
+ - Special pages (Wikipedia:, User:, Template:, Module:, etc.)
+ - Redirect pages
+ - Disambiguation pages
+ - Wikipedia markup (templates, references, categories, etc.)
+ - HTML tags
+ - Tables
+
+ ### Text Cleaning
+
+ The following transformations are applied in order (a code sketch follows the list):
+
+ 1. Remove Wikipedia templates (`{{...}}`)
+ 2. Remove categories and files
+ 3. Remove references and citations
+ 4. Remove HTML tags
+ 5. Convert internal links to plain text
+ 6. Remove bold/italic markup
+ 7. Unicode normalization (NFC)
+
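+ A minimal sketch of these steps using only the standard library; the actual implementation lives in `scripts/extract_articles.py` and handles nested templates and other edge cases that this sketch does not:
+
+ ```python
+ import re
+ import unicodedata
+
+ def clean_wikitext(text: str) -> str:
+     """Illustrative cleaning pass mirroring the steps above (non-nested cases only)."""
+     # 1. Remove templates {{...}}
+     text = re.sub(r"\{\{[^{}]*\}\}", "", text)
+     # 2. Remove category and file links (Vietnamese and English namespace names)
+     text = re.sub(r"\[\[(?:Thể loại|Category|Tập tin|File):[^\]]*\]\]", "", text)
+     # 3. Remove references and citations
+     text = re.sub(r"<ref[^>]*>.*?</ref>", "", text, flags=re.DOTALL)
+     text = re.sub(r"<ref[^>]*/>", "", text)
+     # 4. Remove remaining HTML tags
+     text = re.sub(r"<[^>]+>", "", text)
+     # 5. Convert internal links: [[target|label]] -> label, [[target]] -> target
+     text = re.sub(r"\[\[(?:[^|\]]*\|)?([^\]]+)\]\]", r"\1", text)
+     # 6. Remove bold/italic markup ('' and ''')
+     text = re.sub(r"'{2,}", "", text)
+     # 7. Unicode normalization (NFC)
+     return unicodedata.normalize("NFC", text)
+ ```
+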
+ ## Related Datasets
+
+ | Dataset | Description | Year |
+ |---------|-------------|------|
+ | UVB | Vietnamese Books | 2026 |
+ | UVN | Vietnamese News | 2026 |
+ | UTS_VLC | Vietnamese Legal Corpus | 2026 |
+ | UUD | Universal Dependencies | 2026 |
+
+ ## Citation
+
+ ```bibtex
+ @misc{uvw2026,
+   title={UVW 2026: Underthesea Vietnamese Wikipedia Dataset},
+   author={Underthesea NLP},
+   year={2026},
+   url={https://github.com/undertheseanlp/underthesea}
+ }
+ ```
+
+ ## License
+
+ This dataset is released under [CC BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/), following the Wikipedia license.
+
+ ## Links
+
+ - [GitHub Issue](https://github.com/undertheseanlp/underthesea/issues/896)
+ - [Underthesea](https://github.com/undertheseanlp/underthesea)
+ - [Vietnamese Wikipedia](https://vi.wikipedia.org)
+ - [HuggingFace Dataset](https://huggingface.co/datasets/undertheseanlp/UVW-2026)
docs/underthesea_integration.md ADDED
@@ -0,0 +1,83 @@
+ # Underthesea Integration
+
+ This document describes how to integrate UVW 2026 into the underthesea project.
+
+ ## 1. Add to datasets.yaml
+
+ Add the following entry to `underthesea/datasets.yaml`:
+
+ ```yaml
+ UVW_2026:
+   cache_dir: datasets/UVW_2026
+   type: Plaintext
+   license: Open
+   year: 2026
+   filepath: ''
+   url: https://github.com/undertheseanlp/underthesea/releases/download/resources/UVW_2026.zip
+   url_filename: UVW_2026.zip
+ ```
+
+ ## 2. Create Release Package
+
+ Package the processed data for release:
+
+ ```bash
+ cd data/processed
+ zip -r UVW_2026.zip plaintext/ uvw_2026.jsonl metadata.json
+ ```
+
+ Upload `UVW_2026.zip` to the GitHub release that backs the `resources` download URL above.
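+
+ Before uploading, it is worth sanity-checking the archive. A small sketch using only the standard library, with entry names taken from the `zip` command above:
+
+ ```python
+ import zipfile
+
+ # Verify the archive is intact and contains the expected entries
+ with zipfile.ZipFile("UVW_2026.zip") as zf:
+     assert zf.testzip() is None  # None means no corrupt members
+     names = zf.namelist()
+     assert "uvw_2026.jsonl" in names
+     assert "metadata.json" in names
+     assert any(n.startswith("plaintext/") for n in names)
+ ```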
+
+ ## 3. Usage with DataFetcher
+
+ ```python
+ from underthesea import DataFetcher
+
+ # List available datasets
+ DataFetcher.list()
+
+ # Download and load corpus
+ corpus = DataFetcher.load_corpus("UVW_2026")
+
+ # Access data
+ for doc in corpus:
+     print(doc)
+ ```
+
+ ## 4. HuggingFace Hub Upload
+
+ ```bash
+ # Login to HuggingFace
+ huggingface-cli login
+
+ # Create dataset repository
+ huggingface-cli repo create UVW-2026 --type dataset --organization undertheseanlp
+
+ # Upload parquet files
+ cd data/huggingface/uvw_2026_parquet
+ huggingface-cli upload undertheseanlp/UVW-2026 . --repo-type dataset
+
+ # Upload README
+ cd ..
+ huggingface-cli upload undertheseanlp/UVW-2026 README.md --repo-type dataset
+ ```
+
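+ The same steps can also be scripted with the `huggingface_hub` Python client instead of the CLI. A sketch; the repo id and paths mirror the commands above:
+
+ ```python
+ from huggingface_hub import HfApi
+
+ api = HfApi()
+
+ # Create the dataset repo if it does not exist yet
+ api.create_repo("undertheseanlp/UVW-2026", repo_type="dataset", exist_ok=True)
+
+ # Upload the parquet folder in one call
+ api.upload_folder(
+     repo_id="undertheseanlp/UVW-2026",
+     repo_type="dataset",
+     folder_path="data/huggingface/uvw_2026_parquet",
+ )
+ ```
+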
+ ## 5. Integration Tests
+
+ Add a test to `tests/test_corpus.py`:
+
+ ```python
+ def test_uvw_2026():
+     from underthesea import DataFetcher
+
+     corpus = DataFetcher.load_corpus("UVW_2026")
+     assert corpus is not None
+
+     # Check that at least one document was loaded
+     docs = list(corpus)
+     assert len(docs) > 0
+ ```
+
+ ## 6. Documentation
+
+ Add dataset documentation to `docs/datasets/UVW-2026.md` in the underthesea repository (see `docs/UVW-2026.md` in this project).