rain1024 committed
Commit cd1068e · verified · Parent: b0e730e

Update UVW 2026 dataset

Files changed (4)
  1. README.md +237 -86
  2. test.parquet +3 -0
  3. train.parquet +3 -0
  4. validation.parquet +3 -0
README.md CHANGED
@@ -5,40 +5,123 @@ license: cc-by-sa-4.0
task_categories:
- text-generation
- fill-mask
tags:
- wikipedia
- vietnamese
- nlp
- underthesea
size_categories:
- 1M<n<10M
  ---

# UVW 2026: Underthesea Vietnamese Wikipedia Dataset

## Dataset Description

- UVW 2026 (Underthesea Vietnamese Wikipedia) is a cleaned and processed dataset of Vietnamese Wikipedia articles, designed for Vietnamese NLP tasks such as language modeling, text generation, and pretraining.

### Dataset Summary

- - **Language:** Vietnamese
- - **Source:** Vietnamese Wikipedia (vi.wikipedia.org)
- - **License:** CC BY-SA 4.0
- - **Year:** 2026

- ### Statistics

- | Metric | Value |
- |--------|-------|
- | Total Articles | 1,118,224 |
- | Total Characters | 1,331,021,085 |
- | Total Sentences | 11,772,330 |
- | Avg. Characters/Article | 1,190 |

## Dataset Structure

- ### Data Instances

```json
{
@@ -47,115 +130,183 @@ UVW 2026 (Underthesea Vietnamese Wikipedia) is a cleaned and processed dataset o
  "content": "Việt Nam, tên chính thức là Cộng hòa Xã hội chủ nghĩa Việt Nam...",
  "num_chars": 45000,
  "num_sentences": 500,
-   "quality": 8
}
```

- ### Data Fields
-
- - `id` (string): Article identifier (title with underscores)
- - `title` (string): Article title
- - `content` (string): Cleaned article content
- - `num_chars` (int): Number of characters in content
- - `num_sentences` (int): Estimated number of sentences
- - `quality` (int): Quality score from 1-10 based on article metrics

- ### Quality Score

- Quality score (1-10) is computed based on Wikipedia quality research:

- - **Length score (40%)**: Article comprehensiveness based on character count
- - **Sentence score (30%)**: Content depth based on number of sentences
- - **Density score (30%)**: Readability based on average sentence length

- | Score | Count | Percentage |
- |-------|-------|------------|
- | 2 | 124,937 | 11.2% |
- | 3 | 496,169 | 44.4% |
- | 4 | 224,161 | 20.0% |
- | 5 | 111,906 | 10.0% |
- | 6 | 121,956 | 10.9% |
- | 7 | 25,178 | 2.3% |
- | 8 | 10,905 | 1.0% |
- | 9 | 2,958 | 0.3% |
- | 10 | 54 | 0.0% |
-
- References:
- - [Wikipedia Language-Agnostic Quality](https://meta.wikimedia.org/wiki/Research:Prioritization_of_Wikipedia_Articles/Language-Agnostic_Quality)
- - [Automatic Quality Assessment of Wikipedia Articles](https://dl.acm.org/doi/10.1145/3625286)

- ### Data Splits

- | Split | Articles |
- |-------|----------|
- | train | 1,118,224 |

- ## Usage

```python
- from datasets import load_dataset

- # Load from HuggingFace Hub
- dataset = load_dataset("undertheseanlp/UVW-2026")

- # Access data
- train_data = dataset["train"]

- # Example usage
- for example in train_data:
-     print(example["title"])
-     print(example["content"][:200])
-     break
- ```

- ## Dataset Creation

- ### Source Data

- The dataset is created from the Vietnamese Wikipedia dump, with the following processing steps:

- 1. Download Vietnamese Wikipedia XML dump
- 2. Parse XML and extract article content
- 3. Remove Wikipedia markup (templates, categories, references, etc.)
- 4. Unicode normalization (NFC)
- 5. Filter out:
-    - Special pages (Wikipedia:, User:, Template:, etc.)
-    - Redirect pages
-    - Disambiguation pages
-    - Articles with less than 100 characters

- ### Processing Pipeline

- ```bash
- # 1. Download Wikipedia dump
- python scripts/download_wikipedia.py

- # 2. Extract and clean articles
- python scripts/extract_articles.py

- # 3. Create train/dev/test splits
- python scripts/create_splits.py

- # 4. Prepare for HuggingFace
- python scripts/prepare_huggingface.py
```

## Citation

```bibtex
- @misc{uvw2026,
-   title={UVW 2026: Underthesea Vietnamese Wikipedia Dataset},
-   author={Underthesea NLP},
-   year={2026},
-   url={https://github.com/undertheseanlp/underthesea}
}
```

## Related Resources

- [Underthesea](https://github.com/undertheseanlp/underthesea) - Vietnamese NLP Toolkit
- [Vietnamese Wikipedia](https://vi.wikipedia.org)

## License

- This dataset is released under [CC BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/), following the Wikipedia license.
task_categories:
- text-generation
- fill-mask
+ - text-classification
+ - feature-extraction
+ - sentence-similarity
tags:
- wikipedia
- vietnamese
- nlp
- underthesea
+ - wikidata
+ - pretraining
+ - language-modeling
+ pretty_name: UVW 2026 - Vietnamese Wikipedia Dataset
size_categories:
- 1M<n<10M
+ source_datasets:
+ - original
+ dataset_info:
+   features:
+   - name: id
+     dtype: string
+   - name: title
+     dtype: string
+   - name: content
+     dtype: string
+   - name: num_chars
+     dtype: int32
+   - name: num_sentences
+     dtype: int32
+   - name: quality_score
+     dtype: int32
+   - name: wikidata_id
+     dtype: string
+   - name: main_category
+     dtype: string
+   splits:
+   - name: train
+     num_examples: 894579
+   - name: validation
+     num_examples: 111822
+   - name: test
+     num_examples: 111823
+ configs:
+ - config_name: default
+   data_files:
+   - split: train
+     path: train.parquet
+   - split: validation
+     path: validation.parquet
+   - split: test
+     path: test.parquet
---

# UVW 2026: Underthesea Vietnamese Wikipedia Dataset

+ <div align="center">
+
+ [![License: CC BY-SA 4.0](https://img.shields.io/badge/License-CC%20BY--SA%204.0-lightgrey.svg)](https://creativecommons.org/licenses/by-sa/4.0/)
+ [![Language: Vietnamese](https://img.shields.io/badge/Language-Vietnamese-blue.svg)](https://vi.wikipedia.org)
+ [![Wikidata Enriched](https://img.shields.io/badge/Wikidata-Enriched-green.svg)](https://www.wikidata.org)
+
+ </div>
+
## Dataset Description

+ **UVW 2026** (Underthesea Vietnamese Wikipedia) is a high-quality, cleaned dataset of Vietnamese Wikipedia articles enriched with Wikidata metadata. It is designed for Vietnamese NLP research, including language modeling, text generation, text classification, named entity recognition, and model pretraining.
+
+ ### Key Features
+
+ - **Clean text**: Wikipedia markup, templates, references, and formatting removed
+ - **Wikidata integration**: Articles linked to Wikidata entities with semantic categories
+ - **Quality scoring**: Each article scored 1-10 based on content quality metrics
+ - **Unicode normalized**: NFC normalization applied for consistent text processing
+ - **Ready to use**: Pre-split into train/validation/test sets

### Dataset Summary

+ | Property | Value |
+ |----------|-------|
+ | **Language** | Vietnamese (vi) |
+ | **Source** | Vietnamese Wikipedia + Wikidata |
+ | **License** | CC BY-SA 4.0 |
+ | **Generated** | 2026-01-31 |
+ | **Total Articles** | 1,118,224 |
+ | **Wikidata Coverage** | 99.4% |
+ | **Category Coverage** | 97.0% |
+ | **Unique Categories** | 11,549 |
+ | **Avg. Characters/Article** | 1,190 |
+ | **Avg. Sentences/Article** | 10 |
+
+ ## Quick Start

+ ```python
+ from datasets import load_dataset
+
+ # Load the dataset
+ dataset = load_dataset("undertheseanlp/UVW-2026")

+ # Access splits
+ train = dataset["train"]
+ validation = dataset["validation"]
+ test = dataset["test"]
+
+ # View an example
+ print(train[0])
+ ```

## Dataset Structure

+ ### Data Splits
+
+ | Split | Examples | Description |
+ |-------|----------|-------------|
+ | `train` | 894,579 | Training set (80%) |
+ | `validation` | 111,822 | Validation set (10%) |
+ | `test` | 111,823 | Test set (10%) |
+
+ ### Schema

```json
{
  "content": "Việt Nam, tên chính thức là Cộng hòa Xã hội chủ nghĩa Việt Nam...",
  "num_chars": 45000,
  "num_sentences": 500,
+   "quality_score": 9,
+   "wikidata_id": "Q881",
+   "main_category": "quốc gia có chủ quyền"
}
```

+ ### Field Descriptions

+ | Field | Type | Description |
+ |-------|------|-------------|
+ | `id` | string | Unique article identifier (URL-safe title) |
+ | `title` | string | Human-readable article title |
+ | `content` | string | Cleaned article text |
+ | `num_chars` | int32 | Character count of `content` |
+ | `num_sentences` | int32 | Estimated sentence count |
+ | `quality_score` | int32 | Quality score from 1 (lowest) to 10 (highest) |
+ | `wikidata_id` | string | Wikidata Q-identifier (e.g., "Q881" for Vietnam) |
+ | `main_category` | string | Primary category from Wikidata P31 (instance of) |
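
As a quick check that a local copy matches this table, the `features` attribute of a loaded split can be printed (a minimal sketch; the comment describes the expected shape rather than exact output):

```python
from datasets import load_dataset

dataset = load_dataset("undertheseanlp/UVW-2026")

# Should mirror the field table above: string features for id, title,
# content, wikidata_id, and main_category; int32 for the counts and score.
print(dataset["train"].features)
```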

+ ## Usage Examples

+ ### Filter High-Quality Articles

+ ```python
+ # Get articles with quality score >= 7
+ high_quality = dataset["train"].filter(lambda x: x["quality_score"] >= 7)
+ print(f"High-quality articles: {len(high_quality):,}")
+ ```

+ ### Filter by Category

+ ```python
+ # Get articles about people
+ people = dataset["train"].filter(lambda x: x["main_category"] == "người")
+ print(f"Articles about people: {len(people):,}")
+
+ # Get articles about locations
+ locations = dataset["train"].filter(
+     lambda x: "khu định cư" in (x["main_category"] or "")
+ )
+ ```

+ ### Filter by Wikidata

```python
+ # Get articles with Wikidata links
+ with_wikidata = dataset["train"].filter(lambda x: x["wikidata_id"] != "")

+ # Look up a specific entity
+ vietnam = dataset["train"].filter(lambda x: x["wikidata_id"] == "Q881")
+ ```

+ ### Use for Language Modeling

+ ```python
+ from transformers import AutoTokenizer

+ tokenizer = AutoTokenizer.from_pretrained("vinai/phobert-base")

+ def tokenize(examples):
+     return tokenizer(examples["content"], truncation=True, max_length=512)

+ tokenized = dataset["train"].map(tokenize, batched=True)
+ ```
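
Since `fill-mask` is among the declared task categories, the tokenized split above plugs directly into masked-language-model pretraining. A minimal sketch continuing from the previous block, using `transformers`' `DataCollatorForLanguageModeling` (the 15% masking rate and batch size are illustrative assumptions, not part of the dataset):

```python
from transformers import DataCollatorForLanguageModeling

# Dynamically mask 15% of tokens when batches are assembled.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True, mlm_probability=0.15)

# Keep only the model inputs, then build one illustrative batch.
lm_dataset = tokenized.remove_columns(
    [c for c in tokenized.column_names if c not in ("input_ids", "attention_mask")]
)
batch = collator([lm_dataset[i] for i in range(8)])
print(batch["input_ids"].shape, batch["labels"].shape)
```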

+ ## Quality Score

+ Articles are scored 1-10 based on multiple factors:

+ | Component | Weight | Criteria |
+ |-----------|--------|----------|
+ | **Length** | 40% | Character count (200-100,000 optimal) |
+ | **Sentences** | 30% | Sentence count (3-1,000 optimal) |
+ | **Density** | 30% | Average sentence length (80-150 chars optimal) |
+ | **Wikidata bonus** | +0.5 | Has `wikidata_id` |
+ | **Category bonus** | +0.5 | Has `main_category` |
+ | **Markup penalty** | -1 to -3 | Remaining Wikipedia markup |
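
The scoring code itself is not reproduced in this README. As a rough reconstruction of how the weights above could combine (a sketch only: the `in_range` helper, its ramp shape, and the final clamping are my assumptions, and `scripts/wikipedia_quality_score.py` may differ):

```python
def in_range(value: float, lo: float, hi: float) -> float:
    """1.0 inside the optimal range [lo, hi], proportionally less outside it."""
    if value <= 0:
        return 0.0
    if lo <= value <= hi:
        return 1.0
    return value / lo if value < lo else hi / value

def quality_score(article: dict, markup_penalty: float = 0.0) -> int:
    """Hypothetical 1-10 score combining the weighted components above."""
    chars = article["num_chars"]
    sents = max(article["num_sentences"], 1)
    base = 10 * (
        0.4 * in_range(chars, 200, 100_000)       # Length (40%)
        + 0.3 * in_range(sents, 3, 1_000)         # Sentences (30%)
        + 0.3 * in_range(chars / sents, 80, 150)  # Density (30%)
    )
    bonus = 0.5 * bool(article["wikidata_id"]) + 0.5 * bool(article["main_category"])
    return round(min(10.0, max(1.0, base + bonus - markup_penalty)))
```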
 
+ ### Quality Distribution

+ | Score | Count | Percentage |
+ |-------|------:|-----------:|
+ | 1 | 134 | 0.0% |
+ | 2 | 376 | 0.0% |
+ | 3 | 28,267 | 2.5% |
+ | 4 | 607,081 | 54.3% |
+ | 5 | 208,304 | 18.6% |
+ | 6 | 134,385 | 12.0% |
+ | 7 | 70,345 | 6.3% |
+ | 8 | 57,054 | 5.1% |
+ | 9 | 9,649 | 0.9% |
+ | 10 | 2,629 | 0.2% |
+
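This distribution can be re-derived from the released splits; a quick verification sketch, assuming `dataset` is loaded as in the Quick Start:

```python
from collections import Counter

# Tally quality scores across all three splits; should reproduce the table above.
counts = Counter()
for split in ("train", "validation", "test"):
    counts.update(dataset[split]["quality_score"])

total = sum(counts.values())
for score in sorted(counts):
    print(f"{score:>2}  {counts[score]:>9,}  {counts[score] / total:6.1%}")
```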
+ ## Top Categories
+
+ | Category (Vietnamese) | Count | Percentage |
+ |-----------------------|------:|-----------:|
+ | đơn vị phân loại | 618,281 | 55.3% |
+ | người | 78,191 | 7.0% |
+ | xã của Pháp | 35,635 | 3.2% |
+ | khu định cư | 20,276 | 1.8% |
+ | village of Turkey | 18,540 | 1.7% |
+ | tiểu hành tinh | 17,891 | 1.6% |
+ | mahalle | 16,419 | 1.5% |
+ | xã của Việt Nam | 7,088 | 0.6% |
+ | đô thị của Ý | 6,700 | 0.6% |
+ | trang định hướng Wikimedia | 6,202 | 0.6% |
+
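The same kind of tally recovers this table from the `main_category` column (again assuming `dataset` is already loaded):

```python
from collections import Counter

# Most frequent main_category values across all splits.
cats = Counter()
for split in ("train", "validation", "test"):
    cats.update(dataset[split]["main_category"])

for category, count in cats.most_common(10):
    print(f"{category}: {count:,}")
```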
+ ## Data Processing
+
+ ### Pipeline Steps
+
+ 1. **Download**: Fetch the Vietnamese Wikipedia XML dump from Wikimedia
+ 2. **Extract**: Parse the XML and extract article content
+ 3. **Clean**: Remove Wikipedia markup (templates, refs, links, tables, categories)
+ 4. **Normalize**: Apply Unicode NFC normalization
+ 5. **Score**: Calculate quality metrics for each article
+ 6. **Enrich**: Add Wikidata IDs and semantic categories via the Wikidata API
+ 7. **Filter**: Remove special pages, redirects, disambiguation pages, and short articles (<100 chars)
+ 8. **Split**: Create train/validation/test splits (80/10/10) with seed=42, as sketched below
+
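A minimal sketch of step 8 using `datasets.Dataset.train_test_split` (the `make_splits` helper is hypothetical; `scripts/create_splits.py` may be implemented differently):

```python
from datasets import Dataset, DatasetDict

def make_splits(articles: Dataset, seed: int = 42) -> DatasetDict:
    # Hold out 20% of articles, then halve the held-out portion.
    first = articles.train_test_split(test_size=0.2, seed=seed)
    held_out = first["test"].train_test_split(test_size=0.5, seed=seed)
    return DatasetDict(
        train=first["train"],          # 80%
        validation=held_out["train"],  # 10%
        test=held_out["test"],         # 10%
    )
```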
+ ### Removed Content
+
+ - Wikipedia templates (`{{...}}`)
+ - References and citations (`<ref>...</ref>`)
+ - HTML tags and comments
+ - Category links (`[[Thể loại:...]]`)
+ - File/image links (`[[Tập tin:...]]`, `[[File:...]]`)
+ - Interwiki links
+ - Tables (`{| ... |}`)
+ - Infoboxes and navigation templates
+
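For illustration, the flavor of this cleanup can be sketched with regular expressions (a simplification: nested templates and tables really call for a MediaWiki parser, and the project's actual cleaning code may differ):

```python
import re

def strip_wiki_markup(text: str) -> str:
    """Illustrative removal of the markup classes listed above."""
    # References, both <ref .../> and <ref>...</ref>
    text = re.sub(r"<ref[^>/]*/>|<ref[^>]*>.*?</ref>", "", text, flags=re.DOTALL)
    # HTML comments
    text = re.sub(r"<!--.*?-->", "", text, flags=re.DOTALL)
    # Tables: {| ... |}
    text = re.sub(r"\{\|.*?\|\}", "", text, flags=re.DOTALL)
    # Templates: repeatedly strip innermost {{...}} to handle nesting
    while re.search(r"\{\{[^{}]*\}\}", text):
        text = re.sub(r"\{\{[^{}]*\}\}", "", text)
    # Category and file/image links
    text = re.sub(r"\[\[(?:Thể loại|Tập tin|File)[^\]]*\]\]", "", text)
    # Internal links: keep only the display text of [[target|display]] / [[target]]
    text = re.sub(r"\[\[(?:[^|\]]*\|)?([^\]]*)\]\]", r"\1", text)
    # Collapse leftover blank runs
    return re.sub(r"\n{3,}", "\n\n", text).strip()
```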
+ ### Reproduction

+ ```bash
+ git clone https://github.com/undertheseanlp/UVW-2026
+ cd UVW-2026
+ uv sync --extra huggingface
+
+ # Run full pipeline
+ uv run python scripts/build_dataset.py
+
+ # Or run individual steps
+ uv run python scripts/download_wikipedia.py
+ uv run python scripts/extract_articles.py
+ uv run python scripts/wikipedia_quality_score.py
+ uv run python scripts/add_wikidata.py
+ uv run python scripts/create_splits.py
+ uv run python scripts/prepare_huggingface.py --push
```

## Citation

```bibtex
+ @dataset{uvw2026,
+   title     = {UVW 2026: Underthesea Vietnamese Wikipedia Dataset},
+   author    = {Underthesea NLP},
+   year      = {2026},
+   publisher = {Hugging Face},
+   url       = {https://huggingface.co/datasets/undertheseanlp/UVW-2026},
+   note      = {Vietnamese Wikipedia articles enriched with Wikidata metadata}
}
```

## Related Resources

- [Underthesea](https://github.com/undertheseanlp/underthesea) - Vietnamese NLP Toolkit
+ - [PhoBERT](https://github.com/VinAIResearch/PhoBERT) - Pre-trained language models for Vietnamese
- [Vietnamese Wikipedia](https://vi.wikipedia.org)
+ - [Wikidata](https://www.wikidata.org)

## License

+ This dataset is released under the [Creative Commons Attribution-ShareAlike 4.0 International License (CC BY-SA 4.0)](https://creativecommons.org/licenses/by-sa/4.0/), consistent with the Wikipedia content license.
+
+ ---
+
+ <div align="center">
+ Made with ❤️ by <a href="https://github.com/undertheseanlp">Underthesea NLP</a>
+ </div>
test.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:60fcbe70fd7c110fae09b34ee9306960f8db3738c5a7ce679753060bd7c4e323
+ size 79550587
train.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:524374d40fc7b25501c9d3c7420d9d6f41973d24476418523d3f0536a8c955f2
+ size 608316204
validation.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d3da59890851b2e68698b4fd8e67aef5439dbad8132e1a264d98eaab9936a83f
+ size 78047554