---
language:
- vi
license: cc-by-sa-4.0
multilinguality: monolingual
size_categories:
- 1M<n<10M
source_datasets:
- wikimedia/wikipedia
task_categories:
- text-generation
task_ids:
- language-modeling
tags:
- vietnamese
- benchmark
- quantization
- perplexity
- llm-evaluation
- wikitext-style
- nlp
pretty_name: ViWiki-Bench
configs:
- config_name: default
data_files:
- split: train
path: data/vi_wiki_train.txt
- split: validation
path: data/vi_wiki_valid.txt
- split: test
path: data/vi_wiki_test.txt
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 2079483
num_examples: 1
- name: validation
num_bytes: 211415
num_examples: 1
- name: test
num_bytes: 2081195
num_examples: 1
download_size: 4372093
dataset_size: 4372093
---
# ViWiki-Bench 🇻🇳
**Vietnamese benchmark dataset for LLM quantization perplexity evaluation.**
ViWiki-Bench is the Vietnamese equivalent of [WikiText-2](https://huggingface.co/datasets/Salesforce/wikitext),
designed specifically to evaluate quality degradation of quantized Large Language Models (LLMs) on Vietnamese text.
It follows the same **continuous-stream** methodology as WikiText-2, enabling drop-in replacement
in any existing evaluation pipeline.
---
## Dataset Summary
| Split | Characters | Words (~) | Paragraphs (~) |
|--------------|-------------|------------|----------------|
| `train` | 2,079,483 | 435,385 | 6,600 |
| `validation` | 211,415 | 43,996 | 670 |
| `test` | 2,081,195 | 435,672 | 6,605 |
| **Total** | **4,372,093** | **915,053** | **~13,875** |
**Reference — WikiText-2 English:**
| Split | Characters | Words |
|--------------|-------------|---------|
| `train` | 2,051,904 | 238,854 |
| `validation` | 217,646 | 25,877 |
| `test` | 2,088,628 | 245,569 |
> **Note:** Vietnamese word count is higher than English at equivalent character count because
> Vietnamese words average 1.7–2.2 characters vs. 4.5–5.0 for English.
---
## Motivation
Existing quantization benchmarks — WikiText-2, WikiText-103, C4 — are **English-only**.
When quantizing multilingual or Vietnamese-specific models (e.g., Vistral, PhoGPT, SeaLLM, Qwen-vi),
evaluating on English data does not reflect real-world Vietnamese performance for two reasons:
1. **Different token distribution.** Vietnamese tonal markers, compound vowels, and morphology cause
   BPE tokenizers to fragment Vietnamese text into **1.8–2.5× as many tokens** as the same tokenizer
   produces for comparable English text. This makes English perplexity scores incomparable to Vietnamese ones.
2. **Language-specific quantization effects.** Quantization quality varies significantly across languages
because activation and weight distributions differ per language in multilingual models. A method
that preserves English quality well may degrade Vietnamese significantly.
ViWiki-Bench provides a **Vietnamese-native ground truth** to measure this fairly.
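The fragmentation rate in point 1 is easy to measure yourself. A minimal sketch, where `tokenize` is any callable returning a token list (e.g. the `tokenize` method of a Hugging Face tokenizer — the function name `fertility` is our own label for the ratio):

```python
def fertility(tokenize, text: str) -> float:
    """Average number of tokens per whitespace-delimited word:
    a rough measure of how much a tokenizer fragments the text."""
    words = text.split()
    return len(tokenize(text)) / max(len(words), 1)

# With a real tokenizer you would compare, e.g.:
#   fertility(tok.tokenize, english_sample)  vs.  fertility(tok.tokenize, vietnamese_sample)
```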
---
## Source Data
**Primary source:** [`wikimedia/wikipedia`](https://huggingface.co/datasets/wikimedia/wikipedia),
config `20231101.vi` — the full Vietnamese Wikipedia dump from November 2023
(~1.34 million articles, ~1.5 GB).
**Fallback sources** (used automatically if primary fails):
- `uonlp/CulturaX` (vi)
- `allenai/c4` (vi)
### Why Wikipedia?
| Source | Size | Quality | Topic Diversity | Reproducible |
|-------------------------|--------|---------|-----------------|--------------|
| Wikipedia vi (20231101) | 1.3 GB | High | High | ✅ |
| CC-100 vi | 39 GB | Medium | High | Difficult |
| OSCAR vi | 8.3 GB | Medium | High | Difficult |
| MC4 vi | 1.1 GB | Medium | Medium | ✅ |
| VnExpress corpus | 0.5 GB | High | Low | ❌ |
Wikipedia provides community-reviewed text with neutral style, broad topic coverage,
and consistent Vietnamese orthography — ideal properties for a language model benchmark.
---
## Data Processing Pipeline
The raw Wikipedia text goes through a **5-step cleaning pipeline**, mirroring WikiText-103's methodology:
**Step 1 — Remove Wiki markup**
Strip templates `{{...}}`, tables `{|...|}`, reference tags `<ref>...</ref>`, and HTML tags.
**Step 2 — Resolve links**
Replace `[[link|text]]` with `text` to preserve sentence continuity.
**Step 3 — Unicode NFC normalization** *(critical for Vietnamese)*
Vietnamese characters can be encoded in two Unicode forms:
- Composed: `e` + combining hook + combining dot below
- Precomposed: single codepoint `ệ`
NFC normalization ensures consistency across articles from different contributors,
preventing tokenization artifacts.
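For example, the two encodings are distinct codepoint sequences but collapse to the same form after NFC:

```python
import unicodedata

composed = "e\u0323\u0302"  # 'e' + combining dot below + combining circumflex
precomposed = "\u1ec7"      # 'ệ' as a single codepoint

assert composed != precomposed                                 # different sequences...
assert unicodedata.normalize("NFC", composed) == precomposed   # ...identical after NFC
```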
**Step 4 — Remove section headers**
Lines of the form `=== Title ===` are removed (following WikiText convention),
keeping only prose content.
**Step 5 — Whitespace normalization**
Collapse multiple spaces, remove redundant blank lines.
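The five steps above can be condensed into one function. This is an illustrative sketch, not the release pipeline: the regexes are simplified (real MediaWiki templates can nest, which a single non-greedy pattern does not handle):

```python
import re
import unicodedata

def clean_wiki(text: str) -> str:
    # Step 1: strip wiki markup (simplified; nested templates not handled)
    text = re.sub(r"\{\{[^{}]*\}\}", "", text)                       # templates
    text = re.sub(r"\{\|.*?\|\}", "", text, flags=re.S)              # tables
    text = re.sub(r"<ref[^>]*>.*?</ref>", "", text, flags=re.S)      # references
    text = re.sub(r"<ref[^>]*/>", "", text)                          # self-closing refs
    text = re.sub(r"<[^>]+>", "", text)                              # other HTML tags
    # Step 2: resolve links: [[target|label]] -> label, [[target]] -> target
    text = re.sub(r"\[\[(?:[^\[\]|]*\|)?([^\[\]]*)\]\]", r"\1", text)
    # Step 3: Unicode NFC normalization
    text = unicodedata.normalize("NFC", text)
    # Step 4: drop section headers like == Title ==
    text = re.sub(r"^=+[^=]+=+\s*$", "", text, flags=re.M)
    # Step 5: whitespace normalization
    text = re.sub(r"[ \t]+", " ", text)
    text = re.sub(r"\n{3,}", "\n\n", text)
    return text.strip()
```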
### Paragraph Quality Filter
After cleaning, each paragraph passes a 3-condition quality filter:
```
keep(p) = True iff:
len(p) >= 150 chars
AND alpha_ratio(p) >= 0.55
AND contains at least one Vietnamese-specific vowel (ă, â, ê, ô, ơ, ư, ...)
```
The Vietnamese vowel check removes foreign-language text that appears in Vietnamese Wikipedia.
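The filter can be implemented roughly as follows. The Vietnamese-letter check here is an assumption based on the description above: it looks for the combining marks unique to Vietnamese vowels (breve, circumflex, horn) plus `đ`, so circumflexed letters from e.g. French would also pass:

```python
import unicodedata

# Combining marks characteristic of Vietnamese vowels: breve (ă), circumflex (â/ê/ô), horn (ơ/ư)
VI_MARKS = {"\u0306", "\u0302", "\u031b"}

def alpha_ratio(p: str) -> float:
    """Fraction of alphabetic characters in the paragraph."""
    return sum(c.isalpha() for c in p) / max(len(p), 1)

def has_vi_vowel(p: str) -> bool:
    """True if the paragraph contains a Vietnamese-specific letter (or đ)."""
    decomposed = unicodedata.normalize("NFD", p.lower())
    return "đ" in p.lower() or any(c in VI_MARKS for c in decomposed)

def keep(p: str) -> bool:
    p = unicodedata.normalize("NFC", p)
    return len(p) >= 150 and alpha_ratio(p) >= 0.55 and has_vi_vowel(p)
```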
### Continuous Stream Construction
Filtered paragraphs are shuffled with a fixed seed (`seed=42`) and concatenated
into a **single continuous text stream** separated by double newlines (`\n\n`),
exactly as WikiText-2 is constructed. This avoids "boundary bias" — the perplexity
inflation that occurs when evaluating isolated short sentences without context.
---
## Splits & Reproducibility
All splits are **non-overlapping** by construction:
```
paragraphs = shuffle(all_filtered_paragraphs, seed=42)
test = paragraphs[0 : n_test]
valid = paragraphs[n_test : n_test + n_valid]
train = paragraphs[n_test + n_valid : n_test + n_valid + n_train]
```
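In Python, the construction above can be sketched as follows; `random.Random(seed)` stands in for whatever RNG the actual pipeline uses, so the exact orderings here are illustrative:

```python
import random

def make_splits(paragraphs, n_test, n_valid, n_train, seed=42):
    """Shuffle once with a fixed seed, carve non-overlapping slices,
    then join each slice into a single continuous stream (WikiText style)."""
    rng = random.Random(seed)
    pool = list(paragraphs)
    rng.shuffle(pool)
    test = pool[:n_test]
    valid = pool[n_test:n_test + n_valid]
    train = pool[n_test + n_valid:n_test + n_valid + n_train]
    return {
        "train": "\n\n".join(train),
        "validation": "\n\n".join(valid),
        "test": "\n\n".join(test),
    }
```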
Full reproduction metadata is included in `metadata.json`:
```json
{
"seed": 42,
"source": "wikimedia/wikipedia",
"source_config": "20231101.vi",
"methodology": "continuous_stream_wikitext_style",
"splits": {
"train": {"num_paragraphs": 6600, "num_chars": 2079483, "num_words": 435385},
"validation": {"num_paragraphs": 670, "num_chars": 211415, "num_words": 43996},
"test": {"num_paragraphs": 6605, "num_chars": 2081195, "num_words": 435672}
}
}
```
---
## Usage
### Quick Start
```python
from datasets import load_dataset
dataset = load_dataset("your-org/viwiki-bench")
# Each split is a single continuous text stream
test_text = dataset["test"][0]["text"]
train_text = dataset["train"][0]["text"]
valid_text = dataset["validation"][0]["text"]
```
### Drop-in Replacement for WikiText-2
```python
# Instead of:
# texts = load_wikitext2_test()
# Use:
from datasets import load_dataset

def load_vi_wiki_test():
    ds = load_dataset("your-org/viwiki-bench", split="test")
    return [ds[0]["text"]]

texts = load_vi_wiki_test()
# `validator` stands in for your existing WikiText-2 evaluation harness
results = validator.evaluate_sliding_window(model, tokenizer, texts)
```
### Perplexity Evaluation (Sliding Window)
```python
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "your-quantized-model"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path, torch_dtype=torch.float16).cuda()
model.eval()

# Recommended evaluation parameters
STRIDE = 512
MAX_LENGTH = 2048

dataset = load_dataset("your-org/viwiki-bench", split="test")
text = dataset[0]["text"]
encodings = tokenizer(text, return_tensors="pt", add_special_tokens=False)
input_ids = encodings.input_ids

# Add BOS manually once (avoids the double-BOS bug on Llama-3-style tokenizers)
if tokenizer.bos_token_id is not None and input_ids[0, 0].item() != tokenizer.bos_token_id:
    bos = torch.tensor([[tokenizer.bos_token_id]])
    input_ids = torch.cat([bos, input_ids], dim=1)

nlls, total_tokens, prev_end_loc = [], 0, 0
for begin_loc in range(0, input_ids.size(1), STRIDE):
    end_loc = min(begin_loc + MAX_LENGTH, input_ids.size(1))
    trg_len = end_loc - prev_end_loc  # only tokens not scored in a previous window
    chunk = input_ids[:, begin_loc:end_loc].cuda()
    labels = chunk.clone()
    labels[:, :-trg_len] = -100  # mask overlapping context; loss only on new tokens
    with torch.no_grad():
        loss = model(chunk, labels=labels).loss
    nlls.append(loss * trg_len)
    total_tokens += trg_len
    prev_end_loc = end_loc
    if end_loc == input_ids.size(1):
        break

ppl = torch.exp(torch.stack(nlls).sum() / total_tokens)
print(f"Perplexity: {ppl.item():.4f}")
```
### Important: Interpreting Perplexity Values
Vietnamese PPL scores will be **higher** than English WikiText-2 scores for the same model.
This is **expected and normal** due to:
- Higher tokenizer fragmentation rate for Vietnamese (1.8–2.5× vs English)
- Lower Vietnamese data proportion in most LLM pretraining corpora (<2%)
**Always compare relatively** (quantized vs. baseline on the same dataset),
never compare absolute PPL across languages.
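A relative comparison can be as simple as the helper below (the function name is our own; any equivalent formula works):

```python
def relative_degradation(ppl_quantized: float, ppl_baseline: float) -> float:
    """Percentage increase in perplexity of a quantized model over its
    full-precision baseline, measured on the same dataset."""
    return 100.0 * (ppl_quantized / ppl_baseline - 1.0)

# e.g. compare relative_degradation(ppl_int4, ppl_fp16) across quantization methods,
# never ppl_int4 on Vietnamese against ppl_int4 on English.
```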
---
## Paragraph Statistics
| Split | Mean (chars) | Median | P25 | P75 | Max |
|--------------|-------------|--------|-----|------|-------|
| `train` | 315 | 248 | 167 | 412 | 4,820 |
| `validation` | 308 | 241 | 162 | 405 | 3,910 |
| `test` | 312 | 245 | 165 | 408 | 4,340 |
## Topic Distribution
Sampled from Wikipedia with broad topic coverage:
| Category | ~Share |
|-----------------------|--------|
| History & Geography | 28% |
| Science & Technology | 22% |
| Culture & Arts | 18% |
| Biography | 16% |
| Sports & Entertainment| 9% |
| Politics & Society | 7% |
---
## Limitations
- **Single source:** Only Wikipedia prose. Conversational, social media, or literary text
is not represented.
- **Snapshot:** Based on the November 2023 Wikipedia dump. Articles added or revised after
this date are not included.
- **No dialogue:** Evaluating chat/instruction-following capabilities requires a separate benchmark.
- **Formal register only:** Wikipedia's neutral, encyclopedic style may not reflect
colloquial Vietnamese used in chat applications.
---
## Related Work
| Benchmark | Language | Task | Metric |
|------------------|----------|-------------|-------------|
| WikiText-2 | English | LM eval | Perplexity |
| WikiText-103 | English | LM eval | Perplexity |
| C4 | English | LM eval | Perplexity |
| **ViWiki-Bench** | **Vietnamese** | **LM eval** | **Perplexity** |
| ViASR-Bench | Vietnamese | ASR eval | WER / CER |
---
## Citation
If you use ViWiki-Bench in your research, please cite:
```bibtex
@techreport{viwikibench2024,
title = {ViWiki-Bench: A Vietnamese Benchmark Dataset for
LLM Quantization Perplexity Evaluation},
author = {AnhND},
  year   = {2024},
note = {Technical Report v1.0},
url = {https://huggingface.co/datasets/anhnda/viwikibench}
}
```
---
## License
This dataset is released under **CC-BY-SA 4.0**, consistent with the license of
the source Wikipedia data (`wikimedia/wikipedia`).
The dataset generation code is released under **MIT License**.