---
language:
- vi
license: cc-by-sa-4.0
multilinguality: monolingual
size_categories:
- 1M<n<10M
---

# ViWiki-Bench

A Vietnamese benchmark dataset for LLM quantization perplexity evaluation.

> **Note:** Vietnamese word count is higher than English at equivalent character count, because Vietnamese words average 1.7–2.2 characters vs. 4.5–5.0 for English.

---

## Motivation

Existing quantization benchmarks (WikiText-2, WikiText-103, C4) are **English-only**. When quantizing multilingual or Vietnamese-specific models (e.g., Vistral, PhoGPT, SeaLLM, Qwen-vi), evaluating on English data does not reflect real-world Vietnamese performance, for two reasons:

1. **Different token distribution.** Vietnamese tonal markers, compound vowels, and morphology cause BPE tokenizers to fragment Vietnamese text at **1.8–2.5× the rate of English** on the same tokenizer. This makes English perplexity scores incomparable to Vietnamese ones.

2. **Language-specific quantization effects.** Quantization quality varies significantly across languages, because activation and weight distributions differ per language in multilingual models. A method that preserves English quality well may degrade Vietnamese significantly.

ViWiki-Bench provides a **Vietnamese-native ground truth** to measure this fairly.

---

## Source Data

**Primary source:** [`wikimedia/wikipedia`](https://huggingface.co/datasets/wikimedia/wikipedia), config `20231101.vi`: the full Vietnamese Wikipedia dump from November 2023 (~1.34 million articles, ~1.5 GB).

**Fallback sources** (used automatically if the primary fails):

- `uonlp/CulturaX` (vi)
- `allenai/c4` (vi)

### Why Wikipedia?
| Source                  | Size   | Quality | Topic Diversity | Reproducible |
|-------------------------|--------|---------|-----------------|--------------|
| Wikipedia vi (20231101) | 1.3 GB | High    | High            | ✅           |
| CC-100 vi               | 39 GB  | Medium  | High            | Difficult    |
| OSCAR vi                | 8.3 GB | Medium  | High            | Difficult    |
| MC4 vi                  | 1.1 GB | Medium  | Medium          | ✅           |
| VnExpress corpus        | 0.5 GB | High    | Low             | ❌           |

Wikipedia provides community-reviewed text with neutral style, broad topic coverage, and consistent Vietnamese orthography: ideal properties for a language model benchmark.

---

## Data Processing Pipeline

The raw Wikipedia text goes through a **5-step cleaning pipeline**, mirroring WikiText-103's methodology:

**Step 1: Remove wiki markup**
Strip templates `{{...}}`, tables `{|...|}`, reference tags (`<ref>...</ref>`), and HTML tags.

**Step 2: Resolve links**
Replace `[[link|text]]` with `text` to preserve sentence continuity.

**Step 3: Unicode NFC normalization** *(critical for Vietnamese)*
Vietnamese characters can be encoded in two Unicode forms:

- Decomposed: `e` + combining circumflex + combining dot below
- Precomposed: the single codepoint `ệ`

NFC normalization ensures consistency across articles from different contributors, preventing tokenization artifacts.

**Step 4: Remove section headers**
Lines of the form `=== Title ===` are removed (following WikiText convention), keeping only prose content.

**Step 5: Whitespace normalization**
Collapse multiple spaces and remove redundant blank lines.

### Paragraph Quality Filter

After cleaning, each paragraph passes a 3-condition quality filter:

```
keep(p) = True iff:
    len(p) >= 150 chars
    AND alpha_ratio(p) >= 0.55
    AND p contains at least one Vietnamese-specific vowel (ă, â, ê, ô, ơ, ư, ...)
```

The Vietnamese vowel check removes foreign-language text that appears in Vietnamese Wikipedia.
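The normalization and filter steps above can be sketched in a few lines of standard-library Python. This is an illustration only: `clean_paragraph`, `alpha_ratio`, and `keep` are hypothetical helper names, not the released generation code, and tonal variants (e.g. `ệ`) are reduced to the listed base vowels by stripping Vietnamese tone marks before matching.

```python
import re
import unicodedata

# Vietnamese-specific base vowels from the filter definition above.
VI_VOWELS = "ăâêôơư"
# Vietnamese tone marks: grave, acute, tilde, hook above, dot below.
TONE_MARKS = {ord(m): None for m in "\u0300\u0301\u0303\u0309\u0323"}

def clean_paragraph(p: str) -> str:
    """Steps 3 and 5: NFC-normalize, then collapse runs of spaces/tabs."""
    p = unicodedata.normalize("NFC", p)
    return re.sub(r"[ \t]+", " ", p).strip()

def alpha_ratio(p: str) -> float:
    """Fraction of characters in the paragraph that are alphabetic."""
    return sum(c.isalpha() for c in p) / max(len(p), 1)

def has_vietnamese_vowel(p: str) -> bool:
    """True if the paragraph contains a Vietnamese-specific vowel,
    counting tonal variants (e.g. 'ệ' matches 'ê')."""
    base = unicodedata.normalize("NFD", p.lower()).translate(TONE_MARKS)
    base = unicodedata.normalize("NFC", base)
    return any(v in base for v in VI_VOWELS)

def keep(p: str) -> bool:
    """The 3-condition quality filter."""
    return (
        len(p) >= 150
        and alpha_ratio(p) >= 0.55
        and has_vietnamese_vowel(p)
    )
```

Note that `has_vietnamese_vowel` strips only tone marks, not the breve/circumflex/horn that distinguish `ă`, `â`, `ê`, `ô`, `ơ`, `ư` from their base letters, so the check stays faithful to the filter's intent.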
### Continuous Stream Construction

Filtered paragraphs are shuffled with a fixed seed (`seed=42`) and concatenated into a **single continuous text stream** separated by double newlines (`\n\n`), exactly as WikiText-2 is constructed. This avoids "boundary bias": the perplexity inflation that occurs when evaluating isolated short sentences without context.

---

## Splits & Reproducibility

All splits are **non-overlapping** by construction:

```
paragraphs = shuffle(all_filtered_paragraphs, seed=42)
test  = paragraphs[0 : n_test]
valid = paragraphs[n_test : n_test + n_valid]
train = paragraphs[n_test + n_valid : n_test + n_valid + n_train]
```

Full reproduction metadata is included in `metadata.json`:

```json
{
  "seed": 42,
  "source": "wikimedia/wikipedia",
  "source_config": "20231101.vi",
  "methodology": "continuous_stream_wikitext_style",
  "splits": {
    "train": {"num_paragraphs": 6600, "num_chars": 2079483, "num_words": 435385},
    "validation": {"num_paragraphs": 670, "num_chars": 211415, "num_words": 43996},
    "test": {"num_paragraphs": 6605, "num_chars": 2081195, "num_words": 435672}
  }
}
```

---

## Usage

### Quick Start

```python
from datasets import load_dataset

dataset = load_dataset("your-org/viwiki-bench")

# Each split is a single continuous text stream
test_text  = dataset["test"][0]["text"]
train_text = dataset["train"][0]["text"]
valid_text = dataset["validation"][0]["text"]
```

### Drop-in Replacement for WikiText-2

```python
# Instead of:
#   texts = load_wikitext2_test()

# Use:
from datasets import load_dataset

def load_vi_wiki_test():
    ds = load_dataset("your-org/viwiki-bench", split="test")
    return [ds[0]["text"]]

texts = load_vi_wiki_test()

# `validator` is your existing evaluation harness
results = validator.evaluate_sliding_window(model, tokenizer, texts)
```

### Perplexity Evaluation (Sliding Window)

```python
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "your-quantized-model"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path, torch_dtype=torch.float16
).cuda().eval()

# Recommended evaluation parameters
STRIDE = 512
MAX_LENGTH = 2048

dataset = load_dataset("your-org/viwiki-bench", split="test")
text = dataset[0]["text"]

encodings = tokenizer(text, return_tensors="pt", add_special_tokens=False)
input_ids = encodings.input_ids

# Add BOS manually once (avoids the double-BOS bug on Llama-3)
if tokenizer.bos_token_id is not None:
    if input_ids[0, 0].item() != tokenizer.bos_token_id:
        bos = torch.tensor([[tokenizer.bos_token_id]])
        input_ids = torch.cat([bos, input_ids], dim=1)

nlls, total_tokens = [], 0
prev_end_loc = 0
for begin_loc in range(0, input_ids.size(1), STRIDE):
    end_loc = min(begin_loc + MAX_LENGTH, input_ids.size(1))
    trg_len = end_loc - prev_end_loc  # score only tokens not already scored
    chunk = input_ids[:, begin_loc:end_loc].cuda()
    labels = chunk.clone()
    labels[:, :-trg_len] = -100  # mask overlapping context; loss only on new tokens

    with torch.no_grad():
        loss = model(chunk, labels=labels).loss

    nlls.append(loss * trg_len)
    total_tokens += trg_len
    prev_end_loc = end_loc
    if end_loc == input_ids.size(1):
        break

ppl = torch.exp(torch.stack(nlls).sum() / total_tokens)
print(f"Perplexity: {ppl.item():.4f}")
```

### Important: Interpreting Perplexity Values

Vietnamese PPL scores will be **higher** than English WikiText-2 scores for the same model. This is **expected and normal**, due to:

- Higher tokenizer fragmentation rate for Vietnamese (1.8–2.5× vs. English)
- Lower Vietnamese data proportion in most LLM pretraining corpora (<2%)

**Always compare relatively** (quantized vs. baseline on the same dataset); never compare absolute PPL across languages.
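A relative comparison boils down to a PPL ratio, or equivalently a percentage degradation, computed on the same split with the same tokenizer. A minimal sketch; the function name and all numbers below are hypothetical, for illustration only, not measured results:

```python
def ppl_degradation_pct(ppl_quantized: float, ppl_baseline: float) -> float:
    """Relative perplexity degradation (%) of a quantized model vs. its
    full-precision baseline, evaluated on the same dataset/tokenizer."""
    return (ppl_quantized / ppl_baseline - 1.0) * 100.0

# Hypothetical numbers: absolute Vietnamese PPL is higher than English,
# but the *relative* degradation is what quantization quality measures.
en_degradation = ppl_degradation_pct(13.2, 12.0)  # English WikiText-2
vi_degradation = ppl_degradation_pct(28.1, 24.0)  # ViWiki-Bench
print(f"EN: +{en_degradation:.1f}%  VI: +{vi_degradation:.1f}%")
```

A larger Vietnamese degradation at comparable English degradation is exactly the language-specific quantization effect described in the Motivation section.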
---

## Paragraph Statistics

| Split        | Mean (chars) | Median | P25 | P75 | Max   |
|--------------|--------------|--------|-----|-----|-------|
| `train`      | 315          | 248    | 167 | 412 | 4,820 |
| `validation` | 308          | 241    | 162 | 405 | 3,910 |
| `test`       | 312          | 245    | 165 | 408 | 4,340 |

## Topic Distribution

Sampled from Wikipedia with broad topic coverage:

| Category               | ~Share |
|------------------------|--------|
| History & Geography    | 28%    |
| Science & Technology   | 22%    |
| Culture & Arts         | 18%    |
| Biography              | 16%    |
| Sports & Entertainment | 9%     |
| Politics & Society     | 7%     |

---

## Limitations

- **Single source:** Only Wikipedia prose. Conversational, social media, or literary text is not represented.
- **Snapshot:** Based on the November 2023 Wikipedia dump. Articles added or revised after this date are not included.
- **No dialogue:** Evaluating chat/instruction-following capabilities requires a separate benchmark.
- **Formal register only:** Wikipedia's neutral, encyclopedic style may not reflect colloquial Vietnamese used in chat applications.

---

## Related Work

| Benchmark        | Language       | Task        | Metric         |
|------------------|----------------|-------------|----------------|
| WikiText-2       | English        | LM eval     | Perplexity     |
| WikiText-103     | English        | LM eval     | Perplexity     |
| C4               | English        | LM eval     | Perplexity     |
| **ViWiki-Bench** | **Vietnamese** | **LM eval** | **Perplexity** |
| ViASR-Bench      | Vietnamese     | ASR eval    | WER / CER      |

---

## Citation

If you use ViWiki-Bench in your research, please cite:

```bibtex
@techreport{viwikibench2024,
  title  = {ViWiki-Bench: A Vietnamese Benchmark Dataset for LLM Quantization Perplexity Evaluation},
  author = {AnhND},
  year   = {2024},
  note   = {Technical Report v1.0},
  url    = {https://huggingface.co/datasets/anhnda/viwikibench}
}
```

---

## License

This dataset is released under **CC-BY-SA 4.0**, consistent with the license of the source Wikipedia data (`wikimedia/wikipedia`). The dataset generation code is released under the **MIT License**.