
ViWiki-Bench 🇻🇳

Vietnamese benchmark dataset for LLM quantization perplexity evaluation.

ViWiki-Bench is the Vietnamese equivalent of WikiText-2, designed specifically to evaluate quality degradation of quantized Large Language Models (LLMs) on Vietnamese text. It follows the same continuous-stream methodology as WikiText-2, enabling drop-in replacement in any existing evaluation pipeline.


Dataset Summary

Split        Characters   Words (~)   Paragraphs (~)
train         2,079,483     435,385            6,600
validation      211,415      43,996              670
test          2,081,195     435,672            6,605
Total         4,372,093     915,053          ~13,875

For reference, the English WikiText-2:

Split        Characters     Words
train         2,051,904   238,854
validation      217,646    25,877
test          2,088,628   245,569

Note: at an equivalent character count, the Vietnamese word count is higher than the English one because Vietnamese words average 1.7–2.2 characters versus 4.5–5.0 for English.


Motivation

Existing quantization benchmarks (WikiText-2, WikiText-103, C4) are English-only. When quantizing multilingual or Vietnamese-specific models (e.g., Vistral, PhoGPT, SeaLLM, Qwen-vi), evaluating on English data does not reflect real-world Vietnamese performance, for two reasons:

  1. Different token distribution. Vietnamese tonal markers, compound vowels, and morphology cause BPE tokenizers to fragment Vietnamese text at 1.8–2.5× the rate of English with the same tokenizer. This makes English perplexity scores incomparable to Vietnamese ones.

  2. Language-specific quantization effects. Quantization quality varies significantly across languages because activation statistics, and hence which weights matter most, differ per language in multilingual models. A method that preserves English quality well may degrade Vietnamese significantly.

ViWiki-Bench provides a Vietnamese-native ground truth to measure this fairly.


Source Data

Primary source: wikimedia/wikipedia, config 20231101.vi, the full Vietnamese Wikipedia dump from November 2023 (~1.34 million articles, ~1.5 GB).

Fallback sources (used automatically if primary fails):

  • uonlp/CulturaX (vi)
  • allenai/c4 (vi)

Why Wikipedia?

Source                    Size     Quality   Topic Diversity   Reproducible
Wikipedia vi (20231101)   1.3 GB   High      High              ✅
CC-100 vi                 39 GB    Medium    High              Difficult
OSCAR vi                  8.3 GB   Medium    High              Difficult
MC4 vi                    1.1 GB   Medium    Medium            ✅
VnExpress corpus          0.5 GB   High      Low               ❌

Wikipedia provides community-reviewed text with neutral style, broad topic coverage, and consistent Vietnamese orthography: ideal properties for a language model benchmark.


Data Processing Pipeline

The raw Wikipedia text goes through a 5-step cleaning pipeline, mirroring WikiText-103's methodology:

Step 1: Remove wiki markup. Strip templates {{...}}, tables {|...|}, reference tags <ref>...</ref>, and HTML tags.

Step 2: Resolve links. Replace [[link|text]] with text to preserve sentence continuity.
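
In code, these first two steps can be approximated with a few regular expressions. The sketch below is illustrative only: the pipeline's exact patterns are not published, and robust handling of nested wiki markup would need a real parser such as mwparserfromhell.

import re

def strip_wiki_markup(text: str) -> str:
    # Simplified approximation of Steps 1-2, not the pipeline's actual code
    text = re.sub(r"<ref[^>]*>.*?</ref>", "", text, flags=re.DOTALL)  # reference tags
    text = re.sub(r"\{\|.*?\|\}", "", text, flags=re.DOTALL)          # tables {|...|}
    text = re.sub(r"\{\{[^{}]*\}\}", "", text)                        # innermost templates {{...}}
    text = re.sub(r"\[\[(?:[^|\]]*\|)?([^\]]*)\]\]", r"\1", text)     # [[link|text]] -> text
    text = re.sub(r"<[^>]+>", "", text)                               # remaining HTML tags
    return text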

Step 3: Unicode NFC normalization (critical for Vietnamese). Vietnamese characters can be encoded in two Unicode forms:

  • Composed: e + combining circumflex + combining dot below (three codepoints)
  • Precomposed: the single codepoint ệ (U+1EC7)

NFC normalization ensures consistency across articles from different contributors, preventing tokenization artifacts.
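
A quick standard-library check of why this matters (the byte sequences below are illustrative):

import unicodedata

composed    = "e\u0323\u0302"   # e + combining dot below + combining circumflex
precomposed = "\u1ec7"          # single precomposed codepoint ệ

print(composed == precomposed)                                # False: same glyph, different bytes
print(unicodedata.normalize("NFC", composed) == precomposed)  # True: NFC unifies the encodings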

Step 4: Remove section headers. Lines of the form === Title === are removed (following WikiText convention), keeping only prose content.

Step 5: Whitespace normalization. Collapse multiple spaces and remove redundant blank lines.
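
A minimal sketch of Steps 4 and 5 (the exact patterns are assumptions, not the pipeline's code):

import re

def normalize_layout(text: str) -> str:
    text = re.sub(r"^=+[^=\n]+=+\s*$", "", text, flags=re.MULTILINE)  # drop === Title === lines
    text = re.sub(r"[ \t]+", " ", text)                               # collapse runs of spaces/tabs
    text = re.sub(r"\n{3,}", "\n\n", text)                            # squeeze repeated blank lines
    return text.strip()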

Paragraph Quality Filter

After cleaning, each paragraph passes a 3-condition quality filter:

keep(p) = True  iff:
  len(p) >= 150 chars
  AND  alpha_ratio(p) >= 0.55
  AND  contains at least one Vietnamese-specific vowel (ă, â, ê, ô, ơ, ư, ...)

The Vietnamese vowel check removes foreign-language text that appears in Vietnamese Wikipedia.
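
In Python, the filter might look like the sketch below; the alpha_ratio definition and the vowel set are assumptions, only the thresholds come from the spec above.

def alpha_ratio(p: str) -> float:
    # Assumed definition: fraction of characters that are letters
    return sum(c.isalpha() for c in p) / max(len(p), 1)

# Illustrative, non-exhaustive set of Vietnamese-specific vowels (with tone marks)
VI_VOWELS = set("ăâêôơưắằẳẵặấầẩẫậếềểễệốồổỗộớờởỡợứừửữự")

def keep(p: str) -> bool:
    return (
        len(p) >= 150
        and alpha_ratio(p) >= 0.55
        and any(c in VI_VOWELS for c in p.lower())
    )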

Continuous Stream Construction

Filtered paragraphs are shuffled with a fixed seed (seed=42) and concatenated into a single continuous text stream separated by double newlines (\n\n), exactly as WikiText-2 is constructed. This avoids "boundary bias": the perplexity inflation that occurs when evaluating isolated short sentences without context.


Splits & Reproducibility

All splits are non-overlapping by construction:

paragraphs = shuffle(all_filtered_paragraphs, seed=42)

test  = paragraphs[0          : n_test]
valid = paragraphs[n_test     : n_test + n_valid]
train = paragraphs[n_test + n_valid : n_test + n_valid + n_train]
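
Put together, the construction can be reproduced along these lines (random.Random and the variable names are assumptions; the card specifies only seed=42 and the split sizes):

import random

paragraphs = list(all_filtered_paragraphs)   # output of the quality filter above
random.Random(42).shuffle(paragraphs)

n_test, n_valid, n_train = 6605, 670, 6600   # sizes from metadata.json
test_stream  = "\n\n".join(paragraphs[:n_test])
valid_stream = "\n\n".join(paragraphs[n_test:n_test + n_valid])
train_stream = "\n\n".join(paragraphs[n_test + n_valid:n_test + n_valid + n_train])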

Full reproduction metadata is included in metadata.json:

{
  "seed": 42,
  "source": "wikimedia/wikipedia",
  "source_config": "20231101.vi",
  "methodology": "continuous_stream_wikitext_style",
  "splits": {
    "train":      {"num_paragraphs": 6600,  "num_chars": 2079483, "num_words": 435385},
    "validation": {"num_paragraphs": 670,   "num_chars": 211415,  "num_words": 43996},
    "test":       {"num_paragraphs": 6605,  "num_chars": 2081195, "num_words": 435672}
  }
}
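
This makes a simple sanity check possible after loading (assuming num_chars counts the characters of the concatenated stream):

import json
from datasets import load_dataset

meta = json.load(open("metadata.json", encoding="utf-8"))
ds   = load_dataset("your-org/viwiki-bench", split="test")
assert len(ds[0]["text"]) == meta["splits"]["test"]["num_chars"]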

Usage

Quick Start

from datasets import load_dataset

dataset = load_dataset("your-org/viwiki-bench")

# Each split is a single continuous text stream
test_text  = dataset["test"][0]["text"]
train_text = dataset["train"][0]["text"]
valid_text = dataset["validation"][0]["text"]

Drop-in Replacement for WikiText-2

# Instead of:
# texts = load_wikitext2_test()

# Use:
from datasets import load_dataset

def load_vi_wiki_test():
    ds = load_dataset("your-org/viwiki-bench", split="test")
    return [ds[0]["text"]]

texts = load_vi_wiki_test()
# 'validator' stands in for whatever evaluation harness your pipeline already uses
results = validator.evaluate_sliding_window(model, tokenizer, texts)

Perplexity Evaluation (Sliding Window)

import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "your-quantized-model"
tokenizer  = AutoTokenizer.from_pretrained(model_path)
model      = AutoModelForCausalLM.from_pretrained(model_path, torch_dtype=torch.float16).cuda()
model.eval()

# Recommended evaluation parameters
STRIDE     = 512
MAX_LENGTH = 2048

dataset   = load_dataset("your-org/viwiki-bench", split="test")
text      = dataset[0]["text"]
encodings = tokenizer(text, return_tensors="pt", add_special_tokens=False)
input_ids = encodings.input_ids

# Add BOS manually once (avoids Double-BOS bug on Llama-3)
if tokenizer.bos_token_id is not None:
    if input_ids[0, 0].item() != tokenizer.bos_token_id:
        bos = torch.tensor([[tokenizer.bos_token_id]])
        input_ids = torch.cat([bos, input_ids], dim=1)

nlls, total_tokens = [], 0
prev_end_loc = 0
for begin_loc in range(0, input_ids.size(1), STRIDE):
    end_loc = min(begin_loc + MAX_LENGTH, input_ids.size(1))
    trg_len = end_loc - prev_end_loc         # tokens not yet scored by a previous window
    chunk   = input_ids[:, begin_loc:end_loc].cuda()
    labels  = chunk.clone()
    labels[:, :-trg_len] = -100              # mask overlapping context, loss only on new tokens
    with torch.no_grad():
        loss = model(chunk, labels=labels).loss
    nlls.append(loss * trg_len)
    total_tokens += trg_len
    prev_end_loc = end_loc
    if end_loc == input_ids.size(1):
        break

ppl = torch.exp(torch.stack(nlls).sum() / total_tokens)
print(f"Perplexity: {ppl.item():.4f}")

Important: Interpreting Perplexity Values

Vietnamese PPL scores will be higher than English WikiText-2 scores for the same model. This is expected and normal due to:

  • Higher tokenizer fragmentation rate for Vietnamese (1.8–2.5× vs. English)
  • Lower Vietnamese data proportion in most LLM pretraining corpora (<2%)

Always compare relatively (quantized vs. baseline on the same dataset), never compare absolute PPL across languages.
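
For example, with purely illustrative numbers:

# Report degradation of the quantized model relative to its own FP16 baseline
ppl_fp16      = 8.41   # illustrative value
ppl_quantized = 8.93   # illustrative value
print(f"Relative PPL degradation: {(ppl_quantized / ppl_fp16 - 1) * 100:.1f}%")  # 6.2%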


Paragraph Statistics

Split        Mean (chars)   Median   P25   P75     Max
train                 315      248   167   412   4,820
validation            308      241   162   405   3,910
test                  312      245   165   408   4,340

Topic Distribution

Sampled from Wikipedia with broad topic coverage:

Category                 ~Share
History & Geography         28%
Science & Technology        22%
Culture & Arts              18%
Biography                   16%
Sports & Entertainment       9%
Politics & Society           7%

Limitations

  • Single source: Only Wikipedia prose. Conversational, social media, or literary text is not represented.
  • Snapshot: Based on the November 2023 Wikipedia dump. Articles added or revised after this date are not included.
  • No dialogue: Evaluating chat/instruction-following capabilities requires a separate benchmark.
  • Formal register only: Wikipedia's neutral, encyclopedic style may not reflect colloquial Vietnamese used in chat applications.

Related Work

Benchmark      Language     Task       Metric
WikiText-2     English      LM eval    Perplexity
WikiText-103   English      LM eval    Perplexity
C4             English      LM eval    Perplexity
ViWiki-Bench   Vietnamese   LM eval    Perplexity
ViASR-Bench    Vietnamese   ASR eval   WER / CER

Citation

If you use ViWiki-Bench in your research, please cite:

@techreport{viwikibench2026,
  title     = {ViWiki-Bench: A Vietnamese Benchmark Dataset for
               LLM Quantization Perplexity Evaluation},
  author    = {AnhND},
  year      = {2026},
  note      = {Technical Report v1.0},
  url       = {https://huggingface.co/datasets/anhnda/viwikibench}
}

License

This dataset is released under CC-BY-SA 4.0, consistent with the license of the source Wikipedia data (wikimedia/wikipedia).

The dataset generation code is released under the MIT License.
