---
license: cc-by-nc-sa-4.0
language:
- sr
pretty_name: Sr Tokenizer test
configs:
- config_name: default
  data_files:
  - split: train
    path:
    - '*_train.jsonl'
  - split: test
    path:
    - '*_test.jsonl'
---
# Sr Tokenizer test

This dataset provides a large Serbian text corpus designed for training and evaluating tokenizers for Serbian language models. It combines multiple sources of Serbian text in both Cyrillic and Latin scripts, unified into a consistent JSONL format with `id` and `text` fields.
## Dataset Structure

Metadata has been stripped; each record is a JSON object with two fields:
- `id`: unique identifier
- `text`: raw Serbian text
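For illustration, a record might look like the following. Only the two-field schema comes from this card; the `id` format and the sample sentence are hypothetical:

```python
import json

# Hypothetical record illustrating the two-field schema (`id`, `text`);
# the id format shown here is an assumption, not documented by the card.
record = {"id": "znanje-000001", "text": "Ово је пример српског текста."}

# Records are serialized one JSON object per line (JSONL);
# ensure_ascii=False keeps the Cyrillic text readable in the file.
line = json.dumps(record, ensure_ascii=False)
parsed = json.loads(line)
print(sorted(parsed.keys()))  # → ['id', 'text']
```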
## Source corpora

- Znanje (sr) corpus: ~6.6 GB, ~700 million words
- WikiViki (sr) corpus: ~1.5 GB, ~135 million words
- PDRS web corpus: ~3.2 GB, ~500 million words

Total: ~11.3 GB with ~1.3 billion words.
## Splits

- `*_train.jsonl` files (75%)
- `*_test.jsonl` files (25%)
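The card does not say how records were assigned to the 75/25 split; one common deterministic approach is to hash each record's `id`, sketched below. This is an illustrative assumption, not the method actually used for this dataset:

```python
import hashlib

def assign_split(doc_id: str, test_fraction: float = 0.25) -> str:
    """Deterministically map a record id to 'train' or 'test'.

    Illustrative sketch only; the actual split procedure is undocumented.
    """
    # Hash the id into one of 100 buckets; the first 25 go to the test split.
    bucket = int(hashlib.md5(doc_id.encode("utf-8")).hexdigest(), 16) % 100
    return "test" if bucket < int(test_fraction * 100) else "train"

print(assign_split("doc-42"))  # the same id always lands in the same split
```

Hashing by `id` keeps the assignment stable across re-runs, so a document never leaks between splits.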
## Usage

```python
from datasets import load_dataset

dataset = load_dataset("procesaur/sr-tokenizer-test", split="train")
```
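As a sketch of the intended use, the `text` field can feed a tokenizer trainer. The snippet below uses the Hugging Face `tokenizers` library with a tiny in-line stand-in for the corpus; the BPE model choice, vocabulary size, and special tokens are illustrative assumptions, not prescribed by the dataset:

```python
from tokenizers import Tokenizer, models, pre_tokenizers, trainers

# Stand-in for dataset["text"]; the real corpus mixes Cyrillic and Latin.
texts = ["Ово је пример српског текста.", "Ovo je primer srpskog teksta."]

tokenizer = Tokenizer(models.BPE(unk_token="[UNK]"))
tokenizer.pre_tokenizer = pre_tokenizers.Whitespace()

# vocab_size here is an illustrative choice for this toy corpus.
trainer = trainers.BpeTrainer(vocab_size=500, special_tokens=["[UNK]"])
tokenizer.train_from_iterator(texts, trainer=trainer)

print(tokenizer.encode("Ovo je primer.").tokens)
```

In practice you would pass an iterator over `dataset["text"]` to `train_from_iterator` instead of the in-line list.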
## Citation

Citation information will be added soon.