---
license: cc-by-nc-sa-4.0
language:
- sr
pretty_name: Sr Tokenizer test
configs:
- config_name: default
  data_files:
  - split: train
    path:
    - '*_train.jsonl'
  - split: test
    path:
    - '*_test.jsonl'
---

# Sr Tokenizer test

This dataset provides a large Serbian text corpus designed for training and evaluating tokenizers for Serbian language models. It combines multiple sources of Serbian text in both Cyrillic and Latin scripts, unified into a consistent JSONL format with `id` and `text` fields.

## Dataset Structure

Metadata has been stripped; each record is a JSON object with:

- `id`: unique identifier
- `text`: raw Serbian text

## Source corpora

- [Znanje(sr) corpus](https://huggingface.co/datasets/procesaur/znanje): ~6.6 GB, ~700 million words
- [WikiViki(sr) corpus](https://huggingface.co/datasets/procesaur/WikiViki): ~1.5 GB, ~135 million words
- [PDRS web corpus](http://hdl.handle.net/11356/1752): ~3.2 GB, ~500 million words

In total: ~11.3 GB with ~1.3 billion words.

#### Splits:

- train.jsonl (75%)
- test.jsonl (25%)

```python
from datasets import load_dataset

dataset = load_dataset("procesaur/sr-tokenizer-test", split="train")
```
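Since each record is a plain JSON object with `id` and `text` fields, individual lines of the JSONL files can be parsed with the standard library alone. A minimal sketch, using an illustrative sample line (the `id` value and text shown are made up, not taken from the corpus):

```python
import json

# An illustrative line in the dataset's JSONL format
# (hypothetical id and text, not actual corpus content).
line = '{"id": "znanje-000001", "text": "Пример српског текста."}'

record = json.loads(line)
print(record["id"])    # unique identifier
print(record["text"])  # raw Serbian text
```

Given the corpus size (~11.3 GB), passing `streaming=True` to `load_dataset` may be preferable when the full download is not needed.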
## Citation

```bibtex
soon
```