---
license: cc-by-nc-sa-4.0
language:
  - sr
pretty_name: Sr Tokenizer test
configs:
  - config_name: default
    data_files:
      - split: train
        path:
          - '*_train.jsonl'
      - split: test
        path:
          - '*_test.jsonl'
---

# Sr Tokenizer test

This dataset provides a large Serbian text corpus for training and evaluating tokenizers for Serbian language models. It combines multiple sources of Serbian text, in both the Cyrillic and Latin scripts, unified into a consistent JSONL format with `id` and `text` fields.
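Because the corpus mixes Cyrillic- and Latin-script text, it can be useful to check which script a given record uses. A minimal sketch (the `detect_script` helper is hypothetical, not part of the dataset); Serbian Cyrillic letters fall in the Unicode range U+0400–U+04FF:

```python
def detect_script(text: str) -> str:
    """Roughly classify a Serbian string as Cyrillic, Latin, or mixed."""
    # Count Cyrillic letters (U+0400-U+04FF covers Serbian Cyrillic).
    cyr = sum(1 for ch in text if "\u0400" <= ch <= "\u04ff")
    # Count basic Latin letters plus Serbian Latin diacritics.
    lat = sum(
        1 for ch in text
        if ("a" <= ch.lower() <= "z") or ch in "čćđšžČĆĐŠŽ"
    )
    if cyr > lat:
        return "cyrillic"
    if lat > cyr:
        return "latin"
    return "mixed"
```

Counting letters rather than checking only the first character makes the heuristic robust to stray punctuation or digits at the start of a record.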

## Dataset Structure

Metadata has been stripped; each record is a JSON object with:

- `id`: unique identifier
- `text`: raw Serbian text
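A record in this JSONL shape can be parsed line by line with the standard library alone. The sample lines below use made-up `id` and `text` values purely for illustration:

```python
import io
import json

# Hypothetical sample lines mimicking the dataset's JSONL shape
# (example id/text values, not taken from the actual corpus).
sample = io.StringIO(
    '{"id": "doc-000001", "text": "Добар дан, свете."}\n'
    '{"id": "doc-000002", "text": "Dobar dan, svete."}\n'
)

# Each line is an independent JSON object with "id" and "text" fields.
records = [json.loads(line) for line in sample]
for rec in records:
    print(rec["id"], rec["text"])
```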

## Source corpora

The combined corpus totals 11.3 GB and contains 1.3 billion words.

## Splits

- train: `*_train.jsonl` (75%)
- test: `*_test.jsonl` (25%)
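One common way to produce a reproducible 75/25 partition is to hash each record's `id` into a bucket. This is only a sketch of that general technique under assumed proportions; the dataset's actual split files were produced by its authors, and their method is not documented here:

```python
import hashlib

def assign_split(record_id: str, test_fraction: float = 0.25) -> str:
    """Deterministically map a record id to 'train' or 'test'."""
    # Hash the id so the assignment is stable across runs and machines.
    digest = hashlib.sha256(record_id.encode("utf-8")).digest()
    # Use the first byte as a uniform-ish value in [0, 1].
    bucket = digest[0] / 255.0
    return "test" if bucket < test_fraction else "train"
```

Because the assignment depends only on the id, re-running it never moves a record between splits, which avoids test-set leakage when the corpus is regenerated.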
## Usage

```python
from datasets import load_dataset

dataset = load_dataset("procesaur/sr-tokenizer-test", split="train")
```
## Editor

Mihailo Škorić

## Citation

Coming soon.