# SR/BS/HR Clean Text Corpus

High-quality, deduplicated text corpus for Serbian, Bosnian, and Croatian.
## Overview
This dataset provides a carefully curated and cleaned text corpus for South Slavic languages, specifically designed to address quality issues found in existing corpora like OSCAR and CC100. It serves as a foundation for training language models, tokenizers, and conducting linguistic research on Balkan languages.
## Why This Dataset?
Existing sr/bs/hr corpora often suffer from:
| Problem | Our Solution |
|---|---|
| HTML fragments & noise | Aggressive cleaning pipeline |
| Poor deduplication | SHA256 + MinHash (>95% dedup rate) |
| Mixed languages | Source-based labeling + FastText validation |
| Unclear sources | Full provenance tracking |
## Dataset Statistics
| Metric | Value |
|---|---|
| Total examples | 641,186 |
| Dataset size | 2.67 GB |
| Download size | 1.37 GB |
| Source | Wikipedia |
### Splits

| Split | Examples | Size |
|---|---|---|
| train | 512,948 | 2.14 GB |
| validation | 64,116 | 266 MB |
| test | 64,122 | 269 MB |
## Languages

| Language | ISO Code | Source |
|---|---|---|
| Serbian | sr | sr.wikipedia.org |
| Bosnian | bs | bs.wikipedia.org |
| Croatian | hr | hr.wikipedia.org |
## Dataset Structure

### Data Format

```json
{
  "id": "550e8400-e29b-41d4-a716-446655440000",
  "title": "Article Title",
  "text": "Full cleaned article text...",
  "language": "sr",
  "source": "sr.wikipedia.org",
  "domain": "wiki",
  "date": null,
  "url": "https://sr.wikipedia.org/wiki/..."
}
```
### Fields

| Field | Type | Description |
|---|---|---|
| id | string | Unique identifier |
| title | string | Article title |
| text | string | Cleaned textual content |
| language | string | Language code (sr/bs/hr) |
| source | string | Source domain |
| domain | string | Content type (wiki) |
| date | null | Publication date (not available for wiki) |
| url | string | Original Wikipedia URL |
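As a sketch of how a record can be checked against this schema, the snippet below validates field names and types from the table above. The `validate_record` helper is hypothetical, not part of the dataset or any library:

```python
# Minimal schema check for a corpus record (hypothetical helper).
EXPECTED_FIELDS = {
    "id": str, "title": str, "text": str, "language": str,
    "source": str, "domain": str, "url": str,
}
VALID_LANGUAGES = {"sr", "bs", "hr"}

def validate_record(record: dict) -> bool:
    """Return True if the record matches the documented schema."""
    for field, ftype in EXPECTED_FIELDS.items():
        if not isinstance(record.get(field), ftype):
            return False
    # "date" is present but null for Wikipedia-sourced records.
    if record.get("date") is not None and not isinstance(record["date"], str):
        return False
    return record["language"] in VALID_LANGUAGES

record = {
    "id": "550e8400-e29b-41d4-a716-446655440000",
    "title": "Article Title",
    "text": "Full cleaned article text...",
    "language": "sr",
    "source": "sr.wikipedia.org",
    "domain": "wiki",
    "date": None,
    "url": "https://sr.wikipedia.org/wiki/...",
}
print(validate_record(record))  # → True
```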
## Data Processing Pipeline

```
┌─────────────┐    ┌─────────────┐    ┌─────────────┐    ┌─────────────┐    ┌─────────────┐
│ Collection  │ -> │  Cleaning   │ -> │    Dedup    │ -> │   Lang ID   │ -> │  Filtering  │
│  Wikipedia  │    │  normalize  │    │   MinHash   │    │  FastText   │    │   quality   │
└─────────────┘    └─────────────┘    └─────────────┘    └─────────────┘    └─────────────┘
```
### Processing Steps
- Collection: Wikipedia dumps from bs/hr/sr Wikipedia
- Cleaning: Markup removal, Unicode normalization (NFC), whitespace normalization
- Deduplication: SHA256 exact matching + MinHash near-duplicate detection (90% threshold)
- Language ID: Source-based labeling with FastText validation
- Quality Filtering: Length constraints, language confidence >0.90
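The deduplication step can be sketched in pure Python. The shingle size and hash-family construction below are illustrative assumptions, not the dataset's actual pipeline; a production setup would typically use a dedicated MinHash-LSH library:

```python
import hashlib

def sha256_key(text: str) -> str:
    """Exact-duplicate key: hash of the whitespace-normalized text."""
    return hashlib.sha256(" ".join(text.split()).encode("utf-8")).hexdigest()

def shingles(text: str, n: int = 3) -> set:
    """Word n-gram shingles used for near-duplicate comparison."""
    words = text.split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def minhash_signature(text: str, num_perm: int = 64) -> list:
    """MinHash signature: for each seed, keep the minimum shingle hash."""
    return [
        min(
            int.from_bytes(hashlib.sha256(f"{seed}:{s}".encode()).digest()[:8], "big")
            for s in shingles(text)
        )
        for seed in range(num_perm)
    ]

def estimated_jaccard(sig_a: list, sig_b: list) -> float:
    """Fraction of matching signature slots approximates Jaccard similarity."""
    return sum(x == y for x, y in zip(sig_a, sig_b)) / len(sig_a)

a = "the quick brown fox jumps over the lazy dog near the river bank"
b = "the quick brown fox jumps over the lazy dog near the river edge"
sim = estimated_jaccard(minhash_signature(a), minhash_signature(b))
# With a 0.90 threshold, pairs with sim >= 0.90 would be collapsed.
print(round(sim, 2))
```

Exact duplicates are dropped first via the SHA256 key; the MinHash pass then catches near-duplicates that differ only slightly, without comparing full texts pairwise.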
## Usage

### Loading the Dataset

```python
from datasets import load_dataset

# Load full dataset
dataset = load_dataset("rsateam/sr-bs-hr-clean-text")

# Load specific split
train = load_dataset("rsateam/sr-bs-hr-clean-text", split="train")

# Filter by language
serbian = dataset["train"].filter(lambda x: x["language"] == "sr")
```
### Streaming

```python
from datasets import load_dataset

dataset = load_dataset(
    "rsateam/sr-bs-hr-clean-text",
    split="train",
    streaming=True,
)

for example in dataset:
    print(example["title"], "-", example["text"][:100])
```
### Training a Tokenizer

```python
from tokenizers import Tokenizer
from tokenizers.models import BPE
from tokenizers.trainers import BpeTrainer
from datasets import load_dataset

dataset = load_dataset("rsateam/sr-bs-hr-clean-text", split="train")

def batch_iterator(batch_size=1000):
    for i in range(0, len(dataset), batch_size):
        yield dataset[i:i + batch_size]["text"]

tokenizer = Tokenizer(BPE(unk_token="[UNK]"))
trainer = BpeTrainer(
    vocab_size=32000,
    special_tokens=["[UNK]", "[CLS]", "[SEP]", "[PAD]", "[MASK]"],
)
tokenizer.train_from_iterator(batch_iterator(), trainer=trainer)
```
## Supported Tasks
- Language Model Pretraining: Foundation for training or continued pretraining of LLMs
- Tokenizer Training: Clean text for BPE/WordPiece/Unigram tokenizer training
- Word Embeddings: Training Word2Vec, FastText, or similar embeddings
- Linguistic Research: Analysis of Serbian, Bosnian, and Croatian texts
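For the linguistic-research use case, a minimal frequency analysis over the `text` field might look like the following. The sample sentences are placeholders standing in for `dataset["train"]["text"]`, and whitespace tokenization is a deliberate simplification:

```python
from collections import Counter

# Placeholder documents standing in for dataset["train"]["text"].
texts = [
    "Beograd je glavni grad Srbije",
    "Sarajevo je glavni grad Bosne i Hercegovine",
    "Zagreb je glavni grad Hrvatske",
]

# Token frequencies across the corpus (naive whitespace tokenization).
counts = Counter(token.lower() for text in texts for token in text.split())
print(counts.most_common(3))  # → [('je', 3), ('glavni', 3), ('grad', 3)]
```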
## Considerations

### Ethical Considerations
- Data sourced from Wikipedia under CC-BY-SA license
- No personally identifiable information (PII)
- Encyclopedic content with neutral point of view
### Limitations
- Single source (Wikipedia) — encyclopedic style only
- Some topics may be underrepresented
- Article length varies significantly
## License
This dataset is released under CC-BY-SA-4.0, consistent with Wikipedia's licensing.
## Citation

```bibtex
@dataset{rsateam_clean_text_2026,
  title={SR/BS/HR Clean Text Corpus},
  author={RSA Team},
  year={2026},
  publisher={Hugging Face},
  url={https://huggingface.co/datasets/rsateam/sr-bs-hr-clean-text},
  note={High-quality deduplicated corpus for Serbian, Bosnian, and Croatian}
}
```
## Future Plans
We plan to expand this dataset with additional sources:
- News portals (klix.ba, index.hr, blic.rs, etc.)
- Government and public institution documents
- Other curated text sources
## Contributing
We welcome contributions! For suggestions, bug reports, or improvements:
- Open an issue on GitHub
- Email: office@rsateam.com
*RSA Team — Building bridges between languages and AI, one dataset at a time.*