# SR/BS/HR Language Identification Dataset

Fine-grained language identification dataset for Serbian, Bosnian, and Croatian.

## Overview
This dataset enables fine-grained language identification between Serbian, Bosnian, and Croatian — three closely related South Slavic languages that are often misclassified by general-purpose language identification tools.
Derived from rsateam/sr-bs-hr-clean-text, this dataset uses source-based labeling to provide ground-truth labels without circular dependencies on automatic language detection.
## The Challenge
Existing language identification tools often struggle with sr/bs/hr:
| Tool | Problem |
|---|---|
| FastText lid.176 | Often treats them as one language |
| langid.py | Low precision between sr/bs/hr |
| CLD3 | Inconsistent across text lengths |
## This Dataset Enables
- Training specialized sr/bs/hr classifiers
- Benchmarking and evaluating LID systems
- Preprocessing pipeline routing for multilingual NLP
- Research on closely related language varieties
## Dataset Statistics
| Metric | Value |
|---|---|
| Total examples | 60,000 |
| Dataset size | 27.4 MB |
| Download size | 16.5 MB |
| Labels | 3 (sr, bs, hr) |
### Splits

| Split | Examples | Size |
|---|---|---|
| train | 41,996 | 19.1 MB |
| validation | 8,997 | 4.1 MB |
| test | 9,007 | 4.1 MB |
### Label Distribution

Each split is balanced across languages (~33% each):

| Label | Language |
|---|---|
| sr | Serbian |
| bs | Bosnian |
| hr | Croatian |
### Length Distribution

| Bucket | Description |
|---|---|
| short | <100 characters |
| medium | 100-300 characters |
| long | >300 characters |
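The bucketing above reduces to a simple threshold function. A minimal sketch based on the documented thresholds (the exact boundary handling at 100 and 300 characters is an assumption; the card only specifies <100, 100-300, and >300):

```python
def length_bucket(n_chars: int) -> str:
    """Map a character count to the documented length bucket.

    Treating 100 and 300 as inclusive in "medium" is an assumption.
    """
    if n_chars < 100:
        return "short"
    if n_chars <= 300:
        return "medium"
    return "long"

print(length_bucket(156))  # medium
```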
## Dataset Structure

### Data Format

```json
{
  "id": "550e8400-e29b-41d4-a716-446655440000",
  "text": "Kratak pasus teksta za identifikaciju...",
  "label": "bs",
  "source": "bs.wikipedia.org",
  "length": 156,
  "length_bucket": "medium",
  "source_doc_id": "original-document-uuid"
}
```
### Fields

| Field | Type | Description |
|---|---|---|
| id | string | Unique sample identifier |
| text | string | Text sample for classification |
| label | string | Language label (sr/bs/hr) |
| source | string | Source domain |
| length | int | Length in characters |
| length_bucket | string | Length category (short/medium/long) |
| source_doc_id | string | Reference to source document in clean-text corpus |
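As a quick sanity check, a record can be validated against this schema. A minimal sketch using the field names and types from the table above (the helper itself is illustrative, not part of the dataset tooling):

```python
EXPECTED_SCHEMA = {
    "id": str,
    "text": str,
    "label": str,
    "source": str,
    "length": int,
    "length_bucket": str,
    "source_doc_id": str,
}

def validate_record(record: dict) -> bool:
    """Check that a record has exactly the documented fields and types."""
    if set(record) != set(EXPECTED_SCHEMA):
        return False
    types_ok = all(isinstance(record[k], t) for k, t in EXPECTED_SCHEMA.items())
    return (types_ok
            and record["label"] in {"sr", "bs", "hr"}
            and record["length_bucket"] in {"short", "medium", "long"})

example = {
    "id": "550e8400-e29b-41d4-a716-446655440000",
    "text": "Kratak pasus teksta za identifikaciju...",
    "label": "bs",
    "source": "bs.wikipedia.org",
    "length": 156,
    "length_bucket": "medium",
    "source_doc_id": "original-document-uuid",
}
print(validate_record(example))  # True
```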
## Labeling Methodology

### Source-Based Labeling
Labels are assigned based on the source of publication, not automatic detection:
| Source | Label |
|---|---|
| sr.wikipedia.org | sr |
| bs.wikipedia.org | bs |
| hr.wikipedia.org | hr |
### Why Source-Based?
This approach is academically accepted and avoids common pitfalls:
| Advantage | Explanation |
|---|---|
| Ground truth | Labels reflect author/publisher intent |
| No circular dependency | Not trained on LID output |
| Reproducible | Deterministic labeling process |
| Transparent | Source field enables verification |
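The labeling rule itself amounts to reading the language subdomain off the source field. A minimal sketch of the rule as documented in the table above (the helper name is illustrative, not part of the dataset tooling):

```python
def label_from_source(source: str) -> str:
    """Derive the language label from a Wikipedia source domain,
    e.g. "bs.wikipedia.org" -> "bs"."""
    subdomain = source.split(".")[0]
    if subdomain not in {"sr", "bs", "hr"}:
        raise ValueError(f"Unexpected source domain: {source}")
    return subdomain

print(label_from_source("bs.wikipedia.org"))  # bs
```

Because the mapping is deterministic, anyone can re-derive the label column from the source column and verify the labeling end to end.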
## Usage

### Loading the Dataset

```python
from datasets import load_dataset

# Load the full dataset (all splits)
dataset = load_dataset("rsateam/sr-bs-hr-language-id")

# Load specific splits
train = load_dataset("rsateam/sr-bs-hr-language-id", split="train")
test = load_dataset("rsateam/sr-bs-hr-language-id", split="test")
```
### Training a Classifier

```python
from datasets import load_dataset
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from transformers import TrainingArguments, Trainer

dataset = load_dataset("rsateam/sr-bs-hr-language-id")

# Create label mapping
label2id = {"sr": 0, "bs": 1, "hr": 2}
id2label = {v: k for k, v in label2id.items()}

# Load model and tokenizer
model_name = "xlm-roberta-base"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(
    model_name,
    num_labels=3,
    label2id=label2id,
    id2label=id2label,
)

# Tokenize and attach integer labels
def tokenize(examples):
    tokens = tokenizer(examples["text"], truncation=True, max_length=256)
    tokens["labels"] = [label2id[label] for label in examples["label"]]
    return tokens

tokenized = dataset.map(tokenize, batched=True)

# Train
trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="./sr-bs-hr-lid",
        num_train_epochs=3,
        per_device_train_batch_size=32,
        evaluation_strategy="epoch",  # renamed to `eval_strategy` in newer transformers releases
    ),
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["validation"],
)
trainer.train()
```
### Evaluating Existing LID Systems

```python
from datasets import load_dataset
import fasttext

# Load dataset and FastText model
dataset = load_dataset("rsateam/sr-bs-hr-language-id", split="test")
lid_model = fasttext.load_model("lid.176.bin")

# Evaluate: any prediction outside {sr, bs, hr} counts as an error
correct = 0
for example in dataset:
    pred = lid_model.predict(example["text"].replace("\n", " "))[0][0]
    pred_lang = pred.replace("__label__", "")
    if pred_lang == example["label"]:
        correct += 1

accuracy = correct / len(dataset)
print(f"FastText accuracy on sr/bs/hr: {accuracy:.2%}")
```
### Filtering by Length

```python
# Assumes the full dataset was loaded as `dataset` (see "Loading the Dataset")

# Get only short texts for a more challenging evaluation
short_texts = dataset["test"].filter(lambda x: x["length_bucket"] == "short")

# Get long texts for easier classification
long_texts = dataset["test"].filter(lambda x: x["length_bucket"] == "long")
```
## Relationship to Source Dataset

This dataset is derived from rsateam/sr-bs-hr-clean-text:

```
sr-bs-hr-clean-text (641K docs)
        │
        ▼
 [Sampling & Segmentation]
        │
        ▼
sr-bs-hr-language-id (60K samples)
```
The source_doc_id field links each sample back to its source document.
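Resolving that link is an id lookup over the clean-text corpus. A minimal sketch with in-memory toy records (a real lookup would load rsateam/sr-bs-hr-clean-text and index its id column; that field name is an assumption):

```python
# Toy stand-ins for the source corpus and the LID samples
clean_text_docs = [
    {"id": "doc-001", "text": "Puni tekst originalnog dokumenta ..."},
    {"id": "doc-002", "text": "Drugi dokument ..."},
]
lid_samples = [
    {"id": "s-1", "text": "Kratak pasus ...", "source_doc_id": "doc-001"},
]

# Build an index over the source corpus, then resolve each sample
doc_index = {doc["id"]: doc for doc in clean_text_docs}
for sample in lid_samples:
    source_doc = doc_index[sample["source_doc_id"]]
    print(sample["id"], "->", source_doc["id"])
```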
## Considerations

### Use Cases
- Primary: Training and evaluating language identification models
- Secondary: Preprocessing pipelines, dialect research, cross-lingual studies
### Limitations
- Source-based labels may not capture individual variation
- Wikipedia style may differ from conversational text
- Some lexical overlap between languages is inherent
### Ethical Considerations
- Derived from Wikipedia (CC-BY-SA)
- No personally identifiable information
- Language labels reflect publication source, not speaker identity
## License
CC-BY-SA-4.0 — consistent with source dataset.
## Citation

```bibtex
@dataset{rsateam_language_id_2026,
  title={SR/BS/HR Language Identification Dataset},
  author={RSA Team},
  year={2026},
  publisher={Hugging Face},
  url={https://huggingface.co/datasets/rsateam/sr-bs-hr-language-id},
  note={Fine-grained language identification for Serbian, Bosnian, and Croatian}
}
```
## Related Datasets
| Dataset | Description |
|---|---|
| rsateam/sr-bs-hr-clean-text | Source corpus (641K documents) |
## Contributing
For suggestions, bug reports, or improvements:
- Open an issue on GitHub
- Email: office@rsateam.com
RSA Team — Building bridges between languages and AI, one dataset at a time.