Dataset Card for Bashkir News Multilabel Classification Dataset
This dataset contains Bashkir news and analytical articles annotated with multiple thematic labels, enabling multi-label classification tasks. Each article can belong to several categories simultaneously.
Dataset Details
Dataset Description
The dataset consists of 22,318 Bashkir-language texts (articles and news pieces) collected from various online sources. Each article is annotated with 14 possible thematic labels (e.g., Society, Culture, Religion, Politics). An article can have between 1 and 14 labels, with an average of 3.6 labels per text.
- Curated by: Arabov Mullosharaf Kurbonovich, Khaybullina Svetlana Sergeevna (BashkirNLPWorld)
- Funded by: Not applicable
- Shared by: BashkirNLPWorld
- Language(s): Bashkir (Cyrillic script)
- License: MIT License
Dataset Sources
- Repository: BashkirNLPWorld/bashkir-news-multilabel
Uses
Direct Use
This dataset is suitable for:
- Multi-label text classification
- Training multi-label classifiers (e.g., transformers, SVM, ensemble methods)
- Multi-task learning
- Fine-tuning large language models for Bashkir text categorization
Out-of-Scope Use
- The dataset should not be used for single-label classification tasks (use the multiclass version instead).
- It is not intended for generating offensive content or for any unethical applications.
Loading the Dataset
Using Hugging Face Datasets Library
```python
from datasets import load_dataset

# Load the full dataset
dataset = load_dataset("BashkirNLPWorld/bashkir-news-multilabel")

# Access the training split
train_data = dataset["train"]

# View the first example
print(train_data[0])

# Label names (index order matches label_vector)
labels = ["Йәмғиәт", "Мәҙәниәт", "Дин", "Сәйәсәт", "Иҡтисад",
          "Мәғариф", "Сәләмәтлек", "Спорт", "Шоу-бизнес",
          "Татарстан", "Башҡортостан", "Донъя", "Әҙәбиәт", "Хикәйә"]
```
Convert to Pandas DataFrame
```python
# Convert to pandas for easy analysis
df = train_data.to_pandas()
print(df.head())
print(f"Average labels per article: {df['num_labels'].mean():.2f}")
```
Working with Labels
```python
# Get the label vector for the first article
labels_vector = train_data[0]["label_vector"]
print(f"Label vector: {labels_vector}")
print(f"Active labels: {[labels[i] for i, val in enumerate(labels_vector) if val == 1]}")

# Filter articles carrying a specific label (index 0 = Йәмғиәт)
society_articles = train_data.filter(lambda x: x["label_vector"][0] == 1)
print(f"Articles about society: {len(society_articles)}")
```
Streaming Mode (for large datasets)
```python
# Stream the dataset without downloading it all at once
dataset = load_dataset(
    "BashkirNLPWorld/bashkir-news-multilabel",
    split="train",
    streaming=True
)

# Iterate through examples
for example in dataset:
    print(f"Title: {example['title']}")
    print(f"Labels: {example['labels']}")
    break
```
Training a Multi-label Classifier with Transformers
```python
from datasets import load_dataset
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from transformers import Trainer, TrainingArguments

# Load tokenizer and model
tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-multilingual-cased",
    num_labels=14,  # 14 labels
    problem_type="multi_label_classification"
)

# Load dataset
dataset = load_dataset("BashkirNLPWorld/bashkir-news-multilabel")

# Tokenize function
def tokenize_function(examples):
    return tokenizer(
        examples["content"],
        padding="max_length",
        truncation=True,
        max_length=512
    )

# Apply tokenization
tokenized_dataset = dataset.map(tokenize_function, batched=True)

# Multi-label classification uses BCEWithLogitsLoss, which expects float targets
def convert_labels(examples):
    examples["labels"] = [[float(v) for v in vec] for vec in examples["label_vector"]]
    return examples

# remove_columns drops the original string "labels" column so the
# float vectors returned above can take its place
tokenized_dataset = tokenized_dataset.map(
    convert_labels, batched=True, remove_columns=["labels"]
)

# Create a train/test split
split_dataset = tokenized_dataset["train"].train_test_split(test_size=0.1, seed=42)

# Train the model (simplified example)
training_args = TrainingArguments(
    output_dir="./results",
    num_train_epochs=3,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    eval_strategy="epoch",  # named evaluation_strategy in transformers < 4.46
    save_strategy="epoch",
)
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=split_dataset["train"],
    eval_dataset=split_dataset["test"],
)
# trainer.train()
```
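The `Trainer` above runs without evaluation metrics. A minimal, standard-library-only sketch of micro-averaged F1 for multi-label outputs (sigmoid probabilities thresholded at 0.5) is shown below; the function name and toy inputs are illustrative, and you would wrap it in a `compute_metrics` callback to plug it into `Trainer`:

```python
import math

def multilabel_micro_f1(logits, labels, threshold=0.5):
    """Micro-averaged precision/recall/F1 over all (example, label) pairs."""
    tp = fp = fn = 0
    for row_logits, row_labels in zip(logits, labels):
        for z, y in zip(row_logits, row_labels):
            pred = 1 / (1 + math.exp(-z)) >= threshold
            if pred and y:
                tp += 1
            elif pred:
                fp += 1
            elif y:
                fn += 1
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return {"micro_precision": precision, "micro_recall": recall, "micro_f1": f1}

# Toy check: 2 examples x 3 labels
metrics = multilabel_micro_f1([[2.0, -2.0, 0.5], [-1.0, 3.0, -3.0]],
                              [[1, 0, 1], [0, 1, 1]])
print(metrics)
```

Micro-averaging pools all (example, label) decisions before computing F1, which is a common default for imbalanced multi-label data.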
Dataset Structure
Data Fields
- `content` (string): Full article text.
- `title` (string): Article title.
- `labels` (list of strings): Thematic labels for this article (e.g., `["Йәмғиәт", "Дин"]`).
- `label_vector` (list of ints): Binary vector of length 14 indicating which labels are present.
- `num_labels` (int64): Number of labels assigned to this article (1–14).
- `original_category` (string): Original normalized category from the source data.
- `content_length` (int64): Length of the text in characters.
- `resource` (string): Original URL or resource identifier (if available).
- `date` (string): Publication date (when available).
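Because `labels`, `label_vector`, and `num_labels` encode the same information redundantly, records can be sanity-checked after loading. A small sketch (the record below is illustrative, not drawn from the dataset):

```python
def check_record(record, n_classes=14):
    """Assert that the redundant label fields of one record agree."""
    vec = record["label_vector"]
    assert len(vec) == n_classes
    assert set(vec) <= {0, 1}
    assert sum(vec) == record["num_labels"] == len(record["labels"])
    assert 1 <= record["num_labels"] <= n_classes

record = {
    "labels": ["Йәмғиәт", "Дин"],
    "label_vector": [1, 0, 1] + [0] * 11,
    "num_labels": 2,
}
check_record(record)  # passes silently when the fields are consistent
```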
Label Definitions
The dataset uses 14 thematic labels:
| Index | Label (Bashkir) | English Translation |
|---|---|---|
| 0 | Йәмғиәт | Society |
| 1 | Мәҙәниәт | Culture |
| 2 | Дин | Religion |
| 3 | Сәйәсәт | Politics |
| 4 | Иҡтисад | Economy |
| 5 | Мәғариф | Education |
| 6 | Сәләмәтлек | Health |
| 7 | Спорт | Sports |
| 8 | Шоу-бизнес | Show Business |
| 9 | Татарстан | Tatarstan |
| 10 | Башҡортостан | Bashkortostan |
| 11 | Донъя | World |
| 12 | Әҙәбиәт | Literature |
| 13 | Хикәйә | Fiction/Stories |
Label Statistics
- Most frequent label: Йәмғиәт (Society) – 12,251 occurrences (55.1% of articles)
- Least frequent label: Хикәйә (Fiction) – 1,880 occurrences (8.4% of articles)
- Average labels per article: 3.6
- Most common label pairs:
- Дин + Йәмғиәт (22.6%)
- Донъя + Йәмғиәт (20.8%)
- Башҡортостан + Йәмғиәт (19.2%)
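Statistics like these can be recomputed from the `labels` field with the standard library alone. A sketch on toy label sets (not real dataset rows):

```python
from collections import Counter
from itertools import combinations

def label_stats(label_lists):
    """Per-label counts and label-pair co-occurrence counts."""
    singles, pairs = Counter(), Counter()
    for labels in label_lists:
        singles.update(labels)
        pairs.update(combinations(sorted(labels), 2))
    return singles, pairs

# Toy articles with illustrative label sets
articles = [["Дин", "Йәмғиәт"], ["Донъя", "Йәмғиәт"], ["Дин", "Йәмғиәт", "Мәҙәниәт"]]
singles, pairs = label_stats(articles)
print(singles.most_common(1))  # most frequent label
print(pairs.most_common(1))    # most common label pair
```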
Data Splits
The dataset contains a single split (train) with all 22,318 examples. Users are encouraged to create their own train/validation/test splits based on their specific needs.
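One simple, reproducible way to derive your own splits is to shuffle indices with a fixed seed; a standard-library sketch (the 80/10/10 fractions and seed are arbitrary choices, and the index lists can be passed to `Dataset.select`):

```python
import random

def three_way_split(n, val_frac=0.1, test_frac=0.1, seed=42):
    """Return disjoint index lists for train/validation/test splits."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)  # deterministic for a given seed
    n_test = int(n * test_frac)
    n_val = int(n * val_frac)
    return idx[n_test + n_val:], idx[n_test:n_test + n_val], idx[:n_test]

train_idx, val_idx, test_idx = three_way_split(22318)
print(len(train_idx), len(val_idx), len(test_idx))
```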
Dataset Creation
Annotation Method
Labels were generated using a keyword-based approach:
- A dictionary of Bashkir keywords was created for each label (e.g., "йәмғиәт", "общество" for Йәмғиәт).
- Each article's title and content were scanned for these keywords.
- If no keywords were found, the article's original category was used as a fallback (if it belonged to one of the 14 labels).
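The procedure above might be sketched as follows; the keyword dictionary here is a tiny illustrative stand-in, not the one actually used for annotation:

```python
def keyword_label(title, content, keyword_map, fallback_category=None):
    """Assign every label whose keywords occur in the title or body;
    fall back to the source category when nothing matches."""
    text = f"{title} {content}".lower()
    labels = [lab for lab, kws in keyword_map.items()
              if any(kw in text for kw in kws)]
    if not labels and fallback_category in keyword_map:
        labels = [fallback_category]
    return labels

# Illustrative two-label keyword dictionary
keyword_map = {
    "Йәмғиәт": ["йәмғиәт", "общество"],
    "Дин": ["дин", "мәсет"],
}
print(keyword_label("Йәмғиәт тормошо", "мәсет асылды", keyword_map))
```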
Curation Rationale
The goal was to create a multi-label dataset for Bashkir that reflects the natural overlap of topics in news articles. Unlike single-label datasets, this captures the complexity where an article about education might also touch on society or politics.
Source Data
Data Collection and Processing
Articles were collected from 14 Bashkir online sources (see the companion cluster dataset for the full list).
Processing steps:
- Extracted JSONL files from raw HTML.
- Removed texts shorter than 50 characters or longer than 10,000 characters.
- Removed exact duplicates.
- Applied keyword-based multi-label annotation.
- Normalized category names in the original_category field.
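The length-filtering and deduplication steps above can be sketched as a single pass (the function and its 50/10,000-character thresholds follow the description; the toy records are illustrative):

```python
def clean_corpus(records, min_len=50, max_len=10_000):
    """Drop too-short/too-long texts and exact duplicates."""
    seen, cleaned = set(), []
    for rec in records:
        text = rec["content"]
        if not (min_len <= len(text) <= max_len):
            continue  # length filter
        if text in seen:
            continue  # exact-duplicate filter
        seen.add(text)
        cleaned.append(rec)
    return cleaned

docs = [{"content": "short"},          # under 50 characters
        {"content": "x" * 60},
        {"content": "x" * 60},         # exact duplicate
        {"content": "y" * 20_000}]     # over 10,000 characters
print(len(clean_corpus(docs)))  # only one record survives
```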
Who are the source data producers?
The articles were originally written by journalists, authors, and contributors of the respective online publications. The BashkirNLPWorld team does not claim ownership of the content; it is used for non-commercial research purposes under fair use.
Annotations
- Method: Automated keyword-based labeling with fallback to original categories
- Validation: Human review of a random sample to ensure label quality
- Limitations: Some labels may be incomplete or have false positives due to keyword ambiguity
Personal and Sensitive Information
The texts are public news articles and do not contain personally identifiable information (PII) beyond what is already published. No additional personal data was collected.
Bias, Risks, and Limitations
- Label bias: The keyword-based approach may introduce bias towards certain terms or topics.
- Multi-label sparsity: Some labels (e.g., Хикәйә) are rare and may not provide enough examples for robust classification.
- Source bias: The dataset is dominated by certain sources (e.g., azatliqorg accounts for 28% of data).
- Genre bias: All texts are from news sources; may not represent other domains (e.g., social media, literature).
- Date incompleteness: Many articles lack publication dates.
Recommendations
- Users should be aware of label distribution and consider techniques for handling imbalanced multi-label data.
- For better label quality, consider filtering by confidence or using ensemble methods.
- For tasks requiring high precision, consider using only articles where labels came from explicit keyword matches.
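For the label imbalance noted above, one common technique is to up-weight positive examples of rare labels in the loss. The sketch below computes per-label weights as negatives/positives, using the counts from the label statistics section; the resulting list can be passed (as a tensor) to `torch.nn.BCEWithLogitsLoss(pos_weight=...)` in a custom `Trainer`:

```python
def pos_weights(pos_counts, n_examples):
    """Per-label weight = negatives / positives; rarer labels weigh more."""
    return [(n_examples - p) / p for p in pos_counts]

# Counts for the most and least frequent labels (Йәмғиәт, Хикәйә)
weights = pos_weights([12251, 1880], 22318)
print(weights)  # the rare label receives a much larger weight
```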
Citation
If you use this dataset in your research, please cite it as:
```bibtex
@dataset{arabov2025bashkirmultilabel,
  author    = {Arabov, Mullosharaf Kurbonovich and Khaybullina, Svetlana Sergeevna},
  title     = {Bashkir News Multilabel Classification Dataset},
  year      = {2025},
  publisher = {Hugging Face},
  url       = {https://huggingface.co/datasets/BashkirNLPWorld/bashkir-news-multilabel}
}
```
Dataset Card Authors
- Arabov Mullosharaf Kurbonovich
- Khaybullina Svetlana Sergeevna
- BashkirNLPWorld
Dataset Card Contact
- Email: cool.araby@gmail.com
- Hugging Face organization: BashkirNLPWorld