Dataset Card for Bashkir News Cluster Dataset
This dataset contains a collection of Bashkir news and analytical articles, curated for clustering, embedding, and unsupervised learning tasks. It is part of the BashkirNLP project.
Dataset Details
Dataset Description
The dataset consists of 24,428 Bashkir-language texts (articles and news pieces) collected from various online sources. Each text is accompanied by metadata such as title, source, date, and original category. This version is intended for clustering, representation learning, and embedding tasks, as it provides raw texts without predefined train/test splits.
- Curated by: Arabov Mullosharaf Kurbonovich, Khaybullina Svetlana Sergeevna (BashkirNLPWorld)
- Funded by: Not applicable
- Shared by: BashkirNLPWorld
- Language(s): Bashkir (Cyrillic script)
- License: MIT License
Dataset Sources
- Repository: BashkirNLPWorld/bashkir-news-cluster
- Related datasets:
Uses
Direct Use
This dataset is suitable for:
- Training word or document embeddings (Word2Vec, FastText, BERT-like models)
- Text clustering
- Unsupervised topic modelling
- Representation learning for downstream NLP tasks
Out-of-Scope Use
- The dataset should not be used for tasks requiring gold-standard labels (use the classification versions instead).
- It is not intended for generating offensive content or for any unethical applications.
Loading the Dataset
Using Hugging Face Datasets Library
from datasets import load_dataset
# Load the full dataset
dataset = load_dataset("BashkirNLPWorld/bashkir-news-cluster")
# Access the training split
train_data = dataset["train"]
# View first example
print(train_data[0])
# Convert to pandas DataFrame for easier analysis
df = train_data.to_pandas()
print(df.head())
Loading Specific Columns
# Load only the content, title, and category columns (faster)
dataset = load_dataset(
    "BashkirNLPWorld/bashkir-news-cluster",
    split="train",
    columns=["content", "title", "category"],
)
Streaming Mode (for large datasets)
# Stream the dataset without downloading it all at once
dataset = load_dataset(
    "BashkirNLPWorld/bashkir-news-cluster",
    split="train",
    streaming=True,
)

# Iterate through examples
for example in dataset:
    print(example["title"])
    break
Using with Transformers for Embeddings
from datasets import load_dataset
from transformers import AutoTokenizer, AutoModel
import torch

# Load a multilingual model
tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModel.from_pretrained("bert-base-multilingual-cased")

# Load dataset
dataset = load_dataset("BashkirNLPWorld/bashkir-news-cluster", split="train")

# Generate embeddings for the first 100 texts
texts = dataset["content"][:100]
inputs = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
embeddings = outputs.last_hidden_state.mean(dim=1)  # mean pooling over tokens
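The resulting document embeddings can be fed directly into a clustering algorithm. The following sketch uses scikit-learn's k-means on random stand-in vectors; in practice you would replace them with `embeddings.numpy()` from the step above, and the cluster count of 8 is an arbitrary assumption to tune.

```python
import numpy as np
from sklearn.cluster import KMeans

# Stand-in for the mean-pooled BERT embeddings (shape: n_texts x hidden_size);
# replace with `embeddings.numpy()` when running the full pipeline.
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(100, 768)).astype("float32")

# Cluster the documents; n_clusters=8 is a placeholder to tune
# (e.g., via silhouette score).
kmeans = KMeans(n_clusters=8, n_init=10, random_state=0)
labels = kmeans.fit_predict(embeddings)
print(labels.shape)  # one cluster id per text
```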
Dataset Structure
Data Fields
- content (string): Full article text.
- title (string): Article title.
- category (string): Original category after normalization (one of 280 categories).
- source (string): Source file name (e.g., azatliqorg, amanat_articles).
- content_length (int64): Length of the text in characters.
- resource (string): Original URL or resource identifier (if available).
- date (string): Publication date (when available).
Data Splits
The dataset contains a single split (train) with all 24,428 examples. No predefined train/validation/test split is provided – users are free to create their own.
Dataset Creation
Curation Rationale
The goal was to assemble a large, diverse corpus of modern Bashkir texts to support low-resource NLP research. The collected articles cover a wide range of topics (news, culture, education, religion, sports, etc.), making them useful for general-purpose language modelling and representation learning.
Source Data
Data Collection and Processing
Articles were collected from 14 Bashkir online sources:
- Agidel (журнал)
- Akbuzat
- Amanat
- Azatliq
- Bashkortostan gazete
- Bashkizi
- Henek
- Shonkar
- Tamasha
- Tanburz
- Uchbash
- Yanshishma (two variants)
- Ye102.ru
Processing steps:
- Extracted JSONL files from raw HTML.
- Removed texts shorter than 50 characters or longer than 10,000 characters.
- Removed exact duplicates.
- Normalized category names (e.g., яңалыклар, Яңылыҡтар таҫмаһы, Новости → Яңылыҡтар).
- Added metadata (source, content length, date where available).
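The filtering, deduplication, and normalization steps above can be sketched in plain Python. This is a hypothetical re-implementation for illustration, not the curators' actual pipeline; the field names and the category mapping example follow this card.

```python
# Toy records standing in for the extracted JSONL data
raw = [
    {"content": "x" * 40, "category": "яңалыклар"},           # too short, dropped
    {"content": "Оҙон текст " * 10, "category": "Новости"},
    {"content": "Оҙон текст " * 10, "category": "Новости"},   # exact duplicate, dropped
]

# Rule-based category dictionary (example mapping taken from this card)
CATEGORY_MAP = {
    "яңалыклар": "Яңылыҡтар",
    "Яңылыҡтар таҫмаһы": "Яңылыҡтар",
    "Новости": "Яңылыҡтар",
}

seen, cleaned = set(), []
for rec in raw:
    text = rec["content"]
    if not (50 <= len(text) <= 10_000):   # length filter
        continue
    if text in seen:                       # exact-duplicate removal
        continue
    seen.add(text)
    rec["category"] = CATEGORY_MAP.get(rec["category"], rec["category"])
    rec["content_length"] = len(text)      # added metadata
    cleaned.append(rec)

print(len(cleaned))  # 1
```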
Who are the source data producers?
The articles were originally written by journalists, authors, and contributors of the respective online publications. The BashkirNLP team does not claim ownership of the content; it is used for non‑commercial research purposes under fair use.
Annotations
No manual annotations were added beyond the existing categories. The category field was automatically normalised using a rule‑based dictionary.
Personal and Sensitive Information
The texts are public news articles and do not contain personally identifiable information (PII) beyond what is already published. No additional personal data was collected.
Bias, Risks, and Limitations
- The dataset is heavily skewed towards news and journalistic style; it may not represent colloquial or spoken Bashkir.
- Sources vary in quality and topic distribution; some sources (e.g., Azatliq) dominate the dataset.
- The date field is incomplete – many articles lack publication dates.
- Category normalization is rule-based and may contain errors.
Recommendations
Users should be aware of the genre bias and consider balancing the data if training on a sub‑domain. For tasks requiring high‑quality labels, the classification versions of the dataset are recommended.
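One simple way to reduce the source skew noted above is to downsample over-represented sources before training. A pandas sketch on a toy frame, assuming the `source` column from this card; the cap of 2 rows per source is a placeholder:

```python
import pandas as pd

# Toy frame standing in for df = train_data.to_pandas()
df = pd.DataFrame({
    "source": ["azatliqorg"] * 6 + ["amanat_articles"] * 2,
    "content": [f"text {i}" for i in range(8)],
})

# Downsample each source to at most n examples
n = 2
balanced = (
    df.groupby("source", group_keys=False)
      .apply(lambda g: g.sample(min(len(g), n), random_state=0))
)
print(balanced["source"].value_counts().to_dict())
```

The same pattern works for the `category` column when balancing by topic instead of by source.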
Citation
If you use this dataset in your research, please cite it as:
@dataset{arabov2026bashkircluster,
author = {Arabov, Mullosharaf Kurbonovich and Khaybullina, Svetlana Sergeevna},
title = {Bashkir News Cluster Dataset},
year = {2026},
publisher = {Hugging Face},
url = {https://huggingface.co/datasets/BashkirNLPWorld/bashkir-news-cluster}
}
Dataset Card Authors
- Arabov Mullosharaf Kurbonovich
- Khaybullina Svetlana Sergeevna
- BashkirNLPWorld
Dataset Card Contact
- Email: cool.araby@gmail.com
- Hugging Face organization: BashkirNLPWorld