
📚 Tatar Wiki Corpus (tatar-wiki-corpus)


A comprehensive, cleaned corpus of Tatar Wikipedia and Wikibooks containing over 467,000 articles. The dataset is well suited to language model training, text classification, information retrieval, and other NLP tasks for the Tatar language.


📊 Corpus Statistics

Overall Statistics

| Metric | Value |
|---|---|
| 📄 Documents | 467,578 |
| 📝 Characters | 336,591,416 |
| 🔤 Words | 47,742,463 |
| 📏 Sentences | 8,547,916 |
| 📚 Unique words | 830,915 |
| 🎯 Unique characters | 152 |
| 📏 Avg. document length | 720 characters |
| 📏 Avg. word length | 6.06 characters |
| 📏 Avg. sentence length | 5.59 words |
| 🧹 Size reduction after cleaning | 27.3% |
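
The derived averages follow directly from the raw counts; a quick sanity check of two of them (the average word length of 6.06 is presumably computed over letters only, so the naive characters-to-words ratio does not apply):

```python
# Sanity-check two derived averages from the raw corpus counts above.
documents = 467_578
characters = 336_591_416
words = 47_742_463
sentences = 8_547_916

avg_doc_len = characters / documents   # characters per document
avg_sent_len = words / sentences       # words per sentence

print(f"Avg. document length: {avg_doc_len:.0f} characters")  # 720
print(f"Avg. sentence length: {avg_sent_len:.2f} words")      # 5.59
```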

📁 Source Distribution

| Source | Documents | Percentage |
|---|---|---|
| Wikipedia (`wikipedia`) | 467,013 | 99.88% |
| Wikibooks (`wikibooks`) | 565 | 0.12% |

🔤 Most Frequent Words (Cleaned)

| Word | Frequency | Translation |
|---|---|---|
| торак | 879,185 | settlement / dwelling |
| урнашкан | 652,623 | located |
| һәм | 538,061 | and |
| буенча | 500,623 | according to / along |
| төркем | 430,100 | category / group |
| пунктларытөркем | 357,180 | category of settlements |
| әлифба | 333,379 | alphabet (by letter) |
| уртача | 269,028 | average |
| ред | 232,450 | editor |
| пунктлар | 197,878 | settlements |
| авыл | 176,660 | village |
| диңгез | 150,056 | sea |
| елга | 143,379 | river |
| башкаласы | 141,272 | capital city |
| энциклопедия | 121,355 | encyclopedia |
| халык | 113,915 | people / population |
| су | 105,129 | water |

🔤 Most Frequent Characters

| Character | Frequency |
|---|---|
| (space) | 47,119,926 |
| а | 27,887,729 |
| р | 18,317,042 |
| е | 17,511,687 |
| н | 17,048,091 |
| к | 14,982,961 |
| л | 13,713,831 |
| и | 12,868,963 |
| т | 12,689,023 |
| ы | 11,161,694 |

🏷️ Top Categories

| Category | Documents |
|---|---|
| Әлифба буенча торак пунктлар (Settlements by alphabet) | 324,459 |
| Мексика торак пунктлары (Mexican settlements) | 55,083 |
| Италия торак пунктлары (Italian settlements) | 45,150 |
| Польша торак пунктлары (Polish settlements) | 42,949 |
| Төркия торак пунктлары (Turkish settlements) | 39,614 |
| Франция коммуналары (French communes) | 35,805 |
| Беларусия авыллары (Belarusian villages) | 23,287 |
| Төркия мәхәлләләре (Turkish neighborhoods) | 21,597 |
| Веракрус торак пунктлары (Veracruz settlements) | 21,419 |
| АКШ шәһәрләре (US cities) | 19,351 |
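
Counts like these can be recomputed from the `categories` field of each record. A minimal sketch over plain dictionaries (the two `records` below are hypothetical stand-ins for corpus rows):

```python
from collections import Counter

# Hypothetical stand-ins for corpus records; each carries a `categories` list.
records = [
    {"categories": ["Әлифба буенча торак пунктлар", "Идел"]},
    {"categories": ["Әлифба буенча торак пунктлар"]},
]

# Tally every category across all records.
counts = Counter(cat for rec in records for cat in rec["categories"])
print(counts.most_common(1))  # [('Әлифба буенча торак пунктлар', 2)]
```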

📌 Record Structure

Each record contains the following fields (shown here as JSON):

```json
{
  "title": "Әгерҗе",
  "text": "Әгерҗе - РФнең субъекты булган Татарстан Республикасындагы шәһәр...",
  "categories": ["Татарстан район үзәкләре", "Әгерҗе районының торак пунктлары"],
  "alphabet": "cyrillic",
  "alphabet_info": {
    "alphabet": "cyrillic",
    "confidence": 0.9369,
    "cyr_ratio": 0.9369,
    "lat_ratio": 0.0631
  },
  "quality_info": {
    "word_count": 263,
    "sentence_count": 20,
    "readability_score": 93.7,
    "avg_sentence_length": 13.15
  },
  "text_length": 1969,
  "categories_count": 2,
  "url": "https://tt.wikipedia.org/wiki/Әгерҗе",
  "source": "wikipedia"
}
```

Field Descriptions

| Field | Type | Description |
|---|---|---|
| `title` | string | Article title |
| `text` | string | Cleaned text of the article (no HTML, no empty brackets) |
| `categories` | list of strings | Wikipedia categories associated with the article |
| `alphabet` | string | Detected alphabet (`cyrillic`) |
| `alphabet_info` | dict | Detailed information about alphabet detection |
| `quality_info` | dict | Text quality metrics (word count, readability, etc.) |
| `text_length` | int | Length of text in characters |
| `categories_count` | int | Number of categories |
| `url` | string | Original Wikipedia/Wikibooks URL |
| `source` | string | Source type: `wikipedia` or `wikibooks` |

Data Quality Features

  • Fully cleaned text - All HTML tags, URLs, emails, and special characters removed
  • Empty brackets removed - Patterns like (), (..), (1), (*) are cleaned
  • Language detection - Cyrillic ratio and confidence scores included
  • Quality metrics - Word count, sentence count, readability score
  • Original metadata - Categories, URLs, and source attribution preserved
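
The per-record `quality_info` and `alphabet_info` fields make custom filtering straightforward. A minimal sketch; the thresholds below are illustrative choices, not part of the dataset:

```python
# Illustrative record-level filter over the quality_info and alphabet_info
# fields described above. min_words and min_confidence are arbitrary examples.
def keep_record(record, min_words=100, min_confidence=0.9):
    quality = record["quality_info"]
    alphabet = record["alphabet_info"]
    return (
        quality["word_count"] >= min_words
        and alphabet["confidence"] >= min_confidence
    )

# Try it on the sample record from the card.
record = {
    "quality_info": {"word_count": 263, "sentence_count": 20},
    "alphabet_info": {"alphabet": "cyrillic", "confidence": 0.9369},
}
print(keep_record(record))  # True
```

With the `datasets` library, the same predicate can be passed to `dataset.filter(keep_record)`.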


🚀 Loading the Dataset

Using Hugging Face datasets (recommended)

```python
from datasets import load_dataset

# Load full corpus
dataset = load_dataset("TatarNLPWorld/tatar-wiki-corpus", split="full")
print(f"Total documents: {len(dataset)}")  # 467,578

# View first record
print(dataset[0])

# Access specific fields
titles = dataset["title"]
texts = dataset["text"]
categories = dataset["categories"]
sources = dataset["source"]
```

Available Splits

  • full — complete corpus (467,578 documents)
  • sample — small subset for testing (1,000 documents)

```python
# Load the sample split
sample = load_dataset("TatarNLPWorld/tatar-wiki-corpus", split="sample")

# Load the full split
full = load_dataset("TatarNLPWorld/tatar-wiki-corpus", split="full")
```

Quick Start Example

```python
# 1️⃣ Install if needed
!pip install -q datasets

# 2️⃣ Load dataset
from datasets import load_dataset

dataset = load_dataset("TatarNLPWorld/tatar-wiki-corpus", split="full")

# 3️⃣ Check basic info
print(f"Total documents: {len(dataset)}")
print(f"Available fields: {dataset.column_names}")

# 4️⃣ View sample
print("Sample record:")
print(dataset[0])

# 5️⃣ Access cleaned text
first_text = dataset[0]["text"]
print(f"First 500 chars:\n{first_text[:500]}")

# 6️⃣ Filter by source
wikipedia = dataset.filter(lambda x: x["source"] == "wikipedia")
wikibooks = dataset.filter(lambda x: x["source"] == "wikibooks")

print(f"Wikipedia articles: {len(wikipedia)}")
print(f"Wikibooks articles: {len(wikibooks)}")
```

🎯 Recommended Usage

| Task | What to Use |
|---|---|
| Language models (GPT, BERT, LLaMA) | `text` field (cleaned article text) |
| Text classification | `text` + `categories` |
| Information retrieval | `text` field + embeddings |
| Title generation | `text` → `title` |
| Source attribution | `source` field (`wikipedia`/`wikibooks`) |
| Word embeddings (word2vec, fastText) | `text` field (tokenized) |
| Readability analysis | `quality_info` metrics |
| Question answering | `text` field as context |
| Summarization | `text` → `title` or extractive summaries |

Examples for Different Tasks

```python
# For language model training
train_texts = dataset["text"]

# For multi-label classification with categories
texts = dataset["text"]
labels = dataset["categories"]  # Each document can have multiple categories

# For article retrieval
from sentence_transformers import SentenceTransformer
model = SentenceTransformer('intfloat/multilingual-e5-small')
embeddings = model.encode(dataset["text"][:1000])  # Encode first 1000 articles

# Filter by source
wikipedia_only = dataset.filter(lambda x: x["source"] == "wikipedia")

# Filter by text quality
high_quality = dataset.filter(lambda x: x["quality_info"]["word_count"] > 100)

# Get documents with specific categories
geography_articles = dataset.filter(
    lambda x: any("торак пунктлар" in cat for cat in x["categories"])
)
```

📝 Sample Records

Record 1 (Wikipedia - City):

  • Title: "Әгерҗе"
  • Source: wikipedia
  • Categories: ["Татарстан район үзәкләре", "Әгерҗе районының торак пунктлары"]
  • Text excerpt: "Әгерҗе - РФнең субъекты булган Татарстан Республикасындагы шәһәр, Әгерҗе районының үзәге. Мөһим тимер юл төене. Халык саны – 19 739 кеше (2016)..."

Record 2 (Wikipedia - River):

  • Title: "Идел"
  • Source: wikipedia
  • Categories: ["Россия елгалары", "Идел"]
  • Text excerpt: "Идел - Европаның иң озын елгасы. Озынлыгы 3530 км, бассейнының мәйданы 1360 мең км²..."

Record 3 (Wikibooks - Folk Tale):

  • Title: "Тапкыр солдат"
  • Source: wikibooks
  • Categories: ["Татар халык әкиятләре"]
  • Text excerpt: "Тапкыр солдат (татар халык әкияте) Унике ел патша армиясендә хезмәт иткән солдатның өенә кайтыр вакыты җиткән..."

⚖️ License

This dataset is distributed under the Creative Commons Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license, consistent with Wikipedia's licensing.

Terms of Use:

  • Attribution — You must give appropriate credit to TatarNLPWorld and the original Wikipedia contributors
  • ShareAlike — If you modify the material, distribute under the same license
  • Commercial use is allowed

Source Attribution:

All texts originate from Tatar Wikipedia (tt.wikipedia.org) and Tatar Wikibooks (tt.wikibooks.org). Original URLs are preserved in the url field.


🛡️ Privacy & Safety

The dataset has been automatically cleaned of:

  • HTML tags and markup
  • URLs and email addresses
  • Phone numbers
  • Copyright notices
  • Empty brackets and formatting artifacts
  • Non-Tatar characters (optional, configurable)

All texts are sourced from publicly available Wikimedia projects and have been processed for NLP use. No personal or sensitive information is intentionally included.


📊 Cleaning Process

The corpus underwent extensive cleaning using a custom Tatar text cleaner:

  1. HTML decoding and tag removal
  2. URL and email removal
  3. Copyright notice removal
  4. Empty bracket removal - (), (..), (...), (1), (*)
  5. Repeated punctuation normalization
  6. Special character cleaning
  7. Non-Tatar character removal (configurable)
  8. Space and line normalization
  9. Quality filtering (minimum length, Tatar ratio, etc.)

Result: 27.3% size reduction while preserving all meaningful content.
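
The cleaner itself is not published with this card, but steps 4, 5, and 8 (empty-bracket removal, repeated-punctuation normalization, and space normalization) can be sketched roughly like this; the regular expressions are illustrative approximations, not the actual implementation:

```python
import re

def clean_text(text):
    # Step 4: remove empty brackets such as (), (..), (...), (1), (*)
    text = re.sub(r"\(\s*(?:\.{1,3}|\d+|\*)?\s*\)", "", text)
    # Step 5: collapse runs of repeated punctuation (!!! -> !, ,,, -> ,)
    text = re.sub(r"([!?.,])\1+", r"\1", text)
    # Step 8: normalize spaces and trim
    text = re.sub(r"[ \t]+", " ", text).strip()
    return text

print(clean_text("Әгерҗе ()  шәһәр (..) зур (1)!!!"))  # "Әгерҗе шәһәр зур !"
```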


📖 Citation

```bibtex
@misc{tatarwikicorpus2026,
  author = {TatarNLPWorld},
  title = {Tatar Wiki Corpus: Cleaned Tatar Wikipedia and Wikibooks Dataset},
  year = {2026},
  month = {March},
  note = {Version with 467,578 cleaned documents},
  publisher = {Hugging Face},
  howpublished = {\url{https://huggingface.co/datasets/TatarNLPWorld/tatar-wiki-corpus}}
}
```

BibTeX for academic papers

```bibtex
@dataset{tatar_wiki_corpus_2026,
  title={Tatar Wiki Corpus: A Large-Scale Cleaned Dataset of Tatar Wikipedia and Wikibooks},
  author={{TatarNLPWorld}},
  year={2026},
  publisher={Hugging Face},
  url={https://huggingface.co/datasets/TatarNLPWorld/tatar-wiki-corpus}
}
```

📬 Contact & Community

Join the Tatar NLP Community

  • Share your models trained on this corpus
  • Report issues or suggest improvements
  • Contribute to Tatar language technology development

🙏 Acknowledgments

Special thanks to:

  • Wikimedia Foundation for Tatar Wikipedia and Wikibooks content
  • All volunteer contributors to Tatar Wikimedia projects
  • The global NLP community for tools and inspiration
  • Everyone working to preserve and promote the Tatar language

Data Sources

  • Wikipedia Tatar (tt.wikipedia.org) - Main source (467,013 articles)
  • Wikibooks Tatar (tt.wikibooks.org) - Supplementary source (565 books/folios)

📋 Changelog

Version 1.0 (2026-03-03)

  • Initial release
  • 467,578 cleaned documents
  • 877 MB total size
  • Wikipedia + Wikibooks sources
  • Full metadata preservation
  • 27.3% size reduction after cleaning

Made with ❤️ for the Tatar language and the global NLP community
Без татар теле өчен эшлибез ("We work for the Tatar language") 🇹🇹
