📚 Tatar Web Corpus (tatar-web-corpus)
The largest open corpus for the Tatar language, comprising over one million documents collected from news websites, social media, articles, books, and Wikipedia. It is designed for a range of NLP tasks, including language modeling, text classification, information extraction, and search.
📊 Corpus Statistics
Overall Statistics
| Metric | Value |
|---|---|
| 📄 Documents | 1,053,698 |
| 📝 Characters | 1,796,654,320 |
| 🔤 Words | 254,462,180 |
| 🎯 Tokens | 304,923,850 |
| 📚 Unique words (approx.) | 1,258,563 |
| 📏 Avg. document length | 1,705 characters |
| 📌 With title | 1,035,160 (98.2%) |
| 📌 With category | 1,032,024 (97.9%) |
| 📌 With source | 1,053,240 (100%) |
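The average document length in the table can be re-derived from the raw counts above as a quick sanity check:

```python
# Figures copied from the statistics table above.
num_documents = 1_053_698
num_characters = 1_796_654_320

avg_length = num_characters / num_documents
print(round(avg_length))  # 1705, matching the table
```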
📁 Source Distribution
| Source | Documents | Percentage |
|---|---|---|
| ttwiki_cyrillic | 853,196 | 80.97% |
| merged_matbugat_news | 56,558 | 5.37% |
| intertat_news | 55,008 | 5.22% |
| beznen_articles | 24,884 | 2.36% |
| merged_azatliqorg | 16,632 | 1.58% |
| vk_posts_filtered | 12,248 | 1.16% |
| syuyumbike_news | 8,192 | 0.78% |
| shahrikazan_news | 6,446 | 0.61% |
| tatar_inform | 5,144 | 0.49% |
| kazanutlary_articles | 4,038 | 0.38% |
| alluki_articles | 3,928 | 0.37% |
| merged_mamadysh_tt | 3,444 | 0.33% |
| webbooks | 1,384 | 0.13% |
| tuganaylar_articles | 680 | 0.06% |
| vatantat_news | 490 | 0.05% |
| telegram | 458 | 0.04% |
| nurlat_articles_tt_only_soft | 338 | 0.03% |
| kiziltan_articles | 280 | 0.03% |
| books | 194 | 0.02% |
| belgechbook | 92 | 0.01% |
| almet_articles | 64 | 0.01% |
Total sources: 21
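A distribution like the one above can be recomputed from the records with a simple tally. A minimal sketch, using a toy `records` list as a stand-in for the loaded dataset (on the real corpus, iterate over `dataset["source"]` instead):

```python
from collections import Counter

# Toy stand-in for the loaded dataset.
records = [
    {"source": "ttwiki_cyrillic"},
    {"source": "ttwiki_cyrillic"},
    {"source": "intertat_news"},
]

# Tally documents per source and report each share of the total.
counts = Counter(r["source"] for r in records)
total = sum(counts.values())
for source, n in counts.most_common():
    print(f"{source}: {n} ({n / total:.2%})")
```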
🔤 Most Frequent Words
| Word | Frequency |
|---|---|
| да | 209,407 |
| белән | 158,340 |
| ул | 156,740 |
| дә | 149,821 |
| һәм | 137,386 |
| дип | 120,781 |
| бу | 116,989 |
| иде | 107,990 |
| бер | 95,129 |
| мин | 76,308 |
| өчен | 72,522 |
| инде | 60,870 |
| бик | 58,640 |
| аның | 57,926 |
| бар | 51,666 |
| генә | 49,173 |
| гына | 47,670 |
| шул | 47,393 |
| юк | 43,893 |
| түгел | 43,491 |
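A frequency table like this can be built with `collections.Counter`. The corpus's exact tokenization rules are not documented here, so this sketch assumes a naive lowercase whitespace split:

```python
from collections import Counter

# Naive whitespace tokenization (an assumption; the corpus's real
# token counts may come from a different tokenizer).
text = "ул бу ул һәм бу ул"
freq = Counter(text.lower().split())
print(freq.most_common(3))  # [('ул', 3), ('бу', 2), ('һәм', 1)]
```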
📌 Record Structure
Each line in the JSONL file contains the following fields:
```json
{
  "title": "Document title",
  "content": "Main text...",
  "category": "News / Blog / Book / Wikipedia",
  "source": "Source URL or channel name",
  "text": "Document title\n\nMain text"
}
```
The `text` field is a convenience field for modeling.
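Because each line is a standalone JSON object, a JSONL file can be parsed one line at a time without loading the whole corpus into memory. A minimal sketch with a hand-written example line:

```python
import json

# One JSONL line, written by hand to mirror the record structure above.
line = (
    '{"title": "Казан", "content": "Main text...", '
    '"category": "Wikipedia", "source": "ttwiki_cyrillic", '
    '"text": "Казан\\n\\nMain text..."}'
)
record = json.loads(line)
print(record["title"])   # Казан
print(record["source"])  # ttwiki_cyrillic
```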
Field Descriptions
| Field | Description | Availability |
|---|---|---|
| `title` | Document title | 98.2% |
| `content` | Main text | 100% |
| `category` | Topic category | 97.9% |
| `source` | Source URL or channel name | 100% |
| `text` | Ready-to-use field for modeling (`title` + `\n\n` + `content`) | 100% |
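As the table notes, `text` is just `title`, a blank line, and `content` joined together, so the two halves can be recovered by splitting on the first blank line:

```python
# Reconstruct the convenience 'text' field and split it back apart
# (title + "\n\n" + content, as described in the table above).
title = "Казан"
content = "Main text..."
text = f"{title}\n\n{content}"

head, body = text.split("\n\n", 1)
print(head == title and body == content)  # True
```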
🔗 Source Attribution
- Wikipedia articles: tt.wikipedia.org
- Books: Telegram channels (t.me/tatarkit, t.me/tatarkitaphane, t.me/tatelkit, t.me/tatarkitap)
- News articles: original URLs when available
🚀 Loading the Dataset
Using Hugging Face datasets (recommended)
```python
from datasets import load_dataset

# Load the full corpus
dataset = load_dataset("TatarNLPWorld/tatar-web-corpus", split="full")
print(f"Total documents: {len(dataset)}")  # 1,053,698

# View the first record
print(dataset[0])

# Use the 'text' field for modeling
texts = dataset["text"]
```
Available Splits
- `full`: complete corpus (1,053,698 documents)
- `sample`: small subset for testing (1,000 documents)
```python
# Load the sample split
sample = load_dataset("TatarNLPWorld/tatar-web-corpus", split="sample")
```
Quick Start Example
```python
# 1️⃣ Install the library if needed
!pip install -q datasets

# 2️⃣ Import and load
from datasets import load_dataset
dataset = load_dataset("TatarNLPWorld/tatar-web-corpus", split="full")

# 3️⃣ Check basic info
print(f"Total documents: {len(dataset)}")
print("Sample record:")
print(dataset[0])

# 4️⃣ Access the 'text' field
texts = dataset["text"]
print(f"First 500 chars of first document:\n{texts[0][:500]}")
```
🎯 Recommended Usage
| Task | What to Use |
|---|---|
| Language Models (GPT, BERT) | text field (title + content) |
| Text Classification | text + category |
| Title Generation | content → title |
| Source Analysis | source field |
| Word Embeddings (Word2vec, fastText) | text field |
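Since roughly 1.8% of documents lack a title, a title-generation pipeline should drop those before building `content` → `title` pairs. A toy sketch (the `records` list is a stand-in for the loaded dataset; whether missing titles appear as empty strings or `None` is an assumption, so a truthiness check covers both):

```python
# Toy stand-in records; ~1.8% of real documents carry no title.
records = [
    {"title": "Казан", "content": "Main text..."},
    {"title": "", "content": "Untitled document text..."},
]

# Keep only documents that actually have a title.
pairs = [(r["content"], r["title"]) for r in records if r.get("title")]
print(len(pairs))  # 1
```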
Examples for Different Tasks
```python
# For GPT-style language modeling
train_texts = dataset["text"]

# For classification
train_texts = dataset["text"]
train_labels = dataset["category"]

# For summarization / title generation
articles = dataset["content"]
titles = dataset["title"]

# Filter by category or source
news_only = dataset.filter(lambda x: x["category"] == "яңалыклар")
wikipedia_only = dataset.filter(lambda x: x["source"] == "ttwiki_cyrillic")
```
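For classification, the string categories need integer labels. A minimal mapping sketch; the category names here are taken from this card's sample records:

```python
# Build an id mapping from category strings to integers.
categories = ["яңалыклар", "Wikipedia", "яңалыклар", "нәфис әдәбият"]

label2id = {c: i for i, c in enumerate(sorted(set(categories)))}
labels = [label2id[c] for c in categories]
print(labels)  # [2, 0, 2, 1]
```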
📝 Sample Records
Record 1 (News):
- Title: "Татарстан Республикасы Президенты Рөстәм Миңнехановның мөрәҗәгате"
- Category: яңалыклар (news)
- Source: Intertat
- Content: «Хөрмәтле татарстанлылар! Кадерле дуслар!..»

Record 2 (Wikipedia):
- Title: "Казан"
- Category: Wikipedia
- Source: ttwiki_cyrillic
- Content: «Казан - Татарстанның башкаласы, Идел буендагы борынгы шәһәрләрнең берсе...»

Record 3 (Fiction):
- Title: "Кышкы урманда"
- Category: нәфис әдәбият (fiction)
- Source: t.me/tatarkit
- Content: «Урман тынлыгында кар ява иде...»
⚖️ License
This dataset is distributed under the Creative Commons Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license.
Terms of Use:
- Attribution — You must give appropriate credit to TatarNLPWorld
- ShareAlike — If you modify the material, distribute under the same license
- Commercial use is allowed
Source Attribution:
Texts are collected from publicly available sources. Each document links to its original source in the source field.
🛡️ Privacy & Safety
The dataset has been cleaned of personal information (PII). However, if you find any sensitive data, please report it.
📖 Citation
```bibtex
@misc{tatarwebcorpus2026,
  author       = {TatarNLPWorld},
  title        = {Tatar Web Corpus: A Large-Scale Dataset for Tatar Language NLP},
  year         = {2026},
  month        = {February},
  note         = {Version with 1,053,698 documents},
  publisher    = {Hugging Face},
  howpublished = {\url{https://huggingface.co/datasets/TatarNLPWorld/tatar-web-corpus}}
}
```
📬 Contact
- Organization: TatarNLPWorld on Hugging Face
- Questions: Open an issue
🙏 Acknowledgments
Special thanks to all data sources:
- Wikipedia (tt.wikipedia.org)
- News portals: Intertat, Tatar-inform, Shähri Qazan, Vatanym Tatarstan
- Magazines: Söyembikä, Qazan utları
- Social media: VK thematic groups
- Books: Tatar books Telegram channels
Made with ❤️ for the Tatar language and the global NLP community