---
license: apache-2.0
---

# Turkish Wikipedia Topic-to-Summary Dataset
This dataset consists of title–summary pairs extracted from the Turkish Wikipedia XML dump. Each entry contains a topic title as the input and a cleaned, HTML-free summary generated from the first paragraph of the corresponding article as the output. It is suitable for training language models, retrieval systems, and knowledge extraction tasks.
## Format

The dataset is provided in JSONL format. Each line represents a single record:

```json
{
  "input": "Title",
  "output": "A short description or summary of the given title."
}
```
## Cleaning Process

The following preprocessing steps were applied:

- Removal of HTML tags
- Removal of HTML entities such as `&nbsp;`, `&amp;`, and `&quot;`
- Removal of template structures (`{{ ... }}`)
- Removal of Infobox fields and markup
- Removal of wiki links (`[[Page]]`, `[[Page|Text]]`)
- Normalization of whitespace
- Removal of section headers (`== Heading ==`)
- Extraction of the first paragraph or first few sentences
All summaries are plain text and designed to be model-friendly.
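The cleaning steps above can be sketched with a few regular expressions. The exact rules used to build the dataset are not published, so this is an approximation for illustration, not the actual preprocessing code:

```python
import re

def clean_wikitext(text):
    """Approximate the cleaning steps described in this card."""
    text = re.sub(r"\{\{[^{}]*\}\}", "", text)                   # templates {{ ... }}
    text = re.sub(r"\[\[([^\]|]*)\|([^\]]*)\]\]", r"\2", text)   # [[Page|Text]] -> Text
    text = re.sub(r"\[\[([^\]]*)\]\]", r"\1", text)              # [[Page]] -> Page
    text = re.sub(r"<[^>]+>", "", text)                          # HTML tags
    text = re.sub(r"&\w+;", "", text)                            # HTML entities
    text = re.sub(r"^==+[^=]+==+\s*$", "", text, flags=re.M)     # == Heading ==
    text = re.sub(r"\s+", " ", text).strip()                     # whitespace
    return text
```

Note that nested templates and multi-line Infobox blocks need more careful handling than a single regex pass; real wikitext pipelines typically iterate until no templates remain.

```python
# no expected-output comment needed; see usage in tests
```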
## Possible Use Cases

- Training retrieval and ranking models
- Topic-to-text or title-to-abstract generation
- Knowledge extraction and factual reasoning tasks
- Pretraining or fine-tuning LLMs on structured encyclopedic data
- Building question-answering systems with short factual outputs
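As one concrete use, the title–summary pairs can be turned into prompt–completion examples for fine-tuning. The Turkish prompt template below is an assumption for illustration, not part of the dataset:

```python
def to_prompt_completion(record):
    # Hypothetical prompt template ("Başlık" = title, "Özet" = summary);
    # adapt it to your own training setup.
    prompt = f"Başlık: {record['input']}\nÖzet:"
    completion = " " + record["output"]
    return {"prompt": prompt, "completion": completion}
```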
## Source

The dataset is derived from the publicly available Turkish Wikipedia dump. It contains no proprietary content; the underlying article text is subject to Wikipedia's licensing terms (CC BY-SA).
## Size

- Format: JSONL
- Each record: one topic title and its cleaned summary
- Total size depends on the specific Wikipedia dump version used
## Notes
This dataset is not an official product of Wikipedia or the Wikimedia Foundation. It is a processed derivative created for research and machine learning purposes.