---
license: apache-2.0
---

# Turkish Wikipedia Topic-to-Summary Dataset

This dataset consists of title–summary pairs extracted from the Turkish Wikipedia XML dump. Each entry contains a topic title as the input and a cleaned, HTML-free summary generated from the first paragraph of the corresponding article as the output.
It is suitable for training language models, retrieval systems, and knowledge extraction tasks.

## Format

The dataset is provided in JSONL format. Each line is a single JSON record:
```json
{"input": "Title", "output": "A short description or summary about the given title."}
```

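Records in this shape can be loaded with the standard `json` module. The sketch below parses an in-memory sample; the two titles and summaries are illustrative stand-ins, not lines taken from the dataset:

```python
import io
import json

# Two sample JSONL lines standing in for the real file; the
# "input"/"output" schema matches the card, the content is illustrative.
sample = io.StringIO(
    '{"input": "Ankara", "output": "Ankara, Türkiye\'nin başkentidir."}\n'
    '{"input": "İstanbul", "output": "İstanbul, Türkiye\'nin en kalabalık şehridir."}\n'
)

# Parse one JSON object per non-empty line.
records = [json.loads(line) for line in sample if line.strip()]
print(records[0]["input"], "->", records[0]["output"])
```

For the real file, replace the `io.StringIO` buffer with `open(path, encoding="utf-8")`.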
## Cleaning Process

The following preprocessing steps were applied:

- Removal of HTML tags
- Removal of HTML entities such as `&nbsp;`, `&amp;`, `&quot;`
- Removal of template structures (`{{ ... }}`)
- Removal of infobox fields and markup
- Removal of wiki links (`[[Page]]`, `[[Page|Text]]`)
- Normalization of whitespace
- Removal of section headers (`== Heading ==`)
- Extraction of the first paragraph or first few sentences


All summaries are plain text and designed to be model-friendly.
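The steps above can be sketched with a few regular expressions. This is an illustrative approximation of the described pipeline, not the actual script used to build the dataset:

```python
import re

def clean_wikitext(text: str) -> str:
    """Minimal sketch of the cleaning steps listed above (approximate)."""
    # Remove template structures ({{ ... }}); a few passes handle
    # shallow nesting of templates inside templates.
    for _ in range(3):
        text = re.sub(r"\{\{[^{}]*\}\}", "", text)
    # Resolve wiki links: [[Page|Text]] -> Text, [[Page]] -> Page.
    text = re.sub(r"\[\[(?:[^|\]]*\|)?([^\]]+)\]\]", r"\1", text)
    # Strip HTML tags.
    text = re.sub(r"<[^>]+>", "", text)
    # Crudely replace HTML entities (&nbsp;, &amp;, ...) with a space.
    text = re.sub(r"&[a-z]+;", " ", text)
    # Drop bold/italic wiki markup ('' and ''').
    text = re.sub(r"'{2,}", "", text)
    # Remove section headers (== Heading ==) on their own lines.
    text = re.sub(r"^=+[^=]+=+\s*$", "", text, flags=re.MULTILINE)
    # Normalize whitespace.
    return re.sub(r"\s+", " ", text).strip()

raw = "{{Kutu}} '''[[Ankara]]''', [[Türkiye]]'nin başkentidir.<br/>"
print(clean_wikitext(raw))  # Ankara, Türkiye'nin başkentidir.
```

A production pipeline would more likely use a dedicated wikitext parser (e.g. `mwparserfromhell`) rather than raw regexes, which cannot handle deeply nested templates.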

## Possible Use Cases

- Training retrieval and ranking models
- Topic-to-text or title-to-abstract generation
- Knowledge extraction and factual reasoning tasks
- Pretraining or fine-tuning LLMs on structured encyclopedic data
- Building question answering systems with short factual outputs


## Source

The dataset is derived from the publicly available Turkish Wikipedia dump.
It does not contain any proprietary content and follows Wikipedia's licensing terms.

## Size

- Format: JSONL
- Each record: one topic title and its cleaned summary
- Total size depends on the specific Wikipedia dump version used


## Notes

This dataset is not an official product of Wikipedia or the Wikimedia Foundation. It is a processed derivative created for research and machine learning purposes.

