sixfingerdev committed · Commit 9bd6ce0 · verified · 1 Parent(s): 022874f

Update README.md

---
license: apache-2.0
---
# Turkish Wikipedia Topic-to-Summary Dataset

This dataset consists of title–summary pairs extracted from the Turkish Wikipedia XML dump. Each entry contains a topic title as the input and a cleaned, HTML-free summary generated from the first paragraph of the corresponding article as the output. It is suitable for training language models, retrieval systems, and knowledge extraction tasks.
## Format

The dataset is provided in JSONL format. Each line represents a single record:
```json
{"input": "Title", "output": "A short description or summary about the given title."}
```
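For quick inspection, records can be streamed with a few lines of Python; the filename below is a placeholder for the actual JSONL file in this repository.

```python
import json

# Placeholder filename; substitute the actual JSONL file from this repository.
path = "data.jsonl"

with open(path, encoding="utf-8") as f:
    for line in f:
        record = json.loads(line)  # one {"input": ..., "output": ...} object per line
        print(record["input"], "->", record["output"][:80])
```

The same file also loads directly with the Hugging Face `datasets` library via `load_dataset("json", data_files=path)`.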
## Cleaning Process

The following preprocessing steps were applied (an illustrative sketch follows the list):

- Removal of HTML tags
- Removal of HTML entities such as `&nbsp;`, `&amp;`, and `&quot;`
- Removal of template structures (`{{ ... }}`)
- Removal of Infobox fields and markup
- Removal of wiki links (`[[Page]]`, `[[Page|Text]]`)
- Normalization of whitespace
- Removal of section headers (`== Heading ==`)
- Extraction of the first paragraph or first few sentences

All summaries are plain text and designed to be model-friendly.
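The exact extraction script is not included in this repository; the sketch below is only an assumption about how the steps above can be implemented with regular expressions.

```python
import re

def clean_wikitext(text: str) -> str:
    """Illustrative cleanup of raw wikitext into plain text.

    Mirrors the steps listed above; this is not the exact script used
    to build the dataset.
    """
    # Template structures and Infobox markup ({{ ... }}); nested
    # templates would need this substitution applied in a loop.
    text = re.sub(r"\{\{[^{}]*\}\}", "", text)
    # Wiki links: [[Page]] -> Page, [[Page|Text]] -> Text.
    text = re.sub(r"\[\[(?:[^\[\]|]*\|)?([^\[\]]*)\]\]", r"\1", text)
    # HTML tags, then entities such as &nbsp;, &amp;, &quot;.
    text = re.sub(r"<[^>]+>", "", text)
    text = re.sub(r"&\w+;", " ", text)
    # Section headers such as == Heading ==.
    text = re.sub(r"^=+[^=]+=+\s*$", "", text, flags=re.MULTILINE)
    # Whitespace normalization.
    return re.sub(r"\s+", " ", text).strip()

def first_sentences(raw: str, limit: int = 3) -> str:
    """Extract the first paragraph's first few sentences."""
    sentences = re.split(r"(?<=[.!?])\s+", clean_wikitext(raw))
    return " ".join(sentences[:limit])
```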
## Possible Use Cases

- Training retrieval and ranking models
- Topic-to-text or title-to-abstract generation
- Knowledge extraction and factual reasoning tasks
- Pretraining or finetuning LLMs on structured encyclopedic data (see the sketch after this list)
- Building question answering systems with short factual outputs
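As one illustration of the finetuning use case, records can be mapped to prompt/completion pairs; the template and filename below are hypothetical choices, not part of the dataset.

```python
import json

# Hypothetical prompt template and filename for instruction tuning.
TEMPLATE = "Summarize the Turkish Wikipedia topic: {title}"

def to_pair(record: dict) -> dict:
    """Map one input/output record to a training pair."""
    return {
        "prompt": TEMPLATE.format(title=record["input"]),
        "completion": record["output"],
    }

with open("data.jsonl", encoding="utf-8") as f:
    pairs = [to_pair(json.loads(line)) for line in f]
```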
## Source

The dataset is derived from the publicly available Turkish Wikipedia dump. It does not contain any proprietary content and follows Wikipedia's licensing terms.
## Size

- Format: JSONL
- Each record: one topic title and its cleaned summary
- Total size depends on the specific Wikipedia dump version used
## Notes

This dataset is not an official product of Wikipedia or the Wikimedia Foundation. It is a processed derivative created for research and machine learning purposes.