---
dataset_info:
  features:
  - name: title
    dtype: string
  - name: text
    dtype: string
  - name: url
    dtype: string
  splits:
  - name: train
    num_bytes: 138186439
    num_examples: 2759
  download_size: 79585645
  dataset_size: 138186439
license: cc-by-sa-3.0
task_categories:
- text-generation
language:
- de
pretty_name: wikitext german
size_categories:
- 1K<n<10K
---
# Dataset Card for "wikitext-18-de"

## Dataset Summary

The dataset is a German variation of the [wikitext](https://huggingface.co/datasets/wikitext) dataset and is a collection of
ca. 18 million tokens. It follows the same approach of extracting the "Good and Featured" articles from Wikipedia, but
for [German articles](https://en.wikipedia.org/wiki/Wikipedia:Featured_articles_in_other_languages/German). The dataset is
available under the Creative Commons Attribution-ShareAlike License.

The German selection contains 2,759 articles (retrieved 27.06.23). Despite the smaller number of articles compared to wikitext,
the dataset contains 18 million whitespace-separated tokens, probably due to longer article lengths and the language.
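The whitespace-separated token count mentioned above can be reproduced per article with a plain string split; a minimal sketch (the helper name and the sample text are illustrative, not part of the dataset):

```python
def count_whitespace_tokens(text: str) -> int:
    """Count whitespace-separated tokens, as in the wikitext convention."""
    return len(text.split())

# Example on a short German snippet (illustrative only):
sample = "Der Artikel behält Groß- und Kleinschreibung sowie Interpunktion bei."
print(count_whitespace_tokens(sample))  # 9
```

Summing this count over the `text` field of all 2,759 articles yields the stated ca. 18 million tokens.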

The dataset retains the original case, punctuation, numbers, and newlines, while excluding images, tables, and other non-text content.


[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)