---
dataset_info:
  features:
  - name: title
    dtype: string
  - name: text
    dtype: string
  - name: url
    dtype: string
  splits:
  - name: train
    num_bytes: 138186439
    num_examples: 2759
  download_size: 79585645
  dataset_size: 138186439
license: cc-by-sa-3.0
task_categories:
- text-generation
language:
- de
pretty_name: wikitext german
size_categories:
- 1K<n<10K
---

# Dataset Card for "wikitext-18-de"

## Dataset Summary

The dataset is a German variation of the [wikitext](https://huggingface.co/datasets/wikitext) dataset and is a collection of
ca. 18 million tokens. It follows the same approach by extracting from the "Good and Featured" articles on Wikipedia, but
for [German articles](https://en.wikipedia.org/wiki/Wikipedia:Featured_articles_in_other_languages/German). The dataset is
available under the Creative Commons Attribution-ShareAlike License.
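
A minimal loading sketch using the 🤗 `datasets` library, matching the schema in the card header (`title`, `text`, `url` string features and a single `train` split). The repository id below is a placeholder; the actual id may need the owning user/org namespace prepended:

```python
from datasets import load_dataset

# Hypothetical repository id -- adjust to the actual namespace on the Hub.
ds = load_dataset("wikitext-18-de", split="train")

print(ds)                    # features: title, text, url; num_rows: 2759
print(ds[0]["title"])        # title of the first article
print(ds[0]["text"][:200])   # first 200 characters of the raw article text
```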

The German version contained 2759 articles at the time of retrieval (27.06.23). Even though it holds fewer articles than wikitext,
the dataset contains 18 million whitespace-separated tokens, probably due to longer article lengths and differences between the languages.
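
A sketch of how the whitespace token count could be reproduced (again assuming the hypothetical repository id from the snippet above):

```python
from datasets import load_dataset

# Hypothetical repository id -- adjust to the actual namespace on the Hub.
ds = load_dataset("wikitext-18-de", split="train")

# str.split() with no arguments splits on any run of whitespace, matching
# the whitespace-separated token counting described on this card.
total_tokens = sum(len(example["text"].split()) for example in ds)
print(f"{total_tokens:,} tokens")  # expected: roughly 18 million
```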

The dataset retains the original case, punctuation, numbers, and newlines, but excludes images, tables, and other non-text data.

[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)