---
dataset_info:
  features:
  - name: id
    dtype: string
  - name: url
    dtype: string
  - name: title
    dtype: string
  - name: text
    dtype: string
  splits:
  - name: train
    num_bytes: 1657066217
    num_examples: 275503
  - name: test
    num_bytes: 3554531
    num_examples: 1222
  download_size: 839560462
  dataset_size: 1660620748
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: test
    path: data/test-*
---
|
|
| Language | Articles | Token Count\* | Character Count |
|----------|----------|---------------|-----------------|
| English  | 107,123  | 219.88 M      | 951.53 M        |
| Georgian | 169,602  | 175.32 M      | 260.86 M        |

\*Token counts computed with the Gemma tokenizer.
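Token counts like those in the table above can be computed by streaming a split and tokenizing each article. The sketch below factors the counting out into a helper so it can be checked with a toy tokenizer; the `transformers` checkpoint name and the streaming setup in the comments are assumptions, not part of this card.

```python
from typing import Callable, Iterable

def count_tokens(articles: Iterable[dict], encode: Callable[[str], list]) -> tuple[int, int]:
    """Return (total_tokens, total_characters) over an iterable of articles."""
    tokens = chars = 0
    for article in articles:
        tokens += len(encode(article["text"]))
        chars += len(article["text"])
    return tokens, chars

# With the real data one would do, roughly (assumed checkpoint name):
#   from transformers import AutoTokenizer
#   from datasets import load_dataset
#   tok = AutoTokenizer.from_pretrained("google/gemma-2b")
#   ds = load_dataset("wikimedia/wikipedia", "20231101.en", split="train", streaming=True)
#   count_tokens(ds, lambda t: tok(t)["input_ids"])

# Tiny stand-in: a whitespace "tokenizer" over two toy articles.
toy = [{"text": "April is the fourth month"}, {"text": "May follows April"}]
print(count_tokens(toy, str.split))  # -> (8, 42)
```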
|
|
|
|
|
|
|
|
### Original dataset card
|
|
The dataset is built from the Wikipedia dumps (https://dumps.wikimedia.org/),
with one subset per language, each containing a single train split.

Each example contains the content of one full Wikipedia article, cleaned to strip
wiki markup and unwanted sections (references, etc.).
|
|
|
|
|
|
|
|
All language subsets have been processed for the most recent dump, and you can load them by date and language as follows:

```python
from datasets import load_dataset

ds = load_dataset("wikimedia/wikipedia", "20231101.en")
```
|
|
|
|
|
#### Data Visualization |
|
|
Click the [Nomic Atlas](https://atlas.nomic.ai/map/475c26d7-b142-4795-9887-02b6eeb18dc0/0d312be6-a3bb-4586-b6b7-53dcd0cbefa5) map below to visualize the 6.4 million samples in the `20231101.en` split. |
|
|
|
|
|
<a href="https://atlas.nomic.ai/map/475c26d7-b142-4795-9887-02b6eeb18dc0/0d312be6-a3bb-4586-b6b7-53dcd0cbefa5"> |
|
|
<img src="https://cdn-uploads.huggingface.co/production/uploads/6480c476cacb1c4a0696eeb8/sZNN6Vubc0Oue83vKaJUu.webp" alt="Nomic-Atlas Wikipedia Map" width="25%"/> |
|
|
</a> |
|
|
|
|
|
### Supported Tasks and Leaderboards |
|
|
|
|
|
The dataset is generally used for Language Modeling. |
|
|
|
|
|
### Languages |
|
|
|
|
|
You can find the list of languages here: https://meta.wikimedia.org/wiki/List_of_Wikipedias |
|
|
|
|
|
## Dataset Structure |
|
|
|
|
|
### Data Instances |
|
|
|
|
|
An example looks as follows: |
|
|
```
{'id': '1',
 'url': 'https://simple.wikipedia.org/wiki/April',
 'title': 'April',
 'text': 'April is the fourth month...'}
```
|
|
|
|
|
### Data Fields |
|
|
|
|
|
The data fields are the same among all configurations: |
|
|
- `id` (`str`): ID of the article.
- `url` (`str`): URL of the article.
- `title` (`str`): Title of the article.
- `text` (`str`): Text content of the article.
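A minimal sketch of working with one record; the example dict below mirrors the four string fields documented above.

```python
# One record of the dataset: four plain-string fields.
example = {
    "id": "1",
    "url": "https://simple.wikipedia.org/wiki/April",
    "title": "April",
    "text": "April is the fourth month...",
}

# All four fields are strings, so no nested decoding is needed.
assert all(isinstance(example[key], str) for key in ("id", "url", "title", "text"))
print(f'{example["title"]}: {len(example["text"])} characters')
```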
|
|
|
|
|
### Data Splits |
|
|
|
|
|
All configurations contain a single `train` split. |
|
|
|
|
|
## Dataset Creation |
|
|
|
|
|
### Curation Rationale |
|
|
|
|
|
[More Information Needed] |
|
|
|
|
|
### Source Data |
|
|
|
|
|
#### Initial Data Collection and Normalization |
|
|
|
|
|
The dataset is built from the Wikipedia dumps: https://dumps.wikimedia.org |
|
|
|
|
|
You can find the full list of languages and dates here: https://dumps.wikimedia.org/backup-index.html |
|
|
|
|
|
The articles have been parsed using the [`mwparserfromhell`](https://mwparserfromhell.readthedocs.io) tool. |
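To illustrate the kind of cleaning this parsing enables, here is a simplified regex stand-in (the real pipeline uses `mwparserfromhell`; this sketch only handles wikilinks and bold/italic quotes, and is not the actual implementation):

```python
import re

def strip_wikitext(text: str) -> str:
    """Toy wikitext cleaner: resolves [[links]] and strips ''quote'' markup."""
    # [[Target|label]] -> label, [[Target]] -> Target
    text = re.sub(r"\[\[(?:[^|\]]*\|)?([^\]]+)\]\]", r"\1", text)
    # '''bold''' and ''italic'' quote runs -> plain text
    text = re.sub(r"'{2,}", "", text)
    return text

print(strip_wikitext("'''April''' is the [[Month|fourth month]] of the year."))
# -> April is the fourth month of the year.
```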
|
|
|
|
|
When uploading the data files for the 20231101 dump, we noticed that the Wikimedia Dumps website does not contain a dump for this date
for the "bbc", "dga", or "zgh" Wikipedias. We have reported the issue to the Wikimedia Phabricator: https://phabricator.wikimedia.org/T351761
|
|
|
|
|
#### Who are the source language producers? |
|
|
|
|
|
[More Information Needed] |
|
|
|
|
|
### Annotations |
|
|
|
|
|
#### Annotation process |
|
|
|
|
|
[More Information Needed] |
|
|
|
|
|
#### Who are the annotators? |
|
|
|
|
|
[More Information Needed] |
|
|
|
|
|
### Personal and Sensitive Information |
|
|
|
|
|
[More Information Needed] |
|
|
|
|
|
## Considerations for Using the Data |
|
|
|
|
|
### Social Impact of Dataset |
|
|
|
|
|
[More Information Needed] |
|
|
|
|
|
### Discussion of Biases |
|
|
|
|
|
[More Information Needed] |
|
|
|
|
|
### Other Known Limitations |
|
|
|
|
|
[More Information Needed] |
|
|
|
|
|
## Additional Information |
|
|
|
|
|
### Dataset Curators |
|
|
|
|
|
[More Information Needed] |
|
|
|
|
|
### Licensing Information |
|
|
|
|
|
Copyright licensing information: https://dumps.wikimedia.org/legal.html |
|
|
|
|
|
All original textual content is licensed under the [GNU Free Documentation License](https://www.gnu.org/licenses/fdl-1.3.html) (GFDL) |
|
|
and the [Creative Commons Attribution-Share-Alike 3.0 License](https://creativecommons.org/licenses/by-sa/3.0/). |
|
|
Some text may be available only under the Creative Commons license; see their [Terms of Use](https://foundation.wikimedia.org/wiki/Policy:Terms_of_Use) for details. |
|
|
Text written by some authors may be released under additional licenses or into the public domain. |
|
|
|
|
|
### Citation Information |
|
|
|
|
|
```
@ONLINE{wikidump,
    author = "Wikimedia Foundation",
    title  = "Wikimedia Downloads",
    url    = "https://dumps.wikimedia.org"
}
```