Datasets: Add metadata to dataset card #17
by albertvillanova (HF Staff) - opened

README.md CHANGED

```diff
@@ -320,6 +320,21 @@ language:
 - zgh
 - zh
 - zu
+license:
+- cc-by-sa-3.0
+- gfdl
+task_categories:
+- text-generation
+- fill-mask
+task_ids:
+- language-modeling
+- masked-language-modeling
+size_categories:
+- n<1K
+- 1K<n<10K
+- 10K<n<100K
+- 100K<n<1M
+- 1M<n<10M
 configs:
 - config_name: 20230701.ca
   data_files:
```
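
These YAML fields are the machine-readable metadata the Hub indexes for search and filtering. As a quick sanity check that the front matter parses as intended, the card can be read back programmatically; a minimal sketch using `huggingface_hub`, assuming the repo id is `wikipedia` (following the card's own example; substitute the actual repo id):

```python
from huggingface_hub import DatasetCard

# Fetch the dataset card and parse its YAML front matter.
# Assumption: "wikipedia" is the dataset repo id on the Hub.
card = DatasetCard.load("wikipedia")
metadata = card.data.to_dict()

print(metadata["license"])          # expected: ['cc-by-sa-3.0', 'gfdl']
print(metadata["task_categories"])  # expected: ['text-generation', 'fill-mask']
```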

```diff
@@ -569,3 +584,161 @@ language_bcp47:
 - zh-min-nan
 - zh-yue
 ---
```

The lines added after the closing `---` of the YAML front matter are the new dataset card body:
# Dataset Card for Wikimedia Wikipedia

## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** [https://dumps.wikimedia.org](https://dumps.wikimedia.org)
- **Repository:**
- **Paper:**
- **Point of Contact:**

### Dataset Summary

Wikipedia dataset containing cleaned articles in all languages.

The dataset is built from the Wikipedia dumps (https://dumps.wikimedia.org/),
with one subset per language, each containing a single train split.

Each example contains the content of one full Wikipedia article, cleaned to strip
markup and unwanted sections (references, etc.).

All language subsets have already been processed for a recent dump, and you can load them by date and language like this:

```python
from datasets import load_dataset

load_dataset("wikipedia", "20231101.en")
```
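
Some subsets are large, so it can help to stream rather than download a full split up front; a small sketch using the standard `streaming=True` option of `load_dataset`:

```python
from datasets import load_dataset

# Stream the train split lazily instead of downloading it in full.
ds = load_dataset("wikipedia", "20231101.en", split="train", streaming=True)

first = next(iter(ds))
print(first["title"])
```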

### Supported Tasks and Leaderboards

The dataset is generally used for Language Modeling.

### Languages

You can find the list of languages here: https://meta.wikimedia.org/wiki/List_of_Wikipedias
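
To see which date-and-language subsets are actually published, one can list the config names; a sketch using the standard `datasets` helper (the repo id follows the card's own example and may need adjusting):

```python
from datasets import get_dataset_config_names

# Each config name pairs a dump date with a language code, e.g. "20230701.ca".
configs = get_dataset_config_names("wikipedia")
print(configs[:5])
```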

## Dataset Structure

### Data Instances

An example looks as follows:
```
{'id': '1',
 'url': 'https://simple.wikipedia.org/wiki/April',
 'title': 'April',
 'text': 'April is the fourth month...'
}
```

### Data Fields

The data fields are the same among all configurations:
- `id` (`str`): ID of the article.
- `url` (`str`): URL of the article.
- `title` (`str`): Title of the article.
- `text` (`str`): Text content of the article.
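
The schema can be confirmed without downloading any data by inspecting the builder info; a minimal sketch:

```python
from datasets import load_dataset_builder

# Reads only the dataset metadata, not the data files themselves.
builder = load_dataset_builder("wikipedia", "20231101.en")
print(builder.info.features)  # id, url, title and text should all be string values
```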

### Data Splits

All configurations contain a single `train` split.

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

#### Initial Data Collection and Normalization

The dataset is built from the Wikipedia dumps: https://dumps.wikimedia.org

You can find the full list of languages and dates here: https://dumps.wikimedia.org/backup-index.html

The articles have been parsed using the [`mwparserfromhell`](https://mwparserfromhell.readthedocs.io) tool.
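
For reference, this is roughly what such cleaning looks like with `mwparserfromhell` (a minimal sketch, not the exact pipeline used to build this dataset):

```python
import mwparserfromhell

raw = "'''April''' is the fourth [[month]] of the year.<ref>a source</ref>"
wikicode = mwparserfromhell.parse(raw)

# strip_code() drops templates, refs and wiki markup, keeping the plain text.
print(wikicode.strip_code())  # April is the fourth month of the year.
```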

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

Copyright licensing information: https://dumps.wikimedia.org/legal.html

All original textual content is licensed under the [GNU Free Documentation License](https://www.gnu.org/licenses/fdl-1.3.html) (GFDL)
and the [Creative Commons Attribution-Share-Alike 3.0 License](https://creativecommons.org/licenses/by-sa/3.0/).
Some text may be available only under the Creative Commons license; see the Wikimedia [Terms of Use](https://foundation.wikimedia.org/wiki/Policy:Terms_of_Use) for details.
Text written by some authors may be released under additional licenses or into the public domain.

### Citation Information

```
@ONLINE{wikidump,
    author = "Wikimedia Foundation",
    title = "Wikimedia Downloads",
    url = "https://dumps.wikimedia.org"
}
```