---
language:
- en
- es
- fr
- de
- it
- pt
- nl
- vi
- tr
- la
- id
- ms
- af
- sq
- is
- 'no'
- sv
- da
- fi
- hu
- pl
- cs
- ro
- ru
- bg
- uk
- sr
- be
- kk
- mk
- mn
- zh
- ja
- ko
- hi
- ur
- bn
- ta
- te
- mr
- gu
- kn
- ml
- pa
- as
- or
- ar
- fa
- ps
- sd
- ug
- el
- he
- hy
- ka
- am
- km
- lo
- my
- th
- si
- sw
- ti
- tl
- bo
- dv
- eu
tags:
- wikipedia
size_categories:
- 1M<n<10M
license: cc-by-sa-3.0
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
- masked-language-modeling
---

# Wikipedia Sentence Snippets

This dataset contains cleaned sentence snippets extracted from Wikipedia in multiple languages. Stub articles are filtered out, and snippets are taken from only the first half of each remaining article.

## Files

- `en/en.parquet`
- `es/es.parquet`

Analogous `<lang>/<lang>.parquet` files exist for the other language codes listed in the metadata above.

## Format

Each parquet file contains a `sentence` column.
### Data Splits

All configurations contain a single `train` split.
### Supported Tasks and Leaderboards

The dataset is generally used for language modeling, both causal (`text-generation`) and masked (`fill-mask`).

## From `wikimedia/wikipedia`

### Languages

You can find the list of languages here: https://meta.wikimedia.org/wiki/List_of_Wikipedias

## Dataset Creation

### Source Data

#### Initial Data Collection and Normalization

The dataset is built from the Wikipedia dumps: https://dumps.wikimedia.org

You can find the full list of languages and dates here: https://dumps.wikimedia.org/backup-index.html

The articles have been parsed using the [`mwparserfromhell`](https://mwparserfromhell.readthedocs.io) tool.

When uploading the data files for the 20231101 dump, we noticed that the Wikimedia Dumps website does not contain the dump for that date
for the "bbc", "dga", or "zgh" Wikipedias. We have reported the issue to the Wikimedia Phabricator: https://phabricator.wikimedia.org/T351761

### Licensing Information

Copyright licensing information: https://dumps.wikimedia.org/legal.html

All original textual content is licensed under the [GNU Free Documentation License](https://www.gnu.org/licenses/fdl-1.3.html) (GFDL)
and the [Creative Commons Attribution-Share-Alike 3.0 License](https://creativecommons.org/licenses/by-sa/3.0/).
Some text may be available only under the Creative Commons license; see their [Terms of Use](https://foundation.wikimedia.org/wiki/Policy:Terms_of_Use) for details.
Text written by some authors may be released under additional licenses or into the public domain.

### Citation Information

```
@ONLINE{wikidump,
    author = "Wikimedia Foundation",
    title  = "Wikimedia Downloads",
    url    = "https://dumps.wikimedia.org"
}
```