---
language:
- en
dataset_info:
- config_name: 100M_1
  features:
  - name: id
    dtype: string
  - name: url
    dtype: string
  - name: title
    dtype: string
  - name: text
    dtype: string
  splits:
  - name: train
    num_bytes: 645762137.0362595
    num_examples: 225498
  - name: validation
    num_bytes: 2863715.5852214186
    num_examples: 1000
  download_size: 389527538
  dataset_size: 648625852.621481
- config_name: 100M_2
  features:
  - name: id
    dtype: string
  - name: url
    dtype: string
  - name: title
    dtype: string
  - name: text
    dtype: string
  splits:
  - name: train
    num_bytes: 646074282.0350486
    num_examples: 225607
  - name: validation
    num_bytes: 2863715.5852214186
    num_examples: 1000
  download_size: 389381161
  dataset_size: 648937997.62027
- config_name: 100M_3
  features:
  - name: id
    dtype: string
  - name: url
    dtype: string
  - name: title
    dtype: string
  - name: text
    dtype: string
  splits:
  - name: train
    num_bytes: 650730683.5766187
    num_examples: 227233
  - name: validation
    num_bytes: 2863715.5852214186
    num_examples: 1000
  download_size: 390292303
  dataset_size: 653594399.1618401
- config_name: 10M_1
  features:
  - name: id
    dtype: string
  - name: url
    dtype: string
  - name: title
    dtype: string
  - name: text
    dtype: string
  splits:
  - name: train
    num_bytes: 64820202.27148681
    num_examples: 22635
  - name: validation
    num_bytes: 2863715.5852214186
    num_examples: 1000
  download_size: 40370445
  dataset_size: 67683917.85670823
- config_name: 10M_2
  features:
  - name: id
    dtype: string
  - name: url
    dtype: string
  - name: title
    dtype: string
  - name: text
    dtype: string
  splits:
  - name: train
    num_bytes: 64236004.292101644
    num_examples: 22431
  - name: validation
    num_bytes: 2863715.5852214186
    num_examples: 1000
  download_size: 40412205
  dataset_size: 67099719.87732306
- config_name: 10M_3
  features:
  - name: id
    dtype: string
  - name: url
    dtype: string
  - name: title
    dtype: string
  - name: text
    dtype: string
  splits:
  - name: train
    num_bytes: 63250886.13078547
    num_examples: 22087
  - name: validation
    num_bytes: 2863715.5852214186
    num_examples: 1000
  download_size: 40514801
  dataset_size: 66114601.71600689
- config_name: all
  features:
  - name: id
    dtype: string
  - name: url
    dtype: string
  - name: title
    dtype: string
  - name: text
    dtype: string
  splits:
  - name: train
    num_bytes: 18350156819
    num_examples: 6407814
  download_size: 10723740674
  dataset_size: 18350156819
configs:
- config_name: 100M_1
  data_files:
  - split: train
    path: 100M_1/train-*
  - split: validation
    path: 100M_1/validation-*
- config_name: 100M_2
  data_files:
  - split: train
    path: 100M_2/train-*
  - split: validation
    path: 100M_2/validation-*
- config_name: 100M_3
  data_files:
  - split: train
    path: 100M_3/train-*
  - split: validation
    path: 100M_3/validation-*
- config_name: 10M_1
  data_files:
  - split: train
    path: 10M_1/train-*
  - split: validation
    path: 10M_1/validation-*
- config_name: 10M_2
  data_files:
  - split: train
    path: 10M_2/train-*
  - split: validation
    path: 10M_2/validation-*
- config_name: 10M_3
  data_files:
  - split: train
    path: 10M_3/train-*
  - split: validation
    path: 10M_3/validation-*
- config_name: all
  data_files:
  - split: train
    path: all/train-*
---
This repository contains random subsets of English Wikipedia obtained from
[`"wikimedia/wikipedia"`](https://huggingface.co/datasets/wikimedia/wikipedia) (`"20231101.en"`).
It includes random subsets at two scales: three independent subsets of roughly 10M words each (~23k articles per subset) and three independent subsets of roughly 100M words each (~228k articles per subset).
These data are intended to be used for the BabyLM challenge. For convenience, the repository also includes the full English Wikipedia, containing roughly 2.8B words total
(6.4M articles).

You can load these datasets as follows:
```python
from datasets import load_dataset

ds_10M = load_dataset("eminorhan/wikipedia", "10M_1")    # first 10M-word subset (also: "10M_2", "10M_3")

ds_100M = load_dataset("eminorhan/wikipedia", "100M_1")  # first 100M-word subset (also: "100M_2", "100M_3")

ds_all = load_dataset("eminorhan/wikipedia", "all")      # the full data (2.8B words)
```
All 10M and 100M subsets come with `train`/`validation` splits, whereas the full data only has a `train` split.
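The subset sizes quoted above are rough word counts. One simple way to estimate them is a whitespace token count over the `text` field; the sketch below illustrates the idea on toy strings rather than the full dataset (so it runs without downloading anything), and is not necessarily how the subset sizes were originally computed:

```python
# Rough word-count estimate via whitespace tokenization (an illustrative
# sketch; not necessarily how the subset sizes were measured).
def count_words(texts):
    """Return the total number of whitespace-separated tokens across texts."""
    return sum(len(t.split()) for t in texts)

# Toy stand-in for something like ds_10M["train"]["text"]:
sample_texts = [
    "Alan Turing was a mathematician.",
    "Wikipedia is an encyclopedia.",
]
print(count_words(sample_texts))  # → 9
```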
We applied lightweight preprocessing to the article texts using [this script](https://github.com/eminorhan/babylm/blob/master/create_random_wikipedia.py),
which mainly strips away some sections of the articles such as "References", "See also", *etc.*
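This kind of section stripping can be sketched as truncating each article at the first trailing-section heading. The snippet below is an illustrative sketch only, not the actual preprocessing script; the marker list (beyond "References" and "See also") is an assumption:

```python
# Illustrative sketch of stripping trailing sections from an article
# (not the actual preprocessing script; marker list is an assumption).
SECTION_MARKERS = ["References", "See also", "External links", "Further reading"]

def strip_trailing_sections(text):
    """Truncate the article at the earliest trailing-section heading found."""
    cut = len(text)
    for marker in SECTION_MARKERS:
        idx = text.find(marker)
        if idx != -1:
            cut = min(cut, idx)
    return text[:cut].rstrip()

article = "Turing was a pioneer of computer science.\n\nSee also\nChurch-Turing thesis"
print(strip_trailing_sections(article))  # → Turing was a pioneer of computer science.
```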