# commonwealth-wiki-mix
FineWeb-style Parquet shards (LLM-training friendly), created by merging multiple Wikipedia language datasets into a single dataset so that small monolingual corpora are not looped over repeatedly during training.
## What’s inside
- **Format**: `nanochat-parquet-v1`
- **Layout**: `shard_*.parquet` + `metadata.json`
- **Text column**: `text`
- **Parquet settings**: zstd (level 3), `row_group_size=1024`, `use_dictionary=False`, `write_statistics=False` (see the write sketch after this list)
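
For reference, here is a minimal sketch of how a shard with these settings could be written with PyArrow. The shard name and document list are illustrative only and are not the actual export pipeline:

```python
import pyarrow as pa
import pyarrow.parquet as pq

# Illustrative documents; the real shards were built from the exported
# wiki datasets listed under "Sources included" below.
docs = ["first document text ...", "second document text ..."]

table = pa.table({"text": pa.array(docs, type=pa.string())})
pq.write_table(
    table,
    "shard_00000.parquet",  # example name matching the shard_*.parquet layout
    compression="zstd",
    compression_level=3,
    row_group_size=1024,
    use_dictionary=False,
    write_statistics=False,
)
```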
## Sources included
This mix was built from the already-exported wiki datasets in `~/.cache/nanochat/datasets/monolingual/`:
- South Asia: Urdu, Hindi, Bengali, Tamil, Telugu, Malayalam, Marathi, Kannada, Gujarati, Punjabi, Nepali, Sinhala
- Southeast Asia: Malay
- Africa: Swahili, Yoruba, Afrikaans, Zulu
## Loading
In Python, you can read the shards directly with PyArrow or Pandas (see the example after the snippet below). With HuggingFace `datasets`, Parquet loading works out of the box:
```python
from datasets import load_dataset

ds = load_dataset("JayJayThrowThrow/commonwealth-wiki-mix", split="train")
print(ds.column_names)  # ["text"]
```
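
Alternatively, a single shard can be read directly with PyArrow and converted to a Pandas DataFrame. The shard path below is a placeholder for wherever the files end up after download:

```python
import pyarrow.parquet as pq

# Placeholder path; point this at a downloaded shard_*.parquet file.
table = pq.read_table("shard_00000.parquet", columns=["text"])
df = table.to_pandas()
print(len(df), df["text"].iloc[0][:100])
```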