Add README
README.md (ADDED)
# commonwealth-wiki-mix

FineWeb-style Parquet shards (LLM-training friendly), created by merging multiple Wikipedia language datasets into a single dataset so that no single small per-language dataset has to be looped over as often during training.

## What’s inside
- **Format**: `nanochat-parquet-v1`
- **Layout**: `shard_*.parquet` + `metadata.json`
- **Text column**: `text`
- **Parquet settings**: zstd (level 3), `row_group_size=1024`, `use_dictionary=False`, `write_statistics=False`
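
For reference, a shard with these exact settings could be produced with PyArrow along the following lines. This is a minimal sketch, not the actual export script: the document list and the output filename are placeholders.

```python
import pyarrow as pa
import pyarrow.parquet as pq

# Placeholder input: a list of already-cleaned document strings.
docs = ["first document ...", "second document ..."]

table = pa.table({"text": pa.array(docs, type=pa.string())})

# Mirror the settings listed above: zstd level 3, 1024-row row groups,
# no dictionary encoding, no per-column statistics.
pq.write_table(
    table,
    "shard_00000.parquet",  # placeholder name matching the shard_*.parquet layout
    compression="zstd",
    compression_level=3,
    row_group_size=1024,
    use_dictionary=False,
    write_statistics=False,
)
```

Disabling dictionary encoding and statistics keeps writer overhead and file metadata small, which presumably matters more here than query-time pruning, since a training reader just scans every row anyway.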
## Sources included

This mix was built from the already-exported wiki datasets in `~/.cache/nanochat/datasets/monolingual/` (see the merge sketch after the list):

- South Asia: Urdu, Hindi, Bengali, Tamil, Telugu, Malayalam, Marathi, Kannada, Gujarati, Punjabi, Nepali, Sinhala
- Southeast Asia: Malay
- Africa: Swahili, Yoruba, Afrikaans, Zulu
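
The merge script itself is not part of this repository. As an illustration only, combining the per-language exports into a single shuffled pool could look roughly like this; the directory layout assumed below (one sub-directory per language, each holding `shard_*.parquet` files with a `text` column) is an assumption, not a documented contract:

```python
import glob
import os
import random

import pyarrow.parquet as pq

# Assumed layout: ~/.cache/nanochat/datasets/monolingual/<lang>/shard_*.parquet
root = os.path.expanduser("~/.cache/nanochat/datasets/monolingual")
paths = sorted(glob.glob(os.path.join(root, "*", "shard_*.parquet")))

texts = []
for path in paths:
    # Only the text column is needed for the mix.
    texts.extend(pq.read_table(path, columns=["text"]).column("text").to_pylist())

# Interleave languages so a training run does not see them back-to-back.
random.seed(0)
random.shuffle(texts)

# ...then re-shard `texts` and write each shard with the Parquet settings
# listed under "What's inside" above.
```

For corpora that do not fit in memory you would stream and shuffle shard-by-shard instead, but the idea is the same.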
## Loading

In Python, you can load the shards directly with PyArrow or Pandas (sketched after the example below). With Hugging Face `datasets`, Parquet loading works out of the box:

```python
from datasets import load_dataset

ds = load_dataset("JayJayThrowThrow/commonwealth-wiki-mix", split="train")
print(ds.column_names)  # ["text"]
```
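
To read shards directly without the `datasets` library, PyArrow and Pandas both work; the glob below assumes you have already downloaded the `shard_*.parquet` files locally:

```python
import glob

import pandas as pd
import pyarrow.parquet as pq

paths = sorted(glob.glob("shard_*.parquet"))  # paths to locally downloaded shards

# PyArrow: one shard as an Arrow table.
table = pq.read_table(paths[0], columns=["text"])
print(table.num_rows)

# Pandas: the same shard as a DataFrame.
df = pd.read_parquet(paths[0], columns=["text"])
print(df["text"].str.len().describe())
```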