---
language:
- en
pretty_name: SlimPajama_300B
---
SlimPajama_300B is a 300B-token sample of the de-duplicated SlimPajama dataset, tokenized with the EleutherAI/gpt-neox-20b tokenizer.
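The tokenized `.bin` files can be read directly as flat arrays of token IDs. A minimal sketch with NumPy follows; the `uint16` dtype is an assumption (the gpt-neox-20b vocabulary of roughly 50k entries fits in 16 bits, a common convention for pre-tokenized binaries), so verify it against the actual files before use:

```python
import numpy as np

# Assumed storage dtype: uint16, since the ~50k-entry gpt-neox-20b
# vocabulary fits in 16 bits. Verify against the real files.
TOKEN_DTYPE = np.uint16

def load_tokens(path: str) -> np.ndarray:
    """Memory-map a .bin file of token IDs without loading it into RAM."""
    return np.memmap(path, dtype=TOKEN_DTYPE, mode="r")

# Round-trip demo on a tiny synthetic file (the real files are far larger):
ids = np.array([31373, 995, 0], dtype=TOKEN_DTYPE)
ids.tofile("sample.bin")
tokens = load_tokens("sample.bin")
```

`np.memmap` keeps the multi-hundred-gigabyte files on disk and pages in only the slices you index.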
Due to file size constraints, the C4 and CommonCrawl subsets have been uploaded in multiple chunks. Use the following commands to merge them back into single files:

```shell
cat C4_part_* > C4.bin
cat CommonCrawl_part_* > CommonCrawl.bin
```
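On systems without `cat`, the same merge can be done portably in Python. This is a sketch equivalent to the shell commands above; `merge_parts` is an illustrative helper, not part of the dataset release:

```python
import glob
import shutil

def merge_parts(pattern: str, out_path: str) -> None:
    """Concatenate chunk files matching `pattern` (in sorted order)
    into a single file, equivalent to `cat C4_part_* > C4.bin`."""
    with open(out_path, "wb") as out:
        for part in sorted(glob.glob(pattern)):
            with open(part, "rb") as f:
                shutil.copyfileobj(f, out)

# Example: merge_parts("C4_part_*", "C4.bin")
```

Sorting the glob matches reproduces the shell's lexicographic expansion, so the chunks are stitched together in the intended order.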
## Data Distribution
| Data source | Composition |
|---|---|
| CommonCrawl | 0.5208 |
| C4 | 0.2668 |
| GitHub | 0.0522 |
| Books | 0.0420 |
| ArXiv | 0.0442 |
| Wikipedia | 0.0399 |
| StackExchange | 0.0337 |