---
license: apache-2.0
---
# Dataset Card for TinyCorpus
<!-- Provide a quick summary of the dataset. -->
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** Lucas S
- **Language(s) (NLP):** English
- **License:** apache-2.0
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
This dataset comprises approximately 250 billion bytes of high-quality text. It is intended for training very small (<50M parameter) byte-level English language models for educational purposes, and for quickly iterating on experimental LLM architectures.
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
The text is UTF-8 encoded, with `\x02` (STX) and `\x03` (ETX) serving as the start and end special bytes, respectively.
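A minimal sketch of this framing scheme, assuming one document per STX/ETX pair (the helper names below are illustrative, not part of the dataset tooling):

```python
# Wrap a UTF-8 document with the STX/ETX framing bytes described above.
STX = b"\x02"  # start-of-text special byte
ETX = b"\x03"  # end-of-text special byte

def frame(text: str) -> bytes:
    """Encode a document as UTF-8 and add the start/end special bytes."""
    return STX + text.encode("utf-8") + ETX

def unframe(data: bytes) -> str:
    """Strip the framing bytes and decode back to a string."""
    if not (data.startswith(STX) and data.endswith(ETX)):
        raise ValueError("document is not STX/ETX framed")
    return data[1:-1].decode("utf-8")

doc = frame("Hello, TinyCorpus!")
```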
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
There is a lack of modern pretraining mixtures for very small byte-level language models.
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
- 55% FineWeb Edu
- 35% DCLM Edu
- 10% FineMath
- 1% Filtered StarCoderData Python (educational score ≥ 3, as in the SmolLM Corpus)
- 0.1% WritingPrompts Curated
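One way to reproduce a mixture like the one above is weighted sampling of the source for each document; a minimal sketch (the source keys are hypothetical labels, and `random.choices` handles unnormalized weights):

```python
import random

# Mixture weights from the source list above (percentages, unnormalized).
MIXTURE = {
    "fineweb_edu": 55.0,
    "dclm_edu": 35.0,
    "finemath": 10.0,
    "starcoderdata_python": 1.0,
    "writingprompts_curated": 0.1,
}

def sample_source(rng: random.Random) -> str:
    """Pick the source of the next document, proportional to its weight."""
    return rng.choices(list(MIXTURE), weights=list(MIXTURE.values()), k=1)[0]

# Draw many samples and count how often each source is chosen.
rng = random.Random(0)
counts = {name: 0 for name in MIXTURE}
for _ in range(10_000):
    counts[sample_source(rng)] += 1
```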