|
|
--- |
|
|
license: apache-2.0 |
|
|
--- |
|
|
|
|
|
# Dataset Card for TinyCorpus |
|
|
|
|
|
<!-- Provide a quick summary of the dataset. --> |
|
|
|
|
|
## Dataset Details |
|
|
|
|
|
### Dataset Description |
|
|
|
|
|
<!-- Provide a longer summary of what this dataset is. --> |
|
|
|
|
|
- **Curated by:** Lucas S |
|
|
- **Language(s) (NLP):** English |
|
|
- **License:** apache-2.0 |
|
|
|
|
|
## Uses |
|
|
|
|
|
<!-- Address questions around how the dataset is intended to be used. --> |
|
|
This dataset comprises ~250 billion bytes of high-quality text. It is intended for training very small (<50M-parameter) byte-level English models, both for educational purposes and for quickly iterating on experimental LLM architectures.
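A minimal sketch of what byte-level "tokenization" looks like for a model trained on this corpus: every UTF-8 byte is its own token, so the vocabulary is just the 256 possible byte values (the helper names below are illustrative, not part of the dataset):

```python
# Byte-level tokenization sketch: each UTF-8 byte is one token id (0-255),
# so no learned tokenizer or merge table is needed.

def bytes_to_ids(text: str) -> list[int]:
    """Map a UTF-8 string to a sequence of byte-value token ids."""
    return list(text.encode("utf-8"))

def ids_to_text(ids: list[int]) -> str:
    """Map byte-value token ids back to text."""
    return bytes(ids).decode("utf-8", errors="replace")

ids = bytes_to_ids("Hi!")
print(ids)               # → [72, 105, 33]
print(ids_to_text(ids))  # → Hi!
```

This is why sub-50M-parameter models are practical here: the embedding table has only 256 rows, so almost all parameters go into the transformer (or experimental architecture) itself.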
|
|
|
|
|
## Dataset Structure |
|
|
|
|
|
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. --> |
|
|
|
|
|
The text is UTF-8 encoded, with \x02 (STX) and \x03 (ETX) serving as the start-of-document and end-of-document special bytes, respectively.
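The delimiter scheme above can be sketched as follows; this is an illustrative example of wrapping and recovering documents with the STX/ETX bytes, not code shipped with the dataset:

```python
# Sketch: documents are UTF-8 byte strings framed by the corpus's
# delimiter bytes, STX (\x02) at the start and ETX (\x03) at the end.

STX, ETX = b"\x02", b"\x03"

def encode_document(text: str) -> bytes:
    """Wrap a UTF-8 document in start/end delimiter bytes."""
    return STX + text.encode("utf-8") + ETX

def split_documents(stream: bytes) -> list[str]:
    """Recover individual documents from a concatenated byte stream."""
    docs = []
    for chunk in stream.split(ETX):
        chunk = chunk.lstrip(STX)  # drop the leading start byte
        if chunk:
            docs.append(chunk.decode("utf-8"))
    return docs

stream = encode_document("Hello") + encode_document("world")
print(split_documents(stream))  # → ['Hello', 'world']
```

Since STX and ETX are control bytes that do not occur in well-formed text, a byte-level model can learn them directly as document-boundary tokens.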
|
|
|
|
|
## Dataset Creation |
|
|
|
|
|
### Curation Rationale |
|
|
|
|
|
<!-- Motivation for the creation of this dataset. --> |
|
|
|
|
|
Modern, high-quality pretraining mixtures aimed at very small byte-level language models are scarce; TinyCorpus is intended to fill that gap.
|
|
|
|
|
### Source Data |
|
|
|
|
|
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). --> |
|
|
|
|
|
- 55% FineWeb Edu |
|
|
- 35% DCLM Edu |
|
|
- 10% FineMath |
|
|
- 1% Filtered StarCoderData Python (educational score ≥ 3, as in the SmolLM Corpus)
|
|
- 0.1% WritingPrompts Curated |