---
configs:
  data_files:
  - split: train
    path: "*.parquet"
license: odc-by
---
# Gemstones Training Dataset - Sequential version
[…] reproduce the training batches across the GPUs is/was to run the training code.
This repo is the result of an attempt to simulate the way in which the training code loaded the data, and to stream it out to a portable file format for use in downstream analyses of the model suite.

# Loading

This data should be loadable using `load_dataset` in the standard manner to auto-download the data.
Alternately, the dataset can be cloned using git to materialize the files locally, and then loaded
using the default `parquet` builder as described here: https://huggingface.co/docs/datasets/en/loading#parquet

# Sharding format: sequential

This version of the dataset approximates the order of the dataset _as if_ a model was being trained […]
[…] have seen all of the same rows of the dataset during training. The sync granularity […]

This linearized recreation assumes a single worker reading every row of the dataset, so at a microbatch size of 8 over packed sequences of 2048 tokens, 21,247,488 steps' worth of "training" are required to reach ~350B tokens.
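The token arithmetic above can be checked directly:

```python
# Tokens consumed per simulated "training" step by the single worker.
microbatch_size = 8      # sequences per step
sequence_length = 2048   # tokens per packed sequence
steps = 21_247_488

tokens_per_step = microbatch_size * sequence_length  # 16,384 tokens/step
total_tokens = steps * tokens_per_step
print(f"{total_tokens:,}")  # 348,118,843,392, i.e. ~350B tokens
```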

In this setup, a single worker received the total dataset, represented by the thousands of raw training-format files (for reference, this format is defined by the `packed_cycle_dataset.py` file in this repo).
The raw files were first shuffled globally, and then the single worker loaded 4 files at a time and shuffled the "blocks" of 2048 tokens each in a temporary buffer, so that the contents of the 4 packed files were not read in the exact order in which the tokens appeared in them.
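The read order just described can be sketched as follows. This is a simplified reconstruction, not the actual training loader; the file list, per-file block count, and seed are stand-ins:

```python
import random

def simulated_read_order(files, blocks_per_file, group_size=4, seed=0):
    """Sketch of the described procedure: the (already globally shuffled)
    files are opened `group_size` at a time, and the fixed-size blocks of
    each group are shuffled in a temporary buffer before being emitted."""
    rng = random.Random(seed)
    order = []
    for i in range(0, len(files), group_size):
        # Buffer every (file, block) pair from the current group of files.
        buffer = [(f, block)
                  for f in files[i:i + group_size]
                  for block in range(blocks_per_file)]
        rng.shuffle(buffer)  # mix blocks across the 4 open files
        order.extend(buffer)
    return order

# Blocks from the first group of 4 files all precede blocks from the next group,
# but within a group the block order is scrambled.
order = simulated_read_order([f"file{i}" for i in range(8)], blocks_per_file=3)
```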

This procedure, in which a single worker reads 4 files at a time whose contents (blocks of tokens) are read in a shuffled order, does not exactly match any one of the Gemstones model sets. However, the key is that the synchronization argument above still holds, and so analyses at a coarser granularity than ~8.6B tokens should be sound.

The `train_mock_data_order_file.py` script performs these operations and writes the resulting data order out to files.
Each shard, named like `ordered_dataset_shard_{shard}-of-{total_shards}.parquet` (the total number of shards is arbitrary, but 256 was chosen for portability), represents a contiguous subset of the approximated total ordering of the rows in the training dataset.
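Assuming zero-based, unpadded shard indices (the exact numbering should be checked against the repo's file listing), the full set of shard names can be generated as:

```python
total_shards = 256
shard_files = [
    f"ordered_dataset_shard_{shard}-of-{total_shards}.parquet"
    for shard in range(total_shards)
]
print(shard_files[0])  # ordered_dataset_shard_0-of-256.parquet
```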