Update README.md

README.md
- split: train
  path: data/train-*
---
# OpenWebTextCorpus tokenized for Gemma
This dataset is a pre-tokenized version of the [Skylion007/openwebtext](https://huggingface.co/datasets/Skylion007/openwebtext) dataset using the [gemma](https://huggingface.co/google/gemma-2b) tokenizer. As such, this dataset follows the same licensing as the original openwebtext dataset.
This pre-tokenization is done as a performance optimization for using the openwebtext dataset with a Gemma model (gemma-2b, gemma-2b-it, gemma-7b, gemma-7b-it).
This dataset was created using [SAELens](https://github.com/jbloomAus/SAELens), with the following settings:
- context_size: 8192
- shuffled: true
- begin_batch_token: "bos"
- begin_sequence_token: null
- sequence_separator_token: "bos"
- sae_lens_version: "3.3.0"
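
To illustrate what these settings mean, here is a minimal, self-contained sketch (plain Python, not the actual SAELens implementation) of how documents are packed into fixed-length contexts: each context starts with a `bos` token (`begin_batch_token: "bos"`), documents are joined with a `bos` separator (`sequence_separator_token: "bos"`), and no extra token is prepended to each source document (`begin_sequence_token: null`). The function name and the toy token ids are hypothetical; `2` is Gemma's actual `<bos>` id.

```python
BOS = 2  # Gemma's <bos> token id


def pack_contexts(token_seqs, context_size):
    """Pack tokenized documents into fixed-length contexts.

    Documents are concatenated into one stream, joined with a BOS
    separator; each emitted context is prefixed with BOS. A final
    chunk shorter than context_size is dropped.
    """
    stream = []
    for i, seq in enumerate(token_seqs):
        if i > 0:
            stream.append(BOS)  # sequence_separator_token: "bos"
        stream.extend(seq)

    contexts = []
    # Reserve one slot per context for the leading BOS token.
    step = context_size - 1
    for start in range(0, len(stream), step):
        chunk = stream[start:start + step]
        if len(chunk) == step:
            contexts.append([BOS] + chunk)  # begin_batch_token: "bos"
    return contexts


# Toy example with context_size=4 instead of the dataset's 8192:
packed = pack_contexts([[10, 11, 12], [20, 21]], context_size=4)
# Each packed context begins with BOS, and the second document is
# separated from the first by a BOS token inside the stream.
```

Shuffling (`shuffled: true`) happens at the document level before packing, so consecutive contexts in the released dataset do not correspond to consecutive openwebtext documents.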