---
configs:
- config_name: bytelevel
default: true
data_files:
- split: train
path: bytelevel2/*.parquet
- config_name: bytelevel-llm-data
data_files:
- split: fw57M
path: bytelevel-llm-data/fw57M/fw57M-*
- split: ngram
path: bytelevel-llm-data/ngram/ngram-*
- config_name: bytelevel-subset
data_files:
- split: train
path: bytelevel-subset/train-*
- config_name: bytelevel-subset_1
data_files:
- split: train
path: bytelevel-subset_1/train-*
- config_name: bytelevel-subset_2
data_files:
- split: train
path: bytelevel-subset_2/train-*
- config_name: BPE_64000
data_files:
- split: train
path: BPE_64000/*.parquet
- config_name: ByteSpanSurprisalCombinedFrequency_64000
data_files:
- split: train
path: ByteSpanSurprisalCombinedFrequency_64000/*.parquet
- config_name: ByteSpanSurprisalMonotonicFrequency_64000
data_files:
- split: train
path: ByteSpanSurprisalMonotonicFrequency_64000/*.parquet
- config_name: ByteSpanSurprisalMonotonicSeeding_64000
data_files:
- split: train
path: ByteSpanSurprisalMonotonicSeeding_64000/*.parquet
- config_name: ByteSpanSurprisalCombinedSeeding_64000
data_files:
- split: train
path: ByteSpanSurprisalCombinedSeeding_64000/*.parquet
- config_name: ByteSpanSurprisalGlobalIncrement_64000
data_files:
- split: train
path: ByteSpanSurprisalGlobalIncrement_64000/*.parquet
- config_name: BPEWP_64000
data_files:
- split: train
path: BPEWP_64000/*.parquet
language:
- en
tags:
- language modeling
pretty_name: FineWebEDU 20B
size_categories:
- 10B<n<100B
---
# FineWebEDU 20B
A copy of [FineWebEDU-20B](https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu) used for our tokenizer experiments. The subsets are as follows:
- `bytelevel`: the full dataset tokenized with our byte-level tokenizer.
- `bytelevel-subset_1`: a 100k-row subset of `bytelevel`, used to train byte-level models.
- `bytelevel-subset_2`: a 100k-row subset of `bytelevel`, used to extract LLM predictions.
- `bytelevel-llm-data`: a copy of `bytelevel-subset_2` with the LLM predictions attached, used to train ByteSpan tokenizers.
- `bytelevel-subset_3`: a 100k-row subset of `bytelevel`, used to evaluate the trained tokenizers.
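
Each subset above is exposed as a dataset config, so it can be loaded by name with the `datasets` library. A minimal sketch (the repo ID below is a placeholder, not this dataset's actual path on the Hub):

```python
from datasets import load_dataset

# Placeholder repo ID; substitute this dataset's actual path on the Hub.
REPO_ID = "your-org/finewebedu-20b"

# Most configs expose a single `train` split.
train_subset = load_dataset(REPO_ID, name="bytelevel-subset_1", split="train")

# `bytelevel-llm-data` instead exposes two splits, `fw57M` and `ngram`.
llm_data = load_dataset(REPO_ID, name="bytelevel-llm-data", split="ngram")
```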
The remaining subsets are all versions of the dataset tokenized with our trained tokenizers:
- `BPE_64000`
- `BPEWP_64000`
- `ByteSpanSurprisalMonotonicFrequency_64000`
- `ByteSpanSurprisalMonotonicSeeding_64000`
- `ByteSpanSurprisalCombinedFrequency_64000`
- `ByteSpanSurprisalCombinedSeeding_64000`
- `ByteSpanSurprisalGlobalIncrement_64000`
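
These variants are likewise separate configs, each with a single `train` split. Since each one covers the full 20B-token corpus, streaming may be preferable to downloading all the parquet shards up front; a sketch under the same placeholder repo ID:

```python
from datasets import load_dataset

# Placeholder repo ID; substitute this dataset's actual path on the Hub.
REPO_ID = "your-org/finewebedu-20b"

# Stream a tokenized variant rather than materializing it on disk.
bpe = load_dataset(REPO_ID, name="BPE_64000", split="train", streaming=True)
for example in bpe.take(3):
    print(example)
```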