---
configs:
- config_name: bytelevel
  default: true
  data_files:
  - split: train
    path: bytelevel2/*.parquet
- config_name: bytelevel-llm-data
  data_files:
  - split: fw57M
    path: bytelevel-llm-data/fw57M/fw57M-*
  - split: ngram
    path: bytelevel-llm-data/ngram/ngram-*
- config_name: bytelevel-subset
  data_files:
  - split: train
    path: bytelevel-subset/train-*
- config_name: bytelevel-subset_1
  data_files:
  - split: train
    path: bytelevel-subset_1/train-*
- config_name: bytelevel-subset_2
  data_files:
  - split: train
    path: bytelevel-subset_2/train-*
- config_name: BPE_64000
  data_files:
  - split: train
    path: BPE_64000/*.parquet
- config_name: ByteSpanSurprisalCombinedFrequency_64000
  data_files:
  - split: train
    path: ByteSpanSurprisalCombinedFrequency_64000/*.parquet
- config_name: ByteSpanSurprisalMonotonicFrequency_64000
  data_files:
  - split: train
    path: ByteSpanSurprisalMonotonicFrequency_64000/*.parquet
- config_name: ByteSpanSurprisalMonotonicSeeding_64000
  data_files:
  - split: train
    path: ByteSpanSurprisalMonotonicSeeding_64000/*.parquet
- config_name: ByteSpanSurprisalCombinedSeeding_64000
  data_files:
  - split: train
    path: ByteSpanSurprisalCombinedSeeding_64000/*.parquet
- config_name: ByteSpanSurprisalGlobalIncrement_64000
  data_files:
  - split: train
    path: ByteSpanSurprisalGlobalIncrement_64000/*.parquet
- config_name: BPEWP_64000
  data_files:
  - split: train
    path: BPEWP_64000/*.parquet
language:
- en
tags:
- language modeling
pretty_name: FineWebEDU 20B
size_categories:
- 10B<n<100B
---
# FineWebEDU 20B
A copy of FineWebEDU-20B used for our tokenizer experiments. The subsets are as follows:
- `bytelevel`: the full dataset tokenized using our byte-level tokenizer
- `bytelevel-subset_1`: a 100k-row subset of the `bytelevel` subset, used to train byte-level models
- `bytelevel-subset_2`: a 100k-row subset of the `bytelevel` subset, used to extract LLM predictions
- `bytelevel-llm-data`: a copy of `bytelevel-subset_2` with LM predictions, used to train bytespan tokenizers
- `bytelevel-subset_3`: a 100k-row subset of the `bytelevel` subset, used to evaluate trained tokenizers
The remaining subsets are all versions of the dataset tokenized with our trained tokenizers:
- `BPE_64000`
- `BPEWP_64000`
- `ByteSpanSurprisalMonotonicFrequency_64000`
- `ByteSpanSurprisalMonotonicSeeding_64000`
- `ByteSpanSurprisalCombinedFrequency_64000`
- `ByteSpanSurprisalCombinedSeeding_64000`
- `ByteSpanSurprisalGlobalIncrement_64000`
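Any of the configs above can be loaded by name with the 🤗 `datasets` library. A minimal sketch follows; the repo id `user/FineWebEDU-20B` is a placeholder for this dataset's actual Hub path, and streaming is used to avoid downloading the full 20B-token dump:

```python
def load_config(config_name: str = "bytelevel", split: str = "train",
                streaming: bool = True):
    """Load one tokenized variant of the dataset by its config name.

    NOTE: "user/FineWebEDU-20B" is a placeholder repo id -- substitute
    the dataset's real Hugging Face path before running.
    """
    from datasets import load_dataset  # pip install datasets
    return load_dataset("user/FineWebEDU-20B", config_name,
                        split=split, streaming=streaming)

# e.g. the default byte-level variant, or a trained-tokenizer variant:
# ds = load_config("bytelevel")
# ds = load_config("BPE_64000")
# Note: bytelevel-llm-data has splits "fw57M" and "ngram", not "train":
# ds = load_config("bytelevel-llm-data", split="fw57M")
```

Passing `streaming=False` instead materializes the selected config's parquet shards on disk.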