---
configs:
  - config_name: bytelevel
    default: true
    data_files:
      - split: train
        path: bytelevel2/*.parquet
  - config_name: bytelevel-llm-data
    data_files:
      - split: fw57M
        path: bytelevel-llm-data/fw57M/fw57M-*
      - split: ngram
        path: bytelevel-llm-data/ngram/ngram-*
  - config_name: bytelevel-subset
    data_files:
      - split: train
        path: bytelevel-subset/train-*
  - config_name: bytelevel-subset_1
    data_files:
      - split: train
        path: bytelevel-subset_1/train-*
  - config_name: bytelevel-subset_2
    data_files:
      - split: train
        path: bytelevel-subset_2/train-*
  - config_name: BPE_64000
    data_files:
      - split: train
        path: BPE_64000/*.parquet
  - config_name: ByteSpanSurprisalCombinedFrequency_64000
    data_files:
      - split: train
        path: ByteSpanSurprisalCombinedFrequency_64000/*.parquet
  - config_name: ByteSpanSurprisalMonotonicFrequency_64000
    data_files:
      - split: train
        path: ByteSpanSurprisalMonotonicFrequency_64000/*.parquet
  - config_name: ByteSpanSurprisalMonotonicSeeding_64000
    data_files:
      - split: train
        path: ByteSpanSurprisalMonotonicSeeding_64000/*.parquet
  - config_name: ByteSpanSurprisalCombinedSeeding_64000
    data_files:
      - split: train
        path: ByteSpanSurprisalCombinedSeeding_64000/*.parquet
  - config_name: ByteSpanSurprisalGlobalIncrement_64000
    data_files:
      - split: train
        path: ByteSpanSurprisalGlobalIncrement_64000/*.parquet
  - config_name: BPEWP_64000
    data_files:
      - split: train
        path: BPEWP_64000/*.parquet
language:
  - en
tags:
  - language modeling
pretty_name: FineWebEDU 20B
size_categories:
  - 10B<n<100B
---

# FineWebEDU 20B

A copy of FineWebEDU-20B used for our tokenizer experiments. The subsets are as follows:

- `bytelevel`: the full dataset tokenized with our byte-level tokenizer.
- `bytelevel-subset_1`: a 100k-row subset of `bytelevel`, used to train byte-level models.
- `bytelevel-subset_2`: a 100k-row subset of `bytelevel`, used to extract LLM predictions.
- `bytelevel-llm-data`: a copy of `bytelevel-subset_2` with LM predictions, used to train ByteSpan tokenizers.
- `bytelevel-subset_3`: a 100k-row subset of `bytelevel`, used to evaluate trained tokenizers.

The remaining subsets are all versions of the dataset tokenized with our trained tokenizers:

- `BPE_64000`
- `BPEWP_64000`
- `ByteSpanSurprisalMonotonicFrequency_64000`
- `ByteSpanSurprisalMonotonicSeeding_64000`
- `ByteSpanSurprisalCombinedFrequency_64000`
- `ByteSpanSurprisalCombinedSeeding_64000`
- `ByteSpanSurprisalGlobalIncrement_64000`
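As a minimal sketch of how one of these configs can be loaded by name (assuming the `datasets` library is installed; the repo id below is taken from the page header and may need adjusting):

```python
def load_subset(config_name: str, split: str = "train"):
    """Stream one config of this dataset without downloading it in full.

    The repo id is an assumption based on the page header.
    """
    from datasets import load_dataset  # pip install datasets

    return load_dataset(
        "codebyzeb/finewebedu-20B",
        config_name,
        split=split,
        streaming=True,
    )


# Config names for the tokenized versions listed above.
TOKENIZED_CONFIGS = [
    "BPE_64000",
    "BPEWP_64000",
    "ByteSpanSurprisalMonotonicFrequency_64000",
    "ByteSpanSurprisalMonotonicSeeding_64000",
    "ByteSpanSurprisalCombinedFrequency_64000",
    "ByteSpanSurprisalCombinedSeeding_64000",
    "ByteSpanSurprisalGlobalIncrement_64000",
]
```

For example, `next(iter(load_subset("bytelevel")))` fetches the first row of the default byte-level config.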