---
dataset_info:
  features:
    - name: text
      dtype: string
  splits:
    - name: train
      num_bytes: 5485226321
      num_examples: 1000000
  download_size: 3353329992
  dataset_size: 5485226321
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
license: odc-by
task_categories:
  - text-generation
  - fill-mask
  - feature-extraction
language:
  - en
size_categories:
  - 100K<n<1M
---

# fineweb "longish" 1M

1M samples, drawn with a random seed different from previous sample sets. Documents were kept only if they satisfy the length bounds below (see the filtering sketch after the list):

- min 512 GPT-4 tiktoken tokens
- max 8192 GPT-4 tiktoken tokens
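
A minimal sketch of how such a length filter could be applied with `tiktoken` and `datasets`; the source repo name, split, and streaming setup are assumptions, not the exact script used to build this dataset:

```python
# Sketch: keep only FineWeb documents with 512-8192 GPT-4 tiktoken tokens.
import tiktoken
from datasets import load_dataset

enc = tiktoken.encoding_for_model("gpt-4")  # cl100k_base encoding


def in_range(example, lo=512, hi=8192):
    """Keep documents whose GPT-4 token count falls in [lo, hi]."""
    n_tokens = len(enc.encode(example["text"]))
    return lo <= n_tokens <= hi


# Streaming avoids downloading the full corpus up front (assumed source repo).
ds = load_dataset("HuggingFaceFW/fineweb", split="train", streaming=True)
filtered = ds.filter(in_range)
```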

`BEE-spoke-data/claude-tokenizer` token count statistics (a reproduction sketch follows the summary):

```text
          token_count
count  1000000.000000
mean      1218.231641
std        935.733312
min        139.000000
25%        683.000000
50%        905.000000
75%       1350.000000
max       9550.000000
```

- Total count: 1218.23 M tokens
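
A minimal sketch of how these statistics could be recomputed with `transformers` and `pandas`; the dataset repo ID and counting settings (e.g. `add_special_tokens=False`) are assumptions:

```python
# Sketch: recompute the token-count summary with the claude tokenizer.
import pandas as pd
from datasets import load_dataset
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("BEE-spoke-data/claude-tokenizer")
ds = load_dataset("pszemraj/fineweb-1M_longish", split="train")


def count_tokens(batch):
    # Tokenize each document and count ids, without adding special tokens.
    enc = tok(batch["text"], add_special_tokens=False)
    return {"token_count": [len(ids) for ids in enc["input_ids"]]}


ds = ds.map(count_tokens, batched=True, remove_columns=ds.column_names)
print(pd.Series(ds["token_count"]).describe())
print(f"Total: {sum(ds['token_count']) / 1e6:.2f} M tokens")
```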