---
dataset_info:
  features:
    - name: input_ids
      sequence: int32
  splits:
    - name: train
      num_bytes: 34722066732
      num_examples: 67290827
  download_size: 19759572290
  dataset_size: 34722066732
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
---

The OpenWebText dataset (an open-source replication of the WebText corpus OpenAI used to train GPT-2), tokenized for the Llama 3.2 models.

Useful for accelerated training and testing of sparse autoencoders.
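A minimal sketch of consuming the dataset for training. The Hub repo id in the comment is a placeholder (substitute this dataset's actual path); the helper below simply stacks pre-tokenized rows, which all share the same fixed-length schema, into an int32 batch array:

```python
import numpy as np

# Placeholder repo id -- replace with this dataset's actual Hub path:
# from datasets import load_dataset
# ds = load_dataset("GulkoA/<this-dataset>", split="train", streaming=True)

def batch_input_ids(rows, context_size=128):
    """Stack pre-tokenized rows into an int32 (batch, context_size) array."""
    arr = np.asarray([r["input_ids"] for r in rows], dtype=np.int32)
    # Every row in this dataset is already exactly `context_size` tokens,
    # so no padding or truncation is needed.
    assert arr.shape[1] == context_size
    return arr

# Stand-in rows with the same schema as the dataset (sequence of int32).
rows = [{"input_ids": list(range(128))} for _ in range(4)]
batch = batch_input_ids(rows)
print(batch.shape)  # (4, 128)
```

Because the tokenization is already done, batches like this can be fed straight into a Llama 3.2 model's embedding layer to harvest activations.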

Context size: 128 tokens; the examples are not shuffled.
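As a sanity check, the split sizes in the metadata above are self-consistent with the stated context size: each example works out to exactly 516 bytes, matching 128 int32 tokens (512 bytes) plus a small fixed remainder (assumed here to be per-example storage overhead):

```python
num_bytes = 34_722_066_732    # from dataset_info above
num_examples = 67_290_827     # from dataset_info above

# Bytes divide evenly across examples, as expected for fixed-length rows.
assert num_bytes % num_examples == 0
per_example = num_bytes // num_examples
print(per_example)  # 516
print(128 * 4)      # 512 bytes of int32 token data per 128-token context
```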