---
dataset_info:
  features:
    - name: text
      dtype: string
    - name: meta
      dtype: string
  splits:
    - name: train
      num_bytes: 52428748
      num_examples: 100
  download_size: 20492289
  dataset_size: 52428748
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
---

Documents from [Proof Pile 2](https://huggingface.co/datasets/EleutherAI/proof-pile-2) with over 128K tokens each, as measured by the [starmie-v1](https://huggingface.co/moondream/starmie-v1) tokenizer. The intended use is measuring long-context cross-entropy.
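The card does not include evaluation code, so here is a minimal sketch of how per-token cross-entropy might be computed from a model's logits over one of these long documents. The function name, batch shapes, and vocabulary size are illustrative assumptions, not part of this dataset; in practice the logits would come from running a language model over the tokenized `text` field.

```python
import torch
import torch.nn.functional as F

def long_context_cross_entropy(logits: torch.Tensor, input_ids: torch.Tensor) -> float:
    """Mean next-token cross-entropy (nats) over a [batch, seq, vocab] logit tensor."""
    # Token at position t is predicted from positions < t, so shift by one.
    shift_logits = logits[:, :-1, :]
    shift_labels = input_ids[:, 1:]
    loss = F.cross_entropy(
        shift_logits.reshape(-1, shift_logits.size(-1)),
        shift_labels.reshape(-1),
    )
    return loss.item()

# Toy check with random logits over a vocabulary of 8 symbols;
# real usage would substitute model outputs for a full document.
logits = torch.randn(1, 16, 8)
ids = torch.randint(0, 8, (1, 16))
loss = long_context_cross_entropy(logits, ids)
```

For a model with a context window shorter than a document, the usual approach is to evaluate in a sliding or strided window and average the per-token losses; the sketch above assumes the whole sequence fits in one forward pass.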