---
dataset_info:
  features:
    - name: input_ids
      sequence: int32
  splits:
    - name: train
      num_bytes: 1083419904
      num_examples: 27352
  download_size: 487720508
  dataset_size: 1083419904
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
task_categories:
  - text-generation
---

This repository contains the PG-19 training dataset used in the paper [From Hours to Minutes: Lossless Acceleration of Ultra Long Sequence Generation up to 100K Tokens](https://hf.co/papers/2502.18890). The data is filtered to sequences longer than 8K tokens, with sequence length measured under the respective tokenizers.

Code: https://github.com/bigai-nlco/TokenSwift
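As a quick sanity check, the snippet below loads the dataset with the Hugging Face `datasets` library and inspects sequence lengths; since `input_ids` is already tokenized, token counts are just list lengths. This is a minimal sketch: `"<this-repo-id>"` is a placeholder for this repository's actual Hub id.

```python
from datasets import load_dataset

# Load the pre-tokenized train split (features: input_ids as a sequence of int32).
# NOTE: "<this-repo-id>" is a placeholder; replace it with this repository's Hub id.
ds = load_dataset("<this-repo-id>", split="train")

print(ds.num_rows)                # expected: 27352
print(len(ds[0]["input_ids"]))    # token count of the first sequence

# Spot-check the 8K-token length criterion described above on a small sample.
sample_lengths = [len(ds[i]["input_ids"]) for i in range(100)]
print(min(sample_lengths), max(sample_lengths))
```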