---
dataset_info:
  features:
  - name: input_ids
    sequence: int32
  splits:
  - name: train
    num_bytes: 1076886272
    num_examples: 27332
  download_size: 487651276
  dataset_size: 1076886272
task_categories:
- text-generation
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
This dataset contains the data used in the paper [From Hours to Minutes: Lossless Acceleration of Ultra Long Sequence Generation up to 100K Tokens](https://huggingface.co/papers/2502.18890).

Code: https://github.com/bigai-nlco/TokenSwift