---
dataset_info:
features:
- name: input_ids
sequence: int32
splits:
- name: train
num_bytes: 1076886272
num_examples: 27332
download_size: 487651276
dataset_size: 1076886272
task_categories:
- text-generation
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
This dataset contains the tokenized data (a single `input_ids` feature holding sequences of int32 token ids) used in the paper [From Hours to Minutes: Lossless Acceleration of Ultra Long Sequence Generation up to 100K Tokens](https://huggingface.co/papers/2502.18890).
Code: https://github.com/bigai-nlco/TokenSwift