Improve dataset card: add description, link to paper and code
#1
by nielsr (HF Staff) - opened

README.md CHANGED
@@ -14,4 +14,11 @@ configs:
   data_files:
   - split: train
     path: data/train-*
+task_categories:
+- text-generation
 ---
+
+This repository contains the PG-19 training dataset, used in the paper [From Hours to Minutes: Lossless Acceleration of Ultra Long Sequence Generation up to 100K Tokens](https://hf.co/papers/2502.18890).
+Data longer than 8K tokens is filtered out according to different tokenizers.
+
+Code: https://github.com/bigai-nlco/TokenSwift
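The 8K-token filter the card mentions could be sketched as follows. This is a minimal illustration, not the authors' code: the whitespace tokenizer is a stand-in for the model-specific tokenizers actually used, and `MAX_TOKENS`, `num_tokens`, and `filter_by_length` are hypothetical names.

```python
# Sketch of a per-sample token-length filter (assumption: the real
# pipeline counts tokens with each model's own tokenizer; a simple
# whitespace split stands in for that here).
MAX_TOKENS = 8 * 1024  # 8K-token cutoff


def num_tokens(text: str) -> int:
    """Count tokens using a whitespace stand-in tokenizer."""
    return len(text.split())


def filter_by_length(samples: list[str], max_tokens: int = MAX_TOKENS) -> list[str]:
    """Keep only samples that fit within the token budget."""
    return [s for s in samples if num_tokens(s) <= max_tokens]


if __name__ == "__main__":
    docs = ["a short sample", "word " * 10_000]  # second exceeds 8K tokens
    kept = filter_by_length(docs)
    print(len(kept))  # → 1
```

Since token counts differ between tokenizers, running this filter with a different tokenizer would keep a slightly different subset of PG-19.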