---
dataset_info:
  features:
    - name: input_ids
      sequence: int32
  splits:
    - name: train
      num_bytes: 1083419904
      num_examples: 27352
  download_size: 487720508
  dataset_size: 1083419904
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
task_categories:
  - text-generation
---

This repository contains the PG-19 training dataset used in the paper *From Hours to Minutes: Lossless Acceleration of Ultra Long Sequence Generation up to 100K Tokens*. Data larger than 8K tokens is filtered out, with sequence lengths measured under different tokenizers.
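The filtering step above can be sketched as a simple length check over tokenized examples. This is a minimal illustration, not the TokenSwift preprocessing code: the `tokenize` stand-in and the exact threshold semantics are assumptions, and the real pipeline applies the check per tokenizer.

```python
# Minimal sketch of an 8K-token length filter over examples shaped like
# this dataset's rows ({"input_ids": [...]}).  Hypothetical helper, not
# the actual TokenSwift preprocessing script.
MAX_TOKENS = 8 * 1024  # 8K-token threshold from the card description


def filter_by_length(examples, max_tokens=MAX_TOKENS):
    """Drop examples whose token sequence exceeds max_tokens."""
    return [ex for ex in examples if len(ex["input_ids"]) <= max_tokens]


if __name__ == "__main__":
    examples = [
        {"input_ids": list(range(100))},     # kept: 100 tokens
        {"input_ids": list(range(10_000))},  # dropped: > 8192 tokens
    ]
    kept = filter_by_length(examples)
    print(len(kept))  # 1
```

Because token counts differ per tokenizer, the same text can pass the threshold under one tokenizer and fail under another, which is why the filtering is described as tokenizer-dependent.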

Code: https://github.com/bigai-nlco/TokenSwift