---
language:
  - en
task_categories:
  - text-generation
tags:
  - long-context
---

# prolong-data-64K

This dataset was used in the paper *CoPE: Clipped RoPE as A Scalable Free Lunch for Long Context LLMs*. It is based on the `princeton-nlp/prolong-data-64K` dataset.

## Overview

CoPE (Clipped RoPE) is a plug-and-play enhancement of RoPE that softly clips unstable low-frequency components, delivering consistent gains both within the training context and during long-context extrapolation. This dataset was used for continued pre-training and supervised fine-tuning (SFT) to scale models (starting from Llama-3-8B) to a 64K context length.
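To make the clipping idea concrete, here is a minimal illustrative sketch in NumPy. It computes the standard RoPE inverse-frequency spectrum and floors its lowest (slowest-rotating) frequencies, which are the components that become unstable when extrapolating beyond the training context. This is an assumption-laden toy, not the paper's exact formulation: the `min_freq` threshold and the hard floor (rather than a soft clip) are hypothetical choices for illustration.

```python
import numpy as np

def rope_inv_freq(dim: int, base: float = 10000.0) -> np.ndarray:
    # Standard RoPE inverse frequencies: base^(-2i/dim), one per rotated pair of dims.
    return base ** (-np.arange(0, dim, 2) / dim)

def clipped_inv_freq(dim: int, base: float = 10000.0, min_freq: float = 1e-3) -> np.ndarray:
    # Illustrative clipping (hypothetical threshold): floor the low-frequency tail,
    # the part of the spectrum most unstable under long-context extrapolation.
    # CoPE's actual soft-clipping rule may differ; see the paper for details.
    return np.maximum(rope_inv_freq(dim, base), min_freq)

freqs = rope_inv_freq(128)
clipped = clipped_inv_freq(128)
```

The high-frequency components are untouched; only the tail below the threshold is modified, which is what makes the change a drop-in ("free lunch") modification to an existing RoPE implementation.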

## Usage

According to the GitHub README, the data can be downloaded using:

```shell
git clone https://huggingface.co/datasets/haoranli-ml/prolong-data-64K datasets/long-context-65536
```

## Citation

```bibtex
@article{li2026cope,
  title={CoPE: Clipped RoPE as A Scalable Free Lunch for Long Context LLMs},
  author={Li, Haoran and Ren, Sucheng and Yuille, Alan and Wang, Feng},
  journal={arXiv preprint arXiv:2602.05258},
  year={2026}
}
```