---
license: apache-2.0
task_categories:
  - text-generation
---

# Prolong_64K_v2_Llama2_Tokenizer

This is the Prolong_64K dataset, tokenized with the `Llama-2-7b-hf` tokenizer for use in Samba-style training.

This dataset was used in the research paper: Rethinking Language Model Scaling under Transferable Hypersphere Optimization.

The official training codebase is available at [microsoft/ArchScale](https://github.com/microsoft/ArchScale).

## Download

👉 Download and unzip the dataset from [prolong_64K_v2.zip](https://huggingface.co/datasets/jsun/Prolong_64K_v2_Llama2_Tokenizer/resolve/main/prolong_64K_v2.zip):

```bash
wget -c https://huggingface.co/datasets/jsun/Prolong_64K_v2_Llama2_Tokenizer/resolve/main/prolong_64K_v2.zip -O prolong_64K_v2.zip
sudo apt install unzip  # Ubuntu; provides the unzip command
unzip prolong_64K_v2.zip -d prolong_64K_v2
```

## Usage

Once extracted, the dataset can be loaded with the `PackedDataset` class from the Samba/ArchScale codebase.
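As a rough sketch (assuming the lit-gpt-style `PackedDataset` interface that the Samba codebase builds on — the import path, argument names, and shard file pattern may differ in your ArchScale checkout), loading the extracted shards for training might look like:

```python
# Sketch only: assumes a Samba/ArchScale checkout on PYTHONPATH; consult the
# repository's training scripts for the exact import path and signature.
import glob

from torch.utils.data import DataLoader

from lit_gpt.packed_dataset import PackedDataset  # path is an assumption

# Tokenized shards produced by unzipping prolong_64K_v2.zip
filenames = sorted(glob.glob("prolong_64K_v2/*.bin"))

dataset = PackedDataset(
    filenames,
    n_chunks=4,        # shards buffered in memory at once
    block_size=65536,  # 64K-token training context (exact value may differ)
    shuffle=True,
    seed=42,
)
loader = DataLoader(dataset, batch_size=1, num_workers=2)

for batch in loader:
    # batch: LongTensor of Llama-2 token ids, shape (1, block_size)
    print(batch.shape)
    break
```

The dataset is an iterable stream, so there is no `__len__`; the training loop decides how many steps to draw from it.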

Example training scripts utilizing this data format are provided in the ArchScale repository.