nielsr (HF Staff) committed
Commit 8d2a409 · verified · 1 parent: 8093de7

Link paper and GitHub repository, add task category

Hi! I'm Niels from the community science team at Hugging Face. I'm opening this PR to improve the dataset card:
- Added `task_categories: [text-generation]` to the metadata.
- Linked the dataset to its corresponding paper: [Rethinking Language Model Scaling under Transferable Hypersphere Optimization](https://huggingface.co/papers/2603.28743).
- Added a link to the official GitHub repository: [microsoft/ArchScale](https://github.com/microsoft/ArchScale).

Files changed (1): README.md (+14, −2)
README.md CHANGED

@@ -1,12 +1,19 @@
 ---
 license: apache-2.0
+task_categories:
+- text-generation
 ---
 
-
 # Prolong_64K_v2_Llama2_Tokenizer
 
 This is the Prolong_64K dataset, tokenized using the [Llama-2-7b-hf tokenizer](https://github.com/microsoft/Samba/blob/main/scripts/prepare_slimpajama.py#L22) for use in Samba-style training.
 
+This dataset was used in the research paper: [Rethinking Language Model Scaling under Transferable Hypersphere Optimization](https://huggingface.co/papers/2603.28743).
+
+The official training codebase can be found at [GitHub - microsoft/ArchScale](https://github.com/microsoft/ArchScale).
+
+## Download
+
 👉 You can download and unzip the dataset from: [prolong_64K_v2.zip](https://huggingface.co/datasets/jsun/Prolong_64K_v2_Llama2_Tokenizer/blob/main/prolong_64K_v2.zip)
 
 ```bash
@@ -14,4 +21,9 @@ wget -c https://huggingface.co/datasets/jsun/Prolong_64K_v2_Llama2_Tokenizer/res
 sudo apt install zip # Ubuntu
 unzip prolong_64K_v2.zip -d prolong_64K_v2
 ```
-Once extracted, the dataset can be loaded using the [PackedDataset](https://github.com/microsoft/Samba/blob/383c016f2fb20ce75eed777761e1a4456c87b2b0/lit_gpt/packed_dataset.py#L33) class from the Samba codebase.
+
+## Usage
+
+Once extracted, the dataset can be loaded using the [PackedDataset](https://github.com/microsoft/Samba/blob/383c016f2fb20ce75eed777761e1a4456c87b2b0/lit_gpt/packed_dataset.py#L33) class from the Samba/ArchScale codebase.
+
+Example training scripts utilizing this data format are provided in the [ArchScale repository](https://github.com/microsoft/ArchScale).
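
For orientation, the `PackedDataset` class linked in the card consumes pre-tokenized binary shards. Below is a minimal, hypothetical sketch of that data layout — a flat array of token ids sliced into fixed-length blocks. The `iter_blocks` helper and the `demo_shard.bin` filename are illustrative inventions, not part of the Samba/ArchScale codebase; the real `.bin` files also carry a small header, and the real class additionally handles shuffling and multi-process sharding.

```python
import numpy as np

def iter_blocks(path, block_size, dtype=np.uint16):
    """Yield fixed-length token blocks from a flat binary token file.

    Illustrative only: real Samba/lit_gpt shards include a header and
    are read by PackedDataset, not by this helper.
    """
    tokens = np.memmap(path, dtype=dtype, mode="r")
    n_blocks = len(tokens) // block_size  # drop any trailing partial block
    for i in range(n_blocks):
        yield tokens[i * block_size : (i + 1) * block_size]

# Demo with synthetic token ids standing in for an extracted shard.
demo = np.arange(10, dtype=np.uint16)
demo.tofile("demo_shard.bin")

blocks = [b.tolist() for b in iter_blocks("demo_shard.bin", block_size=4)]
print(blocks)  # [[0, 1, 2, 3], [4, 5, 6, 7]]
```

For actual training, use `PackedDataset` from the linked codebase instead, which wraps this idea in a PyTorch `IterableDataset`.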