Commit 8093de7 · verified
jsun committed · 1 Parent(s): c285a1c

Update README.md

Files changed (1): README.md (+0, −31)
README.md CHANGED
@@ -15,34 +15,3 @@ sudo apt install zip # Ubuntu
 unzip prolong_64K_v2.zip -d prolong_64K_v2
 ```
 Once extracted, the dataset can be loaded using the [PackedDataset](https://github.com/microsoft/Samba/blob/383c016f2fb20ce75eed777761e1a4456c87b2b0/lit_gpt/packed_dataset.py#L33) class from the Samba codebase.
-
- # slimpajama_Llama2_Tokenizer
-
- The `slimpajama_Llama2_Tokenizer` directory was compressed with pigz, and the resulting ≈794 GB `slimpajama_Llama2_Tokenizer.tar.gz` archive was split into 40 GB chunks for easier upload to Hugging Face:
- ```bash
- tar cvf - slimpajama_Llama2_Tokenizer/ | pigz -p 16 > slimpajama_Llama2_Tokenizer.tar.gz
- split -b 40G -d -a 3 slimpajama_Llama2_Tokenizer.tar.gz slimpajama_Llama2_Tokenizer/slimpajama_Llama2_Tokenizer_part_
-
- # Upload the parts sequentially in a loop:
- count=0
- for file in slimpajama_Llama2_Tokenizer/slimpajama_Llama2_Tokenizer_part_*; do
-   count=$((count + 1))
-   # Uncomment to resume a partial run by skipping parts already uploaded:
-   # if [ $count -lt 3 ]; then
-   #   continue
-   # fi
-   filename=$(basename "$file")
-   echo "Uploading $file (file #$count) to slimpajama_Llama2_Tokenizer/$filename ..."
-   hf upload jsun/Prolong_64K_v2_Llama2_Tokenizer "$file" "slimpajama_Llama2_Tokenizer/$filename" --repo-type dataset
- done
- ```
-
- Once all parts are downloaded, reconstruct the original tar.gz file:
- ```bash
- cat slimpajama_Llama2_Tokenizer_part_* > slimpajama_Llama2_Tokenizer.tar.gz
- ```
-
- Then extract the archive as usual:
-
- ```bash
- pigz -dc -p 16 slimpajama_Llama2_Tokenizer.tar.gz | tar xvf -
- ```
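The split/reassemble round trip in the removed section can be sanity-checked on a small dummy file before trusting it with the real ≈794 GB archive. The sketch below is a minimal, self-contained check (not from the original README) that uses only standard GNU tools (`split`, `cat`, `sha256sum`) and mirrors the 3-digit numeric part suffixes used above, so lexicographic glob order matches byte order.

```shell
#!/usr/bin/env bash
# Round-trip check of the split/reassemble scheme on a 1 MiB dummy file.
# The -d -a 3 flags match the split command above: parts get numeric
# 3-digit suffixes (_000, _001, ...), so `cat *_part_*` concatenates
# them in the correct order.
set -euo pipefail
workdir=$(mktemp -d)
cd "$workdir"

# Create a dummy "archive" and record its checksum.
head -c 1048576 /dev/urandom > archive.tar.gz
orig_sum=$(sha256sum archive.tar.gz | cut -d' ' -f1)

# Split into 256 KiB parts, then reassemble via glob expansion.
split -b 262144 -d -a 3 archive.tar.gz archive_part_
cat archive_part_* > reassembled.tar.gz
new_sum=$(sha256sum reassembled.tar.gz | cut -d' ' -f1)

[ "$orig_sum" = "$new_sum" ] && echo "checksums match"
cd / && rm -rf "$workdir"
```

The same checksum comparison can be run against the real parts after download to catch a truncated or missing chunk before spending time on decompression.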