---
license: apache-2.0
---

# fineweb-edu_default_Llama2_Tokenizer

The original `fineweb-edu_default_Llama2_Tokenizer.tar.gz` archive (≈1.9 TB) was split into 40 GB chunks for easier upload to Hugging Face.

```bash
sudo apt install git-lfs
pip install -U huggingface_hub  # `hf version` == 1.1.4
tar cvf - fineweb-edu_default_Llama2_Tokenizer/ | pigz -p 16 > fineweb-edu_default_Llama2_Tokenizer.tar.gz
split -b 40G -d -a 3 fineweb-edu_default_Llama2_Tokenizer.tar.gz fineweb-edu_default_Llama2_Tokenizer/fineweb-edu_default_Llama2_Tokenizer_part_

# Upload the parts sequentially in a loop:
count=0
for file in fineweb-edu_default_Llama2_Tokenizer/fineweb-edu_default_Llama2_Tokenizer_part_*; do
  count=$((count + 1))
  # Uncomment to resume an interrupted upload, skipping parts already pushed:
  # if [ $count -lt 3 ]; then
  #   continue
  # fi
  filename=$(basename "$file")
  echo "Uploading $file (file #$count) to fineweb-edu_default_Llama2_Tokenizer/$filename ..."
  hf upload jsun/fineweb-edu_default_Llama2_Tokenizer "$file" "fineweb-edu_default_Llama2_Tokenizer/$filename" --repo-type dataset
done
```
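A checksum recorded before splitting makes it easy to confirm later that the downloaded parts reassemble byte-for-byte. The sketch below demonstrates the round trip on a small dummy file (`demo.bin` and the `demo_part_` names are stand-ins, not part of the dataset; the `split` flags mirror the ones used above):

```shell
# Record a checksum, split the file, reassemble it, and compare digests.
# demo.bin stands in for the 1.9 TB archive.
head -c 1M /dev/urandom > demo.bin
sha256sum demo.bin > demo.bin.sha256         # checksum before splitting
split -b 300K -d -a 3 demo.bin demo_part_    # same -d / -a layout as above
cat demo_part_* > demo_reassembled.bin       # reassemble, as after download
sha256sum demo.bin demo_reassembled.bin      # the two digests should be identical
```

For the real archive, publishing a `fineweb-edu_default_Llama2_Tokenizer.tar.gz.sha256` file alongside the parts would let downloaders verify with `sha256sum -c` after reassembly.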

Once all parts are downloaded, you can reconstruct the original `.tar.gz` file using:

```bash
cat fineweb-edu_default_Llama2_Tokenizer_part_* > fineweb-edu_default_Llama2_Tokenizer.tar.gz
```

Then extract the archive as usual:

```bash
pigz -dc -p 16 fineweb-edu_default_Llama2_Tokenizer.tar.gz | tar xvf -
```
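If scratch space is tight, the reassembly and extraction steps can be fused with a pipe so the full ≈1.9 TB `.tar.gz` is never written to disk. The self-contained sketch below demonstrates the pipeline on a throwaway directory (`demo_data` and the `demo_tgz_part_` names are illustrative; `gzip` stands in for `pigz` in case the latter is not installed):

```shell
# Build a tiny archive, split it, then reassemble and extract in one pass.
mkdir -p demo_data && echo "hello" > demo_data/sample.txt
tar cf - demo_data/ | gzip > demo.tar.gz
split -b 1K -d -a 3 demo.tar.gz demo_tgz_part_
rm -rf demo_data                           # simulate a fresh machine
# Streaming reassembly + extraction: no intermediate .tar.gz is materialized.
cat demo_tgz_part_* | gzip -dc | tar xf -
cat demo_data/sample.txt                   # → hello
```

For the real dataset the equivalent one-liner would be `cat fineweb-edu_default_Llama2_Tokenizer_part_* | pigz -dc -p 16 | tar xvf -`.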