---
license: apache-2.0
---

# slimpajama_Llama2_Tokenizer

The original `slimpajama_Llama2_Tokenizer.tar.gz` archive (≈794 GB) was split into 40 GB chunks for easier upload to Hugging Face. The commands below show how the archive was created, split, and uploaded:

```bash
sudo apt install git-lfs
pip install -U huggingface_hub  # `hf version` == 1.1.4

# Compress the dataset directory with pigz (parallel gzip, 16 threads),
# then split the archive into 40 GB parts with 3-digit numeric suffixes.
tar cvf - slimpajama_Llama2_Tokenizer/ | pigz -p 16 > slimpajama_Llama2_Tokenizer.tar.gz
split -b 40G -d -a 3 slimpajama_Llama2_Tokenizer.tar.gz slimpajama_Llama2_Tokenizer/slimpajama_Llama2_Tokenizer_part_

# Upload the parts sequentially:
count=0
for file in slimpajama_Llama2_Tokenizer/slimpajama_Llama2_Tokenizer_part_*; do
    count=$((count + 1))
    # Uncomment to skip parts that were already uploaded:
    # if [ $count -lt 3 ]; then
    #     continue
    # fi
    filename=$(basename "$file")
    echo "Uploading $file (file #$count) to slimpajama_Llama2_Tokenizer/$filename ..."
    hf upload jsun/slimpajama_Llama2_Tokenizer "$file" "slimpajama_Llama2_Tokenizer/$filename" --repo-type dataset
done
```

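The parts can also be fetched programmatically with the `huggingface_hub` Python API. A minimal sketch, assuming `huggingface_hub` is installed and the repo id above; the actual download is gated behind an environment variable so the filename logic can be checked without pulling ~794 GB:

```python
# Sketch: download all split parts of the dataset repo.
# Assumption: parts are identified by "_part_" in their repo path,
# matching the upload loop above.
import os

REPO_ID = "jsun/slimpajama_Llama2_Tokenizer"

def part_files(all_files):
    """Keep only the split parts, sorted into concatenation order."""
    # split's -d -a 3 suffixes (000, 001, ...) sort lexicographically,
    # so sorted() yields exactly the order `cat part_*` expects.
    return sorted(f for f in all_files if "_part_" in f)

if os.environ.get("DO_DOWNLOAD"):
    from huggingface_hub import hf_hub_download, list_repo_files
    for name in part_files(list_repo_files(REPO_ID, repo_type="dataset")):
        hf_hub_download(REPO_ID, name, repo_type="dataset", local_dir=".")
```

Enumerating the repo with `list_repo_files` avoids hard-coding the part count, which depends on the exact archive size.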
Once all parts are downloaded, reconstruct the original `tar.gz` archive with:

```bash
cat slimpajama_Llama2_Tokenizer_part_* > slimpajama_Llama2_Tokenizer.tar.gz
```
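It is worth verifying that the reassembled file is byte-for-byte identical to the original before extracting. A minimal sketch of the split/concatenate round trip on synthetic data; for the real archive, compare the sha256 of the reassembled file against one recorded before splitting:

```python
# Sketch: check that concatenating split parts reproduces the original
# bytes, demonstrated on a small synthetic "archive" in a temp dir.
import hashlib
import os
import tempfile

def sha256_of(path, chunk=1 << 20):
    """Stream a file through sha256 without loading it all into memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while True:
            block = f.read(chunk)
            if not block:
                break
            h.update(block)
    return h.hexdigest()

with tempfile.TemporaryDirectory() as d:
    original = os.path.join(d, "archive.tar.gz")
    data = os.urandom(100_000)
    with open(original, "wb") as f:
        f.write(data)

    # Split into 40 kB parts (standing in for `split -b 40G -d -a 3`).
    part_size = 40_000
    parts = []
    for i in range(0, len(data), part_size):
        p = os.path.join(d, f"archive_part_{i // part_size:03d}")
        with open(p, "wb") as f:
            f.write(data[i:i + part_size])
        parts.append(p)

    # Reassemble, as `cat part_* > archive.tar.gz` does.
    rebuilt = os.path.join(d, "rebuilt.tar.gz")
    with open(rebuilt, "wb") as out:
        for p in sorted(parts):
            with open(p, "rb") as f:
                out.write(f.read())

    assert sha256_of(rebuilt) == sha256_of(original)
```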

Then extract the archive as usual:

```bash
pigz -dc -p 16 slimpajama_Llama2_Tokenizer.tar.gz | tar xvf -
```
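If `pigz` is unavailable, Python's standard `tarfile` module can extract the gzip archive as a single-threaded fallback. A minimal sketch, demonstrated on a tiny synthetic archive rather than the real 794 GB one:

```python
# Sketch: extract a .tar.gz with the stdlib tarfile module.
import os
import tarfile
import tempfile

def extract(archive, dest):
    with tarfile.open(archive, "r:gz") as tar:
        # On Python 3.12+, consider tar.extractall(dest, filter="data")
        # to reject unsafe member paths.
        tar.extractall(dest)

with tempfile.TemporaryDirectory() as d:
    # Build a small stand-in archive: one directory with one binary file.
    src = os.path.join(d, "payload")
    os.makedirs(src)
    with open(os.path.join(src, "tokens.bin"), "wb") as f:
        f.write(b"\x00" * 1024)

    archive = os.path.join(d, "payload.tar.gz")
    with tarfile.open(archive, "w:gz") as tar:
        tar.add(src, arcname="payload")

    out = os.path.join(d, "out")
    extract(archive, out)
    assert os.path.exists(os.path.join(out, "payload", "tokens.bin"))
```

This trades pigz's parallel read/write threads for zero extra dependencies; for an archive this large, the pigz pipeline above will be noticeably faster.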