
This is a preprocessed version of the realnewslike subset of C4, tokenized with four different vocabulary sizes: 32k, 64k, 128k, and 256k.

C4 from: https://huggingface.co/datasets/allenai/c4

Files were generated with SentencePiece's SentencePieceTrainer (https://github.com/google/sentencepiece) and Megatron-LM's preprocessing script (https://github.com/NVIDIA/Megatron-LM/):

# Train a SentencePiece tokenizer (shown here for the 256k vocabulary size)
import sentencepiece as spm

spm.SentencePieceTrainer.Train('--input=c4_dataset.txt --model_prefix=vp_sample_dataset --vocab_size=256000')
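Since the dataset ships four vocabulary sizes, the training call above is presumably repeated once per size. A minimal sketch of that loop, assuming a per-size model_prefix naming scheme (the `_32k`-style suffixes are an illustration, not the actual filenames):

```python
VOCAB_SIZES = [32_000, 64_000, 128_000, 256_000]

def train_args(size: int) -> str:
    """Argument string mirroring the single-size call above; the per-size
    model_prefix suffix (e.g. "_32k") is a hypothetical naming scheme."""
    return (
        "--input=c4_dataset.txt "
        f"--model_prefix=vp_sample_dataset_{size // 1000}k "
        f"--vocab_size={size}"
    )

for size in VOCAB_SIZES:
    print(train_args(size))
    # import sentencepiece as spm
    # spm.SentencePieceTrainer.Train(train_args(size))  # uncomment to train
```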
# Tokenize and binarize the dataset with the Megatron-LM preprocessing script
python tools/preprocess_data.py \
  --input 'c4/realnewslike/c4-train.0000[0-2]-of-00512.json' \
  --partitions 8 \
  --output-prefix c4 \
  --vocab-file vp_sample_dataset.vocab \
  --tokenizer-type GPTSentencePieceTokenizer \
  --tokenizer-model vp_sample_dataset.model \
  --workers 8
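To binarize the corpus once per tokenizer, the shell invocation above can be assembled programmatically and run per model. A sketch, assuming each tokenizer lives in a `<prefix>.model`/`<prefix>.vocab` pair and that a per-size output prefix is desired (the helper name and the `c4_<prefix>` output prefix are illustrative, not the card's actual layout):

```python
import shlex

def preprocess_cmd(model_prefix: str) -> list:
    """Assemble the Megatron-LM preprocess_data.py invocation shown above
    for one tokenizer model; input path and flags are taken verbatim
    from the example, except the per-prefix --output-prefix."""
    return [
        "python", "tools/preprocess_data.py",
        "--input", "c4/realnewslike/c4-train.0000[0-2]-of-00512.json",
        "--partitions", "8",
        "--output-prefix", f"c4_{model_prefix}",
        "--vocab-file", f"{model_prefix}.vocab",
        "--tokenizer-type", "GPTSentencePieceTokenizer",
        "--tokenizer-model", f"{model_prefix}.model",
        "--workers", "8",
    ]

# Print the command; pass the list to subprocess.run(...) to execute it.
print(shlex.join(preprocess_cmd("vp_sample_dataset")))
```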

license: odc-by