This is a preprocessed version of the realnewslike subdirectory of C4, with four different vocabulary sizes: 32k, 64k, 128k, and 256k.
C4 from: https://huggingface.co/datasets/allenai/c4
Files were generated using SentencePieceTrainer (https://github.com/google/sentencepiece) and Megatron-LM (https://github.com/NVIDIA/Megatron-LM/):
```py
import sentencepiece as spm

# Trains the 256k-vocabulary tokenizer; writes vp_sample_dataset.model
# and vp_sample_dataset.vocab next to the input.
spm.SentencePieceTrainer.Train(
    '--input=c4_dataset.txt --model_prefix=vp_sample_dataset --vocab_size=256000'
)
```
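The command above trains the 256k variant. Assuming the other three sizes were produced with the same command and only `--vocab_size` changed (the `vp_<name>` prefixes below are hypothetical), the four training argument strings can be sketched as:

```python
# Hedged sketch: one argument string per vocabulary size shipped with the
# dataset. The model_prefix naming scheme is an assumption, not from the card.
vocab_sizes = {'32k': 32_000, '64k': 64_000, '128k': 128_000, '256k': 256_000}
train_args = {
    name: f'--input=c4_dataset.txt --model_prefix=vp_{name} --vocab_size={size}'
    for name, size in vocab_sizes.items()
}
print(train_args['256k'])
```

Each string would be passed to `spm.SentencePieceTrainer.Train(...)` as in the example above.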
```bash
python tools/preprocess_data.py \
    --input 'c4/realnewslike/c4-train.0000[0-2]-of-00512.json' \
    --partitions 8 \
    --output-prefix c4 \
    --vocab-file vp_sample_dataset.vocab \
    --tokenizer-type GPTSentencePieceTokenizer \
    --tokenizer-model vp_sample_dataset.model \
    --workers 8
```
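The `--input` glob uses a character class, `[0-2]`, in the shard index. A small stdlib sketch (the shard list below is generated for illustration) shows which of the 512 realnewslike train shards that pattern selects:

```python
import fnmatch

# Same pattern as the --input flag above, minus the directory prefix.
pattern = 'c4-train.0000[0-2]-of-00512.json'

# Hypothetical listing of all 512 train shard filenames.
shards = [f'c4-train.{i:05d}-of-00512.json' for i in range(512)]

# fnmatch applies shell-style matching, so [0-2] selects shards 00000-00002.
matched = fnmatch.filter(shards, pattern)
print(matched)
```

Only the first three shards match, so the preprocessing command above operates on a small sample of the full dataset.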
---
license: odc-by
---