mtyeung committed
Commit db76bb7 · 1 Parent(s): bbe8590

add dataset
README.md CHANGED
@@ -1,3 +1,25 @@
- ---
- license: odc-by
- ---
+ This is a preprocessed version of the realnewslike subdirectory of C4, available with four different vocabulary sizes: 32k, 64k, 128k, and 256k.
+
+ C4 is taken from: https://huggingface.co/datasets/allenai/c4
+
+ The files were generated with SentencePieceTrainer (https://github.com/google/sentencepiece) and Megatron-LM (https://github.com/NVIDIA/Megatron-LM/):
+ ```py
+ import sentencepiece as spm
+
+ # Train a 256k-vocabulary SentencePiece model on the raw C4 text
+ spm.SentencePieceTrainer.Train('--input=c4_dataset.txt --model_prefix=vp_sample_dataset --vocab_size=256000')
+ ```
+
+ ```bash
+ python tools/preprocess_data.py \
+        --input 'c4/realnewslike/c4-train.0000[0-2]-of-00512.json' \
+        --partitions 8 \
+        --output-prefix c4 \
+        --vocab-file vp_sample_dataset.vocab \
+        --tokenizer-type GPTSentencePieceTokenizer \
+        --tokenizer-model vp_sample_dataset.model \
+        --workers 8
+ ```
+
+ ---
+ license: odc-by
+ ---
vp_sample_dataset_v128k.tar.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:65c470dfc4b9e14adb13bee3ab41466837401381bc2a2ebdf68b071ca50b2e54
+ size 74753018
vp_sample_dataset_v256k.tar.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a924ec8befd3818562b2fb8f0143f363d43b76361e453370a4c0aeffb71cb221
+ size 76006904
vp_sample_dataset_v32k.tar.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:68995ced2cd132994e635dc2599b881b8171cc4c02dc4362abbc7d3f5c49665a
+ size 64169965
vp_sample_dataset_v64k.tar.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a2b3e36df44759992ac113be6bd4307ce34f0e4cc32074cdcef640dd86e02e31
+ size 62880154