
Dataset Summary

Input data for the first phase of BERT pretraining (sequence length 128). All text is tokenized with the bert-base-uncased tokenizer. The data is obtained by concatenating and shuffling the wikipedia (split: 20220301.en) and bookcorpusopen datasets and running the reference BERT data preprocessor without masking and without input duplication (dupe_factor = 1). Documents are split into sentences with the NLTK sentence tokenizer (nltk.tokenize.sent_tokenize).

See the dataset for the second phase of pretraining: bert_pretrain_phase2.