---
dataset_info:
  features:
  - name: text
    dtype: string
  splits:
  - name: train
    num_bytes: 26485537877
    num_examples: 109418257
  download_size: 10245098382
  dataset_size: 26485537877
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
This dataset was built with the bert-cased tokenizer. Sentences are cut off at 512 tokens (individual sentences, not sentence pairs), and all sentence pairs were extracted.
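
For reference, a minimal sketch of the truncation step described above, assuming the `bert-base-cased` checkpoint (the exact build script is not part of this card):

```python
from transformers import AutoTokenizer

# Assumes the bert-base-cased checkpoint; the actual build script may differ.
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")

def truncate_to_512(batch):
    # Tokenize each sentence on its own (no sentence pairs), cutting at 512 tokens.
    encoded = tokenizer(batch["text"], truncation=True, max_length=512)
    # Decode back to plain text so the output keeps the single "text" column.
    return {"text": tokenizer.batch_decode(encoded["input_ids"], skip_special_tokens=True)}

# Applied with datasets.Dataset.map, e.g.:
# ds = ds.map(truncate_to_512, batched=True)
```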
Original datasets:
- https://huggingface.co/datasets/bookcorpus
- https://huggingface.co/datasets/wikipedia (variant: 20220301.en)
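
The single `default` config maps the `train` split to `data/train-*`, so the dataset loads directly with 🤗 Datasets. A minimal sketch, with a placeholder repository id standing in for this dataset's actual Hub path:

```python
from datasets import load_dataset

# "user/dataset-name" is a placeholder; substitute the real repository id.
ds = load_dataset("user/dataset-name", split="train")
print(ds[0]["text"])  # each example is a single "text" string
```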