---
dataset_info:
  features:
    - name: input_ids
      sequence: int32
    - name: token_type_ids
      sequence: int8
    - name: attention_mask
      sequence: int8
  splits:
    - name: train
      num_bytes: 52875464012.02522
      num_examples: 136226984
  download_size: 17583618282
  dataset_size: 52875464012.02522
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
---
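The default config above points at the Parquet shards under `data/train-*`. As a minimal sketch of loading the train split with `datasets` (the repository id below is a placeholder, not this dataset's actual path on the Hub):

```python
from datasets import load_dataset

# Placeholder repo id -- substitute this dataset's actual path on the Hub.
REPO_ID = "<user>/<this-dataset>"

# The default config resolves to the data/train-* Parquet shards.
# Streaming avoids downloading all ~17.6 GB of shards up front.
ds = load_dataset(REPO_ID, split="train", streaming=True)

example = next(iter(ds))
print(example.keys())             # input_ids, token_type_ids, attention_mask
print(len(example["input_ids"]))  # at most 512 tokens per example
```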
Dataset tokenized with the bert-cased tokenizer; sentences are cut off at 512 tokens (single sentences, not sentence pairs), with all sentence pairs extracted from the source corpora.
Original datasets:
https://huggingface.co/datasets/bookcorpus
https://huggingface.co/datasets/wikipedia (variant: 20220301.en)

Mapped from: https://huggingface.co/datasets/gmongaras/BERT_Base_Cased_512_Dataset
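A minimal sketch of how data like this could be produced from the source corpora, assuming the `bert-base-cased` tokenizer with a 512-token cutoff as described above (the actual preprocessing script for this dataset is not included here):

```python
from datasets import load_dataset, concatenate_datasets
from transformers import AutoTokenizer

# Assumption: bert-base-cased tokenizer with a 512-token cutoff, per the card above.
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")

# Source corpora listed on this card.
bookcorpus = load_dataset("bookcorpus", split="train")
wikipedia = load_dataset("wikipedia", "20220301.en", split="train")

# Keep only the text column so both corpora share the same schema.
wikipedia = wikipedia.remove_columns(
    [c for c in wikipedia.column_names if c != "text"]
)
corpus = concatenate_datasets([bookcorpus, wikipedia])

def tokenize(batch):
    # Produces input_ids, token_type_ids, and attention_mask, truncated to 512 tokens.
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = corpus.map(tokenize, batched=True, remove_columns=["text"])
```

Note that the published schema stores `input_ids` as int32 and the two masks as int8, which would require an additional cast on top of this sketch.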