---
dataset_info:
  features:
    - name: text
      dtype: string
  splits:
    - name: train
      num_bytes: 26485537877
      num_examples: 109418257
  download_size: 10245098382
  dataset_size: 26485537877
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
---

This dataset was built with the bert-cased tokenizer. Sentences are cut off at 512 tokens (the 512-token limit applies to single sentences, not sentence pairs), and all sentence pairs were extracted.
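The exact build script is not published here, so the following is only a minimal sketch of the truncation step described above, assuming the Hugging Face `transformers` `bert-base-cased` tokenizer; the function name is an illustrative placeholder.

```python
from transformers import BertTokenizerFast

# Assumption: "bert-cased" refers to the standard bert-base-cased tokenizer.
tokenizer = BertTokenizerFast.from_pretrained("bert-base-cased")

def truncate_sentence(sentence: str, max_length: int = 512) -> str:
    # Tokenize a single sentence (not a sentence pair), cut it off at
    # max_length tokens, then decode back to plain text.
    ids = tokenizer(sentence, truncation=True, max_length=max_length)["input_ids"]
    return tokenizer.decode(ids, skip_special_tokens=True)
```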

Original datasets:
- https://huggingface.co/datasets/bookcorpus
- https://huggingface.co/datasets/wikipedia (variant: 20220301.en)
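With the `datasets` library, the train split can be loaded as shown below; since the split is large (~10 GB download), streaming avoids pulling everything down at once. The repo id in the example is a placeholder, so substitute this dataset's actual id.

```python
from datasets import load_dataset

# Placeholder repo id; replace with this dataset's actual Hub id.
ds = load_dataset("gmongaras/DATASET_NAME", split="train", streaming=True)

# Each example has a single "text" field.
print(next(iter(ds))["text"])
```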