gmongaras committed
Commit f4d7db8 · 1 Parent(s): 5bf68f7

Create README.md
Files changed (1):
  README.md +9 -0

README.md ADDED
@@ -0,0 +1,9 @@
+ Dataset built with the bert-cased tokenizer; sentences are cut off at 512 tokens (single sentences, not sentence pairs), with all sentence pairs extracted.
+
+ Original datasets:
+
+ https://huggingface.co/datasets/bookcorpus
+ https://huggingface.co/datasets/wikipedia (variant: 20220301.en)
+
+
+ Mapped from: https://huggingface.co/datasets/gmongaras/BERT_Base_Cased_512_Dataset
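The 512-token cutoff described in the README can be sketched as below. This is a minimal, hedged illustration, not the actual preprocessing code: a whitespace tokenizer stands in for the real bert-cased (WordPiece) tokenizer so the example runs offline, and the names `tokenize` and `cutoff` are hypothetical.

```python
# Sketch of the 512-token sentence cutoff (assumptions noted above).

MAX_LEN = 512  # cutoff length used when building the dataset

def tokenize(text: str) -> list[str]:
    # Stand-in for the Hugging Face "bert-base-cased" tokenizer,
    # which would actually produce WordPiece subword tokens.
    return text.split()

def cutoff(tokens: list[str], max_len: int = MAX_LEN) -> list[str]:
    # Cut off a single sentence (not a sentence pair) at max_len tokens.
    return tokens[:max_len]

sentence = " ".join(f"w{i}" for i in range(600))
truncated = cutoff(tokenize(sentence))
print(len(truncated))  # 512
```

In the real pipeline the same cutoff would be applied through the tokenizer's own truncation options rather than by slicing a token list by hand.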