How to use GeneZC/bert-large-qqp with Transformers:
```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("GeneZC/bert-large-qqp")
model = AutoModelForSequenceClassification.from_pretrained("GeneZC/bert-large-qqp")
```
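Once loaded, the model can score a pair of questions. The sketch below is a minimal inference example, assuming the checkpoint is a standard binary sequence classifier where label 1 means "duplicate"; that label mapping is an assumption, not something the card documents.

```python
# Hedged inference sketch for GeneZC/bert-large-qqp.
# Assumption: binary classifier with label 1 = duplicate question pair.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("GeneZC/bert-large-qqp")
model = AutoModelForSequenceClassification.from_pretrained("GeneZC/bert-large-qqp")
model.eval()

q1 = "How do I learn Python quickly?"
q2 = "What is the fastest way to learn Python?"

# QQP is a sentence-pair task: passing both questions lets the tokenizer
# build a single "[CLS] q1 [SEP] q2 [SEP]" input.
inputs = tokenizer(q1, q2, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
probs = torch.softmax(logits, dim=-1)[0]
pred = int(probs.argmax())
print(f"duplicate probability: {probs[1]:.3f}, predicted label: {pred}")
```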
`bert-large-uncased` fine-tuned on QQP (Quora Question Pairs).
Fine-tuning hyperparameters: batch size 32, learning rate 2e-5.
Results on QQP: accuracy 0.9178, F1 0.8895.