How to use GeneZC/bert-base-qnli with Transformers:
```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("GeneZC/bert-base-qnli")
model = AutoModelForSequenceClassification.from_pretrained("GeneZC/bert-base-qnli")
```
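The loaded model maps a (question, sentence) pair to one logit per label (QNLI's usual labels are entailment vs. not_entailment; check `model.config.id2label` for the actual mapping). Turning logits into a prediction is a plain softmax/argmax, sketched below with placeholder logits so the snippet runs without downloading the checkpoint; in practice the values come from `model(**tokenizer(question, sentence, return_tensors="pt")).logits`:

```python
import torch

# Placeholder logits standing in for model(**inputs).logits on one
# (question, sentence) pair; real values come from the fine-tuned classifier.
logits = torch.tensor([[2.0, -1.0]])

probs = torch.softmax(logits, dim=-1)   # per-label probabilities
pred = int(probs.argmax(dim=-1))        # index of the predicted label
print(pred, probs[0, pred].item())
```

Map `pred` through `model.config.id2label` to get the label string.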
bert-base-uncased fine-tuned on QNLI.

- Base model: bert-base-uncased
- Dataset: QNLI
- Batch size: 32
- Learning rate: 2e-5
- Accuracy: 0.9187