How to use Intel/bert-base-uncased-mnli-sparse-70-unstructured with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-classification", model="Intel/bert-base-uncased-mnli-sparse-70-unstructured")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("Intel/bert-base-uncased-mnli-sparse-70-unstructured")
model = AutoModelForSequenceClassification.from_pretrained("Intel/bert-base-uncased-mnli-sparse-70-unstructured")
```

This is a sparse BERT-base model fine-tuned on the MNLI task (GLUE benchmark), starting from bert-base-uncased-sparse-70-unstructured.

Note: This model requires transformers==2.10.0
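When the model is loaded directly, each premise/hypothesis pair yields one logit per MNLI class, which must be converted to probabilities and a label. Below is a minimal, self-contained sketch of that post-processing step; the label order is an assumption (common for BERT MNLI checkpoints) and should be verified against this checkpoint's `model.config.id2label`, and the logit values are illustrative, not real model output:

```python
import math

# Assumed MNLI label order; verify against model.config.id2label for this checkpoint.
LABELS = ["contradiction", "entailment", "neutral"]

def softmax(logits):
    # Numerically stable softmax over a list of floats.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def predict_label(logits):
    # Pick the highest-probability label and its probability.
    probs = softmax(logits)
    best = max(range(len(probs)), key=probs.__getitem__)
    return LABELS[best], probs[best]

# Illustrative logits, not real model output.
label, prob = predict_label([-1.1, 2.7, 0.3])
```

The `pipeline` helper performs this same softmax-and-argmax step internally, which is why it returns a label/score dict directly.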
MNLI evaluation results:
- Matched: 82.5%
- Mismatched: 83.3%
This model can be further fine-tuned to other tasks and achieve the following evaluation results:
| QQP (Acc/F1) | QNLI (Acc) | SST-2 (Acc) | STS-B (Pears/Spear) | SQuADv1.1 (Acc/F1) |
|---|---|---|---|---|
| 90.2/86.7 | 90.3 | 91.5 | 88.9/88.6 | 80.5/88.2 |
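The "70-unstructured" in the model name refers to unstructured weight sparsity: roughly 70% of individual weights are exactly zero, with no fixed block pattern. A minimal sketch of measuring that sparsity level on a weight matrix, using plain Python lists with illustrative values (real checkpoints would be inspected via their tensors instead):

```python
def sparsity(matrix):
    # Fraction of exactly-zero entries across all rows of a 2D weight matrix.
    total = sum(len(row) for row in matrix)
    zeros = sum(1 for row in matrix for w in row if w == 0.0)
    return zeros / total

# Illustrative 2x5 weight matrix with 7 of 10 entries zeroed, i.e. 70% sparse.
weights = [[0.0, 0.3, 0.0, 0.0, -0.5],
           [0.0, 0.0, 0.1, 0.0, 0.0]]
```

Because the zeros fall anywhere in the matrix rather than in whole rows, columns, or blocks, this kind of sparsity preserves accuracy well but needs sparse-aware kernels to translate into speedups.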