How to use from the Transformers library
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("question-answering", model="philschmid/distilbert-neuron")

# Load model directly
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

tokenizer = AutoTokenizer.from_pretrained("philschmid/distilbert-neuron")
model = AutoModelForQuestionAnswering.from_pretrained("philschmid/distilbert-neuron")
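For example, the pipeline returns the answer span directly, while the directly loaded model exposes start/end logits that can be decoded by hand. A minimal sketch building on the snippets above (the question and context strings are illustrative):

import torch

question = "What dataset was the model fine-tuned on?"
context = "This model was fine-tuned with knowledge distillation on SQuAD v1.1."

# High-level: the pipeline handles tokenization and span decoding
print(pipe(question=question, context=context))
# e.g. {'score': ..., 'start': ..., 'end': ..., 'answer': 'SQuAD v1.1'}

# Low-level: pick the most likely start/end token positions from the logits
inputs = tokenizer(question, context, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
start = outputs.start_logits.argmax()
end = outputs.end_logits.argmax() + 1
print(tokenizer.decode(inputs["input_ids"][0][start:end]))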
Quick Links

AWS Neuron Conversion of distilbert-base-cased-distilled-squad
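For reference, a conversion like the one linked above is typically produced by tracing the PyTorch model with the AWS Neuron SDK. A minimal sketch, assuming the torch-neuron package and an Inferentia (inf1) environment; the 128-token padded input shape is an illustrative choice, not the repository's recorded setting:

import torch
import torch.neuron  # AWS Neuron SDK extension for PyTorch (inf1)
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

model_id = "distilbert-base-cased-distilled-squad"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForQuestionAnswering.from_pretrained(model_id, torchscript=True)

# Neuron compiles for fixed tensor shapes, so trace with a padded dummy input
dummy = tokenizer(
    "What does Neuron target?",
    "AWS Neuron targets AWS Inferentia chips.",
    max_length=128,
    padding="max_length",
    return_tensors="pt",
)
neuron_model = torch.neuron.trace(model, (dummy["input_ids"], dummy["attention_mask"]))
neuron_model.save("neuron_model.pt")

Because the compiled graph is shape-specific, inference inputs must be padded to the same shape used at trace time.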

DistilBERT base cased distilled SQuAD

This model is a fine-tuned checkpoint of DistilBERT-base-cased, trained using (a second step of) knowledge distillation on SQuAD v1.1. It reaches an F1 score of 87.1 on the dev set (for comparison, the BERT bert-base-cased version reaches an F1 score of 88.7).
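The F1 number refers to the SQuAD v1.1 dev-set metric. A minimal sketch of how such an evaluation can be reproduced with the datasets and evaluate libraries, reusing the pipe from above (the 100-example subset is only for a quick check, not the full dev-set score):

from datasets import load_dataset
import evaluate

squad = load_dataset("squad", split="validation")
metric = evaluate.load("squad")

predictions, references = [], []
for ex in squad.select(range(100)):  # small subset; use the full split to reproduce 87.1
    answer = pipe(question=ex["question"], context=ex["context"])["answer"]
    predictions.append({"id": ex["id"], "prediction_text": answer})
    references.append({"id": ex["id"], "answers": ex["answers"]})

print(metric.compute(predictions=predictions, references=references))
# e.g. {'exact_match': ..., 'f1': ...}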
