# DistilBERT-SQuAD-v1

Training was done on the SQuAD v1 dataset. The model can be accessed on Hugging Face as `abhilash1910/distilbert-squadv1`.
## Model Specifications

The following training parameters were used:

- Training batch size: 512
- Learning rate: 3e-5
- Training epochs: 0.75
- Sequence length: 384
- Stride: 128
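The sequence length and stride above control how SQuAD preprocessing chunks a long context: each feature holds at most 384 tokens, and consecutive features overlap by 128 tokens so no answer is cut off at a window boundary. A minimal sketch of that windowing arithmetic (dependency-free; the function name and offsets are illustrative, not part of this model's code):

```python
# A minimal sketch of how Sequence Length (384) and Stride (128) split a
# long tokenized context into overlapping windows during preprocessing.
def sliding_windows(n_tokens, max_length=384, stride=128):
    """Return (start, end) token offsets for each overlapping window."""
    step = max_length - stride  # each new window advances by this many tokens
    windows = []
    start = 0
    while True:
        end = min(start + max_length, n_tokens)
        windows.append((start, end))
        if end == n_tokens:
            break
        start += step
    return windows

print(sliding_windows(600))  # [(0, 384), (256, 600)]
```

With the real tokenizer, the same behavior comes from passing `max_length=384`, `stride=128`, and `return_overflowing_tokens=True`.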
## Usage Specifications

```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline

model = AutoModelForQuestionAnswering.from_pretrained('abhilash1910/distilbert-squadv1')
tokenizer = AutoTokenizer.from_pretrained('abhilash1910/distilbert-squadv1')
nlp_QA = pipeline('question-answering', model=model, tokenizer=tokenizer)

QA_inp = {
    'question': 'What is the fund price of Huggingface in NYSE?',
    'context': 'Huggingface Co. has a total fund price of $19.6 million dollars'
}
result = nlp_QA(QA_inp)
result
```
The result is:

```python
{'score': 0.38547369837760925,
 'start': 42,
 'end': 55,
 'answer': '$19.6 million'}
```
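The `start` and `end` fields in the pipeline output are character offsets into the original context string, so slicing the context with them reproduces the `answer` text exactly:

```python
# start/end are character offsets into the context passed to the pipeline.
context = 'Huggingface Co. has a total fund price of $19.6 million dollars'
start, end = 42, 55  # offsets taken from the example output above

print(context[start:end])  # $19.6 million
```

This is useful when you need to highlight the answer span in the source text rather than just display the extracted string.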
The card metadata (language, license, dataset):

```yaml
language:
- en
license: apache-2.0
datasets:
- squad_v1
```