Instructions for using mrm8488/bert-tiny-5-finetuned-squadv2 with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use mrm8488/bert-tiny-5-finetuned-squadv2 with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("question-answering", model="mrm8488/bert-tiny-5-finetuned-squadv2")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

tokenizer = AutoTokenizer.from_pretrained("mrm8488/bert-tiny-5-finetuned-squadv2")
model = AutoModelForQuestionAnswering.from_pretrained("mrm8488/bert-tiny-5-finetuned-squadv2")
```
- Notebooks
- Google Colab
- Kaggle
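The "load model directly" snippet above stops at loading; extracting an answer from `AutoModelForQuestionAnswering` outputs is up to you. A minimal sketch of decoding the predicted span from the start/end logits (the question and context strings here are illustrative examples, not from the model card):

```python
import torch
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

model_id = "mrm8488/bert-tiny-5-finetuned-squadv2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForQuestionAnswering.from_pretrained(model_id)

# Illustrative inputs; any question/context pair works the same way.
question = "What is the capital of France?"
context = "Paris is the capital and largest city of France."

# Tokenize question and context as a single pair.
inputs = tokenizer(question, context, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# The model scores every token as a possible span start and span end;
# take the argmax of each and decode the tokens in between.
start = int(outputs.start_logits.argmax())
end = int(outputs.end_logits.argmax())
answer = tokenizer.decode(inputs["input_ids"][0, start : end + 1], skip_special_tokens=True)
print(answer)
```

The `pipeline("question-answering", ...)` helper performs this same start/end decoding internally (plus extra handling for long contexts and unanswerable SQuAD v2 questions), so prefer it unless you need the raw logits.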