Instructions for using VMware/bert-tiny-mrqa with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use VMware/bert-tiny-mrqa with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("question-answering", model="VMware/bert-tiny-mrqa")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

tokenizer = AutoTokenizer.from_pretrained("VMware/bert-tiny-mrqa")
model = AutoModelForQuestionAnswering.from_pretrained("VMware/bert-tiny-mrqa")
```

A short inference sketch follows the notebook links below.

- Notebooks
- Google Colab
- Kaggle
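Building on the Transformers snippets above, here is a minimal, hedged sketch of running extractive question answering with this model. The question and context strings are illustrative placeholders, not taken from the model card.

```python
# Minimal sketch: query the QA pipeline with an illustrative question/context pair.
from transformers import pipeline

pipe = pipeline("question-answering", model="VMware/bert-tiny-mrqa")

result = pipe(
    question="What does MRQA stand for?",
    context=(
        "MRQA stands for Machine Reading for Question Answering, a shared task "
        "that aggregates several extractive question answering datasets."
    ),
)
print(result)  # dict with 'score', 'start', 'end', and 'answer' keys
```

One way to recover the same answer span from the directly loaded model is to take the argmax of the start and end logits and decode the tokens in between (again a sketch with illustrative inputs):

```python
# Minimal sketch: manual span extraction with the directly loaded model.
import torch
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

tokenizer = AutoTokenizer.from_pretrained("VMware/bert-tiny-mrqa")
model = AutoModelForQuestionAnswering.from_pretrained("VMware/bert-tiny-mrqa")

inputs = tokenizer(
    "What does MRQA stand for?",
    "MRQA stands for Machine Reading for Question Answering.",
    return_tensors="pt",
)
with torch.no_grad():
    outputs = model(**inputs)

# Most likely start/end token positions define the predicted answer span.
start = int(outputs.start_logits.argmax())
end = int(outputs.end_logits.argmax())
answer = tokenizer.decode(inputs["input_ids"][0][start : end + 1])
print(answer)
```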