Instructions for using intanm/mbert-squadv2 with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use intanm/mbert-squadv2 with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("question-answering", model="intanm/mbert-squadv2")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

tokenizer = AutoTokenizer.from_pretrained("intanm/mbert-squadv2")
model = AutoModelForQuestionAnswering.from_pretrained("intanm/mbert-squadv2")
```
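A minimal usage sketch for the pipeline above; the question and context strings are illustrative and not taken from the model card:

```python
# Ask a question against a short context passage (illustrative example)
result = pipe(
    question="Where is the Eiffel Tower located?",
    context="The Eiffel Tower is a wrought-iron lattice tower in Paris, France.",
)
print(result["answer"], result["score"])
```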
- Notebooks
- Google Colab
- Kaggle
Adding `safetensors` variant of this model
#2 by SFconvertbot - opened
- model.safetensors +3 -0
model.safetensors
ADDED
```diff
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:47d1ac5749aa4fe2a917c75499ad372157adf0fe4d72e559ba1294fa0eb23a53
+size 709085088
```
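The added file is a Git LFS pointer to the converted safetensors weights (about 709 MB). Once this pull request is merged, the safetensors variant can be loaded explicitly. A minimal sketch, assuming a recent version of transformers that supports the `use_safetensors` flag:

```python
from transformers import AutoModelForQuestionAnswering

# Prefer the safetensors weights over the pickle-based PyTorch checkpoint
model = AutoModelForQuestionAnswering.from_pretrained(
    "intanm/mbert-squadv2",
    use_safetensors=True,
)
```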