Instructions to use deepset/minilm-uncased-squad2 with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use deepset/minilm-uncased-squad2 with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("question-answering", model="deepset/minilm-uncased-squad2")

# Load model directly
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

tokenizer = AutoTokenizer.from_pretrained("deepset/minilm-uncased-squad2")
model = AutoModelForQuestionAnswering.from_pretrained("deepset/minilm-uncased-squad2")
```
- Inference
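Once the pipeline is loaded, it takes a `question` and a `context` string and returns the extracted answer with a confidence score. A minimal sketch (the question and context below are made-up illustrations, not from the model card):

```python
from transformers import pipeline

# Load the extractive QA pipeline with the model
pipe = pipeline("question-answering", model="deepset/minilm-uncased-squad2")

# Illustrative question/context pair (assumed example, not from the docs)
result = pipe(
    question="What was the model trained on?",
    context="MiniLM uncased was fine-tuned on the SQuAD 2.0 dataset for extractive question answering.",
)

# The pipeline returns a dict with the answer span, its character
# offsets in the context, and a confidence score.
print(result["answer"])
print(result["score"], result["start"], result["end"])
```

Because the model is trained on SQuAD 2.0, it can also predict that a question is unanswerable from the given context, in which case the returned answer is empty with a correspondingly low score.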
- Notebooks
- Google Colab
- Kaggle
- Xet hash: ad077e80afef4cf1c94d8b29b3bf483d09277baaf7a8781533ae5352bd5839f5
- Size of remote file: 133 MB
- SHA256: bcd4d6e16c9e843b51a977870a5e2c2dd038da46f686e005804e45c4999165f2
Xet efficiently stores large files inside Git by intelligently splitting them into unique chunks, accelerating uploads and downloads.