Question Answering · Transformers · PyTorch · TensorFlow · JAX · Rust · Safetensors · English · roberta · Eval Results (legacy)
Instructions for using deepset/roberta-base-squad2 with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use deepset/roberta-base-squad2 with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("question-answering", model="deepset/roberta-base-squad2")

# Load model directly
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

tokenizer = AutoTokenizer.from_pretrained("deepset/roberta-base-squad2")
model = AutoModelForQuestionAnswering.from_pretrained("deepset/roberta-base-squad2")
```

- Inference
- Notebooks
- Google Colab
- Kaggle
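The pipeline above returns the answer span it extracts from the context along with a score. As a rough illustration of how extractive QA works under the hood (no model download needed, all logits below are made-up values, not real model output): the model emits one start logit and one end logit per token, and the answer is the highest-scoring valid span.

```python
# Minimal sketch of extractive QA span selection (dummy logits, no model).
# A model like deepset/roberta-base-squad2 produces one start logit and one
# end logit per token; the answer is the best-scoring span where end >= start.

tokens = ["The", "capital", "of", "France", "is", "Paris", "."]
start_logits = [0.1, 0.2, 0.1, 0.3, 0.2, 2.5, 0.1]  # made-up values
end_logits   = [0.1, 0.1, 0.2, 0.4, 0.1, 2.8, 0.3]  # made-up values

best_score, best_span = float("-inf"), (0, 0)
for i, s in enumerate(start_logits):
    for j in range(i, len(end_logits)):  # end must not precede start
        score = s + end_logits[j]
        if score > best_score:
            best_score, best_span = score, (i, j)

answer = " ".join(tokens[best_span[0] : best_span[1] + 1])
print(answer)  # -> Paris
```

A real pipeline does the same search over model logits, with extra constraints such as a maximum answer length.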
Fix typo
README.md CHANGED

````diff
@@ -39,7 +39,7 @@ Please note that we have also released a distilled version of this model called
 ## Usage
 
 ### In Haystack
-Haystack is an NLP framework by deepset. You can use this model in a
+Haystack is an NLP framework by deepset. You can use this model in a Haystack pipeline to do question answering at scale (over many documents). To load the model in [Haystack](https://github.com/deepset-ai/haystack/):
 ```python
 reader = FARMReader(model_name_or_path="deepset/roberta-base-squad2")
 # or
````
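As the model name suggests, this model is trained on SQuAD 2.0, which includes unanswerable questions, so readers built on it can also return "no answer". A minimal sketch of the usual decision rule (made-up scores and a hypothetical helper, not the library's actual API): compare the best span score against the null/no-answer score plus a tunable threshold.

```python
# Sketch of SQuAD 2.0-style no-answer handling (dummy scores, hypothetical helper).
# Models trained on SQuAD 2.0 also score a "null" answer (typically the logits
# at the [CLS] position); if the null score beats the best span score by more
# than a threshold, the question is treated as unanswerable.

def is_answerable(best_span_score: float, null_score: float, threshold: float = 0.0) -> bool:
    """Return True if the best span should be returned as an answer."""
    return best_span_score > null_score + threshold

print(is_answerable(5.3, 1.2))  # strong span score: answerable
print(is_answerable(0.4, 3.0))  # null score dominates: unanswerable
```

Raising the threshold makes the reader more conservative, returning "no answer" more often.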