---
language:
- hu
pipeline_tag: question-answering
---

This model is a fine-tuned version of deepset/xlm-roberta-large-squad2 on the milqa dataset.

Packages to install for the large RoBERTa model:
```bash
pip install sentencepiece==0.1.97 protobuf==3.20.0
```

How to use:
```py
from transformers import pipeline

qa_pipeline = pipeline(
    "question-answering",
    model="ZTamas/xlm-roberta-large-squad2_impossible_long_answer",
    tokenizer="ZTamas/xlm-roberta-large-squad2_impossible_long_answer",
    device=0,                       # GPU index; use -1 to run on CPU
    handle_impossible_answer=True,  # allow an empty answer when none exists
    max_answer_len=1000,            # large cap so long answers are not truncated
)

context = "..."   # the passage to search for an answer
question = "..."  # the question about the passage

predictions = qa_pipeline({
    "context": context,
    "question": question,
})

print(predictions)
```
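With `handle_impossible_answer=True`, the pipeline can return an empty answer string to signal that the question is not answerable from the given context. A minimal post-processing sketch of that convention follows; `extract_answer`, the `min_score` threshold, and the sample prediction values are illustrative assumptions, not part of the model or the `transformers` API:

```python
def extract_answer(prediction, min_score=0.5):
    """Return the answer text, or None when the pipeline signals an
    unanswerable question (empty answer string) or low confidence."""
    if prediction["answer"] == "" or prediction["score"] < min_score:
        return None
    return prediction["answer"]

# Example dicts in the pipeline's output format (made-up values):
answered = {"score": 0.92, "start": 10, "end": 24, "answer": "Budapesten"}
impossible = {"score": 0.88, "start": 0, "end": 0, "answer": ""}

print(extract_answer(answered))    # Budapesten
print(extract_answer(impossible))  # None
```

Thresholding on `score` is optional; checking for the empty `answer` string alone is enough to detect the no-answer case.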