Update README.md
This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert/albert-base-v2) on the SQuAD 1.1 and adversarial_qa datasets.

It achieves the following results on the SQuAD 1.1 evaluation set:

- Exact Match (EM): 84.68
- F1: 91.40

## Inference

Here's how to use the model for question answering:

```python
from transformers import pipeline

# Load the pipeline
qa_pipeline = pipeline("question-answering", model="xichenn/albert-base-v2-squad")

# Run inference
result = qa_pipeline({
    "question": "What is the capital of France?",
    "context": "France is a country in Europe. Its capital is Paris.",
})

print(result)
```