Commit 4c80eae (1 parent: e4d8a94)

Update README.md

README.md CHANGED

```diff
@@ -36,7 +36,7 @@ embeds_dropout_prob = 0.1
 We evaluated the extractive question answering performance on our GermanQuAD test set.
 Model types and training data are included in the model name.
 For finetuning XLM-Roberta, we use the English SQuAD v2.0 dataset.
-The GELECTRA models are warm started on the German translation of SQuAD v1.1 and finetuned on
+The GELECTRA models are warm started on the German translation of SQuAD v1.1 and finetuned on \\germanquad.
 The human baseline was computed for the 3-way test set by taking one answer as prediction and the other two as ground truth.
 
 
@@ -56,4 +56,6 @@ Some of our work:
 - [Haystack](https://github.com/deepset-ai/haystack/)
 
 Get in touch:
-[Twitter](https://twitter.com/deepset_ai) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Website](https://deepset.ai)
+[Twitter](https://twitter.com/deepset_ai) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Slack](https://haystack.deepset.ai/community/join) | [GitHub Discussions](https://github.com/deepset-ai/haystack/discussions) | [Website](https://deepset.ai)
+
+By the way: [we're hiring!](https://apply.workable.com/deepset/)
```
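The human baseline mentioned in the diff (each of the three annotated answers taken in turn as the prediction, scored against the other two as ground truth) could be sketched roughly as below. This is an illustrative assumption, not deepset's actual evaluation code: the token-level F1 follows the standard SQuAD convention of taking the best match over the references, and all function names are made up for this sketch.

```python
from collections import Counter


def token_f1(prediction: str, reference: str) -> float:
    """SQuAD-style token-overlap F1 between two answer strings."""
    pred_tokens = prediction.lower().split()
    ref_tokens = reference.lower().split()
    common = Counter(pred_tokens) & Counter(ref_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)


def human_baseline_f1(answers: list[str]) -> float:
    """Leave-one-out baseline for one question with three human answers:
    each answer in turn is the 'prediction', the remaining two are the
    ground truth, and the best match (max F1 over references) is scored."""
    scores = []
    for i, pred in enumerate(answers):
        references = [a for j, a in enumerate(answers) if j != i]
        scores.append(max(token_f1(pred, r) for r in references))
    return sum(scores) / len(scores)
```

With three identical annotations the baseline is 1.0; disagreement between annotators (e.g. `["in 1990", "1990", "the year 1990"]`) lowers it, which is why a human baseline below 100% is expected on span-extraction datasets.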