This page presents a [BERT-base-cased](https://huggingface.co/bert-base-cased) model fine-tuned for word-level tagging of Vossian Antonomasia expressions in text.
The tags {B,I}-SRC mark the source chunk, {B,I}-MOD the modifier chunk, and {B,I}-TRG the target chunk, if present.
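To make the tag scheme concrete, here is a minimal inference sketch using the Hugging Face `transformers` token-classification pipeline. The model id is a placeholder for this repository's id, and the example sentence and the expected chunks are illustrative assumptions, not verified model output.

```python
# Minimal inference sketch, not part of the original README.
# Assumption: "<this-model-repo-id>" stands for this repository's id on the Hub.
from transformers import pipeline

tagger = pipeline(
    "token-classification",
    model="<this-model-repo-id>",   # placeholder: replace with the id of this repository
    aggregation_strategy="simple",  # merge B-/I- subword predictions into whole chunks
)

# A typical Vossian Antonomasia; one would roughly expect
#   TRG = "She", SRC = "Michael Jordan", MOD = "figure skating"
# (illustrative expectation only).
print(tagger("She is the Michael Jordan of figure skating."))
```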
### Dataset
The model was trained on an annotated Vossian Antonomasia dataset that evolved from [Schwab et al. 2019](https://www.aclweb.org/anthology/D19-1647.pdf) and was updated in [Schwab et al. 2022](https://doi.org/10.3389/frai.2022.868249).

### Results
F1 score: 0.926

For more results, please have a look at [our paper](https://doi.org/10.3389/frai.2022.868249).

Please note that this model was trained on the annotated dataset only and did not use any additional unlabeled training data.
Thus, it may not be as robust on new data as the best model in [our paper](https://doi.org/10.3389/frai.2022.868249).
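The exact evaluation protocol behind the F1 score is described in the paper, not here; as a rough sketch under that caveat, chunk-level F1 over BIO tags can be computed with `seqeval` as below. The gold and predicted tag sequences are made-up placeholders and may differ from how the paper scores its models.

```python
# Hedged evaluation sketch, not from the original README: chunk-level F1 over BIO tags.
from seqeval.metrics import f1_score

# One toy sentence: "She is the Michael Jordan of figure skating ."
gold = [["B-TRG", "O", "O", "B-SRC", "I-SRC", "O", "B-MOD", "I-MOD", "O"]]
pred = [["B-TRG", "O", "O", "B-SRC", "I-SRC", "O", "B-MOD", "I-MOD", "O"]]

print(f1_score(gold, pred))  # 1.0 for this toy example
```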