---
language:
- en
tags:
- sequence tagging
- vossian antonomasia
license: "apache-2.0"
datasets:
- custom
widget:
- text: Bijan wants Jordan to be the Elizabeth Taylor of men's fragrances.
metrics:
- f1
- precision
- recall
---

## English Vossian Antonomasia Sequence Tagger

This page presents a fine-tuned [BERT-base-cased](https://huggingface.co/bert-base-cased) language model for tagging Vossian Antonomasia expressions in text at the word level.
The tags {B,I}-SRC mark the source chunk, {B,I}-MOD the modifier chunk, and {B,I}-TRG the target chunk, if one is present.
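
Below is a minimal usage sketch with the `transformers` token-classification pipeline. The model ID is a placeholder (substitute this repository's actual identifier), and the printed chunks are illustrative only:

```python
from transformers import pipeline

tagger = pipeline(
    "token-classification",
    model="mschwab/va-sequence-tagger",  # placeholder: use the real model ID
    aggregation_strategy="simple",       # merge B-/I- word pieces into chunks
)

sentence = "Bijan wants Jordan to be the Elizabeth Taylor of men's fragrances."
for chunk in tagger(sentence):
    print(chunk["entity_group"], "->", chunk["word"])

# Illustrative output (actual spans and order may differ):
#   TRG -> Jordan
#   SRC -> Elizabeth Taylor
#   MOD -> men's fragrances
```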

### Dataset

The model was trained on a labeled Vossian Antonomasia dataset that evolved from [Schwab et al. 2019](https://www.aclweb.org/anthology/D19-1647.pdf) and was updated in [Schwab et al. 2022](https://doi.org/10.3389/frai.2022.868249).

### Results

F1 score: 0.926

For more results, please have a look at [our paper](https://doi.org/10.3389/frai.2022.868249).
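
As a rough illustration only: chunk-level precision, recall, and F1 for BIO-tagged sequences are commonly computed with the `seqeval` library, as sketched below. This is a generic example, not necessarily the exact evaluation protocol of the paper:

```python
from seqeval.metrics import precision_score, recall_score, f1_score

# Toy gold/predicted BIO tag sequences, one inner list per sentence.
y_true = [["O", "B-TRG", "O", "O", "O", "B-SRC", "I-SRC", "O", "B-MOD", "I-MOD"]]
y_pred = [["O", "B-TRG", "O", "O", "O", "B-SRC", "I-SRC", "O", "O", "O"]]

print("precision:", precision_score(y_true, y_pred))  # 1.0: both predicted chunks are correct
print("recall:   ", recall_score(y_true, y_pred))     # 0.667: 2 of 3 gold chunks found
print("f1:       ", f1_score(y_true, y_pred))         # 0.8: harmonic mean of the two
```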

---

### Cite

Please cite the following paper when using this model.

```bibtex
@article{schwab2022rodney,
  title={“The Rodney Dangerfield of Stylistic Devices”: End-to-End Detection and Extraction of Vossian Antonomasia Using Neural Networks},
  author={Schwab, Michel and J{\"a}schke, Robert and Fischer, Frank},
  journal={Frontiers in Artificial Intelligence},
  volume={5},
  year={2022},
  publisher={Frontiers Media SA}
}
```

---

### Interested in more?

Visit our [website](http://vossanto.weltliteratur.net/) for more research on Vossian Antonomasia, including interactive visualizations for exploration.