SlowGuess committed on
Commit
d4f343d
·
verified ·
1 Parent(s): 3a03d0b

Add Batch b10bec41-729d-417f-ab9e-609e2725382e

This view is limited to 50 files because it contains too many changes.
Files changed (50)
  1. aligneddualchannelgraphconvolutionalnetworkforvisualquestionanswering/9347b4e6-0ffc-4e35-a69d-12360e5607f6_content_list.json +3 -0
  2. aligneddualchannelgraphconvolutionalnetworkforvisualquestionanswering/9347b4e6-0ffc-4e35-a69d-12360e5607f6_model.json +3 -0
  3. aligneddualchannelgraphconvolutionalnetworkforvisualquestionanswering/9347b4e6-0ffc-4e35-a69d-12360e5607f6_origin.pdf +3 -0
  4. aligneddualchannelgraphconvolutionalnetworkforvisualquestionanswering/full.md +399 -0
  5. aligneddualchannelgraphconvolutionalnetworkforvisualquestionanswering/images.zip +3 -0
  6. aligneddualchannelgraphconvolutionalnetworkforvisualquestionanswering/layout.json +3 -0
  7. amalgamationofproteinsequencestructureandtextualinformationforimprovingproteinproteininteractionidentification/92d0e682-068d-464f-acb5-ea4912594047_content_list.json +3 -0
  8. amalgamationofproteinsequencestructureandtextualinformationforimprovingproteinproteininteractionidentification/92d0e682-068d-464f-acb5-ea4912594047_model.json +3 -0
  9. amalgamationofproteinsequencestructureandtextualinformationforimprovingproteinproteininteractionidentification/92d0e682-068d-464f-acb5-ea4912594047_origin.pdf +3 -0
  10. amalgamationofproteinsequencestructureandtextualinformationforimprovingproteinproteininteractionidentification/full.md +297 -0
  11. amalgamationofproteinsequencestructureandtextualinformationforimprovingproteinproteininteractionidentification/images.zip +3 -0
  12. amalgamationofproteinsequencestructureandtextualinformationforimprovingproteinproteininteractionidentification/layout.json +3 -0
  13. analysinglexicalsemanticchangewithcontextualisedwordrepresentations/e5fe2ad5-39d8-4569-b85f-4b6efa8ece75_content_list.json +3 -0
  14. analysinglexicalsemanticchangewithcontextualisedwordrepresentations/e5fe2ad5-39d8-4569-b85f-4b6efa8ece75_model.json +3 -0
  15. analysinglexicalsemanticchangewithcontextualisedwordrepresentations/e5fe2ad5-39d8-4569-b85f-4b6efa8ece75_origin.pdf +3 -0
  16. analysinglexicalsemanticchangewithcontextualisedwordrepresentations/full.md +441 -0
  17. analysinglexicalsemanticchangewithcontextualisedwordrepresentations/images.zip +3 -0
  18. analysinglexicalsemanticchangewithcontextualisedwordrepresentations/layout.json +3 -0
  19. analyzinganalyticalmethodsthecaseofphonologyinneuralmodelsofspokenlanguage/8d49a011-f3a9-4ff7-a5e2-a69076301106_content_list.json +3 -0
  20. analyzinganalyticalmethodsthecaseofphonologyinneuralmodelsofspokenlanguage/8d49a011-f3a9-4ff7-a5e2-a69076301106_model.json +3 -0
  21. analyzinganalyticalmethodsthecaseofphonologyinneuralmodelsofspokenlanguage/8d49a011-f3a9-4ff7-a5e2-a69076301106_origin.pdf +3 -0
  22. analyzinganalyticalmethodsthecaseofphonologyinneuralmodelsofspokenlanguage/full.md +311 -0
  23. analyzinganalyticalmethodsthecaseofphonologyinneuralmodelsofspokenlanguage/images.zip +3 -0
  24. analyzinganalyticalmethodsthecaseofphonologyinneuralmodelsofspokenlanguage/layout.json +3 -0
  25. analyzingpoliticalparodyinsocialmedia/eb691718-565a-4bd8-9dfb-b05351d2667c_content_list.json +3 -0
  26. analyzingpoliticalparodyinsocialmedia/eb691718-565a-4bd8-9dfb-b05351d2667c_model.json +3 -0
  27. analyzingpoliticalparodyinsocialmedia/eb691718-565a-4bd8-9dfb-b05351d2667c_origin.pdf +3 -0
  28. analyzingpoliticalparodyinsocialmedia/full.md +323 -0
  29. analyzingpoliticalparodyinsocialmedia/images.zip +3 -0
  30. analyzingpoliticalparodyinsocialmedia/layout.json +3 -0
  31. analyzingthepersuasiveeffectofstyleinnewseditorialargumentation/d8d36c18-850e-4233-819c-8a8a8bf35acd_content_list.json +3 -0
  32. analyzingthepersuasiveeffectofstyleinnewseditorialargumentation/d8d36c18-850e-4233-819c-8a8a8bf35acd_model.json +3 -0
  33. analyzingthepersuasiveeffectofstyleinnewseditorialargumentation/d8d36c18-850e-4233-819c-8a8a8bf35acd_origin.pdf +3 -0
  34. analyzingthepersuasiveeffectofstyleinnewseditorialargumentation/full.md +177 -0
  35. analyzingthepersuasiveeffectofstyleinnewseditorialargumentation/images.zip +3 -0
  36. analyzingthepersuasiveeffectofstyleinnewseditorialargumentation/layout.json +3 -0
  37. ananalysisoftheutilityofexplicitnegativeexamplestoimprovethesyntacticabilitiesofneurallanguagemodels/453e6f4e-6353-4d36-931e-6eca1fbb37ca_content_list.json +3 -0
  38. ananalysisoftheutilityofexplicitnegativeexamplestoimprovethesyntacticabilitiesofneurallanguagemodels/453e6f4e-6353-4d36-931e-6eca1fbb37ca_model.json +3 -0
  39. ananalysisoftheutilityofexplicitnegativeexamplestoimprovethesyntacticabilitiesofneurallanguagemodels/453e6f4e-6353-4d36-931e-6eca1fbb37ca_origin.pdf +3 -0
  40. ananalysisoftheutilityofexplicitnegativeexamplestoimprovethesyntacticabilitiesofneurallanguagemodels/full.md +295 -0
  41. ananalysisoftheutilityofexplicitnegativeexamplestoimprovethesyntacticabilitiesofneurallanguagemodels/images.zip +3 -0
  42. ananalysisoftheutilityofexplicitnegativeexamplestoimprovethesyntacticabilitiesofneurallanguagemodels/layout.json +3 -0
  43. aneffectivenessmetricforordinalclassificationformalpropertiesandexperimentalresults/2197aa75-6617-4cf4-a4b3-db27edad2cee_content_list.json +3 -0
  44. aneffectivenessmetricforordinalclassificationformalpropertiesandexperimentalresults/2197aa75-6617-4cf4-a4b3-db27edad2cee_model.json +3 -0
  45. aneffectivenessmetricforordinalclassificationformalpropertiesandexperimentalresults/2197aa75-6617-4cf4-a4b3-db27edad2cee_origin.pdf +3 -0
  46. aneffectivenessmetricforordinalclassificationformalpropertiesandexperimentalresults/full.md +484 -0
  47. aneffectivenessmetricforordinalclassificationformalpropertiesandexperimentalresults/images.zip +3 -0
  48. aneffectivenessmetricforordinalclassificationformalpropertiesandexperimentalresults/layout.json +3 -0
  49. aneffectivetransitionbasedmodelfordiscontinuousner/b056e99d-f30b-47d4-ac4f-d5f368333bc3_content_list.json +3 -0
  50. aneffectivetransitionbasedmodelfordiscontinuousner/b056e99d-f30b-47d4-ac4f-d5f368333bc3_model.json +3 -0
aligneddualchannelgraphconvolutionalnetworkforvisualquestionanswering/9347b4e6-0ffc-4e35-a69d-12360e5607f6_content_list.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5b83b9a03f9975b678f7e0c9853690875c03d872f8b0da940c45814d7316ec32
+ size 83297
aligneddualchannelgraphconvolutionalnetworkforvisualquestionanswering/9347b4e6-0ffc-4e35-a69d-12360e5607f6_model.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:184657b78343da1200cef3b4625e10d2d98099146d2f51813cfb1a88bda9d862
+ size 103723
aligneddualchannelgraphconvolutionalnetworkforvisualquestionanswering/9347b4e6-0ffc-4e35-a69d-12360e5607f6_origin.pdf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:befc50b5e75aa5b74dfe4e608dce80306d40b1ae08fb6bd1030aed8e88475132
+ size 3889547
aligneddualchannelgraphconvolutionalnetworkforvisualquestionanswering/full.md ADDED
@@ -0,0 +1,399 @@
# Aligned Dual Channel Graph Convolutional Network for Visual Question Answering

Qingbao Huang$^{1,2}$, Jielong Wei$^{2}$, Yi Cai$^{1*}$

Changmeng Zheng$^{1}$, Junying Chen$^{1}$, Ho-fung Leung$^{3}$, Qing Li$^{4}$

$^{1}$School of Software Engineering, South China University of Technology, Guangzhou, China

$^{2}$School of Electrical Engineering, Guangxi University, Nanning, Guangxi, China

$^{3}$The Chinese University of Hong Kong, Hong Kong SAR, China

$^{4}$The Hong Kong Polytechnic University, Hung Hom, Kowloon, Hong Kong, China

qbhuang@gxu.edu.cn, 1712306010@st.gxu.edu.cn, ycai@scut.edu.cn

# Abstract

Visual question answering aims to answer a natural language question about a given image. Existing graph-based methods only focus on the relations between objects in an image and neglect the importance of the syntactic dependency relations between words in a question. To simultaneously capture the relations between objects in an image and the syntactic dependency relations between words in a question, we propose a novel dual channel graph convolutional network (DC-GCN) to better combine visual and textual information. The DC-GCN model consists of three parts: an I-GCN module to capture the relations between objects in an image, a Q-GCN module to capture the syntactic dependency relations between words in a question, and an attention alignment module to align image representations and question representations. Experimental results show that our model achieves comparable performance with the state-of-the-art approaches.
# 1 Introduction

As a form of visual Turing test, visual question answering (VQA) has drawn much attention. The goal of VQA (Antol et al., 2015; Goyal et al., 2017) is to answer a natural language question related to the contents of a given image. Attention mechanisms serve as the backbone of the previous mainstream approaches (Lu et al., 2016; Yang et al., 2016; Yu et al., 2017); however, they tend to catch only the most discriminative information, ignoring other rich complementary clues (Liu et al., 2019).

Recent VQA studies have been exploring higher-level semantic representations of images, notably using graph-based structures for better image understanding, such as scene graph generation (Xu et al., 2017; Yang et al., 2018), visual relationship detection (Yao et al., 2018), object counting (Zhang et al., 2018a), and relation reasoning (Cao et al., 2018; Li et al., 2019; Cadene et al., 2019a). Representing images as graphs allows one to explicitly model interactions between two objects in an image, so as to seamlessly transfer information between graph nodes (e.g., objects in an image).

![](images/0d07fbe117f849272f5bac798fe2bc069f0f42ce6b837f5dac322c89405c2e85.jpg)
(a) Q: What color is the umpire's shirt

![](images/b63b73d7aa6e4cdb6389079124aca28522929835413c5ce2368da6be9044209b.jpg)
(b) Q: What color is the umpire's shirt

![](images/e74d260907f75c89459194516259637cde93c28bbd038209ba47716e58070c98.jpg)
Ground Truth Answer: blue
Predicted Answer: black
(c) Dependency parsing of the question
Figure 1: (a) The question and the ground truth answer. (b) The wrong answer predicted by a state-of-the-art model, which focuses on the highlighted region in the image. The depth of the color indicates the weight of each word in the question, where a deeper color represents a higher weight. (c) The dependency parse of the question, obtained with the universal Stanford Dependencies tool (De Marneffe et al., 2014).

Very recent research methods (Li et al., 2019; Cadene et al., 2019a; Yu et al., 2019) have achieved remarkable performance, but there is still a big gap between them and humans. As shown in Figure 1(a), given an image of a group of persons and the corresponding question, a VQA system needs to not only recognize the objects in the image (e.g., batter, umpire, and catcher), but also grasp the textual information in the question "what color is the umpire's shirt". However, many competitive VQA models, including the state-of-the-art methods, struggle to process them accurately, and as a result predict the incorrect answer (black) rather than the correct answer (blue).
Although the relations between two objects in an image have been considered, attention-based VQA models lack building blocks to explicitly capture the syntactic dependency relations between words in a question. As shown in Figure 1(c), these dependency relations can reflect which object is being asked about (e.g., the word *umpire's* modifies the word *shirt*) and which aspect of the object is being asked about (e.g., the word *color* is the direct object of the word *is*). If a VQA model only knows the word *shirt* rather than the relation between the words *umpire's* and *shirt* in a question, it is difficult to distinguish which object is being asked about. In fact, we do need the modifier relations to discriminate the correct object from multiple similar objects. Therefore, we consider it necessary to explore the relations between words at the linguistic level in addition to constructing the relations between objects at the visual level.

Motivated by this, we propose a dual channel graph convolutional network (DC-GCN) to simultaneously capture the relations between objects in an image and the syntactic dependency relations between words in a question. Our proposed DC-GCN model consists of an Image-GCN (I-GCN) module, a Question-GCN (Q-GCN) module, and an attention alignment module. The I-GCN module captures the relations between objects in an image, the Q-GCN module captures the syntactic dependency relations between words in a question, and the attention alignment module is used to align the image and question representations. The contributions of this work are summarized as follows:

1) We propose a dual channel graph convolutional network (DC-GCN) to simultaneously capture the visual and textual relations, and design the attention alignment module to align the multimodal representations, thus reducing the semantic gaps between vision and language.
2) We explore how to construct the syntactic dependency relations between words at the linguistic level via graph convolutional networks, as well as the relations between objects at the visual level.
3) We conduct extensive experiments and ablation studies on the VQA-v2 and VQA-CP-v2 datasets to examine the effectiveness of our DC-GCN model. Experimental results show that the DC-GCN model achieves competitive performance with the state-of-the-art approaches.
# 2 Related Works

Visual Question Answering Attention mechanisms have been proven effective on many tasks, such as machine translation (Bahdanau et al., 2014) and image captioning (Pedersoli et al., 2017). A number of methods have been developed so far, in which question-guided attention on image regions is commonly used. These can be categorized into two classes according to the type of image features employed. One class uses visual features from region proposals, which are generated by a Region Proposal Network (Ren et al., 2015). The other class uses convolutional features (i.e., activations of convolutional layers).

To learn a better representation of the question, the Stacked Attention Network (Yang et al., 2016), which can search for question-related image regions, is designed to perform multi-step visual attention operations. A co-attention mechanism that jointly performs question-guided visual attention and image-guided question attention is proposed to solve the problems of which regions to look at and what words to listen to (Shih et al., 2016). To obtain more fine-grained interaction between image and question, some researchers introduce rather sophisticated fusion strategies. Bilinear pooling (Kim et al., 2018; Yu et al., 2017, 2018) is one of the pioneering approaches to efficiently and expressively combining multimodal features by using an outer product of two vectors.

Recently, some researchers have devoted efforts to overcoming the priors in VQA datasets, proposing methods such as GVQA (Agrawal et al., 2018), UpDn+Q-Adv+DoE (Ramakrishnan et al., 2018), and RUBi (Cadene et al., 2019b) to address the language biases on the VQA-CP-v2 dataset.

Graph Networks Graph networks are powerful models that can perform relational inference through message passing. The core idea is to enable communication between image regions to build contextualized representations of these regions. Below we review some recent works that rely on graph networks and other contextualized representations for VQA.

Recent research works (Cadene et al., 2019a; Li et al., 2019) focus on how to deal with complex scenes and relation reasoning to obtain better image representations. Based on multimodal attentional networks, Cadene et al. (2019a) introduce an atomic reasoning primitive that represents interactions between the question and an image region by a rich vectorial representation and models region relations with pairwise combinations. GCNs, which can better explore the visual relations between objects and aggregate each node's own features and its neighbors' features, have been applied to various tasks, such as text classification (Yao et al., 2019), relation extraction (Guo et al., 2019; Zhang et al., 2018b), and scene graph generation (Yang et al., 2018; Yao et al., 2018).

![](images/9abaec0332032b4737ceafdf818518e4c7b535300d3665953f51d628a40cad7c.jpg)
Figure 2: Illustration of our proposed Dual Channel Graph Convolutional Network (DC-GCN) for the VQA task. Dependency parsing constructs the semantic relations between words in a question, and the Q-GCN Module updates every word's features by aggregating the adjacent words' features. In addition, the I-GCN Module builds the relations between image objects, and the Attention Alignment Module uses a question-guided image attention mechanism to learn new object representations, thus aligning the images and questions. All punctuation and upper-case letters have been preprocessed. The numbers in red are the weight scores of image objects and words.

To answer complicated questions about an image, a relation-aware graph attention network (ReGAT) (Li et al., 2019) is proposed to encode each image into a graph and model multi-type inter-object relations, such as spatial, semantic, and implicit relations, via a graph attention mechanism. One limitation of ReGAT (Li et al., 2019) is that it solely considers the relations between objects in an image while neglecting the importance of textual information. In contrast, our DC-GCN simultaneously captures visual relations in an image and textual relations in a question.
# 3 Model

# 3.1 Feature Extraction

Similar to (Anderson et al., 2018), we extract image features using a pretrained Faster R-CNN (Ren et al., 2015). We select $\mu$ object proposals for each image, where each object proposal is represented by a 2048-dimensional feature vector. The obtained visual region features are denoted as $h_v = \{h_{vi}\}_{i=0}^{\mu} \in \mathbb{R}^{\mu \times 2048}$ .

To extract the question features, each word is embedded into a 300-dimensional GloVe vector (Pennington et al., 2014). The word embeddings are fed into an LSTM (Hochreiter and Schmidhuber, 1997) for encoding, which produces the initial question representation $h_{q} = \{h_{qj}\}_{j=0}^{\lambda} \in \mathbb{R}^{\lambda \times d_{q}}$ .
# 3.2 Relation Extraction and Encoding

# 3.2.1 I-GCN Module

Image Fully-connected Relations Graph By treating each object region in an image as a vertex, we can construct a fully-connected undirected graph, as shown in Figure 3(b). Each edge represents a relation between two object regions.

Pruned Image Graph with Spatial Relations Spatial relations represent an object's position in an image, which corresponds to a 4-dimensional spatial coordinate $[x_{1},y_{1},x_{2},y_{2}]$ . Note that $(x_{1},y_{1})$ is the coordinate of the top-left point of the bounding box and $(x_{2},y_{2})$ is the coordinate of the bottom-right point.

Identifying the correlation between objects is a key step. We calculate the correlation between objects by using spatial relations. The steps are as follows: (1) The features of the two nodes are input into a multi-layer perceptron respectively, and the corresponding elements are multiplied to get a relatedness score. (2) The intersection over union of the two object regions is calculated. According to the overlapping part of the two object regions, spatial relations are classified into 11 different categories, such as inside, cover, and overlap (Yao et al., 2018). Following the work of (Yao et al., 2018), we use the overlapping region between two object regions to judge whether there is an edge between the two regions. If two object regions have a large overlapping part, there is a strong correlation between the two objects. If two object regions have no overlapping part, we consider the two objects to have a weak correlation, which means there is no edge connecting the two nodes. According to the spatial relations, we prune some irrelevant relations between objects and obtain a sparse graph, as shown in Figure 3(c).
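The pruning rule above (an edge exists only when two boxes overlap) can be sketched with a simple intersection-over-union check. This is a minimal NumPy sketch, not the paper's implementation; the boxes are toy values for illustration:

```python
import numpy as np

def iou(box_a, box_b):
    """Intersection over union of two [x1, y1, x2, y2] boxes."""
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def spatial_adjacency(boxes):
    """A[i, j] = 1 iff regions i and j overlap (the pruning rule above).
    The diagonal stays 0 because Eq. (3) adds the self term separately."""
    mu = len(boxes)
    A = np.zeros((mu, mu), dtype=int)
    for i in range(mu):
        for j in range(mu):
            if i != j and iou(boxes[i], boxes[j]) > 0:
                A[i, j] = 1
    return A

boxes = [[0, 0, 4, 4], [2, 2, 6, 6], [10, 10, 12, 12]]
A = spatial_adjacency(boxes)  # regions 0 and 1 overlap; region 2 is isolated
```

The resulting matrix is the adjacency matrix $A$ used by the graph convolutions below.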
![](images/4a92dccce428f3a12072f5e19e768b55adfd237545ec384302e8283f66aaa7bb.jpg)
(a)

![](images/e8e7550535a82b4e4bc09b500bca55faac5f8ae9a63e78bdb6bf8018fa71042e.jpg)
(b)

![](images/804c104be74aba8fddaad3a6b30239933551301b90f6384b700a444e26f8bbd2.jpg)
(c)
Figure 3: (a) Region proposals generated by a pretrained model (Anderson et al., 2018); for display purposes, we only highlight some object regions. (b) The relations constructed between objects. (c) Irrelevant object edges are pruned and the weight between objects is calculated. The numbers in red are the weights of the edges.
Image Graph Convolutions Following previous studies (Li et al., 2019; Zhang et al., 2018b; Yang et al., 2018), we use a GCN to update the representations of objects. Given a graph with $\mu$ nodes, each object region in an image is a node. We represent the graph structure with a $\mu \times \mu$ adjacency matrix $A$ , where $A_{ij} = 1$ if there is an overlapping region between node $i$ and node $j$ ; otherwise $A_{ij} = 0$ .

Given a target node $i$ and a neighboring node $j \in \mathcal{N}(i)$ in an image, where $\mathcal{N}(i)$ is the set of nodes neighboring node $i$ , the representations of node $i$ and node $j$ are $h_{vi}$ and $h_{vj}$ , respectively. To obtain the correlation score $s_{ij}$ between nodes $i$ and $j$ , we learn a fully connected layer over the concatenated node features $h_{vi}$ and $h_{vj}$ :

$$
s_{ij} = w_a^T \sigma\left(W_a \left[h_{vi}^{(l)}, h_{vj}^{(l)}\right]\right), \tag{1}
$$

where $w_{a}$ and $W_{a}$ are learned parameters, $\sigma$ is a non-linear activation function, and $[h_{vi}^{(l)}, h_{vj}^{(l)}]$ denotes the concatenation operation. We apply a softmax function over the correlation scores $s_{ij}$ to obtain the weights $\alpha_{ij}$ , as shown in Figure 3(c), where the numbers in red represent the weight scores:

$$
\alpha_{ij} = \frac{\exp(s_{ij})}{\sum_{j \in \mathcal{N}(i)} \exp(s_{ij})}. \tag{2}
$$

The $l$ -th layer representations of the neighboring nodes $h_{vj}^{(l)}$ are first transformed via a learned linear transformation $W_{b}$ . The transformed representations are then gathered with the weights $\alpha_{ij}$ , followed by a non-linear function $\sigma$ . This layer-wise propagation can be denoted as:

$$
h_{vi}^{(l+1)} = \sigma\left(h_{vi}^{(l)} + \sum_{j \in \mathcal{N}(i)} A_{ij} \alpha_{ij} W_b h_{vj}^{(l)}\right). \tag{3}
$$

After stacking $L$ GCN layers, the output of the I-GCN module $H_{v}$ can be denoted as:

$$
H_v = h_{vi}^{(l+1)} \quad (l < L). \tag{4}
$$
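Equations (1)-(3) amount to an attention-weighted neighborhood aggregation with a residual self term. Below is a minimal NumPy sketch of one such layer, with random toy weights and ReLU standing in for $\sigma$ (both our assumptions; the paper does not pin these down in this excerpt). The same machinery serves the Q-GCN module of Section 3.2.2, with $B$ in place of $A$:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def gcn_layer(h, A, w_a, W_a, W_b):
    """One I-GCN layer, Eqs. (1)-(3).
    h: (mu, d) node features at layer l; A: (mu, mu) pruned 0/1 adjacency."""
    mu, d = h.shape
    h_next = np.zeros_like(h)
    for i in range(mu):
        nbrs = np.flatnonzero(A[i])          # N(i)
        agg = np.zeros(d)
        if nbrs.size > 0:
            # Eq. (1): score each neighbor via an FC layer over [h_i, h_j]
            s = np.array([w_a @ relu(W_a @ np.concatenate([h[i], h[j]]))
                          for j in nbrs])
            # Eq. (2): softmax over the neighborhood
            alpha = np.exp(s - s.max()); alpha /= alpha.sum()
            # Eq. (3): attention-weighted sum of transformed neighbors
            agg = (alpha[:, None] * (h[nbrs] @ W_b.T)).sum(axis=0)
        h_next[i] = relu(h[i] + agg)         # residual self term, then sigma
    return h_next

d = 8
h = rng.normal(size=(3, d))
A = np.array([[0, 1, 0], [1, 0, 0], [0, 0, 0]])  # nodes 0 and 1 connected
W_a = rng.normal(size=(d, 2 * d)); w_a = rng.normal(size=d)
W_b = rng.normal(size=(d, d))
h1 = gcn_layer(h, A, w_a, W_a, W_b)
```

Stacking $L$ such calls yields the module output $H_v$ of Eq. (4); an isolated node (here node 2) is only passed through the self term.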
# 3.2.2 Q-GCN Module

In practice, we observe that two words in a sentence usually hold certain relations. Such relations can be identified by the universal Stanford Dependencies (De Marneffe et al., 2014). In Table 1, we list some commonly-used dependency relations. For example, the sentence *what color is the umpire's shirt* is parsed to obtain the relations between its words (e.g., cop, det, and nmod), as shown in Figure 4. The words in blue are the dependency relations. The end of an arrow indicates that the word is a modifier. The word root in purple indicates which word is the root node of the dependency relations.

![](images/3e2ea9fdccfe4d64c89027526b303f2bb1f7929e2cd36b600d268417568e3e26.jpg)
Figure 4: Syntactic dependency parsing of the question. The word *is* is the root node of the dependency relations, while the words in blue (e.g., det, dobj) are dependency relations. An arrow between two words indicates that they hold a relation.

Question Fully-connected Relations Graph By treating each word in a question as a node, we construct a fully-connected undirected graph, as shown in Figure 5(a). Each edge represents a relation between two words.

Pruned Question Graph with Dependency Relations Irrelevant relations between two words may bring noise, so we prune some unrelated relations to reduce it. By parsing the dependency relations of a question, we obtain the relations between its words (cf. Figure 4). According to the dependency relations, we prune the edges between nodes that do not hold a dependency relation. A sparse graph is obtained, as shown in Figure 5(b).
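The pruning step above reduces to building a $\lambda \times \lambda$ adjacency matrix from a parser's head indices. In the sketch below we hand-code head indices that mimic a typical parse of the running example rather than call a parser; in a real pipeline they would come from the Stanford Dependencies tool the paper uses:

```python
import numpy as np

def dependency_adjacency(heads):
    """B[i, j] = 1 iff words i and j hold a dependency relation
    (undirected, as in Figure 5(b)). heads[i] is the index of word i's
    head, or -1 for the root."""
    lam = len(heads)
    B = np.zeros((lam, lam), dtype=int)
    for i, h in enumerate(heads):
        if h >= 0:
            B[i, h] = B[h, i] = 1
    return B

# "what color is the umpire's shirt": hand-written head indices mimicking
# a typical parse, with "is" (index 2) as the root.
words = ["what", "color", "is", "the", "umpire's", "shirt"]
heads = [1, 2, -1, 5, 5, 2]   # what->color, color->is, the->shirt, ...
B = dependency_adjacency(heads)
```

The matrix $B$ then plays the same role for the Q-GCN that the spatial adjacency $A$ plays for the I-GCN.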
| Relations | Relation Description |
| --- | --- |
| det | determiner |
| nsubj | nominal subject |
| case | prepositions, postpositions |
| nmod | nominal modifier |
| cop | copula |
| dobj | direct object |
| amod | adjective modifier |
| aux | auxiliary |
| advmod | adverbial modifier |
| compound | compound |
| dep | dependent |
| acl | clausal modifier of noun |
| nsubjpass | passive nominal subject |
| auxpass | passive auxiliary |
| root | root node |

Table 1: The main categories of relations classified by the dependency parsing tool (De Marneffe et al., 2014).

![](images/c9eb7c02452553bd647ef15d3f5663319e19003a5568d7455c964df9292e8197.jpg)

![](images/8d3b128259fbe3260586f5ddb27f2ba83dd31f991cf97f9677f942bbf75e868d.jpg)

![](images/246399460dad5ec3878ff8c75e3a5ae7b7e57f2b6f4216d7ae2ac7b2b128617a.jpg)
Figure 5: (a) A fully-connected graph is built where each word is a node and each word may have relations with all other words. (b) The Stanford syntactic parsing tool (De Marneffe et al., 2014) is used to obtain the dependency relations between words; according to these relations, we prune the unrelated edges and obtain a sparse graph. (c) The numbers in red are the weight scores. For the node *umpire's*, the weight of the word *the* is 0.1 while the weight of the word *shirt* is 0.9. The weight scores reflect the importance of words: the phrase *umpire's shirt* describes an object, thus the word *shirt* is more important than the word *the*.
Question Graph Convolutions Following previous works (Li et al., 2019; Zhang et al., 2018b; Yang et al., 2018), we use a GCN to update the node representations of the words. Given a graph with $\lambda$ nodes, each word in a question is a node. We represent the graph structure with a $\lambda \times \lambda$ adjacency matrix $B$ , where $B_{ij} = 1$ if there is a dependency relation between node $i$ and node $j$ ; otherwise $B_{ij} = 0$ .

Given a target node $i$ and a neighboring node $j \in \Omega(i)$ in a question, where $\Omega(i)$ is the set of nodes neighboring node $i$ , the representations of nodes $i$ and $j$ are $h_{qi}$ and $h_{qj}$ , respectively. To obtain the correlation score $t_{ij}$ between nodes $i$ and $j$ , we learn a fully connected layer over the concatenated node features $h_{qi}$ and $h_{qj}$ :

$$
t_{ij} = w_c^T \sigma\left(W_c \left[h_{qi}^{(l)}, h_{qj}^{(l)}\right]\right), \tag{5}
$$

where $w_{c}$ and $W_{c}$ are learned parameters, $\sigma$ is a non-linear activation function, and $[h_{qi}^{(l)}, h_{qj}^{(l)}]$ denotes the concatenation operation. We apply a softmax function over the correlation scores $t_{ij}$ to obtain the weights $\beta_{ij}$ :

$$
\beta_{ij} = \frac{\exp(t_{ij})}{\sum_{j \in \Omega(i)} \exp(t_{ij})}. \tag{6}
$$

As shown in Figure 5(c), the numbers in red are the weight scores. The $l$ -th layer representations of the neighboring nodes $h_{qj}^{(l)}$ are first transformed via a learned linear transformation $W_{d}$ . The transformed representations are gathered with the weights $\beta_{ij}$ , followed by a non-linear function $\sigma$ . This layer-wise propagation can be denoted as:

$$
h_{qi}^{(l+1)} = \sigma\left(h_{qi}^{(l)} + \sum_{j \in \Omega(i)} B_{ij} \beta_{ij} W_d h_{qj}^{(l)}\right). \tag{7}
$$

After stacking $L$ GCN layers, the output of the Q-GCN module $H_{q}$ is denoted as:

$$
H_q = h_{qi}^{(l+1)} \quad (l < L). \tag{8}
$$
181
+
182
+ # 3.3 Attention Alignment Module
183
+
184
+ Based on the previous works (Gao et al., 2019; Yu et al., 2019), we use self-attention mechanism (Vaswani et al., 2017) to enhance the correlation between words in a question and the correlation between objects in an image, respectively.
185
+
186
+ To enhance the correlation between words and highlight the important words, we utilize the self-attention mechanism to update question representation $H_{q}$ . The updated question representation $\tilde{H}_{q}$ is obtained as follows:
187
+
188
+ $$
189
+ \tilde {H} _ {q} = \operatorname {s o f t m a x} \left(\frac {H _ {q} H _ {q} ^ {T}}{\sqrt {d _ {q}}}\right) H _ {q}, \tag {9}
190
+ $$
191
+
192
+ where $H_{q}^{T}$ is the transpose of $H_{q}$ and $d_{q}$ is the dimension of $H_{q}$ . The level of this self-attention is set to 4.
193
+
194
+ To obtain the image representation related to question representation, we align the image representation $H_{v}$ by utilizing the question representation $\tilde{H}_q$ as the guided vector. The similarity score $r$ between $H_{v}$ and $\tilde{H}_q$ is calculated as follows:
195
+
196
+ $$
197
+ r = \frac {\tilde {H} _ {q} H _ {v} ^ {T}}{\sqrt {d _ {v}}}, \tag {10}
198
+ $$
199
+
200
+ where $H_{v}^{T}$ is the transpose of $H_{v}$ and $d_v$ is the dimension of $H_{v}$ . A softmax function is used to normalize the score $r$ to obtain the weight score $\tilde{r}$ :
201
+
202
+ $$
203
+ \tilde {r} = \left[ \tilde {r} _ {1}, \dots , \tilde {r} _ {i} \right] = \frac {\exp (r _ {i})}{\sum_ {j \in \mu} \exp (r _ {j})} \tag {11}
204
+ $$
205
+
206
+ where $\mu$ is the number of image regions.
207
+
208
+ By multiplying the weight $\tilde{r}$ and the image representation $H_{v}$ , the updated image representation $\tilde{H}_{v}$ is obtained:
209
+
210
+ $$
211
+ \tilde {H} _ {v} = \tilde {r} \cdot H _ {v}. \tag {12}
212
+ $$
213
+
214
+ The level of this question-guided image attention is set to 4. The final outputs of the attention alignment module are $\tilde{H}_q$ and $\tilde{H}_v$.
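Equations (10)-(12) together form the question-guided image attention. Under our assumed shapes (words × $d$ for $\tilde{H}_q$, objects × $d$ for $H_v$), the whole step can be sketched as:

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)  # stability shift
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def align_image_with_question(H_q_tilde, H_v):
    """Eqs. (10)-(12): question-guided reweighting of the image objects."""
    d_v = H_v.shape[-1]
    r = H_q_tilde @ H_v.T / np.sqrt(d_v)  # Eq. (10): similarity scores
    r_tilde = softmax(r, axis=-1)         # Eq. (11): normalise over the mu objects
    return r_tilde @ H_v                  # Eq. (12): attended image features
```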
215
+
216
+ # 3.4 Answer Prediction
217
+
218
+ We apply the linear multimodal fusion method to fuse two representations $\tilde{H}_q$ and $\tilde{H}_v$ as follows:
219
+
220
+ $$
221
+ H _ {r} = W _ {v} ^ {T} \tilde {H} _ {v} + W _ {q} ^ {T} \tilde {H} _ {q}, \tag {13}
222
+ $$
223
+
224
+ $$
225
+ \operatorname {p r e d} = \operatorname {s o f t m a x} \left(W _ {e} H _ {r} + b _ {e}\right), \tag {14}
226
+ $$
227
+
228
+ where $W_{v}, W_{q}, W_{e}$, and $b_{e}$ are learned parameters, and pred denotes the probability distribution over the answer vocabulary, which contains $M$ candidate answers. Following (Yu et al., 2019), we use the binary cross-entropy loss function to train the answer classifier.
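Equations (13)-(14) are a linear fusion followed by a softmax classifier over the $M$ answers. A sketch, treating the two representations as single $d$-dimensional vectors (a shape assumption on our part):

```python
import numpy as np

def predict_answer(Hq_tilde, Hv_tilde, Wv, Wq, We, be):
    """Eqs. (13)-(14). Hq_tilde, Hv_tilde: (d,); Wv, Wq: (d, d);
    We: (M, d); be: (M,). Returns a probability vector over M answers."""
    H_r = Wv.T @ Hv_tilde + Wq.T @ Hq_tilde  # Eq. (13): linear multimodal fusion
    logits = We @ H_r + be                   # Eq. (14), pre-softmax scores
    e = np.exp(logits - logits.max())        # softmax over the answer vocabulary
    return e / e.sum()
```

In training, the paper optimises these parameters with a binary cross-entropy loss rather than reading the softmax output directly.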
229
+
230
+ # 4 Experiments
231
+
232
+ # 4.1 Datasets
233
+
234
+ VQA-v2 (Goyal et al., 2017) is the most commonly used VQA benchmark dataset, which is split into train, val, and test-standard sets. $25\%$ of the test-standard set serves as the test-dev set. Each question has 10 answers from different annotators, and the answer with the highest frequency is treated as the ground truth. All answer types can be divided into Yes/No, Number, and Other. VQA-CP-v2 (Agrawal et al., 2018) is a derivation of the VQA-v2 dataset, introduced to evaluate and reduce question-oriented bias in VQA models. Due to the significant difference in distribution between its train and test sets, the VQA-CP-v2 dataset is harder than the VQA-v2 dataset.
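The ground-truth selection rule (the most frequent of the 10 annotator answers) can be sketched as follows; tie-breaking is not specified in the text, so the first-seen rule below is our assumption:

```python
from collections import Counter

def majority_answer(answers):
    """Return the most frequent annotator answer (ties: first encountered)."""
    return Counter(answers).most_common(1)[0][0]
```

For example, `majority_answer(["blue"] * 6 + ["green"] * 4)` returns `"blue"`.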
235
+
236
+ # 4.2 Experimental Setup
237
+
238
+ We use the Adam optimizer (Kingma and Ba, 2014) with parameters $\alpha = 0.0001$, $\beta_{1} = 0.9$, and $\beta_{2} = 0.99$. The size of the answer vocabulary is set to $M = 3,129$ as in (Anderson et al., 2018). The base learning rate is set to 0.0001. After 15 epochs, the learning rate is decayed by 1/5 every 2 epochs. All models are trained for up to 20 epochs with the same batch size of 64 and hidden size of 512. Each image has $\mu \in [10,100]$ object regions, and all questions are padded or truncated to the same length of 14, i.e., $\lambda = 14$. The levels of the stacked layer $L$ and of the attention alignment module are both 4.
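The learning-rate schedule above can be written out explicitly. Reading "decayed by 1/5" as multiplication by 0.2, and applying the first decay at epoch 15 (both readings are our assumptions), the per-epoch rate is:

```python
def learning_rate(epoch, base_lr=1e-4, decay_start=15, decay_every=2, decay_factor=0.2):
    """Constant base LR, then multiply by decay_factor every
    decay_every epochs from decay_start onwards (0-indexed epochs)."""
    if epoch < decay_start:
        return base_lr
    steps = (epoch - decay_start) // decay_every + 1
    return base_lr * decay_factor ** steps
```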
241
+
242
+ # 4.3 Experimental Results
243
+
244
+ Table 2 shows the performance of our DC-GCN model and baseline models trained with the widely-used VQA-v2 dataset. All results in our paper are based on single-model performance. For a fair comparison, we also train our model with the extra Visual Genome dataset (Krishna et al., 2017). Bottom-Up
245
+
246
+ <table><tr><td rowspan="2">Model</td><td colspan="4">Test-dev</td><td>Test-standard</td></tr><tr><td>Y/N</td><td>Num</td><td>Other</td><td>All</td><td>All</td></tr><tr><td>Bottom-Up (Anderson et al., 2018)</td><td>81.82</td><td>44.21</td><td>56.05</td><td>65.32</td><td>65.67</td></tr><tr><td>DCN (Nguyen and Okatani, 2018)</td><td>83.51</td><td>46.61</td><td>57.26</td><td>66.87</td><td>66.97</td></tr><tr><td>Counter (Zhang et al., 2018a)</td><td>83.14</td><td>51.62</td><td>58.97</td><td>68.09</td><td>68.41</td></tr><tr><td>BAN (Kim et al., 2018)</td><td>85.31</td><td>50.93</td><td>60.26</td><td>69.52</td><td>-</td></tr><tr><td>DFAF (Gao et al., 2019)</td><td>86.09</td><td>53.32</td><td>60.49</td><td>70.22</td><td>70.34</td></tr><tr><td>Erase-Att (Liu et al., 2019)</td><td>85.87</td><td>50.28</td><td>61.10</td><td>70.07</td><td>70.36</td></tr><tr><td>ReGAT (Li et al., 2019)</td><td>86.08</td><td>54.42</td><td>60.33</td><td>70.27</td><td>70.58</td></tr><tr><td>MCAN (Yu et al., 2019)</td><td>86.82</td><td>53.26</td><td>60.72</td><td>70.63</td><td>70.90</td></tr><tr><td>DC-GCN (ours)</td><td>87.32</td><td>53.75</td><td>61.45</td><td>71.21</td><td>71.54</td></tr></table>
247
+
248
+ Table 2: Comparison with previous state-of-the-art methods on VQA-v2 test dataset. "-" means data absence. Answer types consist of Yes/No, Num and Other categories. All means the total accuracy rate. All results in our paper are based on single-model performance.
249
+
250
+ (Anderson et al., 2018) proposes using features based on Faster R-CNN (Ren et al., 2015) instead of ResNet (He et al., 2016). Dense Co-Attention Network (DCN) (Nguyen and Okatani, 2018) utilizes a dense stack of multiple co-attention layers. The counting method (Zhang et al., 2018a) excels at counting questions by utilizing the information of bounding boxes. DFAF (Gao et al., 2019) dynamically fuses intra- and inter-modality information. ReGAT (Li et al., 2019) models semantic, spatial, and implicit relations via a graph attention network. MCAN (Yu et al., 2019) utilizes deep modular networks to learn multimodal feature representations and is a state-of-the-art approach on the VQA-v2 dataset. As shown in Table 2, our model improves the overall accuracy over DFAF and MCAN by $1.2\%$ and $0.6\%$ on the test-standard set,
251
+
252
+ ![](images/e86bceee47adab349ce362c60cd1ec7dc4f5527737a8dd995b723c8283b5bb33.jpg)
253
+
254
+ ![](images/b6a1247eaf3e12b7d538251c4368a615c8df788e142aed1157acacfc9d5d0c29.jpg)
255
+ (a) Q-GCN(2)
256
+
257
+ ![](images/1e0e14a090926bf819f54dd87d2cb3f4d0678e6c81795e1b893912f6ea7a7c02.jpg)
258
+ (c) I-GCN(2)
259
+
260
+ ![](images/784edca99ba9c2a20e25ae9a63a58e35f6042f018022afad436d858355b6ef5d.jpg)
261
+ (e) Align Image with Question (3)
262
+
263
+ ![](images/6913edfd2580f89a638f8feb84897379dbebba96963e7345ca97cec7f6665de2.jpg)
264
+ Q: What color is the umpire's shirt?
265
+ Ground Truth Answer: blue
266
+ Predicted Answer: blue
267
+ (by our DC-GCN model)
268
+ (b) Q-GCN(4)
269
+
270
+ ![](images/72f70cfb7e84384da7b77177a7381b085474b57fa91ec016893109dd0a4becb7.jpg)
271
+ (d) I-GCN(4)
272
+
273
+ ![](images/f9f0d3ed5c221617f9ee5858249d4557be16425992df9c3e55fdb9d1808d760f.jpg)
274
+ (f) Align Image with Question (4)
275
+ Figure 6: Visualizations of the learned attention maps of the Q-GCN module, I-GCN module, and Attention Alignment module from some typical layers. We regard the correlation score between nodes as the attention score. Q-GCN(l) and I-GCN(l) denote the question GCN attention maps and image GCN attention maps from the $l$-th layer, respectively, as shown in (a), (b), (c), and (d); (e) and (f) show the question-guided image attention weights of the Attention Alignment module in the $l$-th layer. For the sake of presentation, we only consider 20 object regions in an image. The index within [1, 20] shown on the axes of the attention maps corresponds to each object in the image. For better visualization, we highlight in the image the objects corresponding to the 4-th, 6-th, 9-th, and 12-th indexes.
276
+
277
+ respectively. Although it still cannot achieve comparable performance in the Num category with respect to ReGAT (the best model on the counting sub-task), our DC-GCN outperforms it in the other categories (e.g., $Y/N$ by $1.2\%$, Other by $1.1\%$, and Overall by $0.9\%$). This shows that DC-GCN has relation-capturing ability for answering all kinds of questions by sufficiently exploring the semantics of both object appearances and object relations. In summary, our DC-GCN achieves outstanding performance on the VQA-v2 dataset.
278
+
279
+ To demonstrate the generalizability of our DC-GCN model, we also conduct experiments on the VQA-CP-v2 dataset. To overcome the language biases of the VQA-v2 dataset, Agrawal et al. (2018) designed the VQA-CP-v2 dataset and specifically proposed the GVQA model for reducing the influence of language biases. Table 3 shows the results on the VQA-CP-v2 test split. Murel (Cadene et al., 2019a) and ReGAT (Li et al., 2019), which build relations between objects for reasoning and question answering, are the state-of-the-art models. Our DC-GCN model surpasses both Murel and ReGAT on VQA-CP-v2 (41.47 vs. 39.54 and 41.47 vs. 40.42), a performance gain of $+1.05\%$. Although our proposed method is not designed for the VQA-CP-v2 dataset, our model has a slight ad
280
+
281
+ <table><tr><td>Model</td><td>Acc. (%)</td></tr><tr><td>RAMEN (Robik Shrestha, 2019)</td><td>39.21</td></tr><tr><td>BAN (Kim et al., 2018) *</td><td>39.31</td></tr><tr><td>Murel (Cadene et al., 2019a)</td><td>39.54</td></tr><tr><td>ReGAT-Sem (Li et al., 2019)</td><td>39.54</td></tr><tr><td>ReGAT-Imp (Li et al., 2019)</td><td>39.58</td></tr><tr><td>ReGAT-Spa (Li et al., 2019)</td><td>40.30</td></tr><tr><td>ReGAT (Li et al., 2019)</td><td>40.42</td></tr><tr><td>GVQA (Agrawal et al., 2018) #</td><td>31.30</td></tr><tr><td>UpDn (Anderson et al., 2018) **</td><td>39.74</td></tr><tr><td>UpDn + Q-Adv + DoE (Ramakrishnan et al., 2018) #</td><td>41.17</td></tr><tr><td>DC-GCN (ours)</td><td>41.47</td></tr></table>
282
+
283
+ Table 3: Model accuracy on the VQA-CP-v2 benchmark (open-ended setting on the test split). The results of models with * and ** are obtained from the work (Robik Shrestha, 2019) and (Ramakrishnan et al., 2018), respectively. Models with # are designed for solving the language biases. The ReGAT model consists of Semantic (Sem), Implicit (Imp), and Spatial (Spa) relation encoder.
284
+
285
+ vantage over the $UpDn + Q\text{-}Adv + DoE$ model. The results on the VQA-CP-v2 dataset show that dependency parsing and DC-GCN can effectively reduce question-based overfitting.
286
+
287
+ # 4.4 Qualitative Analysis
288
+
289
+ In Figure 6, we visualize the learned attention maps from the I-GCN module, Q-GCN module, and Attention Alignment module. Due to space limitations, we only show one example and visualize six attention maps from different attention units and different layers. From the results, we have the following observations.
292
+
293
+ Question GCN Module: The attention maps of Q-GCN(2) focus on the words color and shirt, as shown in Figure 6(a), while the attention maps of Q-GCN(4) correctly focus on the words color, umpire's, and shirt, as shown in Figure 6(b). These words have larger weights than the others. That is to say, the keywords color, umpire's, and shirt are identified correctly.
294
+
295
+ Image GCN Module: For the sake of presentation, we only consider 20 object regions in an image. The index within [1, 20] shown on the axes of the attention maps corresponds to each object in the image. Among these indexes, indexes 4, 6, 9, and 12 are the most relevant ones for the question. Compared with I-GCN(2), which focuses on the 4-th, 6-th, 9-th, 12-th, and 14-th objects (cf. Figure 6(c)), I-GCN(4) focuses more on the 4-th, 6-th, and 12-th objects, where the 4-th object has a larger weight than the 6-th and 12-th objects, as shown in Figure 6(d). The 4-th object region is the ground-truth region, while the 6-th, 9-th, and 12-th object regions are the most relevant ones.
296
+
297
+ Attention Alignment Module: Given a specific question, a model needs to align image objects guided by the question to update the representations of objects. As shown in Figure 6(e), the focused regions are rather scattered, where the key regions are mainly the 4-th, 9-th, and 12-th object regions. Through the guidance of the identified words color, umpire's, and shirt, the DC-GCN model gradually pays more attention to the 4-th, 9-th, and 12-th object regions rather than other irrelevant object regions, as shown in Figure 6(f). This alignment process demonstrates that our model can capture the relations among multiple similar objects.
298
+
299
+ We also visualize some negative examples predicted by our DC-GCN model. As shown in Figure 7, they can be classified into three categories: (1) limitation of object detection; (2) text semantic understanding in scenes; (3) subjective judgment. In Figure 7(a), although the question how many sheep are pictured is not difficult, the image content is genuinely confusing. Without careful observation, it is easy to obtain the wrong answer 2 instead of 3. The reasons for this error include object occlusion, varying object distances, and the limitation
300
+
301
+ ![](images/a0cf294a456aee84bffe88116b3eb426b5a6ebb2df47842998425abc7b517297.jpg)
302
+ (a)
303
+ Q: how many sheep are pictured Ground Truth Answer: 3
304
+
305
+ ![](images/2dd26dc6a4e44fd6766d7432d5518b4ad221e110c98fb3b3c94d0b17963bb795.jpg)
306
+ Q: how many sheep are pictured.
307
+ Predicted Answer: 2
308
+
309
+ ![](images/6e6489429093d1bea1336a5fd4e93881f66695a2b9fdeeb679cfd6644b02a566.jpg)
310
+ (b)
311
+ Q: what time should you pay Ground Truth Answer: 8 am to 8 pm
312
+
313
+ ![](images/28c86626c4470342c26a279f584207f1d2a79ae67c82016957d7ff9192f93248.jpg)
314
+ Q: what time should you pay
315
+ Predicted Answer: nothing
316
+
317
+ ![](images/2a716fbe415898499c54679834faeac6f0d52914c16dc219e0ec5a1f44ef8b23.jpg)
318
+ (c)
319
+ Q: is this man happy Ground Truth Answer: no
320
+ Figure 7: We summarize three types of incorrect examples: limitation of object detection, text semantic understanding and subjective judgment which correspond to (a), (b), and (c), respectively.
321
+
322
+ ![](images/637c6d17d39d7a3c167f36c113260f168e04ab194ba857599b9be4ab2d2dc952.jpg)
323
+ Q: is this man happy Predicted Answer: yes
324
+
325
+ of object detection. The image feature extractor is based on the Faster R-CNN model (Ren et al., 2015), so the accuracy of object detection can indirectly affect the accuracy of feature extraction. The counting subtask in VQA still has large room for improvement. In Figure 7(b), the question what time should you pay can be answered only by understanding the text in the image. Text semantic understanding belongs to another task, namely text visual question answering (Biten et al., 2019), which requires recognizing the numbers, symbols, and proper nouns in a scene. In Figure 7(c), subjective judgment is needed to answer the question is this man happy. Making this judgment requires common-sense knowledge and real-life experience. Specifically, someone is holding a banana pointed at the man as if holding a gun towards him, so he is unhappy. Our model cannot perform such human-like analysis to make a subjective judgment, and thus fails to predict the correct answer no.
326
+
327
+ Finally, to understand the distribution of the three error types, we randomly pick 100 samples from the dev set of VQA-v2. The numbers of the three error types (i.e., overlapping objects, text semantic understanding, and subjective judgment) are 3, 3, and 29, respectively. The predicted answers for the first two question types are all incorrect. The last type has 12 incorrect answers, which means the error rate of this question type is $41.4\%$. These observations are helpful for making further improvements in the future.
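The 41.4% figure follows directly from the counts: 12 incorrect answers out of the 29 subjective-judgment samples.

```python
# Error rate of the subjective-judgment question type
incorrect, total = 12, 29
error_rate = 100 * incorrect / total
print(round(error_rate, 1))  # → 41.4
```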
330
+
331
+ # 4.5 Ablation Study
332
+
333
+ We perform extensive ablation studies on the VQA-v2 validation dataset (cf. Table 4). The experimental results are based on one block of our DC-GCN model. All modules inside DC-GCN have the same dimension of 512. The learning rate is 0.0001 and the batch size is 32.
334
+
335
+ <table><tr><td>Component</td><td>Setting</td><td>Acc. (%)</td></tr><tr><td>Bottom-Up (Anderson et al., 2018)</td><td>Bottom-Up</td><td>63.15</td></tr><tr><td>Default</td><td>DC-GCN</td><td>66.57</td></tr><tr><td rowspan="3">GCN Types</td><td>DC-GCN</td><td>66.57</td></tr><tr><td>w/o I-GCN</td><td>65.52</td></tr><tr><td>w/o Q-GCN</td><td>66.15</td></tr><tr><td rowspan="8">Dependency relations</td><td>- det</td><td>66.50</td></tr><tr><td>- case</td><td>66.42</td></tr><tr><td>- cop</td><td>66.01</td></tr><tr><td>- aux</td><td>66.48</td></tr><tr><td>- advmod</td><td>66.53</td></tr><tr><td>- compound</td><td>66.35</td></tr><tr><td>- det case</td><td>65.23</td></tr><tr><td>- det case cop</td><td>64.11</td></tr></table>
336
+
337
+ Table 4: Ablation studies of our proposed model on the VQA-v2 validation dataset. The experimental results are based on one block of our DC-GCN model. $w/o$ means removing a certain module from the DC-GCN model. Detailed descriptions of the dependency relations are shown in Table 1.
338
+
339
+ Firstly, we investigate the influence of GCN types. There are two GCN types: I-GCN and Q-GCN, as shown in Table 4. When removing the I-GCN, the performance of our model decreases from $66.57\%$ to $65.52\%$ ($p$-value $= 3.22\mathrm{E}{-}08 < 0.05$). When removing the Q-GCN, the performance of our model slightly decreases from $66.57\%$ to $66.15\%$ ($p$-value $= 2.04\mathrm{E}{-}07 < 0.05$). We consider that there are two reasons. One is that the image content is more complex than the question content and hence carries richer semantic information; building relations between objects helps clarify what the image represents and helps align it with the question representations. The other is that questions are short and contain less information (e.g., what animal is this? and what color is the man's shirt?).
340
+
341
+ Then, we perform an ablation study on the influence of dependency relations (cf. Table 1). Relations such as nsubj, nmod, dobj, and amod are crucial to semantic representations; therefore, we do not remove them from the sentence. As shown in Table 4, removing relations such as det, case, aux, and advmod individually has a trivial influence on the semantic representation of the question. But the accuracy decreases significantly when we simultaneously remove the relations det, case, and cop. The reason may be that the sentence loses too much information and can no longer fully express the meaning of the original sentence. For example, consider the two phrases on the table and under the table. If we remove the relation case, meaning that the words on and under are removed, it becomes hard to distinguish whether something is on the table or under the table.
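The relation-removal ablation amounts to deleting every word whose incoming dependency relation is in the removed set. A toy sketch (the parse below is our own illustration, not from the paper's data):

```python
def drop_relations(parsed, removed):
    """Keep tokens whose dependency relation is not in `removed`."""
    return [token for token, relation in parsed if relation not in removed]

# Toy dependency parse of "the book on the table"
parsed = [("the", "det"), ("book", "root"),
          ("on", "case"), ("the", "det"), ("table", "nmod")]
print(drop_relations(parsed, {"det", "case"}))  # → ['book', 'table']
```

Dropping case here removes on, which is exactly the on the table vs. under the table ambiguity discussed above.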
344
+
345
+ # 5 Conclusion
346
+
347
+ In this paper, we propose a dual channel graph convolutional network to explore the relations between objects in an image and the syntactic dependency relations between words in a question. Furthermore, we explicitly construct the relations between words by dependency tree and align the image and question representations by an attention alignment module to reduce the gaps between vision and language. Extensive experiments on the VQA-v2 and VQA-CP-v2 datasets demonstrate that our model achieves comparable performance with the state-of-the-art approaches. We will explore more complicated object relation modeling in future work.
348
+
349
+ # Acknowledgements
350
+
351
+ We thank the anonymous reviewers for valuable comments and thoughtful suggestions. We would also like to thank Professor Yuzhang Lin from University of Massachusetts Lowell for helpful discussions.
352
+
353
+ This work was supported by the Fundamental Research Funds for the Central Universities, SCUT (No.2017ZD048, D2182480), the Science and Technology Planning Project of Guangdong Province (No.2017B050506004), the Science and Technology Programs of Guangzhou (No.201704030076, 201802010027, 201902010046) and the collaborative research grants from the Guangxi Natural Science Foundation (2017GXNSFAA198225) and the Hong Kong Research Grants Council (project no. PolyU 1121417 and project no. C1031-18G), and an internal research grant from the Hong Kong Polytechnic University (project 1.9B0V).
354
+
355
+ # References
356
+
357
+ Aishwarya Agrawal, Dhruv Batra, Devi Parikh, and Aniruddha Kembhavi. 2018. Don't just assume; look and answer: Overcoming priors for visual question answering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4971-4980.
358
+ Peter Anderson, Xiaodong He, Chris Buehler, Damien Teney, Mark Johnson, Stephen Gould, and Lei Zhang. 2018. Bottom-up and top-down attention for image captioning and visual question answering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6077-6086.
359
+ Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C Lawrence Zitnick, and Devi Parikh. 2015. Vqa: Visual question answering. In Proceedings of the IEEE international conference on computer vision, pages 2425-2433.
360
+ Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473.
361
+ Ali Furkan Biten, Ruben Tito, Andres Mafla, Lluis Gomez, Marçal Rusiñol, Ernest Valveny, C. V. Jawahar, and Dimosthenis Karatzas. 2019. Scene text visual question answering. CoRR, abs/1905.13648.
362
+ Remi Cadene, Hedi Ben-Younes, Matthieu Cord, and Nicolas Thome. 2019a. Murel: Multimodal relational reasoning for visual question answering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1989-1998.
363
+ Remi Cadene, Corentin Dancette, Hedi Ben younes, Matthieu Cord, and Devi Parikh. 2019b. Rubi: Reducing unimodal biases for visual question answering. In Advances in Neural Information Processing Systems, pages 841-852.
364
+ Qingxing Cao, Xiaodan Liang, Bailing Li, Guanbin Li, and Liang Lin. 2018. Visual question reasoning on general dependency tree. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 7249-7257.
365
+ Marie-Catherine De Marneffe, Timothy Dozat, Natalia Silveira, Katri Haverinen, Filip Ginter, Joakim Nivre, and Christopher D Manning. 2014. Universal stanford dependencies: A cross-linguistic typology. In LREC, volume 14, pages 4585-4592.
366
+ Peng Gao, Zhengkai Jiang, Haoxuan You, Pan Lu, Steven CH Hoi, Xiaogang Wang, and Hongsheng Li. 2019. Dynamic fusion with intra-and inter-modality attention flow for visual question answering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6639-6648.
367
+ Yash Goyal, Tejas Khot, Douglas Summers-Stay, Dhruv Batra, and Devi Parikh. 2017. Making the v in vqa matter: Elevating the role of image understanding
368
+
369
+ in visual question answering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6904-6913.
370
+ Zhijiang Guo, Yan Zhang, and Wei Lu. 2019. Attention guided graph convolutional networks for relation extraction. In Proceedings of the 57th Conference of the Association for Computational Linguistics (ACL), pages 241-251.
371
+ Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770-778.
372
+ Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735-1780.
373
+ Jin-Hwa Kim, Jaehyun Jun, and Byoung-Tak Zhang. 2018. Bilinear attention networks. In Advances in Neural Information Processing Systems, pages 1564-1574.
374
+ Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.
375
+ Ranjay Krishna, Yuke Zhu, Oliver Groth, Justin Johnson, Kenji Hata, Joshua Kravitz, Stephanie Chen, Yannis Kalantidis, Li-Jia Li, David A Shamma, et al. 2017. Visual genome: Connecting language and vision using crowdsourced dense image annotations. International Journal of Computer Vision, 123(1):32-73.
376
+ Linjie Li, Zhe Gan, Yu Cheng, and Jingjing Liu. 2019. Relation-aware graph attention network for visual question answering. Proceedings of the IEEE International Conference on Computer Vision, pages 10313-10322.
377
+ Xihui Liu, Zihao Wang, Jing Shao, Xiaogang Wang, and Hongsheng Li. 2019. Improving referring expression grounding with cross-modal attention-guided erasing. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1950-1959.
378
+ Jiasen Lu, Jianwei Yang, Dhruv Batra, and Devi Parikh. 2016. Hierarchical question-image co-attention for visual question answering. In Advances In Neural Information Processing Systems, pages 289–297.
379
+ Duy-Kien Nguyen and Takayuki Okatani. 2018. Improved fusion of visual and language representations by dense symmetric co-attention for visual question answering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6087-6096.
380
+ Marco Pedersoli, Thomas Lucas, Cordelia Schmid, and Jakob Verbeek. 2017. Areas of attention for image captioning. In Proceedings of the IEEE International Conference on Computer Vision, pages 1242-1250.
381
+
382
+ Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), pages 1532-1543.
383
+ Sainandan Ramakrishnan, Aishwarya Agrawal, and Stefan Lee. 2018. Overcoming language priors in visual question answering with adversarial regularization. In Advances in Neural Information Processing Systems, pages 1541-1551.
384
+ Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. 2015. Faster r-cnn: Towards real-time object detection with region proposal networks. In Advances in neural information processing systems, pages 91-99.
385
+ Robik Shrestha, Kushal Kafle, and Christopher Kanan. 2019. Answer them all! toward universal visual question answering models. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 10472-10481.
386
+ Kevin J Shih, Saurabh Singh, and Derek Hoiem. 2016. Where to look: Focus regions for visual question answering. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 4613-4621.
387
+ Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information processing systems, pages 5998-6008.
388
+ Danfei Xu, Yuke Zhu, Christopher B Choy, and Li Fei-Fei. 2017. Scene graph generation by iterative message passing. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5410-5419.
389
+ Jianwei Yang, Jiasen Lu, Stefan Lee, Dhruv Batra, and Devi Parikh. 2018. Graph r-cnn for scene graph generation. In Proceedings of the European Conference on Computer Vision (ECCV), pages 670-685.
390
+ Zichao Yang, Xiaodong He, Jianfeng Gao, Li Deng, and Alex Smola. 2016. Stacked attention networks for image question answering. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 21-29.
391
+ Liang Yao, Chengsheng Mao, and Yuan Luo. 2019. Graph convolutional networks for text classification. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 7370-7377.
392
+ Ting Yao, Yingwei Pan, Yehao Li, and Tao Mei. 2018. Exploring visual relationship for image captioning. In Proceedings of the European Conference on Computer Vision (ECCV), pages 684-699.
393
+ Zhou Yu, Jun Yu, Yuhao Cui, Dacheng Tao, and Qi Tian. 2019. Deep modular co-attention networks for visual question answering. In Proceedings of the
394
+
395
+ IEEE Conference on Computer Vision and Pattern Recognition, pages 6281-6290.
396
+ Zhou Yu, Jun Yu, Jianping Fan, and Dacheng Tao. 2017. Multi-modal factorized bilinear pooling with co-attention learning for visual question answering. In Proceedings of the IEEE international conference on computer vision, pages 1821-1830.
397
+ Zhou Yu, Jun Yu, Chenchao Xiang, Jianping Fan, and Dacheng Tao. 2018. Beyond bilinear: Generalized multimodal factorized high-order pooling for visual question answering. IEEE transactions on neural networks and learning systems, 29(12):5947-5959.
398
+ Yan Zhang, Jonathon Hare, and Adam Prügel-Bennett. 2018a. Learning to count objects in natural images for visual question answering. International Conference on Learning Representation (ICLR).
399
+ Yuhao Zhang, Peng Qi, and Christopher D Manning. 2018b. Graph convolution over pruned dependency trees improves relation extraction. In Proceedings of the 2018 conference on empirical methods in natural language processing (EMNLP), pages 2205-2215.
aligneddualchannelgraphconvolutionalnetworkforvisualquestionanswering/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:bca41e3378745dc37880532453709d673671f86ccd3dcd33120252c08ed67497
3
+ size 568880
aligneddualchannelgraphconvolutionalnetworkforvisualquestionanswering/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:6194d700b11fa73764e05c664d91ce441398ce9a936364dda14d0f72658101c1
3
+ size 483532
amalgamationofproteinsequencestructureandtextualinformationforimprovingproteinproteininteractionidentification/92d0e682-068d-464f-acb5-ea4912594047_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:ae48b768ccaa591cccd0891690ad5bea23446fa2728b2fb23a94f16c0f69a72f
3
+ size 73114
amalgamationofproteinsequencestructureandtextualinformationforimprovingproteinproteininteractionidentification/92d0e682-068d-464f-acb5-ea4912594047_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:0465254ec3dc89009002cbaa72b944678749b5a3879f52f5a4ba27703591f14c
3
+ size 94756
amalgamationofproteinsequencestructureandtextualinformationforimprovingproteinproteininteractionidentification/92d0e682-068d-464f-acb5-ea4912594047_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:93f228a7d59aa39de3e08d5ba0a4ac1cc1afde3dab70eb1cb1a0f3ea677b5e0a
3
+ size 3685402
amalgamationofproteinsequencestructureandtextualinformationforimprovingproteinproteininteractionidentification/full.md ADDED
@@ -0,0 +1,297 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Amalgamation of protein sequence, structure and textual information for improving protein-protein interaction identification
2
+
3
+ Pratik Dutta, Sriparna Saha
4
+
5
+ Department of Computer Science & Engineering
6
+
7
+ Indian Institute of Technology Patna
8
+
9
+ (pratik.pcs16, sriparna)@iitp.ac.in
10
+
11
+ # Abstract
12
+
13
+ An in-depth exploration of protein-protein interactions (PPI) is essential to understand the metabolism in addition to the regulations of biological entities like proteins, carbohydrates, and many more. Most of the recent PPI tasks in the BioNLP domain have been carried out solely using textual data. In this paper, we argue that the incorporation of multimodal cues can improve the automatic identification of PPI. As a first step towards enabling the development of multimodal approaches for PPI identification, we have developed two multimodal datasets which are extensions and multimodal versions of two popular benchmark PPI corpora (BioInfer and HRPD50). Besides the existing textual modality, two new modalities, 3D protein structure and underlying genomic sequence, are added to each instance. Further, a novel deep multi-modal architecture is implemented to efficiently predict the protein interactions from the developed datasets. A detailed experimental analysis reveals the superiority of the multi-modal approach in comparison to strong baselines, including uni-modal approaches and state-of-the-art methods, over both generated multimodal datasets. The developed multi-modal datasets are available for use at https://github.com/sduttap16/MM_PPI_NLP.
14
+
15
+ # 1 Introduction
16
+
17
+ Understanding protein-protein interactions (PPI) is indispensable to comprehend different biological processes such as translation, protein functions (Kulmanov et al., 2017), gene functions (Dutta and Saha, 2017; Dutta et al., 2019b), metabolic pathways, etc. The PPI information helps researchers to discover disease mechanisms and plays a seminal role in designing therapeutic drugs (Goncearenco et al., 2017). Over the years, a significant amount of protein-protein interaction information has been published in scientific articles
18
+
19
+ in unstructured text formats. However, in recent years, there has been an exponential rise in the number of biomedical publications (Khare et al., 2014). Therefore, it becomes imperative, urgent and of extreme interest to develop an intelligent information extraction system to assist biologists in curating and maintaining PPI databases.
20
+
21
This pressing need has motivated Biomedical Natural Language Processing (BioNLP) researchers to automatically extract PPI information by exploring various AI techniques. Recent advancements in deep learning (LeCun et al., 2015; Bengio et al., 2007) have opened up new avenues for solving different well-known problems, ranging from computational biology (Alipanahi et al., 2015; Dutta et al., 2019a) and machine translation (Cho et al., 2014) to image captioning (Chen et al., 2017). Subsequently, there is a notable trend of using deep learning for solving different natural language processing (NLP) tasks in the biomedical and clinical domains (Asada et al., 2018; Alimova and Tutubalina, 2019), including the identification of protein-protein interactions from biomedical corpora (Yadav et al., 2019; Peng and Lu, 2017). Multi-modal deep learning models, which combine information from multiple sources/modalities, show promising results compared to conventional single-modality models on various NLP tasks like sentiment and emotion recognition (Qureshi et al., 2019, 2020), natural language generation and machine translation (Poria et al., 2018; Zhang et al., 2019; Qiao et al., 2019; Fan et al., 2019), etc. There exist a few popular multi-modal datasets which are extensively used for solving various problems in NLP, like emotion recognition from conversations (Poria et al., 2018; Chen et al., 2018), image captioning (Lin et al., 2014), sentiment analysis (Zadeh et al., 2016), etc. Compared to single-modality approaches, multi-modal techniques provide a more comprehensive perspective of the dataset under consideration.

Despite the popularity of multi-modal approaches for traditional NLP tasks, there is a dearth of multi-modal datasets in the BioNLP domain, especially for the PPI identification task. The available PPI benchmark datasets contain solely the textual descriptions of different protein pairs, which do not help in anticipating the molecular properties of the proteins. Hence, along with the textual information, the incorporation of molecular structure or the underlying genomic sequence can aid in understanding the regulation of protein interactions. The integration of multi-modal features can help in obtaining deeper insights, but the concept of a multimodal architecture combining textual and biological aspects has not been cultivated much in the BioNLP domain (Peissig et al., 2012; Jin et al., 2018).
# 1.1 Motivation and Contribution
The main motivation of this research work is to generate multi-modal datasets for the PPI identification task, where along with the textual information present in the biomedical literature, we also explore the genetic and structural information of the proteins. The biomedical and clinical text literature is an important resource for learning about physical interactions amongst protein molecules; however, it may not be adequate for exploring the biological aspects of these interactions. In the field of Bioinformatics, there are various web-based enriched archives<sup>1,2</sup> that contain multi-omics biological information regarding protein interactions. The integration of multi-omics information from these archives helps in understanding various physiological characteristics (Sun et al., 2019; Ray et al., 2014; Amemiya et al., 2019; Hsieh et al., 2017; Dutta et al., 2020). Hence, in our current work, along with the textual information from biomedical corpora, we have also incorporated the structural properties of protein molecules as biological information for solving the PPI task. For the structural information of proteins, we have considered the atomic structure (3D PDB structure) and the underlying nucleotide sequence (FASTA sequence) of the protein molecules. In the BioNLP domain, the collection of such biological data (multi-omics information) from a text corpus is somewhat difficult: to obtain the aforementioned information about the other modalities, we need to exploit different web-based archives that are meant for biological structures.
Drawing inspiration from these findings, we have generated a protein-protein interaction-based multi-modal dataset which includes not only textual information but also the structural counterparts of the proteins. Finally, a novel deep multi-modal architecture is developed to efficiently predict the protein-protein interactions by considering all modalities. The main contributions of this study are summarized as follows:
1. For this study, we extend and further improve two biomedical corpora containing PPI information for the multi-modal scenario by manually annotating them and web-crawling two different bio-enriched archives.
2. Our proposed multi-modal architecture uses a self-attention mechanism to integrate the extracted features of the different modalities.
3. This work is a step towards integrating multi-omics information with text mining from biomedical articles for enhancing PPI identification. To the best of our knowledge, this is the first attempt in this direction.
4. The results and the comparative study prove the effectiveness of our developed multi-modal datasets along with the proposed multi-modal architecture.
# 2 Related Works

There are a few works (Ono et al., 2001; Blaschke et al., 1999; Huang et al., 2004) that focus on rule-based PPI information extraction, such as co-occurrence rules (Stapley and Benoit, 1999), applied to biomedical texts. In (Giuliano et al., 2006), relations are extracted from entire sentences by considering shallow syntactic information. (Erkan et al., 2007) utilize semi-supervised learning and cosine similarity over the shortest dependency path (SDP) between protein entities. Important kernel-based methods for the PPI extraction task include the graph kernel (Airola et al., 2008a), bag-of-words (BoW) kernel (Sætre et al., 2007), edit-distance kernel (Erkan et al., 2007) and all-path kernel (Airola et al., 2008b). (Yadav et al., 2019) presented an attention-based bidirectional long short-term memory (BiLSTM) model that uses the SDP between protein pairs along with latent PoS and position embeddings for PPI extraction. Some popular deep learning-based PPI extraction techniques are reported by (Shweta et al., 2016; Zhao et al., 2016; Hua and Quan, 2016; Hsieh et al., 2017).

<table><tr><td rowspan="2">Generated Instances of our multi-modal dataset</td><td colspan="2">Protein pairs</td><td colspan="2">Gene pairs</td><td colspan="2">PDB ID pairs</td><td colspan="2">Ensembl ID pairs</td><td rowspan="2">Interaction type</td></tr><tr><td>Protein1</td><td>Protein2</td><td>Gene1</td><td>Gene2</td><td>PDB1</td><td>PDB2</td><td>Ensembl1</td><td>Ensembl2</td></tr><tr><td>Megalin and cubilin: multifunctional endocytic receptors PROTEIN1 and PROTEIN2 are two structurally different endocytic receptors that interact to serve such functions</td><td>Megalin</td><td>cubilin</td><td>LRP2</td><td>CUBN</td><td>2MOP</td><td>3KQ4</td><td>ENSG00000081479</td><td>ENSG00000107611</td><td>TRUE</td></tr><tr><td>Megalin and PROTEIN1: multifunctional endocytic receptors Megalin and PROTEIN2 are two structurally different endocytic receptors that interact to serve such functions</td><td>cubilin</td><td>cubilin</td><td>CUBN</td><td>CUBN</td><td>3KQ4</td><td>3KQ4</td><td>ENSG00000107611</td><td>ENSG00000107611</td><td>FALSE</td></tr><tr><td>PROTEIN1 and cubilin: multifunctional endocytic receptors Megalin and PROTEIN2 are two structurally different endocytic receptors that interact to serve such functions</td><td>cubilin</td><td>Megalin</td><td>CUBN</td><td>LRP2</td><td>3KQ4</td><td>2MOP</td><td>ENSG00000107611</td><td>ENSG00000081479</td><td>FALSE</td></tr><tr><td>Megalin and PROTEIN1: multifunctional endocytic receptors PROTEIN2 and cubilin are two structurally different endocytic receptors that interact to serve such functions</td><td>cubilin</td><td>Megalin</td><td>CUBN</td><td>LRP2</td><td>3KQ4</td><td>2MOP</td><td>ENSG00000107611</td><td>ENSG00000081479</td><td>FALSE</td></tr><tr><td>PROTEIN1 and PROTEIN2: multifunctional endocytic receptors Megalin and cubilin are two structurally different endocytic receptors that interact to serve such functions</td><td>cubilin</td><td>Megalin</td><td>CUBN</td><td>LRP2</td><td>3KQ4</td><td>2MOP</td><td>ENSG00000107611</td><td>ENSG00000081479</td><td>FALSE</td></tr><tr><td>PROTEIN1 and cubilin: multifunctional endocytic receptors PROTEIN2 and cubilin are two structurally different endocytic receptors that interact to serve such functions</td><td>Megalin</td><td>Megalin</td><td>LRP2</td><td>LRP2</td><td>2MOP</td><td>2MOP</td><td>ENSG00000081479</td><td>ENSG00000081479</td><td>FALSE</td></tr></table>
+
48
+ # 3 Dataset Formation and Preprocessing
49
+
50
+ In this study, we have extended, improved, and further developed two popular benchmark PPI corpora, namely BioInfer $^{3}$ and HRPD50 $^{4}$ dataset for the multi-modal scenario. Along with the textual information, these enhanced multi-modal datasets contain the biological counterparts of the interacting or non-interacting protein pairs. Biological information comes from the underlying FASTA sequence and the atomic structures of interacting protein pairs.
51
+
52
+ <sup>3</sup>http://corpora.informatik.hu-berlin.de/
53
+ 4https://goo.gl/M5tEJj
54
+
![](images/190782a7ae089104069d526badf232a747fdc3448f1c11c8164f8d8480e563d7.jpg)
Figure 1: An example of generating instances along with the structural and sequence counterparts of our multi-modal dataset from the HRPD50 dataset. PDB IDs and Ensembl IDs are utilized for obtaining the protein 3D atomic structures and the underlying FASTA sequences, respectively.

Figure 2: Statistics of positive and negative instances across our developed multi-modal datasets.
# 3.1 Dataset Preparation
Firstly, we extracted the data, primarily consisting of two or more protein entities, from the XML representations of the two PPI corpora mentioned earlier. To simplify these complex relations among multiple protein entities, we considered only a single protein pair at a time and determined whether or not the two proteins interact. Among these relations, we considered as positive instances those that are directly annotated in the dataset. The other pairs are considered non-interacting proteins, i.e., negative instances.
Consider an instance of the HRPD50 dataset, "Megalin and cubilin: multifunctional endocytic receptors Megalin and cubilin are two structurally different endocytic receptors that interact to serve such functions" (Figure 1). In this particular example, there are four protein entity mentions, but since we consider the interactions between two proteins at a time, we arrive at six possible relations (shown in the table of Figure 1). Among these relations, only one pair (Megalin, cubilin) is annotated as interacting proteins in the HRPD50 dataset. Hence, the number of instances in our dataset is much higher than those in the original BioInfer and HRPD50 datasets.
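The pair-generation scheme described above can be sketched as follows. The mention list and the index of the annotated positive pair are illustrative stand-ins for the actual corpus annotations:

```python
from itertools import combinations

def generate_instances(mentions, positive_pairs):
    """Enumerate all unordered pairs of protein mentions in a sentence.

    mentions: list of protein names, one per mention (duplicates allowed).
    positive_pairs: set of frozensets of mention indices annotated as
    interacting in the corpus.
    Returns (index pair, name pair, label) triples.
    """
    out = []
    for i, j in combinations(range(len(mentions)), 2):
        label = frozenset((i, j)) in positive_pairs
        out.append(((i, j), (mentions[i], mentions[j]), label))
    return out

# Four protein mentions occur in the HRPD50 example sentence; which
# mention pair is the annotated one is assumed here for illustration.
mentions = ["Megalin", "cubilin", "Megalin", "cubilin"]
instances = generate_instances(mentions, {frozenset((2, 3))})
assert len(instances) == 6                      # C(4,2) candidate pairs
assert sum(lbl for *_, lbl in instances) == 1   # one positive, five negative
```

This reproduces the six relations of the Figure 1 table, with a single positive instance and five negatives.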
After generating both positive and negative instances, we next downloaded the other two modalities. To download the genomic sequence and the 3D structure of a protein, its Ensembl ID and PDB ID must be known. However, the biological archives record the relationships between genes and PDB or Ensembl IDs rather than between proteins and these IDs. Hence, we manually annotated the respective gene name of each protein name and then used Python-based methods to retrieve the Ensembl ID and PDB ID of each of these genes. These IDs allowed us to download the underlying genomic sequence (FASTA sequence) from the Ensembl genome browser<sup>5</sup> and the structures of these proteins (3D PDB structures) from the RCSB Protein Data Bank<sup>6</sup> archive. The pre-processing and generation of the multi-modal datasets from the biomedical corpora are pictorially depicted in Figure 1. The complete exemplified multi-modal datasets are available at the provided GitHub link.

![](images/e9f1d181205492e4a57db7b63ae90070dcfba048e47884132ceb7d35d2be2fb0.jpg)
Figure 3: An overview of the proposed deep multi-modal architecture for predicting protein-protein interactions. For each modality, we have designed different deep learning based models which are finally integrated using self-attention mechanism.
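The retrieval step can be sketched as below. The Ensembl REST endpoint and RCSB file-server URL patterns are our assumptions about the public services involved; the actual pipeline additionally relies on the manual gene annotation described above:

```python
# URL patterns for the two biological archives (assumed public endpoints):
ENSEMBL_FASTA = "https://rest.ensembl.org/sequence/id/{id}?content-type=text/x-fasta"
RCSB_PDB = "https://files.rcsb.org/download/{id}.pdb"

def fasta_url(ensembl_id: str) -> str:
    """URL of the FASTA sequence for an Ensembl gene ID."""
    return ENSEMBL_FASTA.format(id=ensembl_id)

def pdb_url(pdb_id: str) -> str:
    """URL of the 3D atomic structure (PDB file) for a PDB ID."""
    return RCSB_PDB.format(id=pdb_id.upper())

# For the (Megalin, cubilin) pair of Figure 1:
print(fasta_url("ENSG00000081479"))
print(pdb_url("2mop"))
# A download is then a plain HTTP GET, e.g.
# urllib.request.urlretrieve(pdb_url("2MOP"), "2MOP.pdb")
```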
# 3.2 Dataset Annotation and Statistics

A major challenge in creating the dataset is to manually encode the relationships between genes and proteins, which is a many-to-many mapping for biological reasons. Hence, to find the genes most related to a particular protein, we engaged three annotators with strong biological knowledge. The disagreement between the annotators was less than $1\%$ and was resolved by majority voting. The total numbers of instances of the developed multi-modal datasets are shown in Figure 2.
# 4 Problem Formalization

Our goal is to develop a deep multi-modal architecture that can efficiently predict whether or not two proteins interact with each other from the developed multi-modal datasets. Formally, consider the multi-modal dataset $\mathbb{D} = \{S^i\}_{i=1}^N = \{(I_{Text}^i, I_{Struc}^i, I_{Seq}^i)\}_{i=1}^N$ consisting of $N$ instances. $\forall i \in \{1, 2, \dots, N\}$, $I_{Text}^i$, $I_{Struc}^i$ and $I_{Seq}^i$ represent the textual, structural and sequence modality of the instance $S^i$, respectively. The proposed PPI task for an instance $S^i$ is mathematically formulated as

$$
f _ {a c t} \Big (f _ {s a} \big (\mathbb {M} _ {1} (I _ {T e x t} ^ {i}), \mathbb {M} _ {2} (I _ {S e q} ^ {i}), \mathbb {M} _ {3} (I _ {S t r u c} ^ {i}) \big) \Big)
$$

Here, $\mathbb{M}_1, \mathbb{M}_2, \mathbb{M}_3$ are three different deep learning based models for the text, sequence and structure modalities, respectively. The extracted features are fused by a self-attention mechanism $(f_{sa})$, whose output is finally fed to an activation function $(f_{act})$ for predicting protein interactions.
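As a minimal illustration of this composition, the following sketch wires together three stand-in encoders (random projections in place of the deep models $\mathbb{M}_1$–$\mathbb{M}_3$), a plain concatenation in place of the self-attention fusion $f_{sa}$, and a softmax as $f_{act}$; all dimensions are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in encoders: random projections instead of the trained models.
def M1(x_text):  return np.tanh(x_text @ rng.standard_normal((768, 64)))
def M2(x_seq):   return np.tanh(x_seq.ravel() @ rng.standard_normal((4 * 100, 64)))
def M3(x_struc): return np.tanh(x_struc @ rng.standard_normal((32, 64)))

def f_sa(*feats):
    # Fusion: a plain concatenation stand-in for self-attention.
    return np.concatenate(feats)

def f_act(z):
    # Softmax over the two classes (interacting / non-interacting).
    logits = z @ rng.standard_normal((z.shape[0], 2))
    e = np.exp(logits - logits.max())
    return e / e.sum()

x_text = rng.standard_normal(768)        # BioBERT-style sentence vector
x_seq = rng.standard_normal((4, 100))    # one-hot-like sequence matrix
x_struc = rng.standard_normal(32)        # pooled structural features

probs = f_act(f_sa(M1(x_text), M2(x_seq), M3(x_struc)))
assert probs.shape == (2,) and abs(probs.sum() - 1.0) < 1e-9
```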
# 5 Proposed Methodology

The major steps of our proposed multi-modal architecture are shown in Figure 3.
# 5.1 Feature Extraction from Textual Modality

The proposed deep learning model $(\mathbb{M}_1)$ for extracting features from the textual modality is described in Figure 4. Firstly, we use the BioBERT v1.1 (Lee et al., 2019) model to obtain a vector representation $(u^i \in \mathbb{R}^d)$ of the textual instance $(I_{Text}^i)$. With almost the same architecture as the BERT (Bidirectional Encoder Representations from Transformers) model (Devlin et al., 2018), BioBERT v1.1 is pre-trained on 1M PubMed abstracts. Here, each sentence is embedded as a unique vector of size 768 (i.e., $d = 768$) by averaging the last four transformer layers of the first token ([CLS]) of the BioBERT model. Inspired by the effective usage of the stacked bidirectional long short-term memory (BiLSTM) (Yadav et al., 2019), we use it to encode the embedded representation $(u^i)$. In the stacked BiLSTM, the $l^{th}$-level BiLSTM computes the forward $\left(\overrightarrow{h_{u^i}^l}\right)$ and backward $\left(\overleftarrow{h_{u^i}^l}\right)$ hidden states, which are then concatenated and fed to the next, $(l+1)^{th}$, BiLSTM layer. Therefore, the final representation $(F_{Text}^{i})$ of $I_{Text}^{i}$ is obtained from the last layer $(L)$ of the stacked BiLSTM model as

$$
F _ {T e x t} ^ {i} = \mathbb {M} _ {1} \left(I _ {T e x t} ^ {i}\right) = \left[ \overrightarrow {h _ {u ^ {i}} ^ {L}} \bigoplus \overleftarrow {h _ {u ^ {i}} ^ {L}} \right] \tag {1}
$$

![](images/47c6f37ba0f4bd44ee3402aafe414d3b4c8822527f41effb7a2a5384bc5d3d26.jpg)
Figure 4: Proposed hybrid model combining BioBERT and stacked BiLSTM for the textual modality.
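A minimal sketch of the bidirectional recurrence behind Eq. (1), using a plain tanh RNN cell in place of the LSTM cell and a token-level input sequence; all dimensions are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
d_in, d_h, T = 768, 128, 10           # input dim, hidden dim, sequence length

Wx = rng.standard_normal((d_in, d_h)) * 0.01
Wh = rng.standard_normal((d_h, d_h)) * 0.01

def run_direction(seq):
    """Run the recurrence over a sequence and return the last hidden state."""
    h = np.zeros(d_h)
    for x in seq:                      # one recurrent step per token
        h = np.tanh(x @ Wx + h @ Wh)
    return h

u = rng.standard_normal((T, d_in))     # embedded sentence representation
h_fwd = run_direction(u)               # left-to-right pass
h_bwd = run_direction(u[::-1])         # right-to-left pass
F_text = np.concatenate([h_fwd, h_bwd])  # Eq. (1): forward (+) backward
assert F_text.shape == (2 * d_h,)
```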
# 5.2 Sequence Feature Extraction

Firstly, we downloaded the FASTA sequences of the protein pair of an instance $(S^i)$ from the Ensembl genome browser. In this modality, each protein $(I_{Seq}^{i})$ is represented as a string over four nucleotides, i.e., $I_{Seq}^{i} = \{A,T,G,C\}^{+}$. The underlying genomic sequence is considered as a separate channel of the text modality. Since the molecular properties of protein molecules depend heavily on the sequence of nucleotides, we apply a capsule network (Sabour et al., 2017) to capture the spatial information between the nucleotides. In this regard, we first convert the four nucleotides into one-hot vector representations, i.e., the protein is represented as a 2D matrix $\mathbb{O} = \{0,1\}^{4\times m}$, where $m$ is the number of nucleotides in the sequence. Then, three convolutional layers $(f_{conv})$ are applied on $\mathbb{O}$, where the output of the third layer is fed to the primary capsule. Finally, the output of the primary capsule is fed to the secondary capsule, which provides the final representation $(F_{Seq}^{i})$ of the sequence modality. The final feature vector obtained from the developed deep architecture $(\mathbb{M}_2)$ is

$$
F _ {S e q} ^ {i} = \mathbb {M} _ {2} \big (I _ {S e q} ^ {i} \big) = f _ {c a p s u l e} \Big (f _ {c o n v} \big (\mathbb {O} \big) \Big)
$$

![](images/34a48c013c1a7a825134dbe7665da24fa5fa33765b623c1cdfdaa748bacc0bb9.jpg)
Figure 5: Capsule network-based deep model for extracting features from the underlying genomic sequence of proteins.

![](images/38ac8e21254569799e65ffda1ae7a7f971f7cd58c69c77081de31c124863b462.jpg)
Figure 6: Graph convolutional neural network-based deep model for extracting features from the molecular structure of proteins.
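The one-hot encoding of a nucleotide string into the $4 \times m$ matrix $\mathbb{O}$ consumed by the convolutional/capsule pipeline can be sketched as:

```python
import numpy as np

NUCLEOTIDES = "ATGC"

def one_hot(seq: str) -> np.ndarray:
    """Encode a nucleotide string as a {0,1}^{4 x m} matrix (one column per base)."""
    O = np.zeros((4, len(seq)), dtype=np.int8)
    for col, base in enumerate(seq):
        O[NUCLEOTIDES.index(base), col] = 1
    return O

O = one_hot("ATGCCA")
assert O.shape == (4, 6)
assert O.sum() == 6                       # exactly one 1 per column
assert O[:, 0].tolist() == [1, 0, 0, 0]   # first base is 'A'
```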
# 5.3 Structural Feature Extraction

For the structure modality, we first downloaded the protein 3D structures from the RCSB Protein Data Bank website and obtained the atomic coordinates from the PDB files. Among all the modalities, the structural modality is the most relevant one for inferring biological information. In this modality, we have considered the atomic structure of the proteins. Inspired by the inherent capability of the graph convolutional neural network (GCNN) (Kipf and Welling, 2016; Zamora-Resendiz and Crivelli, 2019) to learn effective latent representations of graphs, we use it to learn a local neighborhood representation around each atom of the proteins. For this modality, the developed model (Figure 6) learns the chemical bonding information from the atomic structure of the protein rather than from its corresponding image. Each protein, which consists of a set of atoms $\{a_1,a_2,\dots ,a_n\}$, has an adjacency matrix $A\in \{0,1\}^{n\times n}$ and a node feature matrix $X\in \mathbb{R}^{n\times d_v}$. In this study, we considered the two proteins $(P_{1},P_{2})$ of an instance, extracted their features $(y_{1},y_{2})$ using the GCNN, and then concatenated them for the final representation $(F_{Struc}^{i})$. The GCNN takes $A$ and $X$ of the proteins as inputs, and the structural feature is represented as

$$
F _ {S t r u c} ^ {i} = \mathbb {M} _ {3} \left(I _ {S t r u c} ^ {i}\right) = \left[ y _ {1} \bigoplus y _ {2} \right] \tag {2}
$$

$$
y _ {j \mid j \in \{1, 2 \}} = f \left(H _ {j} ^ {i}, A _ {j}\right) = \sigma \left(A _ {j} H _ {j} ^ {i} W _ {j} ^ {i}\right) \tag {3}
$$

Here, $\oplus$, $f$ and $\sigma$ are the concatenation operator, the propagation rule and a non-linear activation function, respectively. $W_{j}^{i}$ is the weight matrix of layer $i$ for protein $P_{j}$, and $H_{j}^{i}$ is defined as $f(H_j^{i - 1},A_j)$ with $H_{j}^{0} = X_{j}$.
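A minimal sketch of the propagation rule of Eq. (3) followed by the concatenation of Eq. (2). The random adjacency and feature matrices stand in for a parsed PDB structure ($A$ from chemical bonds, $X$ from atom attributes), and mean-pooling over atoms is our simplifying assumption:

```python
import numpy as np

rng = np.random.default_rng(2)

def relu(x):
    return np.maximum(x, 0.0)

def gcn_features(A, X, weights):
    """Apply H^{i+1} = sigma(A H^i W^i) per layer, then pool over atoms."""
    H = X
    for W in weights:                  # one propagation step per layer
        H = relu(A @ H @ W)
    return H.mean(axis=0)              # pooled atom representation

n, d_v, d_h = 30, 8, 16                # atoms, node features, hidden dim
A = (rng.random((n, n)) < 0.1).astype(float)
A = np.maximum(A, A.T)                 # undirected bonds
np.fill_diagonal(A, 1.0)               # self-loops
X = rng.standard_normal((n, d_v))
weights = [rng.standard_normal((d_v, d_h)) * 0.1,
           rng.standard_normal((d_h, d_h)) * 0.1]

y1 = gcn_features(A, X, weights)       # protein P1
y2 = gcn_features(A, X, weights)       # protein P2 (same toy graph here)
F_struc = np.concatenate([y1, y2])     # Eq. (2)
assert F_struc.shape == (2 * d_h,)
```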
# 5.4 Attention-based Multi-modal Integration

After extracting the features of the three modalities (text, protein sequence and protein structure), we fuse the features using an attention mechanism. The attention mechanism has the ability to focus on the features that are most relevant to a context-specific task. In this study, we use the self-attention mechanism of the transformer model, which produces the final integrated feature representation $(\mathbb{F})$ of the $i^{th}$ instance $(S^{i})$ using the following formula:

$$
\mathbb {F} = \left[ W _ {T e x t} ^ {i} F _ {T e x t} ^ {i} \bigoplus W _ {S e q} ^ {i} F _ {S e q} ^ {i} \bigoplus W _ {S t r c} ^ {i} F _ {S t r c} ^ {i} \right] \tag {4}
$$

Here, $W_{Text}^{i}$, $W_{Seq}^{i}$ and $W_{Strc}^{i}$ represent the attention weights of the respective modalities. Finally, this representation $(\mathbb{F})$ is fed to a softmax layer for the final classification.
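A simplified sketch of the weighted fusion of Eq. (4), with scalar per-modality attention weights obtained from a softmax over learned scores; this stands in for the transformer self-attention encoder actually used, and all dimensions and scoring vectors are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)

def softmax(z):
    e = np.exp(z - np.max(z))
    return e / e.sum()

F_text = rng.standard_normal(256)      # M1 output
F_seq = rng.standard_normal(64)        # M2 output
F_struc = rng.standard_normal(32)      # M3 output

# One learned score per modality (here via random scoring vectors).
scores = np.array([F_text @ rng.standard_normal(256),
                   F_seq @ rng.standard_normal(64),
                   F_struc @ rng.standard_normal(32)])
w = softmax(scores)                    # attention weights, sum to 1

# Eq. (4): weight each modality, then concatenate.
F = np.concatenate([w[0] * F_text, w[1] * F_seq, w[2] * F_struc])
assert F.shape == (256 + 64 + 32,)
```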
# 6 Experimental Results and Analysis

In this section, we describe the details of the hyper-parameters and present a comparative analysis of the proposed deep multi-modal architecture. To explore the role of the developed multi-modal datasets along with the proposed multi-modal architecture for predicting protein interactions, several experiments are conducted evaluating each modality as well as different combinations of the modalities. Additionally, we compare the performance of our multi-modal approach with various state-of-the-art methods.
# 6.1 Details of Hyper-parameters

In our proposed multi-modal architecture, softmax is used for the final classification, and the Adam optimizer is used throughout. In the stacked BiLSTM model for the textual modality, six (i.e., $L = 6$) BiLSTM layers are used.

For the structural features, a graph convolutional neural network with two hidden layers is used. For the sequence modality, three ReLU convolutional layers followed by a capsule network are used; in the developed capsule network, the number of primary capsules is eight, along with two secondary capsules. Finally, the self-attention of the transformer model is utilized for integrating the features of the different modalities. For self-attention, we use three encoders followed by a fully connected network with two hidden layers. The output of the fully connected network is then fed to softmax for the final classification.
# 6.2 Comparative analysis with baselines

As baselines, we compare our multi-modal approach with three uni-modal, three bi-modal and two other multi-modal architectures.

- Textual modality: BioBERT and stacked BiLSTM are utilized for this model.
- Protein sequence modality: A capsule network is utilized to learn the underlying features extracted from the protein sequences.
- Protein structural modality: Inspired by the effective performance of GCNN in understanding graph representations, a GCNN is applied on the atomic structures of proteins.
- 3D structural + sequence modality: In this bimodal architecture, a GCNN and a capsule network are used for the structural and sequence modalities, respectively. Finally, self-attention is utilized to learn the integrated features of these two modalities.
- Textual + sequence modality: In this model, self-attention is applied on the extracted features of the textual and sequence modalities.
- Textual + 3D structure modality: To learn the different attributes discussed in the text and the protein structural modality, a self-attention mechanism is applied to fuse them.
- Multi-modal approach 1: The architecture of this baseline is the same as the proposed multi-modal approach, except that the learned features of each modality are simply concatenated instead of using any attention mechanism.
- Multi-modal approach 2: In this model, an attention mechanism is applied for integrating the features of the textual, protein sequence and structural modalities. For extracting the features from text, protein sequence and protein structure, we use BioBERT, BiLSTM and CNN, respectively.

<table><tr><td colspan="2"></td><td>Textual modality</td><td>Protein sequence modality</td><td>Protein structural modality</td><td>Textual + sequence modality</td><td>Textual + 3D structure modality</td><td>3D structural + sequence modality</td><td>Multi-modal approach 1</td><td>Multi-modal approach 2</td><td>Proposed approach</td></tr><tr><td rowspan="3">BioInfer</td><td>Precision</td><td>54.42</td><td>50.63</td><td>59.34</td><td>64.51</td><td>69.04</td><td>68.15</td><td>79.16</td><td>83.77</td><td>86.81</td></tr><tr><td>Recall</td><td>87.45</td><td>83.68</td><td>91.63</td><td>87.45</td><td>88.49</td><td>89.53</td><td>87.44</td><td>86.40</td><td>89.53</td></tr><tr><td>F-measure</td><td>67.09</td><td>63.09</td><td>72.04</td><td>74.25</td><td>77.54</td><td>77.39</td><td>83.11</td><td>85.07</td><td>88.15</td></tr><tr><td rowspan="3">HRPD50</td><td>Precision</td><td>90.44</td><td>86.95</td><td>91.75</td><td>91.01</td><td>94.79</td><td>93.57</td><td>96.51</td><td>96.61</td><td>96.93</td></tr><tr><td>Recall</td><td>58.67</td><td>41.32</td><td>69.01</td><td>62.81</td><td>75.21</td><td>75.21</td><td>74.38</td><td>76.44</td><td>78.51</td></tr><tr><td>F-measure</td><td>71.17</td><td>56.02</td><td>78.77</td><td>74.32</td><td>83.87</td><td>83.39</td><td>84.01</td><td>85.35</td><td>86.75</td></tr></table>

Table 1: Comparative study of our proposed deep multi-modal approach with several baselines in terms of precision, recall and $F$-measure.

The results reported in Table 1 illustrate the superiority of the proposed multi-modal approach over the other baselines.

# 6.3 Comparison with State-of-the-art

Additionally, along with the baselines, we compare the performance of our multi-modal approach with several existing works reported in the literature. For the BioInfer dataset, we compare our proposed method with nine state-of-the-art models. These existing methods are based on different techniques, such as kernel-based methods (Choi and Myaeng, 2010; Tikk et al., 2010; Qian and Zhou, 2012; Li et al., 2015), a deep neural network (Zhao et al., 2016), a multi-channel dependency-based convolutional neural network model (Peng and Lu, 2017), semantic feature embedding (Choi, 2018) and the shortest dependency path (Hua and Quan, 2016). Along with the aforementioned methods, we also compare our approach with a recent deep learning-based approach proposed by (Yadav et al., 2019). The comparative performance analysis for the BioInfer dataset is tabulated in Table 2. We have also compared our approach with nine existing approaches for the HRPD50 dataset. The comparative results for the HRPD50 dataset are presented in Table 3.

<table><tr><td></td><td>Precision</td><td>Recall</td><td>F-score</td></tr><tr><td>Proposed Model</td><td>86.81</td><td>89.53</td><td>88.15</td></tr><tr><td>(Yadav et al., 2019)</td><td>80.81</td><td>82.57</td><td>81.68</td></tr><tr><td>(Hua and Quan, 2016)</td><td>73.40</td><td>77.00</td><td>75.20</td></tr><tr><td>(Choi, 2018)</td><td>72.05</td><td>77.51</td><td>74.68</td></tr><tr><td>(Qian and Zhou, 2012)</td><td>63.61</td><td>61.24</td><td>62.40</td></tr><tr><td>(Peng and Lu, 2017)</td><td>62.70</td><td>68.20</td><td>65.30</td></tr><tr><td>(Zhao et al., 2016)</td><td>53.90</td><td>72.90</td><td>61.60</td></tr><tr><td>(Tikk et al., 2010)</td><td>53.30</td><td>70.10</td><td>60.00</td></tr><tr><td>(Li et al., 2015)</td><td>72.33</td><td>74.94</td><td>73.61</td></tr><tr><td>(Choi and Myaeng, 2010)</td><td>74.50</td><td>70.90</td><td>72.60</td></tr></table>

Table 2: Comparative analysis of the proposed multi-modal approach with state-of-the-art techniques for the BioInfer dataset.

<table><tr><td></td><td>Precision</td><td>Recall</td><td>F-score</td></tr><tr><td>Proposed Model</td><td>96.93</td><td>78.51</td><td>86.75</td></tr><tr><td>(Yadav et al., 2019)</td><td>79.92</td><td>77.58</td><td>78.73</td></tr><tr><td>(Tikk et al., 2010)</td><td>68.20</td><td>69.80</td><td>67.80</td></tr><tr><td>(Tikk et al., 2010)(with SVM)</td><td>68.20</td><td>69.80</td><td>67.80</td></tr><tr><td>(Palaga, 2009)</td><td>66.70</td><td>80.20</td><td>70.90</td></tr><tr><td>(Airola et al., 2008a)(APG)</td><td>64.30</td><td>65.80</td><td>63.40</td></tr><tr><td>(Van Landeghem et al., 2008)</td><td>60.00</td><td>51.00</td><td>55.00</td></tr><tr><td>(Miwa et al., 2009)</td><td>68.50</td><td>76.10</td><td>70.90</td></tr><tr><td>(Airola et al., 2008a)(Co-occ)</td><td>38.90</td><td>100</td><td>55.40</td></tr><tr><td>(Pyysalo et al., 2008)</td><td>76.00</td><td>64.00</td><td>69.00</td></tr></table>

Table 3: Comparative analysis of the proposed multi-modal approach with other state-of-the-art approaches for the HRPD50 dataset.

# 6.4 Discussion

By analyzing the above comparative study, we can infer that the overall performance of our proposed multi-modal approach surpasses the other baselines and existing methods. Among the baseline models, the proposed multi-modal approach outperforms its unimodal and bimodal counterparts. Among the uni-modal architectures, the structural modality outperforms the other two, which suggests the importance of the structural modality over the textual and sequence modalities. The sequence modality performs poorly because of the huge sequence lengths (most sequences are approximately 10,000 nucleotides long).

Among the bimodal architectures, the (textual + 3D structure) model surpasses the other bimodal and unimodal counterparts. This fusion shows improvements of $5.1\%$ and $5.5\%$ in F-score over the best unimodal architecture for the HRPD50 and BioInfer datasets, respectively. Similarly, our proposed multi-modal architecture shows an improvement over its bi-modal counterparts. Also, the proposed multi-modal architecture shows average improvements of $3.87\%$ and $2.24\%$ in F-score over multi-modal approach 1 and multi-modal approach 2, respectively. This improvement indicates that in addition to the multiple modalities, the underlying deep learning models and the fusion technique contribute significantly to improving the performance of the overall architecture.
In addition, Tables 2 and 3 indicate that the proposed multi-modal architecture outperforms the best and most recent existing methods on the BioInfer and HRPD50 datasets, respectively. We have performed Welch's t-test to show that the improvements obtained by the proposed approach are statistically significant. From the above comparative study, it is evident that our proposed multi-modal approach identifies protein interactions efficiently and can be further improved in different ways.
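Welch's t-test, referenced above, can be sketched in pure Python; the F-scores below are illustrative, not our actual run results:

```python
import math

def welch_t(a, b):
    """Return Welch's t statistic and degrees of freedom for two samples."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)   # sample variances
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    se2a, se2b = va / na, vb / nb                   # squared standard errors
    t = (ma - mb) / math.sqrt(se2a + se2b)
    # Welch-Satterthwaite approximation of the degrees of freedom:
    df = (se2a + se2b) ** 2 / (se2a ** 2 / (na - 1) + se2b ** 2 / (nb - 1))
    return t, df

# Hypothetical F-scores from repeated runs of two models:
t, df = welch_t([88.1, 88.3, 87.9], [85.0, 85.4, 84.9])
assert t > 0           # first sample mean is higher
assert 0 < df <= 4     # df never exceeds na + nb - 2
```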
215
+ # 6.5 Error Analysis
216
+
217
+ After thoroughly analyzing false positive and false negative instances, it can be inferred that following are the possible reasons of errors:
218
+
219
+ 1. The instances which contain huge number of protein entities lead to misclassification. The maximum number of proteins in an instance of HRPD50 and BioInfer are 26 and 24, respectively; this has a huge chance of misclassification. For example: "Mutations in Saccharomyces cerevisiae RFC5, DPB11, MEC1, DDC2, MEC3, PDS1, CHK1, PDS1, and DUN1 have increased the rate of genome rearrangements up to 200-fold whereas mutations in RAD9, RAD17, RAD24, BUB3, and MAD3 have little effect."
220
+ 2. Repetitive mentions of the same protein entity adds noise that leads to loose contextual information. For example “Here we demonstrate ... CLIP-170 and LIS1 Overexpression of CLIP-170 results ... phospho-LIS1 ... that CLIP-170 and LIS1 regulate ... that LIS1 is a regulated adapter between CLIP-170 ... MT dynamics”.
221
+ 3. For the sequence modality, we consider the underlying FASTA sequence of each protein. Sequence length varies from 100 to 10,000 residues, and the deep learning-based model is unable to process such long chains, which leads to misclassification.
222
+
223
+ # 7 Conclusion and Future Work
224
+
225
+ In this work, we have generated multi-modal protein-protein interaction datasets by amalgamating protein structures and sequences with the textual information available in the biomedical literature. The process of generating multi-modal datasets from PPI corpora is illustrated with examples. We have also proposed a novel deep multi-modal architecture for handling the multi-modal PPI scenario. For each modality (text, protein sequence, and protein atomic structure), we have developed a dedicated deep learning model for efficient feature extraction. A detailed comparative analysis shows that the proposed multi-modal architecture outperforms strong baselines and existing models. Future work will focus on enhancing the sequence feature extraction methods, which currently suffer from low accuracy, to improve classification performance. There is also ample scope for improving the fusion technique to enhance the overall performance of the model.
226
+
227
+ # Acknowledgements
228
+
229
+ Pratik Dutta acknowledges Visvesvaraya PhD Scheme for Electronics and IT, an initiative of Ministry of Electronics and Information Technology (MeitY), Government of India for fellowship support. Dr. Sriparna Saha gratefully acknowledges the Young Faculty Research Fellowship (YFRF) Award, supported by Visvesvaraya PhD scheme for Electronics and IT, Ministry of Electronics and Information Technology (MeitY), Government of India, being implemented by Digital India Corporation (formerly Media Lab Asia) for carrying out this research.
230
+
231
+ # References
232
+
233
+ Antti Airola, Sampo Pyysalo, Jari Björne, Tapio Pahikkala, Filip Ginter, and Tapio Salakoski. 2008a. All-paths graph kernel for protein-protein interaction extraction with evaluation of cross-corpus learning. BMC bioinformatics, 9(11):S2.
234
+
235
+ Antti Airola, Sampo Pyysalo, Jari Björne, Tapio Pahikkala, Filip Ginter, and Tapio Salakoski. 2008b. A graph kernel for protein-protein interaction extraction. In Proceedings of the workshop on current trends in biomedical natural language processing, pages 1-9. Association for Computational Linguistics.
236
+
237
+ Ilseyar Alimova and Elena Tutubalina. 2019. Detecting adverse drug reactions from biomedical texts with neural networks. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics: Student Research Workshop, pages 415-421.
240
+ Babak Alipanahi, Andrew Delong, Matthew T Weirauch, and Brendan J Frey. 2015. Predicting the sequence specificities of DNA- and RNA-binding proteins by deep learning. Nature biotechnology, 33(8):831.
241
+ Takayuki Amemiya, M Michael Gromiha, Katsuhisa Horimoto, and Kazuhiko Fukui. 2019. Drug repositioning for dengue haemorrhagic fever by integrating multiple omics analyses. Scientific reports, 9(1):523.
242
+ Masaki Asada, Makoto Miwa, and Yutaka Sasaki. 2018. Enhancing drug-drug interaction extraction from texts by molecular structure information. arXiv preprint arXiv:1805.05593.
243
+ Yoshua Bengio, Pascal Lamblin, Dan Popovici, and Hugo Larochelle. 2007. Greedy layer-wise training of deep networks. In Advances in neural information processing systems, pages 153-160.
244
+ Christian Blaschke, Miguel A Andrade, Christos A Ouzounis, and Alfonso Valencia. 1999. Automatic extraction of biological information from scientific text: protein-protein interactions. In Ismb, volume 7, pages 60-67.
245
+ Long Chen, Hanwang Zhang, Jun Xiao, Liqiang Nie, Jian Shao, Wei Liu, and Tat-Seng Chua. 2017. Scacnn: Spatial and channel-wise attention in convolutional networks for image captioning. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 5659-5667.
246
+ Sheng-Yeh Chen, Chao-Chun Hsu, Chuan-Chun Kuo, Lun-Wei Ku, et al. 2018. Emotionlines: An emotion corpus of multi-party conversations. arXiv preprint arXiv:1802.08379.
247
+ Kyunghyun Cho, Bart Van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078.
248
+ Sung-Pil Choi. 2018. Extraction of protein-protein interactions (ppis) from the literature by deep convolutional neural networks with various feature embeddings. Journal of Information Science, 44(1):60-73.
249
+ Sung-Pil Choi and Sung-Hyon Myaeng. 2010. Simplicity is better: revisiting single kernel ppi extraction. In Proceedings of the 23rd International Conference on Computational Linguistics, pages 206-214. Association for Computational Linguistics.
250
+ Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.
251
+
252
+ Pratik Dutta and Sriparna Saha. 2017. Fusion of expression values and protein interaction information using multi-objective optimization for improving gene clustering. Computers in biology and medicine, 89:31-43.
253
+ Pratik Dutta, Sriparna Saha, Saraansh Chopra, and Varnika Miglani. 2019a. Ensembling of gene clusters utilizing deep learning and protein-protein interaction information. IEEE/ACM transactions on computational biology and bioinformatics.
254
+ Pratik Dutta, Sriparna Saha, and Saurabh Gulati. 2019b. Graph-based hub gene selection technique using protein interaction information: Application to sample classification. IEEE journal of biomedical and health informatics, 23(6):2670-2676.
255
+ Pratik Dutta, Sriparna Saha, Sanket Pai, and Aviral Kumar. 2020. A protein interaction information-based generative model for enhancing gene clustering. Scientific Reports (Nature Publisher Group), 10(1).
256
+ Gunes Erkan, Arzucan Ozgur, and Dragomir R Radev. 2007. Semi-supervised classification for extracting protein interaction sentences using dependency parsing. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL).
257
+ Chenyou Fan, Xiaofan Zhang, Shu Zhang, Wensheng Wang, Chi Zhang, and Heng Huang. 2019. Heterogeneous memory enhanced multimodal attention model for video question answering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1999-2007.
258
+ Claudio Giuliano, Alberto Lavelli, and Lorenzo Romano. 2006. Exploiting shallow linguistic information for relation extraction from biomedical literature. In 11th Conference of the European Chapter of the Association for Computational Linguistics.
259
+ Alexander Goncearenco, Minghui Li, Franco L Simonetti, Benjamin A Shoemaker, and Anna R Panchenko. 2017. Exploring protein-protein interactions as drug targets for anti-cancer therapy with in silico workflows. In Proteomics for Drug Discovery, pages 221-236. Springer.
260
+ Yu-Lun Hsieh, Yung-Chun Chang, Nai-Wen Chang, and Wen-Lian Hsu. 2017. Identifying protein-protein interactions in biomedical literature using recurrent neural networks with long short-term memory. In Proceedings of the eighth international joint conference on natural language processing (volume 2: short papers), pages 240-245.
261
+ Lei Hua and Chanqin Quan. 2016. A shortest dependency path based convolutional neural network for protein-protein relation extraction. BioMed research international, 2016.
262
+
263
+ Minlie Huang, Xiaoyan Zhu, Yu Hao, Donald G Payan, Kunbin Qu, and Ming Li. 2004. Discovering patterns to extract protein-protein interactions from full texts. Bioinformatics, 20(18):3604-3612.
264
+ Mengqi Jin, Mohammad Taha Bahadori, Aaron Colak, Parminder Bhatia, Busra Celikkaya, Ram Bhakta, Selvan Senthivel, Mohammed Khalilia, Daniel Navarro, Borui Zhang, et al. 2018. Improving hospital mortality prediction with medical named entities and multimodal learning. arXiv preprint arXiv:1811.12276.
265
+ Ritu Khare, Robert Leaman, and Zhiyong Lu. 2014. Accessing biomedical literature in the current information landscape. In Biomedical Literature Mining, pages 11-31. Springer.
266
+ Thomas N Kipf and Max Welling. 2016. Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907.
267
+ Maxat Kulmanov, Mohammed Asif Khan, and Robert Hoehndorf. 2017. Deepgo: predicting protein functions from sequence and interactions using a deep ontology-aware classifier. Bioinformatics, 34(4):660-668.
268
+ Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. 2015. Deep learning. nature, 521(7553):436.
269
+ Jinhyuk Lee, Wonjin Yoon, Sungdong Kim, Donghyeon Kim, Sunkyu Kim, Chan Ho So, and Jaewoo Kang. 2019. BioBERT: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics.
270
+ Lishuang Li, Rui Guo, Zhenchao Jiang, and Degen Huang. 2015. An approach to improve kernel-based protein-protein interaction extraction by learning from large-scale network data. Methods, 83:44-50.
271
+ Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. 2014. Microsoft coco: Common objects in context. In European conference on computer vision, pages 740-755. Springer.
272
+ Makoto Miwa, Rune Sætre, Yusuke Miyao, and Jun'ichi Tsujii. 2009. Protein-protein interaction extraction by leveraging multiple kernels and parsers. International journal of medical informatics, 78(12):e39-e46.
273
+ Toshihide Ono, Haretsugu Hishigaki, Akira Tanigami, and Toshihisa Takagi. 2001. Automated extraction of information on protein-protein interactions from the biological literature. Bioinformatics, 17(2):155-161.
274
+ Peter Palaga. 2009. Extracting relations from biomedical texts using syntactic information. Mémoire de DEA, Technische Universität Berlin, 138.
275
+
276
+ Peggy L Peissig, Luke V Rasmussen, Richard L Berg, James G Linneman, Catherine A McCarty, Carol Waudby, Lin Chen, Joshua C Denny, Russell A Wilke, Jyotishman Pathak, et al. 2012. Importance of multi-modal approaches to effectively identify cataract cases from electronic health records. Journal of the American Medical Informatics Association, 19(2):225-234.
277
+ Yifan Peng and Zhiyong Lu. 2017. Deep learning for extracting protein-protein interactions from biomedical literature. arXiv preprint arXiv:1706.01556.
278
+ Soujanya Poria, Devamanyu Hazarika, Navonil Majumder, Gautam Naik, Erik Cambria, and Rada Mihalcea. 2018. Meld: A multimodal multi-party dataset for emotion recognition in conversations. arXiv preprint arXiv:1810.02508.
279
+ Sampo Pyysalo, Antti Airola, Juho Heimonen, Jari Björne, Filip Ginter, and Tapio Salakoski. 2008. Comparative analysis of five protein-protein interaction corpora. In BMC bioinformatics, volume 9, page S6. BioMed Central.
280
+ Longhua Qian and Guodong Zhou. 2012. Tree kernel-based protein-protein interaction extraction from biomedical literature. Journal of biomedical informatics, 45(3):535-543.
281
+ Zhi Qiao, Xian Wu, Shen Ge, and Wei Fan. 2019. Mnn: multimodal attentional neural networks for diagnosis prediction. Extraction, 1:A1.
282
+ Syed Arbaaz Qureshi, Gael Dias, Mohammed Hasanuzzaman, and Sriparna Saha. 2020. Improving depression level estimation by concurrently learning emotion intensity. IEEE Computational Intelligence Magazine.
283
+ Syed Arbaaz Qureshi, Sriparna Saha, Mohammed Hasanuzzaman, and Gael Dias. 2019. Multitask representation learning for multimodal estimation of depression level. IEEE Intelligent Systems, 34(5):45-52.
284
+ Bisakha Ray, Mikael Henaff, Sisi Ma, Efstratios Efstathiadis, Eric R Peskin, Marco Picone, Tito Poli, Constantin F Aliferis, and Alexander Statnikov. 2014. Information content and analysis methods for multi-modal high-throughput biomedical data. Scientific reports, 4:4411.
285
+ Sara Sabour, Nicholas Frosst, and Geoffrey E Hinton. 2017. Dynamic routing between capsules. In Advances in neural information processing systems, pages 3856-3866.
286
+ Rune Sætre, Kenji Sagae, and Jun'ichi Tsujii. 2007. Syntactic features for protein-protein interaction extraction. LBM (Short Papers), 319.
287
+ Shweta, A. Ekbal, S. Saha, and P. Bhattacharyya. 2016. A deep learning architecture for protein-protein interaction article identification. In 2016 23rd International Conference on Pattern Recognition (ICPR), pages 3128-3133.
288
+
289
+ Benjamin J Stapley and Gerry Benoit. 1999. Biobibliometrics: information retrieval and visualization from co-occurrences of gene names in medline abstracts. In Biocomputing 2000, pages 529-540. World Scientific.
290
+ Dongdong Sun, Minghui Wang, and Ao Li. 2019. A multimodal deep neural network for human breast cancer prognosis prediction by integrating multidimensional data. IEEE/ACM Transactions on Computational Biology and Bioinformatics (TCBB), 16(3):841-850.
291
+ Domonkos Tikk, Philippe Thomas, Peter Palaga, Jorg Hakenberg, and Ulf Leser. 2010. A comprehensive benchmark of kernel methods to extract protein-protein interactions from literature. PLoS computational biology, 6(7):e1000837.
292
+ Sofie Van Landeghem, Yvan Saeys, Bernard De Baets, and Yves Van de Peer. 2008. Extracting protein-protein interactions from text using rich feature vectors and feature selection. In 3rd International symposium on Semantic Mining in Biomedicine (SMBM 2008), pages 77-84. Turku Centre for Computer Sciences (TUCS).
293
+ Shweta Yadav, Asif Ekbal, Sriparna Saha, Ankit Kumar, and Pushpak Bhattacharyya. 2019. Feature assisted stacked attentive shortest dependency path based bi-lstm model for protein-protein interaction. Knowledge-Based Systems, 166:18-29.
294
+ Amir Zadeh, Rowan Zellers, Eli Pincus, and Louis-Philippe Morency. 2016. Mosi: multimodal corpus of sentiment intensity and subjectivity analysis in online opinion videos. arXiv preprint arXiv:1606.06259.
295
+ Rafael Zamora-Resendiz and Silvia Crivelli. 2019. Structural learning of proteins using graph convolutional neural networks. bioRxiv, page 610444.
296
+ Shifeng Zhang, Xiaobo Wang, Ajian Liu, Chenxu Zhao, Jun Wan, Sergio Escalera, Hailin Shi, Zezheng Wang, and Stan Z Li. 2019. A dataset and benchmark for large-scale multi-modal face antispoofing. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 919-928.
297
+ Zhehuan Zhao, Zhihao Yang, Hongfei Lin, Jian Wang, and Song Gao. 2016. A protein-protein interaction extraction approach based on deep neural network. International Journal of Data Mining and Bioinformatics, 15(2):145-164.
amalgamationofproteinsequencestructureandtextualinformationforimprovingproteinproteininteractionidentification/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:f78529e66102c9f24e69bc599ab58a647a0e49bfcf5d2378671309d408e8339e
3
+ size 610912
amalgamationofproteinsequencestructureandtextualinformationforimprovingproteinproteininteractionidentification/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:3c3d858d2a2158d81a848fe73f11a95e53dd9c2dced82791f5b6b1b3f3680582
3
+ size 354338
analysinglexicalsemanticchangewithcontextualisedwordrepresentations/e5fe2ad5-39d8-4569-b85f-4b6efa8ece75_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:57af4b3c2ffda864f6a7f8c09d1e0d2b34ff0311a87c5a10ce89460d992ee377
3
+ size 102688
analysinglexicalsemanticchangewithcontextualisedwordrepresentations/e5fe2ad5-39d8-4569-b85f-4b6efa8ece75_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:3fa694bb0a898ba8114d8b88251b5fff4bc6692e09ff4be324b1a3958eb03a2c
3
+ size 131792
analysinglexicalsemanticchangewithcontextualisedwordrepresentations/e5fe2ad5-39d8-4569-b85f-4b6efa8ece75_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:63a449de95eb44b974000b608f37e3f1bde9a88f6a7c813976f14636d30dc572
3
+ size 1138697
analysinglexicalsemanticchangewithcontextualisedwordrepresentations/full.md ADDED
@@ -0,0 +1,441 @@
1
+ # Analysing Lexical Semantic Change with Contextualised Word Representations
2
+
3
+ Mario Giulianelli, Marco Del Tredici, Raquel Fernández
4
+
5
+ Institute for Logic, Language and Computation
6
+
7
+ University of Amsterdam
8
+
9
+ {m.giulianelli|m.deltredici|raquel.fernandez}@uva.nl
10
+
11
+ # Abstract
12
+
13
+ This paper presents the first unsupervised approach to lexical semantic change that makes use of contextualised word representations. We propose a novel method that exploits the BERT neural language model to obtain representations of word usages, clusters these representations into usage types, and measures change along time with three proposed metrics. We create a new evaluation dataset and show that the model representations and the detected semantic shifts are positively correlated with human judgements. Our extensive qualitative analysis demonstrates that our method captures a variety of synchronic and diachronic linguistic phenomena. We expect our work to inspire further research in this direction.
14
+
15
+ # 1 Introduction
16
+
17
+ In the fourteenth century the words boy and girl referred respectively to a male servant and a young person of either sex (Oxford English Dictionary). By the fifteenth century a narrower usage had emerged for girl, designating exclusively female individuals, whereas by the sixteenth century boy had lost its servile connotation and was more broadly used to refer to any male child, becoming the masculine counterpart of girl (Bybee, 2015). Word meaning is indeed in constant mutation and, since correct understanding of the meaning of individual words underpins general machine reading comprehension, it has become increasingly relevant for computational linguists to detect and characterise lexical semantic change—e.g., in the form of laws of semantic change (Dubossarsky et al., 2015; Xu and Kemp, 2015; Hamilton et al., 2016)—with the aid of quantitative and reproducible evaluation procedures (Schlechtweg et al., 2018).
18
+
19
+ Most recent studies have focused on shift detection, the task of deciding whether and to what extent the concept evoked by a word has changed between time periods (e.g., Gulordava and Baroni, 2011; Kim et al., 2014; Kulkarni et al., 2015; Del Tredici et al., 2019; Hamilton et al., 2016; Bamler and Mandt, 2017; Rosenfeld and Erk, 2018). This line of work relies mainly on distributional semantic models, which produce one abstract representation for every word form. However, aggregating all senses of a word into a single representation is particularly problematic for semantic change as word meaning hardly ever shifts directly from one sense to another, but rather typically goes through polysemous stages (Hopper et al., 1991). This limitation has motivated recent work on word sense induction across time periods (Lau et al., 2012; Cook et al., 2014; Mitra et al., 2014; Frermann and Lapata, 2016; Rudolph and Blei, 2018; Hu et al., 2019). Word senses, however, have shortcomings themselves as they are a discretisation of word meaning, which is continuous in nature and modulated by context to convey ad-hoc interpretations (Brugman, 1988; Kilgarriff, 1997; Paradis, 2011).
22
+
23
+ In this work, we propose a usage-based approach to lexical semantic change, where sentential context modulates lexical meaning "on the fly" (Ludlow, 2014). We present a novel method that (1) exploits a pre-trained neural language model (BERT; Devlin et al., 2019) to obtain contextualised representations for every occurrence of a word of interest, (2) clusters these representations into usage types, and (3) measures change along time. More concretely, we make the following contributions:
24
+
25
+ - We present the first unsupervised approach to lexical semantic change that makes use of state-of-the-art contextualised word representations.
26
+ - We propose several metrics to measure semantic change with this type of representation. Our code is available at https://github.com/glnmario/cwr4lsc.
27
+ - We create a new evaluation dataset of human similarity judgements on more than 3K word usage pairs across different time periods, available at https://doi.org/10.5281/zenodo.3773250.
30
+
31
+ - We show that both the model representations and the detected semantic shifts are positively correlated with human intuitions.
32
+ - Through in-depth qualitative analysis, we show that the proposed approach captures synchronic phenomena such as word senses and syntactic functions, literal and metaphorical usage, as well as diachronic linguistic processes related to narrowing and broadening of meaning across time.
33
+
34
+ Overall, our study demonstrates the potential of using contextualised word representations for modelling and analysing lexical semantic change and opens the door to further work in this direction.
35
+
36
+ # 2 Related Work
37
+
38
+ Semantic change modelling Lexical semantic change models build on the assumption that meaning change results in the modification of a word's linguistic distribution. In particular, with the exception of a few methods based on word frequencies and parts of speech (Michel et al., 2011; Kulkarni et al., 2015), lexical semantic change detection has been addressed following two main approaches: form-based and sense-based (for an overview, see Kutuzov et al., 2018; Tang, 2018).
39
+
40
+ In form-based approaches independent models are trained on the time intervals of a diachronic corpus and the distance between representations of the same word in different intervals is used as a semantic change score (Gulordava and Baroni, 2011; Kulkarni et al., 2015). Representational coherence between word vectors across different periods can be guaranteed by incremental training procedures (Kim et al., 2014) as well as by post hoc alignment of semantic spaces (Hamilton et al., 2016). More recent methods capture diachronic word usage by learning dynamic word embeddings that vary as a function of time (Bamler and Mandt, 2017; Rosenfeld and Erk, 2018; Rudolph and Blei, 2018). Form-based models depend on a strong simplification: that a single representation is sufficient to model the different usages of a word.
41
+
42
+ Time-dependent representations are also created in sense-based approaches: in this case word meaning is encoded as a distribution over word senses. Several Bayesian models of sense change have been proposed (Wijaya and Yeniterzi, 2011; Lau et al., 2012, 2014; Cook et al., 2014). Among these is the recent SCAN model (Frermann and Lapata, 2016), which represents (1) the meaning of a word in a time interval as a multinomial distribution over word senses and (2) word senses as probability distributions over the vocabulary. The main limitation of sense-based models is that they rely on a bag-of-words representation of context. Furthermore, many of these models keep the number of senses constant across time intervals and require this number to be manually set in advance.
45
+
46
+ Unsupervised approaches have been proposed that do not rely on a fixed number of senses. For example, the method for novel sense identification by Mitra et al. (2015) represents senses as clusters of short dependency-labelled contexts. Like ours, this method analyses word forms within the grammatical structures in which they appear. However, it requires syntactically parsed diachronic corpora and focuses exclusively on nouns. None of these restrictions limit our proposed approach, which leverages neural contextualised word representations.
47
+
48
+ Contextualised word representations Several approaches to context-sensitive word representations have been proposed in the past. Schütze (1998) introduced a clustering-based disambiguation algorithm for word usage vectors, Erk and Padó (2008) proposed creating multiple vectors for the same word and Erk and Padó (2010) proposed to directly learn usage-specific representations based on the set of exemplary contexts within which the target word occurs.
49
+
50
+ Recently, neural contextualised word representations have gained widespread use in NLP, thanks to deep learning models which learn usage-dependent representations while optimising tasks such as machine translation (CoVe; McCann et al., 2017) and language modelling (Dai and Le, 2015, ULMFiT; Howard and Ruder, 2018, ELMo; Peters et al., 2018, GPT; Radford et al., 2018, 2019, BERT; Devlin et al., 2019). State-of-the-art language models typically use stacked attention layers (Vaswani et al., 2017), they are pre-trained on a very large amount of textual data, and they can be fine-tuned for specific downstream tasks (Howard and Ruder, 2018; Radford et al., 2019; Devlin et al., 2019).
51
+
52
+ Contextualised representations have been shown to encode lexical meaning dynamically, reaching high accuracy on, e.g., the binary usage similarity judgements of the WiC evaluation set (Pilehvar and Camacho-Collados, 2019), performing on a par with state-of-the-art word sense disambiguation models (Wiedemann et al., 2019), and proving useful for the supervised derivation of time-specific sense representations (Hu et al., 2019). In this work, we investigate the potential of contextualised word representations to detect and analyse lexical semantic change, without any lexicographic supervision.
55
+
56
+ # 3 Method: A Usage-based Approach to Lexical Semantic Change
57
+
58
+ We introduce a usage-based approach to lexical semantic change analysis which relies on contextualised representations of unique word occurrences (usage representations). First, given a diachronic corpus and a list of words of interest, we use the BERT language model (Devlin et al., 2019) to compute usage representations for each occurrence of these words. Then, we cluster all the usage representations collected for a given word into an automatically determined number of partitions (usage types) and organise them along the temporal axis. Finally, we propose three metrics to quantify the degree of change undergone by a word.
59
+
60
+ # 3.1 Language Model
61
+
62
+ We produce usage representations using the BERT language model (Devlin et al., 2019), a multi-layer bidirectional Transformer encoder trained on masked token prediction and next sentence prediction, on the BooksCorpus (800M words) (Zhu et al., 2015) and on English text passages extracted from Wikipedia (2,500M words). There are two versions of BERT. For space and time efficiency, we use the smaller base-uncased version, with 12 layers, 768 hidden dimensions, and 110M parameters.
63
+
64
+ # 3.2 Usage Representations
65
+
66
+ Given a word of interest $w$ and a context of occurrence $s = (v_{1},\dots,v_{i},\dots,v_{n})$ with $w = v_{i}$ , we extract the activations of all of BERT's hidden layers for sentence position $i$ and sum them dimension-wise. We use addition because neither concatenation nor selecting a subset of the layers produced notable differences in the relative geometric distance between word representations.
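Assuming the per-layer activations have already been extracted from the language model (e.g. with a toolkit that exposes hidden states), the dimension-wise summation can be sketched as follows; the array shapes are stand-ins for BERT-base (12 layers, 768 hidden dimensions), and the random activations are placeholders:

```python
import numpy as np

def usage_vector(hidden_states, position):
    """Sum one token's activations across all hidden layers, dimension-wise.

    hidden_states: list of (n_tokens, hidden_dim) arrays, one per layer.
    position: index i of the target word w = v_i in the sentence.
    """
    return np.sum([layer[position] for layer in hidden_states], axis=0)

# Random stand-ins for the activations of a 10-token sentence in BERT-base
layers = [np.random.rand(10, 768) for _ in range(12)]
w_vec = usage_vector(layers, position=3)  # one row of the usage matrix U_w
```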
67
+
68
+ The set of $N$ usage representations for $w$ in a given corpus can be expressed as the usage matrix $\mathbf{U}_w = (\mathbf{w}_1,\dots ,\mathbf{w}_N)$ . For each usage representation in the usage matrix $\mathbf{U}_w$ , we store the context of occurrence (a 128-token window around the target word) as well as a temporal label $\mathbf{t}_w$ indicating the time interval of the usage.
+
+ ![](images/bc176eb555ba7f7861f90df8d00d548d17e053646b47fb6662fde6081d83d8f8.jpg)
+ (a) PCA visualisation of the usage representations.
+
+ ![](images/ce2f713bdb5fc1b7058a3b03a9bbfb7e6cbfd5e31f911f0e5356e715307f159c.jpg)
+ (b) Probability-based usage type distributions along time.
+
+ Figure 1: Usage representations and usage type distributions generated with occurrences of the word atom in COHA (Davies, 2012). Colours encode usage types.
78
+
79
+ # 3.3 Usage Types
80
+
81
+ Once we have obtained a word-specific matrix of usage vectors $\mathbf{U}_w$ , we standardise it and cluster its entries using $K$ -Means. This step partitions usage representations into clusters of similar usages of the same word, or usage types (see Figure 1a), and thus it is directly related to automatic word sense discrimination (Schütze, 1998; Pantel and Lin, 2002; Manandhar et al., 2010; Navigli and Vannella, 2013, among others).
82
+
83
+ For each word independently, we automatically select the number of clusters $K$ that maximises the silhouette score (Rousseeuw, 1987), a metric of cluster quality which favours intra-cluster coherence and penalises inter-cluster similarity, without the need for gold labels. For each value of $K$ , we execute 10 iterations of Expectation Maximization to alleviate the influence of different initialisation values (Arthur and Vassilvitskii, 2007). The final clustering for a given $K$ is the one that yields the minimal distortion value across the 10 runs, i.e., the minimal sum of squared distances of each data point from its closest centroid. We experiment with $K \in [2,10]$ , a range chosen heuristically: we forgo $K = 1$ because $K$ -Means and the silhouette score are ill-defined in that case, and we cap $K$ at 10 to keep the number of candidate clusterings computationally manageable. This excludes the possibility that a word has a single usage type. Alternatively, one could use a measure of intra-cluster dispersion for $K = 1$ and consider a word monosemous if its dispersion value is below a threshold $d$ ; if the dispersion were higher than $d$ , one would discard $K = 1$ and use the silhouette score to find the best $K \geq 2$ . There also exist clustering methods that select the optimal $K$ automatically, e.g. DBSCAN or Affinity Propagation (Martinc et al., 2020); these nevertheless require method-specific parameter choices which indirectly determine the number of clusters.
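The selection procedure can be sketched with scikit-learn; the three-blob toy data stands in for a real usage matrix $\mathbf{U}_w$ and is purely illustrative:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score
from sklearn.preprocessing import StandardScaler

def usage_types(U, k_range=range(2, 11), n_init=10, seed=0):
    """Standardise usage vectors, then pick the K maximising the silhouette score."""
    X = StandardScaler().fit_transform(U)
    best_k, best_score, best_labels = None, -1.0, None
    for k in k_range:
        if k >= len(X):          # silhouette needs fewer clusters than points
            break
        km = KMeans(n_clusters=k, n_init=n_init, random_state=seed).fit(X)
        score = silhouette_score(X, km.labels_)
        if score > best_score:
            best_k, best_score, best_labels = k, score, km.labels_
    return best_k, best_labels

# Toy usage matrix: three well-separated blobs of 5-d "usage vectors"
rng = np.random.default_rng(0)
U = np.vstack([rng.normal(loc=c, scale=0.5, size=(30, 5)) for c in (-10, 0, 10)])
best_k, labels = usage_types(U)
```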
86
+
87
+ By counting the number of occurrences of each usage type $k$ in a given time interval $t$ (we refer to this count as $\text{freq}(k, t)$ ), we obtain frequency distributions $\mathbf{f}_w^t$ for each interval under scrutiny:
88
+
89
+ $$
90
+ \mathbf {f} _ {w} ^ {t} \in \mathbb {N} ^ {K _ {w}}: \mathbf {f} _ {w} ^ {t} [ k ] = f r e q (k, t) \quad k \in [ 1, K _ {w} ] \tag {1}
91
+ $$
92
+
93
+ When normalised, frequency distributions can be interpreted as probability distributions over usage types $\mathbf{u}_w^t:\mathbf{u}_w^t [k] = \frac{1}{N_t}\mathbf{f}_w^t [k]$ . Figure 1b illustrates the result of this process.
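Given cluster labels and temporal labels, the frequency and probability distributions of Eq. 1 can be computed in a few lines; the toy labels below are illustrative only:

```python
from collections import Counter

def usage_distributions(labels, intervals, K):
    """Frequency vector f_w^t and probability vector u_w^t per time interval (Eq. 1)."""
    out = {}
    for t in sorted(set(intervals)):
        counts = Counter(k for k, i in zip(labels, intervals) if i == t)
        f = [counts.get(k, 0) for k in range(K)]
        n = sum(f)
        out[t] = (f, [c / n for c in f] if n else [0.0] * K)
    return out

# Hypothetical usage-type labels and the interval each occurrence belongs to
labels    = [0, 0, 1, 2, 1, 1, 2, 2]
intervals = [1960, 1960, 1960, 1960, 1990, 1990, 1990, 1990]
dists = usage_distributions(labels, intervals, K=3)
# dists[1960] == ([2, 1, 1], [0.5, 0.25, 0.25])
# dists[1990] == ([0, 2, 2], [0.0, 0.5, 0.5])
```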
94
+
95
+ # 3.4 Quantifying Semantic Change
96
+
97
+ We propose three metrics for the automatic quantification of lexical semantic change using contextualised word representations. The first two (entropy difference and Jensen-Shannon divergence) are known metrics for comparing probability distributions. In our approach, we apply them to measure variations in the relative prominence of coexisting usage types. We conjecture that these kinds of metrics can help detect semantic change processes that, e.g., lead to broadening or narrowing (i.e., to an increase or decrease, respectively, in the number or relative distribution of usage types).
98
+
99
The third metric (average pairwise distance) only requires a usage matrix $\mathbf{U}_w$ and the temporal labels $\mathbf{t}_w$ (Section 3.2). Since it does not rely on usage type distributions, it is not sensitive to possible errors stemming from the clustering process.

Entropy difference (ED) We propose measuring the uncertainty (e.g., due to polysemy) in the interpretation of a word $w$ in interval $t$ using the normalised entropy of its usage distribution $\mathbf{u}_w^t$:

$$
\eta\left(\mathbf{u}_w^t\right) = \log_{K_w}\left(\prod_{k=1}^{K_w} \mathbf{u}_w^t[k]^{-\mathbf{u}_w^t[k]}\right) \tag{2}
$$

To quantify how uncertainty over possible interpretations varies across time intervals, we compute the difference in entropy between the two usage type distributions in these intervals: $\mathrm{ED}(\mathbf{u}_w^t,\mathbf{u}_w^{t'}) = \eta (\mathbf{u}_w^{t'}) - \eta (\mathbf{u}_w^t)$. We expect positive ED values to signal the broadening of a word's interpretation and negative values to indicate narrowing.

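Equation 2 is simply the Shannon entropy divided by $\log K_w$, so a direct transcription of $\eta$ and ED is short (a sketch; function names are ours):

```python
import numpy as np

def normalised_entropy(u):
    """Eq. 2: entropy of a usage distribution in base K_w, i.e. H(u)/log(K_w),
    so that values lie in [0, 1] regardless of the number of usage types."""
    u = np.asarray(u, dtype=float)
    nonzero = u[u > 0]                  # by convention, 0 * log(0) = 0
    return float(-(nonzero * np.log(nonzero)).sum() / np.log(len(u)))

def entropy_difference(u_t, u_t_next):
    """ED(u^t, u^t'): positive values suggest broadening, negative narrowing."""
    return normalised_entropy(u_t_next) - normalised_entropy(u_t)
```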
Jensen-Shannon divergence (JSD) The second metric takes into account not only variations in the size of usage type clusters but also which clusters have grown or shrunk. It is the Jensen-Shannon divergence (Lin, 1991) between usage type distributions:

$$
\operatorname{JSD}\left(\mathbf{u}_w^t, \mathbf{u}_w^{t'}\right) = \mathrm{H}\left(\frac{1}{2}\left(\mathbf{u}_w^t + \mathbf{u}_w^{t'}\right)\right) - \frac{1}{2}\left(\mathrm{H}\left(\mathbf{u}_w^t\right) + \mathrm{H}\left(\mathbf{u}_w^{t'}\right)\right) \tag{3}
$$

where $\mathrm{H}$ is the Boltzmann-Gibbs-Shannon entropy. Very dissimilar usage distributions yield high JSD values, whereas low JSD values indicate that the proportions of usage types barely change across periods.

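Equation 3 transcribes directly (a sketch using natural-log entropy; function names are ours):

```python
import numpy as np

def shannon_entropy(p):
    """Shannon entropy of a probability distribution, with 0 * log(0) = 0."""
    p = np.asarray(p, dtype=float)
    nonzero = p[p > 0]
    return float(-(nonzero * np.log(nonzero)).sum())

def jensen_shannon_divergence(u_t, u_t_next):
    """Eq. 3: entropy of the mean distribution minus the mean of the entropies."""
    u_t, u_t_next = np.asarray(u_t, float), np.asarray(u_t_next, float)
    mean = 0.5 * (u_t + u_t_next)
    return shannon_entropy(mean) - 0.5 * (shannon_entropy(u_t) + shannon_entropy(u_t_next))
```

JSD is zero for identical distributions and reaches its maximum, $\log 2$, for distributions with disjoint support.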
Average pairwise distance (APD) While the previous two metrics rely on usage type distributions, it is also possible to quantify change while bypassing the clustering into usage types, e.g. by calculating the average pairwise distance between usage representations in different periods $t$ and $t'$:

$$
\operatorname{APD}\left(\mathbf{U}_w^t, \mathbf{U}_w^{t'}\right) = \frac{1}{N^t \cdot N^{t'}} \sum_{\mathbf{x}_i \in \mathbf{U}_w^t,\, \mathbf{x}_j \in \mathbf{U}_w^{t'}} d\left(\mathbf{x}_i, \mathbf{x}_j\right) \tag{4}
$$

where $\mathbf{U}_w^t$ is a usage matrix constructed with occurrences of $w$ only in interval $t$. We experiment with cosine, Euclidean, and Canberra distance.

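Equation 4 is the mean of the full cross-interval distance matrix, which `scipy.spatial.distance.cdist` computes directly (a sketch; the function name is ours):

```python
import numpy as np
from scipy.spatial.distance import cdist

def average_pairwise_distance(U_t, U_t_next, metric='cosine'):
    """Eq. 4: mean distance between every usage vector in interval t and every
    usage vector in interval t'. As in the text, metric can be 'cosine',
    'euclidean' or 'canberra'."""
    U_t, U_t_next = np.atleast_2d(U_t), np.atleast_2d(U_t_next)
    return float(cdist(U_t, U_t_next, metric=metric).mean())
```

Note that for large intervals the $N^t \times N^{t'}$ matrix can be memory-heavy; subsampling occurrences per interval is a common workaround.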
Generalisation to multiple time intervals The presented metrics quantify semantic change across pairs of temporal intervals $(t, t')$. When more than two intervals are available, we measure change across all pairs of contiguous intervals ($m(\mathbf{U}_w^t, \mathbf{U}_w^{t+1})$, where $m$ is one of the metrics) and collect these values into vectors. We then transform each vector into a scalar change score by computing the vector's mean and maximum values. Whereas the mean is indicative of semantic change across the entire period under consideration, the max pinpoints the pair of successive intervals where the strongest shift has occurred.

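The aggregation over contiguous interval pairs reduces to a few lines (a sketch; `metric` stands for any of ED, JSD or APD applied to consecutive per-interval inputs):

```python
def aggregate_change(series, metric):
    """Apply a change metric to all contiguous interval pairs and reduce the
    resulting vector to (mean, max): overall change vs. strongest single shift."""
    values = [metric(a, b) for a, b in zip(series, series[1:])]
    return sum(values) / len(values), max(values)
```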
# 4 Data

We examine word usages in a large diachronic corpus of English, the Corpus of Historical American English (COHA, Davies, 2012), which covers two centuries (1810-2009) of language use and includes a variety of genres, from fiction to newspapers and popular magazines, among others. In this study, we focus on texts written between 1910 and 2009, for which a minimum of 21M words per decade is available, and discard earlier decades, for which the data are less evenly balanced across decades.

We use the 100 words annotated with semantic shift scores by Gulordava and Baroni (2011) as our target words. These scores are human judgements collected by asking five annotators to quantify the degree of semantic change undergone by each word (shown out of context) from the 1960's to the 1990's. We exclude extracellular, as in COHA this word only appears in three decades; all other words appear in at least 8 decades, with a minimum and maximum frequency of 191 and 108,796, respectively. We refer to the resulting set of 99 words and corresponding shift scores as the 'GEMS dataset' or the 'GEMS words', as appropriate.

We collect a contextualised representation for each occurrence of these words in the second century of COHA, using BERT as described in Section 3.2. This results in a large set of usage representations, $\sim 1.3\mathrm{M}$ in total, which we cluster into usage types using $K$-Means and silhouette coefficients (Section 3.3). We use these usage representations and usage types in the evaluation and the analyses offered in the remainder of the paper.

# 5 Correlation with Human Judgements

Before using our proposed method to analyse language change, we assess how its key components compare with human judgements. We test whether the clustering into usage types reflects human similarity judgements (Section 5.1) and to what extent the degree of change computed with our metrics correlates with shift scores provided by humans (Section 5.2).

# 5.1 Evaluation of Usage Types

The clustering of contextualised representations into usage types is one of the main steps in our method (see Section 3.3). It relies on the similarity values between pairs of usage representations created by the language model. To quantitatively evaluate the quality of these similarity values (and thus, by extension, the quality of usage representations and usage types), we compare them to similarity judgements by human raters.

New dataset of similarity judgements We create a new evaluation dataset, following the annotation approach of Erk et al. (2009, 2013) for rating pairs of usages of the same word. Since we need to collect human judgements for pairs of usages, annotating the entire GEMS dataset would be extremely costly and time-consuming. Therefore, to limit the scope of the annotation, we select a subset of words. For each shift score value $s$ in the GEMS dataset, we sample a word uniformly at random from the words annotated with $s$. This results in 16 words. To ensure that our selection of usages is sufficiently varied, for each of these words we sample five usages from each of their usage types (the number of usage types is word-specific) along different time intervals, one usage per 20-year period over the century. All possible pairwise combinations are generated for each target word, resulting in a total of 3,285 usage pairs.

We use the crowdsourcing platform Figure Eight to collect five similarity judgements for each of these usage pairs. Annotators are shown pairs of usages of the same word: each usage shows the target word in its sentence, together with the previous and the following sentences (67 tokens on average). Annotators are asked to assign a similarity score on a 4-point scale, ranging from unrelated to identical, as defined by Brown (2008) and used, e.g., by Schlechtweg et al. (2018). A total of 380 annotators participated in the task. The inter-rater agreement, measured as the average pairwise Spearman's correlation between common annotation subsets, is 0.59. This is in line with previous approaches such as Schlechtweg et al. (2018), who report agreement scores between 0.57 and 0.68.

Results To obtain a single human similarity judgement per usage pair, we average the scores given by five annotators. We encode all averaged human similarity judgements for a given word in a square matrix. We then compute similarity scores over pairs of usage vectors output by BERT to obtain analogous matrices per word, and measure Spearman's rank correlation between the human- and the machine-generated matrices using the Mantel test (Mantel, 1967).

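The matrix comparison can be sketched as a permutation-based Mantel test: Spearman correlation between the upper triangles of the two matrices, with a null distribution obtained by jointly permuting the rows and columns of one matrix. This is a simplified illustration, not necessarily the exact implementation used here:

```python
import numpy as np
from scipy.stats import spearmanr

def mantel_test(A, B, n_perm=999, seed=0):
    """Spearman correlation between two square similarity matrices, with a
    one-sided p-value from random row/column permutations of A."""
    rng = np.random.default_rng(seed)
    iu = np.triu_indices_from(A, k=1)       # compare upper triangles only
    r_obs = spearmanr(A[iu], B[iu])[0]
    exceed = 0
    for _ in range(n_perm):
        p = rng.permutation(len(A))
        if spearmanr(A[p][:, p][iu], B[iu])[0] >= r_obs:
            exceed += 1
    return r_obs, (exceed + 1) / (n_perm + 1)
```

The joint permutation is what distinguishes a Mantel test from a naive correlation of matrix entries: it respects the fact that the entries of a similarity matrix are not independent observations.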
We observe a significant $(p < 0.05)$ positive correlation for 10 out of 16 words, with $\rho$ coefficients ranging from 0.13 to 0.45. This is an encouraging result, which indicates that BERT's word representations and similarity scores (as well as our clustering methods, which build on them) correlate, to a substantial extent, with human similarity judgements. We take this to provide a promising empirical basis for our approach.

# 5.2 Evaluation of Semantic Change Scores

We now quantitatively assess the semantic change scores yielded by the metrics described in Section 3.4 when applied to BERT usage representations and the usage types created with our approach. We do so by comparing them to the human shift scores in the GEMS dataset. For consistency with this dataset, which quantifies change from the 1960's to the 1990's as explained in Section 4, we only consider these four decades when calculating our scores. Using each of the metrics on representations from these time intervals, we assign a semantic change score to all the GEMS words. We then compute Spearman's rank correlation between the automatically generated change scores and the gold standard shift values.

Results Table 1 shows the Spearman's correlation coefficients obtained using our metrics, together with a frequency baseline (the difference between the normalised frequency of a word in the 1960's and in the 1990's). The three proposed metrics yield significant positive correlations. This is again a very encouraging result regarding the potential of contextualised word representations for capturing lexical semantic change.

As a reference, we report the correlation coefficients with respect to GEMS shift scores documented by the authors of two alternative approaches: the count-based model by Gulordava and Baroni (2011) themselves (trained on two time slices from the Google Books corpus with texts from the 1960's and the 1990's) and the sense-based SCAN model by Frermann and Lapata (2016) (trained on the DATE corpus with texts from the 1960's through the 1990's).

<table><tr><th>Method</th><th>Spearman's ρ</th></tr><tr><td>Frequency difference</td><td>0.068</td></tr><tr><td>Entropy difference (max)</td><td>0.278</td></tr><tr><td>Jensen-Shannon divergence (max)</td><td>0.276</td></tr><tr><td>Average pairwise distance (Euclidean, max)</td><td>0.285</td></tr><tr><td>Gulordava and Baroni (2011)</td><td>0.386</td></tr><tr><td>Frermann and Lapata (2016)</td><td>0.377</td></tr></table>

Table 1: Spearman's $\rho$ correlation coefficients between the gold standard scores in the GEMS dataset and the change scores assigned by our three metrics and a relative frequency baseline. For reference, correlation coefficients reported by previous works using different approaches are also given. All correlations are significant $(p < 0.05)$ except for the frequency difference baseline.

For all our metrics, the max across the four time intervals—i.e., identifying the pair of successive intervals where the strongest shift has occurred (cf. end of Section 3.4)—is the best-performing aggregation strategy. Table 1 only shows values obtained with max and, for APD, Euclidean distance, as these are the best-performing options.

It is interesting to observe that APD can prove as informative as JSD and ED, although it does not depend on the clustering of word occurrences into usage types. Yet, computing usage types offers a powerful tool for analysing lexical change, as we will see in the next section.

# 6 Analysis

In this section, we provide an in-depth qualitative analysis of the linguistic properties that define usage types and of the kinds of lexical semantic change we observe. More quantitative methods (such as taking the top $n$ words with the highest JSD, APD and ED and checking, e.g., how many cases of broadening each metric captures) are difficult to operationalise (Tang et al., 2016) because there exist no well-established formal notions of semantic change types in the linguistic literature. To carry out this analysis, for each GEMS word, we identify the most representative usages in a given usage type cluster by selecting the five vectors closest to the cluster centroid, and take the five corresponding sentences as usage examples.

# 6.1 What do Usage Types Capture?

We first leave the temporal variable aside and present a synchronous analysis of usage types. Our goal is to assess the interpretability and internal coherence of the obtained usage clusters.

We observe that usage types can discriminate between underlying senses of polysemous (and homonymous) words, between literal and figurative usages, and between usages that fulfil different syntactic roles; they can also single out phrasal collocations as well as named entities.

Polysemy and homonymy Distinctions often occur between underlying senses of polysemous and homonymous words. For example, the vectors collected for the polysemous word curious are grouped together into two usage types, depending on whether curious is used to describe something that excites attention as odd, novel, or unexpected ('a wonderful and curious and unbelievable story') or rather to describe someone who is marked by a desire to investigate and learn ('curious and amazed and innocent'). The same happens for the homonymous usages of the word coach, for instance, which can denote vehicles as well as instructors (see Figure 2a for a diachronic view of the usage types).

Metaphor and metonymy In several cases, literal and metaphorical usages are also separated. For example, occurrences of curtain are clustered into four usage types (Figure 2c): two of these correspond to a literal interpretation of the word as a hanging piece of cloth ('curtainless windows', 'pulled the curtain closed') whereas the other two indicate metaphorical interpretations of curtain as any barrier that excludes the free exchange of information or communication ('the curtain on the legal war is being raised'). Similarly, we obtain two usage types for sphere: one for literal usages that denote a round solid figure ('the sphere of the moon'), and the other for metaphorical interpretations of the word as an area of knowledge or activity ('a certain sphere of autonomy') as well as metonymical usages that refer to the planet Earth ('land and peoples on the top half of the sphere').

Syntactic roles and argument structure Further distinctions are observed between word usages that fulfil a different syntactic functionality: not only is part-of-speech ambiguity detected (e.g., 'the cost-tapered average tariff' vs. 'cost less to make') but contextualised representations also capture regularities in syntactic argument structures. For example, usages of refuse are clustered into nominal usages ('society's emotional refuse', 'the amount of refuse'), verbal transitive and intransitive usages ('fall, give up, refuse, kick'), as well as verbal usages with infinitive complementation ('refuse to go', 'refuse for the present to sign a treaty').

Collocations and named entities Specific clusters are also assigned to lexical items that are parts of phrasal collocations (e.g., 'iron curtain') or of named entities ('alexander graham bell' vs. 'bell-like whistle').

Other distinctions Some distinctions are interpretable but unexpected. As an example, the word doubt does not show the default noun-verb separation but rather a distinction between usages in affirmative contexts ('there is still doubt', 'the benefit of the doubt') and in negative contexts ('there is not a bit of doubt', 'beyond a reasonable doubt').

Observed errors For some words, we find that usages which appear to be identical are separated into different usage types. In a handful of cases, this seems due to the setup we have used for experimentation, which sets the minimum number of clusters to 2 (see Section 3.3). This leads to distinct usage types for words such as maybe, for which a single type is expected. In other cases, a given interpretation is not identified as an independent type, and its usages appear in different clusters. This holds, for example, for the word tenure, whose usages in phrases such as 'tenure-track faculty position' are present in two distinct usage types (see Figure 2b).

Finally, we see that in some cases a usage type ends up including two interpretations which arguably should have been distinguished. For example, two of the usage types identified for address are interpretable and coherent: one includes usages in the sense of formal speech and the other one includes verbal usages. The third usage type, however, includes a mix of nominal usages of the word as in 'disrespectful manners or address' as well as in 'network address'.

# 6.2 What Kinds of Change are Observed?

Here we consider usage types diachronically. Different kinds of change, driven by cultural and technological innovation as well as by historical events, emerge from a qualitative inspection of usage distributions along the temporal dimension. We describe the most prominent kinds—narrowing and broadening, including metaphorisation—and discuss the extent to which our metrics are able to detect them.

![](images/d8f66f5e1f4d5ed70ee550d08a355d197a3260256cfc9ef51e7f2c2227e9f462.jpg)
(a) coach

![](images/f9798dc663779cf3a70d4c010fd29593c5c08cf64e3f77408743482b4ee0a9e9.jpg)
(b) tenure

![](images/e92963b3a2e9583ff944a1ee6c450153fcb8cdf1c61920833743d6183a324723.jpg)
(c) curtain

![](images/0e33cb1ffa09fc09da99b2e1737639b1ebb04daada22cc7b75fd731a4d326d6b.jpg)
(d) disk

Figure 2: Evolution of usage type distributions in the period 1910-2009, generated with occurrences of coach, tenure, curtain and disk in COHA (Davies, 2012). The legends show sample usages per identified usage type.

Narrowing Examination of the dynamics of usage distributions allows us to see that for a few words certain usage types disappear or become less common over time (i.e., the interpretation of the word becomes 'narrower', less varied). This is the case, for example, for coach, where the frequency decrease of a usage type is gradual and caused by technological evolution (see Figure 2a).

Negative mean ED (see Section 3.4) reliably indicates this kind of narrowing. Indeed, coach is assigned one of the lowest ED scores among the GEMS words. In contrast, ED fails to detect the obsolescence of a usage type when new usage types emerge simultaneously (since this may lead to no entropy reduction). This is the case, e.g., for tenure. The usage type capturing tenure of a landed property becomes obsolete; however, we obtain a positive mean ED caused by the appearance of a new usage type (the third type in Figure 2b).

Broadening For a substantial number of words, we observe the emergence of new usage types (i.e., a 'broadening' of their use). This may be due to technological advances as well as to specific historical events. As an example, Figure 2d shows how, starting from the 1950's and as a result of technological innovation, the word disk starts to be used to denote also optical disks, whereas beforehand it referred only to generic flat circular objects.

A special kind of broadening is metaphorisation. As mentioned in Section 6.1, the usage types for the word curtain include metaphorical interpretations. Figure 2c allows us to see when the metaphorical meaning related to the historically charged expression iron curtain is acquired. This novel usage type is related to a specific historical period: it emerges between the 1930's and the 1940's, reaches its peak in the 1950's, and remains stably low in frequency starting from the 1970's.

The metrics that best capture broadening are JSD and APD—e.g., disk is assigned a high semantic change score by both metrics. Yet, sometimes these metrics generate different score rankings. For example, curtain yields a rather low APD score due to the low relative frequency of the novel usage (Figure 2c). In contrast, even though the novel usage type is not very prominent in some decades, JSD can still discriminate it and measure its development. On the other hand, the word address, for which we also observe broadening, is assigned a low score by JSD due to the errors in its usage type assignments pointed out in Section 6.1. As APD does not rely on usage types, it is not affected by this issue and does indeed assign a high change score to the word.

Finally, although our metrics help us identify the broadening of a word's meaning, they cannot capture the type of broadening (i.e., the nature of the emerging interpretations). Detecting metaphorisation, for example, may require inter-cluster comparisons to identify a metaphor's source and target usage types, which we leave to future work.

# 7 Conclusion

We have introduced a novel approach to the analysis of lexical semantic change. To our knowledge, this is the first work that tackles this problem using neural contextualised word representations and no lexicographic supervision. We have shown that the representations and the detected semantic shifts are aligned with human interpretation, and presented a new dataset of human similarity judgements which can be used to measure said alignment. Finally, through extensive qualitative analysis, we have demonstrated that our method allows us to capture a variety of synchronic and diachronic linguistic phenomena.

Our approach offers several advantages over previous methods: (1) it does not rely on a fixed number of word senses, (2) it captures morphosyntactic properties of word usage, and (3) it offers a more effective interpretation of lexical meaning by enabling the inspection of particular example sentences. In recent work, we have experimented with alternative ways of obtaining usage representations (using a different language model, fine-tuning, and various layer selection strategies) and we have obtained very promising results in detecting semantic change across four languages (Kutuzov and Giulianelli, 2020). In the future, we plan to investigate whether usage representations can provide an even finer-grained account of lexical meaning and its dynamics, e.g., to automatically discriminate between different types of meaning change. We expect our work to inspire further analyses of variation and change which exploit the expressiveness of contextualised word representations.

# Acknowledgments

This paper builds upon the preliminary work presented by Giulianelli (2019). We would like to thank Lisa Beinborn for providing useful feedback as well as the three anonymous ACL reviewers for their helpful comments. This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 819455).

+ # References
243
+
244
+ David Arthur and Sergei Vassilvitskii. 2007. k-means++: The Advantages of Careful Seeding. In Proceedings of the Eighteenth Annual ACM-SIAM Symposium on Discrete Algorithms, pages 1027-1035. Society for Industrial and Applied Mathematics.
245
+ Robert Bamler and Stephan Mandt. 2017. Dynamic Word Embeddings. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pages 380-389. JMLR.org.
246
+ Susan Windisch Brown. 2008. Choosing Sense Distinctions for WSD: Psycholinguistic Evidence. In Proceedings of ACL-08: HLT, Short Papers, pages 249-252, Columbus, Ohio. Association for Computational Linguistics.
247
+ Claudia Marlea Brugman. 1988. The Story of Over: Polysemy, Semantics, and the Structure of the Lexicon. Garland, New York.
248
+ Joan Bybee. 2015. Language Change. Cambridge University Press.
249
+ Paul Cook, Joy Han Lau, Diana McCarthy, and Timothy Baldwin. 2014. Novel Word-Sense Identification. In Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers, pages 1624-1635.
250
+ Andrew M Dai and Quoc V Le. 2015. Semi-supervised Sequence Learning. In Advances in Neural Information Processing Systems, pages 3079-3087.
251
+ Mark Davies. 2012. Expanding Horizons in Historical Linguistics with the 400-Million Word Corpus of Historical American English. Corpora, 7(2):121-157.
252
+ Marco Del Tredici, Raquel Fernandez, and Gemma Boleda. 2019. Short-Term Meaning Shift: A Distributional Exploration. In Proceedings of NAACL-HLT 2019 (Annual Conference of the North American Chapter of the Association for Computational Linguistics).
253
+
254
+ Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186.
255
+ Haim Dubossarsky, Yulia Tsvetkov, Chris Dyer, and Eitan Grossman. 2015. A Bottom Up Approach to Category Mapping and Meaning Change. In *Word Structure and Word Usage*. Proceedings of the NetWordS Final Conference, pages 66-70.
256
+ Katrin Erk, Diana McCarthy, and Nicholas Gaylord. 2009. Investigations on Word Senses and Word Usages. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, pages 10-18, Suntec, Singapore. Association for Computational Linguistics.
257
+ Katrin Erk, Diana McCarthy, and Nicholas Gaylord. 2013. Measuring Word Meaning in Context. Computational Linguistics, 39(3):511-554.
258
+ Katrin Erk and Sebastian Padó. 2008. A Structured Vector Space Model for Word Meaning in Context. In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing, pages 897-906.
259
+ Katrin Erk and Sebastian Padó. 2010. Exemplar-Based Models for Word Meaning in Context. In Proceedings of the ACL 2010 Conference (Short Papers), pages 92-97.
260
+ Lea Frermann and Mirella Lapata. 2016. A Bayesian Model of Diachronic Meaning Change. Transactions of the Association for Computational Linguistics, 4:31-45.
261
+ Mario Giulianielli. 2019. Lexical Semantic Change Analysis with Contextualised Word Representations. Master's thesis, University of Amsterdam, July.
262
+ Kristina Gulordava and Marco Baroni. 2011. A Distributional Similarity Approach to the Detection of Semantic Change in the Google Books Ngram Corpus. In Proceedings of the GEMS 2011 Workshop on Geometrical Models of Natural Language Semantics, pages 67-71.
263
+ William L Hamilton, Jure Leskovec, and Dan Jurafsky. 2016. Diachronic Word Embeddings Reveal Statistical Laws of Semantic Change. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1489-1501.
264
+ Paul J Hopper et al. 1991. On Some Principles of Grammaticization. Approaches to Grammaticalization, 1:17-35.
265
+
266
+ Jeremy Howard and Sebastian Ruder. 2018. Universal Language Model Fine-tuning for Text Classification. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 328-339.
267
+ Renfen Hu, Shen Li, and Shichen Liang. 2019. Diachronic Sense Modeling with Deep Contextualized Word Embeddings: An Ecological View. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3899-3908, Florence, Italy. Association for Computational Linguistics.
268
+ Adam Kilgarriff. 1997. I Don't Believe in Word Senses. Computers and the Humanities, 31(2):91-113.
269
+ Yoon Kim, Yi-I Chiu, Kentaro Hanaki, Darshan Hegde, and Slav Petrov. 2014. Temporal Analysis of Language through Neural Language Models. In Proceedings of the ACL 2014 Workshop on Language Technologies and Computational Social Science, pages 61-65.
270
+ Vivek Kulkarni, Rami Al-Rfou, Bryan Perozzi, and Steven Skiena. 2015. Statistically Significant Detection of Linguistic Change. In Proceedings of the 24th International Conference on World Wide Web, pages 625-635. International World Wide Web Conferences Steering Committee.
271
+ Andrey Kutuzov and Mario Giulianelli. 2020. UiO-UvA at SemEval-2020 Task 1: Contextualised Embeddings for Lexical Semantic Change Detection. Forthcoming.
272
+ Andrey Kutuzov, Lilja Øvrelid, Terrence Szymanski, and Erik Velldal. 2018. Diachronic Word Embeddings and Semantic Shifts: A Survey. In Proceedings of the 27th International Conference on Computational Linguistics, pages 1384-1397.
273
+ Jey Han Lau, Paul Cook, Diana McCarthy, Spandana Gella, and Timothy Baldwin. 2014. Learning Word Sense Distributions, Detecting Unattested Senses and Identifying Novel Senses Using Topic Models. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 259-270.
274
+ Jey Han Lau, Paul Cook, Diana McCarthy, David Newman, and Timothy Baldwin. 2012. Word Sense Induction for Novel Sense Detection. In Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics, pages 591-601. Association for Computational Linguistics.
275
+ Jianhua Lin. 1991. Divergence Measures Based on the Shannon Entropy. IEEE Transactions on Information theory, 37(1):145-151.
276
+ Peter Ludlow. 2014. Living Words: Meaning Underdetermination and the Dynamic Lexicon. OUP Oxford.
277
+
278
+ Suresh Manandhar, Ioannis P Klapaftis, Dmitriy Dligach, and Sameer S Pradhan. 2010. SemEval-2010 Task 14: Word Sense Induction & Disambiguation. In Proceedings of the 5th International Workshop on Semantic Evaluation, pages 63-68. Association for Computational Linguistics.
279
+ Nathan Mantel. 1967. The Detection of Disease Clustering and a Generalized Regression Approach. Cancer Research, 27(2):209-220.
280
+ Matej Martinc, Syrielle Montariol, Elaine Zosa, and Lidia Pivovarova. 2020. Capturing Evolution in Word Usage: Just Add More Clusters? In *Companion Proceedings of the International World Wide Web Conference*, pages 20-24.
281
+ Bryan McCann, James Bradbury, Caiming Xiong, and Richard Socher. 2017. Learned in Translation: Contextualized Word Vectors. In Advances in Neural Information Processing Systems, pages 6294-6305.
282
+ Jean-Baptiste Michel, Yuan Kui Shen, Aviva Presser Aiden, Adrian Veres, Matthew K Gray, Joseph P Pickett, Dale Hoiberg, Dan Clancy, Peter Norvig, Jon Orwant, et al. 2011. Quantitative Analysis of Culture Using Millions of Digitized Books. Science, 331(6014):176-182.
283
+ Sunny Mitra, Ritwik Mitra, Suman Kalyan Maity, Martin Riedl, Chris Biemann, Pawan Goyal, and Animesh Mukherjee. 2015. An Automatic Approach to Identify Word Sense Changes in Text Media across Timescales. *Natural Language Engineering*, 21(5):773-798.
284
+ Sunny Mitra, Ritwik Mitra, Martin Riedl, Chris Biemann, Animesh Mukherjee, and Pawan Goyal. 2014. That's Sick Dude! Automatic Identification of Word Sense Change across Different Timescales. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1020-1029.
285
+ Roberto Navigli and Daniele Vannella. 2013. SemEval-2013 Task 11: Word Sense Induction and Disambiguation within an End-User Application. In Second Joint Conference on Lexical and Computational Semantics (*SEM), Volume 2: Proceedings of the Seventh International Workshop on Semantic Evaluation (SemEval 2013), pages 193-201.
286
+ Patrick Pantel and Dekang Lin. 2002. Discovering Word Senses from Text. In Proceedings of the Eighth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '02, page 613-619, New York, NY, USA. Association for Computing Machinery.
287
+ Carita Paradis. 2011. Metonymization: A Key Mechanism in Semantic Change. Defining Metonymy in Cognitive Linguistics: Towards a Consensus View, pages 61-98.
288
+
289
+ Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep Contextualized Word Representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227-2237.
290
+ Mohammad Taher Pilehvar and Jose Camacho-Collados. 2019. WiC: the Word-in-Context Dataset for Evaluating Context-Sensitive Meaning Representations. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 1267-1273, Minneapolis, Minnesota. Association for Computational Linguistics.
291
+ Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving Language Understanding by Generative Pre-training. Technical report, OpenAI.
292
+ Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language Models are Unsupervised Multitask Learners. Technical report, OpenAI.
293
+ Miguel A Ré and Rajeev K Azad. 2014. Generalization of Entropy Based Divergence Measures for Symbolic Sequence Analysis. PLoS ONE, 9(4):e93532.
294
+ Alex Rosenfeld and Katrin Erk. 2018. Deep Neural Models of Semantic Shift. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 474-484.
295
+ Peter J. Rousseeuw. 1987. Silhouettes: A Graphical Aid to the Interpretation and Validation of Cluster Analysis. Journal of Computational and Applied Mathematics, 20:53-65.
296
+ Maja Rudolph and David Blei. 2018. Dynamic Embeddings for Language Evolution. In Proceedings of the 2018 World Wide Web Conference on World Wide Web, pages 1003-1011. International World Wide Web Conferences Steering Committee.
297
+ Dominik Schlechtweg, Sabine Schulte im Walde, and Stefanie Eckmann. 2018. Diachronic Usage Relatedness (DURel): A Framework for the Annotation of Lexical Semantic Change. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 169-174.
298
+ Hinrich Schütze. 1998. Automatic Word Sense Discrimination. Computational Linguistics, 24(1):97-123.
299
+ Xuri Tang. 2018. A State-of-the-Art of Semantic Change Computation. Natural Language Engineering, 24(5):649-676.
300
+
301
+ Xuri Tang, Weiguang Qu, and Xiaohe Chen. 2016. Semantic Change Computation: A Successive Approach. World Wide Web, 19(3):375-415.
302
+
303
+ Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention Is All You Need. In Advances in Neural Information Processing Systems, pages 5998-6008.
304
+
305
+ Gregor Wiedemann, Steffen Remus, Avi Chawla, and Chris Biemann. 2019. Does BERT Make Any Sense? Interpretable Word Sense Disambiguation with Contextualized Embeddings. In Proceedings of the 15th Conference on Natural Language Processing, KONVENS 2019, Erlangen, Germany.
306
+
307
+ Derry Tanti Wijaya and Reyyan Yeniterzi. 2011. Understanding Semantic Change of Words over Centuries. In Proceedings of the 2011 International Workshop on Detecting and Exploiting Cultural Diversity on the Social Web, pages 35-40. ACM.
308
+
309
+ Yang Xu and Charles Kemp. 2015. A Computational Evaluation of Two Laws of Semantic Change. In CogSci.
310
+
311
+ Yukun Zhu, Ryan Kiros, Rich Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Aligning Books and Movies: Towards Story-Like Visual Explanations by Watching Movies and Reading Books. In Proceedings of the IEEE International Conference on Computer Vision, pages 19-27.
312
+
313
+ # A Appendix
314
+
315
+ This appendix includes supplementary materials related to Section 5.1.
316
+
317
+ # A.1 New Dataset of Similarity Judgements
318
+
319
+ Obtaining usage pairs For each of our 16 target words, we sample five usages from each of their usage types, one for every 20-year period in the last century of COHA. When a usage type does not occur in a time interval, we uniformly sample an interval from those that do contain occurrences of that usage type. All possible pairwise combinations (without replacement) are generated for each target word, resulting in a total of 3,285 usage pairs.
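The pair-generation step described above can be sketched with `itertools.combinations`; the usage strings below are placeholders, not items from the actual COHA sample.

```python
from itertools import combinations

# Placeholder usages for one target word: e.g. 2 usage types x 5 sampled
# usages (the actual usages come from COHA and are not reproduced here).
usages = [f"usage_{i}" for i in range(10)]

# All pairwise combinations without replacement: n * (n - 1) / 2 pairs.
pairs = list(combinations(usages, 2))
print(len(pairs))  # 45 pairs for 10 usages
```

Summing such per-word pair counts over all 16 target words yields the reported total of 3,285 usage pairs.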
320
+
321
+ Crowdsourced annotation We use the crowdsourcing platform Figure Eight (since then acquired by Appen<sup>9</sup>) to collect five similarity judgements for each of these usage pairs. To control the quality of the similarity judgements, we select Figure Eight workers from the pool of the most experienced contributors, require them to be native English speakers, and require them to have completed a test quiz consisting of 10 similarity judgements. For this purpose, 170 usage pairs were manually annotated by the first author with 1 to 3 acceptable labels. The compensation scheme for the raters is based on an average wage of 10 USD per hour.
322
+
323
+
324
+
325
+ Figures 4 and 5 (on the next pages) show the full instructions given to the annotators and Figure 3 illustrates a single annotation item.
326
+
327
+ # federal
328
+
329
+ Please read carefully the following two sentences where the word [[federal]] occurs:
330
+
331
+ - robert m. hitchcock, who prosecuted the amerasia case in 1945, testified today that he had been gravely handicapped because the government's best evidence had been produced by illegal seizures by [[federal]] agents. the prosecution, he asserted, was in fact fortunate under the circumstances to have done as well as it did.
332
+ - there should be such a fire every saturday afternoon at the same time with the same actual damage. this time it was the records and documents of the [[federal]] trade commission, said to be " priceless." "also the reels of official motion pictures of historical or technical value.
333
+
334
+ How similar are the two occurrences of [[federal]]? (required)
335
+
336
+ $\bigcirc$ 1. Unrelated
337
+ $\bigcirc$ 2. Distantly related
338
+ $\bigcirc$ 3. Closely related
339
+ $\bigcirc$ 4. Identical
340
+ $\bigcirc$ Cannot decide (please use this option as little as possible)
341
+
342
+ Figure 3: An annotation item on the Figure Eight crowdsourcing platform.
343
+
344
+ # A.2 Correlation Results
345
+
346
+ We measure Spearman's rank correlation between human- and machine-generated usage similarity matrices using the Mantel test and observe a significant positive correlation for 10 out of 16 words. Table 2 presents the correlation coefficients and $p$ -values obtained for each word.
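A Spearman-based Mantel test amounts to correlating the upper triangles of the two similarity matrices and building a null distribution by jointly permuting rows and columns of one matrix. The sketch below is an illustration of the procedure, not the authors' implementation; matrix sizes and the number of permutations are arbitrary.

```python
import numpy as np
from scipy.stats import spearmanr

def mantel_test(A, B, n_perm=999, seed=0):
    """Spearman-based Mantel test between two square similarity matrices.

    Correlates their upper triangles, then estimates a p-value by jointly
    permuting the rows and columns of one matrix.
    """
    rng = np.random.default_rng(seed)
    iu = np.triu_indices_from(A, k=1)        # each unordered pair once
    r_obs = spearmanr(A[iu], B[iu]).correlation
    hits = 0
    for _ in range(n_perm):
        p = rng.permutation(A.shape[0])
        if spearmanr(A[p][:, p][iu], B[iu]).correlation >= r_obs:
            hits += 1
    return r_obs, (hits + 1) / (n_perm + 1)
```

Correlating a matrix with itself should give a coefficient near 1 and a small p-value, mirroring how a perfectly aligned human/machine pair would score.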
347
+
348
+ <table><tr><td></td><td>ρ</td><td>p</td></tr><tr><td>federal</td><td>0.131</td><td>0.001</td></tr><tr><td>spine</td><td>0.195</td><td>0.032</td></tr><tr><td>optical</td><td>0.227</td><td>0.003</td></tr><tr><td>compact</td><td>0.229</td><td>0.002</td></tr><tr><td>signal</td><td>0.233</td><td>0.008</td></tr><tr><td>leaf</td><td>0.252</td><td>0.001</td></tr><tr><td>net</td><td>0.361</td><td>0.001</td></tr><tr><td>coach</td><td>0.433</td><td>0.007</td></tr><tr><td>sphere</td><td>0.446</td><td>0.002</td></tr><tr><td>mirror</td><td>0.454</td><td>0.027</td></tr><tr><td>card</td><td>0.358</td><td>0.055</td></tr><tr><td>virus</td><td>0.271</td><td>0.159</td></tr><tr><td>disk</td><td>0.183</td><td>0.211</td></tr><tr><td>brick</td><td>0.203</td><td>0.263</td></tr><tr><td>virtual</td><td>-0.085</td><td>0.561</td></tr><tr><td>energy</td><td>0.002</td><td>0.990</td></tr></table>
349
+
350
+ Table 2: Correlation results per word.
351
+
352
+ # Overview
353
+
354
+ Each question includes two sentences. Both sentences contain a target word between double brackets, as in: [[target]]. Your task is to rate the similarity of the two usages of the target word according to the following scale:
355
+
356
+ 1. unrelated
357
+ 2. distantly related
358
+ 3. closely related
359
+ 4. identical
360
+
361
+ IMPORTANT: your task is to evaluate the similarity of the two usages of the same word, not the similarity of the two sentences in general.
362
+
363
+ If you are unable to choose a label because you do not understand the sentences, select the option "cannot decide". Please try to use this option as little as possible!
364
+
365
+ # An example
366
+
367
+ You will see two sentences. Both contain the target word marked by double brackets; in this example it's the word [[current]].
368
+
369
+ Read the sentences carefully:
370
+
371
+ - in any case, it's not a question of electrocution. we can arrange a relay which will break the [[current]] at the instant of application of weight. if the robot should place his weight on it, he wo n't see.
372
+ - already, while it was still a blueprint, they were proud of their idea, of its simple clean lines and undeniable originality -- it owed nothing in its conception to any of the [[current]] models of revolutionary strategy . the japanese red army comrades , whadi haddad and his pflp contingent , even the matchless " carlos " could only admire .
373
+
374
+ And then select how similar the two usages of the word [[current]] are:
375
+
376
+ 1. unrelated
377
+ 2. distantly related
378
+ 3. closely related
379
+ 4. identical
380
+ 5. (cannot decide)
381
+
382
+ You can choose only one label. Please try to use the option "cannot decide" as little as possible.
383
+
384
+ # Why do texts look weird?
385
+
386
+ The sentences you'll read don't look like they were taken from a book. This is because they have gone through some text processing. You should not be concerned nor influenced in your decisions by the fact that:
387
+
388
+ - all words are lowercase (written in small letters), even proper names or the pronoun "I"
389
+ - whitespaces may appear where you don't expect them (e.g. before a comma) and may sometimes not appear where you'd expect them (e.g. between words)
390
+ - strange characters and words occasionally appear
391
+ - some words are misspelled
392
+ - a few words are missing
393
+ - the target word may appear multiple times (but your judgement should be about the occurrence signalled by the [[ ]] markers)
394
+
395
+ Please simply ignore these aspects while labelling!
396
+
397
+ Figure 4: Annotation instructions (part 1).
398
+
399
+ # What do the similarity labels mean? More examples
400
+
401
+ Let's now look at examples for all four labels. Remember that you are evaluating the similarity of two word usages—not the overall similarity of the two sentences!
402
+
403
+ 1. How similar are these two usages of [[current]]?
404
+
405
+ - prices of the leading issues . considering past earnings records , are apparently on a conservative basis measured by [[current]] market valuations in other groups . on the other hand there is no particular speculative incentive for operations in this group , with all signs pointing to a lower volume of sales in the last half of the year .
406
+ - one of the weirdest was the disappearance of anchovies off the coast of peru . why this happened is still unclear . one theory is that the cause was the 1972 - 73 invasion of a warm-water [[current]] called el nino , which upset the ecology of the coldwater humboldt current , drastically reducing the supply of 119 economics in plain english plankton and other nutrients on which anchovies ( as well as whales ) feed .
407
+
408
+ # UNRELATED.
409
+
410
+ In the first sentence, "current" means being most recent or occurring at the present time. In the second sentence, "current" refers to a flow of water within a lake or an ocean. These two meanings have no properties in common; it is not possible to explain one usage in terms of the other.
411
+
412
+ 2. How similar are these two usages of [[current]]?
413
+
414
+ - it is quite possible to arrive at the right conclusions for the wrong reasons, just as it is possible to ignore history but not to repeat it. thus, the summers book represents an important [[current]] of thought in the u.s. military, which rightly argues that the vietnam defeat was not the fault of the military; never again should young americans be sent into battle without public backing and a clear definition of the goals of the military engagement.
415
+ - get busy with that I-tube ! If you do n't have it apart , cleaned , and together again before the day is out , i 'll coagulate your brains with alternating [[current]] . " not a robot moved !
416
+
417
+ # DISTANTLY RELATED.
418
+
419
+ In the first sentence, "current" refers to a feeling or idea that exists within a group of people. In the second sentence, "current" refers to a flow of electricity. Although these two meanings of "current" are different, they are related as they do share some properties: for example, both currents of thought and electronic current can flow, and they are both often the result of an interplay of forces. This is why connecting the two meanings in a sentence results in a perhaps sophisticated but understandable statement: (A) "a new current of thought is flowing through the circuits of parliament".
420
+
421
+ 3. How similar are these two usages of [[current]]?
422
+
423
+ - one of the weirdest was the disappearance of anchovies off the coast of peru . why this happened is still unclear . one theory is that the cause was the 1972 - 73 invasion of a warm-water [[current]] called el nino , which upset the ecology of the coldwater humboldt current , drastically reducing the supply of 119 economics in plain english plankton and other nutrients on which anchovies ( as well as whales ) feed
424
+ - he had survived by managing in a stupor to drag himself into a windowless shed behind his house. col. michael wiener , an israeli army doctor , said many of the survivors in the valley below the lake , such as in souboum , may simply have been in an air [[current]] that did not have any poison , while someone standing only a few yards away may have been killed. dr. weiner , the head of a 17-member rescue unit that came to camoer team to nkamba , about 100 miles northeast of bamenda .
425
+
426
+ # CLOSELY RELATED.
427
+
428
+ In the first sentence, "current" refers to a flow of water within a lake or an ocean. In the second sentence, "current" refers to a steady flowing movement of air. These two meanings are closely related: both refer to a steady and continuous flowing movement of some physical element. As in the previously encountered example (A), we can construct a sentence that relates the two usages: (B) "I can't remember whether El Nino is the name of an ocean or a wind current". Note, however, that the meanings are still ultimately different: a flow of water is not a flow of air.
429
+
430
+ 4. How similar are these two usages of [[current]]?
431
+
432
+ - dell's shares, on the other hand, go for 26 times projected 2004 earnings-but its business is three times as profitable as apple 's. The company 's supporters say [[current]] profits matter little because jobs has proved time and time again that he can create new products and trailblaze markets. that may be so, but as transamerica portfolio manager chris bonavico , who does n't own apple ' s stock , notes , " Apple will remain a company that is neat from a product and consumer standpoint but crap from an investor standpoint .
433
+ - prices of the leading issues . considering past earnings records , are apparently on a conservative basis measured by [[current]] market valuations in other groups . on the other hand there is no particular speculative incentive for operations in this group , with all signs pointing to a lower volume of sales in the last half of the year .
434
+
435
+ # IDENTICAL.
436
+
437
+ In both sentences, "current" means being most recent, up-to-date, or occurring at the present time. The two meanings are identical because they share virtually all properties, as can be seen from the following example. In the sentence (C) "I can't remember whether Donald Trump is the current president or vice-president of the United States", the meanings of "current" are ultimately the same regardless of whether Trump is president or vice-president. (Note the difference with respect to the constructed sentence (B) above.)
438
+
439
+ Now you're ready to start!
440
+
441
+ # Figure 5: Annotation instructions (part 2).
analysinglexicalsemanticchangewithcontextualisedwordrepresentations/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:6c800dd7154ae1376b04faa3abea5e4295ba93b9e5ad61cc39ca3ef1e7549c49
3
+ size 230561
analysinglexicalsemanticchangewithcontextualisedwordrepresentations/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:844a8861f5241d2374ba533f890411abbad27436872d448e505af509c0039155
3
+ size 475927
analyzinganalyticalmethodsthecaseofphonologyinneuralmodelsofspokenlanguage/8d49a011-f3a9-4ff7-a5e2-a69076301106_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:b3b58142e48ce6388fcb00142c6ebf8e0290816072b9a12de011c8c0090be515
3
+ size 67962
analyzinganalyticalmethodsthecaseofphonologyinneuralmodelsofspokenlanguage/8d49a011-f3a9-4ff7-a5e2-a69076301106_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:3344e7630a0a8dfa5a02a0dc46817fb7a7616a6517e47a7dd4729aada305ade3
3
+ size 83642
analyzinganalyticalmethodsthecaseofphonologyinneuralmodelsofspokenlanguage/8d49a011-f3a9-4ff7-a5e2-a69076301106_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:06cdce873ed68823c57d3523e84ea8256686c8910f2e9c8cdff9b7d30b54870f
3
+ size 675893
analyzinganalyticalmethodsthecaseofphonologyinneuralmodelsofspokenlanguage/full.md ADDED
@@ -0,0 +1,311 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Analyzing analytical methods: The case of phonology in neural models of spoken language
2
+
3
+ Grzegorz Chrupała
4
+ Cognitive Science and AI
5
+ Tilburg University
6
+ g.chrupala@uvt.nl
7
+
8
+ Bertrand Higy
9
+ Cognitive Science and AI
10
+ Tilburg University
11
+ b.j.r.higy@uvt.nl
12
+
13
+ Afra Alishahi
14
+ Cognitive Science and AI
15
+ Tilburg University
16
+ a.alishahi@uvt.nl
17
+
18
+ # Abstract
19
+
20
+ Given the fast development of analysis techniques for NLP and speech processing systems, few systematic studies have been conducted to compare the strengths and weaknesses of each method. As a step in this direction we study the case of representations of phonology in neural network models of spoken language. We use two commonly applied analytical techniques, diagnostic classifiers and representational similarity analysis, to quantify to what extent neural activation patterns encode phonemes and phoneme sequences. We manipulate two factors that can affect the outcome of analysis. First, we investigate the role of learning by comparing neural activations extracted from trained versus randomly-initialized models. Second, we examine the temporal scope of the activations by probing both local activations corresponding to a few milliseconds of the speech signal, and global activations pooled over the whole utterance. We conclude that reporting analysis results with randomly initialized models is crucial, and that global-scope methods tend to yield more consistent results; we recommend their use as a complement to local-scope diagnostic methods.
21
+
22
+ # 1 Introduction
23
+
24
+ As end-to-end architectures based on neural networks became the tool of choice for processing speech and language, there has been increased interest in techniques for analyzing and interpreting the representations emerging in these models. A large array of analytical techniques have been proposed and applied to diverse tasks and architectures (Belinkov and Glass, 2019; Alishahi et al., 2019).
25
+
26
+ Given the fast development of analysis techniques for NLP and speech processing systems, relatively few systematic studies have been conducted to compare the strengths and weaknesses of each methodology and to assess the reliability and explanatory power of their outcomes in controlled settings. This paper reports a step in this direction: as a case study, we examine the representation of phonology in neural network models of spoken language. We choose three different models that process speech signal as input, and analyze their learned neural representations.
27
+
28
+ We use two commonly applied analytical techniques: (i) diagnostic models and (ii) representational similarity analysis to quantify to what extent neural activation patterns encode phonemes and phoneme sequences.
29
+
30
+ In our experiments, we manipulate two important factors that can affect the outcome of analysis. One pitfall not always successfully avoided in work on neural representation analysis is the role of learning. Previous work has shown that sometimes non-trivial representations can be found in the activation patterns of randomly initialized, untrained neural networks (Zhang and Bowman, 2018; Chrupała and Alishahi, 2019). Here we investigate the representations of phonology in neural models of spoken language in light of this fact, as extant studies have not properly controlled for the role of learning in these representations.
31
+
32
+ The second manipulated factor in our experiments is the scope of the extracted neural activations. We control for the temporal scope, probing both local activations corresponding to a few milliseconds of the speech signal, as well as global activations pooled over the whole utterance.
33
+
34
+
35
+
36
+ When applied to global-scope representations, both analysis methods detect a robust difference between the trained and randomly initialized target models. However, we find that in our setting, RSA applied to local representations shows low correlations between phonemes and neural activation patterns for both trained and randomly initialized target models, and for one of the target models the local diagnostic classifier shows only a minor difference in the decodability of phonemes between the randomly initialized and the trained network. This highlights the importance of reporting analysis results with randomly initialized models as a baseline.
37
+
38
+ This paper comes with a repository which contains instructions and code to reproduce our experiments.
39
+
40
+ # 2 Related work
41
+
42
+ # 2.1 Analysis techniques
43
+
44
+ Many current neural models of language learn representations that capture useful information about the form and meaning of the linguistic input. Such neural representations are typically extracted from activations of various layers of a deep neural architecture trained for a target task such as automatic speech recognition or language modeling.
45
+
46
+ A variety of analysis techniques have been proposed in the academic literature to analyze and interpret representations learned by deep learning models of language as well as explain their decisions; see Belinkov and Glass (2019) and Alishahi et al. (2019) for a review. Some of the proposed techniques aim to explain the behavior of a network by tracking the response of individual or groups of neurons to an incoming trigger (e.g., Nagamine et al., 2015; Krug et al., 2018). In contrast, a larger body of work is dedicated to determining what type of linguistic information is encoded in the learned representations. This type of analysis is the focus of our paper. Two commonly used approaches to analyzing representations are:
47
+
48
+ - Probing techniques, or diagnostic classifiers, i.e. methods which use the activations from different layers of a deep learning architecture as input to a prediction model (e.g., Adi et al., 2017; Alishahi et al., 2017; Hupkes et al., 2018; Conneau et al., 2018);
49
+
50
+ - Representational Similarity Analysis (RSA) borrowed from neuroscience (Kriegeskorte et al., 2008) and used to correlate similarity structures of two different representation spaces (Bouchacourt and Baroni, 2018; Chrupała and Alishahi, 2019; Abnar et al., 2019; Abdou et al., 2019).
51
+
52
+ We use both techniques in our experiments to systematically compare their output.
53
+
54
+ # 2.2 Analyzing random representations
55
+
56
+ Research on the analysis of neural encodings of language has shown that in some cases, substantial information can be decoded from activation patterns of randomly initialized, untrained recurrent networks. It has been suggested that the dynamics of the network together with the characteristics of the input signal can result in non-random activation patterns (Zhang and Bowman, 2018).
57
+
58
+ Using activations generated by randomly initialized recurrent networks has a history in speech recognition and computer vision. Two better-known families of such techniques are called Echo State Networks (ESN) (Jaeger, 2001) and Liquid State Machines (LSM) (Maass et al., 2002). The general approach (also known as reservoir computing) is as follows: the input signal is passed through a randomly initialized network to generate a nonlinear response signal. This signal is then used as input to train a model to generate the desired output at a reduced cost.
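The reservoir idea can be illustrated in a few lines of numpy: pass the input through a fixed, randomly initialized recurrent network and collect its nonlinear response. The sketch below is illustrative only; the hidden size and spectral-radius value are arbitrary choices, not parameters from ESN/LSM literature or from the models studied here.

```python
import numpy as np

def reservoir_states(inputs, n_hidden=50, spectral_radius=0.9, seed=0):
    """Nonlinear response of a randomly initialized, untrained recurrent
    network (a 'reservoir'); its states would feed a cheap readout model."""
    rng = np.random.default_rng(seed)
    W_in = rng.normal(size=(n_hidden, inputs.shape[1]))
    W = rng.normal(size=(n_hidden, n_hidden))
    # Rescale recurrent weights so the dynamics do not blow up.
    W *= spectral_radius / np.max(np.abs(np.linalg.eigvals(W)))
    h = np.zeros(n_hidden)
    states = []
    for x in inputs:
        h = np.tanh(W_in @ x + W @ h)
        states.append(h.copy())
    return np.array(states)
```

Only the readout trained on these states has learnable parameters, which is what makes the approach cheap.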
59
+
60
+ We also focus on representations from randomly initialized neural models but do so in order to show how training a model changes the information encoded in the representations according to our chosen analysis methods.
61
+
62
+ # 2.3 Neural representations of phonology
63
+
64
+ Since the majority of neural models of language work with text rather than speech, the bulk of work on representation analysis has been focused on (written) word and sentence representations. However, a number of studies analyze neural representations of phonology learned by models that receive a speech signal as their input.
65
+
66
+ As an example of studies that track responses of neurons to controlled input, Nagamine et al. (2015) analyze local representations acquired from a deep model of phoneme recognition and show that both individual and groups of nodes in the trained network are selective to various phonetic features, including manner of articulation, place of articulation, and voicing. Krug et al. (2018) use a similar approach and suggest that phonemes are learned as an intermediate representation for predicting graphemes, especially in very deep layers.
67
+
68
+
69
+
70
+ Others predominantly use diagnostic classifiers for phoneme and grapheme classification from neural representations of speech. In one of their experiments Alishahi et al. (2017) use a linear classifier to predict phonemes from local activation patterns of a grounded language learning model, where images and their spoken descriptions are processed and mapped into a shared semantic space. Their results show that the network encodes substantial knowledge of phonology on all its layers, but most strongly on the lower recurrent layers.
71
+
72
+ Similarly, Belinkov and Glass (2017) use diagnostic classifiers to study the encoding of phonemes in an end-to-end ASR system with convolutional and recurrent layers, by feeding local (frame-based) representations to an MLP to predict a phoneme label. They show that phonological information is best represented in lowest input and convolutional layers and to some extent in low-to-middle recurrent layers. Belinkov et al. (2019) extend their previous work to multiple languages (Arabic and English) and different datasets, and show a consistent pattern across languages and datasets where both phonemes and graphemes are encoded best in the middle recurrent layers.
73
+
74
+ None of these studies report on phoneme classification from randomly initialized versions of their target models, and none use global (i.e., utterance-level) representations in their analyses.
75
+
76
+ # 3 Methods
77
+
78
+ In this section we first describe the speech models which are the targets of our analyses, followed by a discussion of the methods used here to carry out these analyses.
79
+
80
+ # 3.1 Target models
81
+
82
+ We tested the analysis methods on three target models trained on speech data.
83
+
84
+ Transformer-ASR model The first model is a transformer model (Vaswani et al., 2017) trained on the automatic speech recognition (ASR) task. More precisely, we used a pretrained joint CTC-Attention transformer model from the ESPnet toolkit (Watanabe et al., 2018), trained on the Librispeech dataset (Panayotov et al., 2015). The architecture is based on the hybrid CTC-Attention decoding scheme presented by Watanabe et al. (2017) but adapted to the transformer model. The encoder is composed of two 2D convolutional layers (with stride 2 in both time and frequency) and a linear layer, followed by 12 transformer layers, while the decoder has 6 such layers. The convolutional layers use 512 channels, which is also the output dimension of the linear and transformer layers. The dimensions of the flattened outputs of the two convolutional layers (along frequencies and channels) are 20922 and 10240 respectively: we omit these two layers in our analyses due to their excessive size. The input to the model is a spectrogram with 80 coefficients and 3 pitch features, augmented with the SpecAugment method (Park et al., 2019). The output is composed of 5000 SentencePiece subword tokens (Kudo and Richardson, 2018). The model is trained for 120 epochs using the optimization strategy from Vaswani et al. (2017), also known as Noam optimization. Decoding is performed with a beam of size 60 for reported word error rates (WER) of $2.6\%$ and $5.7\%$ on the test set (for the clean and other subsets respectively).
85
+
86
+
87
+
88
+ RNN-VGS model The Visually Grounded Speech (VGS) model is trained on the task of matching images with their corresponding spoken captions, first introduced by Harwath and Glass (2015) and Harwath et al. (2016). We use the architecture of Merkx et al. (2019) which implemented several improvements over the RNN model of Chrupała et al. (2017), and train it on the Flickr8K Audio Caption Corpus (Harwath and Glass, 2015). The speech encoder consists of one 1D convolutional layer (with 64 output channels) which subsamples the input by a factor of two, and four bidirectional GRU layers (each of size 2048) followed by a self-attention-based pooling layer. The image encoder uses features from a pre-trained ResNet-152 model (He et al., 2016) followed by a linear projection. The loss function is a margin-based ranking objective. Following Merkx et al. (2019) we trained the model using the Adam optimizer (Kingma and Ba, 2015) with a cyclical learning rate schedule (Smith, 2017). The input consists of MFCC features with total energy and delta and double-delta coefficients, with a combined size of 39.
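A margin-based ranking objective of this kind can be sketched in numpy as follows. This is an illustrative implementation of the standard hinge ranking loss over a batch of matched (speech, image) embedding pairs, not the authors' code; the batch size and margin below are arbitrary.

```python
import numpy as np

def margin_ranking_loss(speech_emb, image_emb, margin=0.2):
    """Hinge ranking loss: matched pairs should score higher than all
    mismatched pairs in the batch by at least `margin`."""
    s = speech_emb / np.linalg.norm(speech_emb, axis=1, keepdims=True)
    v = image_emb / np.linalg.norm(image_emb, axis=1, keepdims=True)
    S = s @ v.T                     # S[i, j] = cosine sim(speech_i, image_j)
    pos = np.diag(S)                # similarities of matched pairs
    # Hinge terms: mismatched captions for an image, and vice versa.
    cost_col = np.clip(margin + S - pos[None, :], 0.0, None)
    cost_row = np.clip(margin + S - pos[:, None], 0.0, None)
    mask = 1.0 - np.eye(S.shape[0])  # exclude the matched (diagonal) pairs
    return float(((cost_col + cost_row) * mask).sum() / S.shape[0])
```

When every matched pair already outscores all mismatches by the margin, the loss is zero.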
89
+
90
+ RNN-ASR model This model is a middle ground between the two previous ones. It is trained as a speech recognizer similarly to the transformer model but the architecture of the encoder follows the RNN-VGS model (except that the recurrent layers are one-directional in order to fit the model in GPU memory). The last GRU layer of the encoder is fed to the attention-based decoder from Bahdanau et al. (2015), here composed of a single layer of 1024 GRU units. The model is trained with the Adadelta optimizer (Zeiler, 2012). The input features are identical to the ones used for the VGS model; it is also trained on the Flickr8k dataset spoken caption data, using the original written captions as transcriptions. The architecture of this model is not optimized for the speech recognition task: rather it is designed to be as similar as possible to the RNN-VGS model while still performing reasonably on speech recognition (WER of $24.4\%$ on Flickr8k validation set with a beam of size 10).
91
+
92
+ # 3.2 Analytical methods
93
+
94
+ We consider two analytical approaches:
95
+
96
+ - A diagnostic model is a simple, often linear, classifier or regressor trained to predict some information of interest from neural activation patterns. To the extent that the model successfully decodes the information, we conclude that this information is present in the neural representations.
97
+ - Representational similarity analysis (RSA) is a second-order approach in which similarities between pairs of stimuli are measured in two representation spaces: e.g., the space of neural activation patterns and a space of symbolic linguistic representations such as sequences of phonemes or syntax trees (see Chrupała and Alishahi, 2019). The correlation between these pairwise similarity measurements then quantifies how closely the two representation spaces are aligned.
98
+
99
+ The diagnostic models have trainable parameters while the RSA-based models do not, except when using a trainable pooling operation.
100
+
101
+ We also consider two ways of viewing activation patterns in hidden layers as representations:
102
+
103
+ - Local representations at the level of a single frame or time-step;
104
+
105
+ - Global representations at the level of the whole utterance.
106
+
107
+ Combinations of these two facets give rise to the following concrete analysis models.
108
+
109
+ Local diagnostic classifier. We use single frames of input (MFCC or spectrogram) features, or activations at a single timestep as input to a logistic diagnostic classifier which is trained to predict the phoneme aligned to this frame or timestep.
110
+
111
+ Local RSA. We compute two sets of similarity scores. For neural representations, these are cosine similarities between neural activations from pairs of frames. For phonemic representations our similarities are binary, indicating whether a pair of frames are labeled with the same phoneme. Pearson's $r$ coefficient computed against a binary variable, as in our setting, is also known as point biserial correlation.
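Local RSA as described above can be sketched in a few lines of NumPy. The activation and phoneme arrays below are random stand-ins for illustration (the array sizes are arbitrary, not taken from the paper); pairing each frame into exactly one similarity score follows the sampling procedure of Section 3.4:

```python
import numpy as np

rng = np.random.default_rng(0)
acts = rng.normal(size=(1000, 64))         # stand-in frame activations
phonemes = rng.integers(0, 40, size=1000)  # stand-in phoneme labels

# Pair frames without replacement: each frame contributes to
# exactly one similarity score, so the scores are independent.
idx = rng.permutation(1000).reshape(-1, 2)

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

neural_sim = np.array([cosine(acts[i], acts[j]) for i, j in idx])
same_phoneme = np.array([float(phonemes[i] == phonemes[j]) for i, j in idx])

# Pearson's r against the binary variable = point-biserial correlation.
r = np.corrcoef(neural_sim, same_phoneme)[0, 1]
```

With real data, `r` would quantify how well cosine similarity in activation space tracks phoneme identity.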
112
+
113
+ Global diagnostic classifier. We train a linear diagnostic classifier to predict the presence of phonemes in an utterance based on global (pooled) neural activations. For each phoneme $j$ the predicted probability that it is present in the utterance with representation $\mathbf{h}$ is denoted as $\mathrm{P}(j|\mathbf{h})$ and computed as:
114
+
115
+ $$
116
+ \mathrm{P}(j \mid \mathbf{h}) = \operatorname{sigmoid}\bigl(\mathbf{W} \operatorname{Pool}(\mathbf{h}) + \mathbf{a}\bigr)_{j} \tag{1}
117
+ $$
118
+
119
+ where Pool is one of the pooling functions described in Section 3.2.1.
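Equation (1) amounts to an independent logistic regression per phoneme on top of the pooled utterance vector. A minimal NumPy sketch of the forward pass, with untrained stand-in weights and hypothetical dimensions (40 phonemes, 64-dimensional activations):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def mean_pool(h):
    # h: (T, d) sequence of activations -> (d,) utterance representation.
    return h.mean(axis=0)

rng = np.random.default_rng(0)
n_phonemes, d = 40, 64
W = rng.normal(scale=0.1, size=(n_phonemes, d))  # learned by the probe
a = np.zeros(n_phonemes)                         # bias term

h = rng.normal(size=(120, d))      # stand-in activations, one utterance
p = sigmoid(W @ mean_pool(h) + a)  # Eq. (1): P(phoneme j present)
```

In the actual experiments `W` and `a` are trained with a binary cross-entropy objective over many utterances; only the forward computation is shown here.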
120
+
121
+ Global RSA. We compute pairwise similarity scores between global (pooled; see Section 3.2.1) representations and measure Pearson's $r$ with the pairwise string similarities between phonemic transcriptions of utterances. We define string similarity as:
122
+
123
+ $$
124
+ \operatorname{sim}(a, b) = 1 - \frac{\operatorname{Levenshtein}(a, b)}{\max(|a|, |b|)} \tag{2}
125
+ $$
126
+
127
+ where $|\cdot|$ denotes string length and Levenshtein is the string edit distance.
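Equation (2) is straightforward to implement; a plain-Python sketch (assuming non-empty strings, since the denominator would otherwise be zero):

```python
def levenshtein(a, b):
    # Classic dynamic-programming edit distance over two strings.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def sim(a, b):
    # Eq. (2): normalized inverse edit distance, in [0, 1].
    return 1.0 - levenshtein(a, b) / max(len(a), len(b))
```

For example, `sim("kat", "bat")` differs from 1 by one substitution over a length-3 string.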
128
+
129
+ # 3.2.1 Pooling
130
+
131
+ The representations we evaluate are sequential: sequences of input frames, or of neural activation states. In order to pool them into a single global representation of the whole utterance we test two approaches.
132
+
133
+ Mean pooling. We simply take the mean for each feature along the time dimension.
134
+
135
+ Attention-based pooling. Here we use a simple self-attention operation with parameters trained to optimize the score of interest, i.e. the RSA score or the error of the diagnostic classifier. The attention-based pooling operator performs a weighted average over the positions in the sequence, using scalar weights. The pooled utterance representation $\mathrm{Pool}(\mathbf{h})$ is defined as:
136
+
137
+ $$
138
+ \operatorname{Pool}(\mathbf{h}) = \sum_{t=1}^{N} \alpha_{t} \mathbf{h}_{t}, \tag{3}
139
+ $$
140
+
141
+ with the weights $\alpha$ computed as:
142
+
143
+ $$
144
+ \alpha_{t} = \frac{\exp\left(\mathbf{w}^{T} \mathbf{h}_{t}\right)}{\sum_{j=1}^{N} \exp\left(\mathbf{w}^{T} \mathbf{h}_{j}\right)}, \tag{4}
145
+ $$
146
+
147
+ where $\mathbf{w}$ are learnable parameters, and $\mathbf{h}_t$ is an input or activation vector at position $t$ .<sup>3</sup>
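Equations (3) and (4) together define dot-product self-attention pooling with a single learned scoring vector $\mathbf{w}$. A NumPy sketch with random stand-in activations (the dimensions are arbitrary):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())  # shift for numerical stability
    return e / e.sum()

def attention_pool(h, w):
    # h: (T, d) activations; w: (d,) learned scoring vector.
    alpha = softmax(h @ w)   # Eq. (4): one scalar weight per timestep
    return alpha @ h         # Eq. (3): weighted average over time

rng = np.random.default_rng(0)
h = rng.normal(size=(50, 16))
w = rng.normal(size=16)
pooled = attention_pool(h, w)  # shape (16,)
```

Note that with $\mathbf{w} = \mathbf{0}$ the weights are uniform and this reduces to mean pooling, which makes the two pooling variants directly comparable.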
148
+
149
+ # 3.3 Metrics
150
+
151
+ For RSA we use Pearson's $r$ to measure how closely the activation similarity space corresponds to the phoneme or phoneme string similarity space. For the diagnostic classifiers we use the relative error reduction (RER) over the majority class baseline to measure how well phoneme information can be decoded from the activations.
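Relative error reduction rescales accuracy so that the majority-class baseline maps to 0 and perfect classification to 1; a one-line sketch:

```python
def relative_error_reduction(acc, baseline_acc):
    # Fraction of the baseline's error eliminated by the classifier.
    # acc and baseline_acc are accuracies in [0, 1), baseline_acc < 1.
    return (acc - baseline_acc) / (1.0 - baseline_acc)
```

For example, a classifier at 55% accuracy over a 10% majority baseline removes half of the baseline's error, giving an RER of 0.5.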
152
+
153
+ Effect of learning In order to be able to assess and compare how sensitive the different methods are to the effect of learning on the activation patterns, it is important to compare the score on the trained model to that on the randomly initialized model; we thus always display the two jointly. We posit that a desirable property of an analytical method is that it is sensitive to the learning effect, and that the scores on trained versus randomly initialized models are clearly separated.
154
+
155
+ Coefficient of partial determination Correlation between the similarity structures of two representational spaces can, in principle, be partly due to the fact that both spaces are correlated with a third space. For example, were we to get a high value for global RSA for one of the top layers of the RNN-VGS model, we might suspect that this is due to the fact that string similarities between phonemic transcriptions of captions are correlated with visual similarities between their corresponding images, rather than due to the layer encoding phoneme strings. In order to control for this issue, we can carry out RSA between two spaces while controlling for the third, confounding, similarity space. We do this by computing the coefficient of partial determination, defined as the relative reduction in error caused by including variable $X$ in a linear regression model for $Y$:
158
+
159
+ $$
160
+ R_{\text{partial}}^{2}(Y, X \mid Z) = \frac{e_{Y \sim Z} - e_{Y \sim X + Z}}{e_{Y \sim Z}} \tag{5}
161
+ $$
162
+
163
+ where $e_{Y \sim X + Z}$ is the sum of squared errors of the model with all variables, and $e_{Y \sim Z}$ is that of the model with $X$ removed. Given the scenario above with the confounding space being visual similarity, we identify $Y$ as the pairwise similarities in phoneme string space, $X$ as the similarities in neural activation space, and $Z$ as the similarities in the visual space. The visual similarities are computed via cosine similarity on the image feature vectors corresponding to the stimulus utterances.
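Equation (5) can be computed with two ordinary least-squares fits. The sketch below uses synthetic stand-in variables (not the paper's actual similarity data) in which $Y$ depends on both $X$ and the confound $Z$:

```python
import numpy as np

def sse(y, X):
    # Sum of squared errors of an OLS fit of y on X (with intercept).
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return float(resid @ resid)

def r2_partial(y, x, z):
    # Eq. (5): relative error reduction from adding x to a model of y on z.
    e_z = sse(y, z.reshape(-1, 1))
    e_xz = sse(y, np.column_stack([x, z]))
    return (e_z - e_xz) / e_z

rng = np.random.default_rng(0)
z = rng.normal(size=200)  # stand-in confound (visual similarities)
x = rng.normal(size=200)  # stand-in neural similarities
y = 0.8 * x + 0.5 * z + 0.1 * rng.normal(size=200)  # string similarities
```

Here `r2_partial(y, x, z)` is close to 1 because, by construction, `x` explains most of `y` beyond what `z` accounts for; if `x` were pure noise it would be near 0.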
164
+
165
+ # 3.4 Experimental setup
166
+
167
+ All analytical methods are implemented in PyTorch (Paszke et al., 2019). The diagnostic classifiers are trained using Adam, with a learning rate schedule that scales the rate by 0.1 after 10 epochs with no improvement in accuracy. We terminate training after 50 epochs with no improvement. Global RSA with attention-based pooling is trained using Adam for 60 epochs with a fixed learning rate (0.001). For all trainable models we snapshot the model parameters after every epoch and report the results for the epoch with the best validation score. In all cases we sample half of the available data for training (if applicable), holding out the other half for validation.
168
+
169
+ Sampling data for local RSA. When computing RSA scores, it is common practice in neuroscience research to use the whole upper triangular part of the matrices containing pairwise similarity scores between stimuli, presumably because the number of stimuli is typically small in that setting. In our case the number of stimuli is very large, which makes using all the pairwise similarities computationally taxing. More importantly, when each stimulus is used for computing multiple similarity scores, these scores are not independent, and the score distribution changes with the number of stimuli.
170
+
171
+ We therefore use an alternative procedure where each stimulus is sampled without replacement and used only in a single similarity calculation.
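This sampling scheme can be sketched as a simple shuffle-and-chunk over the stimulus set (the function name and seed are illustrative, not from the paper):

```python
import random

def sample_pairs(stimuli, seed=0):
    # Shuffle, then walk the list two at a time: each stimulus is used
    # in exactly one similarity computation, so the resulting scores
    # are independent across pairs. An odd item out is dropped.
    items = list(stimuli)
    random.Random(seed).shuffle(items)
    return [(items[i], items[i + 1]) for i in range(0, len(items) - 1, 2)]
```

Drawing each stimulus only once trades away some data efficiency for independent similarity scores whose distribution does not depend on the number of stimuli.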
172
+
173
+ # 4 Results
174
+
175
+ Figures 1-3 display the outcome of analyzing our target models. All three figures are organized in a $2 \times 3$ matrix of panels, with the top row showing the diagnostic methods and the bottom row the RSA methods; the first column corresponds to local scope, while columns two and three show global scope with mean and attention pooling respectively. The data points are displayed in the order of the hierarchy of layers for each architecture, starting with the input (layer $\mathrm{id} = 0$). In all the reported experiments, the score of the diagnostic classifiers corresponds to relative error reduction (RER), whereas for RSA we show Pearson's correlation coefficient. For methods with trainable parameters we show three separate runs with different random seeds in order to illustrate the variability due to parameter initialization.
176
+
177
+ Figure 4 shows the results of global RSA with mean pooling on the RNN-VGS target model, while controlling for visual similarity as a confound.
178
+
179
+ We will discuss the patterns of results observed for each model separately in the following sections.
180
+
181
+ # 4.1 Analysis of the Transformer-ASR model
182
+
183
+ As can be seen in Figure 1, most reported experiments (with the exception of local RSA) suggest that phonemes are best encoded in the pre-final layers of the deep network. The results also show a strong impact of learning on the predictions of the analytical methods, as is evident from the difference between the performance using representations of the trained versus randomly initialized models.
184
+
185
+ Local RSA shows low correlation values overall, and does not separate the trained versus random conditions well.
186
+
187
+ # 4.2 Analysis of the RNN-VGS model
188
+
189
+ Most experimental findings displayed in Figure 2 suggest that phonemes are best encoded in RNN layers 3 and 4 of the VGS model. They also show that the representations extracted from the trained model encode phonemes more strongly than the ones from the random version of the model.
190
+
191
+ However, the impact of learning is more salient with global than with local scope: the scores of both the local classifier and local RSA on random vs. trained representations are close to each other for all layers. For the global representations, the performance on trained representations quickly diverges from that on random representations from the first RNN layer onward.
194
+
195
+ Furthermore, as demonstrated in Figure 4, for top RNN layers of this architecture, the correlation between similarities in the neural activation space and the similarities in the phoneme string space is not solely due to both being correlated to visual similarities: indeed similarities in activation space contribute substantially to predicting string similarities, over and above the visual similarities.
196
+
197
+ # 4.3 Analysis of the RNN-ASR model
198
+
199
+ The overall qualitative patterns for this target model are the same as for RNN-VGS. The absolute scores for the global diagnostic variants are higher, and the curves steeper, which may reflect that the objective for this target model is more closely aligned with encoding phonemes than in the case of RNN-VGS.
200
+
201
+ # 4.4 RNN vs Transformer models
202
+
203
+ In the case of the local diagnostic setting there is a marked contrast between the behavior of the RNN models on the one hand and the Transformer model on the other: the encoding of phoneme information for the randomly initialized RNN is substantially stronger in the higher layers, while for the randomly initialized Transformer the curve is flat. This difference is likely due to the very different connectivity in these two architectures.
204
+
205
+ With random weights in RNN layer $i$ , the activations at time $t$ are a function of the features from layer $i - 1$ at time $t$ , mixed with the features from layer $i$ at time $t - 1$ . There are thus two effects of depth that may make it easier for a linear diagnostic classifier to classify phonemes from the activations of a randomly initialized RNN: (i) features are recombined among themselves, and (ii) local context features are also mixed into the activations.
206
+
207
+ The Transformer architecture, on the other hand, lacks this local recurrent connectivity: already in the first layer, the activations at each timestep $t$ are a combination of the activations at all other timesteps. With random weights, the activations are therefore close to random, and the amount of information does not increase with layer depth.
208
+
209
+ In the global case, in the activations from random RNNs, pooling across time has the effect of averaging out the vectors such that they are around zero which makes them uninformative for the global
210
+
211
+ ![](images/754925073be640d4985d801158671d2d0b147674cdf87746db87ae47860d0d63.jpg)
212
+ Figure 1: Results of diagnostic and RSA analytical methods applied to the Transformer-ASR model. The score is RER for the diagnostic methods and Pearson's $r$ for RSA.
213
+
214
+ ![](images/c9e58f169b804382356a8296dcf534b0fd8b8bb9465e76866f12b9e86b8082fa.jpg)
215
+ Figure 2: Results of diagnostic and RSA analytical methods applied to the RNN-VGS model. The score is RER for the diagnostic methods and Pearson's $r$ for RSA.
216
+
217
+ ![](images/58b84cfe42fe21c00f18ade69ff259463356d08e79d9593d64c4c02e6d285228.jpg)
218
+
219
+ ![](images/2e3531110f2db0852aea0fd4ea0849e1907b7cc78e338ba3cd7bc30b0cdb457d.jpg)
220
+
221
+ ![](images/8696db8c300e26908633a4034686656b40d54bd461bdd6a651f75cc37dcdf06c.jpg)
223
+
224
+ ![](images/79942923dc115e75a2199e6b9c4776527695a724ba3b724d8759a0e917c914a5.jpg)
225
+ Figure 3: Results of diagnostic and RSA analytical methods applied to the RNN-ASR model. The score is RER for the diagnostic methods and Pearson's $r$ for RSA.
226
+
227
+ ![](images/88cc62c8c9556d94b2a9d21f48ecdcb6b2686b3389e76fe308751c9ac3a6e3b1.jpg)
229
+
230
+ ![](images/2e2f084955426366ff1ae1412c78d09527507bb4cf3ead90cc19071cc3e0bbd7.jpg)
231
+
232
+ ![](images/b7dc3b9e95a41540f086fd24068735bff075c3914a37f8b855ae1e38592bc698.jpg)
233
+ Figure 4: Results of global RSA with mean pooling on the RNN-VGS model, while controlling for visual similarity. The score reported is the square root of the absolute value of the coefficient of partial determination $R_{\mathrm{partial}}^2$ .
234
+
235
+ classifier: this does not happen to trained RNN activations. Figure 5 illustrates this point by showing the standard deviations of vectors of mean-pooled activations of each utterance processed by the RNN-VGS model for the randomly initialized and trained conditions, for the recurrent layers. $^4$
236
+
237
+ ![](images/b1ac54189486666d8f412bca5c6fda344951d6c02848496f0fbc33f09fa78579.jpg)
238
+ Figure 5: Standard deviation of pooled activations of the RNN layers for the RNN-VGS model.
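The averaging-out effect behind Figure 5 is easy to verify numerically: mean pooling $T$ i.i.d. zero-mean frames shrinks the spread by a factor of roughly $\sqrt{T}$. A quick simulation with random stand-in activations (not the model's actual states):

```python
import numpy as np

rng = np.random.default_rng(0)
# 500 "utterances" of 100 frames each with i.i.d. zero-mean activations,
# mimicking the output of a randomly initialized (untrained) encoder.
acts = rng.normal(size=(500, 100, 32))
pooled = acts.mean(axis=1)
# Per-frame std is about 1.0; the pooled std is about 1/sqrt(100) = 0.1,
# i.e. the pooled vectors cluster tightly around zero and carry little
# information for a global classifier.
```

Trained encoders, by contrast, produce temporally correlated activations, so their pooled vectors do not collapse in this way.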
239
+
240
+ # 4.5 Summary of findings
241
+
242
+ Here we discuss the impact of each factor in the outcome of our analyses.
243
+
244
+ Choice of method. The choice of RSA versus diagnostic classifier interacts with scope, and thus these are better considered as a combination. Specifically, local RSA as implemented in this study shows only weak correlations between neural activations and phoneme labels. This is possibly related to the range restriction of point biserial correlation with unbalanced binary variables.
247
+
248
+ Impact of learning. Applied to the global representations, both analytical methods are equally sensitive to learning. The results on random vs. trained representations for both methods start to diverge noticeably from early recurrent layers. The separation for the local diagnostic classifiers is weaker for the RNN models.
249
+
250
+ Representation scope. Although the temporal scale of the extracted representations has not received much attention and scrutiny, our experimental findings suggest that it is an important choice. Specifically, global representations are more sensitive to learning, and more consistent across different analysis methods. Results with attention-based learned pooling are in general more erratic than with mean pooling. This reflects the fact that analytical models which incorporate learned pooling are more difficult to optimize and require more careful tuning compared to mean pooling.
251
+
252
+ # 4.6 Recommendations
253
+
254
+ Given the above findings, we now offer tentative recommendations on how to carry out representational analyses of neural models.
255
+
256
+ - Analyses on randomly initialized target models should be run as a baseline. Most scores on these models were substantially above zero, with some relatively close to the scores on trained models.
257
+ - It is unwise to rely on a single analytical approach, even a widely used one such as the local diagnostic classifier. With solely this method we would have concluded that, in RNN models, learning has only a weak effect on the encoding of phonology.
258
+ - Global methods applied to pooled representations should be considered as a complement to standard local diagnostic methods. In our experiments they show more consistent results.
259
+
260
+ # 5 Conclusion
261
+
262
+ In this systematic study of analysis methods for neural models of spoken language we offered some suggestions on best practices in this endeavor. Nevertheless, our work is only a first step, and several limitations remain. The main challenge is that it is often difficult to completely control for the many factors of variation in the target models, because a particular objective function, or even a dataset, may require substantial architectural modifications. In future work we will sample target models with a larger number of plausible combinations of factors. Likewise, the choice of an analytical method may entail changes in other aspects of the analysis: for example, unlike a global diagnostic classifier, global RSA captures the sequential order of phonemes. In future work we hope to further disentangle these differences.
265
+
266
+ # Acknowledgements
267
+
268
+ Bertrand Higy was supported by a NWO/E-Science Center grant number 027.018.G03.
269
+
270
+ # References
271
+
272
+ Mostafa Abdou, Artur Kulmizev, Felix Hill, Daniel M. Low, and Anders Søgaard. 2019. Higher-order comparisons of sentence encoder representations. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5842-5848, Hong Kong, China. Association for Computational Linguistics.
273
+ Samira Abnar, Lisa Beinborn, Rochelle Choenni, and Willem Zuidema. 2019. Blackbox meets blackbox: Representational similarity & stability analysis of neural language models and brains. In Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 191-203, Florence, Italy. Association for Computational Linguistics.
274
+ Yossi Adi, Einat Kermany, Yonatan Belinkov, Ofer Lavi, and Yoav Goldberg. 2017. Fine-grained analysis of sentence embeddings using auxiliary prediction tasks. International Conference on Learning Representations (ICLR).
275
+ Afra Alishahi, Marie Barking, and Grzegorz Chrupała. 2017. Encoding of phonology in a recurrent neural model of grounded speech. In Proceedings of the 21st Conference on Computational Natural Language Learning (CoNLL 2017), pages 368-378, Vancouver, Canada. Association for Computational Linguistics.
276
+ Afra Alishahi, Grzegorz Chrupała, and Tal Linzen. 2019. Analyzing and interpreting neural networks for NLP: A report on the first BlackboxNLP workshop. Natural Language Engineering.
277
+ Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural Machine Translation by Jointly Learning to Align and Translate. In Proc. of
278
+
279
+ the International Conference on Learning Representations (ICLR), San Diego, CA, USA. ArXiv: 1409.0473.
280
+ Yonatan Belinkov, Ahmed Ali, and James Glass. 2019. Analyzing phonetic and graphemic representations in end-to-end automatic speech recognition. In *Interspeech*.
281
+ Yonatan Belinkov and James Glass. 2017. Analyzing hidden representations in end-to-end automatic speech recognition systems. In Advances in Neural Information Processing Systems, pages 2441-2451.
282
+ Yonatan Belinkov and James Glass. 2019. Analysis methods in neural language processing: A survey. Transactions of the Association for Computational Linguistics, 7:49-72.
283
+ Diane Bouchacourt and Marco Baroni. 2018. How agents see things: On visual representations in an emergent language game. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 981-985, Brussels, Belgium. Association for Computational Linguistics.
284
+ Grzegorz Chrupała. 2019. Symbolic inductive bias for visually grounded learning of spoken language. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 6452-6462, Florence, Italy. Association for Computational Linguistics.
285
+ Grzegorz Chrupała and Afra Alishahi. 2019. Correlating neural and symbolic representations of language. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2952-2962, Florence, Italy. Association for Computational Linguistics.
286
+ Grzegorz Chrupała, Lieke Gelderloos, and Afra Alishahi. 2017. Representations of language in a model of visually grounded speech signal. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 613-622, Vancouver, Canada. Association for Computational Linguistics.
287
+ Alexis Conneau, German Kruszewski, Guillaume Lample, Loic Barrault, and Marco Baroni. 2018. What you can cram into a single \$&!#\* vector: Probing sentence embeddings for linguistic properties. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2126-2136, Melbourne, Australia. Association for Computational Linguistics.
288
+ David Harwath and James Glass. 2015. Deep multimodal semantic embeddings for speech and images. In 2015 IEEE Workshop on Automatic Speech Recognition and Understanding (ASRU), pages 237-244. IEEE.
289
+ David Harwath, Antonio Torralba, and James Glass. 2016. Unsupervised learning of spoken language with visual context. In Advances in Neural Information Processing Systems, pages 1858-1866.
290
+
291
+ Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770-778.
292
+ Dieuwke Hupkes, Sara Veldhoen, and Willem Zuidema. 2018. Visualisation and 'diagnostic classifiers' reveal how recurrent and recursive neural networks process hierarchical structure. Journal of Artificial Intelligence Research, 61:907-926.
293
+ Herbert Jaeger. 2001. The "echo state" approach to analysing and training recurrent neural networks with an erratum note. Bonn, Germany: German National Research Center for Information Technology GMD Technical Report, 148(34):13.
294
+ Diederik P Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In International Conference on Learning Representations (ICLR).
295
+ Nikolaus Kriegeskorte, Marieke Mur, and Peter A Bandettini. 2008. Representational similarity analysis connecting the branches of systems neuroscience. Frontiers in systems neuroscience, 2:4.
296
+ Andreas Krug, René Knaebel, and Sebastian Stober. 2018. Neuron activation profiles for interpreting convolutional speech recognition models. In NeurIPS Workshop on Interpretability and Robustness in Audio, Speech, and Language (IRASL).
297
+ Taku Kudo and John Richardson. 2018. SentencePiece: A simple and language independent subword tokenizer and tokenizer for neural text processing. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 66-71, Brussels, Belgium. Association for Computational Linguistics.
298
+ Wolfgang Maass, Thomas Natschlager, and Henry Markram. 2002. Real-time computing without stable states: A new framework for neural computation based on perturbations. *Neural computation*, 14(11):2531-2560.
299
+ Danny Merkx, Stefan L. Frank, and Mirjam Ernestus. 2019. Language Learning Using Speech to Image Retrieval. In Proc. Interspeech 2019, pages 1841-1845.
300
+ Tasha Nagamine, Michael L Seltzer, and Nima Mesgarani. 2015. Exploring how deep neural networks form phonemic categories. In *Interspeech*.
301
+ Vassil Panayotov, Guoguo Chen, Daniel Povey, and Sanjeev Khudanpur. 2015. Librispeech: An ASR corpus based on public domain audio books. In 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 5206-5210. ISSN: 2379-190X.
302
+ Daniel S. Park, William Chan, Yu Zhang, Chung-Cheng Chiu, Barret Zoph, Ekin D. Cubuk, and Quoc V. Le. 2019. SpecAugment: A Simple Data
303
+
304
+ Augmentation Method for Automatic Speech Recognition. In Proc. Interspeech 2019, pages 2613-2617.
305
+ Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. 2019. Pytorch: An imperative style, high-performance deep learning library. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems 32, pages 8024-8035. Curran Associates, Inc.
306
+ Leslie N Smith. 2017. Cyclical learning rates for training neural networks. In 2017 IEEE Winter Conference on Applications of Computer Vision (WACV), pages 464-472. IEEE.
307
+ Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is All you Need. In Advances in Neural Information Processing Systems (NIPS), pages 5998-6008. Curran Associates, Inc.
308
+ Shinji Watanabe, Takaaki Hori, Shigeki Karita, Tomoki Hayashi, Jiro Nishitoba, Yuya Unno, Nelson Enrique Yalta Soplin, Jahn Heymann, Matthew Wiesner, Nanxin Chen, Adithya Renduchintala, and Tsubasa Ochiai. 2018. ESPnet: End-to-end speech processing toolkit. In Interspeech, pages 2207-2211.
309
+ Shinji Watanabe, Takaaki Hori, S. Kim, J. R. Hershey, and T. Hayashi. 2017. Hybrid CTC/Attention Architecture for End-to-End Speech Recognition. IEEE Journal of Selected Topics in Signal Processing, 11(8):1240-1253.
310
+ Matthew D. Zeiler. 2012. ADADELTA: An Adaptive Learning Rate Method. arXiv:1212.5701 [cs]. ArXiv: 1212.5701.
311
+ Kelly W Zhang and Samuel R Bowman. 2018. Language modeling teaches you more syntax than translation does: Lessons learned through auxiliary task analysis. arXiv preprint arXiv:1809.10040.
analyzinganalyticalmethodsthecaseofphonologyinneuralmodelsofspokenlanguage/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:0e34c7a2f1312619395c36f9d7a483a7030c9313c4e750c6a39a3ec560788df3
3
+ size 208959
analyzinganalyticalmethodsthecaseofphonologyinneuralmodelsofspokenlanguage/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:ea93fbc1ac57c0a409e30fa2fcc5d998b7dc735c449b7df0f0ebf28829dd405f
3
+ size 308457
analyzingpoliticalparodyinsocialmedia/eb691718-565a-4bd8-9dfb-b05351d2667c_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:be744940330774e189e06a21fd68077b961146b30cc44654c8d0d3e11c78907f
3
+ size 84610
analyzingpoliticalparodyinsocialmedia/eb691718-565a-4bd8-9dfb-b05351d2667c_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:0539be0a2395094f219d190b9777054d3df9a9a56a81f70e7cdc76e4cfa7c00f
3
+ size 105587
analyzingpoliticalparodyinsocialmedia/eb691718-565a-4bd8-9dfb-b05351d2667c_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:6c5447d7a18e57a033f887264849e34f943dfb3e3adffdfdec2a567eff722f7d
3
+ size 430246
analyzingpoliticalparodyinsocialmedia/full.md ADDED
@@ -0,0 +1,323 @@
 
 
 
 
1
+ # Analyzing Political Parody in Social Media
2
+
3
+ Antonis Maronikolakis $^{1*}$ Danae Sánchez Villegas $^{2*}$ Daniel Preoțiuc-Pietro $^{3}$ Nikolaos Aletras $^{2}$
4
+
5
+ <sup>1</sup> Center for Information and Language Processing, LMU Munich, Germany
6
+
7
+ $^{2}$ Computer Science Department, University of Sheffield, UK $^{3}$ Bloomberg
8
+
9
+ antmarakis@cis.lmu.de,{dsanchezvillegas1, n.aletras}@sheffield.ac.uk
10
+
11
+ dpreotiucpie@bloomberg.net
12
+
13
+ # Abstract
14
+
15
+ Parody is a figurative device used to imitate an entity for comedic or critical purposes and represents a widespread phenomenon in social media through many popular parody accounts. In this paper, we present the first computational study of parody. We introduce a new publicly available data set of tweets from real politicians and their corresponding parody accounts. We run a battery of supervised machine learning models for automatically detecting parody tweets with an emphasis on robustness by testing on tweets from accounts unseen in training, across different genders and across countries. Our results show that political parody tweets can be predicted with an accuracy up to $90\%$ . Finally, we identify the markers of parody through a linguistic analysis. Beyond research in linguistics and political communication, accurately and automatically detecting parody is important to improving fact checking for journalists and analytics such as sentiment analysis through filtering out parodical utterances.<sup>1</sup>
16
+
17
+ # 1 Introduction
18
+
19
+ Parody is a figurative device which is used to imitate and ridicule a particular target (Rose, 1993) and has been studied in linguistics as a figurative trope distinct from irony and satire (Kreuz and Roberts, 1993; Rossen-Knill and Henry, 1997). Traditional forms of parody include editorial cartoons, sketches or articles pretending to have been authored by the parodied person.<sup>2</sup> A new form
20
+
21
+ of parody recently emerged in social media, and Twitter in particular, through accounts that impersonate public figures. Highfield (2016) defines parody accounts as acting as a known, real person, for obviously comedic purposes. There should be no risk of mistaking their tweets for their subject's actual views; these accounts play with stereotypes of these figures or juxtapose their public image with a very different, behind-closed-doors persona.
22
+
23
A very popular type of parody is political parody, which plays an important role in public speech by offering irreverent interpretations of political personas (Hariman, 2008). Table 1 shows examples of very popular (over 50k followers) and active (thousands of tweets sent) political parody accounts on Twitter. The sample tweets show how the style and topic of parody tweets are similar to those from the real accounts, which may pose challenges for automatic classification.

While closely related figurative devices such as irony and sarcasm have been extensively studied in computational linguistics (Wallace, 2015; Joshi et al., 2017), parody is yet to be explored using computational methods. In this paper, we aim to bridge this gap and conduct, for the first time, a systematic study of political parody as a figurative device in social media. To this end, we make the following contributions:

1. A novel classification task where we seek to automatically classify real and parody tweets. For this task, we create a new large-scale publicly available data set containing a total of 131,666 English tweets from 184 real and parody accounts of politicians from the US, UK and other countries (Section 3);
2. Experiments with feature- and neural-based machine learning models for parody detection, which achieve high predictive accuracy of up to $89.7\%$ F1. These are focused on the robustness of classification, with test data from (a) users, (b) genders and (c) locations unseen in training (Section 5);

3. Linguistic analysis of the markers of parody tweets and of the model errors (Section 6).

We argue that understanding the expression and use of parody in natural language and automatically identifying it are important for applications in computational social science and beyond. Parody tweets can often be misinterpreted as facts, even though Twitter only allows parody accounts if they are explicitly marked as parody<sup>3</sup> and the poster does not intend to mislead. For example, the Speaker of the US House of Representatives, Nancy Pelosi, falsely cited a Michael Flynn parody tweet;<sup>4</sup> and many users were fooled by a Donald Trump parody tweet about 'Dow Joans'.<sup>5</sup> Thus, accurate parody classification methods can be useful in downstream NLP applications such as automatic fact checking (Vlachos and Riedel, 2014) and rumour verification (Karmakharm et al., 2019), sentiment analysis (Pang et al., 2008) or nowcasting voting intention (Tumasjan et al., 2010; Lampos et al., 2013; Tsakalidis et al., 2018).

Beyond NLP, parody detection can be used in: (i) political communication, to study and understand the effects of political parody in public speech on a large scale (Hariman, 2008; Highfield, 2016); (ii) linguistics, to identify characteristics of figurative language (Rose, 1993; Kreuz and Roberts, 1993; Rossen-Knill and Henry, 1997); (iii) network science, to identify the adoption and diffusion mechanisms of parody (Vosoughi et al., 2018).

# 2 Related Work

Parody in Linguistics Parody is an artistic form and literary genre that dates back to Aristophanes in ancient Greece, who parodied argumentation styles in Frogs. Verbal parody has been studied in linguistics as a figurative trope distinct from irony and satire (Kreuz and Roberts, 1993; Rossen-Knill and Henry, 1997), and researchers have long debated its definition and theoretic distinctions from other types of humor (Grice et al., 1975; Sperber, 1984; Wilson, 2006; Dynel, 2014). In general, verbal parody involves a highly situated, intentional, and conventional speech act (Rossen-Knill and Henry, 1997) composed of both a negative evaluation and a form of pretense or echoic mention (Sperber, 1984; Wilson, 2006; Dynel, 2014) through which an entity is mimicked or imitated with the goal of criticizing it to a comedic effect. Thus, imitative composition for an amusing purpose is an inherent characteristic of parody (Franke, 1971). The parodist intentionally re-presents the object of the parody and flaunts this re-presentation (Rossen-Knill and Henry, 1997).

Parody on Social Media Parody is considered an integral part of Twitter (Vis, 2013), and previous studies of parody in social media have focused on analysing how these accounts contribute to topical discussions (Highfield, 2016) and on the relationship between identity, impersonation and authenticity (Page, 2014). Public relations studies have shown that parody accounts can impact organisations during crises and become a threat to their reputation (Wan et al., 2015).

Satire Most closely related to parody, satire has been tangentially studied as one of several prediction targets in NLP in the context of identifying disinformation (McHardy et al., 2019; de Morais et al., 2019). Rashkin et al. (2017) compare the language of real news with that of satire, hoaxes, and propaganda to identify linguistic features of unreliable text, and demonstrate how stylistic characteristics can help to decide a text's veracity. The study of parody is therefore relevant to this topic, as satire and parodies are classified by some as a type of disinformation with 'no intention to cause harm but has potential to fool' (Wardle and Derakhshan, 2018).

Irony and Sarcasm There is a rich body of work in NLP on identifying irony and sarcasm as a classification task (Wallace, 2015; Joshi et al., 2017). Van Hee et al. (2018) organized two open shared tasks: the first aims to automatically classify tweets as ironic or not, and the second is on identifying the type of irony expressed in tweets. However, the definition of irony is usually 'a trope whose actual meaning differs from what is literally enunciated' (Van Hee et al., 2018), following the Gricean belief that the hallmark of irony is to communicate the opposite of the literal meaning (Wilson, 2006), violating the first maxim of Quality (Grice et al., 1975). In this sense, irony is treated in NLP in a similar way to sarcasm (González-Ibanez et al., 2011; Khattri et al., 2015; Joshi et al., 2017). In addition to the words in the utterance, using the user and pragmatic context is known to be informative for irony or sarcasm detection (Bamman and Smith, 2015; Wallace, 2015). For instance, Oprea and Magdy (2019) make use of user embeddings for textual sarcasm detection. In the design of our data splits, we aim to limit the contribution of these aspects to the results.

<table><tr><td>Account type</td><td>Twitter Handle</td><td>Sample tweet</td></tr><tr><td>Real</td><td>@realDonaldTrump</td><td>The Republican Party, and me, had a GREAT day yesterday with respect to the phony Impeachment Hoax, &amp; yet, when I got home to the White House &amp; checked out the news coverage on much of television, you would have no idea they were reporting on the same event. FAKE &amp; CORRUPT NEWS!</td></tr><tr><td>Parody</td><td>@realDonaldTrFan</td><td>Lies! Kampala Harris says my crimes are committed in plane site! Shes lying! My crimes are ALWAYS hidden! ALWAYS!!</td></tr><tr><td>Real</td><td>@BorisJohnson</td><td>Our NHS will never be on the table for any trade negotiations. Were investing more than ever before - and when we leave the EU, we will introduce an Australian style, points-based immigration system so the NHS can plan for the future.</td></tr><tr><td>Parody</td><td>@BorisJohnson_MP</td><td>People seem to be ignoring the many advantages of selling off the NHS, like the fact that hospitals will be far more spacious once poor people can’t afford to use them.</td></tr></table>

Table 1: Two examples of Twitter accounts of politicians and their corresponding parody account with a sample tweet from each.

Relation to other NLP Tasks The pretense aspect of parody relates our task to a few other NLP tasks. In authorship attribution, the goal is to predict the author of a given text (Stamatatos, 2009; Juola et al., 2008; Koppel et al., 2009). However, there is no intent for the authors to imitate the style of others, and most differences between authors are in the topics they write about, which we aim to limit by focusing on political parody. Further, in our setups, no tweets from an author are in both training and testing, to limit the impact of terms specific to a particular person.

Pastiche detection (Dinu et al., 2012) aims to distinguish between an original text and a text written by someone aiming to imitate the style of the original author with the goal of impersonating them. Most similar in experimental setup to our task, Preotiuc-Pietro and Devlin Marier (2019) aim to distinguish between tweets published from the same account by different types of users: politicians or their staff. While both pastiches and staff writers aim to present similar content with similar style to the original authors, these texts lack the humorous component specific to parodies.

A large body of related NLP work has explored the inference of user characteristics. Past research studied predicting the type of a Twitter account, most frequently between individual or organizational, using linguistic features (De Choudhury et al., 2012; McCorriston et al., 2015; Mac Kim et al., 2017). A broad literature has been devoted to predicting personal traits from language use on Twitter, such as gender (Burger et al., 2011), age (Nguyen et al., 2011), geolocation (Cheng et al., 2010), political preference (Volkova et al., 2014; Preotiuc-Pietro et al., 2017), income (Preotiuc-Pietro et al., 2015; Aletras and Chamberlain, 2018), impact (Lampos et al., 2014), socio-economic status (Lampos et al., 2016), race (Preotiuc-Pietro and Ungar, 2018) or personality (Schwartz et al., 2013; Preotiuc-Pietro et al., 2016).

# 3 Task & Data

We define parody detection in social media as a binary classification task performed at the social media post level. Given a post $T$, defined as a sequence of tokens $T = \{t_1, \dots, t_n\}$, the aim is to label $T$ either as parody or genuine. Note that one could use social network information, but this is out of the paper's scope as we only focus on parody as a linguistic device.

We create a new publicly available data set to study this task, as no other data set is available. We perform our analysis on a set of users from the same domain (politics) to limit variation caused by topic. We first identify real and parody accounts of politicians on Twitter posting in English from the United States of America (US) and the United Kingdom (UK), together with other accounts posting in English from the rest of the world. We opted to use Twitter because it is arguably the most popular platform for politicians to interact with the public or with other politicians (Parmelee and Bichard, 2011). For example, $67\%$ of prospective parliamentary candidates for the 2019 UK general election have an active Twitter account.<sup>6</sup> Twitter also allows users to maintain parody accounts, subject to adding explicit markers such as parody or fake in both the user bio and handle.<sup>7</sup> Finally, we label tweets as parody or real depending on the type of account they were posted from. We highlight that we do not use the user description or handle name in prediction, as this would make the task trivial.

# 3.1 Collecting Real and Parody Politician Accounts

We first query the public Twitter API using the following terms: {parody, #parody, parody account, fake, #fake, fake account, not real} to retrieve candidate parody accounts that follow Twitter's parody policy. From that set, we exclude any accounts matching fan or commentary in their bio or account name, since these are unlikely to post parody content. We also exclude private and deactivated accounts, as well as accounts with a majority of non-English tweets.

After collecting this initial set of parody candidates, the authors of the paper manually inspected up to the first ten original tweets from each candidate to identify whether an account is a parody or not, following the definition of a public figure parody account from Highfield (2016) (see Section 1), further filtering out non-parody accounts. We keep a single parody account in cases where multiple parody accounts exist for the same person. Finally, for each remaining account, the authors manually identified the corresponding real politician account, to collect pairs of real and parody accounts.

Following the process above, we were able to identify parody accounts of 103 unique people, with 81 having a corresponding real account. The authors also identified the binary gender and location (country) of the accounts using publicly available records. This resulted in $21.6\%$ female accounts (for comparison, women parliamentarian percentages as of 2017: $19\%$ US, $30\%$ UK, $28.8\%$ OECD average).<sup>8</sup>

<table><tr><td colspan="6">Person</td></tr><tr><td></td><td>Train</td><td>Dev</td><td>Test</td><td>Total</td><td>Avg. tokens (Train)</td></tr><tr><td>Real</td><td>51,460</td><td>6,164</td><td>8,086</td><td>65,710</td><td>23.33</td></tr><tr><td>Parody</td><td>51,706</td><td>6,164</td><td>8,086</td><td>65,956</td><td>20.15</td></tr><tr><td>All</td><td>103,166</td><td>12,328</td><td>16,172</td><td>131,666</td><td>22.55</td></tr></table>

Table 2: Data set statistics with the person split.

The majority of the politicians are located in the US $(44.5\%)$, followed by the UK $(26.7\%)$, while $28.8\%$ are from the rest of the world (e.g. Germany, Canada, India, Russia).

# 3.2 Collecting Real and Parody Tweets

We collect all of the available original tweets, excluding retweets and quoted tweets, from all the parody and real politician accounts. We further balance the number of tweets in each real - parody account pair so that our experiments and linguistic analysis are not driven by a few prolific users or by imbalances in the tweet ratio for a specific pair. We keep a ratio of at most $\pm 20\%$ between the real and parody tweets per pair by keeping all tweets from the less prolific account and randomly down-sampling from the more prolific one. Subsequently, for the parody accounts with no corresponding real account, we sample a number of tweets equal to the median number of tweets over the real accounts. Finally, we label tweets as parody or real depending on the type of account they come from. In total, the data set contains 131,666 tweets, with 65,710 real and 65,956 parody.

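The pair-level balancing step above can be sketched as follows; `balance_pair` and its interface are our own illustration, not code from the paper:

```python
import random

def balance_pair(real_tweets, parody_tweets, max_ratio=1.2, seed=0):
    """Keep all tweets from the less prolific account of a real-parody
    pair and randomly down-sample the other, so that the two counts
    stay within a +/-20% ratio (illustrative sketch only)."""
    rng = random.Random(seed)
    small, large = sorted([real_tweets, parody_tweets], key=len)
    cap = int(len(small) * max_ratio)
    if len(large) > cap:
        large = rng.sample(large, cap)
    # Return the pair in its original (real, parody) order.
    if len(real_tweets) <= len(parody_tweets):
        return small, large
    return large, small
```

For parody accounts with no real counterpart, the paper instead samples the median number of tweets of the real accounts, which this sketch does not cover.
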
# 3.3 Data Splits

To test that automatically predicting political parody is robust and generalizes to held-out situations not included in the training data, we create the following three data splits for running experiments:

Person Split We first split the data by adding all tweets from each real - parody account pair to a single split, either train, development or test. To obtain a fairly balanced data set in which no split is dominated by pairs of accounts with a large number of tweets, we compute the mean number of real and parody tweets for each pair and stratify the pairs so that these means are proportionally distributed across the train, development and test sets (see Table 2).

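One way to realise this stratification is a greedy largest-first assignment of whole pairs; the paper does not spell out its exact procedure, so the heuristic below is purely illustrative:

```python
def stratified_person_split(pair_sizes, fractions=(0.8, 0.1, 0.1)):
    """Assign whole real-parody account pairs to train/dev/test so that
    each split's share of tweets roughly matches `fractions` and no
    person appears in two splits. Greedy largest-first heuristic
    (our own sketch, not the paper's exact procedure)."""
    total = sum(pair_sizes.values())
    targets = [f * total for f in fractions]
    loads = [0.0, 0.0, 0.0]
    assignment = {}
    for person, size in sorted(pair_sizes.items(), key=lambda kv: -kv[1]):
        # Put the pair where the deficit (target minus current load) is largest.
        i = max(range(3), key=lambda j: targets[j] - loads[j])
        loads[i] += size
        assignment[person] = ("train", "dev", "test")[i]
    return assignment
```

Assigning whole pairs (rather than individual tweets) is what guarantees that no person appears in both training and test data.
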
<table><tr><td colspan="5">Gender</td></tr><tr><td>Trained on</td><td></td><td>Real</td><td>Parody</td><td>Total</td></tr><tr><td rowspan="3">Female</td><td>Train</td><td>10,081</td><td>11,036</td><td>21,117</td></tr><tr><td>Dev</td><td>302</td><td>230</td><td>532</td></tr><tr><td>Test (Male)</td><td>55,327</td><td>54,690</td><td>110,017</td></tr><tr><td rowspan="3">Male</td><td>Train</td><td>51,048</td><td>50,184</td><td>101,232</td></tr><tr><td>Dev</td><td>4,279</td><td>4,506</td><td>8,785</td></tr><tr><td>Test (Female)</td><td>10,383</td><td>11,266</td><td>21,649</td></tr></table>

Table 3: Data set statistics with the gender split (Male, Female).

<table><tr><td colspan="5">Location</td></tr><tr><td>Trained on</td><td></td><td>Real</td><td>Parody</td><td>Total</td></tr><tr><td rowspan="3">US &amp; RoW</td><td>Train</td><td>47,018</td><td>45,005</td><td>92,023</td></tr><tr><td>Dev</td><td>1,030</td><td>2,190</td><td>3,220</td></tr><tr><td>Test (UK)</td><td>17,662</td><td>18,761</td><td>36,423</td></tr><tr><td rowspan="3">UK &amp; RoW</td><td>Train</td><td>33,687</td><td>35,371</td><td>69,058</td></tr><tr><td>Dev</td><td>1,030</td><td>1,274</td><td>2,304</td></tr><tr><td>Test (US)</td><td>30,993</td><td>29,311</td><td>60,304</td></tr><tr><td rowspan="3">US &amp; UK</td><td>Train</td><td>43,211</td><td>42,597</td><td>85,808</td></tr><tr><td>Dev</td><td>5,444</td><td>5,475</td><td>10,919</td></tr><tr><td>Test (RoW)</td><td>17,055</td><td>17,884</td><td>34,939</td></tr></table>

Table 4: Data set statistics with the location split (US, UK, Rest of the World–RoW).

Gender Split We also split the data by the gender of the politicians into training, development and test, obtaining two versions of the data: (i) one with female accounts in train/dev and male in test; and (ii) one with male accounts in train/dev and female in test (see Table 3).

Location Split Finally, we split the data based on the location of the politicians. We group the accounts into three location groups: US, UK and the rest of the world (RoW). We obtain three different splits, where each group in turn makes up the test set and the other two groups make up the train and development sets (see Table 4).

# 3.4 Text Preprocessing

We preprocess text by lower-casing, replacing all URLs and anonymizing all mentions of usernames with a placeholder token. We preserve emoticons and punctuation marks and replace tokens that appear in fewer than five tweets with a special 'unknown' token. We tokenize text using DLATK (Schwartz et al., 2017), a Twitter-aware tokenizer.

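The preprocessing pipeline can be sketched as follows; a plain whitespace split stands in for the DLATK tokenizer, and the function names are our own:

```python
import re
from collections import Counter

URL_RE = re.compile(r"https?://\S+")
MENTION_RE = re.compile(r"@\w+")

def normalise(tweet):
    """Lower-case and replace URLs / @-mentions with placeholder tokens.
    Whitespace splitting stands in for the Twitter-aware tokenizer."""
    tweet = tweet.lower()
    tweet = URL_RE.sub("<url>", tweet)
    tweet = MENTION_RE.sub("<mention>", tweet)
    return tweet.split()

def replace_rare(tokenised_tweets, min_tweets=5):
    """Map tokens appearing in fewer than `min_tweets` tweets to '<unk>'."""
    df = Counter(t for toks in tokenised_tweets for t in set(toks))
    return [[t if df[t] >= min_tweets else "<unk>" for t in toks]
            for toks in tokenised_tweets]
```

Counting document frequency over `set(toks)` ensures a token repeated within one tweet is still counted once per tweet, matching the "appears in fewer than five tweets" criterion.
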
# 4 Predictive Models

We experiment with a series of approaches to classifying parody tweets, ranging from linear models to neural network architectures and pre-trained contextual embedding models. Hyperparameter selection is detailed in Section 4.6.

# 4.1 Linear Baselines

LR-BOW As a first baseline, we use logistic regression with a standard bag-of-words representation of the tweets (LR-BOW).

LR-BOW+POS We extend LR-BOW using syntactic information from Part-Of-Speech (POS) tags. We first tag all tweets in our data using the NLTK tagger and then extract bag-of-words features where each unigram consists of a token with its associated POS tag.

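A dependency-free sketch of the TF-IDF weighting behind these baselines (unigrams only; the actual models also use higher-order n-grams and feed the weights into an L2-regularised logistic regression):

```python
import math
from collections import Counter

def tfidf_vectors(tokenised_tweets):
    """Minimal TF-IDF bag-of-words, illustrating the weighting used by
    the LR-BOW baseline (unigram-only sketch, our own implementation)."""
    n = len(tokenised_tweets)
    # Document frequency: number of tweets each token appears in.
    df = Counter(t for toks in tokenised_tweets for t in set(toks))
    idf = {t: math.log(n / df[t]) for t in df}
    vectors = []
    for toks in tokenised_tweets:
        tf = Counter(toks)
        vectors.append({t: tf[t] * idf[t] for t in tf})
    return vectors
```

Tokens that occur in every tweet get an IDF of zero and therefore carry no weight, which is the point of the scheme: it discounts ubiquitous words in favour of discriminative ones.
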
# 4.2 BiLSTM-Att

The first neural model is a bidirectional Long Short-Term Memory (LSTM) network (Hochreiter and Schmidhuber, 1997) with a self-attention mechanism (BiLSTM-Att; Zhou et al. (2016)). Tokens $t_i$ in a given tweet $T = \{t_1, \dots, t_n\}$ are mapped to embeddings and passed through a bidirectional LSTM. A single tweet representation $h$ is computed as the weighted sum of the resulting contextualized vector representations, $h = \sum_{i} a_i h_i$, where $a_i$ is the self-attention score at timestep $i$. The tweet representation $h$ is subsequently passed to the output layer, which uses a sigmoid activation function.

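The pooling step can be written out directly: softmax the per-timestep scores, then take the weighted sum of hidden states. A pure-Python sketch (in the actual model, both the hidden states and the scores come from learned layers):

```python
import math

def attention_pool(hidden_states, scores):
    """Self-attention pooling: softmax the per-timestep scores a_i,
    then return h = sum_i a_i * h_i together with the weights.
    (Sketch only; real scores come from a learned attention layer.)"""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]  # numerically stable softmax
    z = sum(exps)
    weights = [e / z for e in exps]
    dim = len(hidden_states[0])
    pooled = [sum(w * h[d] for w, h in zip(weights, hidden_states))
              for d in range(dim)]
    return pooled, weights
```

With uniform scores this reduces to mean pooling; learned scores let the model up-weight the tokens most indicative of parody.
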
# 4.3 ULMFit

Universal Language Model Fine-tuning (ULMFit) is a method for efficient transfer learning (Howard and Ruder, 2018). The key intuition is to train a text encoder on a language modelling task (i.e. predicting the next token in a sequence), where data is abundant, and then fine-tune it on a target task where data is more limited. During fine-tuning, ULMFit uses gradual layer unfreezing to avoid catastrophic forgetting. We use AWD-LSTM (Merity et al., 2018) as the base text encoder, pre-trained on the Wikitext 103 data set, and fine-tune it on our parody classification task. For this purpose, after the AWD-LSTM layers, we add a fully-connected layer with a ReLU activation function followed by an output layer with a sigmoid activation function. Before each of these two additional layers, we perform batch normalization.

# 4.4 BERT and RoBERTa

Bidirectional Encoder Representations from Transformers (BERT) is a language model based on transformer networks (Vaswani et al., 2017) pre-trained on large corpora (Devlin et al., 2019). The model makes use of multiple multi-head attention layers to learn bidirectional embeddings for input tokens. It is trained for masked language modelling, where a fraction of the input tokens in a given sequence are masked and the task is to predict each masked word given its context. BERT operates on wordpieces, which are passed through an embedding layer and summed together with positional and segment embeddings. The former introduce positional information to the attention layers, while the latter indicate which segment a token belongs to. Similar to ULMFit, we fine-tune the BERT-base model for predicting parody tweets by adding an output dense layer for binary classification, fed with the representation of the 'classification' ([CLS]) token.

We further experiment with RoBERTa (Liu et al., 2019), an extension of BERT trained on more data and with different hyperparameters. RoBERTa has been shown to improve performance on various benchmarks compared to the original BERT (Liu et al., 2019).

# 4.5 XLNet

XLNet is another pre-trained neural language model based on transformer networks (Yang et al., 2019). XLNet is similar to BERT in its structure, but is trained on a permutation (instead of masked) language modelling task: during training, the words of a sentence are permuted and the model predicts a word given the shuffled context. We also adapt XLNet for predicting parody, in the same way as BERT and ULMFit.

# 4.6 Model Hyperparameters

We select all model hyperparameters on the development set of each data split (see Section 3).

Linear models For the LR-BOW, we use n-grams with $n = (1,2)$, selected from $n \in \{(1,1),(1,2),(1,3)\}$, weighted by TF-IDF. For the LR-BOW+POS, we use TF weighting with POS n-grams where $n = (1,3)$. For both baselines we use L2 regularization.

BiLSTM-Att We use 200-dimensional GloVe embeddings (Pennington et al., 2014) pre-trained on Twitter data. The maximum sequence length is set to 50, covering $95\%$ of the tweets in the training set. The LSTM size is $h = 300$, selected from $h \in \{50, 100, 300\}$, with dropout $d = 0.5$, selected from $d \in \{0.2, 0.5\}$. We use Adam (Kingma and Ba, 2014) with the default learning rate, minimizing the binary cross-entropy using a batch size of 64 over 10 epochs with early stopping.

ULMFit We first update only the AWD-LSTM weights with a learning rate $l = 2\mathrm{e}{-3}$ for one epoch, where $l \in \{1\mathrm{e}{-3}, 2\mathrm{e}{-3}, 4\mathrm{e}{-3}\}$, for language modelling. Then, we update both the AWD-LSTM and embedding weights for one more epoch, using a learning rate of $l = 2\mathrm{e}{-5}$, where $l \in \{1\mathrm{e}{-4}, 2\mathrm{e}{-5}, 5\mathrm{e}{-5}\}$. The size of the intermediate fully-connected layer (after the AWD-LSTM and before the output) is set to the default of 50. In the intermediate and output layers we use the default dropout of 0.08 and 0.1 respectively, from Howard and Ruder (2018).

BERT and RoBERTa For BERT, we use the base model (12 layers and 110M total parameters) trained on lowercase English. We fine-tune it for 1 epoch with a learning rate $l = 5\mathrm{e}{-5}$, where $l \in \{2\mathrm{e}{-5}, 3\mathrm{e}{-5}, 5\mathrm{e}{-5}\}$, as recommended in Devlin et al. (2019), with a batch size of 128. For RoBERTa, we use the same fine-tuning parameters as for BERT.

XLNet We use the same parameters as BERT except for the learning rate, which we set to $l = 4\mathrm{e}{-5}$, where $l \in \{2\mathrm{e}{-5}, 4\mathrm{e}{-5}, 5\mathrm{e}{-5}\}$.

# 5 Results

This section contains the experimental results obtained on all three data splits proposed in Section 3. We evaluate the methods from Section 4 using several metrics: accuracy, precision, recall, macro F1 score, and area under the ROC curve (AUC). We run each model three times with different random seeds and report the average and standard deviation.

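For reference, the per-class metrics reduce to simple counts; the sketch below scores the parody (positive) class, whereas the paper reports macro-averaged F1 over both classes plus ROC-AUC:

```python
def binary_metrics(y_true, y_pred):
    """Accuracy, precision, recall and F1 for the positive (parody)
    class, from binary labels. Illustrative sketch only."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    acc = sum(1 for t, p in zip(y_true, y_pred) if t == p) / len(y_true)
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return acc, prec, rec, f1
```

Macro F1 is then simply the mean of the per-class F1 scores for the parody and real classes.
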
# 5.1 Person Split

Table 5 presents the results for the parody prediction models with the data split by person. We observe that the architectures using pre-trained text encoders (i.e. ULMFit, BERT, RoBERTa and XLNet) outperform both the neural (BiLSTM-Att) and feature-based (LR-BOW and LR-BOW+POS) models by a large margin across metrics, with the transformer architectures (BERT, RoBERTa and XLNet) performing best. The highest scoring model, RoBERTa, classifies tweets (parody and real) with an accuracy of 90.01, more than 8 points higher than the best non-transformer model (ULMFit). RoBERTa also outperforms the logistic regression baselines (LR-BOW and LR-BOW+POS) by more than 16 in accuracy and 13 in F1 score. Furthermore, it is the only model to score higher than 90 on precision.

<table><tr><td colspan="6">Person</td></tr><tr><td>Model</td><td>Acc</td><td>P</td><td>R</td><td>F1</td><td>AUC</td></tr><tr><td>LR-BOW</td><td>73.95 ±0.00</td><td>70.08 ± 0.01</td><td>83.53 ±0.02</td><td>76.19 ±0.00</td><td>73.96 ±0.00</td></tr><tr><td>LR-BOW+POS</td><td>74.33 ±0.00</td><td>71.34 ±0.00</td><td>81.19 ±0.00</td><td>75.95 ±0.00</td><td>74.34 ±0.00</td></tr><tr><td>BiLSTM-Att</td><td>79.92 ±0.01</td><td>81.63 ±0.01</td><td>77.11 ±0.03</td><td>79.29 ±0.02</td><td>79.91 ±0.01</td></tr><tr><td>ULMFit</td><td>81.11 ±0.38</td><td>75.57 ±2.03</td><td>84.97 ±0.87</td><td>81.05 ±0.42</td><td>81.10 ±0.38</td></tr><tr><td>BERT</td><td>87.65 ±0.29</td><td>87.63 ±0.58</td><td>87.67 ±0.40</td><td>87.65 ±0.18</td><td>87.65 ±0.32</td></tr><tr><td>RoBERTa</td><td>90.01 ±0.35</td><td>90.90 ±0.55</td><td>88.45 ±0.22</td><td>89.66 ±0.33</td><td>90.05 ±0.29</td></tr><tr><td>XLNet</td><td>86.45 ±0.41</td><td>88.24 ±0.52</td><td>85.18 ±0.40</td><td>86.68 ±0.37</td><td>86.45 ±0.36</td></tr></table>

Table 5: Accuracy (Acc), Precision (P), Recall (R), F1-Score (F1) and ROC-AUC for parody prediction splitting by person (± std. dev.). Best results are in bold.

# 5.2 Gender Split

Table 6 shows the F1-scores obtained when training on the gender splits, i.e. training on male and testing on female accounts, and vice versa. We first observe that models trained on the male set are in general more accurate than models trained on the female set, with the sole exception of ULMFit. This is probably due to the fact that the data set is imbalanced towards men, as shown in Table 3 (see also Section 3). We also do not observe a dramatic performance drop compared to the mixed-gender model on the person split (see Table 5). Again, RoBERTa is the most accurate model on both splits, obtaining an F1-score of 87.11 and 84.87 for the male and female training data respectively. The transformer-based architectures are again the best performing models overall, but the difference between them and the feature-based methods is smaller than on the person split.

<table><tr><td colspan="3">Gender</td></tr><tr><td>Model</td><td>M→F</td><td>F→M</td></tr><tr><td>LR-BOW</td><td>78.89</td><td>76.63</td></tr><tr><td>LR-BOW+POS</td><td>78.74</td><td>76.74</td></tr><tr><td>BiLSTM-Att</td><td>77.00</td><td>77.11</td></tr><tr><td>ULMFit</td><td>81.20</td><td>82.53</td></tr><tr><td>BERT</td><td>85.85</td><td>84.40</td></tr><tr><td>RoBERTa</td><td>87.11</td><td>84.87</td></tr><tr><td>XLNet</td><td>85.69</td><td>84.16</td></tr></table>

Table 6: F1-scores for parody prediction splitting by gender (Male-M, Female-F; trained → tested). Best results are in bold.

# 5.3 Location Split

Table 7 shows the F1-scores obtained training our models on the location splits: (i) train/dev on UK and RoW, test on US; (ii) train/dev on US and RoW, test on UK; and (iii) train/dev on US and UK, test on RoW. In general, the best results are obtained by training on the US & UK split, while the results of the models trained on the RoW & US and RoW & UK splits are similar. The model with the best performance trained on the US & UK and RoW & UK splits is RoBERTa, with F1 scores of 87.70 and 85.99 respectively. XLNet performs slightly better than RoBERTa when trained on the RoW & US data split.

<table><tr><td colspan="4">Location</td></tr><tr><td>Model</td><td>US &amp; UK → RoW</td><td>US &amp; RoW → UK</td><td>UK &amp; RoW → US</td></tr><tr><td>LR-BOW</td><td>78.58</td><td>78.27</td><td>77.97</td></tr><tr><td>LR-BOW+POS</td><td>78.27</td><td>77.88</td><td>78.08</td></tr><tr><td>BiLSTM-Att</td><td>80.29</td><td>77.59</td><td>73.19</td></tr><tr><td>ULMFit</td><td>83.47</td><td>81.55</td><td>81.55</td></tr><tr><td>BERT</td><td>86.69</td><td>83.78</td><td>83.12</td></tr><tr><td>RoBERTa</td><td>87.70</td><td>85.10</td><td>85.99</td></tr><tr><td>XLNet</td><td>85.32</td><td>85.17</td><td>85.32</td></tr></table>

Table 7: F1-scores for parody prediction splitting by location (training groups → test group). Best results are in bold.

# 5.4 Discussion

Through experiments over three different data splits, we show that all models predict parody tweets consistently above random, even when tested on people unseen in training. In general, we observe that the pre-trained contextual embedding models perform best, on average around 10 F1 better than the linear methods. Among these, RoBERTa outperforms the other methods by a small but consistent margin, similar to past research (Liu et al., 2019). Further, we see that the predictions are robust to location- or gender-specific differences, as the performance on held-out locations and genders is close to that of the person split, with a maximum drop of less than 5 F1, partly attributable to training on less data (e.g. female users). This highlights that our models capture information beyond topics or features specific to any person, gender or location, and can potentially identify stylistic differences between parody and real tweets.

# 6 Analysis

We finally perform an analysis based on our novel data set to uncover the peculiarities of political parody and to understand the limits of the predictive models.

# 6.1 Linguistic Feature Analysis

We first analyze the linguistic features specific to real and parody tweets. For this purpose, we use the method introduced by Schwartz et al. (2013) and used in several other analyses of user traits (Preotiuc-Pietro et al., 2017) and speech acts (Preotiuc-Pietro et al., 2019). We rank the feature sets described in Section 4 using univariate Pearson correlation (note that for the analysis we use POS tags instead of POS n-grams). Features are normalized to sum to one for each tweet. Then, for each feature, we compute the correlation independently between its distribution across posts and the label of the post (parody or not).

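The ranking step reduces to computing Pearson's $r$ between each feature's normalised per-tweet frequency and the binary parody label; a minimal sketch:

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length
    sequences, e.g. a feature's normalised frequency per tweet (x)
    and the binary parody label (y). Illustrative sketch."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / math.sqrt(vx * vy)
```

Ranking features by $|r|$ against the 0/1 label then yields ordered lists like those in Table 8, one per class depending on the sign of $r$.
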
+ Table 8 presents the top unigrams and part-of-speech features correlated with real and parody tweets. We first note that the top features related to either parody or genuine tweets are function words or related to style, as opposed to the topic. This enforces that the make-up of the data set or any of its categories are not impacted by topic choice and parody detection is mostly a stylistic difference. The only exception are a few hashtags related to parody accounts (e.g. #imwithme), but on a closer inspection, all of these are related to tweets from a single parody account and are thus not useful in prediction by any setup, as tweets containing these
203
+
204
+ <table><tr><td colspan="2">Real</td><td colspan="2">Parody</td></tr><tr><td>Feature</td><td>r</td><td>Feature</td><td>r</td></tr><tr><td colspan="4">Unigrams</td></tr><tr><td>our</td><td>0.140</td><td>i</td><td>0.181</td></tr><tr><td>in</td><td>0.131</td><td>?</td><td>0.156</td></tr><tr><td>and</td><td>0.129</td><td>&lt;mention&gt;</td><td>0.145</td></tr><tr><td>:</td><td>0.118</td><td>me</td><td>0.136</td></tr><tr><td>&amp;</td><td>0.114</td><td>not</td><td>0.106</td></tr><tr><td>today</td><td>0.105</td><td>like</td><td>0.097</td></tr><tr><td>to</td><td>0.105</td><td>my</td><td>0.095</td></tr><tr><td>of</td><td>0.098</td><td>dude</td><td>0.094</td></tr><tr><td>the</td><td>0.091</td><td>don’t</td><td>0.090</td></tr><tr><td>at</td><td>0.087</td><td>i’m</td><td>0.087</td></tr><tr><td>lhl</td><td>0.086</td><td>just</td><td>0.083</td></tr><tr><td>great</td><td>0.085</td><td>know</td><td>0.081</td></tr><tr><td>with</td><td>0.084</td><td>#feeltheburp</td><td>0.078</td></tr><tr><td>de</td><td>0.079</td><td>you</td><td>0.076</td></tr><tr><td>meeting</td><td>0.078</td><td>#callmedick</td><td>0.075</td></tr><tr><td>for</td><td>0.077</td><td>#imwithme</td><td>0.073</td></tr><tr><td>across</td><td>0.073</td><td>”</td><td>0.073</td></tr><tr><td>families</td><td>0.073</td><td>#visionzero</td><td>0.069</td></tr><tr><td>on</td><td>0.070</td><td>if</td><td>0.069</td></tr><tr><td>country</td><td>0.067</td><td>have</td><td>0.067</td></tr><tr><td colspan="4">POS (Unigrams and Bigrams)</td></tr><tr><td>NN IN</td><td>0.1600</td><td>RB</td><td>0.1749</td></tr><tr><td>IN</td><td>0.1507</td><td>PRP</td><td>0.1546</td></tr><tr><td>CC</td><td>0.1309</td><td>RB VB</td><td>0.1271</td></tr><tr><td>IN JJ</td><td>0.1210</td><td>VBP</td><td>0.1206</td></tr><tr><td>NNS IN</td><td>0.1165</td><td>VBP RB</td><td>0.1123</td></tr><tr><td>NN CC</td><td>0.1114</td><td>.</td><td>0.1114</td></tr><tr><td>IN NN</td><td>0.1048</td><td>NNP NNP</td><td>0.1094</td></tr><tr><td>NN TO</td><td>0.1030</td><td>NN NNP</td><td>0.1057</td></tr><tr><td>NNS TO</td><td>0.1013</td><td>WRB</td><td>0.0925</td></tr><tr><td>TO</td><td>0.1001</td><td>VBP PRP</td><td>0.0904</td></tr><tr><td>CC JJ</td><td>0.0972</td><td>IN PRP</td><td>0.0890</td></tr><tr><td>IN DT</td><td>0.0941</td><td>NN VBP</td><td>0.0863</td></tr><tr><td>: JJ</td><td>0.0875</td><td>RB .</td><td>0.0854</td></tr><tr><td>NNS</td><td>0.0855</td><td>NNP</td><td>0.0837</td></tr><tr><td>: NN</td><td>0.0827</td><td>JJ VBP</td><td>0.0813</td></tr></table>
205
+
206
+ Table 8: Feature correlations with parody and real tweets, sorted by Pearson correlation (r). All correlations are significant at $p < .01$ , two-tailed t-test.
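Correlations of this kind can be reproduced with a few lines of code. The snippet below is a minimal sketch (not the authors' actual pipeline): it correlates the presence of a unigram with the binary parody label over a tiny hypothetical corpus.

```python
import math

# Tiny hypothetical corpus: label 1 = parody account, 0 = real account.
tweets = [("i don't know dude", 1),
          ("great meeting with families today", 0),
          ("me and my dude", 1),
          ("our country and families today", 0)]

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def word_label_correlation(word):
    # Binary presence of the unigram in each tweet vs. the parody label.
    presence = [1 if word in text.split() else 0 for text, _ in tweets]
    labels = [label for _, label in tweets]
    return pearson(presence, labels)

# In this toy data 'dude' occurs only in parody tweets (positive r for the
# parody class), while 'our' occurs only in a real tweet (negative r).
```

A positive coefficient for a word against the parody label corresponds to the right-hand columns of Table 8; the significance test reported in the caption would additionally require a p-value, which is omitted here.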
207
+
208
+ will only appear in either the train or test set.
209
+
210
+ The top features related to either category of tweets are pronouns ('our' for genuine tweets, 'i' for parody tweets). In general, we observe that parody tweets include more first-person pronouns and possessives ('i', 'me', 'my', "i'm", PRP) as well as second-person pronouns ('you'). This indicates that parodies are more personal and direct, which is
211
+
212
+ also supported by the use of more @-mentions and quotation marks. The real politicians' tweets are more impersonal, and the use of 'our' indicates a desire to include the reader in the conversation.
213
+
214
+ The real politicians' tweets include more stopwords (e.g. prepositions, conjunctions, determiners), indicating that these tweets are better formed. Conversely, the parody tweets include more contractions ("don't", "i'm"), hinting at a less formal style ('dude'). Politicians frequently use their accounts to promote events they participate in or that are relevant to their day-to-day schedule, as hinted at by several prepositions ('at', 'on') and words ('meeting', 'today') (Preotiuc-Pietro and Devlin Marier, 2019). For example, this is a tweet by the U.S. Senator from Connecticut, Chris Murphy:
215
+
216
+ Rudy Giuliani is in Ukraine today, meeting with Ukrainian leaders on behalf of the President of the United States, representing the President's re-election campaign.[...]
217
+
218
+ Through part-of-speech patterns, we observe that parody accounts are more likely to use present-tense verbs (VBZ, VBP). This hints that parody tweets explicitly try to mimic direct quotes from the parodied politician, written in the first person with present-tense verbs, while actual politician tweets are more impersonal. Adverbs (RB) are used predominantly in parodies, and a common sequence in parody tweets is an adverb followed by a verb (RB VB), which can be used to emphasize actions or relevant events. For example, the following is a tweet by a parody account (@Queen_Europe) of Angela Merkel:
219
+
220
+ I mean, the Brexit Express literally appears to be going backwards but OK <url>
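The POS features in Table 8 are unigrams and bigrams over Penn Treebank tags. A minimal sketch of how such features can be extracted is shown below; the tag sequence is hypothetical and hand-assigned, whereas in practice a POS tagger (e.g., from NLTK) would produce it.

```python
def pos_ngrams(tags):
    """Return POS unigram features plus space-joined POS bigram features."""
    return list(tags) + [f"{a} {b}" for a, b in zip(tags, tags[1:])]

# Hypothetical tag sequence containing the adverb-then-verb (RB VB)
# pattern discussed above.
tags = ["PRP", "VBP", "RB", "VB", "RB"]
features = pos_ngrams(tags)
# features contains unigrams such as "RB" and bigrams such as "RB VB"
```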
221
+
222
+ # 6.2 Error Analysis
223
+
224
+ Finally, we perform an error analysis to examine the behavior of our best performing model (RoBERTa) and identify potential limitations of the current approaches. The first example is a tweet by the former US president Barack Obama which was classified as parody while it is in fact a real tweet:
225
+
226
+ Summer's almost over, Senate Leaders. #doyourjob <url>
227
+
228
+ Similarly, the next tweet was posted by the real account of the Virginia governor, Ralph Northam:
229
+
230
+ At this point, the list of Virginians Ed Gillespie *hasn't* sold out is shorter than the folks he has. <url>
231
+
232
+ Both of the tweets above contain humorous elements and come off as confrontational, aimed at someone else, a style which is more prevalent in parody. We hypothesize that the model picked up on this information and classified these tweets as parody. From the previous analyses, we noticed that tweets by real politicians often convey information in a more neutral or impersonal way. On the other hand, the following tweet was posted by a Mitt Romney parody account and was classified as real:
233
+
234
+ It's up to you, America: do you want a repeat of the last four years, or four years staggeringly worse than the last four years?
235
+
236
+ This parody tweet, even though it is more opinionated, is closer in style to a slogan or campaign speech and is therefore misclassified. Lastly, the following is a tweet from former President Obama that was misclassified as parody:
237
+
238
+ It's the #GimmeFive challenge, presidential style.<url>
239
+
240
+ The reason is that some politicians, such as Barack Obama, often write in an informal manner, which may cause the models to misclassify these kinds of tweets.
241
+
242
+ # 7 Conclusion
243
+
244
+ We presented the first study of parody, a linguistic phenomenon related to but distinct from irony and sarcasm, using methods from computational linguistics and machine learning. Focusing on political parody in social media, we introduced a freely available large-scale data set containing a total of 131,666 English tweets from 184 real and corresponding parody accounts. We defined parody prediction as a new binary classification task at the tweet level and evaluated a battery of feature-based and neural models, achieving high predictive accuracy of up to $89.7\%$ F1 on tweets from people unseen in training.
245
+
246
+ In the future, we plan to study in more depth the stylistic and figurative devices used for parody, extend the data set beyond the political case study, and explore human behavior regarding parody, including how it is detected and diffused through social media.
247
+
248
+ # Acknowledgments
249
+
250
+ We thank Bekah Hampson for providing early input and helping with the data annotation. NA is supported by ESRC grant ES/T012714/1 and an Amazon AWS Cloud Credits for Research Award.
251
+
252
+ # References
253
+
254
+ Nikolaos Aletras and Benjamin Paul Chamberlain. 2018. Predicting Twitter user socioeconomic attributes with network and language information. In Proceedings of the 29th ACM Conference on Hypertext and Social Media, pages 20-24.
255
+ David Bamman and Noah A Smith. 2015. Contextualized Sarcasm Detection on Twitter. In Ninth International AAAI Conference on Web and Social Media, ICWSM, pages 574-577.
256
+ John D Burger, John Henderson, George Kim, and Guido Zarrella. 2011. Discriminating gender on Twitter. In Proceedings of the conference on empirical methods in natural language processing, pages 1301-1309. Association for Computational Linguistics.
257
+ Zhiyuan Cheng, James Caverlee, and Kyumin Lee. 2010. You are where you tweet: a content-based approach to geo-locating Twitter users. In Proceedings of the 19th ACM international conference on Information and knowledge management, pages 759-768.
258
+ Munmun De Choudhury, Nicholas Diakopoulos, and Mor Naaman. 2012. Unfolding the event landscape on Twitter: Classification and exploration of user categories. In Proceedings of the ACM 2012 Conference on Computer Supported Cooperative Work, CSCW, pages 241-244.
259
+ Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186.
260
+ Liviu P. Dinu, Vlad Niculae, and Maria-Octavia Sulea. 2012. Pastiche detection based on stopwords rankings. Exposing Impersonators of a Romanian writer. In Proceedings of the Workshop on Computational Approaches to Deception Detection, pages 72-77, Avignon, France. Association for Computational Linguistics.
261
+ Marta Dynel. 2014. Isn't it ironic? Defining the scope of humorous irony. Humor, 27(4):619-639.
262
+ Herbert Franke. 1971. A note on parody in Chinese traditional literature. Oriens Extremus, 18(2):237-251.
263
+ Roberto González-Ibanez, Smaranda Muresan, and Nina Wacholder. 2011. Identifying sarcasm in Twitter: A closer look. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 581-586.
264
+
265
+ H. Paul Grice. 1975. Logic and conversation. In Peter Cole and Jerry L. Morgan, editors, Syntax and Semantics, Volume 3: Speech Acts, pages 41-58. Academic Press.
266
+ Robert Hariman. 2008. Political Parody and Public Culture. Quarterly Journal of Speech, 94(3):247-272.
267
+ Tim Highfield. 2016. News via Voldemort: Parody accounts in topical discussions on Twitter. New Media & Society, 18(9):2028-2045.
268
+ Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735-1780.
269
+ Jeremy Howard and Sebastian Ruder. 2018. Universal language model fine-tuning for text classification. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 328-339.
270
+ Aditya Joshi, Pushpak Bhattacharyya, and Mark J Carman. 2017. Automatic sarcasm detection: A survey. ACM Computing Surveys (CSUR), 50(5):73.
271
+ Patrick Juola. 2008. Authorship attribution. Foundations and Trends in Information Retrieval, 1(3):233-334.
272
+ Twin Karmakharm, Nikolaos Aletras, and Kalina Bontcheva. 2019. Journalist-in-the-loop: Continuous learning as a service for rumour analysis. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP): System Demonstrations, pages 115-120.
273
+ Anupam Khatri, Aditya Joshi, Pushpak Bhattacharyya, and Mark Carman. 2015. Your sentiment precedes you: Using an author's historical tweets to predict sarcasm. In Proceedings of the 6th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis, pages 25-30.
274
+ Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.
275
+ Moshe Koppel, Jonathan Schler, and Shlomo Argamon. 2009. Computational methods in authorship attribution. Journal of the Association for Information Science and Technology, 60(1):9-26.
276
+ Roger J. Kreuz and Richard M. Roberts. 1993. On satire and parody: The importance of being ironic. Metaphor and Symbolic Activity, 8(2):97-109.
277
+ Vasileios Lampos, Nikolaos Aletras, Jens K Geyti, Bin Zou, and Ingemar J Cox. 2016. Inferring the socioeconomic status of social media users based on behaviour and language. In ECIR, pages 689-695.
278
+ Vasileios Lampos, Nikolaos Aletras, Daniel Preotiuc-Pietro, and Trevor Cohn. 2014. Predicting and characterising user impact on Twitter. In Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics, EACL, pages 405-413.
279
+
280
+
281
+ Vasileios Lampos, Daniel Preotiuc-Pietro, and Trevor Cohn. 2013. A user-centric model of voting intention from social media. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 993-1003, Sofia, Bulgaria. Association for Computational Linguistics.
282
+ Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692.
283
+ Sunghwan Mac Kim, Qiongkai Xu, Lizhen Qu, Stephen Wan, and Cecile Paris. 2017. Demographic Inference on Twitter using Recursive Neural Networks. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 471-477.
284
+ James McCorriston, David Jurgens, and Derek Ruths. 2015. Organizations are users too: Characterizing and detecting the presence of organizations on Twitter. ICWSM, pages 650-653.
285
+ Robert McHardy, Heike Adel, and Roman Klinger. 2019. Adversarial training for satire detection: Controlling for confounding variables. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 660-665.
286
+ Stephen Merity, Nitish Shirish Keskar, and Richard Socher. 2018. Regularizing and optimizing LSTM language models. In International Conference on Learning Representations.
287
+ Janaina Ignacio de Morais, Hugo Queiroz Abonizio, Gabriel Marques Tavares, André Azevedo da Fonseca, and Sylvio Barbon Jr. 2019. Deciding among fake, satirical, objective and legitimate news: A multi-label classification system. In Proceedings of the XV Brazilian Symposium on Information Systems, page 22.
288
+ Dong Nguyen, Noah A Smith, and Carolyn P Rose. 2011. Author age prediction from text using linear regression. In Proceedings of the 5th ACL-HLT workshop on language technology for cultural heritage, social sciences, and humanities, pages 115-123. Association for Computational Linguistics.
289
+ Silviu Oprea and Walid Magdy. 2019. Exploring author context for detecting intended vs perceived sarcasm. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2854-2859.
290
+ Ruth Page. 2014. Hoaxes, hacking and humour: analysing impersonated identity on social network sites, pages 46-64. Palgrave Macmillan UK.
291
+
292
+ Bo Pang and Lillian Lee. 2008. Opinion mining and sentiment analysis. Foundations and Trends in Information Retrieval, 2(1-2):1-135.
293
+ John H Parmelee and Shannon L Bichard. 2011. Politics and the Twitter revolution: How tweets influence the relationship between political leaders and the public. Lexington Books.
294
+ Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532-1543.
295
+ Daniel Preotiuc-Pietro, Jordan Carpenter, Salvatore Giorgi, and Lyle Ungar. 2016. Studying the dark triad of personality through Twitter behavior. In Proceedings of the 25th ACM International Conference on Information and Knowledge Management, pages 761-770.
296
+ Daniel Preotiuc-Pietro and Rita Devlin Marier. 2019. Analyzing linguistic differences between owner and staff attributed tweets. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2848-2853.
297
+ Daniel Preotiuc-Pietro, Mihaela Gaman, and Nikolaos Aletras. 2019. Automatically identifying complaints in social media. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5008-5019.
298
+ Daniel Preotiuc-Pietro, Ye Liu, Daniel Hopkins, and Lyle Ungar. 2017. Beyond binary labels: political ideology prediction of Twitter users. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 729-740.
299
+ Daniel Preotiuc-Pietro and Lyle Ungar. 2018. User-level race and ethnicity predictors from Twitter text. In Proceedings of the 27th International Conference on Computational Linguistics, pages 1534-1545.
300
+ Daniel Preotiuc-Pietro, Svitlana Volkova, Vasileios Lampos, Yoram Bachrach, and Nikolaos Aletras. 2015. Studying user income through language, behaviour and affect in social media. PloS one, 10(9).
301
+ Hannah Rashkin, Eunsol Choi, Jin Yea Jang, Svitlana Volkova, and Yejin Choi. 2017. Truth of varying shades: Analyzing language in fake news and political fact-checking. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2931-2937.
302
+ Margaret A Rose. 1993. Parody: ancient, modern and post-modern. Cambridge University Press.
303
+ Deborah F. Rossen-Knill and Richard Henry. 1997. The pragmatics of verbal parody. Journal of Pragmatics, 27(6):719-752.
304
+
305
+ H Andrew Schwartz, Johannes C Eichstaedt, Margaret L Kern, Lukasz Dziurzynski, Stephanie M Ramones, Megha Agrawal, Achal Shah, Michal Kosinski, David Stillwell, Martin EP Seligman, et al. 2013. Personality, gender, and age in the language of social media: The open-vocabulary approach. PloS one, 8(9):e73791.
306
+ H. Andrew Schwartz, Salvatore Giorgi, Maarten Sap, Patrick Crutchley, Lyle Ungar, and Johannes Eichstaedt. 2017. DLATK: Differential language analysis ToolKit. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 55-60.
307
+ Dan Sperber. 1984. Verbal Irony: Pretense or Echoic Mention? American Psychological Association.
308
+ Efstathios Stamatatos. 2009. A survey of modern authorship attribution methods. Journal of the American Society for Information Science and Technology, 60(3):538-556.
309
+ Adam Tsakalidis, Nikolaos Aletras, Alexandra I Cristea, and Maria Liakata. 2018. Nowcasting the stance of social media users in a sudden vote: The case of the Greek Referendum. In Proceedings of the 27th ACM International Conference on Information and Knowledge Management, pages 367-376.
310
+ Andranik Tumasjan, Timm O Sprenger, Philipp G Sandner, and Isabell M Welpe. 2010. Predicting elections with Twitter: What 140 characters reveal about political sentiment. In 4th International AAAI Conference on Weblogs and Social Media, pages 178-185.
311
+ Cynthia Van Hee, Els Lefever, and Véronique Hoste. 2018. SemEval-2018 task 3: Irony detection in English tweets. In Proceedings of The 12th International Workshop on Semantic Evaluation, pages 39-50.
312
+ Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998-6008.
313
+ Farida Vis. 2013. Twitter as a reporting tool for breaking news: Journalists tweeting the 2011 UK riots. Digital journalism, 1(1):27-47.
314
+ Andreas Vlachos and Sebastian Riedel. 2014. Fact checking: Task definition and dataset construction. In Proceedings of the ACL 2014 Workshop on Language Technologies and Computational Social Science, pages 18-22.
315
+ Svitlana Volkova, Glen Coppersmith, and Benjamin Van Durme. 2014. Inferring user political preferences from streaming communications. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 186-196.
316
+
317
+ Soroush Vosoughi, Deb Roy, and Sinan Aral. 2018. The spread of true and false news online. Science, 359(6380):1146-1151.
318
+ Byron C Wallace. 2015. Computational irony: A survey and new perspectives. Artificial Intelligence Review, 43(4):467-483.
319
+ Sarah Wan, Regina Koh, Andrew Ong, and Augustine Pang. 2015. Parody social media accounts: Influence and impact on organizations during crisis. *Public Relations Review*, 41(3):381-385.
320
+ Claire Wardle and Hossein Derakhshan. 2018. Thinking about information disorder: formats of misinformation, disinformation, and mal-information. Journalism, fake news & disinformation. Paris: Unesco, pages 43-54.
321
+ Deirdre Wilson. 2006. The pragmatics of verbal irony: Echo or pretence? Lingua, 116(10):1722-1743.
322
+ Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, and Quoc V Le. 2019. XLNet: Generalized autoregressive pretraining for language understanding. arXiv preprint arXiv:1906.08237.
323
+ Peng Zhou, Wei Shi, Jun Tian, Zhenyu Qi, Bingchen Li, Hongwei Hao, and Bo Xu. 2016. Attention-based bidirectional long short-term memory networks for relation classification. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 207-212.
analyzingpoliticalparodyinsocialmedia/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:e739867fa209fd248e9bad53cd85572d42dd42f75f44faf710b4c3e57e4b15e1
3
+ size 423169
analyzingpoliticalparodyinsocialmedia/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:d7182ecd246072693d78f7ee31c5e425a667a02589a7db75230c3cef7e20f119
3
+ size 368235
analyzingthepersuasiveeffectofstyleinnewseditorialargumentation/d8d36c18-850e-4233-819c-8a8a8bf35acd_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:b09af634a36a74f92e855a33bcf3bf8881e74ff96151c2557569865d9160dbf9
3
+ size 49446
analyzingthepersuasiveeffectofstyleinnewseditorialargumentation/d8d36c18-850e-4233-819c-8a8a8bf35acd_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:0118e55d637a59baf93199dc5251d7fc576dde8588d68c5560ac9e9b6d13de2f
3
+ size 62858
analyzingthepersuasiveeffectofstyleinnewseditorialargumentation/d8d36c18-850e-4233-819c-8a8a8bf35acd_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:ff878791e077efd375b093de5db3734d505b835ff5906845d733ef4f8a32af3e
3
+ size 579483
analyzingthepersuasiveeffectofstyleinnewseditorialargumentation/full.md ADDED
@@ -0,0 +1,177 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Analyzing the Persuasive Effect of Style in News Editorial Argumentation
2
+
3
+ Roxanne El Baff $^{1,2}$ Henning Wachsmuth $^{3}$ Khalid Al-Khatib $^{2}$ Benno Stein $^{2}$
4
+
5
+ $^{1}$ German Aerospace Center (DLR), Germany, roxanne.elbaff@dlr.de $^{2}$ Bauhaus-Universität Weimar, Weimar, Germany, <first>.<last>@uni-weimar.de
6
+
7
+ <sup>3</sup> Paderborn University, Paderborn, Germany, henningw@upb.de
8
+
9
+ # Abstract
10
+
11
+ News editorials argue about political issues in order to challenge or reinforce the stance of readers with different ideologies. Previous research has investigated such persuasive effects for argumentative content. In contrast, this paper studies how important the style of news editorials is to achieve persuasion. To this end, we first compare content- and style-oriented classifiers on editorials from the liberal NYTimes with ideology-specific effect annotations. We find that conservative readers are resistant to NYTimes style, but on liberals, style even has more impact than content. Focusing on liberals, we then cluster the leads, bodies, and endings of editorials, in order to learn about writing style patterns of effective argumentation.
12
+
13
+ # 1 Introduction
14
+
15
+ The interaction between the author and the intended reader of an argumentative text is encoded in the linguistic choices of the author and their persuasive effect on the reader (Halmari and Virtanen, 2005). News editorials, in particular, aim to challenge or to reinforce the stance of readers towards controversial political issues, depending on the readers' ideology (El Baff et al., 2018). To affect readers, they often start with an enticing lead paragraph and end their argument with a "punch" (Rich, 2015).
16
+
17
+ Existing research has studied the persuasive effect of argumentative content and structure (Zhang et al., 2016; Wachsmuth et al., 2016) or combinations of content and style (Wang et al., 2017; Persing and Ng, 2017). In addition, some works indicate that different types of content affect readers with different personalities (Lukin et al., 2017) and beliefs (Durmus and Cardie, 2018). However, it remains unexplored so far what stylistic choices in argumentation actually affect which readers. We expect such choices to be key to generating effective argumentation (Wachsmuth et al., 2018).
18
+
19
+ This paper analyzes the persuasive effect of style in news editorial argumentation on readers with different political ideologies (conservative vs. liberal). We model style with widely-used features capturing argumentativeness (Somasundaran et al., 2007), psychological meaning (Tausczik and Pennebaker, 2010), and similar aspects (Section 3). Based on the NYTimes editorial corpus of El Baff et al. (2018) with ideology-specific effect annotations (Section 4), we compare style-oriented with content-oriented classifiers for persuasive effect (Section 5).
20
+
21
+ While the general performance of effect prediction seems somewhat limited on the corpus, our experiments yield important results: Conservative readers seem largely unaffected by the style of the (liberal) NYTimes, matching the intuition that content is what dominates opposing ideologies. On the other hand, the style features predict the persuasive effect on liberal readers even better than the content features — while being complementary. That is, style matters as soon as ideology matches.
22
+
23
+ Knowing about the specific structure of news editorials, we finally obtain common stylistic choices in their leads, bodies, and endings through clustering. From these, we derive writing style patterns that challenge or reinforce the stance of (liberal) readers of (liberal) news editorials, giving insights into what makes argumentation effective.
24
+
25
+ # 2 Related Work
26
+
27
+ Compared to other argumentative genres (Stede and Schneider, 2018), news editorials use many rhetorical means to achieve a persuasive effect on readers (van Dijk, 1995). Computational research has dealt with news editorials for retrieving opinions (Yu and Hatzivassiloglou, 2003; Bal, 2009), mining arguments (Al-Khatib et al., 2017), and analyzing their properties (Bal and Dizier, 2010; Scheffler and Stede, 2016). While Al-Khatib et al. (2016) modeled the structure underlying editorial argumentation, we use the corpus of El Baff et al. (2018) meant to study the persuasive effects of editorials depending on the readers' political ideology. Halmari and Virtanen (2005) state that four aspects affect persuasion in editorials: linguistic choices, prior beliefs of readers, prior beliefs and behaviors of authors, and the effect of the text.
28
+
29
+ <table><tr><td>Feature Base</td><td>Overview</td><td>Reference</td></tr><tr><td>Linguistic inquiry and word count</td><td>Psychological meaningfulness in percentile</td><td>Pennebaker et al. (2015)</td></tr><tr><td>NRC emotional and sentiment lexicon</td><td>Count of emotions (e.g. sad, etc.) and polarity words</td><td>Mohammad and Turney (2013)</td></tr><tr><td>Webis Argumentative Discourse Units</td><td>Count of each evidence type (e.g., statistics)</td><td>Al-Khatib et al. (2017)</td></tr><tr><td>MPQA Arguing Lexicon</td><td>Count of 17 types of arguing (e.g., assessments)</td><td>Somasundaran et al. (2007)</td></tr><tr><td>MPQA Subjectivity Classifier</td><td>Count of subjective and objective sentences</td><td>Riloff and Wiebe (2003)</td></tr></table>
30
+
31
+ Table 1: Summary of the style feature types in our dataset. Each feature is quantified at the level of the editorial.
32
+
33
+
34
+
35
+ Persuasive effectiveness reflects the rhetorical quality of argumentation (Wachsmuth et al., 2017). To assess effectiveness, Zhang et al. (2016) modeled the flow of content in debates, and Wachsmuth et al. (2016) the argumentative structure of student essays. Others combined different features for these genres (Persing and Ng, 2015). The impact of content selection relates to the notion of framing (Ajjour et al., 2019) and is well-studied in theory (van Eemeren, 2015). As Wang et al. (2017), however, we hypothesize that content and style achieve persuasion jointly. We target argumentative style here primarily, and we analyze its impact on liberal and conservative readers.
36
+
37
+ In related work, Lukin et al. (2017) found that emotional and rational arguments affect people with different personalities, and Durmus and Cardie (2018) take into account the religious and political ideology of debate portal participants. In follow-up work, Longpre et al. (2019) observed that style is more important for decided listeners. Unlike them, we focus on the stylistic choices made in well-planned argumentative texts.
38
+
39
+ The lead paragraphs and the ending of an editorial have special importance (Rich, 2015). Hynds (1990) analyzed how leads and endings changed over time, whereas Moznette and Rarick (1968) examined the readability of an editorial based on them. To our knowledge, however, no one has investigated their importance computationally so far. In this paper, we close this gap by analyzing what style of leads and endings is particularly effective compared to the editorial's body.
40
+
41
+ # 3 Style Features
42
+
43
+ To model style, we need to abstract from the content of a news editorial. This section outlines the feature types that we employ for this purpose. Most of them have been widely used in the literature. Table 1 summarizes all features.
44
+
45
+ LIWC Psychological word usage is reflected in the Linguistic Inquiry and Word Count (LIWC), a lexicon-based text analysis tool that assigns words to psychologically meaningful categories (Tausczik and Pennebaker, 2010). We use the LIWC version of Pennebaker et al. (2015), which contains 15 dimensions, listed in the following with examples.
46
+
47
+ (1) Language metrics: words per sentence, long words. (2) Function words: pronouns, auxiliaries. (3) Other grammar: common verbs, comparisons. (4) Affect words: positive and negative emotion. (5) Social word: family, friends. (6) Cognitive processes: discrepancies, certainty. (7) Perceptual processes: feeling, seeing. (8) Biological processes: body, health. (9) Core drives and needs: power, reward focus. (10) Time orientation. (11) Relativity. (12) Personal concerns. (13) Informal speech. (14) Punctuation. (15) Summary variables.
48
+
49
+ The last dimension (15) contains four variables, each of which is derived from various LIWC dimensions: (a) Analytical thinking (Pennebaker et al., 2014): The degree to which people use narrative language (low score), or more logical and formal language (high score). (b) Clout (Kacewicz et al., 2014): The relative social status, confidence, and leadership displayed in a text. (c) Authenticity (Newman et al., 2003): The degree to which people reveal themselves authentically. (d) Emotional tone (Cohn et al., 2004): Negative emotions, for scores lower than 50, and positive emotions otherwise.
50
+
51
+ NRC Emotion&Sentiment To represent the mood of editorials, we use the NRC lexicon of Mohammad and Turney (2013). NRC contains a set of English words and their associations with (1) emotions such as anger, disgust, and fear as well as (2) negative and positive sentiment polarities. These features are represented as the count of words associated with each category.
52
+
53
+
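In code, such lexicon features reduce to counting token matches per category. The snippet below is a minimal sketch; the word lists are tiny illustrative stand-ins, not the actual NRC associations.

```python
# Illustrative stand-in lexicon; the real NRC lexicon maps thousands of
# English words to eight emotions and two sentiment polarities.
LEXICON = {
    "anger": {"furious", "outrage"},
    "positive": {"great", "hope"},
    "negative": {"furious", "crisis"},
}

def lexicon_counts(text):
    """Count, per category, how many tokens of the text appear in the lexicon."""
    tokens = text.lower().split()
    return {category: sum(token in words for token in tokens)
            for category, words in LEXICON.items()}

counts = lexicon_counts("furious over the crisis but great hope remains")
```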
54
+
55
+ Webis ADUs To identify argumentative units in editorials that present evidence, we use the pre-trained evidence classifier of Al-Khatib et al. (2017). For each editorial, we identify the number of sentences that manifest anecdotal, statistical, and testimonial evidence respectively.
56
+
57
+ MPQA Arguing Somasundaran et al. (2007) constructed a lexicon that includes various patterns of arguing, such as assessments, doubt, authority, and emphasis. For each pattern type, one feature represents the count of the respective pattern in an editorial.
58
+
59
+ MPQA Subjectivity We apply the subjectivity classifier provided in OpinionFinder 2.0 (Riloff and Wiebe, 2003; Wiebe and Riloff, 2005) on the editorials, in order to count the number of subjective and objective sentences there.
60
+
61
+ # 4 Data
62
+
63
+ As the basis of our analysis, we use the Webis-Editorial-Quality-18 corpus (El Baff et al., 2018). The corpus includes persuasive effect annotations of 1000 English news editorials from the liberal New York Times (NYTimes). The annotations capture whether a given editorial challenges the prior stance of readers (i.e., making them rethink it, but not necessarily change it), reinforces their stance (i.e., helping them argue better about the discussed topic), or is ineffective for them. Each editorial has been annotated by six annotators: three with liberal and three with conservative ideology.
64
+
65
+ To evaluate an editorial's persuasive effect on liberals, we computed the majority vote of their annotations for the editorial (and, similarly, for conservatives). We ended up with 979 editorials with effect labels for liberals and conservatives, because we found 21 duplicate editorials with the same content but different IDs (for these, we use the majority vote across all duplicates).
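A minimal sketch of the per-ideology majority vote, assuming a strict majority of the three annotations is required; how three-way ties are resolved is an assumption here, not stated by the corpus description:

```python
from collections import Counter

def majority_effect(labels):
    """Majority vote over the three annotations of one ideology group.

    Returns the winning label, or None when no label reaches a strict
    majority of 2 out of 3 (tie handling is an illustrative assumption).
    """
    label, count = Counter(labels).most_common(1)[0]
    return label if count >= 2 else None
```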
66
+
67
+ The corpus does not have predefined evaluation datasets. To mimic real-life scenarios, we chronologically split it into a training set (oldest $80\%$ ) and a test set (newest $20\%$ ). Table 2 shows the distribution of ideology-specific effects in the datasets.
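The chronological split can be sketched as follows; the `(publication_date, editorial_id)` pair representation is an assumption for illustration:

```python
def chronological_split(editorials, train_frac=0.8):
    """Split editorials by time: oldest 80% for training, newest 20% for test.

    `editorials` is assumed to be a list of (publication_date, editorial_id)
    pairs; sorting on the pair orders them chronologically.
    """
    ordered = sorted(editorials)
    cut = int(len(ordered) * train_frac)
    return ordered[:cut], ordered[cut:]
```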
68
+
69
+ <table><tr><td rowspan="2">Class</td><td colspan="2">Training</td><td colspan="2">Test</td></tr><tr><td>Liberal</td><td>Conserv.</td><td>Liberal</td><td>Conserv.</td></tr><tr><td>Challenging</td><td>126</td><td>128</td><td>22</td><td>41</td></tr><tr><td>Ineffective</td><td>118</td><td>292</td><td>32</td><td>71</td></tr><tr><td>Reinforcing</td><td>539</td><td>363</td><td>142</td><td>84</td></tr><tr><td>Overall</td><td>783</td><td>783</td><td>196</td><td>196</td></tr></table>
70
+
71
+ Table 2: Distribution of the majority persuasive effect of the news editorials in the given training and test set for liberal and conservative ideology respectively.
72
+
73
+ # 5 Prediction of Persuasive Effects
74
+
75
+ To assess the impact of news editorial style on readers, we employ our style-based features on the task of predicting an editorial's persuasive effect: Given either of the two ideologies (liberal or conservative), predict for each editorial whether it is challenging, reinforcing, or ineffective.
76
+
77
+ We developed separate prediction models for the effect on liberals and conservatives, respectively. For each style feature type and for their combinations, we trained one SVM model with a linear kernel on the training set using scikit-learn (Pedregosa et al., 2011).
78
+
79
+ Given the dataset split mentioned above (training set $80\%$ , test set $20\%$ ), we tuned the SVM's cost hyperparameter using grid search with 5-fold cross-validation on the training set. Since the distribution of effect labels is highly skewed, we set the hyperparameter class_weight to "balanced". We then trained the best model on the whole training set and evaluated it on the test set. For comparison, we also built models for standard content features (lemma 1- to 3-grams), and we consider the random baseline that picks an effect class by chance.
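A scikit-learn sketch of this setup; the placeholder feature matrix, the C grid, and the `f1_macro` scoring choice are illustrative assumptions, not necessarily the paper's exact configuration:

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

rng = np.random.RandomState(0)
X_train, y_train = rng.rand(60, 5), rng.randint(0, 3, 60)  # placeholder data

# Linear-kernel SVM; the cost parameter C is tuned by grid search with
# 5-fold cross-validation, and class_weight="balanced" counters the
# skewed distribution of effect labels.
grid = GridSearchCV(
    SVC(kernel="linear", class_weight="balanced"),
    param_grid={"C": [0.01, 0.1, 1, 10, 100]},
    cv=5,
    scoring="f1_macro",
)
grid.fit(X_train, y_train)
best_model = grid.best_estimator_  # refit on the whole training set
```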
80
+
81
+ For both ideologies, Table 3 reports the macro- and micro $\mathrm{F_1}$ -scores for the style features, their best-performing combination, $^3$ the content features, and the best combination of content and style. $^4$
82
+
83
+ We computed significance using Wilcoxon's test to reveal differences between each two approaches among best style, content, best content+style, and baseline. We obtained the means of $\mathrm{F_1}$ -scores used in the significance tests by conducting five-fold cross-validation on the test set, using the same SVM hyperparameters as above.
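The fold-wise comparison can be sketched with `scipy.stats.wilcoxon`, a paired non-parametric test; the F1 values below are made up for illustration and are not the paper's scores:

```python
from scipy.stats import wilcoxon

# Paired macro-F1 scores of two systems over the same five test folds
# (illustrative numbers only).
f1_best_style = [0.41, 0.44, 0.39, 0.45, 0.43]
f1_content    = [0.35, 0.37, 0.34, 0.38, 0.36]

stat, p = wilcoxon(f1_best_style, f1_content)  # paired signed-rank test
```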
84
+
85
+ <table><tr><td rowspan="2">Features</td><td colspan="2">Liberals</td><td colspan="2">Conservatives</td></tr><tr><td>Macro</td><td>Micro</td><td>Macro</td><td>Micro</td></tr><tr><td>LIWC</td><td>0.31</td><td>0.40</td><td>0.25</td><td>0.26</td></tr><tr><td>NRC Emotion&amp;Sentiment</td><td>0.33</td><td>0.39</td><td>0.28</td><td>0.29</td></tr><tr><td>Webis ADUs</td><td>0.28</td><td>0.36</td><td>0.31</td><td>0.31</td></tr><tr><td>MPQA Arguing</td><td>0.33</td><td>0.41</td><td>0.29</td><td>0.29</td></tr><tr><td>MPQA Subjectivity</td><td>0.33</td><td>0.38</td><td>0.26</td><td>0.28</td></tr><tr><td>Best Style</td><td>*0.38</td><td>*0.49</td><td>0.36</td><td>0.37</td></tr><tr><td>Content</td><td>0.36</td><td>*0.49</td><td>0.37</td><td>0.38</td></tr><tr><td>Best Content+Style</td><td>*†0.43</td><td>*†0.54</td><td>0.36</td><td>0.36</td></tr><tr><td>Random baseline</td><td>0.23</td><td>0.26</td><td>0.33</td><td>0.34</td></tr></table>
86
+
87
+ Table 3: Test set macro and micro $\mathrm{F}_1$ -scores of each feature type and their best combinations in classifying the persuasive effect on liberals and conservatives. * and † indicate significant differences at $p < 0.05$ against the Random baseline and Content respectively.
88
+
89
+ In general, the results indicate that the persuasive effect seems hard to predict on the given corpus. Still, we observe that the style features play a notable role in predicting the effect of editorials on liberals. They achieve a significantly better macro $\mathrm{F}_1$ -score of 0.43 when combined with content, compared to 0.36 when using content alone, at $p < 0.05$ . On the other hand, the $\mathrm{F}_1$ -scores of content (macro 0.37, micro 0.38) and style (both 0.36) in predicting the effect on conservatives do not differ significantly even from the baseline (0.33, 0.34).
90
+
91
+ These results suggest that style is important as soon as the ideology of a reader matches that of the news portal (at least, this holds for liberal ideology), but not if it mismatches (here, conservative).
92
+
93
+ # 6 Identification of Style Patterns
94
+
95
+ Observing that the style of NYTimes editorials affects liberal readers, we seek to learn what patterns of writing style make their argumentation effective. To this end, we (1) abstract each discourse part of an editorial (lead, body, ending) into a style label using cluster analysis and (2) identify sequential patterns of style labels that are specific to challenging, ineffective, and reinforcing editorials.
96
+
97
+ Clustering Styles of Discourse Parts Given the importance of specific discourse parts of editorials (Rich, 2015), we split each editorial into lead, body, and ending. For each part, we separately perform three steps on the training set of the given corpus:<sup>6</sup>
98
+
99
+ <table><tr><td>Part</td><td>Cluster</td><td>Chall.</td><td>Ineff.</td><td>Reinf.</td></tr><tr><td rowspan="7">Lead</td><td>▲tone, ▼authenticity</td><td>0.15</td><td>0.12</td><td>0.11</td></tr><tr><td>▼tone, ▲authenticity</td><td>0.11</td><td>0.13</td><td>0.14</td></tr><tr><td>▼tone, ▼authenticity</td><td>0.20</td><td>0.09</td><td>0.15</td></tr><tr><td>▼tone, ▷authenticity, ▲# words</td><td>0.11</td><td>0.11</td><td>0.14</td></tr><tr><td>▲tone, ▲authenticity</td><td>0.06</td><td>0.18</td><td>0.14</td></tr><tr><td>▲tone, ▵authenticity</td><td>0.13</td><td>0.14</td><td>0.15</td></tr><tr><td>▲tone, ▷authenticity, ▲# words</td><td>0.24</td><td>0.23</td><td>0.17</td></tr><tr><td rowspan="7">Body</td><td>▲tone, ▼authenticity</td><td>0.17</td><td>0.25</td><td>0.13</td></tr><tr><td>▼tone, ▲authenticity, ▲relativity</td><td>0.09</td><td>0.05</td><td>0.10</td></tr><tr><td>▼tone, ▵authenticity, ▼relativity</td><td>0.13</td><td>0.10</td><td>0.09</td></tr><tr><td>▼tone, ▵authenticity, ▼relativity</td><td>0.15</td><td>0.10</td><td>0.17</td></tr><tr><td>▲tone, ▲authenticity, ▲relativity</td><td>0.17</td><td>0.18</td><td>0.15</td></tr><tr><td>▲tone, ▵authenticity, ▼relativity</td><td>0.11</td><td>0.11</td><td>0.16</td></tr><tr><td>▲tone, ▵authenticity</td><td>0.18</td><td>0.21</td><td>0.19</td></tr><tr><td rowspan="7">End.</td><td>▲tone, ▲authenticity, ▼# words</td><td>0.10</td><td>0.11</td><td>0.07</td></tr><tr><td>▲tone, ▲authenticity, ▲# words</td><td>0.24</td><td>0.25</td><td>0.25</td></tr><tr><td>▲tone, ▲authenticity, ▼# words</td><td>0.15</td><td>0.15</td><td>0.14</td></tr><tr><td>▼tone, ▲authenticity, ▼# words</td><td>0.06</td><td>0.08</td><td>0.09</td></tr><tr><td>▼tone, ▵authenticity, ▼# words</td><td>0.21</td><td>0.12</td><td>0.17</td></tr><tr><td>▼tone, ▵authenticity, ▼# words</td><td>0.06</td><td>0.08</td><td>0.06</td></tr><tr><td>▼tone, ▵authenticity, ▲# words</td><td>0.17</td><td>0.19</td><td>0.22</td></tr></table>
100
+
101
+ Table 4: Distribution of clusters over the leads, bodies, and endings of challenging, ineffective, and reinforcing editorials in the training set. The clusters are labeled by their most discriminating features (ordered). The symbols ▲, ▷, ▵, and ▼ denote relatively high, medium, low, and very low scores, respectively. The highest value in each row is marked bold.
102
+
103
+ 1. Extract the style features from Section 3.
104
+ 2. Perform a cluster analysis on the style features using cosine $k$ -means. $k$ is determined with the elbow method on the inertia of the clusters.
105
+ 3. Derive cluster labels from the most discriminating features across clusters: For each cluster, we determine those 2-3 values (e.g., "high tone, low authenticity") whose combination suffices to significantly distinguish the cluster from the others. By high to very low, we mean that a feature has significantly higher or lower scores compared to the other clusters.
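Step 2 can be approximated in scikit-learn by L2-normalising the feature rows, so that Euclidean k-means on unit-length vectors behaves like cosine k-means; the data matrix and the range of k below are placeholders:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import normalize

rng = np.random.RandomState(0)
X = rng.rand(200, 12)  # placeholder style features of one discourse part

# L2-normalise rows: Euclidean k-means on the unit sphere corresponds
# to clustering by cosine similarity.
Xn = normalize(X)

# Elbow method: inspect inertia as a function of k and pick the bend.
inertias = {
    k: KMeans(n_clusters=k, n_init=10, random_state=0).fit(Xn).inertia_
    for k in range(2, 10)
}
```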
106
+
107
+ Table 4 shows the distribution of lead, body, and ending clusters over challenging, ineffective, and reinforcing editorials.
108
+
109
+ ![](images/1f4a38fedf748947e66e7b943c3aa6f542a44cae0c1560230e51828a3c299922.jpg)
110
+ Figure 1: Sequences of lead, body, and ending styles most specific to challenging, ineffective, and reinforcing news editorials. The triangles denote whether the given style attribute is high, medium, or (very) low. The ordering of attributes reflects their importance.
111
+
112
+ For each discourse part, the most discriminating feature is tone, followed by authenticity. The former combines positive (higher scores) and negative (lower scores) emotional tones (Cohn et al., 2004). The latter indicates the degree to which people authentically reveal themselves; the higher the score, the more personal, humble, or vulnerable the writer is (Newman et al., 2003). In Table 4, we observe, for example, that the lead of challenging editorials over-proportionally often shows low authenticity, or that bodies with positive tone but low authenticity tend to be ineffective.
113
+
114
115
+
116
+ Identification of Style Patterns From Table 4, we determine the (maximum) two labels for each discourse part that are most specific to each of the three persuasive effect classes. From these, we build all possible lead-body-ending sequences, as visualized in Figure 1. According to a $\chi$ -square test, the distributions of these sequences differ significantly at $p < 0.05$ . They reveal the following patterns of NYTimes editorials for liberal readers:
117
+
118
+ - Challenging editorials often begin with a polar emotional tone, followed by a negative tone. They tend to have low authenticity (i.e., not humble/personal) in the whole discourse (see Figure 2 for an example).
119
+ - Ineffective editorials over-proportionally often start with authenticity and dull tone. They then tend to diffuse in different directions and to have a short ending paragraph.
120
+ - Reinforcing editorials tend to start and end with a negative tone. They often avoid relativity in the actual arguments (i.e., in the body).
121
+
122
+ ![](images/6d42335dea820358b14be44b2e5f4418a2898f3760cae46e65440b6199933ddf.jpg)
123
+ Figure 2: Example of a challenging editorial, along with the styles observed for its lead, body, and ending.
124
+
125
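The significance claim about these sequences corresponds to a χ²-test over a contingency table of sequence counts per effect class; a sketch with made-up counts:

```python
from scipy.stats import chi2_contingency

# Hypothetical counts of three lead-body-ending sequences (rows) in
# challenging / ineffective / reinforcing editorials (columns); the
# numbers are illustrative, not the corpus statistics.
table = [
    [30, 10, 15],
    [8, 25, 12],
    [12, 14, 40],
]
chi2, p, dof, expected = chi2_contingency(table)
distributions_differ = p < 0.05
```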
126
+
127
+ While these insights are naturally still vague to some extent and require more analysis in follow-up research, they show a first way of capturing the style of editorial argumentation.
128
+
129
+ # 7 Conclusion
130
+
131
+ This paper analyzes the importance of news editorial style in achieving persuasive effects on readers with different political ideologies. We find evidence that style has a significant influence on how a (liberal) editorial affects a (liberal) reader. Inspired by the theory of the high importance of the lead and ending in writing editorials (Rich, 2015), we also statistically reveal common effective and ineffective style sequences (lead-body-ending).
132
+
133
+ Our findings help to understand how effective argumentation works in the political sphere of editorial argumentation, and how to generate such argumentation. In related work, El Baff et al. (2019) revealed the impact of style features on generating pathos- and logos-oriented short argumentative texts based on the rhetorical strategies discussed by Wachsmuth et al. (2018). With the findings of this paper, we go beyond that, defining the basis of a style-dependent generation model for more sophisticated argumentation, as found in news editorials.
134
+
135
+ # References
136
+
137
+ Yamen Ajjour, Milad Alshomary, Henning Wachsmuth, and Benno Stein. 2019. Modeling frames in argumentation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2922-2932, Hong Kong, China. Association for Computational Linguistics.
138
+ Khalid Al-Khatib, Henning Wachsmuth, Matthias Hagen, and Benno Stein. 2017. Patterns of argumentation strategies across topics. In 2017 Conference on Empirical Methods in Natural Language Processing (EMNLP 2017), pages 1362-1368. Association for Computational Linguistics.
139
+ Khalid Al-Khatib, Henning Wachsmuth, Johannes Kiesel, Matthias Hagen, and Benno Stein. 2016. A news editorial corpus for mining argumentation strategies. In 26th International Conference on Computational Linguistics (COLING 2016), pages 3433-3443. Association for Computational Linguistics.
140
+ Bal Krishna Bal. 2009. Towards an analysis of opinions in news editorials: How positive was the year? (project abstract). In Proceedings of the Eight International Conference on Computational Semantics, pages 260-263. Association for Computational Linguistics.
141
+ Bal Krishna Bal and Patrick Saint Dizier. 2010. Towards building annotated resources for analyzing opinions and argumentation in news editorials. In Proceedings of the Seventh conference on International Language Resources and Evaluation (LREC'10). European Languages Resources Association (ELRA).
142
+ Michael A Cohn, Matthias R Mehl, and James W Pennebaker. 2004. Linguistic markers of psychological change surrounding September 11, 2001. *Psychological Science*, 15(10):687-693.
143
+ Teun A. van Dijk. 1995. Opinions and ideologies in editorials. In Proceedings of the 4th International Symposium of Critical Discourse Analysis, Language, Social Life and Critical Thought, Athens.
144
+ Esin Durmus and Claire Cardie. 2018. Exploring the role of prior beliefs for argument persuasion. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), volume 1, pages 1035-1045.
145
+ Frans H. van Eemeren. 2015. Strategic Maneuvering, pages 1-9. American Cancer Society.
146
+ Roxanne El Baff, Henning Wachsmuth, Khalid Al-Khatib, Manfred Stede, and Benno Stein. 2019. Computational argumentation synthesis as a language modeling task. In 12th International Natural Language Generation Conference. ACL.
147
+
148
+ Roxanne El Baff, Henning Wachsmuth, Khalid Al-Khatib, and Benno Stein. 2018. Challenge or empower: Revisiting argumentation quality in a news editorial corpus. In Proceedings of the 22nd Conference on Computational Natural Language Learning, pages 454-464. Association for Computational Linguistics.
149
+ Helena Halmari and Tuija Virtanen. 2005. *Persuasion across Genres: a Linguistic Approach*, volume 130. John Benjamins Publishing.
150
+ Ernest C Hynds. 1990. Changes in editorials: A study of three newspapers, 1955-1985. Journalism Quarterly, 67(2):302-312.
151
+ Ewa Kacewicz, James W Pennebaker, Matthew Davis, Moongee Jeon, and Arthur C Graesser. 2014. Pronoun use reflects standings in social hierarchies. Journal of Language and Social Psychology, 33(2):125-143.
152
+ Liane Longpre, Esin Durmus, and Claire Cardie. 2019. Persuasion of the undecided: Language vs. the listener. In Proceedings of the 6th Workshop on Argument Mining, pages 167-176, Florence, Italy. Association for Computational Linguistics.
153
+ Stephanie Lukin, Pranav Anand, Marilyn Walker, and Steve Whittaker. 2017. Argument strength is in the eye of the beholder: Audience effects in persuasion. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 742-753. Association for Computational Linguistics.
154
+ Saif M Mohammad and Peter D Turney. 2013. Crowdsourcing a word-emotion association lexicon. Computational Intelligence, 29(3):436-465.
155
+ James Moznette and Galen Rarick. 1968. Which are more readable: Editorials or news stories? Journalism Quarterly, 45(2):319-321.
156
+ Matthew L Newman, James W Pennebaker, Diane S Berry, and Jane M Richards. 2003. Lying words: Predicting deception from linguistic styles. *Personality and social psychology bulletin*, 29(5):665-675.
157
+ F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. 2011. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825-2830.
158
+ James W Pennebaker, Ryan L Boyd, Kayla Jordan, and Kate Blackburn. 2015. The development and psychometric properties of LIWC2015. Technical report, University of Texas at Austin.
159
+ James W Pennebaker, Cindy K Chung, Joey Frazee, Gary M Lavergne, and David I Beaver. 2014. When small words foretell academic success: The case of college admissions essays. *PloS one*, 9(12):e115844.
160
+
161
+ Isaac Persing and Vincent Ng. 2015. Modeling argument strength in student essays. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 543-552. Association for Computational Linguistics.
162
+ Isaac Persing and Vincent Ng. 2017. Lightly-supervised modeling of argument persuasiveness. In Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 594-604, Taipei, Taiwan. Asian Federation of Natural Language Processing.
163
+ Carole Rich. 2015. Writing and reporting news: A coaching method. Cengage Learning.
164
+ Ellen Riloff and Janyce Wiebe. 2003. Learning extraction patterns for subjective expressions. In Proceedings of the 2003 conference on Empirical methods in natural language processing.
165
+ Evan Sandhaus. 2008. The New York Times Annotated Corpus. Linguistic Data Consortium, Philadelphia.
166
+ Tatjana Scheffler and Manfred Stede. 2016. Realizing argumentative coherence relations in German: A contrastive study of newspaper editorials and Twitter posts. In Proceedings of the COMMA Workshop: Foundations of the Language of Argumentation, pages 73-80.
167
+ Swapna Somasundaran, Josef Ruppenhofer, and Janyce Wiebe. 2007. Detecting arguing and sentiment in meetings. In Proceedings of the SIGdial Workshop on Discourse and Dialogue, volume 6.
168
+ Manfred Stede and Jodi Schneider. 2018. Argumentation Mining. Number 40 in Synthesis Lectures on Human Language Technologies. Morgan & Claypool.
169
+ Yla R Tausczik and James W Pennebaker. 2010. The psychological meaning of words: LIWC and computerized text analysis methods. Journal of Language and Social Psychology, 29(1):24-54.
170
+ Henning Wachsmuth, Khalid Al Khatib, and Benno Stein. 2016. Using argument mining to assess the argumentation quality of essays. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 1680-1691. The COLING 2016 Organizing Committee.
171
+ Henning Wachsmuth, Nona Naderi, Yufang Hou, Yonatan Bilu, Vinodkumar Prabhakaran, Tim Alberdingk Thijm, Graeme Hirst, and Benno Stein. 2017. Computational argumentation quality assessment in natural language. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 176-187. Association for Computational Linguistics.
172
+
173
+ Henning Wachsmuth, Manfred Stede, Roxanne El Baff, Khalid Al Khatib, Maria Skeppstedt, and Benno Stein. 2018. Argumentation synthesis following rhetorical strategies. In Proceedings of the 27th International Conference on Computational Linguistics, pages 3753-3765. Association for Computational Linguistics.
174
+ Lu Wang, Nick Beauchamp, Sarah Shugars, and Kochen Qin. 2017. Winning on the merits: The joint effects of content and style on debate outcomes. Transactions of the Association for Computational Linguistics, 5:219-232.
175
+ Janyce Wiebe and Ellen Riloff. 2005. Creating subjective and objective sentence classifiers from unannotated texts. In International conference on intelligent text processing and computational linguistics, pages 486-497. Springer.
176
+ Hong Yu and Vasileios Hatzivassiloglou. 2003. Towards answering opinion questions: Separating facts from opinions and identifying the polarity of opinion sentences. In Proceedings of the 2003 Conference on Empirical Methods in Natural Language Processing, pages 129-136. Association for Computational Linguistics.
177
+ Justine Zhang, Ravi Kumar, Sujith Ravi, and Cristian Danescu-Niculescu-Mizil. 2016. Conversational flow in Oxford-style debates. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 136-141, San Diego, California. Association for Computational Linguistics.
analyzingthepersuasiveeffectofstyleinnewseditorialargumentation/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:1d7f72293600a97d56cbc4c036431f447884d1d6b975fdbbe2c36e5484c02755
3
+ size 350169
analyzingthepersuasiveeffectofstyleinnewseditorialargumentation/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:853944e216bb1e82824347c9b6d9b2fdb28667e4e8282952dbe96beb9cbff013
3
+ size 217916
ananalysisoftheutilityofexplicitnegativeexamplestoimprovethesyntacticabilitiesofneurallanguagemodels/453e6f4e-6353-4d36-931e-6eca1fbb37ca_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:bfd299f65d4fe7c71e9cc4473ad9609a0b72585a889412f71a3fcb1b40e556d8
3
+ size 78049
ananalysisoftheutilityofexplicitnegativeexamplestoimprovethesyntacticabilitiesofneurallanguagemodels/453e6f4e-6353-4d36-931e-6eca1fbb37ca_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:2523c3413405df34dc3ed8eb301e82081e7e8206af7d47fba9f8cf2c17c07e46
3
+ size 91951
ananalysisoftheutilityofexplicitnegativeexamplestoimprovethesyntacticabilitiesofneurallanguagemodels/453e6f4e-6353-4d36-931e-6eca1fbb37ca_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:e49be8b0b8cea0c726148ff9a1e85781e23ea8e560c7de9608e4bfa393b5a186
3
+ size 390323
ananalysisoftheutilityofexplicitnegativeexamplestoimprovethesyntacticabilitiesofneurallanguagemodels/full.md ADDED
@@ -0,0 +1,295 @@
 
 
 
 
1
+ # An Analysis of the Utility of Explicit Negative Examples to Improve the Syntactic Abilities of Neural Language Models
2
+
3
+ Hiroshi Noji
4
+
5
+ Artificial Intelligence Research Center
6
+
7
+ AIST, Tokyo, Japan
8
+
9
+ hiroshi.noji@aist.go.jp
10
+
11
+ Hiroya Takamura
12
+
13
+ Artificial Intelligence Research Center
14
+
15
+ AIST, Tokyo, Japan
16
+
17
+ takamura.hiroya@aist.go.jp
18
+
19
+ # Abstract
20
+
21
+ We explore the utilities of explicit negative examples in training neural language models. Negative examples here are incorrect words in a sentence, such as barks in *The dogs barks*. Neural language models are commonly trained only on positive examples, a set of sentences in the training data, but recent studies suggest that the models trained in this way are not capable of robustly handling complex syntactic constructions, such as long-distance agreement. In this paper, we first demonstrate that appropriately using negative examples about particular constructions (e.g., subject-verb agreement) will boost the model's robustness on them in English, with a negligible loss of perplexity. The key to our success is an additional margin loss between the log-likelihoods of a correct word and an incorrect word. We then provide a detailed analysis of the trained models. One of our findings is the difficulty of object-relative clauses for RNNs. We find that even with our direct learning signals the models still suffer from resolving agreement across an object-relative clause. Augmentation of training sentences involving the constructions somewhat helps, but the accuracy still does not reach the level of subject-relative clauses. Although not directly cognitively appealing, our method can be a tool to analyze the true architectural limitation of neural models on challenging linguistic constructions.
22
+
23
+ # 1 Introduction
24
+
25
+ Despite not being exposed to explicit syntactic supervision, neural language models (LMs), such as recurrent neural networks, are able to generate fluent and natural sentences, suggesting that they induce syntactic knowledge about the language to some extent. However, it is still under debate whether such induced knowledge about grammar is
26
+
27
+ robust enough to deal with syntactically challenging constructions such as long-distance subject-verb agreement. So far, the results for RNN language models (RNN-LMs) trained only with raw text are overall negative; prior work has reported low performance on the challenging test cases (Marvin and Linzen, 2018) even with the massive size of the data and model (van Schijndel et al., 2019), or argue the necessity of an architectural change to track the syntactic structure explicitly (Wilcox et al., 2019b; Kuncoro et al., 2018). Here the task is to evaluate whether a model assigns a higher likelihood on a grammatically correct sentence (1a) over an incorrect sentence (1b) that is minimally different from the original one (Linzen et al., 2016).
28
+
29
+ (1) a. The author that the guards like laughs. b. \* The author that the guards like laugh.
30
+
31
+ In this paper, to obtain a new insight into the syntactic abilities of neural LMs, in particular RNN-LMs, we perform a series of experiments under a different condition from the prior work. Specifically, we extensively analyze the performance of the models that are exposed to explicit negative examples. In this work, negative examples are the sentences or tokens that are grammatically incorrect, such as (1b) above.
32
+
33
+ Since these negative examples provide a direct learning signal on the task at test time it may not be very surprising if the task performance goes up. We acknowledge this, and argue that our motivation for this setup is to deepen understanding, in particular the limitation or the capacity of the current architectures, which we expect can be reached with such strong supervision. Another motivation is engineering: we could exploit negative examples in different ways, and establishing a better way will be of practical importance toward building an LM or generator that can be robust on particular linguistic constructions.
34
+
35
+ The first research question we pursue is about this latter point: what is a better method to utilize negative examples that help LMs to acquire robustness on the target syntactic constructions? Regarding this point, we find that adding an additional token-level loss trying to guarantee a margin between log-probabilities for the correct and incorrect words (e.g., $\log p(\text{laughs}|h)$ and $\log p(\text{laugh}|h)$ for (1a)) is superior to the alternatives. On the test set of Marvin and Linzen (2018), we show that LSTM language models (LSTM-LMs) trained by this loss reach a near perfect level on most syntactic constructions for which we create negative examples, with only a slight increase in perplexity of about 1.0 point.
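A minimal sketch of such a token-level margin loss; the margin value `delta=1.0` is an illustrative default, not necessarily the paper's setting:

```python
def margin_loss(log_p_correct, log_p_incorrect, delta=1.0):
    """Hinge-style margin loss added to the usual LM loss: zero once
    log p(correct word) exceeds log p(incorrect word) by at least delta,
    and linear in the violation otherwise."""
    return max(0.0, delta - (log_p_correct - log_p_incorrect))
```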
36
+
37
+ Past work conceptually similar to ours is Enguehard et al. (2017), which, while not directly exploiting negative examples, trains an LM with additional explicit supervision signals for the evaluation task. They hypothesize that LSTMs do have enough capacity to acquire robust syntactic abilities but the learning signals given by the raw text are weak, and show that multi-task learning with a binary classification task to predict the upcoming verb form (singular or plural) helps make models aware of the target syntax (subject-verb agreement). Our experiments basically confirm and strengthen this argument, with even stronger learning signals from negative examples, and we argue this allows us to evaluate the true capacity of the current architectures. In our experiments (Section 4), we show that our margin loss achieves higher syntactic performance than their multi-task learning.
38
+
39
+ Another relevant work on the capacity of LSTM-LMs is Kuncoro et al. (2019), which shows that by distilling from syntactic LMs (Dyer et al., 2016), LSTM-LMs can improve their robustness on various agreement phenomena. We show that our LMs with the margin loss outperform theirs in most of the aspects, further strengthening the argument about a stronger capacity of LSTM-LMs.
40
+
41
+ The latter part of this paper is a detailed analysis of the trained models and introduced losses. Our second question is about the true limitation of LSTM-LMs: are there still any syntactic constructions that the models cannot handle robustly even with our direct learning signals? This question can be seen as a fine-grained one raised by Enguehard et al. (2017) with a stronger tool and improved evaluation metric. Among tested constructions, we find that syntactic agreement across an object relative clause (RC) is challenging. To inspect whether this
42
+
43
+ is due to the architectural limitation, we train another LM on a dataset, on which we unnaturally augment sentences involving object RCs. Since it is known that object RCs are relatively rare compared to subject RCs (Hale, 2001), frequency may be the main reason for the lower performance. Interestingly, even when increasing the number of sentences with an object RC by eight times (more than twice of sentences with a subject RC), the accuracy does not reach the same level as agreement across a subject RC. This result suggests an inherent difficulty in tracking a syntactic state across an object RC for sequential neural architectures.
44
+
45
+ We finally provide an ablation study to understand the encoded linguistic knowledge in the models learned with the help of our method. We experiment under reduced supervision at two different levels: (1) at a lexical level, by not giving negative examples on verbs that appear in the test set; (2) at a construction level, by not giving negative examples about a particular construction, e.g., verbs after a subject RC. We observe no huge score drops by both. This suggests that our learning signals at a lexical level (negative words) strengthen the abstract syntactic knowledge about the target constructions, and also that the models can generalize the knowledge acquired by negative examples to similar constructions for which negative examples are not explicitly given. The result also implies that negative examples do not have to be complete and can be noisy, which will be appealing from an engineering perspective.
+
+ # 2 Target Task and Setup
+
+ The most common evaluation metric of an LM is perplexity. Although neural LMs achieve impressive perplexity (Merity et al., 2018), it is an average score across all tokens and does not reveal the models' behavior on linguistically challenging structures, which are rare in the corpus. This is the primary motivation for separately evaluating the models' syntactic robustness with a different task.
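Concretely, perplexity is the exponential of the average per-token negative log-likelihood. A minimal illustration in pure Python (the function name and toy probabilities are ours, not from the paper):

```python
import math

def perplexity(log_probs):
    # Perplexity = exp(average negative log-likelihood per token).
    nll = -sum(log_probs) / len(log_probs)
    return math.exp(nll)

# A toy 4-token sentence where the model assigns each token
# probability 0.1; the perplexity is exactly 1 / 0.1 = 10.
ppl = perplexity([math.log(0.1)] * 4)
print(round(ppl, 2))  # 10.0
```

Because it averages over all tokens, rare challenging constructions barely move this number, which is exactly the limitation noted above.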
+
+ # 2.1 Syntactic evaluation task
+
+ As introduced in Section 1, the task for a model is to assign a higher probability to the grammatical sentence over the ungrammatical one, given a pair of minimally different sentences that diverge at a critical position affecting grammaticality. For example, (1a) and (1b) differ only in the final verb form, and to assign a higher probability to (1a), models need to be aware of the agreement dependency between author and laughs across an RC.
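The evaluation itself reduces to comparing sentence log-probabilities. A sketch of the decision rule with hypothetical per-token probabilities (all numbers are illustrative, not from the test set):

```python
import math

def sentence_logprob(token_probs):
    # An LM's sentence log-probability: the sum of per-token log-probs.
    return sum(math.log(p) for p in token_probs)

# Hypothetical probabilities for a minimal pair like (1a)/(1b);
# only the critical verb position differs between the two sentences.
grammatical   = [0.20, 0.10, 0.30, 0.05]   # "... author ... laughs"
ungrammatical = [0.20, 0.10, 0.30, 0.01]   # "... author ... laugh"

model_is_correct = sentence_logprob(grammatical) > sentence_logprob(ungrammatical)
print(model_is_correct)  # True
```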
+
+ Marvin and Linzen (2018) test set While initial work (Linzen et al., 2016; Gulordava et al., 2018) collected test examples from naturally occurring sentences, this approach suffers from a coverage issue, as syntactically challenging examples are relatively rare. We use the test set compiled by Marvin and Linzen (2018), which consists of synthetic examples (in English) created from a fixed vocabulary and a grammar. This approach allows us to collect a variety of sentences with complex structures.
+
+ The test set is divided according to the syntactic construction appearing in each example. Many constructions are different types of subject-verb agreement, including local agreement at different sentential positions (2) and non-local agreement across different types of phrases. Intervening phrases include prepositional phrases, subject RCs, object RCs, and coordinated verb phrases (3). (1) is an example of agreement across an object RC.
+
+ (2) The senators smile/\*smiles.
+ (3) The senators like to watch television shows and are/\*is twenty three years old.
+
+ Previous work has shown that non-local agreement is particularly challenging for sequential neural models (Marvin and Linzen, 2018).
+
+ The other patterns are reflexive anaphora dependencies between a noun and a reflexive pronoun (4), and on negative polarity items (NPIs), such as ever, which requires a preceding negation word (e.g., no and none) at an appropriate scope (5):
+
+ (4) The authors hurt themselves/\*himself.
+ (5) No/\*Most authors have ever been popular.
+
+ Note that NPI examples differ from the others in that the context determining the grammaticality of the target word (No/\*Most) does not precede it. Rather, the grammaticality is determined by the following context. As we discuss in Section 3, this property makes it difficult to apply training with negative examples for NPIs for most of the methods studied in this work.
+
+ All examples above (1-5) are actual test sentences, and since they are synthetic, some may sound somewhat unnatural. The main argument for using this dataset is that even if not fully natural, the sentences are still strictly grammatical, and an LM equipped with robust syntactic abilities should be able to handle them as a human would.
+
+ We use the original test set used in Marvin and Linzen (2018).<sup>1</sup> See their supplementary materials for the lexical items and example sentences in each construction.
+
+ # 2.2 Language models
+
+ Training data Following previous practice, we train LMs on a dataset not directly related to the test set. Throughout the paper, we use the English Wikipedia corpus assembled by Gulordava et al. (2018), which has been used as training data for the present task (Marvin and Linzen, 2018; Kuncoro et al., 2019), consisting of $80\mathrm{M} / 10\mathrm{M} / 10\mathrm{M}$ tokens for the training/dev/test sets. It is tokenized, and rare words are replaced by a single unknown token, yielding a vocabulary size of 50,000.
+
+ Baseline LSTM-LM Since our focus in this paper is an additional loss exploiting negative examples (Section 3), we fix the baseline LM throughout the experiments. Our baseline is a three-layer LSTM-LM with 1,150 hidden units at internal layers trained with the standard cross-entropy loss. Word embeddings are 400-dimensional, and input and output embeddings are tied (Inan et al., 2016). Deviating from some prior work (Marvin and Linzen, 2018; van Schijndel et al., 2019), we train LMs at sentence level as in sequence-to-sequence models (Sutskever et al., 2014). This setting has been employed in some previous work (Kuncoro et al., 2018, 2019).
+
+ Parameters are optimized by SGD. For regularization, we apply dropout to word embeddings and to the outputs of every LSTM layer, use weight decay of 1.2e-6, and halve the learning rate if the validation perplexity does not improve at successive checks, checking every 5,000 mini-batches. Mini-batch size, dropout weight, and initial learning rate are tuned by perplexity on the dev set of the Wikipedia dataset. Note that we tune these values for the baseline LSTM-LM and fix them across the experiments.
+
+ The size of our three-layer LM is the same as the state-of-the-art document-level LSTM-LM (Merity et al., 2018). Marvin and Linzen (2018)'s LSTM-LM is two-layer with 650 hidden units and 650-dimensional word embeddings. Comparing the two, since our word embeddings are smaller (400 vs. 650), the total model sizes are comparable (40M for ours vs. 39M for theirs). Nonetheless, we will see in the first experiment that our carefully tuned three-layer model achieves much higher syntactic performance than their model (Section 4), making it a stronger baseline for our extensions, which we introduce next.
+
+ # 3 Learning with Negative Examples
+
+ Now we describe four additional losses for exploiting negative examples. The first two are existing ones, proposed for a similar purpose or under a different motivation. As far as we know, the latter two have not appeared in past work.<sup>4</sup>
+
+ We note that we create negative examples by modifying the original Wikipedia training sentences, not sentences in the test set. As a running example, let us consider the case where sentence (6a) exists in a mini-batch, from which we create a negative example (6b).
+
+ (6) a. An industrial park with several companies is located in the close vicinity.
+ b. * An industrial park with several companies are located in the close vicinity.
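A minimal sketch of how such a negative sentence can be derived by flipping the number of a marked target verb. The flip table and helper function are illustrative only; the actual pipeline marks target words during corpus preprocessing:

```python
# Illustrative number-flip table; the real pipeline identifies and
# flips all marked present-tense verbs in the training corpus.
FLIP = {"is": "are", "are": "is", "has": "have", "have": "has"}

def make_negative(tokens, target_index):
    # Copy the sentence and flip only the target verb, yielding an
    # ungrammatical "negative sentence" that differs in one token.
    negative = list(tokens)
    negative[target_index] = FLIP[negative[target_index]]
    return negative

tokens = "An industrial park with several companies is located here".split()
negative = make_negative(tokens, tokens.index("is"))
print(" ".join(negative))
# An industrial park with several companies are located here
```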
+
+ Notations By a target word, we mean a word for which we create a negative example (e.g., is). We distinguish two types of negative examples: a negative token and a negative sentence; the former means a single incorrect word (e.g., are), while the latter means an entire ungrammatical sentence.
+
+ # 3.1 Negative Example Losses
+
+ Binary-classification loss This is proposed by Enguehard et al. (2017) to complement a weak inductive bias in LSTM-LMs for learning syntax. It is multi-task learning across the cross-entropy loss $(L_{lm})$ and an additional loss $(L_{add})$ :
+
+ $$
105
+ L = L _ {l m} + \beta L _ {a d d}, \tag {1}
106
+ $$
+
+ where $\beta$ is a relative weight for $L_{add}$. Given the outputs of the LSTM, a linear layer followed by a binary softmax predicts whether the next token is singular or plural. $L_{add}$ is the loss for this classification, defined only for the contexts preceding a target token $x_{i}$:
+
+ $$
113
+ L _ {a d d} = \sum_ {x _ {1: i} \in \mathbf {h} ^ {*}} - \log p (\operatorname {n u m} (x _ {i}) | x _ {1: i - 1}),
114
+ $$
+
+ where $x_{1:i} = x_1 \cdots x_i$ is a prefix sequence and $\mathbf{h}^*$ is a set of all prefixes ending with a target word (e.g., An industrial park with several companies is) in the training data. $\mathrm{num}(x) \in \{\text{singular}, \text{plural}\}$ is a function returning the number of $x$ . In practice, for each mini-batch for $L_{lm}$ , we calculate $L_{add}$ for the same set of sentences and add these two to obtain a total loss for updating parameters.
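A sketch of how $L_{add}$ can be computed for this binary classification loss, given the classifier's probability of "singular" at each prefix ending right before a target verb (the helper and toy values are ours):

```python
import math

def binary_number_loss(singular_probs, gold_numbers):
    # L_add: cross-entropy of a singular/plural classifier, evaluated
    # only at prefixes ending right before a target verb.
    loss = 0.0
    for p_singular, gold in zip(singular_probs, gold_numbers):
        p = p_singular if gold == "singular" else 1.0 - p_singular
        loss -= math.log(p)
    return loss

# Two target positions; the classifier is confident and correct at both.
l_add = binary_number_loss([0.9, 0.2], ["singular", "plural"])

# Total loss per Eq. 1, with an illustrative LM loss and beta = 1:
beta, l_lm = 1.0, 120.0
total = l_lm + beta * l_add
```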
+
+ As we mentioned in Section 1, this loss does not exploit negative examples explicitly; essentially a model is only informed of a key position (target word) that determines the grammaticality. This is rather an indirect learning signal, and we expect that it does not outperform the other approaches.
+
+ Unlikelihood loss This was recently proposed by Welleck et al. (2020) to resolve the repetition issue, a known problem for neural text generators (Holtzman et al., 2019). Aiming to learn a model that can suppress repetition, they introduce an unlikelihood loss, an additional token-level loss that explicitly penalizes choosing words that previously appeared in the current context.
+
+ We customize their loss for negative tokens $x_{i}^{*}$ (e.g., are in (6b)). Since this loss is added at the token level, the total loss is not Eq. 1 but $L_{lm}$ itself, which we modify as:
+
+ $$
125
+ \sum_ {\mathbf {x} \in D} \sum_ {x _ {i} \in \mathbf {x}} - \log p (x _ {i} | x _ {1: i - 1}) + \sum_ {x _ {i} ^ {*} \in \operatorname {n e g} _ {t} (x _ {i})} g (x _ {i} ^ {*}),
126
+ $$
+
+ $$
129
+ g (x _ {i} ^ {*}) = - \alpha \log (1 - p (x _ {i} ^ {*} | x _ {1: i - 1})),
130
+ $$
+
+ where $\mathrm{neg}_t(\cdot)$ returns the negative tokens for a target $x_{i}$, and $\alpha$ controls the weight. $\mathbf{x}$ is a sentence in the training data $D$. The unlikelihood loss strengthens the signal to penalize undesirable words in a context by explicitly reducing the likelihood of negative tokens $x_{i}^{*}$. This is a more direct learning signal than the binary classification loss.
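The behavior of the per-token penalty $g(x_{i}^{*})$ can be sketched directly (values illustrative):

```python
import math

def unlikelihood_penalty(p_negative, alpha=1000.0):
    # g(x*) = -alpha * log(1 - p(x* | prefix)): the more probability
    # the model leaks onto the negative token, the larger the penalty.
    return -alpha * math.log(1.0 - p_negative)

low = unlikelihood_penalty(0.01)   # little mass on the negative token
high = unlikelihood_penalty(0.50)  # half the mass: a much larger penalty
```

Note the penalty is zero only in the limit where the negative token receives no probability at all, so the pressure never fully vanishes.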
+
+ Sentence-level margin loss We propose a different loss, in which the likelihoods of correct and incorrect sentences are more tightly coupled. As in the binary classification loss, the total loss is given by Eq. 1. We consider the following loss for $L_{add}$:
+
+ $$
139
+ \sum_ {\mathbf {x} \in D} \sum_ {\mathbf {x} _ {j} ^ {*} \in \mathrm {n e g} _ {s} (\mathbf {x})} \max (0, \delta - (\log p (\mathbf {x}) - \log p (\mathbf {x} _ {j} ^ {*}))),
140
+ $$
+
+ where $\delta$ is a margin value between the log-likelihood of original sentence $\mathbf{x}$ and negative sentences $\{\mathbf{x}_j^*\}$ . $\mathrm{neg}_s(\cdot)$ returns a set of negative sentences by modifying the original one. Note that we change only one token for each $\mathbf{x}_j^*$ , and thus may obtain multiple negative sentences from one $\mathbf{x}$ when it contains multiple target tokens (e.g., she leaves there but comes back ...).
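The hinge term for one original sentence and its negative variants can be sketched as follows (the log-probabilities are illustrative):

```python
def sentence_margin_loss(logp_original, logp_negatives, delta=10.0):
    # Hinge term per negative sentence: zero once the original sentence
    # is at least `delta` log-units more likely than the negative one.
    return sum(max(0.0, delta - (logp_original - logp_neg))
               for logp_neg in logp_negatives)

# The negative sentence is only 4 log-units less likely than the
# original, so with delta = 10 this pair still incurs a loss of 6.
loss = sentence_margin_loss(-20.0, [-24.0], delta=10.0)
print(loss)  # 6.0
```

Unlike the unlikelihood penalty, this term switches off entirely once the required gap is reached, which bounds how far the model is pushed.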
+
+ Compared to the unlikelihood loss, this loss not only decreases the likelihood of a negative example but also tries to guarantee a certain gap between the two likelihoods. In this sense its learning signal is stronger; however, it lacks token-level supervision, which may provide a more direct signal for learning a clear contrast between correct and incorrect words. This is an empirical question we pursue in the experiments.
+
+ Token-level margin loss Our final loss is a combination of the previous two, obtained by replacing $g(x_{i}^{*})$ in the unlikelihood loss with a margin loss:
+
+ $$
149
+ \begin{array}{l} g (x _ {i} ^ {*}) = \max (0, \delta - (\log p (x _ {i} | x _ {1: i - 1}) \\ - \log p (x _ {i} ^ {*} | x _ {1: i - 1})). \\ \end{array}
150
+ $$
+
+ We will see that this loss is the most advantageous in the experiments (Section 4).
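The token-level margin term can be sketched analogously to the sentence-level one, but between the correct and negative token at a single position (values illustrative):

```python
def token_margin(logp_correct, logp_negative, delta=10.0):
    # g(x_i^*): hinge between the correct token (e.g., "is") and its
    # negative counterpart (e.g., "are") in the same context.
    return max(0.0, delta - (logp_correct - logp_negative))

# A gap of 12 log-units already exceeds delta = 10: no loss.
assert token_margin(-1.0, -13.0) == 0.0
# A gap of only 2: a loss of 8 pushes the two tokens further apart.
assert token_margin(-1.0, -3.0) == 8.0
```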
+
+ # 3.2 Parameters
+
+ Each method employs a few additional hyperparameters ($\beta$ for the binary classification loss, $\alpha$ for the unlikelihood loss, and $\delta$ for the margin losses). In preliminary experiments, we select $\beta$ and $\alpha$ from $\{1, 10, 100, 1000\}$ by the best average syntactic performance, finding $\beta = 1$ and $\alpha = 1000$. For the two margin losses, we fix $\beta = 1.0$ and $\alpha = 1.0$ and only examine the effect of the margin value $\delta$.
+
+ # 3.3 Scope of Negative Examples
+
+ Since our goal is to understand to what extent LMs can be sensitive to the target syntactic constructions by giving explicit supervision via negative examples, we only prepare negative examples on the constructions that are directly tested at evaluation. Specifically, we mark the following words in the training data, and create negative examples:
+
+ Present verb To create negative examples on subject-verb agreement, we mark all present verbs and change their numbers.<sup>7</sup>
+
+ Reflexive pronoun We also create negative examples on reflexive anaphora, by flipping between $\{\text{themselves}\} \leftrightarrow \{\text{himself},\text{herself}\}$ .
+
+ These two are both related to the syntactic number of a target word. For binary classification, we regard both as target words, unlike the original work, which only deals with subject-verb agreement (Enguehard et al., 2017). We use a single common linear layer for both constructions.
+
+ In this work, we do not create negative examples for NPIs, mainly for technical reasons. Among the four losses, only the sentence-level margin loss can correctly handle negative examples for NPIs, essentially because the other losses are token-level: for NPIs, the left context does not carry the information needed to decide the grammaticality of the target token (a quantifier; no, most, etc.) (Section 2.1). Instead, we use the NPI test cases as a proxy to detect possible negative (or positive) side effects of specially targeting certain constructions. We will see that, in particular for our margin losses, such negative effects are very small.
+
+ # 4 Experiments on Additional Losses
+
+ We first see the overall performance of the baseline LSTM-LMs as well as the effects of the additional losses. Throughout the experiments, for each setting, we train five models from different random seeds and report the average score and standard deviation. The code is available at https://github.com/aistairc/lm_syntax_negative.
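The reported scores are simple aggregates over the five runs; for concreteness (the helper is ours, and the accuracies below are made up):

```python
def mean_and_std(scores):
    # Average accuracy and population standard deviation across runs
    # with different random seeds.
    mean = sum(scores) / len(scores)
    var = sum((s - mean) ** 2 for s in scores) / len(scores)
    return mean, var ** 0.5

# Illustrative accuracies from five seeds:
mean, std = mean_and_std([84.0, 85.0, 83.0, 86.0, 82.0])
print(mean)  # 84.0
```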
+
+ Naive LSTM-LM performs well The main accuracy comparison across target constructions for different settings is presented in Table 1. We first notice that our baseline LSTM-LM (Section 2.2) performs much better than Marvin and Linzen (2018)'s LM. A similar observation was recently made by Kuncoro et al. (2019). This suggests that the original work underestimates the true syntactic ability induced by LSTM-LMs. The table also shows the results of their LSTM-LM distilled from RNNGs (Section 1).
+
+ <table><tr><td rowspan="2"></td><td colspan="2">LSTM-LM</td><td colspan="2">Additional margin loss (δ = 10)</td><td colspan="2">Additional loss (α = 1000, β = 1)</td><td>Distilled</td></tr><tr><td>M&amp;L18</td><td>Ours</td><td>Sentence-level</td><td>Token-level</td><td>Binary-pred.</td><td>Unlike.</td><td>K19</td></tr><tr><td colspan="8">AGREEMENT:</td></tr><tr><td>Simple</td><td>94.0</td><td>98.1 (±1.3)</td><td>100.0 (±0.0)</td><td>100.0 (±0.0)</td><td>99.1 (±1.2)</td><td>99.7 (±0.6)</td><td>100.0 (±0.0)</td></tr><tr><td>In a sent. complement</td><td>99.0</td><td>96.1 (±2.0)</td><td>95.8 (±0.7)</td><td>99.3 (±0.4)</td><td>96.9 (±2.4)</td><td>92.7 (±3.1)</td><td>98.0 (±2.0)</td></tr><tr><td>Short VP coordination</td><td>90.0</td><td>93.6 (±3.0)</td><td>100.0 (±0.0)</td><td>99.4 (±1.1)</td><td>93.8 (±3.3)</td><td>95.6 (±3.0)</td><td>99.0 (±2.0)</td></tr><tr><td>Long VP coordination</td><td>61.0</td><td>82.2 (±3.4)</td><td>94.5 (±1.0)</td><td>99.0 (±0.8)</td><td>83.9 (±3.2)</td><td>90.0 (±2.4)</td><td>80.0 (±2.0)</td></tr><tr><td>Across a PP</td><td>57.0</td><td>92.6 (±1.4)</td><td>98.8 (±0.4)</td><td>98.6 (±0.3)</td><td>92.7 (±1.3)</td><td>95.2 (±1.2)</td><td>91.0 (±3.0)</td></tr><tr><td>Across a SRC</td><td>56.0</td><td>91.5 (±3.4)</td><td>99.6 (±0.4)</td><td>99.8 (±0.2)</td><td>91.9 (±2.5)</td><td>97.1 (±0.7)</td><td>90.0 (±2.0)</td></tr><tr><td>Across an ORC</td><td>50.0</td><td>84.5 (±3.1)</td><td>93.5 (±4.0)</td><td>93.7 (±2.0)</td><td>86.3 (±3.2)</td><td>88.7 (±4.1)</td><td>84.0 (±3.0)</td></tr><tr><td>Across an ORC (no that)</td><td>52.0</td><td>75.7 (±3.3)</td><td>86.7 (±4.2)</td><td>89.4 (±2.7)</td><td>78.6 (±4.0)</td><td>86.4 (±3.5)</td><td>77.0 (±2.0)</td></tr><tr><td>In an ORC</td><td>84.0</td><td>84.3 (±5.5)</td><td>99.8 (±0.2)</td><td>99.9 (±0.1)</td><td>89.3 (±6.2)</td><td>92.4 (±3.5)</td><td>92.0 (±4.0)</td></tr><tr><td>In an ORC (no that)</td><td>71.0</td><td>81.8 (±2.3)</td><td>97.0 (±1.0)</td><td>98.6 (±0.9)</td><td>83.0 (±5.1)</td><td>88.9 (±2.4)</td><td>92.0 (±2.0)</td></tr><tr><td colspan="8">REFLEXIVE:</td></tr><tr><td>Simple</td><td>83.0</td><td>94.1 (±1.9)</td><td>99.4 (±1.1)</td><td>99.9 (±0.2)</td><td>91.8 (±2.9)</td><td>98.0 (±1.1)</td><td>91.0 (±4.0)</td></tr><tr><td>In a sent. complement</td><td>86.0</td><td>80.8 (±1.7)</td><td>99.2 (±0.6)</td><td>97.9 (±0.8)</td><td>79.0 (±3.1)</td><td>92.6 (±2.9)</td><td>82.0 (±3.0)</td></tr><tr><td>Across an ORC</td><td>55.0</td><td>74.9 (±5.0)</td><td>72.8 (±2.4)</td><td>73.9 (±1.3)</td><td>72.3 (±3.0)</td><td>78.9 (±8.6)</td><td>67.0 (±3.0)</td></tr><tr><td colspan="8">NPI:</td></tr><tr><td>Simple</td><td>40.0</td><td>99.2 (±0.7)</td><td>98.7 (±1.6)</td><td>97.7 (±2.0)</td><td>98.0 (±3.1)</td><td>98.2 (±1.2)</td><td>94.0 (±4.0)</td></tr><tr><td>Across an ORC</td><td>41.0</td><td>63.5 (±15.0)</td><td>56.8 (±6.0)</td><td>64.1 (±13.8)</td><td>64.5 (±14.0)</td><td>48.5 (±6.4)</td><td>91.0 (±7.0)</td></tr><tr><td>Perplexity</td><td>78.6</td><td>49.5 (±0.2)</td><td>56.4 (±0.5)</td><td>50.4 (±0.6)</td><td>49.6 (±0.3)</td><td>50.3 (±0.2)</td><td>56.7 (±0.2)</td></tr></table>
+
+ Table 1: Comparison of syntactic dependency evaluation accuracies across different types of dependencies, along with perplexities. Numbers in parentheses are standard deviations. M&L18 is the result of the two-layer LSTM-LM in Marvin and Linzen (2018). K19 is the result of the two-layer LSTM-LM distilled from RNNGs (Kuncoro et al., 2019). VP: verb phrase; PP: prepositional phrase; SRC: subject relative clause; ORC: object relative clause. Margin values are set to 10, which works better according to Figure 1. Perplexity values are calculated on the test set of the Wikipedia dataset. The values of M&L18 and K19 are copied from Kuncoro et al. (2019).
+
+ ![](images/efbc923b4c10ad8a9103dafbc9c4c06d3ed187795659dd3f0d3f6dd66f53e9d2.jpg)
+ Figure 1: Margin value vs. macro average accuracy over the same type of constructions, or perplexity, with standard deviation for the sentence- and token-level margin losses. $\delta = 0$ is the baseline LSTM-LM without additional loss.
+
+ Higher margin value is effective For the two types of margin loss, which margin value should we use? Figure 1 reports average accuracies within the same types of constructions. For both the token and sentence levels, task performance increases with $\delta$, but a too-large value (15) causes a negative effect, in particular on reflexive anaphora. Increases (degradations) of perplexity are observed with both methods, but this effect is much smaller for the token-level loss. In the following experiments, we fix the margin value to 10 for both, which achieves the best syntactic performance.
+
+ Which additional loss works better? We see a clear tendency that our token-level margin loss achieves overall better performance. The unlikelihood loss does not work unless we choose a huge weight parameter ($\alpha = 1000$), and even then it does not outperform ours at a similar perplexity. The improvements from the binary-classification loss are smaller, indicating that its signals are weaker than those of the other methods with explicit negative examples. The sentence-level margin loss is conceptually advantageous in that it can deal with any type of sentence-level grammaticality, including NPIs. We see that it is overall competitive with the token-level margin loss but suffers from a larger increase of perplexity (4.9 points), which is observed even with smaller margin values (Figure 1). Understanding the cause of this degradation, as well as alleviating it, is an important future direction.
+
+ ![](images/5b7d3ae4cabf024eb2cd2528ae8b0be6c5fc654688045e65a237fc27e5b9cd7a.jpg)
+ Figure 2: Accuracies on "Across an ORC" (with and without the complementizer "that") by models trained on augmented data with additional sentences containing an object RC. Margin is set to 10. The x-axis denotes the total number of object RCs in the training data; 0.37M roughly equals the number of subject RCs in the original data. "animate only" is a subset of examples (see body). Error bars are standard deviations across 5 different runs.
+
+ # 5 Limitations of LSTM-LMs
+
+ In Table 1, the accuracies on dependencies across an object RC are relatively low. The central question in this experiment is whether this low performance is due to a limitation of current architectures, or to other factors such as frequency. We base our discussion on the contrast between object (7) and subject (8) RCs:
+
+ (7) The authors (that) the chef likes laugh.
+ (8) The authors that like the chef laugh.
+
+ Importantly, the accuracies for a subject RC are more stable, reaching $99.8\%$ with the token-level margin loss, although the content words used in the examples are common.<sup>9</sup>
+
+ It is known that object RCs are less frequent than subject RCs (Hale, 2001; Levy, 2008), and it could be that the use of negative examples does not fully offset this factor. Here, to probe the true limitation of the current LSTM architecture, we try to eliminate such confounding factors as much as possible in a controlled experiment.
+
+ Setup We first inspect the frequencies of object and subject RCs in the training data by parsing it with the state-of-the-art Berkeley neural parser (Kitaev and Klein, 2018). In total, subject RCs occur 373,186 times, while object RCs occur only 106,558 times. We create three additional training datasets by adding sentences involving object RCs to the original Wikipedia corpus (Section 2.2). To this end, we randomly sample 30 million sentences from Wikipedia (none overlapping with the original corpus), parse them with the same parser, and keep the sentences containing an object RC, amounting to 680,000 sentences. We create the augmented training sets by adding a subset, or all, of these sentences to the original training sentences. Among the test cases about object RCs, we only report accuracies on subject-verb agreement, for which a corresponding portion for subject RCs also exists. This allows us to compare the difficulty of the two types of RCs for the present models. We also evaluate on an "animate only" subset, which corresponds directly to the test cases for subject RCs, differing only in word order and inflection (like (7) and (8); see footnote 9). Of particular interest to us is the accuracy on these animate cases. If frequency is the main reason for the lower performance on object RCs, then with our augmentation the accuracy should reach the same level as that for subject RCs.
+
+ Results However, for both the full and animate cases, accuracies remain below those for subject RCs (Figure 2). Although we see improvements over the original score (93.7), the highest average accuracy achieved by the token-level margin loss on the "animate" subset is 97.1 ("with that"), not beyond $99\%$. This result indicates some architectural limitation of LSTM-LMs in handling object RCs robustly at a near-perfect level. Answering why the accuracy does not reach (almost) $100\%$, perhaps by inspecting other empirical properties or inductive biases (Khandelwal et al., 2018; Ravfogel et al., 2019), is future work.
+
+ ![](images/26f3db46893e9bf87feb8b73765f796496b802e52b2e461685a158313e186a4f.jpg)
+ Figure 3: An ablation study to see the performance of models trained with reduced explicit negative examples (token-level and construction-level). One color represents the same models across plots, except the last bar (construction-level), which is different for each plot.
+
+ # 6 Do models generalize explicit supervision, or just memorize it?
+
+ One distinguishing property of our margin loss, in particular the token-level loss, is that it is highly lexical, making a contrast explicitly between correct and incorrect words. This direct signal may lead models to acquire very specialized knowledge about each target word rather than knowledge that generalizes across similar words and contexts. In this section, to gain insight into the transferability of the syntactic knowledge induced by our margin losses, we provide an ablation study that removes certain negative examples during training.
+
+ Setup We perform two kinds of ablation. For token-level ablation (-TOKEN), we avoid creating negative examples for all verbs that appear as a target verb<sup>10</sup> in the test set. The other is construction-level (-PATTERN), removing all negative examples occurring in a particular syntactic pattern. We ablate a single construction at a time for -PATTERN, from four non-local subject-verb dependencies (across a prepositional phrase (PP), subject RC, object RC, and long verb phrase (VP)).<sup>11</sup> We hypothesize that models are less affected by token-level ablation, as knowledge transfer across words appearing in similar contexts is promoted by the language modeling objective. We expect that construction-level supervision would be necessary to induce robust syntactic knowledge, as different phrases, e.g., a PP and a VP, are perhaps processed differently.
+
+ <table><tr><td rowspan="2">Models</td><td colspan="3">Second verb (V1 and V2)</td></tr><tr><td>All verbs</td><td>like</td><td>other verbs</td></tr><tr><td>LSTM-LM</td><td>82.2 (±3.4)</td><td>13.0 (±12.2)</td><td>89.9 (±3.6)</td></tr><tr><td>Margin (token)</td><td>99.0 (±0.8)</td><td>94.0 (±6.5)</td><td>99.6 (±0.5)</td></tr><tr><td>-TOKEN</td><td>90.8 (±3.3)</td><td>51.0 (±29.9)</td><td>95.2 (±2.6)</td></tr><tr><td>-PATTERN</td><td>90.1 (±4.6)</td><td>50.0 (±30.6)</td><td>94.6 (±2.2)</td></tr></table>
+
+ Table 2: Accuracies on long VP coordinations by the models with/without ablations. "All verbs" scores are overall accuracies. "like" scores are accuracies on examples whose second (target) verb is like.
+
+ <table><tr><td rowspan="2">Models</td><td colspan="2">First verb (V1 and V2)</td></tr><tr><td>likes</td><td>other verbs</td></tr><tr><td>LSTM-LM</td><td>61.5 (±20.0)</td><td>93.5 (±3.4)</td></tr><tr><td>Margin (token)</td><td>97.0 (±4.5)</td><td>99.9 (±0.1)</td></tr><tr><td>-TOKEN</td><td>63.5 (±18.5)</td><td>99.2 (±1.1)</td></tr><tr><td>-PATTERN</td><td>67.0 (±21.2)</td><td>98.0 (±1.4)</td></tr></table>
+
+ Table 3: Further analysis of accuracies on the "other verbs" cases of Table 2. Among these cases, the second column ("likes") shows accuracies on examples where the first verb (not the target) is likes.
+
+ Results Figure 3 presents the main results. Across models, we restrict the evaluation to the four non-local dependency constructions that we selected as ablation candidates. For a model with -PATTERN, we evaluate only on examples of the construction ablated in training (see caption). To our surprise, -TOKEN and -PATTERN have similar effects, except on "Across an ORC", where the degradation by -PATTERN is larger. This may be related to the inherent difficulty of object RCs for LSTM-LMs that we verified in Section 5. For such particularly challenging constructions, models may need explicit supervision signals. We observe less score degradation when ablating prepositional phrases and subject RCs. This suggests that, for example, the syntactic knowledge strengthened for prepositional phrases with negative examples can be exploited to learn the syntactic patterns of subject RCs, even when direct learning signals on subject RCs are missing.
+
+ We see a score degradation of approximately 10.0 points on long VP coordination under both ablations. Does this mean that long VPs are particularly hard in terms of transferability? We find that the main reasons for this drop, relative to other cases, are rather technical, essentially due to the target verbs used in the test cases. Tables 2 and 3 show that the failed cases for the ablated models are often characterized by the presence of either like or likes. Excluding these cases ("other verbs" in Table 3), the accuracies reach 99.2 and 98.0 for -TOKEN and -PATTERN, respectively. These verbs do not appear as target verbs in the test cases of the other tested constructions. This result suggests that the transferability of syntactic knowledge to a particular word may depend on characteristics of that word. We conjecture that the reason for the weak transferability to like and likes is that they are polysemous; e.g., in the corpus, like is much more often used as a preposition, and its use as a present-tense verb is rare. This type of frequency effect may be one reason for reduced transferability. In other words, like can be seen as a challenging verb whose usage is hard to learn from the corpus alone, and our margin loss helps in such cases.
+
+ # 7 Discussion and Conclusion
+
+ Our results with explicit negative examples are overall positive. We have demonstrated that models exposed to these examples at training time in an appropriate way can handle the targeted constructions at a near-perfect level, with a few exceptions. We found that our new token-level margin loss is superior to the other approaches, and that the remaining challenging cases are dependencies across an object relative clause.
+
+ Object relative clauses are known to be harder for humans as well, and our results may indicate some similarities in the sentence processing behaviors of humans and RNNs, though other studies also find dissimilarities between them (Linzen and Leonard, 2018; Wilcox et al., 2019a). The difficulty of object relative clauses for RNN-LMs has also been observed in prior work (Marvin and Linzen, 2018; van Schijndel et al., 2019). A new insight provided by our study is that this difficulty holds even after alleviating frequency effects by augmenting the target structures along with direct supervision signals. This indicates that RNNs might inherently suffer from some memory limitation, like human subjects, for whom the difficulty of particular constructions, including center-embedded object relative clauses, is known to arise from memory limitations (Gibson, 1998; Demberg and Keller, 2008) rather than purely from the frequency of the phenomena. In terms of language acquisition, the supervision provided in our approach can be seen as direct negative evidence (Marcus, 1993). Since human learners are known to acquire syntax without such direct feedback, we do not claim that our proposed learning method itself is cognitively plausible.
+
+ One limitation of our approach is that the scope of negative examples has to be predetermined and fixed. Alleviating this restriction is an important future direction. Though challenging, we believe that our final analysis of transferability, which indicates that negative examples do not have to be complete and can be noisy, suggests the possibility of a mechanism that induces negative examples itself during training, perhaps relying on other linguistic cues or external knowledge.
259
+
260
+ # Acknowledgements
261
+
262
+ We would like to thank Naho Orita and the members of Computational Psycholinguistics Tokyo for their valuable suggestions and comments. This paper is based on results obtained from projects commissioned by the New Energy and Industrial Technology Development Organization (NEDO).
263
+
264
+ # References
265
+
266
+ Vera Demberg and Frank Keller. 2008. Data from eyetracking corpora as evidence for theories of syntactic processing complexity. Cognition, 109:193-210.
267
+ Chris Dyer, Adhiguna Kuncoro, Miguel Ballesteros, and Noah A. Smith. 2016. Recurrent neural network grammars. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 199-209, San Diego, California. Association for Computational Linguistics.
268
+ Émile Enguehard, Yoav Goldberg, and Tal Linzen. 2017. Exploring the syntactic abilities of RNNs with multi-task learning. In Proceedings of the 21st Conference on Computational Natural Language Learning (CoNLL 2017), pages 3-14, Vancouver, Canada. Association for Computational Linguistics.
269
+ Edward Gibson. 1998. Linguistic complexity: Locality of syntactic dependencies. Cognition, 68(1):1-76.
270
+
271
+ Kristina Gulordava, Piotr Bojanowski, Edouard Grave, Tal Linzen, and Marco Baroni. 2018. Colorless green recurrent networks dream hierarchically. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1195-1205. Association for Computational Linguistics.
272
+ John Hale. 2001. A probabilistic Earley parser as a psycholinguistic model. In Second Meeting of the North American Chapter of the Association for Computational Linguistics.
273
+ Ari Holtzman, Jan Buys, Maxwell Forbes, and Yejin Choi. 2019. The curious case of neural text degeneration.
274
+ Jiaji Huang, Yi Li, Wei Ping, and Liang Huang. 2018. Large margin neural language model. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1183-1191, Brussels, Belgium. Association for Computational Linguistics.
275
+ Hakan Inan, Khashayar Khosravi, and Richard Socher. 2016. Tying word vectors and word classifiers: A loss framework for language modeling. In International Conference on Learning Representations.
276
+ Urvashi Khandelwal, He He, Peng Qi, and Dan Jurafsky. 2018. Sharp nearby, fuzzy far away: How neural language models use context. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 284-294, Melbourne, Australia. Association for Computational Linguistics.
277
+ Nikita Kitaev and Dan Klein. 2018. Constituency parsing with a self-attentive encoder. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2676–2686, Melbourne, Australia. Association for Computational Linguistics.
278
+ Adhiguna Kuncoro, Chris Dyer, John Hale, Dani Yogatama, Stephen Clark, and Phil Blunsom. 2018. LSTMs can learn syntax-sensitive dependencies well, but modeling structure makes them better. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1426-1436, Melbourne, Australia. Association for Computational Linguistics.
279
+ Adhiguna Kuncoro, Chris Dyer, Laura Rimell, Stephen Clark, and Phil Blunsom. 2019. Scalable syntax-aware language models using knowledge distillation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3472-3484, Florence, Italy. Association for Computational Linguistics.
280
+ Roger Levy. 2008. Expectation-based syntactic comprehension. Cognition, 106(3):1126-1177.
281
+
282
+ Tal Linzen, Emmanuel Dupoux, and Yoav Goldberg. 2016. Assessing the ability of LSTMs to learn syntax-sensitive dependencies. Transactions of the Association for Computational Linguistics, 4:521-535.
283
+ Tal Linzen and Brian Leonard. 2018. Distinct patterns of syntactic agreement errors in recurrent networks and humans. In Proceedings of the 40th Annual Conference of the Cognitive Science Society, pages 692-697, Austin, TX. Cognitive Science Society.
284
+ Gary F. Marcus. 1993. Negative evidence in language acquisition. Cognition, 46(1):53-85.
285
+ Rebecca Marvin and Tal Linzen. 2018. Targeted syntactic evaluation of language models. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1192-1202, Brussels, Belgium. Association for Computational Linguistics.
286
+ Stephen Merity, Nitish Shirish Keskar, and Richard Socher. 2018. Regularizing and optimizing LSTM language models. In International Conference on Learning Representations.
287
+ Tomas Mikolov, Stefan Kombrink, Lukáš Burget, Jan Černocký, and Sanjeev Khudanpur. 2011. Extensions of recurrent neural network language model. In ICASSP, pages 5528-5531. IEEE.
288
+ Shauli Ravfogel, Yoav Goldberg, and Tal Linzen. 2019. Studying the inductive biases of RNNs with synthetic variations of natural languages. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3532-3542, Minneapolis, Minnesota. Association for Computational Linguistics.
289
+ Marten van Schijndel, Aaron Mueller, and Tal Linzen. 2019. Quantity doesn't buy quality syntax with neural language models. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing, Hong Kong, China. Association for Computational Linguistics.
290
+ Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Advances in neural information processing systems, pages 3104-3112.
291
+ Kristina Toutanova, Dan Klein, Christopher D Manning, and Yoram Singer. 2003. Feature-rich part-of-speech tagging with a cyclic dependency network. In Proceedings of the 2003 conference of the North American chapter of the association for computational linguistics on human language technology volume 1, pages 173-180. Association for Computational Linguistics.
292
+ Sean Welleck, Ilia Kulikov, Stephen Roller, Emily Dinan, Kyunghyun Cho, and Jason Weston. 2020. Neural text generation with unlikelihood training. In International Conference on Learning Representations.
293
+
294
+ Ethan Wilcox, Roger P. Levy, and Richard Futrell. 2019a. What syntactic structures block dependencies in rnn language models? In Proceedings of the 41st Annual Meeting of the Cognitive Science Society. Cognitive Science Society.
295
+ Ethan Wilcox, Peng Qian, Richard Futrell, Miguel Ballesteros, and Roger Levy. 2019b. Structural supervision improves learning of non-local grammatical dependencies. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3302-3312, Minneapolis, Minnesota. Association for Computational Linguistics.
ananalysisoftheutilityofexplicitnegativeexamplestoimprovethesyntacticabilitiesofneurallanguagemodels/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:64ab7d2592bb85b3866fdea225ff26ecc8ff5c161241cbbcef9314a66e81953f
3
+ size 360147
ananalysisoftheutilityofexplicitnegativeexamplestoimprovethesyntacticabilitiesofneurallanguagemodels/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:1969f9c3879d159dce648ddce0bc5abfb181a76ed1306559ddebdfcef521767f
3
+ size 329908
aneffectivenessmetricforordinalclassificationformalpropertiesandexperimentalresults/2197aa75-6617-4cf4-a4b3-db27edad2cee_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:94454bbbc41dbbb9c5217ce67b36476468315141dda9ff2c2e4090d8f2475f0b
3
+ size 97883
aneffectivenessmetricforordinalclassificationformalpropertiesandexperimentalresults/2197aa75-6617-4cf4-a4b3-db27edad2cee_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:2836069ebe828d0786b6824e388b035697cebde9e387a3224d406fe445f7b7ed
3
+ size 113530
aneffectivenessmetricforordinalclassificationformalpropertiesandexperimentalresults/2197aa75-6617-4cf4-a4b3-db27edad2cee_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:a94dab46165a3fe61445aee4051d4285c992ad579e77590a898b39a90ec5b1c7
3
+ size 524195
aneffectivenessmetricforordinalclassificationformalpropertiesandexperimentalresults/full.md ADDED
@@ -0,0 +1,484 @@
 
1
+ # An Effectiveness Metric for Ordinal Classification: Formal Properties and Experimental Results
2
+
3
+ Enrique Amigó
4
+
5
+ UNED
6
+
7
+ Madrid, Spain
8
+
9
+ enrique@lsi.uned.es
10
+
11
+ Julio Gonzalo
12
+
13
+ UNED
14
+
15
+ Madrid, Spain
16
+
17
+ julio@lsi.uned.es
18
+
19
+ Stefano Mizzaro
20
+
21
+ University of Udine
22
+
23
+ Udine, Italy
24
+
25
+ mizzaro@uniud.it
26
+
27
+ Jorge Carrillo-de-Albornoz
28
+
29
+ UNED
30
+
31
+ Madrid, Spain
32
+
33
+ jcalbornoz@lsi.uned.es
34
+
35
+ # Abstract
36
+
37
+ In Ordinal Classification tasks, items have to be assigned to classes that have a relative ordering, such as positive, neutral, negative in sentiment analysis. Remarkably, the most popular evaluation metrics for ordinal classification tasks either ignore relevant information (for instance, precision/recall on each of the classes ignores their relative ordering) or assume additional information (for instance, Mean Average Error assumes absolute distances between classes). In this paper we propose a new metric for Ordinal Classification, Closeness Evaluation Measure, which is rooted in Measurement Theory and Information Theory. Our theoretical analysis and experimental results over both synthetic data and data from NLP shared tasks indicate that the proposed metric captures quality aspects from different traditional tasks simultaneously. In addition, it generalizes some popular classification (nominal scale) and error minimization (interval scale) metrics, depending on the measurement scale in which it is instantiated.
38
+
39
+ # 1 Introduction
40
+
41
+ In Ordinal Classification (OC) tasks, items have to be assigned to classes that have a relative ordering, such as positive, neutral, negative in sentiment analysis. It is different from n-ary classification, because it considers ordinal relationships between classes. It is also different from ranking tasks, which only care about relative ordering between items, because it requires category matching; and it is also different from value prediction, because it does not assume fixed numeric intervals between categories.
42
+
43
+ Most research on Ordinal Classification, however, evaluates systems with metrics designed for those other problems. But classification measures ignore the ordering between classes, ranking metrics ignore category matching, and value prediction metrics are used by assuming (usually equal) numeric intervals between categories.
44
+
45
+
46
+
47
+ In this paper we propose a metric designed to evaluate Ordinal Classification systems which relies on concepts from Measurement Theory and from Information Theory. The key idea is defining a general notion of closeness between item value assignments (system output prediction vs gold standard class) which is instantiated on ordinal scales but can also be used with nominal or interval scales. Our approach establishes closeness between classes in terms of the distribution of items per class in the gold standard, instead of assuming predefined intervals between classes. We provide a formal (Section 4) and empirical (Section 5) comparison of our metric with previous approaches, and both analytical and empirical evidence indicate that our metric suits the problem better than the most popular current choices.
48
+
49
+ # 2 State of the Art
50
+
51
+ In this section we first summarize the most popular metrics used in OC evaluation campaigns, and then discuss previous work on OC evaluation.
52
+
53
+ # 2.1 OC Metrics in NLP shared tasks
54
+
55
+ OC does not match traditional classification, because the ordering between classes makes some errors more severe than others. For instance, misclassifying a positive opinion as negative is a more severe error than misclassifying it as neutral. Classification metrics, however, have been used for OC in several shared tasks (see Table 1). For instance, Evalita-16 (Barbieri et al., 2016) uses $F_{1}$ , NTCIR-7 (Kando, 2008) uses Accuracy, and Semeval-17 Task 4 (Rosenthal et al., 2017) uses Macro Average Recall.
56
+
57
+ Table 1: Metrics used for OC in evaluation campaigns
58
+
59
+ <table><tr><td></td><td>Acc</td><td>F1</td><td>AvgRec</td><td>Pearson</td><td>R/S</td><td>MAEM</td><td>MSE</td></tr><tr><td>NTCIR-7</td><td>✓</td><td></td><td></td><td></td><td></td><td></td><td></td></tr><tr><td>REPLAB-13</td><td></td><td></td><td></td><td></td><td>✓</td><td></td><td></td></tr><tr><td>SEM15-T11</td><td></td><td></td><td></td><td></td><td></td><td></td><td>✓</td></tr><tr><td>EVALITA-16</td><td></td><td>✓</td><td></td><td></td><td></td><td></td><td></td></tr><tr><td>STS-16</td><td></td><td></td><td></td><td>✓</td><td></td><td></td><td></td></tr><tr><td>SEM17-T4</td><td></td><td></td><td>✓</td><td></td><td></td><td>✓</td><td></td></tr></table>
60
+
61
+
62
+
63
+ OC does not match ranking metrics either: three items categorized by a system as very high/high/low, respectively, are perfectly ranked with respect to a ground-truth high/low/very_low, but yet no single item is correctly classified. However, ranking metrics have been applied in some campaigns, such as R/S for reputation polarity and priority in Replab-2013 (Amigo et al., 2013a).
64
+
65
+ OC has also been evaluated as a value prediction problem – for instance, SemEval 2015 Task 11 (Ghosh et al., 2015) – with metrics such as Mean Average Error (MAE) or Mean Squared Error (MSE), usually assuming that all classes are equidistant. But, in general, we cannot assume fixed intervals between classes if we are dealing with an OC task. For instance, in a paper-review scale strong_accept/accept/weak_accept/undecided/weak_reject/reject/strong_reject, the differences in appreciation between the ordinal steps do not necessarily map into predefined numerical intervals.
66
+
67
+ Finally, OC has also been considered as a linear correlation problem, as in the Semantic Textual Similarity track (Cer et al., 2017). An OC output, however, can have perfect linear correlation with the ground truth without matching any single value.
68
+
69
+ This diversity of approaches – which does not occur in other types of tasks – indicates a lack of consensus about what tasks are true Ordinal Classification problems, and what the general requirements of OC evaluation are.
70
+
71
+ # 2.2 Studies on Ordinal Classification
72
+
73
+ There are a number of previous formal studies on OC in the literature. First, the problem has been studied from the perspective of loss functions for ordinal regression Machine Learning algorithms.
74
+
75
+ In particular, in a comprehensive work, Rennie and Srebro (2005) reviewed the existing loss functions for traditional classification and extended them to OC. Although they did not try to formalize OC tasks, in further sections we will study the implications of using their loss function for OC evaluation purposes.
76
+
77
+ Other authors analyzed OC from a classification perspective. For instance, Waegeman et al. (2006) presented an extended version of the ROC curve for ordinal classification, and Vanbelle and Albert (2009) studied the properties of the Weighted Kappa coefficient in OC.
78
+
79
+ Other authors applied a value prediction perspective. Gaudette and Japkowicz (2009) analysed the effect of using different error minimization metrics for OC. Baccianella et al. (2009) focused on imbalanced datasets. They imported macro averaging (from classification) to error minimization metrics such as MAE, MSE, and Mean Zero-One Error.
80
+
81
+ Remarkably, a common aspect of all these contributions is that they all assume predefined intervals between categories. Rennie and Srebro assumed, for their loss function, uniform interval distributions across categories. In their probabilistic extension, they assume predefined intervals via parameters in the joint distribution model. Waegeman et al. explicitly assumed that "the misclassification costs are always proportional to the absolute difference between the real and the predicted label". The predefined intervals are defined by Vanbelle and Albert via weighting parameters in Kappa. The MAE and MSE metrics compared by Gaudette and Japkowicz also assume predefined (uniform) intervals. Finally, the solution proposed by Baccianella et al. is based on "a sum of the classification errors across classes".
82
+
83
+ In our opinion, assuming and adding intervals between categories to estimate misclassification errors violates the notion of ordinal scale in Measurement Theory (Stevens, 1946), which establishes that intervals are not meaningful relationships for ordinal scales. Our measure and our theoretical analysis are meant to address this problem.
84
+
85
+ # 3 Closeness Evaluation Measure (CEM)
86
+
87
+ # 3.1 Measure Definition
88
+
89
+ Evaluation metrics establish proximity between a system output and the gold standard (Amigo and Mizzaro, 2020). In ordinal classification we have to compare the classes assigned by the system with the true classes in the gold standard.
90
+
91
+ ![](images/7b569bfe4e08ddee378cfe1ed2f999032f6706fe848354fa61ec771db141ac2d.jpg)
92
+ Figure 1: In the left distribution, weak accept vs. weak reject would be a strong disagreement between reviewers (i.e., the classes are distant), because in practice these are almost the extreme cases of the scale (reviewers rarely go for accept or reject). In the right distribution the situation is the opposite: reviewers tend to take a clear stance, which makes weak accept and weak reject closer assessments than in the left case.
93
+
94
+ ![](images/e0d0fc07ce5940e301adc7482b351f002adc9b232f1bd886332e707c1b1c78ab.jpg)
95
+
96
+
97
+
98
+ A key idea in our metric is to establish a notion of informational closeness that depends on how items are distributed in the rank of classes. The idea is that two items $a$ and $b$ are informationally close if the probability of finding an item between the two is low. As an example, Figure 1 illustrates the intuition of how item distribution affects informational closeness in the context of paper reviewing. This is similar in spirit to, for instance, comparing the quality of two journals according to their quartiles in the rank of journals of comparable topics. With this notion of informational closeness, proximity between classes adapts to the way in which classes are used in a given dataset.
99
+
100
+ This idea of informational closeness can be implemented using Information Theory: the more unexpected it is to find an item between $a$ and $b$ , the more information such an event provides, and the informationally closer $a$ and $b$ are. Let $P(x \preceq_{\mathrm{ORD}}^b a)$ be the probability that, sampling an item $x$ from the space of items, $x$ is closer to $b$ than $a$ in the ordinal scale of classes. Then we can define the Closeness Information Quantity (CIQ) between $a$ and $b$ as the Information Quantity of the event $x \preceq_{\mathrm{ORD}}^b a$ , as follows:
101
+
102
+ $$
103
+ \mathrm{CIQ}^{\mathrm{ORD}}(a, b) \equiv -\log\left(P\left(x \preceq_{\mathrm{ORD}}^{b} a\right)\right). \tag{1}
104
+ $$
105
+
106
+ Let us now apply this concept for the evaluation of system outputs. Let $\mathcal{D}$ be the item collection, $\mathcal{C} = \{c_1,\ldots ,c_n\}$ a set of sorted classes such that $c_{1} < c_{2} < \dots < c_{n}$ , and $g,s:\mathcal{D}\longrightarrow \mathcal{C}$ the gold standard and a system output. Given the classes $g(d),s(d)$ assigned to an
107
+
108
+ item $d\in \mathcal{D}$ by the gold standard and the system output, $\mathrm{CIQ}^{\mathrm{ORD}}(s(d),g(d))$ measures the closeness between the assigned class and the gold standard class:
109
+
110
+ $$
111
+ \mathrm{CIQ}^{\mathrm{ORD}}(s(d), g(d)) = -\log\left(P\left(x \preceq_{\mathrm{ORD}}^{g(d)} s(d)\right)\right).
112
+ $$
113
+
114
+ Our proposed evaluation measure consists in adding CIQ values for all items $d \in \mathcal{D}$ , and normalizing the sum by its maximal value, which is the one obtained by a system output that matches the gold standard perfectly. This is what we call Closeness Evaluation Measure, $\mathrm{CEM}^{\mathrm{ORD}}$ :
115
+
116
+ $$
117
+ \mathrm{CEM}^{\mathrm{ORD}}(s, g) = \frac{\sum_{d \in \mathcal{D}} \mathrm{CIQ}^{\mathrm{ORD}}(s(d), g(d))}{\sum_{d \in \mathcal{D}} \mathrm{CIQ}^{\mathrm{ORD}}(g(d), g(d))}.
118
+ $$
119
+
120
+ In an ordinal scale, the condition $x \preceq_{\mathrm{ORD}}^b a$ ( $x$ is closer to $b$ than $a$ ) implies that $x$ is between $a$ and $b$ ( $a \geq x \geq b$ or $a \leq x \leq b$ ). Therefore, if $n_i$ is the number of items assigned to class $c_i$ in the gold standard, and $N$ is the total number of items, the formula above turns into:
121
+
122
+ $$
123
+ \mathrm{CEM}^{\mathrm{ORD}}(s, g) = \frac{\sum_{d \in \mathcal{D}} \operatorname{prox}(s(d), g(d))}{\sum_{d \in \mathcal{D}} \operatorname{prox}(g(d), g(d))}
124
+ $$
125
+
126
+ where $\mathrm{prox}(c_i, c_j) = -\log \left(\frac{\frac{n_i}{2} + \sum_{k=i+1}^{j} n_k}{N}\right)$ .
127
+
128
+ Note that the term $\mathrm{prox}(c_i, c_j)$ , which is the core of the metric, reflects the informational closeness that the metric assigns to a pair of classes $c_i, c_j$ . Note also that half of the ties (elements in class $c_i$ ) are included in the computation. Every time the system assigns the class $c_i$ and the ground truth is $c_j$ , the contribution of that assignment to the final value of $\mathrm{CEM}^{\mathrm{ORD}}$ is proportional to the informational closeness between the two classes.
129
+
130
+
131
+
132
+ As an example, let us consider the two ground truth distributions in Figure 1. The proximity between the classes weak_accept and weak_reject for the left distribution is:
133
+
134
+ $$
135
+ -\log\left(\frac{90/2 + 193 + 105}{402}\right) = 0.23
136
+ $$
137
+
138
+ and for the right distribution is:
139
+
140
+ $$
141
+ -\log\left(\frac{10/2 + 3 + 10}{376}\right) = 4.38.
142
+ $$
143
+
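Both values can be reproduced with a short script (a sketch assuming base-2 logarithms and the class counts read off Figure 1; `prox` and its arguments are illustrative names):

```python
import math

def prox(n_i, between_and_target, N):
    # prox(c_i, c_j) = -log2((n_i/2 + items from c_{i+1} up to and including c_j) / N)
    return -math.log2((n_i / 2 + sum(between_and_target)) / N)

# Left distribution: weak_accept=90, undecided=193, weak_reject=105, N=402
assert round(prox(90, [193, 105], 402), 2) == 0.23
# Right distribution: weak_accept=10, undecided=3, weak_reject=10, N=376
assert round(prox(10, [3, 10], 376), 2) == 4.38
```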
144
+ A mistake between these two classes is more heavily penalized by the metric in the left distribution. Note also that correct predictions have different weights - $\mathrm{prox}(c_i, c_i)$ - which are higher for infrequent classes. For instance, a correct guess for a reject ground truth in the left distribution has a weight of $\mathrm{prox}(\mathrm{reject}, \mathrm{reject}) = 6.84$ , because it is a rare class (7/402 items); but a correct guess for an undecided item has only a weight of 2.06 because the class is very frequent in the ground truth (193/402 items). This is an effect of using Information Theory to characterize closeness: an infrequent class has more information than a frequent class.
145
+
146
+ Overall, $\mathrm{CEM}^{\mathrm{ORD}}$ rewards exact matches, considers ordinal relationships, and does not assume predefined intervals between classes (instead, intervals depend on the distribution of items into classes in the gold standard). Appendix A shows detailed examples of how to compute $\mathrm{CEM}^{\mathrm{ORD}}$ from the confusion matrix for a system output.
147
+
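As a concrete sketch, the whole metric can be computed from paired system/gold labels as follows (hypothetical code, not taken from the paper; classes are assumed to be integer indices ordered as $0 < 1 < \dots$ and logarithms are base 2):

```python
import math
from collections import Counter

def cem_ord(system, gold):
    """Closeness Evaluation Measure at ordinal scale for two equal-length
    lists of class indices (0 = lowest class)."""
    counts = Counter(gold)
    N = len(gold)

    def prox(i, j):
        # Half of class i, plus every item in the classes between i and j,
        # including class j itself (symmetric in the direction of the error).
        lo, hi = min(i, j), max(i, j)
        mass = counts[i] / 2 + sum(counts[k] for k in range(lo, hi + 1) if k != i)
        return -math.log2(mass / N)

    return (sum(prox(s, g) for s, g in zip(system, gold))
            / sum(prox(g, g) for g in gold))

gold = [0, 0, 1, 1, 2]
assert cem_ord(gold, gold) == 1.0                            # perfect output
assert cem_ord([0, 0, 1, 1, 1], gold) < 1.0                  # one near-miss
assert cem_ord([2, 2, 1, 1, 0], gold) < cem_ord([0, 0, 1, 1, 1], gold)
```

The final assertion illustrates the ordinal behavior: an output that inverts the extremes scores lower than one that only misses by a single adjacent class.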
148
+ # 3.2 Formalization of CEM on Different Scales
149
+
150
+ We have specified our measure $\mathrm{CEM}^{\mathrm{ORD}}$ at ordinal scale to address OC tasks, but it could be used at any scale. In this section we briefly investigate this generalization. In Measurement Theory, at least in Stevens's model (1946), all measures map items to real numbers, and measurement equivalence at different scales is determined by permissible transformation functions. Permissible transformations are bijective functions in nominal scale ( $\mathcal{F}_{\mathrm{NOM}}$ ), strictly increasing functions in ordinal scale ( $\mathcal{F}_{\mathrm{ORD}}$ ), and linear functions for the interval scale ( $\mathcal{F}_{\mathrm{INT}}$ ).
151
+
152
+ Starting from the notion of $|a - b|$ as the standard algebraic distance between numbers, we define closeness at a certain measurement scale $\mathrm{T}$ if it fits at least one permissible transformation in $\mathcal{F}_{\mathrm{T}}$ .
153
+
154
+
155
+
156
+ Definition 1 (Closeness for a Scale Type) Given three numbers $x$ , $a$ , and $b$ , we say that $x$ is closer to $b$ than $a$ , $(x \preceq_{\mathrm{T}}^{b} a)$ , for a certain scale type $\mathrm{T}$ if and only if:
157
+
158
+ $$
159
+ \exists f \in \mathcal{F}_{\mathrm{T}} \left(|f(x) - f(b)| \leq |f(a) - f(b)|\right).
160
+ $$
161
+
162
+ The conditions for $x \preceq_{\mathrm{T}}^{b} a$ at ordinal scale ( $\mathrm{T} = \mathrm{ORD}$ ) are $(b \geq x \geq a) \lor (a \geq x \geq b)$ (see proof in the supplementary material). That is, at ordinal scale, $x$ must be located between $a$ and $b$ to be closer to $b$ than $a$ . The condition for nominal scale ( $\mathrm{T} = \mathrm{NOM}$ ) is $(b = x \lor b \neq a)$ . At interval scale ( $\mathrm{T} = \mathrm{INT}$ ), the condition matches the standard algebraic closeness between numbers: $(|b - x| \leq |b - a|)$ .
163
+
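These three conditions can be written directly as predicates (a sketch; `closer(x, a, b, scale)` is a hypothetical helper returning whether $x \preceq_{\mathrm{T}}^{b} a$ holds under the stated conditions):

```python
def closer(x, a, b, scale):
    """Is x closer to b than a is, at the given measurement scale?"""
    if scale == "nominal":   # (b = x) or (b != a)
        return b == x or b != a
    if scale == "ordinal":   # x must lie between a and b
        return (b >= x >= a) or (a >= x >= b)
    if scale == "interval":  # standard algebraic closeness
        return abs(b - x) <= abs(b - a)
    raise ValueError(f"unknown scale: {scale}")

# At ordinal scale, x must lie between a and b:
assert closer(2, 1, 3, "ordinal") and not closer(4, 1, 3, "ordinal")
# At interval scale, only the algebraic distance matters:
assert closer(4, 1, 3, "interval")
# At nominal scale, the only failing case is b == a with b != x:
assert not closer(2, 3, 3, "nominal")
```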
164
+ We can generalize $\mathrm{CIQ}^{\mathrm{ORD}}$ and $\mathrm{CEM}^{\mathrm{ORD}}$ to consider closeness at any scale $\mathbf{T}$ , simply replacing $x \preceq_{\mathrm{ORD}}^{b} a$ with $x \preceq_{\mathrm{T}}^{b} a$ . We denote these generalizations as $\mathrm{CIQ}^{\mathrm{T}}$ , $\mathrm{CEM}^{\mathrm{T}}$ . The $\mathrm{CEM}^{\mathrm{T}}$ metric generalizes some of the most popular metrics in classification.
165
+
166
+ Proposition 1 Assuming that categories in $g$ follow a uniform distribution, then Accuracy is proportional to CEM at nominal scale. Formally, whenever $P(g(d) = c)$ is equal for all categories $c \in \mathcal{C}$ , then:
167
+
168
+ $$
169
+ \operatorname{Acc}(s, g) \propto \mathrm{CEM}^{\mathrm{NOM}}(s, g).
170
+ $$
171
+
172
+ Macro Average Accuracy can be also defined by aggregating $\mathrm{CIQ}^{\mathrm{NOM}}(s(d), g(d))$ in the corresponding manner. Also, under the same statistical assumptions, Precision and Recall for a category $c$ can be defined in terms of aggregated CIQs of items in the system or gold category respectively.
173
+
174
+ Proposition 2 Whenever $P(g(d) = c)$ is equal for all categories $c \in \mathcal{C}$ , then:
175
+
176
+ $$
177
+ \operatorname{Pre}_{g,c}(s)\propto \sum_{d\in \mathcal{D}:s(d) = c}\operatorname{CIQ}^{\text{NOM}}(s(d),g(d))
178
+ $$
179
+
180
+ $$
181
+ \operatorname{Rec}_{g,c}(s)\propto \sum_{d\in \mathcal{D}:g(d) = c}\operatorname{CIQ}^{\text{NOM}}(s(d),g(d)).
182
+ $$
183
+
184
+ Exact match between Precision, Recall and the CIQ aggregation is achieved when values are normalized with respect to the maximum.
185
+
186
+ On the other hand, if we do not assume a uniform distribution of items into classes in the gold standard, then we obtain a classification metric $\mathbf{CEM}^{\mathrm{NOM}}(s,g)$ which gives more (logarithmic) weight to errors in infrequent classes.
187
+
188
+ Finally, at interval scale, $\mathbf{CEM}^{\mathrm{INT}}$ would be equivalent to a logarithmic version of MAE whenever items are uniformly distributed across classes.
189
+
190
+ We leave a more detailed formal and empirical analysis of CEM at other scales for future work, as it is not the primary focus of this paper.
191
+
192
+ # 4 Theoretical Evidence
193
+
194
+ Following a methodology previously applied for Classification (Sebastiani, 2015; Sokolova, 2006), Clustering (Dom, 2001; Meila, 2003; Amigo et al., 2009), and document ranking tasks (Moffat, 2013; Amigo et al., 2013b), here we define a formal framework for OC via desirable properties to be satisfied, which are illustrated in Figure 2 and introduced below.
195
+
196
+ # 4.1 Metric Properties
197
+
198
+ The first property states that an effectiveness metric $\operatorname{Eff}(s, g)$ should not assume predefined intervals between classes, i.e., it should be invariant under permissible transformation functions at ordinal scale.
199
+
200
+ Property 1 (Ordinal Invariance) An effectiveness metric Eff has ordinal invariance if it is invariant under strictly increasing functions $f_{\mathsf{ORD}} \in \mathcal{F}_{\mathsf{ORD}}$ applied to both the system output and the gold standard:
201
+
202
+ $$
203
+ \operatorname{Eff}(s, g) = \operatorname{Eff}\left(f_{\mathrm{ORD}}(s), f_{\mathrm{ORD}}(g)\right).
204
+ $$
205
+
206
+ For instance, $\operatorname{Eff}((1,2,2),(1,2,3))$ should be equivalent to $\operatorname{Eff}((11,24,24),(11,24,39))$ , by considering the (strictly increasing) permissible transformation function $f_{\mathrm{ORD}}(x) = 10x + x^2$ .
207
+
208
+ Although we cannot compare intervals at ordinal scale, we know, e.g., that "neutral" is closer to "positive" than "negative". Therefore we need another property to verify monotonicity with respect to category closeness.
209
+
210
+ Property 2 (Ordinal Monotonicity) Moving system predictions closer to the true category should increase the metric:
211
+
212
+ $$
213
+ \text{If } \exists d. \left(s(d) \neq s'(d)\right) \wedge
214
+ $$
215
+
216
+ $$
217
+ \left(\forall d. \left(\left(s(d) > s'(d) \geq g(d)\right) \lor \left(s(d) = s'(d)\right)\right)\right)
218
+ $$
219
+
220
+ then $\operatorname{Eff}(s', g) > \operatorname{Eff}(s, g)$ .
221
+
222
+ The formalization of ordinal monotonicity states that if all predictions by system $s'$ are better than or equal to the predictions by $s$ , and at least one is strictly better, then the metric score of $s'$ must be higher.
223
+
224
+ ![](images/8d953e996c79d909eda361d9866e386a751e87fd005950571ddded5747671827.jpg)
225
+ Figure 2: Illustration of desirable formal properties for Ordinal Classification. Each bin is a system output, where columns represent ordered classes assigned by the system, and colors represent the items' true classes, ordered from black to white. "=" means that both outputs should have the same quality, and ">" that the left output should receive a higher metric value than the right output.
226
+
227
+

Finally, in order to manage the effect of imbalanced data sets, another desirable property is that a classification error on an item of a frequent class should have less effect than a classification error on an item of a small class (Fatourechi et al., 2008). In order to formalize this property, we use $g_{d \to c}$ to denote the result of moving the item $d$ to the class $c$ in the gold standard.

Property 3 (Imbalance) Distancing items from a small class has more effect than distancing items from a large class. Let $(c_{1}, c_{2}, c_{3})$ be three contiguous classes such that $c_{1}$ is larger than $c_{3}$, and $d_{1}, d_{3}$ two items such that $g(d_{1}) = c_{1}$ and $g(d_{3}) = c_{3}$. Then

$$
\operatorname{Eff}\left(g_{d_{1} \rightarrow c_{2}}, g\right) > \operatorname{Eff}\left(g_{d_{3} \rightarrow c_{2}}, g\right).
$$

# 4.2 Metric Analysis

Table 2 displays the properties satisfied by metrics, grouped by families. Classification metrics are ordinal invariant, but they do not satisfy ordinal monotonicity. Attempts to mitigate this limitation include (i) Accuracy at n (Gaudette and Japkowicz, 2009), which relaxes Accuracy with an ordinal error margin, and (ii) ignoring the neutral class (Rosenthal et al., 2014). However, both approaches are insensitive to some types of error. Some classification metrics, such as Macro Average Accuracy (MAAC), Cohen's Kappa, or the F-measure averaged across classes, satisfy the imbalance constraint.

Table 2: Constraint-based Metric Analysis

| Metric family | Metrics | Ord. Inv. | Ord. Mon. | Imb. |
|---|---|---|---|---|
| Classification metrics | Acc | ✓ | - | - |
| | Acc with n | ✓ | - | - |
| | Macro Avg Acc, Cohen's κ | ✓ | - | ✓ |
| | F-measure avg. across classes | ✓ | - | ✓ |
| Value prediction | MAE, MSE | - | ✓ | - |
| | Macro Avg. MAE/MSE | - | ✓ | ✓ |
| | Weighted κ | - | ✓ | ✓ |
| | Rennie & Srebro loss function | - | ✓ | - |
| | Cosine similarity | - | ✓ | - |
| Correlation coefficients | Linear correlation | - | - | - |
| | Ordinal: Kendall (Tau-b), Spearman | ✓ | - | ✓ |
| | Kendall (Tau-a) | ✓ | - | - |
| | Reliability and Sensitivity | ✓ | - | ✓ |
| Clustering | MI, Purity and Inv. Purity | ✓ | - | ✓ |
| Path based | Ordinal Classification Index | ✓ | - | - |
| CEM | CEM$^{NOM}$ | ✓ | - | ✓ |
| | CEM$^{INT}$ | - | ✓ | ✓ |
| | CEM$^{ORD}$ | ✓ | ✓ | ✓ |

The most popular Value Prediction metrics are Mean Absolute Error (MAE) and Mean Square Error (MSE). They both assume a predefined fixed numerical value for each category. Therefore, ordinal invariance is violated. The imbalance property is satisfied by the Macro Average versions $\mathrm{MAE}^m$ and $\mathrm{MSE}^m$ (Baccianella et al., 2009). The weighted Kappa can be monotonic whenever the accumulated weights are consistent with the ordinal structure (Vanbelle and Albert, 2009). In addition, it can satisfy imbalance depending on the weighting scheme. However, ordinal invariance is not satisfied. The loss function for ordinal classification proposed by Rennie and Srebro (2005) is, in the same way as MAE, grounded on category differences, and therefore does not satisfy ordinal invariance. Finally, the cosine similarity has also been employed to evaluate OC (Ghosh et al., 2015), where documents are dimensions and categories are vector values. Just like any other geometric measure, it is not ordinal invariant and it does not satisfy imbalance.

In general, correlation coefficients do not satisfy monotonicity, given that exact matching of the gold standard values is not required to achieve the maximum score. Unlike linear correlation, ordinal correlation coefficients (i.e., Kendall or Spearman) are ordinal invariant. Kendall can be computed in different ways depending on how ties are managed. In Tau-a, only discordant pairs are considered ($g(d_{1}) > g(d_{2})$ and $s(d_{1}) < s(d_{2})$), and imbalance is not satisfied. The most popular Kendall coefficient approach (Tau-b) and Spearman both satisfy imbalance. The Pearson coefficient does not, due to the interval effect. Reliability and Sensitivity metrics, which extend the clustering metric BCubed, are essentially ordinal correlation metrics, being invariant but failing monotonicity, with the advantage of satisfying imbalance due to their precision/recall notions.

By definition, clustering metrics are ordinal invariant, because they are not affected by the labels of the category descriptors. In addition, most of them, such as Mutual Information (MI) or Purity and Inverse Purity, satisfy imbalance. However, they are not ordinal monotonic, given that they do not consider any ordinal relationship between categories.

Finally, we must include the approach by Cardoso and Sousa (2011), a path based metric called Ordinal Classification Index which is designed specifically for OC problems. This metric integrates aspects from the previous three metric families, including two parameters $\beta_{1}$ and $\beta_{2}$ to combine different components. Therefore, it can capture the different quality aspects involved in the OC process. However, the metric inherits the lack of invariance of MAE and MSE when computing the ordinal distance between categories, and monotonicity can be violated depending on the effect of discordant item pairs.

The table ends with our proposed metric CEM, which is either a classification, error minimization, or OC metric depending on whether it is instantiated at the nominal (CEM$^{\text{NOM}}$), interval (CEM$^{\text{INT}}$), or ordinal (CEM$^{\text{ORD}}$) measurement scale. CEM$^{\text{ORD}}$ is the only metric that satisfies the three properties, provided that there are no empty classes in the gold standard (see Appendix A.2).

# 5 Empirical Study

Meta-evaluating metrics is not straightforward. A common criterion is robustness, defined as the consistency (correlation) of system rankings across data sets. However, although robustness is relevant – and we do report it at the end of this section – it does not reflect to what extent a metric captures the quality aspects of systems.

As many authors have pointed out, an OC metric should capture diverse aspects of systems: class matching, ordering, and imbalance. In our experiments, in addition to robustness, we select three complementary metrics, each focused on one of these partial aspects, and we evaluate to what extent existing OC metrics are able to capture all these aspects simultaneously.

The selected metrics are: (i) Accuracy, as a partial metric which captures class matching; (ii) Kendall's correlation coefficient Tau-a (without counting ties), in order to capture class ordering$^2$; and (iii) Mutual Information (MI), a clustering metric which reflects how much knowing the system output reduces uncertainty about the gold standard values. This metric accentuates the effect of small classes (imbalance property).

# 5.1 Meta-evaluation Metric

In order to quantify the ability of metrics to capture the aspects reflected by these three metrics, we use the Unanimous Improvement Ratio (UIR) (Amigó et al., 2011). While robustness focuses on consistency across data sets, UIR focuses on consistency across metrics. It essentially counts in how many test cases an improvement is observed for all metrics simultaneously. Let $\mathcal{M}$ be a set of metrics, $\mathcal{T}$ a set of test cases, and $s_t$ the output of system $s$ for the test case $t$. The Unanimous Improvement Ratio $\mathrm{UIR}_{\mathcal{M}}(s,s')$ between two systems is defined as:

$$
\mathrm{UIR}_{\mathcal{M}}(s, s') = \frac{\left|\left\{ t \in \mathcal{T} : s_{t} \geq_{\mathcal{M}} s_{t}' \right\}\right| - \left|\left\{ t \in \mathcal{T} : s_{t}' \geq_{\mathcal{M}} s_{t} \right\}\right|}{|\mathcal{T}|},
$$

where $s_t \geq_{\mathcal{M}} s_t'$ represents that system $s$ improves on system $s'$, on the test case $t$, unanimously for every metric:

$$
s_{t} \geq_{\mathcal{M}} s_{t}' \equiv \forall m \in \mathcal{M}.\ \big( m(s_{t}) \geq m(s_{t}') \big).
$$
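
As a sketch (not the authors' implementation; the per-test-case score layout is an assumption for illustration), UIR can be computed as follows:

```python
def unanimous_geq(scores_a, scores_b):
    """True iff output a is at least as good as output b for every metric."""
    return all(a >= b for a, b in zip(scores_a, scores_b))

def uir(per_case_s, per_case_sp):
    """Unanimous Improvement Ratio between systems s and s'.

    per_case_s[t] is the tuple of metric scores of s on test case t."""
    wins = sum(unanimous_geq(a, b) for a, b in zip(per_case_s, per_case_sp))
    losses = sum(unanimous_geq(b, a) for a, b in zip(per_case_s, per_case_sp))
    return (wins - losses) / len(per_case_s)

# Three test cases, two metrics per output (hypothetical scores):
s  = [(0.9, 0.8), (0.5, 0.5), (0.2, 0.7)]
sp = [(0.8, 0.7), (0.4, 0.6), (0.3, 0.6)]
# s unanimously wins only the first case; the other two are trade-offs:
assert round(uir(s, sp), 3) == 0.333
```

Ties on every metric count as a win for both systems, so they cancel out in the numerator.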

Therefore, UIR reflects to what extent a system outperforms another system for several metrics simultaneously. We then define our meta-evaluation measure Coverage for a single metric $m$ as the Spearman correlation (over system output pairs $s, s'$ in the set of system outputs) between differences in $m$ and unanimous improvements over the reference metric set, with $\mathcal{M}$ the reference metric set:

$$
\operatorname{Cov}_{\mathcal{M}}(m) = \operatorname{Spea}\big( m(s) - m(s'),\ \operatorname{UIR}_{\mathcal{M}}(s, s') \big).
$$

The higher the coverage of a metric $m$ with respect to a reference metric set $\mathcal{M}$, the better an improvement according to $m$ reflects all the quality aspects represented by $\mathcal{M}$.

# 5.2 Compared Metrics

We evaluate the coverage of $\mathrm{CEM}^{\mathrm{ORD}}$ and other metrics with respect to the reference metric set Accuracy, Kendall, and MI. In the empirical study we have considered most metrics used in practice to evaluate OC problems; we have excluded a few metrics which are included in the theoretical study, either because they have not been used previously to evaluate OC problems (such as clustering metrics) or because they have internal parameters, and therefore a range of variability that requires a dedicated study (such as weighted Kappa and the Ordinal Classification Index). In order to check the need for the logarithmic scaling in $\mathrm{CEM}^{\mathrm{ORD}}$ (which comes from the application of Information Quantity), we also include an alternative metric, $\mathrm{CEM}_{\text{flat}}^{\mathrm{ORD}}$, which is similar to CEM but without the logarithmic scaling.

# 5.3 Experiments on Synthetic Data

In order to play with a representative and controlled amount of classes and distributions, we first experiment with synthetic data. Let us consider a synthetic dataset with 100 test cases and 200 documents per test case, classified into 11 categories. In order to study different degrees of imbalance, we assign ground truth labels to documents according to a normal distribution with mean 4 and a standard deviation between 1 and 3. The imbalance grade (deviation) varies uniformly across topics. The majority class is therefore the fourth class. Finally, we discretize the resulting values into their closest category in $\{1,2,\ldots,11\}$.

We generate synthetic system outputs according to the following behaviour: each system makes mistakes in a certain ratio $r$ of the value assignments, where $r \in \{0.1, 0.2, \dots, 0.9, 1\}$. We then distinguish between five kinds of mistakes, thus obtaining $10 \times 5$ possible system configurations. The five alternative mistakes are:

1. Majority class assignment: Assign the most frequent category: $s_{maj}(d) = 4$.
2. Random assignment: Assign classes randomly: $s_{rand}(d) = v$ with $v \sim U(1, 11)$.

Table 3: Metric Coverage: Spearman correlation between single metrics and the UIR combination of Mutual Information, Accuracy, and Kendall across system pairs in both the synthetic and real data sets. The first six result columns correspond to synthetic data and the last six to real data.

| | | all systems | minus $s_{rand}$ | minus $s_{prox}$ | minus $s_{maj}$ | minus $s_{tDisp}$ | minus $s_{oDisp}$ | RepLab 2013 | SemEval-2014 T9-A | T9-B | SemEval-2015 T10-A | T10-B | T10-C |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Reference metrics in UIR | Accuracy | 0.81 | 0.77 | 0.78 | 0.78 | 0.94 | 0.77 | 0.75 | 0.90 | 0.98 | 0.85 | 0.94 | 0.80 |
| | Kendall | 0.84 | 0.81 | 0.82 | 0.82 | 0.93 | 0.82 | 0.88 | 0.94 | 0.98 | 0.84 | 0.97 | 0.88 |
| | MI | 0.84 | 0.82 | 0.84 | 0.82 | 0.93 | 0.82 | 0.91 | 0.97 | 0.99 | 0.93 | 0.98 | 0.93 |
| Classification metrics | F-measure | 0.83 | 0.80 | 0.82 | 0.81 | 0.93 | 0.81 | 0.66 | 0.90 | 0.98 | 0.91 | 0.98 | 0.92 |
| | MAAC | 0.83 | 0.81 | 0.82 | 0.79 | 0.91 | 0.81 | 0.84 | 0.86 | 0.97 | 0.84 | 0.95 | 0.82 |
| | Kappa | 0.81 | 0.78 | 0.79 | 0.77 | 0.94 | 0.77 | 0.44 | 0.95 | 0.99 | 0.93 | 0.98 | 0.97 |
| | Acc with 1 | 0.79 | 0.75 | 0.77 | 0.80 | 0.85 | 0.79 | 0.23 | 0.82 | 0.60 | 0.31 | 0.35 | -0.19 |
| Error minimization | MAE | 0.84 | 0.82 | 0.83 | 0.87 | 0.86 | 0.84 | 0.81 | 0.96 | 0.95 | 0.95 | 0.87 | 0.56 |
| | MAE$^m$ | 0.74 | 0.73 | 0.74 | 0.80 | 0.76 | 0.73 | 0.73 | 0.95 | 0.88 | 0.91 | 0.74 | 0.30 |
| | MSE | 0.89 | 0.87 | 0.87 | 0.88 | 0.93 | 0.88 | 0.28 | 0.87 | 0.98 | 0.63 | 0.97 | 0.93 |
| | MSE$^m$ | 0.83 | 0.80 | 0.80 | 0.82 | 0.90 | 0.83 | 0.10 | 0.85 | 0.94 | 0.48 | 0.91 | 0.52 |
| Correlation coefficients | Pearson | 0.77 | 0.79 | 0.74 | 0.73 | 0.83 | 0.79 | 0.91 | 0.97 | 0.98 | 0.96 | 0.97 | 0.79 |
| | Spearman | 0.72 | 0.67 | 0.69 | 0.77 | 0.76 | 0.70 | 0.07 | 0.96 | 0.98 | 0.97 | 0.98 | 0.80 |
| Measurement theory | CEM$^{ORD}$ | 0.91 | 0.89 | 0.90 | 0.90 | 0.95 | 0.89 | 0.94 | 0.96 | 0.99 | 0.98 | 0.99 | 0.96 |
| | CEM$^{ORD}_{flat}$ | 0.87 | 0.84 | 0.86 | 0.88 | 0.89 | 0.87 | 0.82 | 0.96 | 0.96 | 0.94 | 0.92 | 0.65 |

3. Tag displacement: Assign the next category:

$$
s_{tDisp}(d) = g(d) + 1.
$$

4. Ordinal displacement: Let $\operatorname{ord}(d)$ be the ordinal position of $d$ in a sorting of documents in concordance with category values ($g(d) > g(d') \Rightarrow \operatorname{ord}(d) > \operatorname{ord}(d')$). The system displaces the document $\frac{n}{10}$ positions:

$$
s_{oDisp}(d) = g\left(d' : \operatorname{ord}(d') = \operatorname{ord}(d) + \frac{n}{10}\right).
$$

5. Proximity assignment: The assignment is closer to the gold standard than a random one: it assigns a category between a randomly selected one and the gold standard:

$$
s_{prox}(d) = g\left(d' : \operatorname{ord}(d') = \frac{\operatorname{ord}(d) + rPos}{2}\right)
$$

with $rPos \sim U(1, n)$ (a random position between 1 and $n$).
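
The generation procedure above can be sketched as follows (illustrative code, not the paper's generator; the clamping of out-of-range categories is an added assumption, and only three of the five mistake types are shown):

```python
import random

N_DOCS, N_CATS, MAJORITY = 200, 11, 4

def gold_labels(deviation, rng):
    """Draw labels from N(4, deviation) and discretize to the closest category in 1..11."""
    return [min(N_CATS, max(1, round(rng.gauss(MAJORITY, deviation))))
            for _ in range(N_DOCS)]

def corrupt(g, r, mistake, rng):
    """Apply one kind of mistake to a ratio r of the value assignments."""
    return [mistake(x, rng) if rng.random() < r else x for x in g]

majority_cls = lambda x, rng: MAJORITY                # majority class assignment
random_cls   = lambda x, rng: rng.randint(1, N_CATS)  # random assignment
tag_disp     = lambda x, rng: min(N_CATS, x + 1)      # tag displacement (clamped here)

rng = random.Random(0)
g = gold_labels(deviation=2, rng=rng)
s = corrupt(g, r=0.3, mistake=random_cls, rng=rng)    # e.g., the s_rand^{r=0.3} system
```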

We discretize the resulting values in the same way as the gold standard. The synthetic outputs are designed to produce trade-offs between evaluation metrics. For instance, a total displacement ($s_{tDisp}^{r=1}$) achieves the maximal Kendall correlation but the lowest Accuracy. On the contrary, 30% of random assignments ($s_{rand}^{r=0.3}$) can substantially degrade the ordinal relationships, while keeping an Accuracy of 70%. Also, $s_{rand}^{r=0.3}$ outperforms $s_{prox}^{r=0.5}$ in terms of Accuracy, but not necessarily in terms of error minimization metrics. Finally, $s_{rand}^{r=0.3}$ can be outperformed by $s_{maj}^{r=0.4}$ in terms of Accuracy, given that the second system assigns documents to the majority class, but not in terms of MI, which accounts for the imbalance effect.

Table 3 (left part) shows the results. The metric coverage can vary substantially when changing the distribution of systems. For this reason, we first consider every synthetic output and then repeat the experiment removing each of the system types. As the table shows, $\mathrm{CEM}^{\mathrm{ORD}}$ outperforms all other metrics, including the individual metrics used as a reference via UIR (MI, Kendall, and Accuracy). Note that the flat (non-logarithmic) version $\mathrm{CEM}_{\text{flat}}^{\mathrm{ORD}}$ performs systematically worse than the original metric, which supports the use of the logarithmic, information-theoretic formula to compute similarity.

# 5.4 Experiments on NLP shared tasks

Let us now study how metrics behave with actual data from evaluation campaigns, where we cannot control the amount and types of error. We use data from six OC evaluation campaigns for which system outputs are publicly available.

The first data set comes from the RepLab 2013 reputational polarity task (Amigó et al., 2013a). It consists of 61 companies with 1,500 tweets each; tweets are annotated as positive, negative, or neutral for the company's reputation.

All the other five datasets are sentiment analysis subtasks from SemEval for which system outputs are available online: SemEval-2015 task 10A (1680 samples, 13 systems), task 10B (8985 samples, 51 systems) and task 10C (3097 samples, 11 systems) (Rosenthal et al., 2015); and SemEval-2014, tasks 9A (2392 samples, 48 systems) and 9B (2396 samples, 7 systems). All these tasks contain three categories. Given that the SemEval tasks do not distribute samples in test cases, we emulate 10 test cases by randomly dividing the data sets into 10 partitions in order to compute UIR.

Table 3 (right part) shows the results. $\mathrm{CEM}^{\mathrm{ORD}}$ is the top performer in four datasets, and the second best (with a minimal difference of 0.01 with respect to the best metric) in the other two. The non-logarithmic version of $\mathrm{CEM}^{\mathrm{ORD}}$ is, again, worse than the logarithmic version in all cases (except one, SemEval-2014 task 9A, where they both give the same result).

Some metrics are able to achieve a high coverage in some data sets, but not in a consistent manner. For instance, Kappa maximizes the coverage in the last dataset in the table, but achieves an extremely low result for RepLab. In general, the table also shows that the relative coverage performance of metrics varies depending on the dataset characteristics.

Finally, we also computed metric robustness in terms of the Spearman correlation between the system rankings produced by each metric on pairs of topics (or data set partitions) in the campaigns. The highest robustness (0.57) is achieved by $\mathrm{CEM}^{\mathrm{ORD}}$, Accuracy and F-measure; the lowest robustness (0.49) is achieved by Accuracy with 1 and Macro Average MAE. $\mathrm{CEM}^{\mathrm{ORD}}$ is more robust than its non-logarithmic version $\mathrm{CEM}_{\text{flat}}^{\mathrm{ORD}}$ (0.57 vs 0.55), again supporting the use of the information-theoretic logarithmic formula.

# 6 Conclusions

Our findings can be summarized as follows: (i) metrics commonly used for Ordinal Classification problems are highly heterogeneous and, in general, inconsistent with the notion of ordinal scale in Measurement Theory; (ii) the notion of closeness between classes can be modelled in terms of Measurement Theory and Information Theory and particularized for different scales; and (iii) our proposed Ordinal Closeness Evaluation Measure ($\mathrm{CEM}^{\mathrm{ORD}}$) is the only one that satisfies all desirable formal properties; it is as robust as the best state-of-the-art metrics, and it is the one that best captures the different quality aspects of OC problems in our experimentation, with both synthetic and naturalistic datasets.

From a methodological perspective, the evidence that we have presented covers the four approaches pointed out in Amigó et al. (2018): we have compared metrics in terms of desirable formal properties to be satisfied (theoretic top-down), we have generalized existing approaches (theoretic bottom-up), and we have compared effectiveness on human-assessed and on synthetic data (empirical bottom-up and top-down). Future work includes the application of CEM at scales other than the ordinal.

Code to compute CEM will be available at github.com/EvALLTEAM/EvALLToolkit.

# Acknowledgements

This research has been partially supported by grants Vemodalen (TIN2015-71785-R) and MISMIS (PGC2018-096212-B-C32) from the Spanish government, as well as by the Google Research Award "Axiometrics: Foundations of Evaluation Metrics in IR".

# References

Enrique Amigó, Jorge Carrillo de Albornoz, Irina Chugur, Adolfo Corujo, Julio Gonzalo, Tamara Martín, Edgar Meij, Maarten de Rijke, and Damiano Spina. 2013a. Overview of RepLab 2013: Evaluating online reputation monitoring systems. In Information Access Evaluation. Multilinguality, Multimodality, and Visualization, pages 333-352.
Enrique Amigó, Hui Fang, Stefano Mizzaro, and ChengXiang Zhai. 2018. Are we on the right track?: An examination of information retrieval methodologies. In Proceedings of ACM SIGIR '18, pages 997-1000.
Enrique Amigó, Julio Gonzalo, Javier Artiles, and Felisa Verdejo. 2009. A comparison of extrinsic clustering evaluation metrics based on formal constraints. Information Retrieval, 12(4):461-486.
Enrique Amigó, Julio Gonzalo, Javier Artiles, and Felisa Verdejo. 2011. Combining evaluation metrics via the unanimous improvement ratio and its application to clustering tasks. Journal of Artificial Intelligence Research, 42:689-718.
Enrique Amigó, Julio Gonzalo, and Felisa Verdejo. 2013b. A general evaluation measure for document organization tasks. In Proceedings of the 36th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 643-652.
Enrique Amigó and Stefano Mizzaro. 2020. On the nature of information access evaluation metrics: A unifying framework. Information Retrieval Journal. To appear.
S. Baccianella, A. Esuli, and F. Sebastiani. 2009. Evaluation measures for ordinal regression. In 2009 Ninth International Conference on Intelligent Systems Design and Applications, pages 283-287.
Francesco Barbieri, Valerio Basile, Danilo Croce, Malvina Nissim, Nicole Novielli, and Viviana Patti. 2016. Overview of the Evalita 2016 SENTiment POLarity Classification Task. In Proceedings of the Third Italian Conference on Computational Linguistics (CLiC-it 2016).
Jaime S. Cardoso and Ricardo Sousa. 2011. Measuring the performance of ordinal classification. International Journal of Pattern Recognition and Artificial Intelligence, 25(8):1173-1195.
Daniel Cer, Mona Diab, Eneko Agirre, Inigo Lopez-Gazpio, and Lucia Specia. 2017. SemEval-2017 task 1: Semantic textual similarity multilingual and crosslingual focused evaluation. In Proceedings of SemEval-2017, pages 1-14.
B. Dom. 2001. An information-theoretic external cluster-validity measure. IBM Research Report.
M. Fatourechi, R. K. Ward, S. G. Mason, J. Huggins, A. Schlögl, and G. E. Birch. 2008. Comparison of evaluation metrics in classification applications with imbalanced datasets. In 2008 Seventh International Conference on Machine Learning and Applications, pages 777-782.
Lisa Gaudette and Nathalie Japkowicz. 2009. Evaluation methods for ordinal classification. In Canadian AI 2009, volume 5549, pages 207-210.
Aniruddha Ghosh, Guofu Li, Tony Veale, Paolo Rosso, Ekaterina Shutova, John Barnden, and Antonio Reyes. 2015. SemEval-2015 task 11: Sentiment analysis of figurative language in Twitter. In Proceedings of SemEval 2015, pages 470-478.
Noriko Kando, editor. 2008. Proceedings of the 7th NTCIR Workshop Meeting on Evaluation of Information Access Technologies: Information Retrieval, Question Answering and Cross-Lingual Information Access, NTCIR-7, National Center of Sciences, Tokyo, Japan, December 16-19, 2008. National Institute of Informatics (NII).
Marina Meila. 2003. Comparing clusterings. In Proceedings of COLT '03.
Alistair Moffat. 2013. Seven numeric properties of effectiveness metrics. In AIRS '13, pages 1-12.
Jason Rennie and Nathan Srebro. 2005. Loss functions for preference levels: Regression with discrete ordered labels. In Proceedings of the IJCAI Multidisciplinary Workshop on Advances in Preference Handling.
Sara Rosenthal, Noura Farra, and Preslav Nakov. 2017. SemEval-2017 task 4: Sentiment analysis in Twitter. In Proceedings of SemEval '17. ACL.
Sara Rosenthal, Preslav Nakov, Svetlana Kiritchenko, Saif Mohammad, Alan Ritter, and Veselin Stoyanov. 2015. SemEval-2015 task 10: Sentiment analysis in Twitter. In Proceedings of SemEval 2015, pages 451-463.
Sara Rosenthal, Alan Ritter, Preslav Nakov, and Veselin Stoyanov. 2014. SemEval-2014 task 9: Sentiment analysis in Twitter. In Proceedings of SemEval 2014, pages 73-80.
Fabrizio Sebastiani. 2015. An axiomatically derived measure for the evaluation of classification algorithms. In Proceedings of ICTIR 2015, pages 11-20. ACM.
Marina Sokolova. 2006. Assessing invariance properties of evaluation measures. In Proceedings of the NIPS '06 Workshop on Testing Deployable Learning and Decision Systems.
Stanley Smith Stevens. 1946. On the theory of scales of measurement. Science, 103(2684):677-680.
S. Vanbelle and A. Albert. 2009. A note on the linearly weighted kappa coefficient for ordinal scales. Statistical Methodology, 6(2):157-163.
Willem Waegeman, Bernard De Baets, and Luc Boullart. 2006. A comparison of different ROC measures for ordinal regression. In Proceedings of the ICML 2006 Workshop on ROC Analysis in Machine Learning.

# Appendix A. Example computation of CEM

Figure 3 illustrates the computation of CEM for two systems (A and B) on the same ground truth, with the three usual classes in sentiment analysis: negative, neutral, positive. The ground truth distribution is 10, 60 and 30 items, respectively, which is all the information needed to compute the proximity between classes. Note that the proximity of one class to another is $-\log$ of the amount of items that lie between them (including all items in the ground truth class and half of the items in the system-predicted class) divided by the total number of items. The lowest score corresponds to the proximity between the two extreme classes (in the example, the negative and positive classes), because all items except half of the items in the system-predicted class lie between them, and therefore the $-\log$ value is minimal.

System A and System B in the figure both have the same accuracy (0.70), but system B receives a higher $\mathrm{CEM}^{\mathrm{ORD}}$ score (0.76 vs 0.71). The main reason is that system A makes more mistakes between distant classes (positive and negative). Another reason is that system A makes more positive/neutral than negative/neutral mistakes, and positive/neutral errors are more penalized by the metric than negative/neutral ones: together, the positive and neutral classes represent 90% of the items in the dataset, and therefore they are considered less close from an information-theoretic point of view.
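
The computation in Figure 3 can be reproduced with a short sketch (assuming base-2 logarithms, which match the reported proximity values):

```python
import math

ORDER = ["neg", "neu", "pos"]                 # ordinal class order
GOLD = {"neg": 10, "neu": 60, "pos": 30}      # ground truth distribution
N = sum(GOLD.values())

def prox(a, b):
    """Proximity between system class a and gold class b: -log2 of the fraction
    of items lying between them (half of a's items plus all items of the other
    classes from a up to and including b)."""
    ia, ib = ORDER.index(a), ORDER.index(b)
    lo, hi = min(ia, ib), max(ia, ib)
    items = GOLD[a] / 2 + sum(GOLD[c] for c in ORDER[lo:hi + 1] if c != a)
    return -math.log2(items / N)

def cem_ord(conf):
    """conf[s][g] = number of items of gold class g labelled s by the system."""
    num = sum(conf[s][g] * prox(s, g) for s in ORDER for g in ORDER)
    den = sum(GOLD[g] * prox(g, g) for g in ORDER)
    return num / den

A = {"neg": {"neg": 5, "neu": 5, "pos": 7},
     "neu": {"neg": 1, "neu": 50, "pos": 8},
     "pos": {"neg": 4, "neu": 5, "pos": 15}}
B = {"neg": {"neg": 7, "neu": 12, "pos": 4},
     "neu": {"neg": 1, "neu": 45, "pos": 8},
     "pos": {"neg": 2, "neu": 3, "pos": 18}}

assert round(cem_ord(A), 2) == 0.71 and round(cem_ord(B), 2) == 0.76
```

Both scores and the class-proximity table below follow from this computation.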

# Appendix B. Metric Properties Counterexamples

Here we provide examples of how certain metrics fail to satisfy some of the properties proposed in the paper.

Ordinal Monotonicity. Let us consider the set of categories $\mathcal{C} = \{1,2,3,4,5\}$. All classification metrics and correlation coefficients fail to satisfy ordinal monotonicity, given that for all of them:

$$
\operatorname{Eff}((1, 2, 3), (3, 4, 5)) = \operatorname{Eff}((2, 3, 4), (3, 4, 5)).
$$

But, according to the ordinal monotonicity property, the system output $(2, 3, 4)$ should receive a higher value than $(1, 2, 3)$, because all predicted classes are closer to the ground truth labels.

Ordinal Invariance. Pearson correlation and every error minimization metric fail to satisfy ordinal invariance, given that for all of them:

$$
\operatorname{Eff}((1, 2, 3), (3, 4, 5)) \neq \operatorname{Eff}((f(1), f(2), f(3)), (f(3), f(4), f(5))),
$$

with $f$ being, for instance, the strictly increasing (non-linear) function $f(x) = 10 + x^3$.

Imbalance. According to the imbalance property,

$$
\operatorname{Eff}((1, 2, 2, 3), (1, 1, 2, 3)) > \operatorname{Eff}((1, 1, 2, 2), (1, 1, 2, 3)).
$$

Metrics that do not satisfy this restriction are Accuracy $\left(\frac{3}{4}, \frac{3}{4}\right)$, Accuracy with 1 $(1, 1)$, MAE and MSE $\left(-\frac{1}{4}, -\frac{1}{4}\right)$, cosine similarity $(0.973, 0.979)$ and Pearson $(0.85, 0.9)$.
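
These values can be checked directly. The sketch below verifies the Accuracy and MAE ties, and also shows that the macro-averaged MAE variant does separate the two outputs, consistently with Table 2:

```python
g  = [1, 1, 2, 3]
s1 = [1, 2, 2, 3]   # an item moved away from the large class (1)
s2 = [1, 1, 2, 2]   # an item moved away from the small class (3)

def accuracy(s):
    return sum(a == b for a, b in zip(s, g)) / len(g)

def neg_mae(s):
    return -sum(abs(a - b) for a, b in zip(s, g)) / len(g)

def neg_mae_macro(s):
    # Average the per-class MAE, so small classes weigh as much as large ones.
    classes = sorted(set(g))
    per_class = [sum(abs(a - b) for a, b in zip(s, g) if b == c)
                 / sum(b == c for b in g) for c in classes]
    return -sum(per_class) / len(classes)

assert accuracy(s1) == accuracy(s2) == 0.75          # Accuracy: tied, violates imbalance
assert neg_mae(s1) == neg_mae(s2) == -0.25           # MAE: tied as well
assert neg_mae_macro(s1) > neg_mae_macro(s2)         # Macro-averaged MAE: separates them
```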

# Appendix C. Proofs

Here we provide proofs for the properties satisfied by the metrics in our study. For the sake of brevity, we do not include complete formal proofs, but explanations of the arguments.

Proof for closeness conditions at ordinal scale: Focusing on the ordinal scale, if $x$ is located between $y$ and $r$ ($y \leq x \leq r$ or $r \leq x \leq y$), then $|f(x) - f(r)| \leq |f(y) - f(r)|$ for any strictly increasing function $f$. Otherwise, that is, if $x < y \wedge x < r$ or $y < x \wedge r < x$, we can define a strictly increasing function that invalidates $|f(x) - f(r)| \leq |f(y) - f(r)|$. The reasoning for the strict case is similar.
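
The first part of this argument can be sanity-checked numerically (an illustrative script, not part of the proof):

```python
import math
import random

def betweenness_preserved(f, trials=500, rng=random.Random(1)):
    """Check |f(x) - f(r)| <= |f(y) - f(r)| whenever x lies between y and r."""
    for _ in range(trials):
        lo, mid, hi = sorted(rng.sample(range(1, 20), 3))
        # x = mid is between y and r for both orientations of the pair (y, r):
        for y, x, r in ((lo, mid, hi), (hi, mid, lo)):
            if not abs(f(x) - f(r)) <= abs(f(y) - f(r)):
                return False
    return True

monotone_fs = [lambda x: x, lambda x: 10 * x + x ** 2, lambda x: x ** 3, math.exp]
assert all(betweenness_preserved(f) for f in monotone_fs)
```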

Proof for $\mathrm{CEM}^{\mathrm{ORD}}$ properties: $\mathrm{CEM}^{\mathrm{ORD}}$ is computed over ordinal comparisons ($g(d') \preceq_{\mathrm{ORD}}^{g(d)} s(d)$). By definition, closeness at ordinal scale is invariant under ordinal transformations. Therefore, $\mathrm{CEM}^{\mathrm{ORD}}$ is ordinal invariant. Monotonicity is also satisfied, given that moving the predicted category closer to the ground truth category necessarily reduces the amount of documents appearing in intermediate categories (provided there is no empty category in the gold standard), and therefore increases the similarity weight used by the metric. Finally, imbalance is also satisfied given that, being $g(d_i) = c_i$ and being $c_i$ and $c_j$ contiguous classes:

$$
\mathrm{CEM}^{\mathrm{ORD}}\left(g_{d_{i} \rightarrow c_{j}}, g\right) - \mathrm{CEM}^{\mathrm{ORD}}(g, g) \propto -\log\left(\frac{n_{i} + \frac{n_{j}}{2}}{N}\right) - \left( -\log\left(\frac{n_{i}/2}{N}\right) \right).
$$

The magnitude of this decrease, $\log(2 + n_j/n_i)$, grows as $n_i$ shrinks, so errors on small classes are penalized more.
450
+ <table><tr><td rowspan="6">system A</td><td colspan="5">ground truth</td></tr><tr><td>neg</td><td>neu</td><td>pos</td><td>total</td><td></td></tr><tr><td>\( neg_A \)</td><td>5</td><td>5</td><td>7</td><td>17</td></tr><tr><td>\( neu_A \)</td><td>1</td><td>50</td><td>8</td><td>59</td></tr><tr><td>\( pos_A \)</td><td>4</td><td>5</td><td>15</td><td>24</td></tr><tr><td>total</td><td>10</td><td>60</td><td>30</td><td>100</td></tr></table>

| system B | gold neg | gold neu | gold pos | total |
| --- | --- | --- | --- | --- |
| $neg_B$ | 7 | 12 | 4 | 23 |
| $neu_B$ | 1 | 45 | 8 | 54 |
| $pos_B$ | 2 | 3 | 18 | 23 |
| total | 10 | 60 | 30 | 100 |

| class proximity | neg | neu | pos |
| --- | --- | --- | --- |
| neg | 4.32 | 0.62 | 0.07 |
| neu | 1.32 | 1.74 | 0.74 |
| pos | 0.23 | 0.42 | 2.74 |

$$
\operatorname{prox}(\text{neg}, \text{neg}) = -\log \frac{10/2}{100} = 4.32 \quad \operatorname{prox}(\text{neg}, \text{neu}) = -\log \frac{10/2 + 60}{100} = 0.62 \quad \operatorname{prox}(\text{neg}, \text{pos}) = -\log \frac{10/2 + 90}{100} = 0.07
$$

$$
\operatorname{prox}(\text{neu}, \text{neg}) = -\log \frac{60/2 + 10}{100} = 1.32 \quad \operatorname{prox}(\text{neu}, \text{neu}) = -\log \frac{60/2}{100} = 1.74 \quad \operatorname{prox}(\text{neu}, \text{pos}) = -\log \frac{60/2 + 30}{100} = 0.74
$$

$$
\operatorname{prox}(\text{pos}, \text{neg}) = -\log \frac{30/2 + 60 + 10}{100} = 0.23 \quad \operatorname{prox}(\text{pos}, \text{neu}) = -\log \frac{30/2 + 60}{100} = 0.42 \quad \operatorname{prox}(\text{pos}, \text{pos}) = -\log \frac{30/2}{100} = 2.74
$$

$$
\mathrm{CEM}^{\mathrm{ORD}}(A, g) = \frac{5 \cdot 4.32 + 5 \cdot 0.62 + 7 \cdot 0.07 + 1 \cdot 1.32 + 50 \cdot 1.74 + 8 \cdot 0.74 + 4 \cdot 0.23 + 5 \cdot 0.42 + 15 \cdot 2.74}{10 \cdot 4.32 + 60 \cdot 1.74 + 30 \cdot 2.74} = 0.71
$$

$$
\mathrm{CEM}^{\mathrm{ORD}}(B, g) = \frac{7 \cdot 4.32 + 12 \cdot 0.62 + 4 \cdot 0.07 + 1 \cdot 1.32 + 45 \cdot 1.74 + 8 \cdot 0.74 + 2 \cdot 0.23 + 3 \cdot 0.42 + 18 \cdot 2.74}{10 \cdot 4.32 + 60 \cdot 1.74 + 30 \cdot 2.74} = 0.76
$$

Figure 3: Example computation of $\mathrm{CEM}^{\mathrm{ORD}}$ values for two hypothetical systems A and B on the same dataset. The first two tables represent the confusion matrices for both systems. The third table shows $\operatorname{prox}(c_i, c_j)$ for the ground truth, according to the distribution of items in the negative, neutral and positive classes (10, 60 and 30, respectively). The remaining equations illustrate how proximity values between classes are computed, together with the resulting $\mathrm{CEM}^{\mathrm{ORD}}$ values for both systems.
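As a sanity check, the Figure 3 computation can be reproduced with a short script. The helper names `prox` and `cem_ord` are ours, and we assume base-2 logarithms, which is consistent with the rounded values shown above:

```python
import math

counts = [10, 60, 30]  # gold items per class, in ordinal order: neg, neu, pos

def prox(i, j, counts):
    """-log2 of: half the items of predicted class i, plus all items in
    classes between i (exclusive) and gold class j (inclusive)."""
    N = sum(counts)
    mass = counts[i] / 2
    lo, hi = min(i, j), max(i, j)
    mass += sum(counts[k] for k in range(lo, hi + 1) if k != i)
    return -math.log2(mass / N)

def cem_ord(conf, counts):
    """conf[p][t]: number of documents of gold class t predicted as class p."""
    m = len(counts)
    num = sum(conf[p][t] * prox(p, t, counts) for p in range(m) for t in range(m))
    den = sum(counts[t] * prox(t, t, counts) for t in range(m))
    return num / den

conf_A = [[5, 5, 7], [1, 50, 8], [4, 5, 15]]   # rows: predicted neg/neu/pos
conf_B = [[7, 12, 4], [1, 45, 8], [2, 3, 18]]

print(round(prox(0, 0, counts), 2))       # 4.32
print(round(cem_ord(conf_A, counts), 2))  # 0.71
print(round(cem_ord(conf_B, counts), 2))  # 0.76
```

Running it with unrounded proximities reproduces all nine table entries and both system scores.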

Therefore,

$$
\begin{array}{l}
\operatorname{Eff}\left(g_{d_1 \rightarrow c_2}, g\right) - \operatorname{Eff}\left(g_{d_3 \rightarrow c_2}, g\right) \\
\propto \operatorname{Eff}(g, g) - \log\left(\frac{n_1 + \frac{n_2}{2}}{N}\right) - \left(-\log\left(\frac{\frac{n_1}{2}}{N}\right)\right) \\
\quad - \left(\operatorname{Eff}(g, g) - \log\left(\frac{n_3 + \frac{n_2}{2}}{N}\right) - \left(-\log\left(\frac{\frac{n_3}{2}}{N}\right)\right)\right) \\
\propto \log\left(\frac{\frac{n_1}{2}\left(n_3 + \frac{n_2}{2}\right)}{\left(n_1 + \frac{n_2}{2}\right)\frac{n_3}{2}}\right), \\
\end{array}
$$

which is larger than 0 whenever $n_1 > n_3$.
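The sign of this final expression can be checked numerically; `eff_diff` is a hypothetical helper implementing that proportionality (base-2 logarithm assumed):

```python
import math

def eff_diff(n1, n2, n3):
    # log of ((n1/2)(n3 + n2/2)) / ((n1 + n2/2)(n3/2)), the final
    # proportionality above; positive exactly when n1 > n3
    return math.log2((n1 / 2) * (n3 + n2 / 2) / ((n1 + n2 / 2) * (n3 / 2)))

print(eff_diff(60, 30, 10) > 0)  # True: a mistake on the frequent class is penalized less
print(eff_diff(10, 30, 60) < 0)  # True
print(eff_diff(20, 30, 20))      # 0.0: equal class sizes, no difference
```

This matches the imbalance property: the metric penalizes errors on infrequent classes more heavily than errors on frequent ones.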
aneffectivenessmetricforordinalclassificationformalpropertiesandexperimentalresults/images.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c59a70ebf5192d0211d010bba6e6c55a1826f084f8aaa108374716e0ee768b27
+ size 546835
aneffectivenessmetricforordinalclassificationformalpropertiesandexperimentalresults/layout.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a1b6b509f016380f7bc47d9205c8b01882673085360ac2996e894a58b79c57c0
+ size 528317
aneffectivetransitionbasedmodelfordiscontinuousner/b056e99d-f30b-47d4-ac4f-d5f368333bc3_content_list.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:be97e8bc2d2a725acc017dc6d01a78437af6f6ead9b7fb8826849fd8cd5f0f00
+ size 77016
aneffectivetransitionbasedmodelfordiscontinuousner/b056e99d-f30b-47d4-ac4f-d5f368333bc3_model.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:17e529c18c6380cc6621d6b722b1625cf9e08683a179ad2732ca83ad100bf5e6
+ size 97536