{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T02:10:22.662777Z"
},
"title": "Harnessing Privileged Information for Hyperbole Detection",
"authors": [
{
"first": "Rhys",
"middle": [],
"last": "Biddle",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Technology Sydney",
"location": {
"country": "Australia"
}
},
"email": "rhys.biddle@student.uts.edu.au"
},
{
"first": "Maciej",
"middle": [],
"last": "Rybi\u0144ski",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "CSIRO Data61",
"location": {
"settlement": "Sydney",
"country": "Australia"
}
},
"email": ""
},
{
"first": "Qian",
"middle": [],
"last": "Li",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Technology Sydney",
"location": {
"country": "Australia"
}
},
"email": ""
},
{
"first": "C\u00e9cile",
"middle": [],
"last": "Paris",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "CSIRO Data61",
"location": {
"settlement": "Sydney",
"country": "Australia"
}
},
"email": ""
},
{
"first": "Guandong",
"middle": [],
"last": "Xu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Technology Sydney",
"location": {
"country": "Australia"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "The detection of hyperbole is an important stepping stone to understanding the intentions of a hyperbolic utterance. We propose a model that combines pre-trained language models with privileged information for the task of hyperbole detection. We also introduce a suite of behavioural tests to probe the capabilities of hyperbole detection models across a range of hyperbole types. Our experiments show that our model improves upon baseline models on an existing hyperbole detection dataset. Probing experiments combined with analysis using local linear approximations (LIME) show that our model excels at detecting one particular type of hyperbole. Further, we discover that our experiments highlight annotation artifacts introduced through the process of literal paraphrasing of hyperbole. These annotation artifacts are likely to be a roadblock to further improvements in hyperbole detection.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "The detection of hyperbole is an important stepping stone to understanding the intentions of a hyperbolic utterance. We propose a model that combines pre-trained language models with privileged information for the task of hyperbole detection. We also introduce a suite of behavioural tests to probe the capabilities of hyperbole detection models across a range of hyperbole types. Our experiments show that our model improves upon baseline models on an existing hyperbole detection dataset. Probing experiments combined with analysis using local linear approximations (LIME) show that our model excels at detecting one particular type of hyperbole. Further, we discover that our experiments highlight annotation artifacts introduced through the process of literal paraphrasing of hyperbole. These annotation artifacts are likely to be a roadblock to further improvements in hyperbole detection.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The analysis of figurative language by Natural Language Processing (NLP) systems is a challenge confronting researchers and practitioners (Reyes and Rosso, 2014; Rai and Chakraverty, 2020) . Hyperbole is a common type of figurative language that is defined by an intentionally excessive contrast between utterance meaning and reality along a semantic scale to convey an evaluation (e.g., 'my bedroom is the size of a postage stamp') ( McCarthy and Carter, 2004; Mora, 2009; Claridge, 2010; Carston and Wearing, 2015; Burgers et al., 2016) . The detection of hyperbole has proven to be a challenging problem for NLP systems, much like the detection of other figures of speech (Troiano et al., 2018; Kong et al., 2020; Abulaish et al., 2020) . The evaluative nature of hyperbole motivates the importance of understanding hyperbole for affective computing applications (e.g., sentiment analysis).",
"cite_spans": [
{
"start": 138,
"end": 161,
"text": "(Reyes and Rosso, 2014;",
"ref_id": "BIBREF23"
},
{
"start": 162,
"end": 188,
"text": "Rai and Chakraverty, 2020)",
"ref_id": "BIBREF22"
},
{
"start": 435,
"end": 461,
"text": "McCarthy and Carter, 2004;",
"ref_id": "BIBREF16"
},
{
"start": 462,
"end": 473,
"text": "Mora, 2009;",
"ref_id": "BIBREF18"
},
{
"start": 474,
"end": 489,
"text": "Claridge, 2010;",
"ref_id": "BIBREF6"
},
{
"start": 490,
"end": 516,
"text": "Carston and Wearing, 2015;",
"ref_id": "BIBREF5"
},
{
"start": 517,
"end": 538,
"text": "Burgers et al., 2016)",
"ref_id": "BIBREF4"
},
{
"start": 675,
"end": 697,
"text": "(Troiano et al., 2018;",
"ref_id": "BIBREF28"
},
{
"start": 698,
"end": 716,
"text": "Kong et al., 2020;",
"ref_id": null
},
{
"start": 717,
"end": 739,
"text": "Abulaish et al., 2020)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "is a learning paradigm that involves providing additional information during training to help teach a model to learn a particular phenomenon (Pechyony and Vapnik, 2010) . The source and type of privileged information (PI) varies depending on application, such as a list of ingredients present in an image to help teach a computer vision model to detect food in images (Meng et al., 2019) , or the human ratings of various aesthetic categories of images for automated assessment of aesthetic photo quality (Shu et al., 2020) . We propose to use literal paraphrases of hyperbole as a source of PI for hyperbole detection. We hypothesise that this information will help a model to learn the excessive contrast within a particular hyperbole (e.g., 'my head is exploding right now' \u2192 'my head is hurting right now').",
"cite_spans": [
{
"start": 141,
"end": 168,
"text": "(Pechyony and Vapnik, 2010)",
"ref_id": "BIBREF20"
},
{
"start": 368,
"end": 387,
"text": "(Meng et al., 2019)",
"ref_id": "BIBREF17"
},
{
"start": 505,
"end": 523,
"text": "(Shu et al., 2020)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Learning under Privileged Information (LUPI)",
"sec_num": null
},
{
"text": "Our contributions in this paper are as follows;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning under Privileged Information (LUPI)",
"sec_num": null
},
{
"text": "(1) We propose a method for hyperbole detection based on the injection of PI; (2) We introduce Hyper-Probe, a suite of behavioural tests for hyperbole detection models; (3) We reveal that annotation artifacts are a potential roadblock for progress on hyperbole detection.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning under Privileged Information (LUPI)",
"sec_num": null
},
{
"text": "The HYPO dataset is an annotated collection of hyperbole introduced by Troiano et al. (2018) . The dataset consists of manually composed hyperbole and hyperbole sourced from various online sources including click-bait headlines, love letters, advertisements, and animated cartoons.",
"cite_spans": [
{
"start": 71,
"end": 92,
"text": "Troiano et al. (2018)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "HYPO",
"sec_num": "2"
},
{
"text": "Annotation for HYPO was carried out by crowd workers who were given several tasks based on each example. The crowd workers had to assess whether they thought the utterance contained hyperbolic content. A follow up task was to highlight the specific words in the utterance they considered Hyperbole Corpus",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "HYPO",
"sec_num": "2"
},
{
"text": "[Table 1 examples -- Hyperbole Corpus: 'The principal is unhappy...we're cooked.'; Paraphrase Corpus: 'The principal is unhappy...we're in trouble.'; Minimal Units Corpus: 'Well cooked vegetables can be pureed easily.'; Hyperbole: 'Her morning jog turned into a marathon'; Paraphrase: 'Her morning jog turned into a long run'; Minimal Units: 'There was a marathon in the city today'. Table 2 caption fragment: 'identified by (Mora, 2009). Keywords is a list of the keywords in word list.']",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "HYPO",
"sec_num": "2"
},
{
"text": "to be hyperbolic. Additionally, the workers were then asked to paraphrase the original hyperbolic sentence such that it was no longer hyperbolic. The worker responses to the first task were used to filter out non-hyperbolic utterances resulting in 709 hyperbolic utterances in total, denoted as the Hyperbole Corpus. The list of hyperbolic tokens identified by the crowd workers was used to create a second corpus, denoted the Minimal Units Corpus (709 sentences). The literal paraphrases also made up another corpus, the Paraphrase Corpus (709 sentences). Combining these three corpora, every hyperbolic utterance in the Hyperbole Corpus has two non-hyperbolic counterparts from the Minimal Units Corpus and Paraphrase Corpus respectively, see Table 1 . In total just over 2.1k sentences make up the final version of HYPO.",
"cite_spans": [],
"ref_spans": [
{
"start": 745,
"end": 752,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Paraphrase Corpus",
"sec_num": null
},
{
"text": "Our HyperProbe suite consists of synthetic data generated to probe the ability of models to detect hyperbole 1 . The suite is created to target the three types of hyperbole identified by (Mora, 2009 3. Test Sentence Generation: consists of the generation of test sentences, via CheckList, using the word lists and templates generated in the previous steps.",
"cite_spans": [
{
"start": 187,
"end": 198,
"text": "(Mora, 2009",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "HyperProbe",
"sec_num": "3"
},
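The test-sentence generation step above can be sketched with a tiny template expander in Python (a simplified, hypothetical stand-in for the CheckList tooling; the template syntax and word lists here are illustrative, not the paper's actual suite):

```python
from itertools import product

def expand(template, slots):
    """Fill each {NAME} placeholder with every word from its list.

    Assumes each placeholder name occurs at most once per template;
    this mirrors the idea of CheckList-style templating, not its API.
    """
    names = [n for n in slots if "{" + n + "}" in template]
    sentences = []
    for combo in product(*(slots[n] for n in names)):
        s = template
        for name, word in zip(names, combo):
            s = s.replace("{" + name + "}", word)
        sentences.append(s)
    return sentences
```

Each generated sentence would then be manually checked for grammar and semantics and given a binary hyperbole label, as described in step 4.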
{
"text": "4. Manual Assessment and Annotation: we assess the grammar and semantics of the generated test sentences and annotate the sentences. Our annotation consists of a binary label indicating the presence of hyperbolic content.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "HyperProbe",
"sec_num": "3"
},
{
"text": "ECFs are semantic formulations that invoke extreme descriptions of events or objects (Whitehead, 2015; Pomerantz, 1986) . A simple example of an ECF is a sentence that contains an extreme description via an adjective (absolute, entire, infinite, etc.), adverb (always, never, etc.), quantifier (all, none, etc.) or indefinite pronoun (everybody, nobody, etc.) (Edwards, 2000; Norrick, 2004) . The intentionally non-literal use of ECFs has been identified as a rich source for hyperbolic expressions (McCarthy and Carter, 2004; Norrick, 2004; Mora, 2009; Whitehead, 2015; Carston and Wearing, 2015) . The detection of ECFs is a fundamental requirement for a hyperbole detection model, and we design a set of test sentences to probe this ability. Given that ECF prone-words from Table 2 belong to various word classes and can appear in a myriad of grammatical patterns, we design several sentence templates, see Table 3 . Upon completion of assessment and annotation there were 181 test sentences, 95 (52%) of which were labelled as hyperbolic, see Table 3 .",
"cite_spans": [
{
"start": 85,
"end": 102,
"text": "(Whitehead, 2015;",
"ref_id": "BIBREF29"
},
{
"start": 103,
"end": 119,
"text": "Pomerantz, 1986)",
"ref_id": "BIBREF21"
},
{
"start": 360,
"end": 375,
"text": "(Edwards, 2000;",
"ref_id": "BIBREF7"
},
{
"start": 376,
"end": 390,
"text": "Norrick, 2004)",
"ref_id": "BIBREF19"
},
{
"start": 499,
"end": 526,
"text": "(McCarthy and Carter, 2004;",
"ref_id": "BIBREF16"
},
{
"start": 527,
"end": 541,
"text": "Norrick, 2004;",
"ref_id": "BIBREF19"
},
{
"start": 542,
"end": 553,
"text": "Mora, 2009;",
"ref_id": "BIBREF18"
},
{
"start": 554,
"end": 570,
"text": "Whitehead, 2015;",
"ref_id": "BIBREF29"
},
{
"start": 571,
"end": 597,
"text": "Carston and Wearing, 2015)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [
{
"start": 777,
"end": 784,
"text": "Table 2",
"ref_id": "TABREF1"
},
{
"start": 910,
"end": 917,
"text": "Table 3",
"ref_id": "TABREF4"
},
{
"start": 1047,
"end": 1054,
"text": "Table 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Extreme Case Formulation Tests",
"sec_num": "3.1"
},
{
"text": "Qualitative hyperboles align with the subjectiveemotional dimension of hyperbole (Mora, 2009) . A subjective evaluation made to an excessive degree is the defining feature of qualitative hyperboles (e.g., 'this video is cancer', 'Sweet n sour chicken is God Tier'). The ability to detect and interpret qualitative hyperbole is a fundamental requirement of a hyperbole detection model. From the list of qualitative terms in Table 2 , we compile a list containing 54 adjectives. We create six sentence templates to incorporate the adjectives into a sentence, see Table 4 . Upon completion of assessment and annotation there were 306 test sentences, 87 (28%) of which were labelled as hyperbolic, see Table 4 .",
"cite_spans": [
{
"start": 81,
"end": 93,
"text": "(Mora, 2009)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [
{
"start": 423,
"end": 430,
"text": "Table 2",
"ref_id": "TABREF1"
},
{
"start": 561,
"end": 568,
"text": "Table 4",
"ref_id": "TABREF5"
},
{
"start": 698,
"end": 705,
"text": "Table 4",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Qualitative Hyperbole Tests",
"sec_num": "3.2"
},
{
"text": "Quantitative hyperboles align with the objectivegradational dimension of hyperbole (Mora, 2009) . The defining feature of this type of hyperbole is the up-scaling of an obvious quantity or magnitude to an excessive degree (e.g., 'i have a million things left to do', 'this year has felt like a decade'). We design a set of test sentences that allows us to probe the ability of models to detect hyperbolic expressions along quantitative dimensions. We use the list of quantitative terms in Table 2 and their comparative forms (e.g., bigger, smaller, lighter, etc.) as seed word lists for these sentences. We create two sentence templates to incorporate these into a sentence, see Table 5 . Upon completion of assessment and annotation there were 43 test sentences, 21 (48%) of which were labelled as hyperbolic, see Table 5 . ",
"cite_spans": [
{
"start": 83,
"end": 95,
"text": "(Mora, 2009)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [
{
"start": 489,
"end": 496,
"text": "Table 2",
"ref_id": "TABREF1"
},
{
"start": 679,
"end": 686,
"text": "Table 5",
"ref_id": "TABREF7"
},
{
"start": 815,
"end": 822,
"text": "Table 5",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Quantitative Hyperbole Tests",
"sec_num": "3.3"
},
{
"text": "Our motivation for incorporating privileged information into a hyperbole detection model is based on observations from the foundational work of Troiano et al. (2018) . The authors found that models trained on hyperboles and literal paraphrases performed marginally better on the task of hyperbole detection than models trained on hyperboles and non-literal sentences that used the hyperbolic words/phrases in a non-hyperbole context. We propose that treating literal paraphrases as privileged information and incorporating this information into a hyperbole detection model could improve the ability of a model to detect when a word or phrase was being used in an excessive hyperbolic manner. In our proposed model, BERT+PI, we incorporate privileged information via triplet loss. We utilise a triplet loss because we want to force our model to differentiate between hyperbolic and nonhyperbolic usage of words and phrases, and we can strictly enforce this via triplet loss. Specifically, by specifying a hyperbolic sentence as an anchor sample, another hyperbole as a positive sample and a manually composed literal paraphrase (i.e., PI) as a negative sample, we are enforcing this difference in representation space.",
"cite_spans": [
{
"start": 144,
"end": 165,
"text": "Troiano et al. (2018)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Privileged Information for Hyperbole Detection",
"sec_num": "4"
},
{
"text": "BERT+PI is based on a multi-task text classification framework. We use a triplet sampling module to sample negative and positive sentences for each sentence in the dataset. We use BERT (?) to encode a representation for each of these sentences and send the representation of the original sentence to a linear classification head. Representations of ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "BERT+PI",
"sec_num": "4.1"
},
{
"text": "Require:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm 1 Semi-Random Triplet Sampling",
"sec_num": null
},
{
"text": "D = [t 0 , t 1 , ..., t n ] Require: s \u2208 Z + Sampling Factor H \u2190 t\u2200t \u2208 D | t.label == 1 t.label contains annotated label for t P \u2190 t\u2200t \u2208 D | t.label == 2 N consists of literal paraphrases (i.e., PI) S \u2190 \u2205 for i = 0, i < |D|, i + + do a \u2190 D i T \u2190 \u2205 for j = 0, j < s, j + + do if a.label == 1 then p \u2190 sample(H) sample(X) draws a random sample from X n \u2190 p.par t.par is a literal paraphrase of t else if a.label == 0 then p \u2190 sample(P ) n \u2190 p.hyp t.hyp is a hyperbolic expression of t end if T .insert([a, p, n]) end for S.insert(T ) end for return S",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm 1 Semi-Random Triplet Sampling",
"sec_num": null
},
{
"text": "all three sentences are used in the computation of the triplet loss. An important aspect of models based on any type of contrastive loss, including triplet loss, is the sampling methodology (Wu et al., 2017) . For BERT+PI our triplet sampling algorithm involves randomly sampling examples based on label and the relationship between a hyperbole and its literal paraphrase, see Algorithm 1 and see Table 6 for examples.",
"cite_spans": [
{
"start": 190,
"end": 207,
"text": "(Wu et al., 2017)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [
{
"start": 397,
"end": 404,
"text": "Table 6",
"ref_id": "TABREF8"
}
],
"eq_spans": [],
"section": "Algorithm 1 Semi-Random Triplet Sampling",
"sec_num": null
},
{
"text": "The logic in our sampling algorithm is that if the anchor is a hyperbole, then we randomly sample another hyperbole as a positive (i.e., same class) sample for that triplet. We then set the negative sample to be the literal paraphrase of the positive sample (note: This sample is PI). This ensures that optimisation of the triplet loss forces a hyperbole to be closer to another hyperbole than its literal paraphrase in representation space.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm 1 Semi-Random Triplet Sampling",
"sec_num": null
},
{
"text": "If the anchor is not a hyperbole, we randomly sample a literal paraphrase as a positive sample for that triplet (note: This sample is PI). We then set the negative sample to be the hyperbole of the positive. The motivation here is that optimisation of the triplet loss will result in a non-hyperbolic sentence and a literal paraphrase being closer in representation space than a non-hyperbolic text and a hyperbole.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm 1 Semi-Random Triplet Sampling",
"sec_num": null
},
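The two sampling branches described above can be sketched in Python (a minimal, hypothetical illustration of Algorithm 1; the `Example` record and its `label`, `par`, and `hyp` fields are stand-ins for the HYPO records, with label 1 = hyperbole, 0 = minimal-units literal, 2 = literal paraphrase):

```python
import random
from dataclasses import dataclass
from typing import Optional

@dataclass
class Example:
    text: str
    label: int                  # 1 = hyperbole, 0 = literal, 2 = paraphrase (PI)
    par: Optional[str] = None   # literal paraphrase of a hyperbole
    hyp: Optional[str] = None   # hyperbolic counterpart of a paraphrase

def sample_triplets(dataset, s, seed=0):
    """Draw s (anchor, positive, negative) triplets per anchor.

    Hyperbolic anchor: positive = another hyperbole, negative = that
    hyperbole's literal paraphrase (the privileged information).
    Literal anchor: positive = a literal paraphrase, negative = that
    paraphrase's hyperbolic counterpart.
    """
    rng = random.Random(seed)
    hyperboles = [t for t in dataset if t.label == 1]
    paraphrases = [t for t in dataset if t.label == 2]
    triplets = []
    for a in dataset:
        for _ in range(s):
            if a.label == 1:
                p = rng.choice(hyperboles)
                n = p.par
            elif a.label == 0:
                p = rng.choice(paraphrases)
                n = p.hyp
            else:
                continue  # paraphrases are PI only, not anchors
            triplets.append((a.text, p.text, n))
    return triplets
```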
{
"text": "Formally, the class probability for an individual Football is important to him.* Football is his oxygen. sentence is calculated by BERT+PI as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm 1 Semi-Random Triplet Sampling",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "y i = \u03c3(e a i W + b),",
"eq_num": "(1)"
}
],
"section": "Algorithm 1 Semi-Random Triplet Sampling",
"sec_num": null
},
{
"text": "where e a i is the dense representation of anchor example i computed by BERT, W Y and b Y are learnable parameters and \u03c3 is a softmax function. The model is optimised via multi-task loss, see eq. 2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm 1 Semi-Random Triplet Sampling",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "L = L c + \u03bbL t",
"eq_num": "(2)"
}
],
"section": "Algorithm 1 Semi-Random Triplet Sampling",
"sec_num": null
},
{
"text": "Where L c is a binary cross entropy loss (eq 3), and L t is a triplet loss (see eq. 4). \u03bb is a parameter to weight the importance of the triplet loss and as a result the influence of the PI. In the cross-entropy loss, y i is a binary indicator for class label, and\u0177 is the prediction output from eq. 1. In the triplet loss, D is the cosine distance, m is a hyperparamater indicating the margin, e a i , e p ij , e n ij are the BERT representations for an anchor, positive and negative sample, and s is the sampling factor (i.e., how many positive and negative examples per anchor).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm 1 Semi-Random Triplet Sampling",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "L c = \u2212 1 N N i=1 y i log(\u0177 i ) + (1 \u2212 y i ) log(1 \u2212\u0177 i )",
"eq_num": "(3)"
}
],
"section": "Algorithm 1 Semi-Random Triplet Sampling",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "L_t = \\frac{1}{Ns} \\sum_{i=1}^{N} \\sum_{j=1}^{s} \\max(D(e_i^a, e_{ij}^p) - D(e_i^a, e_{ij}^n) + m, 0)",
"eq_num": "(4)"
}
],
"section": "Algorithm 1 Semi-Random Triplet Sampling",
"sec_num": null
},
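The loss terms in eqs. 2-4 can be sketched in plain, dependency-free Python (a toy illustration, not the authors' code; in practice these quantities are computed over learned BERT embeddings, and D is taken here to be one minus cosine similarity):

```python
import math

def cosine_distance(u, v):
    # D(u, v) = 1 - cos(u, v); assumes non-zero vectors
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return 1.0 - dot / (nu * nv)

def bce_loss(y_true, y_pred):
    # L_c, eq. (3): mean binary cross entropy
    n = len(y_true)
    return -sum(y * math.log(p) + (1 - y) * math.log(1 - p)
                for y, p in zip(y_true, y_pred)) / n

def triplet_loss(anchors, positives, negatives, m=0.5):
    # L_t, eq. (4): hinge on the distance gap, averaged over all N*s triplets
    total, count = 0.0, 0
    for e_a, pos_list, neg_list in zip(anchors, positives, negatives):
        for e_p, e_n in zip(pos_list, neg_list):
            total += max(cosine_distance(e_a, e_p)
                         - cosine_distance(e_a, e_n) + m, 0.0)
            count += 1
    return total / count

def multitask_loss(y_true, y_pred, anchors, positives, negatives,
                   lam=1.0, m=0.5):
    # L = L_c + lambda * L_t, eq. (2)
    return bce_loss(y_true, y_pred) + lam * triplet_loss(
        anchors, positives, negatives, m=m)
```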
{
"text": "5 Experiments",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm 1 Semi-Random Triplet Sampling",
"sec_num": null
},
{
"text": "We implement models presented in previous research on hyperbole as baseline methods for our experiments on hyperbole detection. Troiano et al. (2018) introduce an NLP pipeline style approach to detecting hyperbole in their foundational work on computational hyperbole detection. They introduce a number of hand-crafted features that are motivated by findings from cognitive linguistics on the mechanisms humans use for identifying and interpreting hyperbole. These features range from unexpectedness, imageability, polarity, subjectivity and intensity. These features are concatenated together and referred to as QQ (i.e., Qualitative and Quantitative) features by the authors, we adhere to that nomenclature and refer to our implementation of these features as QQ for the remainder of the paper. The authors experiment with several 'traditional' statistical learners for the classification layer of their pipeline. We use Logistic Regression and Naive Bayes, as those two methods were more accurate at the detection of hyperbole compared to the other methods in their experiments. We refer to these methods as LR+QQ and NB+QQ for the remainder of the paper. Follow on from that work Kong et al. (2020) leverage the QQ features adjusting them slightly to compensate for differences in language and utilise pre-trained language models (i.e., BERT) for a hyperbole detection model. The authors combine the QQ features with the output from the BERT embeddings and pass the concatenated vector to a linear classification layer. We refer to this model as BERT+QQ in the remainder of the paper. We also include a simple vanilla BERT baseline that we refer to as BERT in the remainder of the paper.",
"cite_spans": [
{
"start": 128,
"end": 149,
"text": "Troiano et al. (2018)",
"ref_id": "BIBREF28"
},
{
"start": 1184,
"end": 1202,
"text": "Kong et al. (2020)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": "5.1"
},
{
"text": "We merge the Hyperbole Corpus and Minimal Units Corpus from HYPO and split into train-devtest sets based on a 70:20:10 ratio. The Paraphrase Corpus is treated as a source of PI and thus only available at training time, also note that no sentences from HyperProbe were used for training,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment Setup",
"sec_num": "5.2"
},
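The 70:20:10 split described above can be sketched as follows (a hypothetical helper, not the authors' code; the shuffle seed is illustrative):

```python
import random

def split_70_20_10(examples, seed=0):
    """Shuffle and partition into train/dev/test at a 70:20:10 ratio."""
    rng = random.Random(seed)
    items = list(examples)
    rng.shuffle(items)
    n_train = int(0.7 * len(items))
    n_dev = int(0.2 * len(items))
    train = items[:n_train]
    dev = items[n_train:n_train + n_dev]
    test = items[n_train + n_dev:]
    return train, dev, test
```

The Paraphrase Corpus would be excluded from these splits and consumed only by the triplet sampler at training time.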
{
"text": "[Example triplets from the sampling table: 'When the girl lost her puppy she cried an ocean of tears.'; 'The little girl was drowning in her tears.'; 'The little girl was crying a lot.'*; 'I was crying for leaving my home.'; 'My dad'll be very angry when he finds out that I wrecked his car.'*; 'My dad'll hit the roof when he finds out that I wrecked his car.'; 'Football is important to him.'*; 'Football is his oxygen.']",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment Setup",
"sec_num": "5.2"
},
{
"text": "My dad'll be very angry when he finds out that I wrecked his car.* My dad'll hit the roof when he finds out that I wrecked his car. only testing. Overall we are left with four test datasets, HYPO, Extreme Case Formulations, Qualitative Hyperbole and Quantitative Hyperbole. We perform grid-search to find optimal hyperparameters for BERT, BERT+QQ, BERT+PI, see Table 8 .",
"cite_spans": [],
"ref_spans": [
{
"start": 361,
"end": 369,
"text": "Table 8",
"ref_id": "TABREF10"
}
],
"eq_spans": [],
"section": "Positive",
"sec_num": null
},
{
"text": "6 Results",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Positive",
"sec_num": null
},
{
"text": "Results of our experiments on HYPO show that models that incorporate PI outperform the baselines, with respect to F 1 score, see Table 9 . We see a .071 (10%) increase in F1 for BERT+PI over the best performing baseline (LR+QQ). We use LIME (Ribeiro et al., 2016) to provide explanations for model predictions, see Figure 2 . From this Figure we see examples that suggest that the increase in both precision and recall for BERT+PI seen in Table 9 is a result of a better contextual understanding of hyperbole-prone ECF terms. The first two examples in particular highlight the understanding of the word 'brainless' in both a hyperbolic and ",
"cite_spans": [
{
"start": 236,
"end": 263,
"text": "LIME (Ribeiro et al., 2016)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 129,
"end": 136,
"text": "Table 9",
"ref_id": "TABREF11"
},
{
"start": 315,
"end": 323,
"text": "Figure 2",
"ref_id": null
},
{
"start": 336,
"end": 342,
"text": "Figure",
"ref_id": null
},
{
"start": 439,
"end": 446,
"text": "Table 9",
"ref_id": "TABREF11"
}
],
"eq_spans": [],
"section": "HYPO",
"sec_num": "6.1"
},
{
"text": "From Table 10 we see models that incorporate PI provide improvements in detecting ECF hyperbole, .023 increase in F1, compared to LR+QQ. This aligns with results observed in Section 6.1 regarding the better understanding of hyperboleprone ECF words in hyperbolic and non-hyperbolic contexts by BERT+PI compared to the baselines. We provide LIME explanations, (see Figure 3) , and again observe examples that indicate a better contextual understanding of hyperbole-prone ECF terms by BERT+PI.",
"cite_spans": [],
"ref_spans": [
{
"start": 5,
"end": 13,
"text": "Table 10",
"ref_id": "TABREF0"
},
{
"start": 364,
"end": 373,
"text": "Figure 3)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Extreme Case Formulations",
"sec_num": "6.2"
},
{
"text": "From Table 11 we observe that all models struggle to detect qualitative hyperbolic expressions, BERT+PI achieves the highest F 1 of only 0.527 with a sub-0.5 precision of 0.486. With respect to variance we see many models with wild variances in recall, (.529, .497 ), suggesting that some of these runs are degenerating to outputting all positive class or all negative class predictions. These results suggest that qualitative hyperbole is harder to detect than ECF hyperbole. Table 12 we see that all models struggle to detect quantitative hyperbole and display a similar pattern of high recall (0.633 to 0.800) and low precision (0.463 to 0.5).",
"cite_spans": [
{
"start": 253,
"end": 264,
"text": "(.529, .497",
"ref_id": null
}
],
"ref_spans": [
{
"start": 5,
"end": 13,
"text": "Table 11",
"ref_id": "TABREF0"
},
{
"start": 477,
"end": 485,
"text": "Table 12",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Qualitative Hyperbole",
"sec_num": "6.2.1"
},
{
"text": "From an analysis of LIME explanations we identified one particular decision pattern as the source of many false positives. For sentences generated using the comparative sentence template (i.e., {MASK}{MASK} is as {JJ} as {MASK}{MASK}), the model always predicts a hyperbole irrespective of the comparison being made (see Figure 4) . We observe that the first word of the sentence and the words and phrases 'is', 'as', 'is as' and 'as a' are the most influential words that lead to the decision to classify the sentence as a hyperbole. Our hypothesis for this error is that the literal paraphrases of hyperbolic expressions that take this form remove many tokens from the original sentence (e.g., 'He's as mad as a hippo with a hernia' \u2192 'He's very mad'). We suspect this contributes to particular words and phrases (e.g., 'is as' and 'as a') being incorrectly considered hyperbolic because they were removed from the original sentence during the literal paraphrase. We also note, that this is a particularly common form of hyperbolic expression in the training data (e.g., 'There lived a man as big as a barge' 'He has as many debts as a dog has fleas', 'He's as mad as a hippo with a hernia'. 'you look as white as a ghost').",
"cite_spans": [],
"ref_spans": [
{
"start": 321,
"end": 330,
"text": "Figure 4)",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Qualitative Hyperbole",
"sec_num": "6.2.1"
},
{
"text": "Troiano et al. (2018) posed the hyperbole detection task as a binary sequence classification task and introduced a dataset of annotated hyperbole as a benchmark for this task. The existing methods for detecting hyperbole, albeit scant, share similarities to methodologies for solving the problem of detecting other figures of speech. Generally, features are hand-crafted based on linguistic insights of a particular phenomenon (e.g., hyperbole) then combined with general purpose representations of textual content Joshi et al., 2016; Troiano et al., 2018; Abulaish et al., 2020) . We see this in sarcasm detection (Joshi et al., 2016) , irony detection and metaphor detection (Jang et al., 2015) . With respect to hyperbole, we see this approach in the foundation work on hyperbole detection (Troiano et al., 2018) . Approaches to figurative language detection based on deep learning models have been also developed, such as irony detection (Huang et al., 2017) , sarcasm detection (Ghosh and Veale, 2016) and metaphor detection (Wu et al., 2018) . With respect to hyperbole detection, research has shown that deep learning improves accuracy on the task of detection of hyperbole in Mandarin Chinese compared to the use of traditional statistical learners (Kong et al., 2020) . We extend upon both of these works by introducing a new model for hyperbole detection and introducing new data to evaluate hyperbole detection models.",
"cite_spans": [
{
"start": 515,
"end": 534,
"text": "Joshi et al., 2016;",
"ref_id": "BIBREF11"
},
{
"start": 535,
"end": 556,
"text": "Troiano et al., 2018;",
"ref_id": "BIBREF28"
},
{
"start": 557,
"end": 579,
"text": "Abulaish et al., 2020)",
"ref_id": "BIBREF0"
},
{
"start": 615,
"end": 635,
"text": "(Joshi et al., 2016)",
"ref_id": "BIBREF11"
},
{
"start": 677,
"end": 696,
"text": "(Jang et al., 2015)",
"ref_id": "BIBREF10"
},
{
"start": 793,
"end": 815,
"text": "(Troiano et al., 2018)",
"ref_id": "BIBREF28"
},
{
"start": 942,
"end": 962,
"text": "(Huang et al., 2017)",
"ref_id": "BIBREF9"
},
{
"start": 983,
"end": 1006,
"text": "(Ghosh and Veale, 2016)",
"ref_id": "BIBREF8"
},
{
"start": 1030,
"end": 1047,
"text": "(Wu et al., 2018)",
"ref_id": "BIBREF31"
},
{
"start": 1257,
"end": 1276,
"text": "(Kong et al., 2020)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "7"
},
{
"text": "Recent research in NLP, and machine learning in general, has focused on explainability and interpretability. Understanding the reasoning behind decisions made by increasingly complex models on increasingly complicated data is a core challenge and can be a roadblock to research progress (Ribeiro et al., 2016, 2020; Bhatt et al., 2020; Linardatos et al., 2021). We design a suite of synthetic test sentences to probe the capabilities of hyperbole detection models and utilise the LIME framework (Ribeiro et al., 2016) for local explainability to understand the reasoning behind the decisions made by hyperbole detection models. Our approaches to probing and explainability are based on existing efforts to uncover meaning in decisions made by NLP models (Ribeiro et al., 2016, 2020; Rogers et al., 2020; Liu et al., 2021).",
"cite_spans": [
{
"start": 314,
"end": 335,
"text": "(Ribeiro et al., 2016",
"ref_id": "BIBREF24"
},
{
"start": 336,
"end": 359,
"text": "(Ribeiro et al., , 2020",
"ref_id": "BIBREF25"
},
{
"start": 360,
"end": 379,
"text": "Bhatt et al., 2020;",
"ref_id": "BIBREF3"
},
{
"start": 380,
"end": 404,
"text": "Linardatos et al., 2021)",
"ref_id": "BIBREF13"
},
{
"start": 798,
"end": 819,
"text": "(Ribeiro et al., 2016",
"ref_id": "BIBREF24"
},
{
"start": 820,
"end": 843,
"text": "(Ribeiro et al., , 2020",
"ref_id": "BIBREF25"
},
{
"start": 844,
"end": 864,
"text": "Rogers et al., 2020;",
"ref_id": "BIBREF26"
},
{
"start": 865,
"end": 882,
"text": "Liu et al., 2021)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "7"
},
{
"text": "In this paper we proposed a hyperbole detection model, BERT+PI, which incorporates privileged information (PI) via a triplet loss, combined with a pre-trained language model (BERT), in a multi-task text classification framework for hyperbole detection.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "8"
},
{
"text": "Experimental results showed improvements in detection, measured with standard metrics (i.e., F1, precision and recall), for models that incorporate PI on the HYPO test set. However, these results did not carry over to our synthetic test suite HyperProbe: only on the ECF test did we observe comparable results, while on both the quantitative and qualitative hyperbole tests we observed poor performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "8"
},
{
"text": "Our hypothesis for this disparity is that the incorporation of PI teaches BERT+PI to exploit annotation artifacts introduced by the creation of literal paraphrases in the Paraphrase Corpus of HYPO. Specifically, ECF hyperbole can often be paraphrased simply by removing a few tokens (e.g., what an absolute idiot \u2192 what an idiot). BERT+PI incorporates this information effectively and, as a result, appears able to differentiate between hyperbolic and non-hyperbolic ECFs. For more complex hyperbole, however, unwanted annotation artifacts are introduced during the creation of a literal paraphrase. For example, 'my heart is as heavy as the world' could be paraphrased as 'i am sad'; here, the contrast and the semantic scale of the hyperbole are lost, given the significant difference between the two expressions. In future work, exploring annotation methods for complex hyperbole that encode the semantic scale and the source of excessive contrast will be an important focus for overcoming the shortcomings caused by these artifacts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "8"
},
{
"text": "https://github.com/marcotcr/checklist",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "A survey of figurative language and its computational detection in online social networks",
"authors": [
{
"first": "Muhammad",
"middle": [],
"last": "Abulaish",
"suffix": ""
},
{
"first": "Ashraf",
"middle": [],
"last": "Kamal",
"suffix": ""
},
{
"first": "Mohammed",
"middle": [
"J"
],
"last": "Zaki",
"suffix": ""
}
],
"year": 2020,
"venue": "ACM Transactions on the Web (TWEB)",
"volume": "14",
"issue": "1",
"pages": "1--52",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Muhammad Abulaish, Ashraf Kamal, and Mohammed J Zaki. 2020. A survey of figurative language and its computational detection in online social networks. ACM Transactions on the Web (TWEB), 14(1):1-52.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Automatic detection of irony and humour in twitter",
"authors": [
{
"first": "Francesco",
"middle": [],
"last": "Barbieri",
"suffix": ""
},
{
"first": "Horacio",
"middle": [],
"last": "Saggion",
"suffix": ""
}
],
"year": 2014,
"venue": "ICCC",
"volume": "",
"issue": "",
"pages": "155--162",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Francesco Barbieri and Horacio Saggion. 2014. Automatic detection of irony and humour in twitter. In ICCC, pages 155-162.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Modelling sarcasm in twitter, a novel approach",
"authors": [
{
"first": "Francesco",
"middle": [],
"last": "Barbieri",
"suffix": ""
},
{
"first": "Horacio",
"middle": [],
"last": "Saggion",
"suffix": ""
},
{
"first": "Francesco",
"middle": [],
"last": "Ronzano",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 5th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis",
"volume": "",
"issue": "",
"pages": "50--58",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Francesco Barbieri, Horacio Saggion, and Francesco Ronzano. 2014. Modelling sarcasm in twitter, a novel approach. In Proceedings of the 5th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis, pages 50-58.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Explainable machine learning in deployment",
"authors": [
{
"first": "Umang",
"middle": [],
"last": "Bhatt",
"suffix": ""
},
{
"first": "Alice",
"middle": [],
"last": "Xiang",
"suffix": ""
},
{
"first": "Shubham",
"middle": [],
"last": "Sharma",
"suffix": ""
},
{
"first": "Adrian",
"middle": [],
"last": "Weller",
"suffix": ""
},
{
"first": "Ankur",
"middle": [],
"last": "Taly",
"suffix": ""
},
{
"first": "Yunhan",
"middle": [],
"last": "Jia",
"suffix": ""
},
{
"first": "Joydeep",
"middle": [],
"last": "Ghosh",
"suffix": ""
},
{
"first": "Ruchir",
"middle": [],
"last": "Puri",
"suffix": ""
},
{
"first": "Jos\u00e9",
"middle": [
"MF"
],
"last": "Moura",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Eckersley",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency",
"volume": "",
"issue": "",
"pages": "648--657",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Umang Bhatt, Alice Xiang, Shubham Sharma, Adrian Weller, Ankur Taly, Yunhan Jia, Joydeep Ghosh, Ruchir Puri, Jos\u00e9 MF Moura, and Peter Eckersley. 2020. Explainable machine learning in deployment. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, pages 648-657.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Hip: A method for linguistic hyperbole identification in discourse",
"authors": [
{
"first": "Christian",
"middle": [],
"last": "Burgers",
"suffix": ""
},
{
"first": "Britta",
"middle": [
"C"
],
"last": "Brugman",
"suffix": ""
},
{
"first": "Kiki",
"middle": [
"Y"
],
"last": "Renardel de Lavalette",
"suffix": ""
},
{
"first": "Gerard",
"middle": [
"J"
],
"last": "Steen",
"suffix": ""
}
],
"year": 2016,
"venue": "Metaphor and Symbol",
"volume": "31",
"issue": "3",
"pages": "163--178",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christian Burgers, Britta C Brugman, Kiki Y Renardel de Lavalette, and Gerard J Steen. 2016. Hip: A method for linguistic hyperbole identification in discourse. Metaphor and Symbol, 31(3):163-178.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Hyperbolic language and its relation to metaphor and irony",
"authors": [
{
"first": "Robyn",
"middle": [],
"last": "Carston",
"suffix": ""
},
{
"first": "Catherine",
"middle": [],
"last": "Wearing",
"suffix": ""
}
],
"year": 2015,
"venue": "Journal of Pragmatics",
"volume": "79",
"issue": "",
"pages": "79--92",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Robyn Carston and Catherine Wearing. 2015. Hyperbolic language and its relation to metaphor and irony. Journal of Pragmatics, 79:79-92.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Hyperbole in English: A corpus-based study of exaggeration",
"authors": [
{
"first": "Claudia",
"middle": [],
"last": "Claridge",
"suffix": ""
}
],
"year": 2010,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Claudia Claridge. 2010. Hyperbole in English: A corpus-based study of exaggeration. Cambridge University Press.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Extreme case formulations: Softeners, investment, and doing nonliteral",
"authors": [
{
"first": "Derek",
"middle": [],
"last": "Edwards",
"suffix": ""
}
],
"year": 2000,
"venue": "Research on language and social interaction",
"volume": "33",
"issue": "4",
"pages": "347--373",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Derek Edwards. 2000. Extreme case formulations: Softeners, investment, and doing nonliteral. Research on language and social interaction, 33(4):347-373.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Fracking sarcasm using neural network",
"authors": [
{
"first": "Aniruddha",
"middle": [],
"last": "Ghosh",
"suffix": ""
},
{
"first": "Tony",
"middle": [],
"last": "Veale",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 7th workshop on computational approaches to subjectivity, sentiment and social media analysis",
"volume": "",
"issue": "",
"pages": "161--169",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aniruddha Ghosh and Tony Veale. 2016. Fracking sarcasm using neural network. In Proceedings of the 7th workshop on computational approaches to subjectivity, sentiment and social media analysis, pages 161-169.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Irony detection with attentive recurrent neural networks",
"authors": [
{
"first": "Yu-Hsiang",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Hen-Hsen",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Hsin-Hsi",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2017,
"venue": "European Conference on Information Retrieval",
"volume": "",
"issue": "",
"pages": "534--540",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yu-Hsiang Huang, Hen-Hsen Huang, and Hsin-Hsi Chen. 2017. Irony detection with attentive recurrent neural networks. In European Conference on Information Retrieval, pages 534-540. Springer.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Metaphor detection in discourse",
"authors": [
{
"first": "Hyeju",
"middle": [],
"last": "Jang",
"suffix": ""
},
{
"first": "Seungwhan",
"middle": [],
"last": "Moon",
"suffix": ""
},
{
"first": "Yohan",
"middle": [],
"last": "Jo",
"suffix": ""
},
{
"first": "Carolyn",
"middle": [],
"last": "Rose",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 16th Annual Meeting of the Special Interest Group on Discourse and Dialogue",
"volume": "",
"issue": "",
"pages": "384--392",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hyeju Jang, Seungwhan Moon, Yohan Jo, and Carolyn Rose. 2015. Metaphor detection in discourse. In Proceedings of the 16th Annual Meeting of the Special Interest Group on Discourse and Dialogue, pages 384-392.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Harnessing sequence labeling for sarcasm detection in dialogue from tv series 'friends",
"authors": [
{
"first": "Aditya",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Vaibhav",
"middle": [],
"last": "Tripathi",
"suffix": ""
},
{
"first": "Pushpak",
"middle": [],
"last": "Bhattacharyya",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Carman",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of The 20th SIGNLL Conference on Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "146--155",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aditya Joshi, Vaibhav Tripathi, Pushpak Bhattacharyya, and Mark Carman. 2016. Harnessing sequence labeling for sarcasm detection in dialogue from tv series 'friends'. In Proceedings of The 20th SIGNLL Conference on Computational Natural Language Learning, pages 146-155.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "An empirical study of hyperbole",
"authors": [
{
"first": "Li",
"middle": [],
"last": "Kong",
"suffix": ""
},
{
"first": "Chuanyi",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Jidong",
"middle": [],
"last": "Ge",
"suffix": ""
},
{
"first": "Bin",
"middle": [],
"last": "Luo",
"suffix": ""
},
{
"first": "Vincent",
"middle": [],
"last": "Ng",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "7024--7034",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Li Kong, Chuanyi Li, Jidong Ge, Bin Luo, and Vincent Ng. 2020. An empirical study of hyperbole. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 7024-7034.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Explainable ai: A review of machine learning interpretability methods",
"authors": [
{
"first": "Pantelis",
"middle": [],
"last": "Linardatos",
"suffix": ""
},
{
"first": "Vasilis",
"middle": [],
"last": "Papastefanopoulos",
"suffix": ""
},
{
"first": "Sotiris",
"middle": [],
"last": "Kotsiantis",
"suffix": ""
}
],
"year": 2021,
"venue": "Entropy",
"volume": "23",
"issue": "1",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pantelis Linardatos, Vasilis Papastefanopoulos, and Sotiris Kotsiantis. 2021. Explainable ai: A review of machine learning interpretability methods. Entropy, 23(1):18.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Probing across time: What does roberta know and when? arXiv preprint",
"authors": [
{
"first": "Leo",
"middle": [
"Z"
],
"last": "Liu",
"suffix": ""
},
{
"first": "Yizhong",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Jungo",
"middle": [],
"last": "Kasai",
"suffix": ""
},
{
"first": "Hannaneh",
"middle": [],
"last": "Hajishirzi",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2104.07885"
]
},
"num": null,
"urls": [],
"raw_text": "Leo Z Liu, Yizhong Wang, Jungo Kasai, Hannaneh Hajishirzi, and Noah A Smith. 2021. Probing across time: What does roberta know and when? arXiv preprint arXiv:2104.07885.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Roberta: A robustly optimized bert pretraining approach",
"authors": [
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Jingfei",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Mandar",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1907.11692"
]
},
"num": null,
"urls": [],
"raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "there's millions of them\": hyperbole in everyday conversation",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Mccarthy",
"suffix": ""
},
{
"first": "Ronald",
"middle": [],
"last": "Carter",
"suffix": ""
}
],
"year": 2004,
"venue": "Journal of pragmatics",
"volume": "36",
"issue": "2",
"pages": "149--184",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael McCarthy and Ronald Carter. 2004. \"there's millions of them\": hyperbole in everyday conversation. Journal of pragmatics, 36(2):149-184.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Learning using privileged information for food recognition",
"authors": [
{
"first": "Lei",
"middle": [],
"last": "Meng",
"suffix": ""
},
{
"first": "Long",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Xun",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Dacheng",
"middle": [],
"last": "Tao",
"suffix": ""
},
{
"first": "Hanwang",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Chunyan",
"middle": [],
"last": "Miao",
"suffix": ""
},
{
"first": "Tat-Seng",
"middle": [],
"last": "Chua",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 27th ACM International Conference on Multimedia, MM '19",
"volume": "",
"issue": "",
"pages": "557--565",
"other_ids": {
"DOI": [
"10.1145/3343031.3350870"
]
},
"num": null,
"urls": [],
"raw_text": "Lei Meng, Long Chen, Xun Yang, Dacheng Tao, Hanwang Zhang, Chunyan Miao, and Tat-Seng Chua. 2019. Learning using privileged information for food recognition. In Proceedings of the 27th ACM International Conference on Multimedia, MM '19, pages 557-565, New York, NY, USA. Association for Computing Machinery.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "All or nothing: A semantic analysis of hyperbole",
"authors": [
{
"first": "Laura",
"middle": [],
"last": "Cano Mora",
"suffix": ""
}
],
"year": 2009,
"venue": "Revista de Ling\u00fc\u00edstica y lenguas Aplicadas",
"volume": "4",
"issue": "1",
"pages": "25--35",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Laura Cano Mora. 2009. All or nothing: A semantic analysis of hyperbole. Revista de Ling\u00fc\u00edstica y lenguas Aplicadas, 4(1):25-35.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Hyperbole, extreme case formulation",
"authors": [
{
"first": "Neal",
"middle": [
"R"
],
"last": "Norrick",
"suffix": ""
}
],
"year": 2004,
"venue": "Journal of Pragmatics",
"volume": "36",
"issue": "9",
"pages": "1727--1739",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Neal R Norrick. 2004. Hyperbole, extreme case formulation. Journal of Pragmatics, 36(9):1727-1739.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "On the theory of learning with privileged information",
"authors": [
{
"first": "Dmitry",
"middle": [],
"last": "Pechyony",
"suffix": ""
},
{
"first": "Vladimir",
"middle": [],
"last": "Vapnik",
"suffix": ""
}
],
"year": 2010,
"venue": "Advances in neural information processing systems",
"volume": "23",
"issue": "",
"pages": "1894--1902",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dmitry Pechyony and Vladimir Vapnik. 2010. On the theory of learning with privileged information. Advances in neural information processing systems, 23:1894-1902.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Extreme case formulations: A way of legitimizing claims",
"authors": [
{
"first": "Anita",
"middle": [],
"last": "Pomerantz",
"suffix": ""
}
],
"year": 1986,
"venue": "Human studies",
"volume": "9",
"issue": "2-3",
"pages": "219--229",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anita Pomerantz. 1986. Extreme case formulations: A way of legitimizing claims. Human studies, 9(2-3):219-229.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "A survey on computational metaphor processing",
"authors": [
{
"first": "Sunny",
"middle": [],
"last": "Rai",
"suffix": ""
},
{
"first": "Shampa",
"middle": [],
"last": "Chakraverty",
"suffix": ""
}
],
"year": 2020,
"venue": "ACM Computing Surveys (CSUR)",
"volume": "53",
"issue": "2",
"pages": "1--37",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sunny Rai and Shampa Chakraverty. 2020. A survey on computational metaphor processing. ACM Computing Surveys (CSUR), 53(2):1-37.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "On the difficulty of automatically detecting irony: beyond a simple case of negation",
"authors": [
{
"first": "Antonio",
"middle": [],
"last": "Reyes",
"suffix": ""
},
{
"first": "Paolo",
"middle": [],
"last": "Rosso",
"suffix": ""
}
],
"year": 2014,
"venue": "Knowledge and Information Systems",
"volume": "40",
"issue": "3",
"pages": "595--614",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Antonio Reyes and Paolo Rosso. 2014. On the difficulty of automatically detecting irony: beyond a simple case of negation. Knowledge and Information Systems, 40(3):595-614.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "\"Why should I trust you?\" Explaining the predictions of any classifier",
"authors": [
{
"first": "Marco",
"middle": [
"Tulio"
],
"last": "Ribeiro",
"suffix": ""
},
{
"first": "Sameer",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "Carlos",
"middle": [],
"last": "Guestrin",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining",
"volume": "",
"issue": "",
"pages": "1135--1144",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. 2016. \"Why should I trust you?\" Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pages 1135-1144.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Beyond accuracy: Behavioral testing of NLP models with CheckList",
"authors": [
{
"first": "Marco",
"middle": [
"Tulio"
],
"last": "Ribeiro",
"suffix": ""
},
{
"first": "Tongshuang",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Carlos",
"middle": [],
"last": "Guestrin",
"suffix": ""
},
{
"first": "Sameer",
"middle": [],
"last": "Singh",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "4902--4912",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.442"
]
},
"num": null,
"urls": [],
"raw_text": "Marco Tulio Ribeiro, Tongshuang Wu, Carlos Guestrin, and Sameer Singh. 2020. Beyond accuracy: Behavioral testing of NLP models with CheckList. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4902-4912, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "A primer in bertology: What we know about how bert works",
"authors": [
{
"first": "Anna",
"middle": [],
"last": "Rogers",
"suffix": ""
},
{
"first": "Olga",
"middle": [],
"last": "Kovaleva",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Rumshisky",
"suffix": ""
}
],
"year": 2020,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "8",
"issue": "",
"pages": "842--866",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anna Rogers, Olga Kovaleva, and Anna Rumshisky. 2020. A primer in bertology: What we know about how bert works. Transactions of the Association for Computational Linguistics, 8:842-866.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Learning with privileged information for photo aesthetic assessment",
"authors": [
{
"first": "Yangyang",
"middle": [],
"last": "Shu",
"suffix": ""
},
{
"first": "Qian",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Shaowu",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Guandong",
"middle": [],
"last": "Xu",
"suffix": ""
}
],
"year": 2020,
"venue": "Neurocomputing",
"volume": "404",
"issue": "",
"pages": "304--316",
"other_ids": {
"DOI": [
"10.1016/j.neucom.2020.04.142"
]
},
"num": null,
"urls": [],
"raw_text": "Yangyang Shu, Qian Li, Shaowu Liu, and Guandong Xu. 2020. Learning with privileged information for photo aesthetic assessment. Neurocomputing, 404:304-316.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "A computational exploration of exaggeration",
"authors": [
{
"first": "Enrica",
"middle": [],
"last": "Troiano",
"suffix": ""
},
{
"first": "Carlo",
"middle": [],
"last": "Strapparava",
"suffix": ""
},
{
"first": "G\u00f6zde",
"middle": [],
"last": "\u00d6zbal",
"suffix": ""
},
{
"first": "Serra Sinem",
"middle": [],
"last": "Tekiroglu",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "3296--3304",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Enrica Troiano, Carlo Strapparava, G\u00f6zde \u00d6zbal, and Serra Sinem Tekiroglu. 2018. A computational exploration of exaggeration. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3296-3304.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Extreme-case formulations",
"authors": [
{
"first": "Kevin",
"middle": [
"A"
],
"last": "Whitehead",
"suffix": ""
}
],
"year": 2015,
"venue": "The international encyclopedia of language and social interaction",
"volume": "",
"issue": "",
"pages": "1--5",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kevin A Whitehead. 2015. Extreme-case formulations. The international encyclopedia of language and so- cial interaction, pages 1-5.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Sampling matters in deep embedding learning",
"authors": [
{
"first": "Chao-Yuan",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Manmatha",
"suffix": ""
},
{
"first": "Alexander",
"middle": [
"J"
],
"last": "Smola",
"suffix": ""
},
{
"first": "Philipp",
"middle": [],
"last": "Krahenbuhl",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the IEEE International Conference on Computer Vision",
"volume": "",
"issue": "",
"pages": "2840--2848",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chao-Yuan Wu, R Manmatha, Alexander J Smola, and Philipp Krahenbuhl. 2017. Sampling matters in deep embedding learning. In Proceedings of the IEEE International Conference on Computer Vision, pages 2840-2848.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Neural metaphor detecting with cnn-lstm model",
"authors": [
{
"first": "Chuhan",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Fangzhao",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Yubo",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Sixing",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Zhigang",
"middle": [],
"last": "Yuan",
"suffix": ""
},
{
"first": "Yongfeng",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Workshop on Figurative Language Processing",
"volume": "",
"issue": "",
"pages": "110--114",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chuhan Wu, Fangzhao Wu, Yubo Chen, Sixing Wu, Zhigang Yuan, and Yongfeng Huang. 2018. Neural metaphor detecting with cnn-lstm model. In Proceedings of the Workshop on Figurative Language Processing, pages 110-114.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"type_str": "figure",
"num": null,
"text": "BERT+PI. Model contains a BERT encoder, a linear classification head and a Triplet Sampler. We incorporate PI via the triplet sampler."
},
"FIGREF1": {
"uris": null,
"type_str": "figure",
"num": null,
"text": "Model Explanation Comparisons - HYPO and ECF Tests. LIME word weightings indicate the importance of a word for a particular class; orange highlights indicate hyperbolic words, blue highlights indicate non-hyperbolic words. P(h) is the prediction probability that a sentence is hyperbolic, with red indicating an incorrect classification (assuming a .5 decision threshold)."
},
"FIGREF2": {
"uris": null,
"type_str": "figure",
"num": null,
"text": "LIME Explanations -Quantitative Dimensions"
},
"TABREF0": {
"text": "HYPO examples. Hyperbole Corpus contains original hyperbolic utterances. Paraphrase Corpus contains a literal paraphrase. Minimal Units Corpus contains examples that use the hyperbolic words/phrases in a non-hyperbolic context.",
"type_str": "table",
"html": null,
"num": null,
"content": "<table/>"
},
"TABREF1": {
"text": "",
"type_str": "table",
"html": null,
"num": null,
"content": "<table/>"
},
"TABREF3": {
"text": "Template Example {DT}{MASK}{MASK}{VB}{JJ} the dishonest words are endless {DT}{JJ}{MASK}{VB}{MASK} the endless combinations are daunting {DT}{MASK}{MASK}{RB}{VBa} the code was never cracked {DT}{MASK}{MASK}{RB}{MASK} the good times always roll {DT}{MASK}{VB l }{RB}{MASK} the dog was never silent {DT}{MASK}{MASK}{VB l }{RB} the drug problem is everywhere {DT}{MASK}{MASK}{DT}{MASK} The mother of every invention {DT}{MASK}{MASK}{IN}{MASK} all rights reserved in copyright {DT}{MASK}{VB}{MASK}{MASK} every child will be impacted {DT}{MASK}{MASK}{MASK}{PRON} The law applies to everybody {PRON}{IN}{DT}{MASK}{VB}{MASK} nobody on the street is home",
"type_str": "table",
"html": null,
"num": null,
"content": "<table/>"
},
"TABREF4": {
"text": "Extreme Case Formulation Test Examples. Template shows templates as provided to CheckList, Example is an example sentence as generated by CheckList.",
"type_str": "table",
"html": null,
"num": null,
"content": "<table><tr><td>Template</td><td>Example</td></tr><tr><td>{DT}{MASK}{MASK}{VB}{MASK}{JJ}</td><td>a world that is truly wicked</td></tr><tr><td>{DT}{MASK}{VB}{JJ}</td><td>The argument is confusing</td></tr><tr><td>{DT}{MASK}{VB}{MASK}{JJ}</td><td>The wine is very bitter</td></tr><tr><td>{DT}{MASK}{MASK}{VB}{JJ}</td><td>the oil residue is toxic</td></tr><tr><td>{DT}{JJ}{MASK}{VB}{MASK}</td><td>A great story was completed</td></tr><tr><td>{DT}{JJ}{MASK}{VB}{MASK}{MASK}</td><td>The shocking video was posted here</td></tr></table>"
},
"TABREF5": {
"text": "",
"type_str": "table",
"html": null,
"num": null,
"content": "<table/>"
},
"TABREF6": {
"text": "TemplateExample {MASK}{MASK} is as {JJ} as {MASK}{MASK} my heart is as heavy as the world {MASK}{MASK} is {JJR} than {MASK}{MASK} this version is longer than I expected",
"type_str": "table",
"html": null,
"num": null,
"content": "<table/>"
},
"TABREF7": {
"text": "Quantitative Dimensions Test Examples. Template shows templates as provided to CheckList, Example is an example sentence generated by CheckList.",
"type_str": "table",
"html": null,
"num": null,
"content": "<table><tr><td>Anchor</td><td>Positive</td><td>Negative</td></tr><tr><td>Inviting my mother-in-law to stay here</td><td>He eats a mountain of junk food.</td><td>He eats a lot of junk food.*</td></tr><tr><td>is a recipe for disaster.</td><td/><td/></tr><tr><td>This supersonic airliner breaks the sound</td><td/><td/></tr><tr><td>barrier.</td><td/><td/></tr></table>"
},
"TABREF8": {
"text": "Semi-Random Triplet Sampling -Example Triplets. Anchor indicates an anchor text. Positive indicates a positive text. Negative indicates negative text. Note: * indicates that the example is PI.",
"type_str": "table",
"html": null,
"num": null,
"content": "<table/>"
},
"TABREF9": {
"text": "Triplet Samples. Examples of anchor, positive and negative samples generated by triplet sampler. Note: * indicates PI.",
"type_str": "table",
"html": null,
"num": null,
"content": "<table><tr><td>Hyperparameter</td><td>Values</td></tr><tr><td>Dropout</td><td>0.1, 0.2, 0.3</td></tr><tr><td>Learning Rate</td><td>1e-04, 1e-05, 1e-06</td></tr><tr><td>\u03bb</td><td>0.25, 0.5, 1</td></tr><tr><td colspan=\"2\">s (Sampling Factor) 1, 3, 5</td></tr><tr><td>Encoder</td><td>BERT, RoBERTa</td></tr></table>"
},
"TABREF10": {
"text": "Hyperparameter search. Hyperparameter indicates the hyperparameter. Values indicates the values used in search. Note: Not all parameters are applicable for all models (i.e., \u03bb, s only required for BERT+PI)",
"type_str": "table",
"html": null,
"num": null,
"content": "<table><tr><td/><td>.</td><td/><td/></tr><tr><td>Model</td><td>F1</td><td>Precision</td><td>Recall</td></tr><tr><td>LR+QQ</td><td>0.710(-)</td><td>0.679(-)</td><td>0.745(-)</td></tr><tr><td>NB+QQ</td><td>0.693(-)</td><td>0.689(-)</td><td>0.696(-)</td></tr><tr><td>BERT</td><td colspan=\"3\">0.709(.064) 0.711(.077) 0.735(.177)</td></tr><tr><td colspan=\"4\">BERT+QQ 0.671(.086) 0.650(.147) 0.765(.246)</td></tr><tr><td>BERT+PI</td><td colspan=\"3\">0.781(.012) 0.754(.053) 0.814(.039)</td></tr></table>"
},
"TABREF11": {
"text": "HYPO Results. We provide the mean F1, precision and recall score as well as standard deviation across three runs for all models.",
"type_str": "table",
"html": null,
"num": null,
"content": "<table/>"
},
"TABREF13": {
"text": "Hyperprobe Results. Extreme Case Formulations",
"type_str": "table",
"html": null,
"num": null,
"content": "<table><tr><td>Model</td><td>F1</td><td>Precision</td><td>Recall</td></tr><tr><td>BERT</td><td>0.407(-)</td><td>0.333(-)</td><td>0.522(-)</td></tr><tr><td>BERT</td><td>0.336(-)</td><td>0.400(-)</td><td>0.290(-)</td></tr><tr><td>BERT</td><td colspan=\"3\">0.278(.275) 0.240(.209) 0.401(.497)</td></tr><tr><td colspan=\"4\">BERT+QQ 0.352(.307) 0.255(.227) 0.599(.529)</td></tr><tr><td>BERT+PI</td><td colspan=\"2\">0.527(.030) .486(.054)</td><td>0.590(.089)</td></tr></table>"
},
"TABREF14": {
"text": "Hyperprobe Results.",
"type_str": "table",
"html": null,
"num": null,
"content": "<table><tr><td>Qualitative Hyper-</td></tr></table>"
},
"TABREF15": {
"text": "Hyperprobe Results.",
"type_str": "table",
"html": null,
"num": null,
"content": "<table><tr><td>Quantitative Dimen-</td></tr><tr><td>sions</td></tr><tr><td>6.2.2 Quantitative Hyperbole</td></tr><tr><td>From</td></tr></table>"
}
}
}
}