{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T09:42:45.696640Z"
},
"title": "",
"authors": [],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [],
"body_text": [
{
"text": "Figurative language processing is a rapidly growing area in Natural Language Processing (NLP), encompassing the processing of metaphors, idioms, puns, irony, sarcasm and other figures. Characteristic of all areas of human activity (from poetic to ordinary to scientific) and, thus, of all types of discourse, figurative language poses an important problem for NLP systems. Its ubiquity in language has been established in a number of corpus studies, and the role it plays in human reasoning has been confirmed in psychological experiments. This makes figurative language an important research area for computational and cognitive linguistics, and its automatic identification and interpretation indispensable for any semantics-oriented NLP application.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": null
},
{
"text": "This workshop is the second in a series of biennial workshops on Figurative Language Processing. The series builds upon the successful Metaphor in NLP workshop series (at NAACL-HLT 2013, ACL 2014, NAACL-HLT 2015 and NAACL-HLT 2016), expanding its scope to incorporate the rapidly growing body of research on other types of figurative language, such as sarcasm, irony and puns, with the aim of maintaining and nourishing a community of NLP researchers interested in this topic. The workshop features both regular research papers and two shared tasks on metaphor and sarcasm detection. In the regular research track, we received 20 research paper submissions and accepted 9 (3 oral presentations and 6 posters). The two shared tasks serve to benchmark computational approaches to metaphor and sarcasm, clarifying the state of this steadily growing field and facilitating further research. For the metaphor shared task, we used the VU Amsterdam Metaphor Corpus (VUA) as one of the corpora. New to this year's benchmarking tasks, we added a corpus of TOEFL essays written by non-native speakers of English annotated for metaphor (a subset of the publicly available ETS Corpus of Non-Native Written English). This broadens the genres covered in the task, in accordance with findings in the literature demonstrating the potential of information on metaphor usage for assessing the English proficiency of students.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": null
},
{
"text": "The shared task was organized into four tracks: a Verbs track and an All Content Part-of-Speech (All POS) track for each of VUA and TOEFL. Overall, there were 1,224 submissions from 71 teams. There were 805 submissions from the 14 teams who submitted system papers; one paper was withdrawn before publication. In terms of performance, the current published state of the art on the VUA corpus was matched by the best participating system, while a new state of the art was established for the TOEFL corpus. We observed the following general trends: (1) Transformer architectures were highly popular and yielded competitive performance; (2) participants explored new sources of information, such as fine-grained POS, spell-corrected variants of words (for the TOEFL data), sub-word-level information (e.g., character embeddings), idioms, and sensorimotor and embodiment-related information; (3) the relative performance rankings of teams were largely consistent between the VUA and TOEFL datasets; (4) the performance of participating systems was generally better on the Verbs tracks than on the All POS tracks, across both corpora.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": null
},
{
"text": "The shared task on sarcasm detection was designed to benchmark the usefulness of modeling conversation context (i.e., all the prior dialogue turns) for sarcasm detection. Two types of social media content were used as training data for the two tracks: microblogging platforms such as Twitter and online discussion forums such as Reddit. Overall, we received an overwhelming number of submissions: 655 for the Reddit track and 1,070 for the Twitter track. The CodaLab leaderboard showcases results from 39 systems for the Reddit track and 38 systems for the Twitter track. Out of all submissions, 14 shared task system papers were submitted. Almost all of the submitted systems used the transformer architecture, which seems to perform better than RNN architectures even without any task-specific fine-tuning. The best system showed the usefulness of augmenting training with \"other\" dataset(s). In terms of context, novel approaches include CNN-LSTM-based summarization of the prior dialogue turns, time-series fusion with proxy labels, an ensemble of a variety of transformers with different depths of context, and aspect-based sentiment classification for the immediate context. When explicitly modeling the number of turns, systems showed better accuracy with a maximum depth of three prior turns. In the future, we plan to continuously grow the training corpus, collecting data from a variety of subreddits in the case of Reddit and from a variety of topics in the case of Twitter.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": null
},
{
"text": "We wish to thank everyone who showed interest and submitted a paper, all of the authors for their contributions, the members of the Program Committee for their thoughtful reviews, the invited speaker for sharing her perspective on the topic, and all the attendees of the workshop. All of these factors contribute to a truly enriching event!",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF1": {
"ref_id": "b1",
"title": "Sky + Fire = Sunset. Exploring Parallels between Visually Grounded Metaphors and Image Classifiers",
"authors": [
{
"first": "Yuri",
"middle": [],
"last": "Bizzoni",
"suffix": ""
},
{
"first": "Simon",
"middle": [],
"last": "Dobnik",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sky + Fire = Sunset. Exploring Parallels between Visually Grounded Metaphors and Image Classifiers Yuri Bizzoni and Simon Dobnik . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Recognizing Euphemisms and Dysphemisms Using Sentiment Analysis",
"authors": [
{
"first": "Christian",
"middle": [],
"last": "Felt",
"suffix": ""
},
{
"first": "Ellen",
"middle": [],
"last": "Riloff",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Recognizing Euphemisms and Dysphemisms Using Sentiment Analysis Christian Felt and Ellen Riloff . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "IlliniMet: Illinois System for Metaphor Detection with Contextual and Linguistic Information",
"authors": [
{
"first": "Hongyu",
"middle": [],
"last": "Gong",
"suffix": ""
},
{
"first": "Kshitij",
"middle": [],
"last": "Gupta",
"suffix": ""
},
{
"first": "Akriti",
"middle": [],
"last": "Jain",
"suffix": ""
},
{
"first": "Suma",
"middle": [],
"last": "Bhat",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "IlliniMet: Illinois System for Metaphor Detection with Contextual and Linguistic Information Hongyu Gong, Kshitij Gupta, Akriti Jain and Suma Bhat . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 146",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Adaptation of Word-Level Benchmark Datasets for Relation-Level Metaphor Identification",
"authors": [
{
"first": "Omnia",
"middle": [],
"last": "Zayed",
"suffix": ""
},
{
"first": "John",
"middle": [
"Philip"
],
"last": "McCrae",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Buitelaar",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adaptation of Word-Level Benchmark Datasets for Relation-Level Metaphor Identification Omnia Zayed, John Philip McCrae and Paul Buitelaar . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 154",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Generating Ethnographic Models from Communities' Online Data",
"authors": [
{
"first": "Tomek",
"middle": [],
"last": "Strzalkowski",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Newheiser",
"suffix": ""
},
{
"first": "Nathan",
"middle": [],
"last": "Kemper",
"suffix": ""
},
{
"first": "Ning",
"middle": [],
"last": "Sa",
"suffix": ""
},
{
"first": "Bharvee",
"middle": [],
"last": "Acharya",
"suffix": ""
},
{
"first": "Gregorios",
"middle": [],
"last": "Katsios",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Generating Ethnographic Models from Communities' Online Data Tomek Strzalkowski, Anna Newheiser, Nathan Kemper, Ning Sa, Bharvee Acharya and Gregorios Katsios . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 165 Oxymorons: a preliminary corpus investigation Marta La Pietra and Francesca Masini . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 176",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Can Humor Prediction Datasets be used for Humor Generation? Humorous Headline Generation via Style Transfer",
"authors": [
{
"first": "Orion",
"middle": [],
"last": "Weller",
"suffix": ""
},
{
"first": "Nancy",
"middle": [],
"last": "Fulda",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Seppi",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Can Humor Prediction Datasets be used for Humor Generation? Humorous Headline Generation via Style Transfer Orion Weller, Nancy Fulda and Kevin Seppi . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 186",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Evaluating a Bi-LSTM Model for Metaphor Detection in TOEFL Essays",
"authors": [
{
"first": "Kevin",
"middle": [],
"last": "Kuo",
"suffix": ""
},
{
"first": "Marine",
"middle": [],
"last": "Carpuat",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Evaluating a Bi-LSTM Model for Metaphor Detection in TOEFL Essays Kevin Kuo and Marine Carpuat . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 192",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Neural Metaphor Detection with a Residual biLSTM-CRF Model",
"authors": [
{
"first": "Andr\u00e9s",
"middle": [],
"last": "Torres Rivera",
"suffix": ""
},
{
"first": "Antoni",
"middle": [],
"last": "Oliver",
"suffix": ""
},
{
"first": "Salvador",
"middle": [],
"last": "Climent",
"suffix": ""
},
{
"first": "Marta",
"middle": [],
"last": "Coll-Florit",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Neural Metaphor Detection with a Residual biLSTM-CRF Model Andr\u00e9s Torres Rivera, Antoni Oliver, Salvador Climent and Marta Coll-Florit . . . . . . . . . . . . . . . . 197",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Augmenting Neural Metaphor Detection with Concreteness",
"authors": [
{
"first": "Ghadi",
"middle": [],
"last": "Alnafesah",
"suffix": ""
},
{
"first": "Harish",
"middle": [],
"last": "Tayyar Madabushi",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Augmenting Neural Metaphor Detection with Concreteness Ghadi Alnafesah, Harish Tayyar Madabushi and Mark Lee . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 204",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Supervised Disambiguation of German Verbal Idioms with a BiLSTM Architecture",
"authors": [
{
"first": "Rafael",
"middle": [],
"last": "Ehren",
"suffix": ""
},
{
"first": "Timm",
"middle": [],
"last": "Lichte",
"suffix": ""
},
{
"first": "Laura",
"middle": [],
"last": "Kallmeyer",
"suffix": ""
},
{
"first": "Jakub",
"middle": [],
"last": "Waszczuk",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Supervised Disambiguation of German Verbal Idioms with a BiLSTM Architecture Rafael Ehren, Timm Lichte, Laura Kallmeyer and Jakub Waszczuk . . . . . . . . . . . . . . . . . . . . . . . . . 211",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Metaphor Detection using Context and Concreteness",
"authors": [
{
"first": "Rowan",
"middle": [],
"last": "Hall Maudslay",
"suffix": ""
},
{
"first": "Tiago",
"middle": [],
"last": "Pimentel",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Cotterell",
"suffix": ""
},
{
"first": "Simone",
"middle": [],
"last": "Teufel",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Metaphor Detection using Context and Concreteness Rowan Hall Maudslay, Tiago Pimentel, Ryan Cotterell and Simone Teufel . . . . . . . . . . . . . . . . . . 221",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Being neighbourly: Neural metaphor identification in discourse",
"authors": [
{
"first": "Verna",
"middle": [],
"last": "Dankers",
"suffix": ""
},
{
"first": "Karan",
"middle": [],
"last": "Malhotra",
"suffix": ""
},
{
"first": "Gaurav",
"middle": [],
"last": "Kudva",
"suffix": ""
},
{
"first": "Volodymyr",
"middle": [],
"last": "Medentsiy",
"suffix": ""
},
{
"first": "Ekaterina",
"middle": [],
"last": "Shutova",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Being neighbourly: Neural metaphor identification in discourse Verna Dankers, Karan Malhotra, Gaurav Kudva, Volodymyr Medentsiy and Ekaterina Shutova 227",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Go Figure! Multi-task transformer-based architecture for metaphor detection using idioms: ETS team in 2020 metaphor shared task",
"authors": [
{
"first": "Xianyang",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Chee Wee (Ben)",
"middle": [],
"last": "Leong",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Flor",
"suffix": ""
},
{
"first": "Beata",
"middle": [],
"last": "Beigman Klebanov",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Go Figure! Multi-task transformer-based architecture for metaphor detection using idioms: ETS team in 2020 metaphor shared task Xianyang Chen, Chee Wee (Ben) Leong, Michael Flor and Beata Beigman Klebanov . . . . . . . . . 235",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Metaphor Detection using Ensembles of Bidirectional Recurrent Neural Networks",
"authors": [
{
"first": "Jennifer",
"middle": [],
"last": "Brooks",
"suffix": ""
},
{
"first": "Abdou",
"middle": [],
"last": "Youssef",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Metaphor Detection using Ensembles of Bidirectional Recurrent Neural Networks Jennifer Brooks and Abdou Youssef . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 244",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Metaphor Detection Using Contextual Word Embeddings From Transformers",
"authors": [
{
"first": "Jerry",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Nathan",
"middle": [],
"last": "O'Hara",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Rubin",
"suffix": ""
},
{
"first": "Rachel",
"middle": [],
"last": "Draelos",
"suffix": ""
},
{
"first": "Cynthia",
"middle": [],
"last": "Rudin",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Metaphor Detection Using Contextual Word Embeddings From Transformers Jerry Liu, Nathan O'Hara, Alexander Rubin, Rachel Draelos and Cynthia Rudin . . . . . . . . . . . . . 250",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Testing the role of metadata in metaphor identification",
"authors": [
{
"first": "Egon",
"middle": [],
"last": "Stemle",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Onysko",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Testing the role of metadata in metaphor identification Egon Stemle and Alexander Onysko . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 256",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Ferdous Barbhuiya and Kuntal Dey 08:45 Transformers on Sarcasm Detection with Context Amardeep Kumar and Vivek Anand 08:50 A Novel Hierarchical BERT Architecture for Sarcasm Detection Himani Srivastava, Vaibhav Varshney, Surabhi Kumari and Saurabh Srivastava 09:00 Detecting Sarcasm in Conversation Context Using Transformer-Based Models Adithya Avvaru",
"authors": [],
"year": null,
"venue": "Sanath Vobilisetty and Radhika Mamidi 09:05 Using Conceptual Norms for Metaphor Detection Mingyu WAN, Kathleen Ahrens, Emmanuele Chersoni, Menghan Jiang, Qi Su, Rong Xiang and Chu-Ren Huang 09:10 ALBERT-BiLSTM for Sequential Metaphor Detection Shuqun Li, Jingjie Zeng, Jinhui Zhang, Tao Peng, Liang Yang and Hongfei Lin 09:15 Character aware models with similarity learning for metaphor detection Tarun Kumar and Yashvardhan Sharma",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thursday July 9, 2020 (continued) 08:40 Context-Aware Sarcasm Detection Using BERT Arup Baruah, Kaushik Das, Ferdous Barbhuiya and Kuntal Dey 08:45 Transformers on Sarcasm Detection with Context Amardeep Kumar and Vivek Anand 08:50 A Novel Hierarchical BERT Architecture for Sarcasm Detection Himani Srivastava, Vaibhav Varshney, Surabhi Kumari and Saurabh Srivastava 09:00 Detecting Sarcasm in Conversation Context Using Transformer-Based Models Adithya Avvaru, Sanath Vobilisetty and Radhika Mamidi 09:05 Using Conceptual Norms for Metaphor Detection Mingyu WAN, Kathleen Ahrens, Emmanuele Chersoni, Menghan Jiang, Qi Su, Rong Xiang and Chu-Ren Huang 09:10 ALBERT-BiLSTM for Sequential Metaphor Detection Shuqun Li, Jingjie Zeng, Jinhui Zhang, Tao Peng, Liang Yang and Hongfei Lin 09:15 Character aware models with similarity learning for metaphor detection Tarun Kumar and Yashvardhan Sharma 10:00",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "2020 (continued) 10:50 Oxymorons: a preliminary corpus investigation Marta La Pietra and Francesca Masini 10:55 Can Humor Prediction Datasets be used for Humor Generation? Humorous Headline Generation via Style Transfer Orion Weller, Nancy Fulda and Kevin Seppi 11:00 Evaluating a Bi-LSTM Model for Metaphor Detection in TOEFL Essays Kevin Kuo and Marine Carpuat 11:30 Neural Metaphor Detection with a Residual biLSTM-CRF Model Andr\u00e9s Torres Rivera, Antoni Oliver, Salvador Climent and Marta Coll-Florit 11:35 Augmenting Neural Metaphor Detection with Concreteness Ghadi Alnafesah, Harish Tayyar Madabushi and Mark Lee 11:40 Supervised Disambiguation of German Verbal Idioms with a BiLSTM Architecture Rafael Ehren, Timm Lichte, Laura Kallmeyer and Jakub Waszczuk 11:50 Metaphor Detection using Context and Concreteness Rowan Hall Maudslay, Tiago Pimentel, Ryan Cotterell and Simone Teufel 11:55 Being neighbourly: Neural metaphor identification in discourse Verna Dankers, Karan Malhotra, Gaurav Kudva, Volodymyr Medentsiy and Ekaterina Shutova 12:00 Go Figure! Multi-task transformer-based architecture for metaphor detection using idioms: ETS team in 2020 metaphor shared task Xianyang Chen",
"authors": [],
"year": null,
"venue": "Exploring Parallels between Visually Grounded Metaphors and Image Classifiers Yuri Bizzoni and Simon Dobnik 10:15 Recognizing Euphemisms and Dysphemisms Using Sentiment Analysis Christian Felt and Ellen Riloff 10:30 IlliniMet: Illinois System for Metaphor Detection with Contextual and Linguistic Information Hongyu Gong, Kshitij Gupta, Akriti Jain and Suma Bhat 10:35 Adaptation of Word-Level Benchmark Datasets for Relation-Level Metaphor Identification Omnia Zayed",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sky + Fire = Sunset. Exploring Parallels between Visually Grounded Metaphors and Image Classifiers Yuri Bizzoni and Simon Dobnik 10:15 Recognizing Euphemisms and Dysphemisms Using Sentiment Analysis Christian Felt and Ellen Riloff 10:30 IlliniMet: Illinois System for Metaphor Detection with Contextual and Linguistic Information Hongyu Gong, Kshitij Gupta, Akriti Jain and Suma Bhat 10:35 Adaptation of Word-Level Benchmark Datasets for Relation-Level Metaphor Identification Omnia Zayed, John Philip McCrae and Paul Buitelaar 10:40 Generating Ethnographic Models from Communities' Online Data Tomek Strzalkowski, Anna Newheiser, Nathan Kemper, Ning Sa, Bharvee Acharya and Gregorios Katsios Thursday July 9, 2020 (continued) 10:50 Oxymorons: a preliminary corpus investigation Marta La Pietra and Francesca Masini 10:55 Can Humor Prediction Datasets be used for Humor Generation? Humorous Headline Generation via Style Transfer Orion Weller, Nancy Fulda and Kevin Seppi 11:00 Evaluating a Bi-LSTM Model for Metaphor Detection in TOEFL Essays Kevin Kuo and Marine Carpuat 11:30 Neural Metaphor Detection with a Residual biLSTM-CRF Model Andr\u00e9s Torres Rivera, Antoni Oliver, Salvador Climent and Marta Coll-Florit 11:35 Augmenting Neural Metaphor Detection with Concreteness Ghadi Alnafesah, Harish Tayyar Madabushi and Mark Lee 11:40 Supervised Disambiguation of German Verbal Idioms with a BiLSTM Architecture Rafael Ehren, Timm Lichte, Laura Kallmeyer and Jakub Waszczuk 11:50 Metaphor Detection using Context and Concreteness Rowan Hall Maudslay, Tiago Pimentel, Ryan Cotterell and Simone Teufel 11:55 Being neighbourly: Neural metaphor identification in discourse Verna Dankers, Karan Malhotra, Gaurav Kudva, Volodymyr Medentsiy and Ekaterina Shutova 12:00 Go Figure! Multi-task transformer-based architecture for metaphor detection using idioms: ETS team in 2020 metaphor shared task Xianyang Chen, Chee Wee (Ben) Leong, Michael Flor and Beata Beigman Klebanov 12:10 Metaphor Detection using Ensembles of Bidirectional Recurrent Neural Networks Jennifer Brooks and Abdou Youssef 12:15 Metaphor Detection Using Contextual Word Embeddings From Transformers Jerry Liu, Nathan O'Hara, Alexander Rubin, Rachel Draelos and Cynthia Rudin 12:20 Testing the role of metadata in metaphor identification Egon Stemle and Alexander Onysko Thursday July 9, 2020 (continued) 12:50 Sarcasm Detection Using an Ensemble Approach Jens Lemmens, Ben Burtenshaw, Ehsan Lotfi, Ilia Markov and Walter Daelemans 12:55 A Transformer Approach to Contextual Sarcasm Detection in Twitter Hunter Gregory, Steven Li, Pouya Mohammadi, Natalie Tarn, Rachel Draelos and Cynthia Rudin 13:00 Transformer-based Context-aware Sarcasm Detection in Conversation Threads from Social Media Xiangjue Dong, Changmao Li and Jinho D. Choi",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "The featured papers cover a range of aspects of figurative language processing such as metaphor identification (Dankers et al.; Zayed, McCrae and Buitelaar), metaphor in the visual modality (Bizzoni and Dobnik), annotation of oxymorons (La Pietra and Masini), satirical and humorous headline generation (Weller et al.; Horvitz et al.) and recognising euphemisms and dysphemisms (Felt and Riloff). The workshop program also features a keynote talk by Marilyn Walker, Department of Computer Science, University of California Santa Cruz, on the topic of \"Generating Expressive Language by Mining User Reviews\"."
},
"TABREF1": {
"num": null,
"content": "<table><tr><td>DeepMet: A Reading Comprehension Paradigm for Token-level Metaphor Detection</td></tr><tr><td>Chuandong Su, Fumiyo Fukumoto, Xiaoxi Huang, Jiyi Li, Rongbo Wang and Zhiqun Chen . . . . 30</td></tr><tr><td>Context-Driven Satirical News Generation</td></tr><tr><td>Zachary Horvitz,</td></tr></table>",
"html": null,
"type_str": "table",
"text": "A Report on the 2020 Sarcasm Detection Shared Task Debanjan Ghosh, Avijit Vajpayee and Smaranda Muresan ... 1 Augmenting Data for Sarcasm Detection with Unlabeled Conversation Context Hankyol Lee, Youngjae Yu and Gunhee Kim ... 12 A Report on the 2020 VUA and TOEFL Metaphor Detection Shared Task Chee Wee (Ben) Leong, Beata Beigman Klebanov, Chris Hamill, Egon Stemle, Rutuja Ubale and Xianyang Chen ... 18 Nam Do and Michael L. Littman ... 40 Sarcasm Detection using Context Separators in Online Discourse TANVI DADU and Kartikey Pant ... 51 Sarcasm Detection in Tweets with BERT and GloVe Embeddings Akshay Khatri and Pranav P ... 56 C-Net: Contextual Network for Sarcasm Detection Amit Kumar Jena, Aman Sinha and Rohit Agarwal ... 61 Applying Transformers and Aspect-based Sentiment Analysis approaches on Sarcasm Detection Taha Shangipour ataei, Soroush Javdan and Behrouz Minaei-Bidgoli ... 67 Sarcasm Identification and Detection in Conversion Context using BERT kalaivani A and Thenmozhi D ... 72 Neural Sarcasm Detection using Conversation Context Nikhil Jaiswal ... 77 Context-Aware Sarcasm Detection Using BERT Arup Baruah, Kaushik Das, Ferdous Barbhuiya and Kuntal Dey ... 83 Transformers on Sarcasm Detection with Context Amardeep Kumar and Vivek Anand ... 88 A Novel Hierarchical BERT Architecture for Sarcasm Detection Himani Srivastava, Vaibhav Varshney, Surabhi Kumari and Saurabh Srivastava ... 93 Detecting Sarcasm in Conversation Context Using Transformer-Based Models Adithya Avvaru, Sanath Vobilisetty and Radhika Mamidi ... 98 Using Conceptual Norms for Metaphor Detection Mingyu WAN, Kathleen Ahrens, Emmanuele Chersoni, Menghan Jiang, Qi Su, Rong Xiang and Chu-Ren Huang ... 104 ALBERT-BiLSTM for Sequential Metaphor Detection Shuqun Li, Jingjie Zeng, Jinhui Zhang, Tao Peng, Liang Yang and Hongfei Lin ... 110"
}
}
}
}