Dataset schema (column summary from the raw export):
- title: string (lengths 10–192)
- authors: string (lengths 5–1.08k)
- abstract: string (lengths 0–5.84k)
- url: string (lengths 0–108)
- detail_url: string (lengths 0–108)
- abs: string (lengths 0–64)
- OpenReview: string (lengths 0–42)
- Download PDF: string (lengths 0–115)
- tags: string (32 distinct values)
- source_dataset: string (6 distinct values)
- source_config: string (1 distinct value)
- source_split: string (33 distinct values)
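The rows that follow can be treated as plain records under this schema. As a minimal sketch (the helper `pdf_index`, the inline sample rows, and the field subset are illustrative assumptions, not part of the dataset tooling; the real rows would typically come from `datasets.load_dataset("AIM-Harvard/EMNLP-Accepted-Papers")`), one might index the listing by split:

```python
# Two sample rows copied from the listing below, reduced to a few fields.
rows = [
    {
        "title": "K-PLUG: Knowledge-injected Pre-trained Language Model for "
                 "Natural Language Understanding and Generation in E-Commerce",
        "url": "https://aclanthology.org/2021.findings-emnlp.1",
        "Download PDF": "https://aclanthology.org/2021.findings-emnlp.1.pdf",
        "tags": "EMNLP 2021",
        "source_split": "emnlp_findings_2021",
    },
    {
        "title": "Cartography Active Learning",
        "url": "https://aclanthology.org/2021.findings-emnlp.36",
        "Download PDF": "https://aclanthology.org/2021.findings-emnlp.36.pdf",
        "tags": "EMNLP 2021",
        "source_split": "emnlp_findings_2021",
    },
]

def pdf_index(rows, split):
    """Map title -> PDF URL for rows in the given source_split."""
    return {r["title"]: r["Download PDF"]
            for r in rows if r["source_split"] == split}

index = pdf_index(rows, "emnlp_findings_2021")
print(len(index))  # 2
```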
K-PLUG: Knowledge-injected Pre-trained Language Model for Natural Language Understanding and Generation in E-Commerce
Song Xu, Haoran Li, Peng Yuan, Yujia Wang, Youzheng Wu, Xiaodong He, Ying Liu, Bowen Zhou
Existing pre-trained language models (PLMs) have demonstrated the effectiveness of self-supervised learning for a broad range of natural language processing (NLP) tasks. However, most of them are not explicitly aware of domain-specific knowledge, which is essential for downstream tasks in many domains, such as tasks in...
https://aclanthology.org/2021.findings-emnlp.1
https://aclanthology.org/2021.findings-emnlp.1.pdf
EMNLP 2021
AIM-Harvard/EMNLP-Accepted-Papers
default
emnlp_findings_2021
Extracting Topics with Simultaneous Word Co-occurrence and Semantic Correlation Graphs: Neural Topic Modeling for Short Texts
Yiming Wang, Ximing Li, Xiaotang Zhou, Jihong Ouyang
Short text nowadays has become a more fashionable form of text data, e.g., Twitter posts, news titles, and product reviews. Extracting semantic topics from short texts plays a significant role in a wide spectrum of NLP applications, and neural topic modeling is now a major tool to achieve it. Motivated by learning more...
https://aclanthology.org/2021.findings-emnlp.2
https://aclanthology.org/2021.findings-emnlp.2.pdf
EMNLP 2021
AIM-Harvard/EMNLP-Accepted-Papers
default
emnlp_findings_2021
Self-supervised Contrastive Cross-Modality Representation Learning for Spoken Question Answering
Chenyu You, Nuo Chen, Yuexian Zou
Spoken question answering (SQA) requires fine-grained understanding of both spoken documents and questions for the optimal answer prediction. In this paper, we propose novel training schemes for spoken question answering with a self-supervised training stage and a contrastive representation learning stage. In the self-...
https://aclanthology.org/2021.findings-emnlp.3
https://aclanthology.org/2021.findings-emnlp.3.pdf
EMNLP 2021
AIM-Harvard/EMNLP-Accepted-Papers
default
emnlp_findings_2021
Language Clustering for Multilingual Named Entity Recognition
Kyle Shaffer
Recent work in multilingual natural language processing has shown progress in various tasks such as natural language inference and joint multilingual translation. Despite success in learning across many languages, challenges arise where multilingual training regimes often boost performance on some languages at the expe...
https://aclanthology.org/2021.findings-emnlp.4
https://aclanthology.org/2021.findings-emnlp.4.pdf
EMNLP 2021
AIM-Harvard/EMNLP-Accepted-Papers
default
emnlp_findings_2021
Neural News Recommendation with Collaborative News Encoding and Structural User Encoding
Zhiming Mao, Xingshan Zeng, Kam-Fai Wong
Automatic news recommendation has gained much attention from the academic community and industry. Recent studies reveal that the key to this task lies within the effective representation learning of both news and users. Existing works typically encode news title and content separately while neglecting their semantic in...
https://aclanthology.org/2021.findings-emnlp.5
https://aclanthology.org/2021.findings-emnlp.5.pdf
EMNLP 2021
AIM-Harvard/EMNLP-Accepted-Papers
default
emnlp_findings_2021
Self-Teaching Machines to Read and Comprehend with Large-Scale Multi-Subject Question-Answering Data
Dian Yu, Kai Sun, Dong Yu, Claire Cardie
Despite considerable progress, most machine reading comprehension (MRC) tasks still lack sufficient training data to fully exploit powerful deep neural network models with millions of parameters, and it is laborious, expensive, and time-consuming to create large-scale, h... (Resource: https://dataset.org/examqa/)
https://aclanthology.org/2021.findings-emnlp.6
https://aclanthology.org/2021.findings-emnlp.6.pdf
EMNLP 2021
AIM-Harvard/EMNLP-Accepted-Papers
default
emnlp_findings_2021
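In the raw export, a few abstract cells surface as nested dicts carrying the text under a '#text' key alongside markup remnants ('url', 'i', 'tex-math') rather than as plain strings. A minimal, hypothetical normalizer (the function name and the "(Resource: …)" suffix convention are assumptions, not part of the dataset):

```python
def flatten_abstract(cell):
    """Normalize an abstract cell from the raw export.

    Some rows carry a dict with the abstract text under '#text' plus
    markup remnants ('url', 'i', 'tex-math'); plain strings pass through
    unchanged.
    """
    if not isinstance(cell, dict):
        return cell
    text = cell.get("#text", "")
    url = cell.get("url")
    # Keep a referenced resource URL by appending it to the plain text.
    return f"{text} (Resource: {url})" if url else text

plain = flatten_abstract("A plain abstract.")
nested = flatten_abstract({
    "url": "https://dataset.org/examqa/",
    "#text": "Despite considerable progress, most MRC tasks still lack...",
})
```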
A Web Scale Entity Extraction System
Xuanting Cai, Quanbin Ma, Jianyu Liu, Pan Li, Qi Zeng, Zhengkan Yang, Pushkar Tripathi
Understanding the semantic meaning of content on the web through the lens of entities and concepts has many practical advantages. However, when building large-scale entity extraction systems, practitioners are facing unique challenges involving finding the best ways to leverage the scale and variety of data available o...
https://aclanthology.org/2021.findings-emnlp.7
https://aclanthology.org/2021.findings-emnlp.7.pdf
EMNLP 2021
AIM-Harvard/EMNLP-Accepted-Papers
default
emnlp_findings_2021
Joint Multimedia Event Extraction from Video and Article
Brian Chen, Xudong Lin, Christopher Thomas, Manling Li, Shoya Yoshida, Lovish Chum, Heng Ji, Shih-Fu Chang
Visual and textual modalities contribute complementary information about events described in multimedia documents. Videos contain rich dynamics and detailed unfoldings of events, while text describes more high-level and abstract concepts. However, existing event extraction methods either do not handle video or solely t...
https://aclanthology.org/2021.findings-emnlp.8
https://aclanthology.org/2021.findings-emnlp.8.pdf
EMNLP 2021
AIM-Harvard/EMNLP-Accepted-Papers
default
emnlp_findings_2021
Fine-grained Semantic Alignment Network for Weakly Supervised Temporal Language Grounding
Yuechen Wang, Wengang Zhou, Houqiang Li
Temporal language grounding (TLG) aims to localize a video segment in an untrimmed video based on a natural language description. To alleviate the expensive cost of manual annotations for temporal boundary labels, we are dedicated to the weakly supervised setting, where only video-level descriptions are provided for tra...
https://aclanthology.org/2021.findings-emnlp.9
https://aclanthology.org/2021.findings-emnlp.9.pdf
EMNLP 2021
AIM-Harvard/EMNLP-Accepted-Papers
default
emnlp_findings_2021
Factual Consistency Evaluation for Text Summarization via Counterfactual Estimation
Yuexiang Xie, Fei Sun, Yang Deng, Yaliang Li, Bolin Ding
Although significant progress has been achieved in text summarization, factual inconsistency in generated summaries still severely limits its practical applications. Among the key factors to ensure factual consistency, a reliable automatic evaluation metric... (Resource: https://github.com/xieyxclack/factual_coco)
https://aclanthology.org/2021.findings-emnlp.10
https://aclanthology.org/2021.findings-emnlp.10.pdf
EMNLP 2021
AIM-Harvard/EMNLP-Accepted-Papers
default
emnlp_findings_2021
Cross-Modal Retrieval Augmentation for Multi-Modal Classification
Shir Gur, Natalia Neverova, Chris Stauffer, Ser-Nam Lim, Douwe Kiela, Austin Reiter
Recent advances in using retrieval components over external knowledge sources have shown impressive results for a variety of downstream tasks in natural language processing. Here, we explore the use of unstructured external knowledge sources of images and their corresponding captions for improving visual question answe...
https://aclanthology.org/2021.findings-emnlp.11
https://aclanthology.org/2021.findings-emnlp.11.pdf
EMNLP 2021
AIM-Harvard/EMNLP-Accepted-Papers
default
emnlp_findings_2021
HiTRANS: A Hierarchical Transformer Network for Nested Named Entity Recognition
Zhiwei Yang, Jing Ma, Hechang Chen, Yunke Zhang, Yi Chang
Nested Named Entity Recognition (NNER) has been extensively studied, aiming to identify all nested entities from potential spans (i.e., one or more continuous tokens). However, recent studies for NNER either focus on tedious tagging schemas or utilize complex structures, which fail to learn effective span representatio...
https://aclanthology.org/2021.findings-emnlp.12
https://aclanthology.org/2021.findings-emnlp.12.pdf
EMNLP 2021
AIM-Harvard/EMNLP-Accepted-Papers
default
emnlp_findings_2021
Improving Embedding-based Large-scale Retrieval via Label Enhancement
Peiyang Liu, Xi Wang, Sen Wang, Wei Ye, Xiangyu Xi, Shikun Zhang
Current embedding-based large-scale retrieval models are trained with 0-1 hard labels that indicate whether a query is relevant to a document, ignoring rich information about the relevance degree. This paper proposes to improve embedding-based retrieval from the perspective of better characterizing the query-document rele...
https://aclanthology.org/2021.findings-emnlp.13
https://aclanthology.org/2021.findings-emnlp.13.pdf
EMNLP 2021
AIM-Harvard/EMNLP-Accepted-Papers
default
emnlp_findings_2021
Improving Privacy Guarantee and Efficiency of Latent Dirichlet Allocation Model Training Under Differential Privacy
Tao Huang, Hong Chen
Latent Dirichlet allocation (LDA), a widely used topic model, is often employed as a fundamental tool for text analysis in various applications. However, the training process of the LDA model typically requires massive text corpus data. On one hand, such massive data may expose private information in the training data,...
https://aclanthology.org/2021.findings-emnlp.14
https://aclanthology.org/2021.findings-emnlp.14.pdf
EMNLP 2021
AIM-Harvard/EMNLP-Accepted-Papers
default
emnlp_findings_2021
Generating Mammography Reports from Multi-view Mammograms with BERT
Alexander Yalunin, Elena Sokolova, Ilya Burenko, Alexander Ponomarchuk, Olga Puchkova, Dmitriy Umerenkov
Writing mammography reports can be error-prone and time-consuming for radiologists. In this paper we propose a method to generate mammography reports given four images, corresponding to the four views used in screening mammography. To the best of our knowledge our work represents the first attempt to generate the mammo...
https://aclanthology.org/2021.findings-emnlp.15
https://aclanthology.org/2021.findings-emnlp.15.pdf
EMNLP 2021
AIM-Harvard/EMNLP-Accepted-Papers
default
emnlp_findings_2021
Euphemistic Phrase Detection by Masked Language Model
Wanzheng Zhu, Suma Bhat
It is a well-known approach for fringe groups and organizations to use euphemisms—ordinary-sounding and innocent-looking words with a secret meaning—to conceal what they are discussing. For instance, drug dealers often use “pot” for marijuana and “avocado” for heroin. From a social media content moderation perspective,...
https://aclanthology.org/2021.findings-emnlp.16
https://aclanthology.org/2021.findings-emnlp.16.pdf
EMNLP 2021
AIM-Harvard/EMNLP-Accepted-Papers
default
emnlp_findings_2021
Decomposing Complex Questions Makes Multi-Hop QA Easier and More Interpretable
Ruiliu Fu, Han Wang, Xuejun Zhang, Jun Zhou, Yonghong Yan
Multi-hop QA requires the machine to answer complex questions through finding multiple clues and reasoning, and provide explanatory evidence to demonstrate the machine’s reasoning process. We propose Relation Extractor-Reader and Comparator (RERC), a three-stage framework based on complex question decomposition. The Re...
https://aclanthology.org/2021.findings-emnlp.17
https://aclanthology.org/2021.findings-emnlp.17.pdf
EMNLP 2021
AIM-Harvard/EMNLP-Accepted-Papers
default
emnlp_findings_2021
Segmenting Natural Language Sentences via Lexical Unit Analysis
Yangming Li, Lemao Liu, Shuming Shi
The span-based model enjoys great popularity in recent works of sequence segmentation. However, each of these methods suffers from its own defects, such as invalid predictions. In this work, we introduce a unified span-based model, lexical unit analysis (LUA), that addresses all these matters. Segmenting a lexical unit...
https://aclanthology.org/2021.findings-emnlp.18
https://aclanthology.org/2021.findings-emnlp.18.pdf
EMNLP 2021
AIM-Harvard/EMNLP-Accepted-Papers
default
emnlp_findings_2021
Dense Hierarchical Retrieval for Open-domain Question Answering
Ye Liu, Kazuma Hashimoto, Yingbo Zhou, Semih Yavuz, Caiming Xiong, Philip Yu
Dense neural text retrieval has achieved promising results on open-domain Question Answering (QA), where latent representations of questions and passages are exploited for maximum inner product search in the retrieval process. However, current dense retrievers require splitting documents into short passages that usuall...
https://aclanthology.org/2021.findings-emnlp.19
https://aclanthology.org/2021.findings-emnlp.19.pdf
EMNLP 2021
AIM-Harvard/EMNLP-Accepted-Papers
default
emnlp_findings_2021
Visually Grounded Concept Composition
Bowen Zhang, Hexiang Hu, Linlu Qiu, Peter Shaw, Fei Sha
We investigate ways to compose complex concepts in texts from primitive ones while grounding them in images. We propose Concept and Relation Graph (CRG), which builds on top of constituency analysis and consists of recursively combined concepts with predicate functions. Meanwhile, we propose a concept composition neura...
https://aclanthology.org/2021.findings-emnlp.20
https://aclanthology.org/2021.findings-emnlp.20.pdf
EMNLP 2021
AIM-Harvard/EMNLP-Accepted-Papers
default
emnlp_findings_2021
Compositional Networks Enable Systematic Generalization for Grounded Language Understanding
Yen-Ling Kuo, Boris Katz, Andrei Barbu
Humans are remarkably flexible when understanding new sentences that include combinations of concepts they have never encountered before. Recent work has shown that while deep networks can mimic some human language abilities when presented with novel sentences, systematic variation uncovers the limitations in the langu...
https://aclanthology.org/2021.findings-emnlp.21
https://aclanthology.org/2021.findings-emnlp.21.pdf
EMNLP 2021
AIM-Harvard/EMNLP-Accepted-Papers
default
emnlp_findings_2021
An Unsupervised Method for Building Sentence Simplification Corpora in Multiple Languages
Xinyu Lu, Jipeng Qiang, Yun Li, Yunhao Yuan, Yi Zhu
Parallel sentence simplification (SS) corpora are scarce for neural SS modeling. We propose an unsupervised method to build SS corpora from large-scale bilingual translation corpora, alleviating the need for supervised SS corpora. Our method is motivated by the following two findings: neural machine transla...
https://aclanthology.org/2021.findings-emnlp.22
https://aclanthology.org/2021.findings-emnlp.22.pdf
EMNLP 2021
AIM-Harvard/EMNLP-Accepted-Papers
default
emnlp_findings_2021
WhiteningBERT: An Easy Unsupervised Sentence Embedding Approach
Junjie Huang, Duyu Tang, Wanjun Zhong, Shuai Lu, Linjun Shou, Ming Gong, Daxin Jiang, Nan Duan
Producing the embedding of a sentence in an unsupervised way is valuable to natural language matching and retrieval problems in practice. In this work, we conduct a thorough examination of pretrained model based unsupervised sentence embeddings. We stud... (Resource: https://github.com/Jun-jie-Huang/WhiteningBERT)
https://aclanthology.org/2021.findings-emnlp.23
https://aclanthology.org/2021.findings-emnlp.23.pdf
EMNLP 2021
AIM-Harvard/EMNLP-Accepted-Papers
default
emnlp_findings_2021
TWEETSUMM - A Dialog Summarization Dataset for Customer Service
Guy Feigenblat, Chulaka Gunasekara, Benjamin Sznajder, Sachindra Joshi, David Konopnicki, Ranit Aharonov
In a typical customer service chat scenario, customers contact a support center to ask for help or raise complaints, and human agents try to solve the issues. In most cases, at the end of the conversation, agents are asked to write a short summary emphasizing the problem and the proposed solution, usually for the benef...
https://aclanthology.org/2021.findings-emnlp.24
https://aclanthology.org/2021.findings-emnlp.24.pdf
EMNLP 2021
AIM-Harvard/EMNLP-Accepted-Papers
default
emnlp_findings_2021
Discourse-Based Sentence Splitting
Liam Cripwell, Joël Legrand, Claire Gardent
Sentence splitting involves the segmentation of a sentence into two or more shorter sentences. It is a key component of sentence simplification, has been shown to help human comprehension and is a useful preprocessing step for NLP tasks such as summarisation and relation extraction. While several methods and datasets h...
https://aclanthology.org/2021.findings-emnlp.25
https://aclanthology.org/2021.findings-emnlp.25.pdf
EMNLP 2021
AIM-Harvard/EMNLP-Accepted-Papers
default
emnlp_findings_2021
Multi-Task Dense Retrieval via Model Uncertainty Fusion for Open-Domain Question Answering
Minghan Li, Ming Li, Kun Xiong, Jimmy Lin
Multi-task dense retrieval models can be used to retrieve documents from a common corpus (e.g., Wikipedia) for different open-domain question-answering (QA) tasks. However, Karpukhin et al. (2020) shows that jointly learning different QA tasks with one dense mode... (Resource: https://github.com/alexlimh/DPR_MUF)
https://aclanthology.org/2021.findings-emnlp.26
https://aclanthology.org/2021.findings-emnlp.26.pdf
EMNLP 2021
AIM-Harvard/EMNLP-Accepted-Papers
default
emnlp_findings_2021
Mining the Cause of Political Decision-Making from Social Media: A Case Study of COVID-19 Policies across the US States
Zhijing Jin, Zeyu Peng, Tejas Vaidhya, Bernhard Schoelkopf, Rada Mihalcea
Mining the causes of political decision-making is an active research area in the field of political science. In the past, most studies have focused on long-term policies that are collected over several decades of time, and have primarily relied on surveys as the main source of predictors. However, the recent COVID-19 p...
https://aclanthology.org/2021.findings-emnlp.27
https://aclanthology.org/2021.findings-emnlp.27.pdf
EMNLP 2021
AIM-Harvard/EMNLP-Accepted-Papers
default
emnlp_findings_2021
Self-Attention Graph Residual Convolutional Networks for Event Detection with dependency relations
Anan Liu, Ning Xu, Haozhe Liu
The event detection (ED) task aims to classify events by identifying key event trigger words embedded in a piece of text. Previous research has proven the validity of fusing syntactic dependency relations into Graph Convolutional Networks (GCN). While existing GCN-based methods explore latent node-to-node dependency relati...
https://aclanthology.org/2021.findings-emnlp.28
https://aclanthology.org/2021.findings-emnlp.28.pdf
EMNLP 2021
AIM-Harvard/EMNLP-Accepted-Papers
default
emnlp_findings_2021
Mixup Decoding for Diverse Machine Translation
Jicheng Li, Pengzhi Gao, Xuanfu Wu, Yang Feng, Zhongjun He, Hua Wu, Haifeng Wang
Diverse machine translation aims at generating various target language translations for a given source language sentence. To leverage the linear relationship in the sentence latent space introduced by the mixup training, we propose a novel method, MixDiversity, to generate different translations for the input sentence ...
https://aclanthology.org/2021.findings-emnlp.29
https://aclanthology.org/2021.findings-emnlp.29.pdf
EMNLP 2021
AIM-Harvard/EMNLP-Accepted-Papers
default
emnlp_findings_2021
An Alignment-Agnostic Model for Chinese Text Error Correction
Liying Zheng, Yue Deng, Weishun Song, Liang Xu, Jing Xiao
This paper investigates how to correct Chinese text errors with types of mistaken, missing and redundant characters, which are common for Chinese native speakers. Most existing models based on detect-correct framework can correct mistaken characters, but cannot handle missing or redundant characters due to inconsistenc...
https://aclanthology.org/2021.findings-emnlp.30
https://aclanthology.org/2021.findings-emnlp.30.pdf
EMNLP 2021
AIM-Harvard/EMNLP-Accepted-Papers
default
emnlp_findings_2021
Reasoning Visual Dialog with Sparse Graph Learning and Knowledge Transfer
Gi-Cheon Kang, Junseok Park, Hwaran Lee, Byoung-Tak Zhang, Jin-Hwa Kim
Visual dialog is a task of answering a sequence of questions grounded in an image using the previous dialog history as context. In this paper, we study how to address two fundamental challenges for this task: (1) reasoning over underlying semantic struct... (Resource: https://github.com/gicheonkang/SGLKT-VisDial)
https://aclanthology.org/2021.findings-emnlp.31
https://aclanthology.org/2021.findings-emnlp.31.pdf
EMNLP 2021
AIM-Harvard/EMNLP-Accepted-Papers
default
emnlp_findings_2021
Exploring Sentence Community for Document-Level Event Extraction
Yusheng Huang, Weijia Jia
Document-level event extraction is critical to various natural language processing tasks for providing structured information. Existing approaches by sequential modeling neglect the complex logic structures for long texts. In this paper, we leverage the entity interactions and sentence interactions within long document...
https://aclanthology.org/2021.findings-emnlp.32
https://aclanthology.org/2021.findings-emnlp.32.pdf
EMNLP 2021
AIM-Harvard/EMNLP-Accepted-Papers
default
emnlp_findings_2021
A Model of Cross-Lingual Knowledge-Grounded Response Generation for Open-Domain Dialogue Systems
San Kim, Jin Yea Jang, Minyoung Jung, Saim Shin
Research on open-domain dialogue systems that allow free topics is challenging in the field of natural language processing (NLP). The performance of the dialogue system has been improved recently by the method utilizing dialogue-related knowledge; however, non-English dialogue systems suffer from reproducing the perfor...
https://aclanthology.org/2021.findings-emnlp.33
https://aclanthology.org/2021.findings-emnlp.33.pdf
EMNLP 2021
AIM-Harvard/EMNLP-Accepted-Papers
default
emnlp_findings_2021
WHOSe Heritage: Classification of UNESCO World Heritage Statements of “Outstanding Universal Value” with Soft Labels
Nan Bai, Renqian Luo, Pirouz Nourian, Ana Pereira Roders
The UNESCO World Heritage List (WHL) includes the exceptionally valuable cultural and natural heritage to be preserved for mankind. Evaluating and justifying the Outstanding Universal Value (OUV) is essential for each site inscribed in the WHL, and yet a complex task, even for experts, since the selection criteria of O...
https://aclanthology.org/2021.findings-emnlp.34
https://aclanthology.org/2021.findings-emnlp.34.pdf
EMNLP 2021
AIM-Harvard/EMNLP-Accepted-Papers
default
emnlp_findings_2021
P-INT: A Path-based Interaction Model for Few-shot Knowledge Graph Completion
Jingwen Xu, Jing Zhang, Xirui Ke, Yuxiao Dong, Hong Chen, Cuiping Li, Yongbin Liu
Few-shot knowledge graph completion is to infer the unknown facts (i.e., query head-tail entity pairs) of a given relation with only a few observed reference entity pairs. Its general process is to first encode the implicit relation of an entity pair and then match the relation of a query entity pair with the relations...
https://aclanthology.org/2021.findings-emnlp.35
https://aclanthology.org/2021.findings-emnlp.35.pdf
EMNLP 2021
AIM-Harvard/EMNLP-Accepted-Papers
default
emnlp_findings_2021
Cartography Active Learning
Mike Zhang, Barbara Plank
We propose Cartography Active Learning (CAL), a novel Active Learning (AL) algorithm that exploits the behavior of the model on individual instances during training as a proxy to find the most informative instances for labeling. CAL is inspired by data maps, which were recently proposed to derive insights into dataset ...
https://aclanthology.org/2021.findings-emnlp.36
https://aclanthology.org/2021.findings-emnlp.36.pdf
EMNLP 2021
AIM-Harvard/EMNLP-Accepted-Papers
default
emnlp_findings_2021
Beyond Reptile: Meta-Learned Dot-Product Maximization between Gradients for Improved Single-Task Regularization
Akhil Kedia, Sai Chetan Chinthakindi, Wonho Ryu
Meta-learning algorithms such as MAML, Reptile, and FOMAML have led to improved performance of several neural models. The primary difference between standard gradient descent and these meta-learning approaches is that they contain as a small component the gradient for maximizing dot-product between gradients of batches...
https://aclanthology.org/2021.findings-emnlp.37
https://aclanthology.org/2021.findings-emnlp.37.pdf
EMNLP 2021
AIM-Harvard/EMNLP-Accepted-Papers
default
emnlp_findings_2021
GooAQ: Open Question Answering with Diverse Answer Types
Daniel Khashabi, Amos Ng, Tushar Khot, Ashish Sabharwal, Hannaneh Hajishirzi, Chris Callison-Burch
While day-to-day questions come with a variety of answer types, the current question-answering (QA) literature has failed to adequately address the answer diversity of questions. To this end, we present GooAQ, a large-scale dataset with a variety of answer types. This dataset c...
https://aclanthology.org/2021.findings-emnlp.38
https://aclanthology.org/2021.findings-emnlp.38.pdf
EMNLP 2021
AIM-Harvard/EMNLP-Accepted-Papers
default
emnlp_findings_2021
Attention Weights in Transformer NMT Fail Aligning Words Between Sequences but Largely Explain Model Predictions
Javier Ferrando, Marta R. Costa-jussà
This work proposes an extensive analysis of the Transformer architecture in the Neural Machine Translation (NMT) setting. Focusing on the encoder-decoder attention mechanism, we prove that attention weights systematically make alignment errors by relying mainly on uninformative tokens from the source sequence. However,...
https://aclanthology.org/2021.findings-emnlp.39
https://aclanthology.org/2021.findings-emnlp.39.pdf
EMNLP 2021
AIM-Harvard/EMNLP-Accepted-Papers
default
emnlp_findings_2021
BFClass: A Backdoor-free Text Classification Framework
Zichao Li, Dheeraj Mekala, Chengyu Dong, Jingbo Shang
Backdoor attacks introduce artificial vulnerabilities into the model by poisoning a subset of the training data via injecting triggers and modifying labels. Various trigger design strategies have been explored to attack text classifiers; however, defending against such attacks remains an open problem. In this work, we propose ...
https://aclanthology.org/2021.findings-emnlp.40
https://aclanthology.org/2021.findings-emnlp.40.pdf
EMNLP 2021
AIM-Harvard/EMNLP-Accepted-Papers
default
emnlp_findings_2021
Multilingual Chart-based Constituency Parse Extraction from Pre-trained Language Models
Taeuk Kim, Bowen Li, Sang-goo Lee
As it has been unveiled that pre-trained language models (PLMs) are to some extent capable of recognizing syntactic concepts in natural language, much effort has been made to develop a method for extracting complete (binary) parses from PLMs without training separate parsers. We improve upon this paradigm by proposing ...
https://aclanthology.org/2021.findings-emnlp.41
https://aclanthology.org/2021.findings-emnlp.41.pdf
EMNLP 2021
AIM-Harvard/EMNLP-Accepted-Papers
default
emnlp_findings_2021
Hyperbolic Geometry is Not Necessary: Lightweight Euclidean-Based Models for Low-Dimensional Knowledge Graph Embeddings
Kai Wang, Yu Liu, Dan Lin, Michael Sheng
Recent knowledge graph embedding (KGE) models based on hyperbolic geometry have shown great potential in a low-dimensional embedding space. However, the necessity of hyperbolic space in KGE is still questionable, because the calculation based on hyperbolic geometry is much more complicated than Euclidean operations. In...
https://aclanthology.org/2021.findings-emnlp.42
https://aclanthology.org/2021.findings-emnlp.42.pdf
EMNLP 2021
AIM-Harvard/EMNLP-Accepted-Papers
default
emnlp_findings_2021
CascadeBERT: Accelerating Inference of Pre-trained Language Models via Calibrated Complete Models Cascade
Lei Li, Yankai Lin, Deli Chen, Shuhuai Ren, Peng Li, Jie Zhou, Xu Sun
Dynamic early exiting aims to accelerate the inference of pre-trained language models (PLMs) by emitting predictions in internal layers without passing through the entire model. In this paper, we empirically analyze the working mechanism of dynamic early exiting and find that it faces a performance bottleneck under hig...
https://aclanthology.org/2021.findings-emnlp.43
https://aclanthology.org/2021.findings-emnlp.43.pdf
EMNLP 2021
AIM-Harvard/EMNLP-Accepted-Papers
default
emnlp_findings_2021
Semi-supervised Relation Extraction via Incremental Meta Self-Training
Xuming Hu, Chenwei Zhang, Fukun Ma, Chenyao Liu, Lijie Wen, Philip S. Yu
To alleviate human efforts from obtaining large-scale annotations, Semi-Supervised Relation Extraction methods aim to leverage unlabeled data in addition to learning from limited samples. Existing self-training methods suffer from the gradual drift problem, where noisy pseudo labels on unlabeled data are incorporated d...
https://aclanthology.org/2021.findings-emnlp.44
https://aclanthology.org/2021.findings-emnlp.44.pdf
EMNLP 2021
AIM-Harvard/EMNLP-Accepted-Papers
default
emnlp_findings_2021
Keyphrase Generation with Fine-Grained Evaluation-Guided Reinforcement Learning
Yichao Luo, Yige Xu, Jiacheng Ye, Xipeng Qiu, Qi Zhang
Aiming to generate a set of keyphrases, Keyphrase Generation (KG) is a classical task for capturing the central idea from a given document. Based on Seq2Seq models, the previous reinforcement learning framework on KG ta... (Resource: https://github.com/xuyige/FGRL4KG; the full abstract references the metrics F_1@5 and F_1@M)
https://aclanthology.org/2021.findings-emnlp.45
https://aclanthology.org/2021.findings-emnlp.45.pdf
EMNLP 2021
AIM-Harvard/EMNLP-Accepted-Papers
default
emnlp_findings_2021
Improving Knowledge Graph Embedding Using Affine Transformations of Entities Corresponding to Each Relation
Jinfa Yang, Yongjie Shi, Xin Tong, Robin Wang, Taiyan Chen, Xianghua Ying
Finding a suitable embedding for a knowledge graph remains a major challenge. With previous knowledge graph embedding methods, every entity in a knowledge graph is usually represented as a k-dimensional vector. As we know, an affine transformation can be expressed in the form of a matrix multiplication follo...
https://aclanthology.org/2021.findings-emnlp.46
https://aclanthology.org/2021.findings-emnlp.46.pdf
EMNLP 2021
AIM-Harvard/EMNLP-Accepted-Papers
default
emnlp_findings_2021
Using Question Answering Rewards to Improve Abstractive Summarization
Chulaka Gunasekara, Guy Feigenblat, Benjamin Sznajder, Ranit Aharonov, Sachindra Joshi
Neural abstractive summarization models have improved drastically in recent years. However, the summaries they generate often suffer from issues such as failing to capture the critical facts in source documents and containing facts that are inconsistent with the source documents. In this work, we prese...
https://aclanthology.org/2021.findings-emnlp.47
https://aclanthology.org/2021.findings-emnlp.47.pdf
EMNLP 2021
AIM-Harvard/EMNLP-Accepted-Papers
default
emnlp_findings_2021
Effect Generation Based on Causal Reasoning
Feiteng Mu, Wenjie Li, Zhipeng Xie
Causal reasoning aims to predict the future scenarios that may be caused by the observed actions. However, existing causal reasoning methods deal with causalities on the word level. In this paper, we propose a novel event-level causal reasoning method and demonstrate its use in the task of effect generation. In particu...
https://aclanthology.org/2021.findings-emnlp.48
https://aclanthology.org/2021.findings-emnlp.48.pdf
EMNLP 2021
AIM-Harvard/EMNLP-Accepted-Papers
default
emnlp_findings_2021
Distilling Word Meaning in Context from Pre-trained Language Models
Yuki Arase, Tomoyuki Kajiwara
In this study, we propose a self-supervised learning method that distils representations of word meaning in context from a pre-trained masked language model. Word representations are the basis for context-aware lexical semantics and unsupervised semantic textual similarity (STS) estimation. A previous study transforms ...
https://aclanthology.org/2021.findings-emnlp.49
https://aclanthology.org/2021.findings-emnlp.49.pdf
EMNLP 2021
AIM-Harvard/EMNLP-Accepted-Papers
default
emnlp_findings_2021
Unseen Entity Handling in Complex Question Answering over Knowledge Base via Language Generation
Xin Huang, Jung-Jae Kim, Bowei Zou
Complex question answering over knowledge base remains a challenging task because it involves reasoning over multiple pieces of information, including intermediate entities/relations and other constraints. Previous methods simplify the SPARQL query of a question into such forms as a list or a graph, missing such con...
https://aclanthology.org/2021.findings-emnlp.50
https://aclanthology.org/2021.findings-emnlp.50.pdf
EMNLP 2021
AIM-Harvard/EMNLP-Accepted-Papers
default
emnlp_findings_2021
Bidirectional Hierarchical Attention Networks based on Document-level Context for Emotion Cause Extraction
Guimin Hu, Guangming Lu, Yi Zhao
Emotion cause extraction (ECE) aims to extract the causes behind a given emotion in text. Several works on the ECE task have been published and have attracted considerable attention in recent years. However, these methods neglect two major issues: 1) they pay little attention to the effect of document-level context information...
https://aclanthology.org/2021.findings-emnlp.51
https://aclanthology.org/2021.findings-emnlp.51.pdf
EMNLP 2021
AIM-Harvard/EMNLP-Accepted-Papers
default
emnlp_findings_2021
Distantly Supervised Relation Extraction in Federated Settings
Dianbo Sui, Yubo Chen, Kang Liu, Jun Zhao
In relation extraction, distant supervision is widely used to automatically label a large-scale training dataset by aligning a knowledge base with unstructured text. Most existing studies in this field have assumed there is a great deal of centralized unstructured text. However, in practice, texts are usually distribut...
https://aclanthology.org/2021.findings-emnlp.52
https://aclanthology.org/2021.findings-emnlp.52.pdf
EMNLP 2021
AIM-Harvard/EMNLP-Accepted-Papers
default
emnlp_findings_2021
Casting the Same Sentiment Classification Problem
Erik Körner, Ahmad Dawar Hakimi, Gerhard Heyer, Martin Potthast
We introduce and study a problem variant of sentiment analysis, namely the “same sentiment classification problem”, where, given a pair of texts, the task is to determine if they have the same sentiment, disregarding the actual sentiment polarity. Among other things, our goal is to enable a more topic-agnostic sentimen...
https://aclanthology.org/2021.findings-emnlp.53
https://aclanthology.org/2021.findings-emnlp.53.pdf
EMNLP 2021
AIM-Harvard/EMNLP-Accepted-Papers
default
emnlp_findings_2021
Detecting Compositionally Out-of-Distribution Examples in Semantic Parsing
Denis Lukovnikov, Sina Daubener, Asja Fischer
While neural networks are ubiquitous in state-of-the-art semantic parsers, it has been shown that most standard models suffer from dramatic performance losses when faced with compositionally out-of-distribution (OOD) data. Recently several methods have been proposed to improve compositional generalization in semantic p...
https://aclanthology.org/2021.findings-emnlp.54
https://aclanthology.org/2021.findings-emnlp.54.pdf
EMNLP 2021
AIM-Harvard/EMNLP-Accepted-Papers
default
emnlp_findings_2021
Saliency-based Multi-View Mixed Language Training for Zero-shot Cross-lingual Classification
Siyu Lai, Hui Huang, Dong Jing, Yufeng Chen, Jinan Xu, Jian Liu
Recent multilingual pre-trained models, like XLM-RoBERTa (XLM-R), have been demonstrated effective in many cross-lingual tasks. However, there are still gaps between the contextualized representations of similar words in different languages. To solve this problem, we propose a novel framework named Multi-View Mixed Lan...
https://aclanthology.org/2021.findings-emnlp.55
https://aclanthology.org/2021.findings-emnlp.55.pdf
EMNLP 2021
AIM-Harvard/EMNLP-Accepted-Papers
default
emnlp_findings_2021
Fighting the COVID-19 Infodemic: Modeling the Perspective of Journalists, Fact-Checkers, Social Media Platforms, Policy Makers, and the Society
Firoj Alam, Shaden Shaar, Fahim Dalvi, Hassan Sajjad, Alex Nikolov, Hamdy Mubarak, Giovanni Da San Martino, Ahmed Abdelali, Nadir Durrani, Kareem Darwish, Abdulaziz Al-Homaid, Wajdi Zaghouani, Tommaso Caselli, Gijs Danoe, Friso Stolk, Britt Bruntink, Preslav Nakov
With the emergence of the COVID-19 pandemic, the political and the medical aspects of disinformation merged as the problem got elevated to a whole new level to become the first global infodemic. Fighting this infodemic has been declared one of the most important focus areas of the World Health Organization, with danger...
https://aclanthology.org/2021.findings-emnlp.56
https://aclanthology.org/2021.findings-emnlp.56.pdf
EMNLP 2021
AIM-Harvard/EMNLP-Accepted-Papers
default
emnlp_findings_2021
FANATIC: FAst Noise-Aware TopIc Clustering
Ari Silburt, Anja Subasic, Evan Thompson, Carmeline Dsilva, Tarec Fares
Extracting salient topics from a collection of documents can be a challenging task when a) the amount of data is large, b) the number of topics is not known a priori, and/or c) “topic noise” is present. We define “topic noise” as the collection of documents that are irrelevant to any coherent topic and should be filter...
https://aclanthology.org/2021.findings-emnlp.57
https://aclanthology.org/2021.findings-emnlp.57.pdf
EMNLP 2021
AIM-Harvard/EMNLP-Accepted-Papers
default
emnlp_findings_2021
Stream-level Latency Evaluation for Simultaneous Machine Translation
Javier Iranzo-Sánchez, Jorge Civera Saiz, Alfons Juan
Simultaneous machine translation has recently gained traction thanks to significant quality improvements and the advent of streaming applications. Simultaneous translation systems need to find a trade-off between translation quality and response time, and with this purpose multiple latency measures have been proposed. ...
https://aclanthology.org/2021.findings-emnlp.58
https://aclanthology.org/2021.findings-emnlp.58.pdf
EMNLP 2021
AIM-Harvard/EMNLP-Accepted-Papers
default
emnlp_findings_2021
TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning
Kexin Wang, Nils Reimers, Iryna Gurevych
Learning sentence embeddings often requires a large amount of labeled data. However, for most tasks and domains, labeled data is seldom available and creating it is expensive. In this work, we present a new state-of-the-art unsupervised method based on pre-trained Transformers and Sequential Denoising Auto-Encoder (TSD...
https://aclanthology.org/2021.findings-emnlp.59
https://aclanthology.org/2021.findings-emnlp.59.pdf
EMNLP 2021
AIM-Harvard/EMNLP-Accepted-Papers
default
emnlp_findings_2021
How Suitable Are Subword Segmentation Strategies for Translating Non-Concatenative Morphology?
Chantal Amrhein, Rico Sennrich
Data-driven subword segmentation has become the default strategy for open-vocabulary machine translation and other NLP tasks, but may not be sufficiently generic for optimal learning of non-concatenative morphology. We design a test suite to evaluate segmentation strategies on different types of morphological phenomena...
https://aclanthology.org/2021.findings-emnlp.60
https://aclanthology.org/2021.findings-emnlp.60.pdf
EMNLP 2021
AIM-Harvard/EMNLP-Accepted-Papers
default
emnlp_findings_2021
Rethinking Why Intermediate-Task Fine-Tuning Works
Ting-Yun Chang, Chi-Jen Lu
Supplementary Training on Intermediate Labeled-data Tasks (STILT) is a widely applied technique, which first fine-tunes pretrained language models on an intermediate task before fine-tuning them on the target task of interest. While STILT is able to further improve the performance of pretrained language models, it is still unclear ...
https://aclanthology.org/2021.findings-emnlp.61
https://aclanthology.org/2021.findings-emnlp.61.pdf
EMNLP 2021
AIM-Harvard/EMNLP-Accepted-Papers
default
emnlp_findings_2021
Learn Continually, Generalize Rapidly: Lifelong Knowledge Accumulation for Few-shot Learning
Xisen Jin, Bill Yuchen Lin, Mohammad Rostami, Xiang Ren
The ability to continuously expand knowledge over time and utilize it to rapidly generalize to new tasks is a key feature of human linguistic intelligence. Existing models that pursue rapid generalization to new tasks (e.g., few-shot learning methods), however, are mostly trained in a single shot on fixed datasets, una...
https://aclanthology.org/2021.findings-emnlp.62
https://aclanthology.org/2021.findings-emnlp.62.pdf
EMNLP 2021
AIM-Harvard/EMNLP-Accepted-Papers
default
emnlp_findings_2021
Efficient Test Time Adapter Ensembling for Low-resource Language Varieties
Xinyi Wang, Yulia Tsvetkov, Sebastian Ruder, Graham Neubig
Adapters are light-weight modules that allow parameter-efficient fine-tuning of pretrained models. Specialized language and task adapters have recently been proposed to facilitate cross-lingual transfer of multilingual pretrained models (Pfeiffer et al., 2020b). However, this approach requires training a separate langu...
https://aclanthology.org/2021.findings-emnlp.63
https://aclanthology.org/2021.findings-emnlp.63.pdf
EMNLP 2021
AIM-Harvard/EMNLP-Accepted-Papers
default
emnlp_findings_2021
An Analysis of Euclidean vs. Graph-Based Framing for Bilingual Lexicon Induction from Word Embedding Spaces
Kelly Marchisio, Youngser Park, Ali Saad-Eldin, Anton Alyakin, Kevin Duh, Carey Priebe, Philipp Koehn
Much recent work in bilingual lexicon induction (BLI) views word embeddings as vectors in Euclidean space. As such, BLI is typically solved by finding a linear transformation that maps embeddings to a common space. Alternatively, word embeddings may... (Code: https://github.com/kellymarchisio/euc-v-graph-bli)
https://aclanthology.org/2021.findings-emnlp.64
https://aclanthology.org/2021.findings-emnlp.64.pdf
EMNLP 2021
AIM-Harvard/EMNLP-Accepted-Papers
default
emnlp_findings_2021
How to Select One Among All? An Empirical Study Towards the Robustness of Knowledge Distillation in Natural Language Understanding
Tianda Li, Ahmad Rashid, Aref Jafari, Pranav Sharma, Ali Ghodsi, Mehdi Rezagholizadeh
Knowledge Distillation (KD) is a model compression algorithm that helps transfer the knowledge in a large neural network into a smaller one. Even though KD has shown promise on a wide range of Natural Language Processing (NLP) applications, little is understood about how one KD algorithm compares to another and whether...
https://aclanthology.org/2021.findings-emnlp.65
https://aclanthology.org/2021.findings-emnlp.65.pdf
EMNLP 2021
AIM-Harvard/EMNLP-Accepted-Papers
default
emnlp_findings_2021
Recommend for a Reason: Unlocking the Power of Unsupervised Aspect-Sentiment Co-Extraction
Zeyu Li, Wei Cheng, Reema Kshetramade, John Houser, Haifeng Chen, Wei Wang
Compliments and concerns in reviews are valuable for understanding users’ shopping interests and their opinions with respect to specific aspects of certain items. Existing review-based recommenders favor large and complex language encoders that can only learn latent and uninterpretable text representations. They lack e...
https://aclanthology.org/2021.findings-emnlp.66
https://aclanthology.org/2021.findings-emnlp.66.pdf
EMNLP 2021
AIM-Harvard/EMNLP-Accepted-Papers
default
emnlp_findings_2021
Learning Hard Retrieval Decoder Attention for Transformers
Hongfei Xu, Qiuhui Liu, Josef van Genabith, Deyi Xiong
The Transformer translation model is based on the multi-head attention mechanism, which can be parallelized easily. The multi-head attention network performs the scaled dot-product attention function in parallel, empowering the model by jointly attending to information from different representation subspaces at differe...
https://aclanthology.org/2021.findings-emnlp.67
https://aclanthology.org/2021.findings-emnlp.67.pdf
EMNLP 2021
AIM-Harvard/EMNLP-Accepted-Papers
default
emnlp_findings_2021
Recall and Learn: A Memory-augmented Solver for Math Word Problems
Shifeng Huang, Jiawei Wang, Jiao Xu, Da Cao, Ming Yang
In this article, we tackle the math word problem, namely, automatically answering a mathematical problem according to its textual description. Although recent methods have demonstrated promising results, most of them are based on a template-based generation scheme, which results in limited generalization ca...
https://aclanthology.org/2021.findings-emnlp.68
https://aclanthology.org/2021.findings-emnlp.68.pdf
EMNLP 2021
AIM-Harvard/EMNLP-Accepted-Papers
default
emnlp_findings_2021
An Uncertainty-Aware Encoder for Aspect Detection
Thi-Nhung Nguyen, Kiem-Hieu Nguyen, Young-In Song, Tuan-Dung Cao
Aspect detection is a fundamental task in opinion mining. Previous works use seed words either as priors of topic models, as anchors to guide the learning of aspects, or as features of aspect classifiers. This paper presents a novel weakly-supervised method to exploit seed words for aspect detection based on an encoder...
https://aclanthology.org/2021.findings-emnlp.69
https://aclanthology.org/2021.findings-emnlp.69.pdf
EMNLP 2021
AIM-Harvard/EMNLP-Accepted-Papers
default
emnlp_findings_2021
Improving Empathetic Response Generation by Recognizing Emotion Cause in Conversations
Jun Gao, Yuhan Liu, Haolin Deng, Wei Wang, Yu Cao, Jiachen Du, Ruifeng Xu
Current approaches to empathetic response generation focus on learning a model to predict an emotion label and generate a response based on this label and have achieved promising results. However, the emotion cause, an essential factor for empathetic responding, is ignored. The emotion cause is a stimulus for human emo...
https://aclanthology.org/2021.findings-emnlp.70
https://aclanthology.org/2021.findings-emnlp.70.pdf
EMNLP 2021
AIM-Harvard/EMNLP-Accepted-Papers
default
emnlp_findings_2021
Probing Across Time: What Does RoBERTa Know and When?
Zeyu Liu, Yizhong Wang, Jungo Kasai, Hannaneh Hajishirzi, Noah A. Smith
Models of language trained on very large corpora have been demonstrated useful for natural language processing. As fixed artifacts, they have become the object of intense study, with many researchers “probing” the extent to which they acquire and readily demonstrate linguistic abstractions, factual and commonsense know...
https://aclanthology.org/2021.findings-emnlp.71
https://aclanthology.org/2021.findings-emnlp.71.pdf
EMNLP 2021
AIM-Harvard/EMNLP-Accepted-Papers
default
emnlp_findings_2021
Knowledge-Guided Paraphrase Identification
Haoyu Wang, Fenglong Ma, Yaqing Wang, Jing Gao
Paraphrase identification (PI), a fundamental task in natural language processing, is to identify whether two sentences express the same or similar meaning, which is a binary classification problem. Recently, BERT-like pre-trained language models have been a popular choice for the frameworks of various PI models, but a...
https://aclanthology.org/2021.findings-emnlp.72
https://aclanthology.org/2021.findings-emnlp.72.pdf
EMNLP 2021
AIM-Harvard/EMNLP-Accepted-Papers
default
emnlp_findings_2021
R2-D2: A Modular Baseline for Open-Domain Question Answering
Martin Fajcik, Martin Docekal, Karel Ondrej, Pavel Smrz
This work presents a novel four-stage open-domain QA pipeline, R2-D2 (Rank twice, reaD twice). The pipeline is composed of a retriever, a passage reranker, an extractive reader, a generative reader, and a mechanism that aggregates the final prediction from all of the system’s components. We demonstrate its strength across three open-d...
https://aclanthology.org/2021.findings-emnlp.73
https://aclanthology.org/2021.findings-emnlp.73.pdf
EMNLP 2021
AIM-Harvard/EMNLP-Accepted-Papers
default
emnlp_findings_2021
What Does Your Smile Mean? Jointly Detecting Multi-Modal Sarcasm and Sentiment Using Quantum Probability
Yaochen Liu, Yazhou Zhang, Qiuchi Li, Benyou Wang, Dawei Song
Sarcasm and sentiment embody intrinsic uncertainty of human cognition, making joint detection of multi-modal sarcasm and sentiment a challenging task. In view of the advantages of quantum probability (QP) in modeling such uncertainty, this paper explores the potential of QP as a mathematical framework and proposes a QP...
https://aclanthology.org/2021.findings-emnlp.74
https://aclanthology.org/2021.findings-emnlp.74.pdf
EMNLP 2021
AIM-Harvard/EMNLP-Accepted-Papers
default
emnlp_findings_2021
Discovering Representation Sprachbund For Multilingual Pre-Training
Yimin Fan, Yaobo Liang, Alexandre Muzio, Hany Hassan, Houqiang Li, Ming Zhou, Nan Duan
Multilingual pre-trained models have demonstrated their effectiveness in many multilingual NLP tasks and enabled zero-shot or few-shot transfer from high-resource languages to low-resource ones. However, due to significant typological differences and contradictions between some languages, such models usually perform po...
https://aclanthology.org/2021.findings-emnlp.75
https://aclanthology.org/2021.findings-emnlp.75.pdf
EMNLP 2021
AIM-Harvard/EMNLP-Accepted-Papers
default
emnlp_findings_2021
Plan-then-Generate: Controlled Data-to-Text Generation via Planning
Yixuan Su, David Vandyke, Sihui Wang, Yimai Fang, Nigel Collier
Recent developments in neural networks have led to the advance in data-to-text generation. However, the lack of ability of neural models to control the structure of generated output can be limiting in certain real-world applications. In this study, we propose a novel Plan-then-Generate (PlanGen) framework to improve th...
https://aclanthology.org/2021.findings-emnlp.76
https://aclanthology.org/2021.findings-emnlp.76.pdf
EMNLP 2021
AIM-Harvard/EMNLP-Accepted-Papers
default
emnlp_findings_2021
Few-Shot Table-to-Text Generation with Prototype Memory
Yixuan Su, Zaiqiao Meng, Simon Baker, Nigel Collier
Neural table-to-text generation models have achieved remarkable progress on an array of tasks. However, due to the data-hungry nature of neural models, their performances strongly rely on large-scale training examples, limiting their applicability in real-world applications. To address this, we propose a new framework:...
https://aclanthology.org/2021.findings-emnlp.77
https://aclanthology.org/2021.findings-emnlp.77.pdf
EMNLP 2021
AIM-Harvard/EMNLP-Accepted-Papers
default
emnlp_findings_2021
Leveraging Word-Formation Knowledge for Chinese Word Sense Disambiguation
Hua Zheng, Lei Li, Damai Dai, Deli Chen, Tianyu Liu, Xu Sun, Yang Liu
In parataxis languages like Chinese, word meanings are constructed using specific word-formations, which can help to disambiguate word senses. However, such knowledge is rarely explored in previous word sense disambiguation (WSD) methods. In this paper, we propose to leverage word-formation knowledge to enhance Chinese...
https://aclanthology.org/2021.findings-emnlp.78
https://aclanthology.org/2021.findings-emnlp.78.pdf
EMNLP 2021
AIM-Harvard/EMNLP-Accepted-Papers
default
emnlp_findings_2021
Exploiting Curriculum Learning in Unsupervised Neural Machine Translation
Jinliang Lu, Jiajun Zhang
Back-translation (BT) has become one of the de facto components in unsupervised neural machine translation (UNMT), as it is what explicitly gives UNMT its translation ability. However, all the pseudo bi-texts generated by BT are treated equally as clean data during optimization, without considering their diversity in quality, lead...
https://aclanthology.org/2021.findings-emnlp.79
https://aclanthology.org/2021.findings-emnlp.79.pdf
EMNLP 2021
AIM-Harvard/EMNLP-Accepted-Papers
default
emnlp_findings_2021
Robust Fragment-Based Framework for Cross-lingual Sentence Retrieval
Nattapol Trijakwanich, Peerat Limkonchotiwat, Raheem Sarwar, Wannaphong Phatthiyaphaibun, Ekapol Chuangsuwanich, Sarana Nutanong
Cross-lingual Sentence Retrieval (CLSR) aims at retrieving parallel sentence pairs that are translations of each other from a multilingual set of comparable documents. The retrieved parallel sentence pairs can be used in other downstream NLP tasks such as machine translation and cross-lingual word sense disambiguation....
https://aclanthology.org/2021.findings-emnlp.80
https://aclanthology.org/2021.findings-emnlp.80.pdf
EMNLP 2021
AIM-Harvard/EMNLP-Accepted-Papers
default
emnlp_findings_2021
Towards Improving Adversarial Training of NLP Models
Jin Yong Yoo, Yanjun Qi
Adversarial training, a method for learning robust deep neural networks, constructs adversarial examples during training. However, recent methods for generating NLP adversarial examples involve combinatorial search and expensive sentence encoders for constraining the generated instances. As a result, it remains challen...
https://aclanthology.org/2021.findings-emnlp.81
https://aclanthology.org/2021.findings-emnlp.81.pdf
EMNLP 2021
AIM-Harvard/EMNLP-Accepted-Papers
default
emnlp_findings_2021
To Protect and To Serve? Analyzing Entity-Centric Framing of Police Violence
Caleb Ziems, Diyi Yang
Framing has significant but subtle effects on public opinion and policy. We propose an NLP framework to measure entity-centric frames. We use it to understand media coverage on police violence in the United States in a new Police Violence Frames Corpus of 82k news articles spanning 7k police killings. Our work uncovers...
https://aclanthology.org/2021.findings-emnlp.82
https://aclanthology.org/2021.findings-emnlp.82.pdf
EMNLP 2021
AIM-Harvard/EMNLP-Accepted-Papers
default
emnlp_findings_2021
Calibrate your listeners! Robust communication-based training for pragmatic speakers
Rose Wang, Julia White, Jesse Mu, Noah Goodman
To be good conversational partners, natural language processing (NLP) systems should be trained to produce contextually useful utterances. Prior work has investigated training NLP systems with communication-based objectives, where a neural listener stands in as a communication partner. However, these systems commonly s...
https://aclanthology.org/2021.findings-emnlp.83
https://aclanthology.org/2021.findings-emnlp.83.pdf
EMNLP 2021
AIM-Harvard/EMNLP-Accepted-Papers
default
emnlp_findings_2021
When Retriever-Reader Meets Scenario-Based Multiple-Choice Questions
ZiXian Huang, Ao Wu, Yulin Shen, Gong Cheng, Yuzhong Qu
Scenario-based question answering (SQA) requires retrieving and reading paragraphs from a large corpus to answer a question which is contextualized by a long scenario description. Since a scenario contains both keyphrases for retrieval and much noise, retrieval for SQA is extremely difficult. Moreover, it can hardly be...
https://aclanthology.org/2021.findings-emnlp.84
https://aclanthology.org/2021.findings-emnlp.84.pdf
EMNLP 2021
AIM-Harvard/EMNLP-Accepted-Papers
default
emnlp_findings_2021
Structured abbreviation expansion in context
Kyle Gorman, Christo Kirov, Brian Roark, Richard Sproat
Ad hoc abbreviations are commonly found in informal communication channels that favor shorter messages. We consider the task of reversing these abbreviations in context to recover normalized, expanded versions of abbreviated messages. The problem is related to, but distinct from, spelling correction, as ad hoc abbrevia...
https://aclanthology.org/2021.findings-emnlp.85
https://aclanthology.org/2021.findings-emnlp.85.pdf
EMNLP 2021
AIM-Harvard/EMNLP-Accepted-Papers
default
emnlp_findings_2021
Task-adaptive Pre-training and Self-training are Complementary for Natural Language Understanding
Shiyang Li, Semih Yavuz, Wenhu Chen, Xifeng Yan
Task-adaptive pre-training (TAPT) and self-training (ST) have emerged as the major semi-supervised approaches to improving natural language understanding (NLU) tasks with massive amounts of unlabeled data. However, it is unclear whether they learn similar representations or whether they can be effectively combined. In this paper, ...
https://aclanthology.org/2021.findings-emnlp.86
https://aclanthology.org/2021.findings-emnlp.86.pdf
EMNLP 2021
AIM-Harvard/EMNLP-Accepted-Papers
default
emnlp_findings_2021
CNNBiF: CNN-based Bigram Features for Named Entity Recognition
Chul Sung, Vaibhava Goel, Etienne Marcheret, Steven Rennie, David Nahamoo
Transformer models fine-tuned with a sequence labeling objective have become the dominant choice for named entity recognition tasks. However, a self-attention mechanism with unconstrained length can fail to fully capture local dependencies, particularly when training data is limited. In this paper, we propose a novel j...
https://aclanthology.org/2021.findings-emnlp.87
https://aclanthology.org/2021.findings-emnlp.87.pdf
EMNLP 2021
AIM-Harvard/EMNLP-Accepted-Papers
default
emnlp_findings_2021
Compositional Generalization via Semantic Tagging
Hao Zheng, Mirella Lapata
Although neural sequence-to-sequence models have been successfully applied to semantic parsing, they fail at compositional generalization, i.e., they are unable to systematically generalize to unseen compositions of seen components. Motivated by traditional semantic parsing where compositionality is explicitly accounte...
https://aclanthology.org/2021.findings-emnlp.88
https://aclanthology.org/2021.findings-emnlp.88.pdf
EMNLP 2021
AIM-Harvard/EMNLP-Accepted-Papers
default
emnlp_findings_2021
Towards Document-Level Paraphrase Generation with Sentence Rewriting and Reordering
Zhe Lin, Yitao Cai, Xiaojun Wan
Paraphrase generation is an important task in natural language processing. Previous works focus on sentence-level paraphrase generation, while ignoring document-level paraphrase generation, which is a more challenging and valuable task. In this paper, we explore the task of document-level paraphrase generation for the ...
https://aclanthology.org/2021.findings-emnlp.89
https://aclanthology.org/2021.findings-emnlp.89.pdf
EMNLP 2021
AIM-Harvard/EMNLP-Accepted-Papers
default
emnlp_findings_2021
Exploring Decomposition for Table-based Fact Verification
Xiaoyu Yang, Xiaodan Zhu
Fact verification based on structured data is challenging as it requires models to understand both natural language and symbolic operations performed over tables. Although pre-trained language models have demonstrated a strong capability in verifying simple statements, they struggle with complex statements that involve...
https://aclanthology.org/2021.findings-emnlp.90
https://aclanthology.org/2021.findings-emnlp.90.pdf
EMNLP 2021
AIM-Harvard/EMNLP-Accepted-Papers
default
emnlp_findings_2021
Diversity and Consistency: Exploring Visual Question-Answer Pair Generation
Sen Yang, Qingyu Zhou, Dawei Feng, Yang Liu, Chao Li, Yunbo Cao, Dongsheng Li
Although it shows promising value for downstream applications, generating questions and answers together remains under-explored. In this paper, we introduce a novel task that targets question-answer pair generation from visual images. It requires not only generating diverse question-answer pairs but also keeping the consistenc...
https://aclanthology.org/2021.findings-emnlp.91
https://aclanthology.org/2021.findings-emnlp.91.pdf
EMNLP 2021
AIM-Harvard/EMNLP-Accepted-Papers
default
emnlp_findings_2021
Entity-level Cross-modal Learning Improves Multi-modal Machine Translation
Xin Huang, Jiajun Zhang, Chengqing Zong
Multi-modal machine translation (MMT) aims at improving translation performance by incorporating visual information. Most of the studies leverage the visual information through integrating the global image features as auxiliary input or decoding by attending to relevant local regions of the image. However, this kind of...
https://aclanthology.org/2021.findings-emnlp.92
https://aclanthology.org/2021.findings-emnlp.92.pdf
EMNLP 2021
AIM-Harvard/EMNLP-Accepted-Papers
default
emnlp_findings_2021
Learning to Ground Visual Objects for Visual Dialog
Feilong Chen, Xiuyi Chen, Can Xu, Daxin Jiang
Visual dialog is challenging since it needs to answer a series of coherent questions based on understanding the visual environment. How to ground related visual objects is one of the key problems. Previous studies utilize the question and history to attend to the image and achieve satisfactory performance, while these ...
https://aclanthology.org/2021.findings-emnlp.93
https://aclanthology.org/2021.findings-emnlp.93.pdf
EMNLP 2021
AIM-Harvard/EMNLP-Accepted-Papers
default
emnlp_findings_2021
KERS: A Knowledge-Enhanced Framework for Recommendation Dialog Systems with Multiple Subgoals
Jun Zhang, Yan Yang, Chencai Chen, Liang He, Zhou Yu
Recommendation dialogs require the system to build a social bond with users to gain trust and develop affinity in order to increase the chance of a successful recommendation. It is beneficial to divide up such conversations with multiple subgoals (such as social chat, question answering, recommendation, etc.), so that...
https://aclanthology.org/2021.findings-emnlp.94
https://aclanthology.org/2021.findings-emnlp.94.pdf
EMNLP 2021
AIM-Harvard/EMNLP-Accepted-Papers
default
emnlp_findings_2021
Less Is More: Domain Adaptation with Lottery Ticket for Reading Comprehension
Haichao Zhu, Zekun Wang, Heng Zhang, Ming Liu, Sendong Zhao, Bing Qin
In this paper, we propose a simple few-shot domain adaptation paradigm for reading comprehension. We first identify the lottery subnetwork structure within the Transformer-based source domain model via gradual magnitude pruning. Then, we only fine-tune the lottery subnetwork, a small fraction of the whole parameters, o...
https://aclanthology.org/2021.findings-emnlp.95
https://aclanthology.org/2021.findings-emnlp.95.pdf
EMNLP 2021
AIM-Harvard/EMNLP-Accepted-Papers
default
emnlp_findings_2021
Effectiveness of Pre-training for Few-shot Intent Classification
Haode Zhang, Yuwei Zhang, Li-Ming Zhan, Jiaxin Chen, Guangyuan Shi, Albert Y.S. Lam, Xiao-Ming Wu
This paper investigates the effectiveness of pre-training for few-shot intent classification. While existing paradigms commonly further pre-train language models such as BERT on a vast amount of unlabeled corpus, we find it highly effective and efficient t... (Code: https://github.com/hdzhang-code/IntentBERT)
https://aclanthology.org/2021.findings-emnlp.96
https://aclanthology.org/2021.findings-emnlp.96.pdf
EMNLP 2021
AIM-Harvard/EMNLP-Accepted-Papers
default
emnlp_findings_2021
Improving Abstractive Dialogue Summarization with Hierarchical Pretraining and Topic Segment
MengNan Qi, Hao Liu, YuZhuo Fu, Ting Liu
With the increasing abundance of meeting transcripts, meeting summarization has attracted more and more attention from researchers. Unsupervised pre-training based on the Transformer architecture, combined with fine-tuning on downstream tasks, has achieved great success in the field of text summarization. However, the sema...
https://aclanthology.org/2021.findings-emnlp.97
https://aclanthology.org/2021.findings-emnlp.97.pdf
EMNLP 2021
AIM-Harvard/EMNLP-Accepted-Papers
default
emnlp_findings_2021
Learning to Answer Psychological Questionnaire for Personality Detection
Feifan Yang, Tao Yang, Xiaojun Quan, Qinliang Su
Existing text-based personality detection research mostly relies on data-driven approaches to implicitly capture personality cues in online posts, lacking the guidance of psychological knowledge. A psychological questionnaire, which contains a series of dedicated questions highly related to personality traits, plays a cr...
https://aclanthology.org/2021.findings-emnlp.98
https://aclanthology.org/2021.findings-emnlp.98.pdf
EMNLP 2021
AIM-Harvard/EMNLP-Accepted-Papers
default
emnlp_findings_2021
Exploiting Reasoning Chains for Multi-hop Science Question Answering
Weiwen Xu, Yang Deng, Huihui Zhang, Deng Cai, Wai Lam
We propose a novel Chain Guided Retriever-reader (CGR) framework to model the reasoning chain for multi-hop Science Question Answering. Our framework is capable of performing explainable reasoning without the need of any corpus-specific annotations, such as the ground-truth reasoning...
https://aclanthology.org/2021.findings-emnlp.99
https://aclanthology.org/2021.findings-emnlp.99.pdf
EMNLP 2021
AIM-Harvard/EMNLP-Accepted-Papers
default
emnlp_findings_2021
Winnowing Knowledge for Multi-choice Question Answering
Yeqiu Li, Bowei Zou, Zhifeng Li, Ai Ti Aw, Yu Hong, Qiaoming Zhu
We tackle multi-choice question answering. Acquiring commonsense knowledge related to the question and options facilitates the recognition of the correct answer. However, current reasoning models suffer from noise in the retrieved knowledge. In this paper, we propose a novel encoding method which is able to co...
https://aclanthology.org/2021.findings-emnlp.100
https://aclanthology.org/2021.findings-emnlp.100.pdf
EMNLP 2021
AIM-Harvard/EMNLP-Accepted-Papers
default
emnlp_findings_2021