venue (string, 1 class) | title (string, 18–162 chars) | abstract (string, 252–1.89k chars) | doc_id (string, 32 chars) | publication_year (int64, 2019–2022) | sentences (list, 1–13 items) | events (list, 1–24 items) | document (list, 50–348 tokens) |
|---|---|---|---|---|---|---|---|
ACL | Using Context in Neural Machine Translation Training Objectives | We present Neural Machine Translation (NMT) training using document-level metrics with batch-level documents. Previous sequence-objective approaches to NMT training focus exclusively on sentence-level metrics like sentence BLEU which do not correspond to the desired evaluation metric, typically document BLEU. Meanwhile research into document-level NMT training focuses on data or model architecture rather than training procedure. We find that each of these lines of research has a clear space in it for the other, and propose merging them with a scheme that allows a document-level evaluation metric to be used in the NMT training objective. We first sample pseudo-documents from sentence samples. We then approximate the expected document BLEU gradient with Monte Carlo sampling for use as a cost function in Minimum Risk Training (MRT). This two-level sampling procedure gives NMT performance gains over sequence MRT and maximum-likelihood training. We demonstrate that training is more robust for document-level metrics than with sequence metrics. We further demonstrate improvements on NMT with TER and Grammatical Error Correction (GEC) using GLEU, both metrics used at the document level for evaluations. | d7d5022680f79faaff67d1d696f418fa | 2,020 | [
"we present neural machine translation ( nmt ) training using document - level metrics with batch - level documents .",
"previous sequence - objective approaches to nmt training focus exclusively on sentence - level metrics like sentence bleu which do not correspond to the desired evaluation metric , typically do... | [
{
"arguments": [
{
"argument_type": "Proposer",
"nugget_type": "OG",
"offsets": [
0
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "APP",
"offsets": [
... | [
"we",
"present",
"neural",
"machine",
"translation",
"(",
"nmt",
")",
"training",
"using",
"document",
"-",
"level",
"metrics",
"with",
"batch",
"-",
"level",
"documents",
".",
"previous",
"sequence",
"-",
"objective",
"approaches",
"to",
"nmt",
"training",
"f... |
ACL | Learning Event Graph Knowledge for Abductive Reasoning | Abductive reasoning aims at inferring the most plausible explanation for observed events, which would play critical roles in various NLP applications, such as reading comprehension and question answering. To facilitate this task, a narrative text based abductive reasoning task 𝛼NLI is proposed, together with explorations about building reasoning framework using pretrained language models. However, abundant event commonsense knowledge is not well exploited for this task. To fill this gap, we propose a variational autoencoder based model ege-RoBERTa, which employs a latent variable to capture the necessary commonsense knowledge from event graph for guiding the abductive reasoning task. Experimental results show that through learning the external event graph knowledge, our approach outperforms the baseline methods on the 𝛼NLI task. | 5de56f6a42e445a2823b4134d8e845a9 | 2,021 | [
"abductive reasoning aims at inferring the most plausible explanation for observed events , which would play critical roles in various nlp applications , such as reading comprehension and question answering .",
"to facilitate this task , a narrative text based abductive reasoning task [UNK] is proposed , together... | [
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
0,
1
],
"text": "abductive reasoning",
"tokens": [
"abductive",
"reasoning"
]
}
],
"event_type": "ITT",
"trigge... | [
"abductive",
"reasoning",
"aims",
"at",
"inferring",
"the",
"most",
"plausible",
"explanation",
"for",
"observed",
"events",
",",
"which",
"would",
"play",
"critical",
"roles",
"in",
"various",
"nlp",
"applications",
",",
"such",
"as",
"reading",
"comprehension",
... |
ACL | Glancing Transformer for Non-Autoregressive Neural Machine Translation | Recent work on non-autoregressive neural machine translation (NAT) aims at improving the efficiency by parallel decoding without sacrificing the quality. However, existing NAT methods are either inferior to Transformer or require multiple decoding passes, leading to reduced speedup. We propose the Glancing Language Model (GLM) for single-pass parallel generation models. With GLM, we develop Glancing Transformer (GLAT) for machine translation. With only single-pass parallel decoding, GLAT is able to generate high-quality translation with 8×-15× speedup. Note that GLAT does not modify the network architecture, which is a training method to learn word interdependency. Experiments on multiple WMT language directions show that GLAT outperforms all previous single pass non-autoregressive methods, and is nearly comparable to Transformer, reducing the gap to 0.25-0.9 BLEU points. | 2f49880cc8858681fbec75915d0aa441 | 2,021 | [
"recent work on non - autoregressive neural machine translation ( nat ) aims at improving the efficiency by parallel decoding without sacrificing the quality .",
"however , existing nat methods are either inferior to transformer or require multiple decoding passes , leading to reduced speedup .",
"we propose th... | [
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
3,
4,
5,
6,
7,
8
],
"text": "non - autoregressive neural machine translation",
"tokens": [
"non",
... | [
"recent",
"work",
"on",
"non",
"-",
"autoregressive",
"neural",
"machine",
"translation",
"(",
"nat",
")",
"aims",
"at",
"improving",
"the",
"efficiency",
"by",
"parallel",
"decoding",
"without",
"sacrificing",
"the",
"quality",
".",
"however",
",",
"existing",
... |
ACL | Generalized Tuning of Distributional Word Vectors for Monolingual and Cross-Lingual Lexical Entailment | Lexical entailment (LE; also known as hyponymy-hypernymy or is-a relation) is a core asymmetric lexical relation that supports tasks like taxonomy induction and text generation. In this work, we propose a simple and effective method for fine-tuning distributional word vectors for LE. Our Generalized Lexical ENtailment model (GLEN) is decoupled from the word embedding model and applicable to any distributional vector space. Yet – unlike existing retrofitting models – it captures a general specialization function allowing for LE-tuning of the entire distributional space and not only the vectors of words seen in lexical constraints. Coupled with a multilingual embedding space, GLEN seamlessly enables cross-lingual LE detection. We demonstrate the effectiveness of GLEN in graded LE and report large improvements (over 20% in accuracy) over state-of-the-art in cross-lingual LE detection. | f23b64f32731964aad292c8b17b84875 | 2,019 | [
"lexical entailment ( le ; also known as hyponymy - hypernymy or is - a relation ) is a core asymmetric lexical relation that supports tasks like taxonomy induction and text generation .",
"in this work , we propose a simple and effective method for fine - tuning distributional word vectors for le .",
"our gene... | [
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "FEA",
"offsets": [
125
],
"text": "lexical entailment",
"tokens": [
"le"
]
}
],
"event_type": "ITT",
"trigger": {
"offsets": [
22
... | [
"lexical",
"entailment",
"(",
"le",
";",
"also",
"known",
"as",
"hyponymy",
"-",
"hypernymy",
"or",
"is",
"-",
"a",
"relation",
")",
"is",
"a",
"core",
"asymmetric",
"lexical",
"relation",
"that",
"supports",
"tasks",
"like",
"taxonomy",
"induction",
"and",
... |
ACL | On the Encoder-Decoder Incompatibility in Variational Text Modeling and Beyond | Variational autoencoders (VAEs) combine latent variables with amortized variational inference, whose optimization usually converges into a trivial local optimum termed posterior collapse, especially in text modeling. By tracking the optimization dynamics, we observe the encoder-decoder incompatibility that leads to poor parameterizations of the data manifold. We argue that the trivial local optimum may be avoided by improving the encoder and decoder parameterizations since the posterior network is part of a transition map between them. To this end, we propose Coupled-VAE, which couples a VAE model with a deterministic autoencoder with the same structure and improves the encoder and decoder parameterizations via encoder weight sharing and decoder signal matching. We apply the proposed Coupled-VAE approach to various VAE models with different regularization, posterior family, decoder structure, and optimization strategy. Experiments on benchmark datasets (i.e., PTB, Yelp, and Yahoo) show consistently improved results in terms of probability estimation and richness of the latent space. We also generalize our method to conditional language modeling and propose Coupled-CVAE, which largely improves the diversity of dialogue generation on the Switchboard dataset. | 8c458737268c27e7db14825bcb984856 | 2,020 | [
"variational autoencoders ( vaes ) combine latent variables with amortized variational inference , whose optimization usually converges into a trivial local optimum termed posterior collapse , especially in text modeling .",
"by tracking the optimization dynamics , we observe the encoder - decoder incompatibility... | [
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
0,
1
],
"text": "variational autoencoders",
"tokens": [
"variational",
"autoencoders"
]
}
],
"event_type": "ITT",
... | [
"variational",
"autoencoders",
"(",
"vaes",
")",
"combine",
"latent",
"variables",
"with",
"amortized",
"variational",
"inference",
",",
"whose",
"optimization",
"usually",
"converges",
"into",
"a",
"trivial",
"local",
"optimum",
"termed",
"posterior",
"collapse",
"... |
ACL | OpinionDigest: A Simple Framework for Opinion Summarization | We present OpinionDigest, an abstractive opinion summarization framework, which does not rely on gold-standard summaries for training. The framework uses an Aspect-based Sentiment Analysis model to extract opinion phrases from reviews, and trains a Transformer model to reconstruct the original reviews from these extractions. At summarization time, we merge extractions from multiple reviews and select the most popular ones. The selected opinions are used as input to the trained Transformer model, which verbalizes them into an opinion summary. OpinionDigest can also generate customized summaries, tailored to specific user needs, by filtering the selected opinions according to their aspect and/or sentiment. Automatic evaluation on Yelp data shows that our framework outperforms competitive baselines. Human studies on two corpora verify that OpinionDigest produces informative summaries and shows promising customization capabilities. | 467668a5a9da53b385fce3f041434a7d | 2,020 | [
"we present opiniondigest , an abstractive opinion summarization framework , which does not rely on gold - standard summaries for training .",
"the framework uses an aspect - based sentiment analysis model to extract opinion phrases from reviews , and trains a transformer model to reconstruct the original reviews... | [
{
"arguments": [
{
"argument_type": "Proposer",
"nugget_type": "OG",
"offsets": [
0
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "APP",
"offsets": [
... | [
"we",
"present",
"opiniondigest",
",",
"an",
"abstractive",
"opinion",
"summarization",
"framework",
",",
"which",
"does",
"not",
"rely",
"on",
"gold",
"-",
"standard",
"summaries",
"for",
"training",
".",
"the",
"framework",
"uses",
"an",
"aspect",
"-",
"base... |
ACL | Multi-Domain Dialogue Acts and Response Co-Generation | Generating fluent and informative responses is of critical importance for task-oriented dialogue systems. Existing pipeline approaches generally predict multiple dialogue acts first and use them to assist response generation. There are at least two shortcomings with such approaches. First, the inherent structures of multi-domain dialogue acts are neglected. Second, the semantic associations between acts and responses are not taken into account for response generation. To address these issues, we propose a neural co-generation model that generates dialogue acts and responses concurrently. Unlike those pipeline approaches, our act generation module preserves the semantic structures of multi-domain dialogue acts and our response generation module dynamically attends to different acts as needed. We train the two modules jointly using an uncertainty loss to adjust their task weights adaptively. Extensive experiments are conducted on the large-scale MultiWOZ dataset and the results show that our model achieves very favorable improvement over several state-of-the-art models in both automatic and human evaluations. | 71fb0c7d2635264af91b4c0e47bb9d1b | 2,020 | [
"generating fluent and informative responses is of critical importance for task - oriented dialogue systems .",
"existing pipeline approaches generally predict multiple dialogue acts first and use them to assist response generation .",
"there are at least two shortcomings with such approaches .",
"first , the... | [
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "APP",
"offsets": [
10,
11,
12,
13,
14
],
"text": "task - oriented dialogue systems",
"tokens": [
"task",
"-",
"orien... | [
"generating",
"fluent",
"and",
"informative",
"responses",
"is",
"of",
"critical",
"importance",
"for",
"task",
"-",
"oriented",
"dialogue",
"systems",
".",
"existing",
"pipeline",
"approaches",
"generally",
"predict",
"multiple",
"dialogue",
"acts",
"first",
"and",... |
ACL | LexSubCon: Integrating Knowledge from Lexical Resources into Contextual Embeddings for Lexical Substitution | Lexical substitution is the task of generating meaningful substitutes for a word in a given textual context. Contextual word embedding models have achieved state-of-the-art results in the lexical substitution task by relying on contextual information extracted from the replaced word within the sentence. However, such models do not take into account structured knowledge that exists in external lexical databases.We introduce LexSubCon, an end-to-end lexical substitution framework based on contextual embedding models that can identify highly-accurate substitute candidates. This is achieved by combining contextual information with knowledge from structured lexical resources. Our approach involves: (i) introducing a novel mix-up embedding strategy to the target word’s embedding through linearly interpolating the pair of the target input embedding and the average embedding of its probable synonyms; (ii) considering the similarity of the sentence-definition embeddings of the target word and its proposed candidates; and, (iii) calculating the effect of each substitution on the semantics of the sentence through a fine-tuned sentence similarity model. Our experiments show that LexSubCon outperforms previous state-of-the-art methods by at least 2% over all the official lexical substitution metrics on LS07 and CoInCo benchmark datasets that are widely used for lexical substitution tasks. | 13cec878ea64f2c69669d4bd5c580e6e | 2,022 | [
"lexical substitution is the task of generating meaningful substitutes for a word in a given textual context .",
"contextual word embedding models have achieved state - of - the - art results in the lexical substitution task by relying on contextual information extracted from the replaced word within the sentence... | [
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
0,
1
],
"text": "lexical substitution",
"tokens": [
"lexical",
"substitution"
]
}
],
"event_type": "ITT",
"trig... | [
"lexical",
"substitution",
"is",
"the",
"task",
"of",
"generating",
"meaningful",
"substitutes",
"for",
"a",
"word",
"in",
"a",
"given",
"textual",
"context",
".",
"contextual",
"word",
"embedding",
"models",
"have",
"achieved",
"state",
"-",
"of",
"-",
"the",... |
ACL | Learning Language Specific Sub-network for Multilingual Machine Translation | Multilingual neural machine translation aims at learning a single translation model for multiple languages. These jointly trained models often suffer from performance degradation on rich-resource language pairs. We attribute this degeneration to parameter interference. In this paper, we propose LaSS to jointly train a single unified multilingual MT model. LaSS learns Language Specific Sub-network (LaSS) for each language pair to counter parameter interference. Comprehensive experiments on IWSLT and WMT datasets with various Transformer architectures show that LaSS obtains gains on 36 language pairs by up to 1.2 BLEU. Besides, LaSS shows its strong generalization performance at easy adaptation to new language pairs and zero-shot translation. LaSS boosts zero-shot translation with an average of 8.3 BLEU on 30 language pairs. Codes and trained models are available at https://github.com/NLP-Playground/LaSS. | 80dec31c5b2ba876175c17ef4fd94982 | 2,021 | [
"multilingual neural machine translation aims at learning a single translation model for multiple languages .",
"these jointly trained models often suffer from performance degradationon rich - resource language pairs .",
"we attribute this degeneration to parameter interference .",
"in this paper , we propose... | [
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
0,
1,
2,
3
],
"text": "multilingual neural machine translation",
"tokens": [
"multilingual",
"neural",
"mac... | [
"multilingual",
"neural",
"machine",
"translation",
"aims",
"at",
"learning",
"a",
"single",
"translation",
"model",
"for",
"multiple",
"languages",
".",
"these",
"jointly",
"trained",
"models",
"often",
"suffer",
"from",
"performance",
"degradationon",
"rich",
"-",... |
ACL | Search from History and Reason for Future: Two-stage Reasoning on Temporal Knowledge Graphs | Temporal Knowledge Graphs (TKGs) have been developed and used in many different areas. Reasoning on TKGs that predicts potential facts (events) in the future brings great challenges to existing models. When facing a prediction task, human beings usually search useful historical information (i.e., clues) in their memories and then reason for future meticulously. Inspired by this mechanism, we propose CluSTeR to predict future facts in a two-stage manner, Clue Searching and Temporal Reasoning, accordingly. Specifically, at the clue searching stage, CluSTeR learns a beam search policy via reinforcement learning (RL) to induce multiple clues from historical facts. At the temporal reasoning stage, it adopts a graph convolution network based sequence method to deduce answers from clues. Experiments on four datasets demonstrate the substantial advantages of CluSTeR compared with the state-of-the-art methods. Moreover, the clues found by CluSTeR further provide interpretability for the results. | 192d8a4ffa593c92f2309fa163bcf4d9 | 2,021 | [
"temporal knowledge graphs ( tkgs ) have been developed and used in many different areas .",
"reasoning on tkgs that predicts potential facts ( events ) in the future brings great challenges to existing models .",
"when facing a prediction task , human beings usually search useful historical information ( i . e... | [
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
0,
1,
2
],
"text": "temporal knowledge graphs",
"tokens": [
"temporal",
"knowledge",
"graphs"
]
}
]... | [
"temporal",
"knowledge",
"graphs",
"(",
"tkgs",
")",
"have",
"been",
"developed",
"and",
"used",
"in",
"many",
"different",
"areas",
".",
"reasoning",
"on",
"tkgs",
"that",
"predicts",
"potential",
"facts",
"(",
"events",
")",
"in",
"the",
"future",
"brings"... |
ACL | Exploiting Invertible Decoders for Unsupervised Sentence Representation Learning | Encoder-decoder models for unsupervised sentence representation learning using the distributional hypothesis effectively constrain the learnt representation of a sentence to only that needed to reproduce the next sentence. While the decoder is important to constrain the representation, these models tend to discard the decoder after training since only the encoder is needed to map the input sentence into a vector representation. However, parameters learnt in the decoder also contain useful information about the language. In order to utilise the decoder after learning, we present two types of decoding functions whose inverse can be easily derived without expensive inverse calculation. Therefore, the inverse of the decoding function serves as another encoder that produces sentence representations. We show that, with careful design of the decoding functions, the model learns good sentence representations, and the ensemble of the representations produced from the encoder and the inverse of the decoder demonstrate even better generalisation ability and solid transferability. | 35069245e7f012e3ad914bb97dc0933c | 2,019 | [
"encoder - decoder models for unsupervised sentence representation learning using the distributional hypothesis effectively constrain the learnt representation of a sentence to only that needed to reproduce the next sentence .",
"while the decoder is important to constrain the representation , these models tend t... | [
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "APP",
"offsets": [
0,
1,
2,
3
],
"text": "encoder - decoder models",
"tokens": [
"encoder",
"-",
"decoder",
"models"... | [
"encoder",
"-",
"decoder",
"models",
"for",
"unsupervised",
"sentence",
"representation",
"learning",
"using",
"the",
"distributional",
"hypothesis",
"effectively",
"constrain",
"the",
"learnt",
"representation",
"of",
"a",
"sentence",
"to",
"only",
"that",
"needed",
... |
ACL | A Training-free and Reference-free Summarization Evaluation Metric via Centrality-weighted Relevance and Self-referenced Redundancy | In recent years, reference-based and supervised summarization evaluation metrics have been widely explored. However, collecting human-annotated references and ratings are costly and time-consuming. To avoid these limitations, we propose a training-free and reference-free summarization evaluation metric. Our metric consists of a centrality-weighted relevance score and a self-referenced redundancy score. The relevance score is computed between the pseudo reference built from the source document and the given summary, where the pseudo reference content is weighted by the sentence centrality to provide importance guidance. Besides an F1-based relevance score, we also design an F𝛽-based variant that pays more attention to the recall score. As for the redundancy score of the summary, we compute a self-masked similarity score with the summary itself to evaluate the redundant information in the summary. Finally, we combine the relevance and redundancy scores to produce the final evaluation score of the given summary. Extensive experiments show that our methods can significantly outperform existing methods on both multi-document and single-document summarization evaluation. The source code is released at https://github.com/Chen-Wang-CUHK/Training-Free-and-Ref-Free-Summ-Evaluation. | b41140ceee5159bf597022cd0a55ac97 | 2,021 | [
"in recent years , reference - based and supervised summarization evaluation metrics have been widely explored .",
"however , collecting human - annotated references and ratings are costly and time - consuming .",
"to avoid these limitations , we propose a training - free and reference - free summarization eval... | [
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "APP",
"offsets": [
4,
5,
6,
9,
10,
11
],
"text": "reference - based summarization evaluation metrics",
"tokens": [
"reference"... | [
"in",
"recent",
"years",
",",
"reference",
"-",
"based",
"and",
"supervised",
"summarization",
"evaluation",
"metrics",
"have",
"been",
"widely",
"explored",
".",
"however",
",",
"collecting",
"human",
"-",
"annotated",
"references",
"and",
"ratings",
"are",
"co... |
ACL | Learning to Identify Follow-Up Questions in Conversational Question Answering | Despite recent progress in conversational question answering, most prior work does not focus on follow-up questions. Practical conversational question answering systems often receive follow-up questions in an ongoing conversation, and it is crucial for a system to be able to determine whether a question is a follow-up question of the current conversation, for more effective answer finding subsequently. In this paper, we introduce a new follow-up question identification task. We propose a three-way attentive pooling network that determines the suitability of a follow-up question by capturing pair-wise interactions between the associated passage, the conversation history, and a candidate follow-up question. It enables the model to capture topic continuity and topic shift while scoring a particular candidate follow-up question. Experiments show that our proposed three-way attentive pooling network outperforms all baseline systems by significant margins. | 50955d0ef5ad879a6bd5a890da309b1d | 2,020 | [
"despite recent progress in conversational question answering , most prior work does not focus on follow - up questions .",
"practical conversational question answering systems often receive follow - up questions in an ongoing conversation , and it is crucial for a system to be able to determine whether a questio... | [
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
4,
5,
6
],
"text": "conversational question answering",
"tokens": [
"conversational",
"question",
"answering"
... | [
"despite",
"recent",
"progress",
"in",
"conversational",
"question",
"answering",
",",
"most",
"prior",
"work",
"does",
"not",
"focus",
"on",
"follow",
"-",
"up",
"questions",
".",
"practical",
"conversational",
"question",
"answering",
"systems",
"often",
"receiv... |
ACL | Improving Low-Resource Cross-lingual Document Retrieval by Reranking with Deep Bilingual Representations | In this paper, we propose to boost low-resource cross-lingual document retrieval performance with deep bilingual query-document representations. We match queries and documents in both source and target languages with four components, each of which is implemented as a term interaction-based deep neural network with cross-lingual word embeddings as input. By including query likelihood scores as extra features, our model effectively learns to rerank the retrieved documents by using a small number of relevance labels for low-resource language pairs. Due to the shared cross-lingual word embedding space, the model can also be directly applied to another language pair without any training label. Experimental results on the Material dataset show that our model outperforms the competitive translation-based baselines on English-Swahili, English-Tagalog, and English-Somali cross-lingual information retrieval tasks. | a49a86f25fc0f9aa890df32a9f0250cc | 2,019 | [
"in this paper , we propose to boost low - resource cross - lingual document retrieval performance with deep bilingual query - document representations .",
"we match queries and documents in both source and target languages with four components , each of which is implemented as a term interaction - based deep neu... | [
{
"arguments": [
{
"argument_type": "Proposer",
"nugget_type": "OG",
"offsets": [
4
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
... | [
"in",
"this",
"paper",
",",
"we",
"propose",
"to",
"boost",
"low",
"-",
"resource",
"cross",
"-",
"lingual",
"document",
"retrieval",
"performance",
"with",
"deep",
"bilingual",
"query",
"-",
"document",
"representations",
".",
"we",
"match",
"queries",
"and",... |
ACL | Multilingual Speech Translation from Efficient Finetuning of Pretrained Models | We present a simple yet effective approach to build multilingual speech-to-text (ST) translation through efficient transfer learning from a pretrained speech encoder and text decoder. Our key finding is that a minimalistic LNA (LayerNorm and Attention) finetuning can achieve zero-shot crosslingual and cross-modality transfer ability by only finetuning 10-50% of the pretrained parameters. This effectively leverages large pretrained models at low training cost such as wav2vec 2.0 for acoustic modeling, and mBART for multilingual text generation. This sets a new state-of-the-art for 36 translation directions (and surpassing cascaded ST for 26 of them) on the large-scale multilingual ST benchmark CoVoST 2 (+6.4 BLEU on average for En-X directions and +6.7 BLEU for X-En directions). Our approach demonstrates strong zero-shot performance in a many-to-many multilingual model (+5.6 BLEU on average across 28 non-English directions), making it an appealing approach for attaining high-quality speech translation with improved parameter and data efficiency. | 8950b621e2b0d27c770ce070cf3a2710 | 2,021 | [
"we present a simple yet effective approach to build multilingual speech - to - text ( st ) translation through efficient transfer learning from a pretrained speech encoder and text decoder .",
"our key finding is that a minimalistic lna ( layernorm and attention ) finetuning can achieve zero - shot crosslingual ... | [
{
"arguments": [
{
"argument_type": "Proposer",
"nugget_type": "OG",
"offsets": [
0
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "APP",
"offsets": [
... | [
"we",
"present",
"a",
"simple",
"yet",
"effective",
"approach",
"to",
"build",
"multilingual",
"speech",
"-",
"to",
"-",
"text",
"(",
"st",
")",
"translation",
"through",
"efficient",
"transfer",
"learning",
"from",
"a",
"pretrained",
"speech",
"encoder",
"and... |
ACL | ARBERT & MARBERT: Deep Bidirectional Transformers for Arabic | Pre-trained language models (LMs) are currently integral to many natural language processing systems. Although multilingual LMs were also introduced to serve many languages, these have limitations such as being costly at inference time and the size and diversity of non-English data involved in their pre-training. We remedy these issues for a collection of diverse Arabic varieties by introducing two powerful deep bidirectional transformer-based models, ARBERT and MARBERT. To evaluate our models, we also introduce ARLUE, a new benchmark for multi-dialectal Arabic language understanding evaluation. ARLUE is built using 42 datasets targeting six different task clusters, allowing us to offer a series of standardized experiments under rich conditions. When fine-tuned on ARLUE, our models collectively achieve new state-of-the-art results across the majority of tasks (37 out of 48 classification tasks, on the 42 datasets). Our best model acquires the highest ARLUE score (77.40) across all six task clusters, outperforming all other models including XLM-R Large ( 3.4x larger size). Our models are publicly available at https://github.com/UBC-NLP/marbert and ARLUE will be released through the same repository. | f2b4891faff4c22cbe5e10057df485bd | 2,021 | [
"pre - trained language models ( lms ) are currently integral to many natural language processing systems .",
"although multilingual lms were also introduced to serve many languages , these have limitations such as being costly at inference time and the size and diversity of non - english data involved in their p... | [
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "APP",
"offsets": [
0,
1,
2,
3,
4
],
"text": "pre - trained language models",
"tokens": [
"pre",
"-",
"trained",
... | [
"pre",
"-",
"trained",
"language",
"models",
"(",
"lms",
")",
"are",
"currently",
"integral",
"to",
"many",
"natural",
"language",
"processing",
"systems",
".",
"although",
"multilingual",
"lms",
"were",
"also",
"introduced",
"to",
"serve",
"many",
"languages",
... |
ACL | Merge and Label: A Novel Neural Network Architecture for Nested NER | Named entity recognition (NER) is one of the best studied tasks in natural language processing. However, most approaches are not capable of handling nested structures which are common in many applications. In this paper we introduce a novel neural network architecture that first merges tokens and/or entities into entities forming nested structures, and then labels each of them independently. Unlike previous work, our merge and label approach predicts real-valued instead of discrete segmentation structures, which allow it to combine word and nested entity embeddings while maintaining differentiability. We evaluate our approach using the ACE 2005 Corpus, where it achieves state-of-the-art F1 of 74.6, further improved with contextual embeddings (BERT) to 82.4, an overall improvement of close to 8 F1 points over previous approaches trained on the same data. Additionally we compare it against BiLSTM-CRFs, the dominant approach for flat NER structures, demonstrating that its ability to predict nested structures does not impact performance in simpler cases. | 2f2ec087efba5c576a3efe3261ea1908 | 2,019 | [
"named entity recognition ( ner ) is one of the best studied tasks in natural language processing .",
"however , most approaches are not capable of handling nested structures which are common in many applications .",
"in this paper we introduce a novel neural network architecture that first merges tokens and / ... | [
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
0,
1,
2
],
"text": "named entity recognition",
"tokens": [
"named",
"entity",
"recognition"
]
}
],
... | [
"named",
"entity",
"recognition",
"(",
"ner",
")",
"is",
"one",
"of",
"the",
"best",
"studied",
"tasks",
"in",
"natural",
"language",
"processing",
".",
"however",
",",
"most",
"approaches",
"are",
"not",
"capable",
"of",
"handling",
"nested",
"structures",
... |
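A minimal sketch of how the record above appears to be structured (this is an assumption from the truncated dump, not the dataset's official reader): each event argument carries "offsets" that index into the sentence's token list, and joining the tokens at those offsets should reproduce the argument's "text" field, as with the "named entity recognition" nugget shown.

```python
# Tokens copied from the start of the "document" field above.
tokens = ["named", "entity", "recognition", "(", "ner", ")", "is",
          "one", "of", "the", "best", "studied", "tasks"]

# Argument copied from the "events" field above (field names as shown
# in the dump; the full record schema is truncated, so treat this
# shape as an assumption).
argument = {
    "argument_type": "Target",
    "nugget_type": "TAK",
    "offsets": [0, 1, 2],
    "text": "named entity recognition",
}

def span_text(arg, sent_tokens):
    """Recover an argument's surface string from its token offsets."""
    return " ".join(sent_tokens[i] for i in arg["offsets"])

# The recovered span matches the stored "text" field.
assert span_text(argument, tokens) == argument["text"]
```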
ACL | TriggerNER: Learning with Entity Triggers as Explanations for Named Entity Recognition | Training neural models for named entity recognition (NER) in a new domain often requires additional human annotations (e.g., tens of thousands of labeled instances) that are usually expensive and time-consuming to collect. Thus, a crucial research question is how to obtain supervision in a cost-effective way. In this paper, we introduce “entity triggers,” an effective proxy of human explanations for facilitating label-efficient learning of NER models. An entity trigger is defined as a group of words in a sentence that helps to explain why humans would recognize an entity in the sentence. We crowd-sourced 14k entity triggers for two well-studied NER datasets. Our proposed model, Trigger Matching Network, jointly learns trigger representations and soft matching module with self-attention such that can generalize to unseen sentences easily for tagging. Our framework is significantly more cost-effective than the traditional neural NER frameworks. Experiments show that using only 20% of the trigger-annotated sentences results in a comparable performance as using 70% of conventional annotated sentences. | e53319032cc6df1f3fca11fdf3e33612 | 2,020 | [
"training neural models for named entity recognition ( ner ) in a new domain often requires additional human annotations ( e . g . , tens of thousands of labeled instances ) that are usually expensive and time - consuming to collect .",
"thus , a crucial research question is how to obtain supervision in a cost - ... | [
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "APP",
"offsets": [
1,
2,
3,
4,
5,
6
],
"text": "neural models for named entity recognition",
"tokens": [
"neural",
"... | [
"training",
"neural",
"models",
"for",
"named",
"entity",
"recognition",
"(",
"ner",
")",
"in",
"a",
"new",
"domain",
"often",
"requires",
"additional",
"human",
"annotations",
"(",
"e",
".",
"g",
".",
",",
"tens",
"of",
"thousands",
"of",
"labeled",
"inst... |
ACL | Defense against Synonym Substitution-based Adversarial Attacks via Dirichlet Neighborhood Ensemble | Although deep neural networks have achieved prominent performance on many NLP tasks, they are vulnerable to adversarial examples. We propose Dirichlet Neighborhood Ensemble (DNE), a randomized method for training a robust model to defense synonym substitution-based attacks. During training, DNE forms virtual sentences by sampling embedding vectors for each word in an input sentence from a convex hull spanned by the word and its synonyms, and it augments them with the training data. In such a way, the model is robust to adversarial attacks while maintaining the performance on the original clean data. DNE is agnostic to the network architectures and scales to large models (e.g., BERT) for NLP applications. Through extensive experimentation, we demonstrate that our method consistently outperforms recently proposed defense methods by a significant margin across different network architectures and multiple data sets. | d67821c76c052b390a6fa1a2e7df8172 | 2,021 | [
"although deep neural networks have achieved prominent performance on many nlp tasks , they are vulnerable to adversarial examples .",
"we propose dirichlet neighborhood ensemble ( dne ) , a randomized method for training a robust model to defense synonym substitution - based attacks .",
"during training , dne ... | [
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "APP",
"offsets": [
1,
2,
3
],
"text": "deep neural networks",
"tokens": [
"deep",
"neural",
"networks"
]
}
],
"eve... | [
"although",
"deep",
"neural",
"networks",
"have",
"achieved",
"prominent",
"performance",
"on",
"many",
"nlp",
"tasks",
",",
"they",
"are",
"vulnerable",
"to",
"adversarial",
"examples",
".",
"we",
"propose",
"dirichlet",
"neighborhood",
"ensemble",
"(",
"dne",
... |
ACL | Fact-based Content Weighting for Evaluating Abstractive Summarisation | Abstractive summarisation is notoriously hard to evaluate since standard word-overlap-based metrics are insufficient. We introduce a new evaluation metric which is based on fact-level content weighting, i.e. relating the facts of the document to the facts of the summary. We follow the assumption that a good summary will reflect all relevant facts, i.e. the ones present in the ground truth (human-generated reference summary). We confirm this hypothesis by showing that our weightings are highly correlated to human perception and compare favourably to the recent manual highlight-based metric of Hardy et al. (2019). | 4bcee9c4e9d80c5b37672738a4a6d9b2 | 2,020 | [
"abstractive summarisation is notoriously hard to evaluate since standard word - overlap - based metrics are insufficient .",
"we introduce a new evaluation metric which is based on fact - level content weighting , i . e . relating the facts of the document to the facts of the summary .",
"we follow the assumpt... | [
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
0,
1
],
"text": "abstractive summarisation",
"tokens": [
"abstractive",
"summarisation"
]
}
],
"event_type": "ITT",... | [
"abstractive",
"summarisation",
"is",
"notoriously",
"hard",
"to",
"evaluate",
"since",
"standard",
"word",
"-",
"overlap",
"-",
"based",
"metrics",
"are",
"insufficient",
".",
"we",
"introduce",
"a",
"new",
"evaluation",
"metric",
"which",
"is",
"based",
"on",
... |
ACL | A Tale of a Probe and a Parser | Measuring what linguistic information is encoded in neural models of language has become popular in NLP. Researchers approach this enterprise by training “probes”—supervised models designed to extract linguistic structure from another model’s output. One such probe is the structural probe (Hewitt and Manning, 2019), designed to quantify the extent to which syntactic information is encoded in contextualised word representations. The structural probe has a novel design, unattested in the parsing literature, the precise benefit of which is not immediately obvious. To explore whether syntactic probes would do better to make use of existing techniques, we compare the structural probe to a more traditional parser with an identical lightweight parameterisation. The parser outperforms structural probe on UUAS in seven of nine analysed languages, often by a substantial amount (e.g. by 11.1 points in English). Under a second less common metric, however, there is the opposite trend—the structural probe outperforms the parser. This begs the question: which metric should we prefer? | 96238aae558577321fcde8c9dc1745cd | 2,020 | [
"measuring what linguistic information is encoded in neural models of language has become popular in nlp .",
"researchers approach this enterprise by training “ probes ” — supervised models designed to extract linguistic structure from another model ’ s output .",
"one such probe is the structural probe ( hewit... | [
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "FEA",
"offsets": [
2,
3
],
"text": "linguistic information",
"tokens": [
"linguistic",
"information"
]
}
],
"event_type": "ITT",
"... | [
"measuring",
"what",
"linguistic",
"information",
"is",
"encoded",
"in",
"neural",
"models",
"of",
"language",
"has",
"become",
"popular",
"in",
"nlp",
".",
"researchers",
"approach",
"this",
"enterprise",
"by",
"training",
"“",
"probes",
"”",
"—",
"supervised",... |
ACL | STARC: Structured Annotations for Reading Comprehension | We present STARC (Structured Annotations for Reading Comprehension), a new annotation framework for assessing reading comprehension with multiple choice questions. Our framework introduces a principled structure for the answer choices and ties them to textual span annotations. The framework is implemented in OneStopQA, a new high-quality dataset for evaluation and analysis of reading comprehension in English. We use this dataset to demonstrate that STARC can be leveraged for a key new application for the development of SAT-like reading comprehension materials: automatic annotation quality probing via span ablation experiments. We further show that it enables in-depth analyses and comparisons between machine and human reading comprehension behavior, including error distributions and guessing ability. Our experiments also reveal that the standard multiple choice dataset in NLP, RACE, is limited in its ability to measure reading comprehension. 47% of its questions can be guessed by machines without accessing the passage, and 18% are unanimously judged by humans as not having a unique correct answer. OneStopQA provides an alternative test set for reading comprehension which alleviates these shortcomings and has a substantially higher human ceiling performance. | 14afb55f0102379d36a344f64ba77953 | 2,020 | [
"we present starc ( structured annotations for reading comprehension ) , a new annotation framework for assessing reading comprehension with multiple choice questions .",
"our framework introduces a principled structure for the answer choices and ties them to textual span annotations .",
"the framework is imple... | [
{
"arguments": [
{
"argument_type": "Proposer",
"nugget_type": "OG",
"offsets": [
0
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "APP",
"offsets": [
... | [
"we",
"present",
"starc",
"(",
"structured",
"annotations",
"for",
"reading",
"comprehension",
")",
",",
"a",
"new",
"annotation",
"framework",
"for",
"assessing",
"reading",
"comprehension",
"with",
"multiple",
"choice",
"questions",
".",
"our",
"framework",
"int... |
ACL | Self-Regulated Interactive Sequence-to-Sequence Learning | Not all types of supervision signals are created equal: Different types of feedback have different costs and effects on learning. We show how self-regulation strategies that decide when to ask for which kind of feedback from a teacher (or from oneself) can be cast as a learning-to-learn problem leading to improved cost-aware sequence-to-sequence learning. In experiments on interactive neural machine translation, we find that the self-regulator discovers an 𝜖-greedy strategy for the optimal cost-quality trade-off by mixing different feedback types including corrections, error markups, and self-supervision. Furthermore, we demonstrate its robustness under domain shift and identify it as a promising alternative to active learning. | ddc40b77dfe2002119dd45302273b018 | 2,019 | [
"not all types of supervision signals are created equal : different types of feedback have different costs and effects on learning .",
"we show how self - regulation strategies that decide when to ask for which kind of feedback from a teacher ( or from oneself ) can be cast as a learning - to - learn problem lead... | [
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
60
],
"text": "improved",
"tokens": [
"improved"
]
},
{
"argument_type": "BaseComponent",
"nugget_type": "APP",
... | [
"not",
"all",
"types",
"of",
"supervision",
"signals",
"are",
"created",
"equal",
":",
"different",
"types",
"of",
"feedback",
"have",
"different",
"costs",
"and",
"effects",
"on",
"learning",
".",
"we",
"show",
"how",
"self",
"-",
"regulation",
"strategies",
... |
ACL | Dependency Graph Enhanced Dual-transformer Structure for Aspect-based Sentiment Classification | Aspect-based sentiment classification is a popular task aimed at identifying the corresponding emotion of a specific aspect. One sentence may contain various sentiments for different aspects. Many sophisticated methods such as attention mechanism and Convolutional Neural Networks (CNN) have been widely employed for handling this challenge. Recently, semantic dependency tree implemented by Graph Convolutional Networks (GCN) is introduced to describe the inner connection between aspects and the associated emotion words. But the improvement is limited due to the noise and instability of dependency trees. To this end, we propose a dependency graph enhanced dual-transformer network (named DGEDT) by jointly considering the flat representations learnt from Transformer and graph-based representations learnt from the corresponding dependency graph in an iterative interaction manner. Specifically, a dual-transformer structure is devised in DGEDT to support mutual reinforcement between the flat representation learning and graph-based representation learning. The idea is to allow the dependency graph to guide the representation learning of the transformer encoder and vice versa. The results on five datasets demonstrate that the proposed DGEDT outperforms all state-of-the-art alternatives with a large margin. | 596c1341fb15c39e21eb5ec58c1be2a0 | 2,020 | [
"aspect - based sentiment classification is a popular task aimed at identifying the corresponding emotion of a specific aspect .",
"one sentence may contain various sentiments for different aspects .",
"many sophisticated methods such as attention mechanism and convolutional neural networks ( cnn ) have been wi... | [
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
0,
1,
2,
3,
4
],
"text": "aspect - based sentiment classification",
"tokens": [
"aspect",
"-",
"b... | [
"aspect",
"-",
"based",
"sentiment",
"classification",
"is",
"a",
"popular",
"task",
"aimed",
"at",
"identifying",
"the",
"corresponding",
"emotion",
"of",
"a",
"specific",
"aspect",
".",
"one",
"sentence",
"may",
"contain",
"various",
"sentiments",
"for",
"diff... |
ACL | Learning Disentangled Semantic Representations for Zero-Shot Cross-Lingual Transfer in Multilingual Machine Reading Comprehension | Multilingual pre-trained models are able to zero-shot transfer knowledge from rich-resource to low-resource languages in machine reading comprehension (MRC). However, inherent linguistic discrepancies in different languages could make answer spans predicted by zero-shot transfer violate syntactic constraints of the target language. In this paper, we propose a novel multilingual MRC framework equipped with a Siamese Semantic Disentanglement Model (S2DM) to disassociate semantics from syntax in representations learned by multilingual pre-trained models. To explicitly transfer only semantic knowledge to the target language, we propose two groups of losses tailored for semantic and syntactic encoding and disentanglement. Experimental results on three multilingual MRC datasets (i.e., XQuAD, MLQA, and TyDi QA) demonstrate the effectiveness of our proposed approach over models based on mBERT and XLM-100. | 079965c5d73e9685d7940cbd268c46a1 | 2,022 | [
"multilingual pre - trained models are able to zero - shot transfer knowledge from rich - resource to low - resource languages in machine reading comprehension ( mrc ) .",
"however , inherent linguistic discrepancies in different languages could make answer spans predicted by zero - shot transfer violate syntacti... | [
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
23,
24,
25
],
"text": "machine reading comprehension",
"tokens": [
"machine",
"reading",
"comprehension"
]
... | [
"multilingual",
"pre",
"-",
"trained",
"models",
"are",
"able",
"to",
"zero",
"-",
"shot",
"transfer",
"knowledge",
"from",
"rich",
"-",
"resource",
"to",
"low",
"-",
"resource",
"languages",
"in",
"machine",
"reading",
"comprehension",
"(",
"mrc",
")",
".",... |
ACL | Incorporating External Knowledge through Pre-training for Natural Language to Code Generation | Open-domain code generation aims to generate code in a general-purpose programming language (such as Python) from natural language (NL) intents. Motivated by the intuition that developers usually retrieve resources on the web when writing code, we explore the effectiveness of incorporating two varieties of external knowledge into NL-to-code generation: automatically mined NL-code pairs from the online programming QA forum StackOverflow and programming language API documentation. Our evaluations show that combining the two sources with data augmentation and retrieval-based data re-sampling improves the current state-of-the-art by up to 2.2% absolute BLEU score on the code generation testbed CoNaLa. The code and resources are available at https://github.com/neulab/external-knowledge-codegen. | d22c6e7f19179ee2257f335c39f80d91 | 2,020 | [
"open - domain code generation aims to generate code in a general - purpose programming language ( such as python ) from natural language ( nl ) intents .",
"motivated by the intuition that developers usually retrieve resources on the web when writing code , we explore the effectiveness of incorporating two varie... | [
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
0,
1,
2,
3,
4
],
"text": "open - domain code generation",
"tokens": [
"open",
"-",
"domain",
... | [
"open",
"-",
"domain",
"code",
"generation",
"aims",
"to",
"generate",
"code",
"in",
"a",
"general",
"-",
"purpose",
"programming",
"language",
"(",
"such",
"as",
"python",
")",
"from",
"natural",
"language",
"(",
"nl",
")",
"intents",
".",
"motivated",
"b... |
ACL | SENT: Sentence-level Distant Relation Extraction via Negative Training | Distant supervision for relation extraction provides uniform bag labels for each sentence inside the bag, while accurate sentence labels are important for downstream applications that need the exact relation type. Directly using bag labels for sentence-level training will introduce much noise, thus severely degrading performance. In this work, we propose the use of negative training (NT), in which a model is trained using complementary labels regarding that “the instance does not belong to these complementary labels”. Since the probability of selecting a true label as a complementary label is low, NT provides less noisy information. Furthermore, the model trained with NT is able to separate the noisy data from the training data. Based on NT, we propose a sentence-level framework, SENT, for distant relation extraction. SENT not only filters the noisy data to construct a cleaner dataset, but also performs a re-labeling process to transform the noisy data into useful training data, thus further benefiting the model’s performance. Experimental results show the significant improvement of the proposed method over previous methods on sentence-level evaluation and de-noise effect. | e77fa8cf76fa6eba234e3e3667542bf1 | 2,021 | [
"distant supervision for relation extraction provides uniform bag labels for each sentence inside the bag , while accurate sentence labels are important for downstream applications that need the exact relation type .",
"directly using bag labels for sentence - level training will introduce much noise , thus sever... | [
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
3,
4
],
"text": "relation extraction",
"tokens": [
"relation",
"extraction"
]
},
{
"argument_type": "Subject"... | [
"distant",
"supervision",
"for",
"relation",
"extraction",
"provides",
"uniform",
"bag",
"labels",
"for",
"each",
"sentence",
"inside",
"the",
"bag",
",",
"while",
"accurate",
"sentence",
"labels",
"are",
"important",
"for",
"downstream",
"applications",
"that",
"... |
ACL | RNSum: A Large-Scale Dataset for Automatic Release Note Generation via Commit Logs Summarization | A release note is a technical document that describes the latest changes to a software product and is crucial in open source software development. However, it still remains challenging to generate release notes automatically. In this paper, we present a new dataset called RNSum, which contains approximately 82,000 English release notes and the associated commit messages derived from the online repositories in GitHub. Then, we propose classwise extractive-then-abstractive/abstractive summarization approaches to this task, which can employ a modern transformer-based seq2seq network like BART and can be applied to various repositories without specific constraints. The experimental results on the RNSum dataset show that the proposed methods can generate less noisy release notes at higher coverage than the baselines. We also observe that there is a significant gap in the coverage of essential information when compared to human references. Our dataset and the code are publicly available. | 05a70e78817bc9f371ca7f17a079ac85 | 2,022 | [
"a release note is a technical document that describes the latest changes to a software product and is crucial in open source software development .",
"however , it still remains challenging to generate release notes automatically .",
"in this paper , we present a new dataset called rnsum , which contains appro... | [
{
"arguments": [
{
"argument_type": "Proposer",
"nugget_type": "OG",
"offsets": [
41
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "DST",
"offsets": [
... | [
"a",
"release",
"note",
"is",
"a",
"technical",
"document",
"that",
"describes",
"the",
"latest",
"changes",
"to",
"a",
"software",
"product",
"and",
"is",
"crucial",
"in",
"open",
"source",
"software",
"development",
".",
"however",
",",
"it",
"still",
"rem... |
ACL | Biomedical Entity Representations with Synonym Marginalization | Biomedical named entities often play important roles in many biomedical text mining tools. However, due to the incompleteness of provided synonyms and numerous variations in their surface forms, normalization of biomedical entities is very challenging. In this paper, we focus on learning representations of biomedical entities solely based on the synonyms of entities. To learn from the incomplete synonyms, we use a model-based candidate selection and maximize the marginal likelihood of the synonyms present in top candidates. Our model-based candidates are iteratively updated to contain more difficult negative samples as our model evolves. In this way, we avoid the explicit pre-selection of negative samples from more than 400K candidates. On four biomedical entity normalization datasets having three different entity types (disease, chemical, adverse reaction), our model BioSyn consistently outperforms previous state-of-the-art models almost reaching the upper bound on each dataset. | 0f5670200ebc7deed3e4e03433d02d4d | 2,020 | [
"biomedical named entities often play important roles in many biomedical text mining tools .",
"however , due to the incompleteness of provided synonyms and numerous variations in their surface forms , normalization of biomedical entities is very challenging .",
"in this paper , we focus on learning representat... | [
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
0,
1,
2
],
"text": "biomedical named entities",
"tokens": [
"biomedical",
"named",
"entities"
]
},
... | [
"biomedical",
"named",
"entities",
"often",
"play",
"important",
"roles",
"in",
"many",
"biomedical",
"text",
"mining",
"tools",
".",
"however",
",",
"due",
"to",
"the",
"incompleteness",
"of",
"provided",
"synonyms",
"and",
"numerous",
"variations",
"in",
"thei... |
ACL | CDRNN: Discovering Complex Dynamics in Human Language Processing | The human mind is a dynamical system, yet many analysis techniques used to study it are limited in their ability to capture the complex dynamics that may characterize mental processes. This study proposes the continuous-time deconvolutional regressive neural network (CDRNN), a deep neural extension of continuous-time deconvolutional regression (Shain & Schuler, 2021) that jointly captures time-varying, non-linear, and delayed influences of predictors (e.g. word surprisal) on the response (e.g. reading time). Despite this flexibility, CDRNN is interpretable and able to illuminate patterns in human cognition that are otherwise difficult to study. Behavioral and fMRI experiments reveal detailed and plausible estimates of human language processing dynamics that generalize better than CDR and other baselines, supporting a potential role for CDRNN in studying human language processing. | db3c18ed9ebd443d810ff1b65ebcdddb | 2,021 | [
"the human mind is a dynamical system , yet many analysis techniques used to study it are limited in their ability to capture the complex dynamics that may characterize mental processes .",
"this study proposes the continuous - time deconvolutional regressive neural network ( cdrnn ) , a deep neural extension of ... | [
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
1,
2
],
"text": "human mind",
"tokens": [
"human",
"mind"
]
}
],
"event_type": "ITT",
"trigger": {
"offse... | [
"the",
"human",
"mind",
"is",
"a",
"dynamical",
"system",
",",
"yet",
"many",
"analysis",
"techniques",
"used",
"to",
"study",
"it",
"are",
"limited",
"in",
"their",
"ability",
"to",
"capture",
"the",
"complex",
"dynamics",
"that",
"may",
"characterize",
"me... |
ACL | PairRE: Knowledge Graph Embeddings via Paired Relation Vectors | Distance based knowledge graph embedding methods show promising results on link prediction task, on which two topics have been widely studied: one is the ability to handle complex relations, such as N-to-1, 1-to-N and N-to-N, the other is to encode various relation patterns, such as symmetry/antisymmetry. However, the existing methods fail to solve these two problems at the same time, which leads to unsatisfactory results. To mitigate this problem, we propose PairRE, a model with paired vectors for each relation representation. The paired vectors enable an adaptive adjustment of the margin in loss function to fit for different complex relations. Besides, PairRE is capable of encoding three important relation patterns, symmetry/antisymmetry, inverse and composition. Given simple constraints on relation representations, PairRE can encode subrelation further. Experiments on link prediction benchmarks demonstrate the proposed key capabilities of PairRE. Moreover, we set a new state-of-the-art on two knowledge graph datasets of the challenging Open Graph Benchmark. | b273737384966b0b68fb25d3378c94d0 | 2,021 | [
"distance based knowledge graph embedding methods show promising results on link prediction task , on which two topics have been widely studied : one is the ability to handle complex relations , such as n - to - 1 , 1 - to - n and n - to - n , the other is to encode various relation patterns , such as symmetry / an... | [
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
0,
1,
2,
3,
4,
5
],
"text": "distance based knowledge graph embedding methods",
"tokens": [
"distance",
... | [
"distance",
"based",
"knowledge",
"graph",
"embedding",
"methods",
"show",
"promising",
"results",
"on",
"link",
"prediction",
"task",
",",
"on",
"which",
"two",
"topics",
"have",
"been",
"widely",
"studied",
":",
"one",
"is",
"the",
"ability",
"to",
"handle",... |
ACL | Pre-Learning Environment Representations for Data-Efficient Neural Instruction Following | We consider the problem of learning to map from natural language instructions to state transitions (actions) in a data-efficient manner. Our method takes inspiration from the idea that it should be easier to ground language to concepts that have already been formed through pre-linguistic observation. We augment a baseline instruction-following learner with an initial environment-learning phase that uses observations of language-free state transitions to induce a suitable latent representation of actions before processing the instruction-following training data. We show that mapping to pre-learned representations substantially improves performance over systems whose representations are learned from limited instructional data alone. | aa9188977b86040437676923fd7875a6 | 2,019 | [
"we consider the problem of learning to map from natural language instructions to state transitions ( actions ) in a data - efficient manner .",
"our method takes inspiration from the idea that it should be easier to ground language to concepts that have already been formed through pre - linguistic observation ."... | [
{
"arguments": [
{
"argument_type": "BaseComponent",
"nugget_type": "APP",
"offsets": [
56,
57,
58,
59,
60
],
"text": "baseline instruction - following learner",
"tokens": [
"baseline",
"i... | [
"we",
"consider",
"the",
"problem",
"of",
"learning",
"to",
"map",
"from",
"natural",
"language",
"instructions",
"to",
"state",
"transitions",
"(",
"actions",
")",
"in",
"a",
"data",
"-",
"efficient",
"manner",
".",
"our",
"method",
"takes",
"inspiration",
... |
ACL | GLUECoS: An Evaluation Benchmark for Code-Switched NLP | Code-switching is the use of more than one language in the same conversation or utterance. Recently, multilingual contextual embedding models, trained on multiple monolingual corpora, have shown promising results on cross-lingual and multilingual tasks. We present an evaluation benchmark, GLUECoS, for code-switched languages, that spans several NLP tasks in English-Hindi and English-Spanish. Specifically, our evaluation benchmark includes Language Identification from text, POS tagging, Named Entity Recognition, Sentiment Analysis, Question Answering and a new task for code-switching, Natural Language Inference. We present results on all these tasks using cross-lingual word embedding models and multilingual models. In addition, we fine-tune multilingual models on artificially generated code-switched data. Although multilingual models perform significantly better than cross-lingual models, our results show that in most tasks, across both language pairs, multilingual models fine-tuned on code-switched data perform best, showing that multilingual models can be further optimized for code-switching tasks. | 37d3be281a502589a62d96ad295d2a9c | 2,020 | [
"code - switching is the use of more than one language in the same conversation or utterance .",
"recently , multilingual contextual embedding models , trained on multiple monolingual corpora , have shown promising results on cross - lingual and multilingual tasks .",
"we present an evaluation benchmark , gluec... | [
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
0,
1,
2
],
"text": "code - switching",
"tokens": [
"code",
"-",
"switching"
]
}
],
"event_type"... | [
"code",
"-",
"switching",
"is",
"the",
"use",
"of",
"more",
"than",
"one",
"language",
"in",
"the",
"same",
"conversation",
"or",
"utterance",
".",
"recently",
",",
"multilingual",
"contextual",
"embedding",
"models",
",",
"trained",
"on",
"multiple",
"monolin... |
ACL | The AI Doctor Is In: A Survey of Task-Oriented Dialogue Systems for Healthcare Applications | Task-oriented dialogue systems are increasingly prevalent in healthcare settings, and have been characterized by a diverse range of architectures and objectives. Although these systems have been surveyed in the medical community from a non-technical perspective, a systematic review from a rigorous computational perspective has to date remained noticeably absent. As a result, many important implementation details of healthcare-oriented dialogue systems remain limited or underspecified, slowing the pace of innovation in this area. To fill this gap, we investigated an initial pool of 4070 papers from well-known computer science, natural language processing, and artificial intelligence venues, identifying 70 papers discussing the system-level implementation of task-oriented dialogue systems for healthcare applications. We conducted a comprehensive technical review of these papers, and present our key findings including identified gaps and corresponding recommendations. | e80b6406d11e5a6e254a618fb7a3a6cf | 2,022 | [
"task - oriented dialogue systems are increasingly prevalent in healthcare settings , and have been characterized by a diverse range of architectures and objectives .",
"although these systems have been surveyed in the medical community from a non - technical perspective , a systematic review from a rigorous comp... | [
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "APP",
"offsets": [
0,
1,
2,
3,
4
],
"text": "task - oriented dialogue systems",
"tokens": [
"task",
"-",
"oriented",... | [
"task",
"-",
"oriented",
"dialogue",
"systems",
"are",
"increasingly",
"prevalent",
"in",
"healthcare",
"settings",
",",
"and",
"have",
"been",
"characterized",
"by",
"a",
"diverse",
"range",
"of",
"architectures",
"and",
"objectives",
".",
"although",
"these",
... |
ACL | Multi-Task Networks with Universe, Group, and Task Feature Learning | We present methods for multi-task learning that take advantage of natural groupings of related tasks. Task groups may be defined along known properties of the tasks, such as task domain or language. Such task groups represent supervised information at the inter-task level and can be encoded into the model. We investigate two variants of neural network architectures that accomplish this, learning different feature spaces at the levels of individual tasks, task groups, as well as the universe of all tasks: (1) parallel architectures encode each input simultaneously into feature spaces at different levels; (2) serial architectures encode each input successively into feature spaces at different levels in the task hierarchy. We demonstrate the methods on natural language understanding (NLU) tasks, where a grouping of tasks into different task domains leads to improved performance on ATIS, Snips, and a large in-house dataset. | 6716fce2fef8d1538be70fa2c1cfd088 | 2,019 | [
"we present methods for multi - task learning that take advantage of natural groupings of related tasks .",
"task groups may be defined along known properties of the tasks , such as task domain or language .",
"such task groups represent supervised information at the inter - task level and can be encoded into t... | [
{
"arguments": [
{
"argument_type": "Proposer",
"nugget_type": "OG",
"offsets": [
0
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "APP",
"offsets": [
... | [
"we",
"present",
"methods",
"for",
"multi",
"-",
"task",
"learning",
"that",
"take",
"advantage",
"of",
"natural",
"groupings",
"of",
"related",
"tasks",
".",
"task",
"groups",
"may",
"be",
"defined",
"along",
"known",
"properties",
"of",
"the",
"tasks",
","... |
ACL | Multimodal Transformer Networks for End-to-End Video-Grounded Dialogue Systems | Developing Video-Grounded Dialogue Systems (VGDS), where a dialogue is conducted based on visual and audio aspects of a given video, is significantly more challenging than traditional image or text-grounded dialogue systems because (1) feature space of videos span across multiple picture frames, making it difficult to obtain semantic information; and (2) a dialogue agent must perceive and process information from different modalities (audio, video, caption, etc.) to obtain a comprehensive understanding. Most existing work is based on RNNs and sequence-to-sequence architectures, which are not very effective for capturing complex long-term dependencies (like in videos). To overcome this, we propose Multimodal Transformer Networks (MTN) to encode videos and incorporate information from different modalities. We also propose query-aware attention through an auto-encoder to extract query-aware features from non-text modalities. We develop a training procedure to simulate token-level decoding to improve the quality of generated responses during inference. We get state of the art performance on Dialogue System Technology Challenge 7 (DSTC7). Our model also generalizes to another multimodal visual-grounded dialogue task, and obtains promising performance. | e870b5de7c7a7c6fe03618294b3d4de3 | 2,019 | [
"developing video - grounded dialogue systems ( vgds ) , where a dialogue is conducted based on visual and audio aspects of a given video , is significantly more challenging than traditional image or text - grounded dialogue systems because ( 1 ) feature space of videos span across multiple picture frames , making ... | [
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "APP",
"offsets": [
1,
2,
3,
4,
5
],
"text": "video - grounded dialogue systems",
"tokens": [
"video",
"-",
"grounded... | [
"developing",
"video",
"-",
"grounded",
"dialogue",
"systems",
"(",
"vgds",
")",
",",
"where",
"a",
"dialogue",
"is",
"conducted",
"based",
"on",
"visual",
"and",
"audio",
"aspects",
"of",
"a",
"given",
"video",
",",
"is",
"significantly",
"more",
"challengi... |
ACL | On the Sensitivity and Stability of Model Interpretations in NLP | Recent years have witnessed the emergence of a variety of post-hoc interpretations that aim to uncover how natural language processing (NLP) models make predictions. Despite the surge of new interpretation methods, it remains an open problem how to define and quantitatively measure the faithfulness of interpretations, i.e., to what extent interpretations reflect the reasoning process by a model. We propose two new criteria, sensitivity and stability, that provide complementary notions of faithfulness to the existed removal-based criteria. Our results show that the conclusion for how faithful interpretations are could vary substantially based on different notions. Motivated by the desiderata of sensitivity and stability, we introduce a new class of interpretation methods that adopt techniques from adversarial robustness. Empirical results show that our proposed methods are effective under the new criteria and overcome limitations of gradient-based methods on removal-based criteria. Besides text classification, we also apply interpretation methods and metrics to dependency parsing. Our results shed light on understanding the diverse set of interpretations. | 7264090b4308baaa09669943a9601593 | 2,022 | [
"recent years have witnessed the emergence of a variety of post - hoc interpretations that aim to uncover how natural language processing ( nlp ) models make predictions .",
"despite the surge of new interpretation methods , it remains an open problem how to define and quantitatively measure the faithfulness of i... | [
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "APP",
"offsets": [
19,
20,
21,
25
],
"text": "natural language processing ( nlp ) models",
"tokens": [
"natural",
"language",
... | [
"recent",
"years",
"have",
"witnessed",
"the",
"emergence",
"of",
"a",
"variety",
"of",
"post",
"-",
"hoc",
"interpretations",
"that",
"aim",
"to",
"uncover",
"how",
"natural",
"language",
"processing",
"(",
"nlp",
")",
"models",
"make",
"predictions",
".",
... |
ACL | DExperts: Decoding-Time Controlled Text Generation with Experts and Anti-Experts | Despite recent advances in natural language generation, it remains challenging to control attributes of generated text. We propose DExperts: Decoding-time Experts, a decoding-time method for controlled text generation that combines a pretrained language model with “expert” LMs and/or “anti-expert” LMs in a product of experts. Intuitively, under the ensemble, tokens only get high probability if they are considered likely by the experts, and unlikely by the anti-experts. We apply DExperts to language detoxification and sentiment-controlled generation, where we outperform existing controllable generation methods on both automatic and human evaluations. Moreover, because DExperts operates only on the output of the pretrained LM, it is effective with (anti-)experts of smaller size, including when operating on GPT-3. Our work highlights the promise of tuning small LMs on text with (un)desirable attributes for efficient decoding-time steering. | a09e451530ff30a625bd90f1b074cd21 | 2,021 | [
"despite recent advances in natural language generation , it remains challenging to control attributes of generated text .",
"we propose dexperts : decoding - time experts , a decoding - time method for controlled text generation that combines a pretrained language model with “ expert ” lms and / or “ anti - expe... | [
{
"arguments": [
{
"argument_type": "Proposer",
"nugget_type": "OG",
"offsets": [
18
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "APP",
"offsets": [
... | [
"despite",
"recent",
"advances",
"in",
"natural",
"language",
"generation",
",",
"it",
"remains",
"challenging",
"to",
"control",
"attributes",
"of",
"generated",
"text",
".",
"we",
"propose",
"dexperts",
":",
"decoding",
"-",
"time",
"experts",
",",
"a",
"dec... |
ACL | Training Neural Machine Translation to Apply Terminology Constraints | This paper proposes a novel method to inject custom terminology into neural machine translation at run time. Previous works have mainly proposed modifications to the decoding algorithm in order to constrain the output to include run-time-provided target terms. While being effective, these constrained decoding methods add, however, significant computational overhead to the inference step, and, as we show in this paper, can be brittle when tested in realistic conditions. In this paper we approach the problem by training a neural MT system to learn how to use custom terminology when provided with the input. Comparative experiments show that our method is not only more effective than a state-of-the-art implementation of constrained decoding, but is also as fast as constraint-free decoding. | c53df484d48aa0772b4ad558040c4eff | 2,019 | [
"this paper proposes a novel method to inject custom terminology into neural machine translation at run time .",
"previous works have mainly proposed modifications to the decoding algorithm in order to constrain the output to include run - time - provided target terms .",
"while being effective , these constrai... | [
{
"arguments": [
{
"argument_type": "Content",
"nugget_type": "APP",
"offsets": [
4,
5
],
"text": "novel method",
"tokens": [
"novel",
"method"
]
}
],
"event_type": "PRP",
"trigger": {
"... | [
"this",
"paper",
"proposes",
"a",
"novel",
"method",
"to",
"inject",
"custom",
"terminology",
"into",
"neural",
"machine",
"translation",
"at",
"run",
"time",
".",
"previous",
"works",
"have",
"mainly",
"proposed",
"modifications",
"to",
"the",
"decoding",
"algo... |
ACL | Textbook Question Answering with Multi-modal Context Graph Understanding and Self-supervised Open-set Comprehension | In this work, we introduce a novel algorithm for solving the textbook question answering (TQA) task which describes more realistic QA problems compared to other recent tasks. We mainly focus on two related issues with analysis of the TQA dataset. First, solving the TQA problems requires to comprehend multi-modal contexts in complicated input data. To tackle this issue of extracting knowledge features from long text lessons and merging them with visual features, we establish a context graph from texts and images, and propose a new module f-GCN based on graph convolutional networks (GCN). Second, scientific terms are not spread over the chapters and subjects are split in the TQA dataset. To overcome this so called ‘out-of-domain’ issue, before learning QA problems, we introduce a novel self-supervised open-set learning process without any annotations. The experimental results show that our model significantly outperforms prior state-of-the-art methods. Moreover, ablation studies validate that both methods of incorporating f-GCN for extracting knowledge from multi-modal contexts and our newly proposed self-supervised learning process are effective for TQA problems. | 344964cbf5b4567cfec85c76b182d954 | 2,019 | [
"in this work , we introduce a novel algorithm for solving the textbook question answering ( tqa ) task which describes more realistic qa problems compared to other recent tasks .",
"we mainly focus on two related issues with analysis of the tqa dataset .",
"first , solving the tqa problems requires to comprehe... | [
{
"arguments": [
{
"argument_type": "Proposer",
"nugget_type": "OG",
"offsets": [
4
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
... | [
"in",
"this",
"work",
",",
"we",
"introduce",
"a",
"novel",
"algorithm",
"for",
"solving",
"the",
"textbook",
"question",
"answering",
"(",
"tqa",
")",
"task",
"which",
"describes",
"more",
"realistic",
"qa",
"problems",
"compared",
"to",
"other",
"recent",
... |
ACL | Eliciting Knowledge from Experts: Automatic Transcript Parsing for Cognitive Task Analysis | Cognitive task analysis (CTA) is a type of analysis in applied psychology aimed at eliciting and representing the knowledge and thought processes of domain experts. In CTA, often heavy human labor is involved to parse the interview transcript into structured knowledge (e.g., flowchart for different actions). To reduce human efforts and scale the process, automated CTA transcript parsing is desirable. However, this task has unique challenges as (1) it requires the understanding of long-range context information in conversational text; and (2) the amount of labeled data is limited and indirect—i.e., context-aware, noisy, and low-resource. In this paper, we propose a weakly-supervised information extraction framework for automated CTA transcript parsing. We partition the parsing process into a sequence labeling task and a text span-pair relation extraction task, with distant supervision from human-curated protocol files. To model long-range context information for extracting sentence relations, neighbor sentences are involved as a part of input. Different types of models for capturing context dependency are then applied. We manually annotate real-world CTA transcripts to facilitate the evaluation of the parsing tasks. | c75dc7e6e6563c59f8064bc7bf0fe23d | 2,019 | [
"cognitive task analysis ( cta ) is a type of analysis in applied psychology aimed at eliciting and representing the knowledge and thought processes of domain experts .",
"in cta , often heavy human labor is involved to parse the interview transcript into structured knowledge ( e . g . , flowchart for different a... | [
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "APP",
"offsets": [
0,
1,
2
],
"text": "cognitive task analysis",
"tokens": [
"cognitive",
"task",
"analysis"
]
}
],
... | [
"cognitive",
"task",
"analysis",
"(",
"cta",
")",
"is",
"a",
"type",
"of",
"analysis",
"in",
"applied",
"psychology",
"aimed",
"at",
"eliciting",
"and",
"representing",
"the",
"knowledge",
"and",
"thought",
"processes",
"of",
"domain",
"experts",
".",
"in",
... |
ACL | Understanding the Properties of Minimum Bayes Risk Decoding in Neural Machine Translation | Neural Machine Translation (NMT) currently exhibits biases such as producing translations that are too short and overgenerating frequent words, and shows poor robustness to copy noise in training data or domain shift. Recent work has tied these shortcomings to beam search – the de facto standard inference algorithm in NMT – and Eikema & Aziz (2020) propose to use Minimum Bayes Risk (MBR) decoding on unbiased samples instead. In this paper, we empirically investigate the properties of MBR decoding on a number of previously reported biases and failure cases of beam search. We find that MBR still exhibits a length and token frequency bias, owing to the MT metrics used as utility functions, but that MBR also increases robustness against copy noise in the training data and domain shift. | 0a5eed886743c793eaaea9afcbbd9844 | 2,021 | [
"neural machine translation ( nmt ) currently exhibits biases such as producing translations that are too short and overgenerating frequent words , and shows poor robustness to copy noise in training data or domain shift .",
"recent work has tied these shortcomings to beam search – the de facto standard inference... | [
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
0,
1,
2
],
"text": "neural machine translation",
"tokens": [
"neural",
"machine",
"translation"
]
}
... | [
"neural",
"machine",
"translation",
"(",
"nmt",
")",
"currently",
"exhibits",
"biases",
"such",
"as",
"producing",
"translations",
"that",
"are",
"too",
"short",
"and",
"overgenerating",
"frequent",
"words",
",",
"and",
"shows",
"poor",
"robustness",
"to",
"copy... |
ACL | Understanding Iterative Revision from Human-Written Text | Writing is, by nature, a strategic, adaptive, and, more importantly, an iterative process. A crucial part of writing is editing and revising the text. Previous works on text revision have focused on defining edit intention taxonomies within a single domain or developing computational models with a single level of edit granularity, such as sentence-level edits, which differ from human’s revision cycles. This work describes IteraTeR: the first large-scale, multi-domain, edit-intention annotated corpus of iteratively revised text. In particular, IteraTeR is collected based on a new framework to comprehensively model the iterative text revisions that generalizes to a variety of domains, edit intentions, revision depths, and granularities. When we incorporate our annotated edit intentions, both generative and action-based text revision models significantly improve automatic evaluations. Through our work, we better understand the text revision process, making vital connections between edit intentions and writing quality, enabling the creation of diverse corpora to support computational modeling of iterative text revisions. | cbb7607cf109a392d2c5bf1c31141529 | 2,022 | [
"writing is , by nature , a strategic , adaptive , and , more importantly , an iterative process .",
"a crucial part of writing is editing and revising the text .",
"previous works on text revision have focused on defining edit intention taxonomies within a single domain or developing computational models with ... | [
{
"arguments": [
{
"argument_type": "Content",
"nugget_type": "DST",
"offsets": [
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
... | [
"writing",
"is",
",",
"by",
"nature",
",",
"a",
"strategic",
",",
"adaptive",
",",
"and",
",",
"more",
"importantly",
",",
"an",
"iterative",
"process",
".",
"a",
"crucial",
"part",
"of",
"writing",
"is",
"editing",
"and",
"revising",
"the",
"text",
".",... |
ACL | The Summary Loop: Learning to Write Abstractive Summaries Without Examples | This work presents a new approach to unsupervised abstractive summarization based on maximizing a combination of coverage and fluency for a given length constraint. It introduces a novel method that encourages the inclusion of key terms from the original document into the summary: key terms are masked out of the original document and must be filled in by a coverage model using the current generated summary. A novel unsupervised training procedure leverages this coverage model along with a fluency model to generate and score summaries. When tested on popular news summarization datasets, the method outperforms previous unsupervised methods by more than 2 R-1 points, and approaches results of competitive supervised methods. Our model attains higher levels of abstraction with copied passages roughly two times shorter than prior work, and learns to compress and merge sentences without supervision. | 355f8f0a2212017f8814e309a3849da8 | 2,020 | [
"this work presents a new approach to unsupervised abstractive summarization based on maximizing a combination of coverage and fluency for a given length constraint .",
"it introduces a novel method that encourages the inclusion of key terms from the original document into the summary : key terms are masked out o... | [
{
"arguments": [
{
"argument_type": "Content",
"nugget_type": "APP",
"offsets": [
5
],
"text": "approach",
"tokens": [
"approach"
]
},
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets... | [
"this",
"work",
"presents",
"a",
"new",
"approach",
"to",
"unsupervised",
"abstractive",
"summarization",
"based",
"on",
"maximizing",
"a",
"combination",
"of",
"coverage",
"and",
"fluency",
"for",
"a",
"given",
"length",
"constraint",
".",
"it",
"introduces",
"... |
ACL | Multilingual Generative Language Models for Zero-Shot Cross-Lingual Event Argument Extraction | We present a study on leveraging multilingual pre-trained generative language models for zero-shot cross-lingual event argument extraction (EAE). By formulating EAE as a language generation task, our method effectively encodes event structures and captures the dependencies between arguments. We design language-agnostic templates to represent the event argument structures, which are compatible with any language, hence facilitating the cross-lingual transfer. Our proposed model finetunes multilingual pre-trained generative language models to generate sentences that fill in the language-agnostic template with arguments extracted from the input passage. The model is trained on source languages and is then directly applied to target languages for event argument extraction. Experiments demonstrate that the proposed model outperforms the current state-of-the-art models on zero-shot cross-lingual EAE. Comprehensive studies and error analyses are presented to better understand the advantages and the current limitations of using generative language models for zero-shot cross-lingual transfer EAE. | 8d7d544c009a3b7182b7b589553fe9fd | 2,022 | [
"we present a study on leveraging multilingual pre - trained generative language models for zero - shot cross - lingual event argument extraction ( eae ) .",
"by formulating eae as a language generation task , our method effectively encodes event structures and captures the dependencies between arguments .",
"w... | [
{
"arguments": [
{
"argument_type": "Researcher",
"nugget_type": "OG",
"offsets": [
49
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "MOD",
"offsets": [
... | [
"we",
"present",
"a",
"study",
"on",
"leveraging",
"multilingual",
"pre",
"-",
"trained",
"generative",
"language",
"models",
"for",
"zero",
"-",
"shot",
"cross",
"-",
"lingual",
"event",
"argument",
"extraction",
"(",
"eae",
")",
".",
"by",
"formulating",
"... |
ACL | Nibbling at the Hard Core of Word Sense Disambiguation | With state-of-the-art systems having finally attained estimated human performance, Word Sense Disambiguation (WSD) has now joined the array of Natural Language Processing tasks that have seemingly been solved, thanks to the vast amounts of knowledge encoded into Transformer-based pre-trained language models. And yet, if we look below the surface of raw figures, it is easy to realize that current approaches still make trivial mistakes that a human would never make. In this work, we provide evidence showing why the F1 score metric should not simply be taken at face value and present an exhaustive analysis of the errors that seven of the most representative state-of-the-art systems for English all-words WSD make on traditional evaluation benchmarks.In addition, we produce and release a collection of test sets featuring (a) an amended version of the standard evaluation benchmark that fixes its lexical and semantic inaccuracies, (b) 42D, a challenge set devised to assess the resilience of systems with respect to least frequent word senses and senses not seen at training time, and (c) hardEN, a challenge set made up solely of instances which none of the investigated state-of-the-art systems can solve. We make all of the test sets and model predictions available to the research community at https://github.com/SapienzaNLP/wsd-hard-benchmark. | e083e2d20deb350ad46d0842f676a708 | 2,022 | [
"with state - of - the - art systems having finally attained estimated human performance , word sense disambiguation ( wsd ) has now joined the array of natural language processing tasks that have seemingly been solved , thanks to the vast amounts of knowledge encoded into transformer - based pre - trained language... | [
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
16,
17,
18
],
"text": "word sense disambiguation",
"tokens": [
"word",
"sense",
"disambiguation"
]
},
... | [
"with",
"state",
"-",
"of",
"-",
"the",
"-",
"art",
"systems",
"having",
"finally",
"attained",
"estimated",
"human",
"performance",
",",
"word",
"sense",
"disambiguation",
"(",
"wsd",
")",
"has",
"now",
"joined",
"the",
"array",
"of",
"natural",
"language",... |
ACL | SaFeRDialogues: Taking Feedback Gracefully after Conversational Safety Failures | Current open-domain conversational models can easily be made to talk in inadequate ways. Online learning from conversational feedback given by the conversation partner is a promising avenue for a model to improve and adapt, so as to generate fewer of these safety failures. However, current state-of-the-art models tend to react to feedback with defensive or oblivious responses. This makes for an unpleasant experience and may discourage conversation partners from giving feedback in the future. This work proposes SaFeRDialogues, a task and dataset of graceful responses to conversational feedback about safety failures.We collect a dataset of 8k dialogues demonstrating safety failures, feedback signaling them, and a response acknowledging the feedback. We show how fine-tuning on this dataset results in conversations that human raters deem considerably more likely to lead to a civil conversation, without sacrificing engagingness or general conversational ability. | 4bc93b9ede927ee608e92f363c7fb51e | 2,022 | [
"current open - domain conversational models can easily be made to talk in inadequate ways .",
"online learning from conversational feedback given by the conversation partner is a promising avenue for a model to improve and adapt , so as to generate fewer of these safety failures .",
"however , current state - ... | [
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "APP",
"offsets": [
1,
2,
3,
4,
5
],
"text": "open - domain conversational models",
"tokens": [
"open",
"-",
"domain"... | [
"current",
"open",
"-",
"domain",
"conversational",
"models",
"can",
"easily",
"be",
"made",
"to",
"talk",
"in",
"inadequate",
"ways",
".",
"online",
"learning",
"from",
"conversational",
"feedback",
"given",
"by",
"the",
"conversation",
"partner",
"is",
"a",
... |
ACL | Compression of Generative Pre-trained Language Models via Quantization | The increasing size of generative Pre-trained Language Models (PLMs) have greatly increased the demand for model compression. Despite various methods to compress BERT or its variants, there are few attempts to compress generative PLMs, and the underlying difficulty remains unclear. In this paper, we compress generative PLMs by quantization. We find that previous quantization methods fail on generative tasks due to the homogeneous word embeddings caused by reduced capacity and the varied distribution of weights. Correspondingly, we propose a token-level contrastive distillation to learn distinguishable word embeddings, and a module-wise dynamic scaling to make quantizers adaptive to different modules. Empirical results on various tasks show that our proposed method outperforms the state-of-the-art compression methods on generative PLMs by a clear margin. With comparable performance with the full-precision models, we achieve 14.4x and 13.4x compression rate on GPT-2 and BART, respectively. | b3d6c9bbe79092669c8344c8eaf8b9a3 | 2,022 | [
"the increasing size of generative pre - trained language models ( plms ) have greatly increased the demand for model compression .",
"despite various methods to compress bert or its variants , there are few attempts to compress generative plms , and the underlying difficulty remains unclear .",
"in this paper ... | [
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
19,
20
],
"text": "model compression",
"tokens": [
"model",
"compression"
]
}
],
"event_type": "ITT",
"trigger"... | [
"the",
"increasing",
"size",
"of",
"generative",
"pre",
"-",
"trained",
"language",
"models",
"(",
"plms",
")",
"have",
"greatly",
"increased",
"the",
"demand",
"for",
"model",
"compression",
".",
"despite",
"various",
"methods",
"to",
"compress",
"bert",
"or"... |
ACL | Do Transformers Need Deep Long-Range Memory? | Deep attention models have advanced the modelling of sequential data across many domains. For language modelling in particular, the Transformer-XL — a Transformer augmented with a long-range memory of past activations — has been shown to be state-of-the-art across a variety of well-studied benchmarks. The Transformer-XL incorporates a long-range memory at every layer of the network, which renders its state to be thousands of times larger than RNN predecessors. However it is unclear whether this is necessary. We perform a set of interventions to show that comparable performance can be obtained with 6X fewer long range memories and better performance can be obtained by limiting the range of attention in lower layers of the network. | e03269b7b1a16abb1629199a9110b90e | 2,020 | [
"deep attention models have advanced the modelling of sequential data across many domains .",
"for language modelling in particular , the transformer - xl — a transformer augmented with a long - range memory of past activations — has been shown to be state - of - the - art across a variety of well - studied bench... | [
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "APP",
"offsets": [
0,
1,
2
],
"text": "deep attention models",
"tokens": [
"deep",
"attention",
"models"
]
}
],
"e... | [
"deep",
"attention",
"models",
"have",
"advanced",
"the",
"modelling",
"of",
"sequential",
"data",
"across",
"many",
"domains",
".",
"for",
"language",
"modelling",
"in",
"particular",
",",
"the",
"transformer",
"-",
"xl",
"—",
"a",
"transformer",
"augmented",
... |
ACL | Automated Crossword Solving | We present the Berkeley Crossword Solver, a state-of-the-art approach for automatically solving crossword puzzles. Our system works by generating answer candidates for each crossword clue using neural question answering models and then combines loopy belief propagation with local search to find full puzzle solutions. Compared to existing approaches, our system improves exact puzzle accuracy from 57% to 82% on crosswords from The New York Times and obtains 99.9% letter accuracy on themeless puzzles. Our system also won first place at the top human crossword tournament, which marks the first time that a computer program has surpassed human performance at this event. To facilitate research on question answering and crossword solving, we analyze our system’s remaining errors and release a dataset of over six million question-answer pairs. | 0006b7d827ac773b6c89eab622928e25 | 2,022 | [
"we present the berkeley crossword solver , a state - of - the - art approach for automatically solving crossword puzzles .",
"our system works by generating answer candidates for each crossword clue using neural question answering models and then combines loopy belief propagation with local search to find full p... | [
{
"arguments": [
{
"argument_type": "Proposer",
"nugget_type": "OG",
"offsets": [
0
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "APP",
"offsets": [
... | [
"we",
"present",
"the",
"berkeley",
"crossword",
"solver",
",",
"a",
"state",
"-",
"of",
"-",
"the",
"-",
"art",
"approach",
"for",
"automatically",
"solving",
"crossword",
"puzzles",
".",
"our",
"system",
"works",
"by",
"generating",
"answer",
"candidates",
... |
ACL | Multi-Hypothesis Machine Translation Evaluation | Reliably evaluating Machine Translation (MT) through automated metrics is a long-standing problem. One of the main challenges is the fact that multiple outputs can be equally valid. Attempts to minimise this issue include metrics that relax the matching of MT output and reference strings, and the use of multiple references. The latter has been shown to significantly improve the performance of evaluation metrics. However, collecting multiple references is expensive and in practice a single reference is generally used. In this paper, we propose an alternative approach: instead of modelling linguistic variation in human references, we exploit the MT model uncertainty to generate multiple diverse translations and use these: (i) as surrogates to reference translations; (ii) to obtain a quantification of translation variability to either complement existing metric scores or (iii) replace references altogether. We show that for a number of popular evaluation metrics our variability estimates lead to substantial improvements in correlation with human judgements of quality by up to 15%. | ba5ff10f540778078e52c01a8421ce0c | 2,020 | [
"reliably evaluating machine translation ( mt ) through automated metrics is a long - standing problem .",
"one of the main challenges is the fact that multiple outputs can be equally valid .",
"attempts to minimise this issue include metrics that relax the matching of mt output and reference strings , and the ... | [
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
2,
3
],
"text": "machine translation",
"tokens": [
"machine",
"translation"
]
}
],
"event_type": "ITT",
"trigge... | [
"reliably",
"evaluating",
"machine",
"translation",
"(",
"mt",
")",
"through",
"automated",
"metrics",
"is",
"a",
"long",
"-",
"standing",
"problem",
".",
"one",
"of",
"the",
"main",
"challenges",
"is",
"the",
"fact",
"that",
"multiple",
"outputs",
"can",
"b... |
ACL | Collaborative Dialogue in Minecraft | We wish to develop interactive agents that can communicate with humans to collaboratively solve tasks in grounded scenarios. Since computer games allow us to simulate such tasks without the need for physical robots, we define a Minecraft-based collaborative building task in which one player (A, the Architect) is shown a target structure and needs to instruct the other player (B, the Builder) to build this structure. Both players interact via a chat interface. A can observe B but cannot place blocks. We present the Minecraft Dialogue Corpus, a collection of 509 conversations and game logs. As a first step towards our goal of developing fully interactive agents for this task, we consider the subtask of Architect utterance generation, and show how challenging it is. | fdc51112e9586950a9a51dce40e0ac83 | 2,019 | [
"we wish to develop interactive agents that can communicate with humans to collaboratively solve tasks in grounded scenarios .",
"since computer games allow us to simulate such tasks without the need for physical robots , we define a minecraft - based collaborative building task in which one player ( a , the arch... | [
{
"arguments": [
{
"argument_type": "Researcher",
"nugget_type": "OG",
"offsets": [
35
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "TAK",
"offsets": [
... | [
"we",
"wish",
"to",
"develop",
"interactive",
"agents",
"that",
"can",
"communicate",
"with",
"humans",
"to",
"collaboratively",
"solve",
"tasks",
"in",
"grounded",
"scenarios",
".",
"since",
"computer",
"games",
"allow",
"us",
"to",
"simulate",
"such",
"tasks",... |
ACL | Learning to Generate Task-Specific Adapters from Task Description | Pre-trained text-to-text transformers such as BART have achieved impressive performance across a range of NLP tasks. A recent study further shows that they can learn to generalize to novel tasks, by including task descriptions as part of the source sequence and training the model with (source, target) examples. At test time, these fine-tuned models can make inferences on new tasks using the new task descriptions as part of the input. However, this approach has potential limitations, as the model learns to solve individual (source, target) examples (i.e., at the instance level), instead of learning to solve tasks by taking all examples within a task as a whole (i.e., at the task level). To this end, we introduce Hypter, a framework that improves text-to-text transformer’s generalization ability to unseen tasks by training a hypernetwork to generate task-specific, light-weight adapters from task descriptions. Experiments on the ZEST dataset and a synthetic SQuAD dataset demonstrate that Hypter improves upon fine-tuning baselines. Notably, when using BART-Large as the main network, Hypter brings 11.3% comparative improvement on the ZEST dataset. | 033faed1b3d0baac059df004b483b125 | 2,021 | [
"pre - trained text - to - text transformers such as bart have achieved impressive performance across a range of nlp tasks .",
"recent study further shows that they can learn to generalize to novel tasks , by including task descriptions as part of the source sequence and training the model with ( source , target ... | [
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "MOD",
"offsets": [
0,
1,
2,
3,
4,
5,
6,
7,
8
],
"text": "pre - trained text - to - text transformers",
"to... | [
"pre",
"-",
"trained",
"text",
"-",
"to",
"-",
"text",
"transformers",
"such",
"as",
"bart",
"have",
"achieved",
"impressive",
"performance",
"across",
"a",
"range",
"of",
"nlp",
"tasks",
".",
"recent",
"study",
"further",
"shows",
"that",
"they",
"can",
"... |
ACL | Adversarial Learning of Privacy-Preserving Text Representations for De-Identification of Medical Records | De-identification is the task of detecting protected health information (PHI) in medical text. It is a critical step in sanitizing electronic health records (EHR) to be shared for research. Automatic de-identification classifiers can significantly speed up the sanitization process. However, obtaining a large and diverse dataset to train such a classifier that works well across many types of medical text poses a challenge as privacy laws prohibit the sharing of raw medical records. We introduce a method to create privacy-preserving shareable representations of medical text (i.e. they contain no PHI) that does not require expensive manual pseudonymization. These representations can be shared between organizations to create unified datasets for training de-identification models. Our representation allows training a simple LSTM-CRF de-identification model to an F1 score of 97.4%, which is comparable to a strong baseline that exposes private information in its representation. A robust, widely available de-identification classifier based on our representation could potentially enable studies for which de-identification would otherwise be too costly. | c15c59a442e48a0814cb82247487d8ec | 2,019 | [
"de - identification is the task of detecting protected health information ( phi ) in medical text .",
"it is a critical step in sanitizing electronic health records ( ehr ) to be shared for research .",
"automatic de - identification classifiers can significantly speed up the sanitization process .",
"howeve... | [
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
0,
1,
2
],
"text": "de - identification",
"tokens": [
"de",
"-",
"identification"
]
},
{
... | [
"de",
"-",
"identification",
"is",
"the",
"task",
"of",
"detecting",
"protected",
"health",
"information",
"(",
"phi",
")",
"in",
"medical",
"text",
".",
"it",
"is",
"a",
"critical",
"step",
"in",
"sanitizing",
"electronic",
"health",
"records",
"(",
"ehr",
... |
ACL | Rewarding Semantic Similarity under Optimized Alignments for AMR-to-Text Generation | A common way to combat exposure bias is by applying scores from evaluation metrics as rewards in reinforcement learning (RL). Metrics leveraging contextualized embeddings appear more flexible than their n-gram matching counterparts and thus ideal as training rewards. However, metrics such as BERTScore greedily align candidate and reference tokens, which can allow system outputs to receive excess credit relative to a reference. Furthermore, past approaches featuring semantic similarity rewards suffer from repetitive outputs and overfitting. We address these issues by proposing metrics that replace the greedy alignments in BERTScore with optimized ones. We compute them on a model’s trained token embeddings to prevent domain mismatch. Our model optimizing discrete alignment metrics consistently outperforms cross-entropy and BLEU reward baselines on AMR-to-text generation. In addition, we find that this approach enjoys stable training compared to a non-RL setting. | e5662418c786610da474ac724da42093 | 2,022 | [
"a common way to combat exposure bias is by applying scores from evaluation metrics as rewards in reinforcement learning ( rl ) .",
"metrics leveraging contextualized embeddings appear more flexible than their n - gram matching counterparts and thus ideal as training rewards .",
"however , metrics such as berts... | [
{
"arguments": [
{
"argument_type": "Subject",
"nugget_type": "APP",
"offsets": [
1,
2,
3,
4,
5,
6
],
"text": "common way to combat exposure bias",
"tokens": [
"common",
"way",
... | [
"a",
"common",
"way",
"to",
"combat",
"exposure",
"bias",
"is",
"by",
"applying",
"scores",
"from",
"evaluation",
"metrics",
"as",
"rewards",
"in",
"reinforcement",
"learning",
"(",
"rl",
")",
".",
"metrics",
"leveraging",
"contextualized",
"embeddings",
"appear... |
ACL | Tail-to-Tail Non-Autoregressive Sequence Prediction for Chinese Grammatical Error Correction | We investigate the problem of Chinese Grammatical Error Correction (CGEC) and present a new framework named Tail-to-Tail (TtT) non-autoregressive sequence prediction to address the deep issues hidden in CGEC. Considering that most tokens are correct and can be conveyed directly from source to target, and the error positions can be estimated and corrected based on the bidirectional context information, we employ a BERT-initialized Transformer Encoder as the backbone model to conduct information modeling and conveying. Considering that only relying on the same position substitution cannot handle the variable-length correction cases, various operations such as substitution, deletion, insertion, and local paraphrasing are required jointly. Therefore, a Conditional Random Fields (CRF) layer is stacked on the up tail to conduct non-autoregressive sequence prediction by modeling the token dependencies. Since most tokens are correct and easy to predict/convey to the target, the models may suffer from a severe class imbalance issue. To alleviate this problem, focal loss penalty strategies are integrated into the loss functions. Moreover, besides the typical fixed-length error correction datasets, we also construct a variable-length corpus to conduct experiments. Experimental results on standard datasets, especially on the variable-length datasets, demonstrate the effectiveness of TtT in terms of sentence-level Accuracy, Precision, Recall, and F1-Measure on tasks of error Detection and Correction. | 265cf2d7463d32f99851ca563c514983 | 2,021 | [
"we investigate the problem of chinese grammatical error correction ( cgec ) and present a new framework named tail - to - tail ( ttt ) non - autoregressive sequence prediction to address the deep issues hidden in cgec .",
"considering that most tokens are correct and can be conveyed directly from source to targe... | [
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
5,
6,
7,
8
],
"text": "chinese grammatical error correction",
"tokens": [
"chinese",
"grammatical",
"error"... | [
"we",
"investigate",
"the",
"problem",
"of",
"chinese",
"grammatical",
"error",
"correction",
"(",
"cgec",
")",
"and",
"present",
"a",
"new",
"framework",
"named",
"tail",
"-",
"to",
"-",
"tail",
"(",
"ttt",
")",
"non",
"-",
"autoregressive",
"sequence",
"... |
ACL | Automatic Detection of Generated Text is Easiest when Humans are Fooled | Recent advancements in neural language modelling make it possible to rapidly generate vast amounts of human-sounding text. The capabilities of humans and automatic discriminators to detect machine-generated text have been a large source of research interest, but humans and machines rely on different cues to make their decisions. Here, we perform careful benchmarking and analysis of three popular sampling-based decoding strategies—top-_k_, nucleus sampling, and untruncated random sampling—and show that improvements in decoding methods have primarily optimized for fooling humans. This comes at the expense of introducing statistical abnormalities that make detection easy for automatic systems. We also show that though both human and automatic detector performance improve with longer excerpt length, even multi-sentence excerpts can fool expert human raters over 30% of the time. Our findings reveal the importance of using both human and automatic detectors to assess the humanness of text generation systems. | 1373d264b411477bd0d301e1c98cd979 | 2,020 | [
"recent advancements in neural language modelling make it possible to rapidly generate vast amounts of human - sounding text .",
"the capabilities of humans and automatic discriminators to detect machine - generated text have been a large source of research interest , but humans and machines rely on different cue... | [
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "APP",
"offsets": [
3,
4,
5
],
"text": "neural language modelling",
"tokens": [
"neural",
"language",
"modelling"
]
}
]... | [
"recent",
"advancements",
"in",
"neural",
"language",
"modelling",
"make",
"it",
"possible",
"to",
"rapidly",
"generate",
"vast",
"amounts",
"of",
"human",
"-",
"sounding",
"text",
".",
"the",
"capabilities",
"of",
"humans",
"and",
"automatic",
"discriminators",
... |
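The record above benchmarks top-k, nucleus (top-p), and untruncated sampling. Both truncation schemes cut the next-token distribution down to a head of probable tokens and renormalize; a minimal sketch over a toy distribution (the token probabilities are invented):

```python
def top_k_filter(probs: dict[str, float], k: int) -> dict[str, float]:
    """Keep the k most probable tokens, then renormalize."""
    kept = dict(sorted(probs.items(), key=lambda kv: kv[1], reverse=True)[:k])
    z = sum(kept.values())
    return {t: p / z for t, p in kept.items()}

def nucleus_filter(probs: dict[str, float], p: float) -> dict[str, float]:
    """Keep the smallest set of top tokens whose cumulative mass reaches p,
    then renormalize (nucleus / top-p truncation)."""
    kept, total = {}, 0.0
    for tok, pr in sorted(probs.items(), key=lambda kv: kv[1], reverse=True):
        kept[tok] = pr
        total += pr
        if total >= p:
            break
    z = sum(kept.values())
    return {t: pr / z for t, pr in kept.items()}

dist = {"the": 0.5, "a": 0.3, "cat": 0.15, "zyx": 0.05}
print(top_k_filter(dist, 2))
print(nucleus_filter(dist, 0.9))
```

Zeroing out the tail in this way is what the paper identifies as the source of detectable statistical abnormalities: truncated samples never contain the low-probability tokens that human text occasionally does.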
ACL | Low-resource Deep Entity Resolution with Transfer and Active Learning | Entity resolution (ER) is the task of identifying different representations of the same real-world entities across databases. It is a key step for knowledge base creation and text mining. Recent adaptation of deep learning methods for ER mitigates the need for dataset-specific feature engineering by constructing distributed representations of entity records. While these methods achieve state-of-the-art performance over benchmark data, they require large amounts of labeled data, which are typically unavailable in realistic ER applications. In this paper, we develop a deep learning-based method that targets low-resource settings for ER through a novel combination of transfer learning and active learning. We design an architecture that allows us to learn a transferable model from a high-resource setting to a low-resource one. To further adapt to the target dataset, we incorporate active learning that carefully selects a few informative examples to fine-tune the transferred model. Empirical evaluation demonstrates that our method achieves comparable, if not better, performance compared to state-of-the-art learning-based methods while using an order of magnitude fewer labels. | 4c4e21fb350c6aff9d80f98b8ffebce1 | 2,019 | [
"entity resolution ( er ) is the task of identifying different representations of the same real - world entities across databases .",
"it is a key step for knowledge base creation and text mining .",
"recent adaptation of deep learning methods for er mitigates the need for dataset - specific feature engineering... | [
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
0,
1
],
"text": "entity resolution",
"tokens": [
"entity",
"resolution"
]
}
],
"event_type": "ITT",
"trigger": ... | [
"entity",
"resolution",
"(",
"er",
")",
"is",
"the",
"task",
"of",
"identifying",
"different",
"representations",
"of",
"the",
"same",
"real",
"-",
"world",
"entities",
"across",
"databases",
".",
"it",
"is",
"a",
"key",
"step",
"for",
"knowledge",
"base",
... |
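The record above adapts a transferred entity-resolution model by actively selecting "a few informative examples" to label. A common selection rule for this is uncertainty sampling: query the pairs whose predicted match probability is closest to 0.5. A hedged sketch (the probability table stands in for a real classifier; names are invented, and the paper's actual selection strategy may differ in detail):

```python
def select_uncertain(pairs: list[str], match_prob, budget: int) -> list[str]:
    """Pick the record pairs the current model is least sure about,
    i.e. those with match probability closest to 0.5."""
    return sorted(pairs, key=lambda p: abs(match_prob(p) - 0.5))[:budget]

# Toy transferred model: fixed match probabilities for four record pairs.
probs = {"a~b": 0.97, "a~c": 0.51, "b~c": 0.48, "a~d": 0.03}
picked = select_uncertain(list(probs), probs.get, budget=2)
print(picked)
```

Labeling only these borderline pairs and fine-tuning on them is what lets the approach match fully supervised baselines with an order of magnitude fewer labels.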
ACL | ReInceptionE: Relation-Aware Inception Network with Joint Local-Global Structural Information for Knowledge Graph Embedding | The goal of knowledge graph embedding (KGE) is to learn how to represent the low dimensional vectors for entities and relations based on the observed triples. The conventional shallow models are limited in their expressiveness. ConvE (Dettmers et al., 2018) takes advantage of CNN and improves the expressive power with parameter efficient operators by increasing the interactions between head and relation embeddings. However, there is no structural information in the embedding space of ConvE, and the performance is still limited by the number of interactions. The recent KBGAT (Nathani et al., 2019) provides another way to learn embeddings by adaptively utilizing structural information. In this paper, we combine the benefits of ConvE and KBGAT and propose a Relation-aware Inception network with joint local-global structural information for knowledge graph Embedding (ReInceptionE). Specifically, we first explore the Inception network to learn query embedding, which aims to further increase the interactions between head and relation embeddings. Then, we propose to use a relation-aware attention mechanism to enrich the query embedding with the local neighborhood and global entity information. Experimental results on both WN18RR and FB15k-237 datasets demonstrate that ReInceptionE achieves competitive performance compared with state-of-the-art methods. | b08ed0dc5bd153fd697689da44b22216 | 2,020 | [
"the goal of knowledge graph embedding ( kge ) is to learn how to represent the low dimensional vectors for entities and relations based on the observed triples .",
"the conventional shallow models are limited to their expressiveness .",
"conve ( dettmers et al . , 2018 ) takes advantage of cnn and improves the... | [
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
3,
4,
5
],
"text": "knowledge graph embedding",
"tokens": [
"knowledge",
"graph",
"embedding"
]
}
]... | [
"the",
"goal",
"of",
"knowledge",
"graph",
"embedding",
"(",
"kge",
")",
"is",
"to",
"learn",
"how",
"to",
"represent",
"the",
"low",
"dimensional",
"vectors",
"for",
"entities",
"and",
"relations",
"based",
"on",
"the",
"observed",
"triples",
".",
"the",
... |
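The record above scores knowledge-graph triples with a ConvE-style Inception network. As background for readers new to KGE, the simplest scoring family the "conventional shallow models" sentence refers to is translational: a triple (h, r, t) is plausible if the head embedding plus the relation embedding lands near the tail embedding. A TransE-style sketch with invented toy embeddings (this is illustrative background, not ReInceptionE's scoring function):

```python
def transe_score(h, r, t):
    """TransE-style plausibility: negative L2 distance ||h + r - t||.
    Higher (closer to zero) means the triple is more plausible."""
    return -sum((hi + ri - ti) ** 2 for hi, ri, ti in zip(h, r, t)) ** 0.5

# Toy embeddings where "paris" + "capital_of" lands exactly on "france".
emb = {"paris": (1.0, 0.0), "france": (1.0, 1.0), "tokyo": (0.0, 0.0)}
rel = {"capital_of": (0.0, 1.0)}
good = transe_score(emb["paris"], rel["capital_of"], emb["france"])
bad = transe_score(emb["tokyo"], rel["capital_of"], emb["france"])
print(good, bad)
```

ReInceptionE replaces this additive interaction with learned convolutional interactions between head and relation embeddings, plus relation-aware attention over the entity's neighborhood.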
ACL | Counterfactuals to Control Latent Disentangled Text Representations for Style Transfer | Disentanglement of latent representations into content and style spaces has been a commonly employed method for unsupervised text style transfer. These techniques aim to learn the disentangled representations and tweak them to modify the style of a sentence. In this paper, we propose a counterfactual-based method to modify the latent representation, by posing a ‘what-if’ scenario. This simple and disciplined approach also enables a fine-grained control on the transfer strength. We conduct experiments with the proposed methodology on multiple attribute transfer tasks like Sentiment, Formality and Excitement to support our hypothesis. | 572f71043d7e5d60eaf5f17a097d593f | 2,021 | [
"disentanglement of latent representations into content and style spaces has been a commonly employed method for unsupervised text style transfer .",
"these techniques aim to learn the disentangled representations and tweak them to modify the style of a sentence .",
"in this paper , we propose a counterfactual ... | [
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
16,
17,
18,
19
],
"text": "unsupervised text style transfer",
"tokens": [
"unsupervised",
"text",
"style",
... | [
"disentanglement",
"of",
"latent",
"representations",
"into",
"content",
"and",
"style",
"spaces",
"has",
"been",
"a",
"commonly",
"employed",
"method",
"for",
"unsupervised",
"text",
"style",
"transfer",
".",
"these",
"techniques",
"aim",
"to",
"learn",
"the",
... |
ACL | Refining Sample Embeddings with Relation Prototypes to Enhance Continual Relation Extraction | Continual learning has gained increasing attention in recent years, thanks to its biological interpretation and efficiency in many real-world applications. As a typical task of continual learning, continual relation extraction (CRE) aims to extract relations between entities from texts, where the samples of different relations are delivered into the model continuously. Some previous works have proved that storing typical samples of old relations in memory can help the model keep a stable understanding of old relations and avoid forgetting them. However, most methods heavily depend on the memory size in that they simply replay these memorized samples in subsequent tasks. To fully utilize memorized samples, in this paper, we employ relation prototype to extract useful information of each relation. Specifically, the prototype embedding for a specific relation is computed based on memorized samples of this relation, which is collected by K-means algorithm. The prototypes of all observed relations at current learning stage are used to re-initialize a memory network to refine subsequent sample embeddings, which ensures the model’s stable understanding on all observed relations when learning a new task. Compared with previous CRE models, our model utilizes the memory information sufficiently and efficiently, resulting in enhanced CRE performance. Our experiments show that the proposed model outperforms the state-of-the-art CRE models and has great advantage in avoiding catastrophic forgetting. The code and datasets are released on https://github.com/fd2014cl/RP-CRE. | 074d7f1a4a31e3803bdfab2fc62d1a55 | 2,021 | [
"continual learning has gained increasing attention in recent years , thanks to its biological interpretation and efficiency in many real - world applications .",
"as a typical task of continual learning , continual relation extraction ( cre ) aims to extract relations between entities from texts , where the samp... | [
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
0,
1
],
"text": "continual learning",
"tokens": [
"continual",
"learning"
]
}
],
"event_type": "ITT",
"trigger"... | [
"continual",
"learning",
"has",
"gained",
"increasing",
"attention",
"in",
"recent",
"years",
",",
"thanks",
"to",
"its",
"biological",
"interpretation",
"and",
"efficiency",
"in",
"many",
"real",
"-",
"world",
"applications",
".",
"as",
"a",
"typical",
"task",
... |
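The record above computes a prototype embedding per relation as a function of the memorized samples of that relation, then uses the prototypes to stabilize the model across tasks. The core prototype step can be sketched as a mean over memorized sample embeddings with nearest-prototype assignment (the embeddings and relation names below are invented; the paper additionally refines sample embeddings with a memory network):

```python
def prototype(samples: list[tuple[float, ...]]) -> tuple[float, ...]:
    """Prototype embedding = mean of the memorized sample embeddings."""
    n = len(samples)
    return tuple(sum(dims) / n for dims in zip(*samples))

def nearest_relation(x, prototypes: dict[str, tuple]) -> str:
    """Assign an embedding to the relation with the closest prototype."""
    def dist(p):
        return sum((a - b) ** 2 for a, b in zip(x, p))
    return min(prototypes, key=lambda r: dist(prototypes[r]))

# Toy memory: two memorized sample embeddings per observed relation.
memory = {
    "born_in": [(0.9, 0.1), (1.1, -0.1)],
    "works_for": [(-1.0, 1.0), (-0.8, 1.2)],
}
protos = {r: prototype(s) for r, s in memory.items()}
print(nearest_relation((1.0, 0.0), protos))
```

Because each prototype summarizes all memorized samples of a relation, the memory is used more efficiently than simply replaying the stored examples one by one.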
ACL | Towards Consistent Document-level Entity Linking: Joint Models for Entity Linking and Coreference Resolution | We consider the task of document-level entity linking (EL), where it is important to make consistent decisions for entity mentions over the full document jointly. We aim to leverage explicit “connections” among mentions within the document itself: we propose to join EL and coreference resolution (coref) in a single structured prediction task over directed trees and use a globally normalized model to solve it. This contrasts with related works where two separate models are trained for each of the tasks and additional logic is required to merge the outputs. Experimental results on two datasets show a boost of up to +5% F1-score on both coref and EL tasks, compared to their standalone counterparts. For a subset of hard cases, with individual mentions lacking the correct EL in their candidate entity list, we obtain a +50% increase in accuracy. | a681a5e8b6b67528b5ac27cfa21ed69d | 2,022 | [
"we consider the task of document - level entity linking ( el ) , where it is important to make consistent decisions for entity mentions over the full document jointly .",
"we aim to leverage explicit “ connections ” among mentions within the document itself : we propose to join el and coreference resolution ( co... | [
{
"arguments": [
{
"argument_type": "Researcher",
"nugget_type": "OG",
"offsets": [
0
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "TAK",
"offsets": [
... | [
"we",
"consider",
"the",
"task",
"of",
"document",
"-",
"level",
"entity",
"linking",
"(",
"el",
")",
",",
"where",
"it",
"is",
"important",
"to",
"make",
"consistent",
"decisions",
"for",
"entity",
"mentions",
"over",
"the",
"full",
"document",
"jointly",
... |
ACL | Multi-task Pairwise Neural Ranking for Hashtag Segmentation | Hashtags are often employed on social media and beyond to add metadata to a textual utterance with the goal of increasing discoverability, aiding search, or providing additional semantics. However, the semantic content of hashtags is not straightforward to infer as these represent ad-hoc conventions which frequently include multiple words joined together and can include abbreviations and unorthodox spellings. We build a dataset of 12,594 hashtags split into individual segments and propose a set of approaches for hashtag segmentation by framing it as a pairwise ranking problem between candidate segmentations. Our novel neural approaches demonstrate 24.6% error reduction in hashtag segmentation accuracy compared to the current state-of-the-art method. Finally, we demonstrate that a deeper understanding of hashtag semantics obtained through segmentation is useful for downstream applications such as sentiment analysis, for which we achieved a 2.6% increase in average recall on the SemEval 2017 sentiment analysis dataset. | de20317483f6a4505b70bb8697a42f09 | 2,019 | [
"hashtags are often employed on social media and beyond to add metadata to a textual utterance with the goal of increasing discoverability , aiding search , or providing additional semantics .",
"however , the semantic content of hashtags is not straightforward to infer as these represent ad - hoc conventions whi... | [
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "FEA",
"offsets": [
0
],
"text": "hashtags",
"tokens": [
"hashtags"
]
}
],
"event_type": "ITT",
"trigger": {
"offsets": [
3
],
... | [
"hashtags",
"are",
"often",
"employed",
"on",
"social",
"media",
"and",
"beyond",
"to",
"add",
"metadata",
"to",
"a",
"textual",
"utterance",
"with",
"the",
"goal",
"of",
"increasing",
"discoverability",
",",
"aiding",
"search",
",",
"or",
"providing",
"additi... |
ACL | A2N: Attending to Neighbors for Knowledge Graph Inference | State-of-the-art models for knowledge graph completion aim at learning a fixed embedding representation of entities in a multi-relational graph which can generalize to infer unseen entity relationships at test time. This can be sub-optimal as it requires memorizing and generalizing to all possible entity relationships using these fixed representations. We thus propose a novel attention-based method to learn query-dependent representation of entities which adaptively combines the relevant graph neighborhood of an entity leading to more accurate KG completion. The proposed method is evaluated on two benchmark datasets for knowledge graph completion, and experimental results show that the proposed model performs competitively or better than existing state-of-the-art, including recent methods for explicit multi-hop reasoning. Qualitative probing offers insight into how the model can reason about facts involving multiple hops in the knowledge graph, through the use of neighborhood attention. | 8a6454a399d5e1c6b721953858f58eb8 | 2,019 | [
"state - of - the - art models for knowledge graph completion aim at learning a fixed embedding representation of entities in a multi - relational graph which can generalize to infer unseen entity relationships at test time .",
"this can be sub - optimal as it requires memorizing and generalizing to all possible ... | [
{
"arguments": [
{
"argument_type": "Subject",
"nugget_type": "APP",
"offsets": [
0,
1,
2,
3,
4,
5,
6,
7
],
"text": "state - of - the - art models",
"tokens": [
"state"... | [
"state",
"-",
"of",
"-",
"the",
"-",
"art",
"models",
"for",
"knowledge",
"graph",
"completion",
"aim",
"at",
"learning",
"a",
"fixed",
"embedding",
"representation",
"of",
"entities",
"in",
"a",
"multi",
"-",
"relational",
"graph",
"which",
"can",
"generali... |
ACL | Vocabulary Pyramid Network: Multi-Pass Encoding and Decoding with Multi-Level Vocabularies for Response Generation | We study the task of response generation. Conventional methods employ a fixed vocabulary and one-pass decoding, which not only make them prone to safe and general responses but also lack further refining to the first generated raw sequence. To tackle the above two problems, we present a Vocabulary Pyramid Network (VPN) which is able to incorporate multi-pass encoding and decoding with multi-level vocabularies into response generation. Specifically, the dialogue input and output are represented by multi-level vocabularies which are obtained from hierarchical clustering of raw words. Then, multi-pass encoding and decoding are conducted on the multi-level vocabularies. Since VPN is able to leverage rich encoding and decoding information with multi-level vocabularies, it has the potential to generate better responses. Experiments on English Twitter and Chinese Weibo datasets demonstrate that VPN remarkably outperforms strong baselines. | 29acb3b597dab777bb60052652c5c8ae | 2,019 | [
"we study the task of response generation .",
"conventional methods employ a fixed vocabulary and one - pass decoding , which not only make them prone to safe and general responses but also lack further refining to the first generated raw sequence .",
"to tackle the above two problems , we present a vocabulary ... | [
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
5,
6
],
"text": "response generation",
"tokens": [
"response",
"generation"
]
}
],
"event_type": "ITT",
"trigge... | [
"we",
"study",
"the",
"task",
"of",
"response",
"generation",
".",
"conventional",
"methods",
"employ",
"a",
"fixed",
"vocabulary",
"and",
"one",
"-",
"pass",
"decoding",
",",
"which",
"not",
"only",
"make",
"them",
"prone",
"to",
"safe",
"and",
"general",
... |
ACL | BERT Rediscovers the Classical NLP Pipeline | Pre-trained text encoders have rapidly advanced the state of the art on many NLP tasks. We focus on one such model, BERT, and aim to quantify where linguistic information is captured within the network. We find that the model represents the steps of the traditional NLP pipeline in an interpretable and localizable way, and that the regions responsible for each step appear in the expected sequence: POS tagging, parsing, NER, semantic roles, then coreference. Qualitative analysis reveals that the model can and often does adjust this pipeline dynamically, revising lower-level decisions on the basis of disambiguating information from higher-level representations. | f904a8edc57072753bddf27c7651f016 | 2,019 | [
"pre - trained text encoders have rapidly advanced the state of the art on many nlp tasks .",
"we focus on one such model , bert , and aim to quantify where linguistic information is captured within the network .",
"we find that the model represents the steps of the traditional nlp pipeline in an interpretable ... | [
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "MOD",
"offsets": [
0,
1,
2,
3,
4
],
"text": "pre - trained text encoders",
"tokens": [
"pre",
"-",
"trained",
... | [
"pre",
"-",
"trained",
"text",
"encoders",
"have",
"rapidly",
"advanced",
"the",
"state",
"of",
"the",
"art",
"on",
"many",
"nlp",
"tasks",
".",
"we",
"focus",
"on",
"one",
"such",
"model",
",",
"bert",
",",
"and",
"aim",
"to",
"quantify",
"where",
"li... |
ACL | NNE: A Dataset for Nested Named Entity Recognition in English Newswire | Named entity recognition (NER) is widely used in natural language processing applications and downstream tasks. However, most NER tools target flat annotation from popular datasets, eschewing the semantic information available in nested entity mentions. We describe NNE—a fine-grained, nested named entity dataset over the full Wall Street Journal portion of the Penn Treebank (PTB). Our annotation comprises 279,795 mentions of 114 entity types with up to 6 layers of nesting. We hope the public release of this large dataset for English newswire will encourage development of new techniques for nested NER. | 9f66da9b002f8430fd65b37dd6f79fb4 | 2,019 | [
"named entity recognition ( ner ) is widely used in natural language processing applications and downstream tasks .",
"however , most ner tools target flat annotation from popular datasets , eschewing the semantic information available in nested entity mentions .",
"we describe nne — a fine - grained , nested n... | [
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "APP",
"offsets": [
21
],
"text": "named entity recognition",
"tokens": [
"ner"
]
}
],
"event_type": "ITT",
"trigger": {
"offsets": [
8
... | [
"named",
"entity",
"recognition",
"(",
"ner",
")",
"is",
"widely",
"used",
"in",
"natural",
"language",
"processing",
"applications",
"and",
"downstream",
"tasks",
".",
"however",
",",
"most",
"ner",
"tools",
"target",
"flat",
"annotation",
"from",
"popular",
... |
ACL | Cross-Modal Commentator: Automatic Machine Commenting Based on Cross-Modal Information | Automatic commenting of online articles can provide additional opinions and facts to the reader, which improves user experience and engagement on social media platforms. Previous work focuses on automatic commenting based solely on textual content. However, in real-scenarios, online articles usually contain multiple modal contents. For instance, graphic news contains plenty of images in addition to text. Contents other than text are also vital because they are not only more attractive to the reader but also may provide critical information. To remedy this, we propose a new task: cross-model automatic commenting (CMAC), which aims to make comments by integrating multiple modal contents. We construct a large-scale dataset for this task and explore several representative methods. Going a step further, an effective co-attention model is presented to capture the dependency between textual and visual information. Evaluation results show that our proposed model can achieve better performance than competitive baselines. | f355cf2bdabf184bcfaf7dfbab8a9623 | 2,019 | [
"automatic commenting of online articles can provide additional opinions and facts to the reader , which improves user experience and engagement on social media platforms .",
"previous work focuses on automatic commenting based solely on textual content .",
"however , in real - scenarios , online articles usual... | [
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
0,
1
],
"text": "automatic commenting",
"tokens": [
"automatic",
"commenting"
]
}
],
"event_type": "ITT",
"trig... | [
"automatic",
"commenting",
"of",
"online",
"articles",
"can",
"provide",
"additional",
"opinions",
"and",
"facts",
"to",
"the",
"reader",
",",
"which",
"improves",
"user",
"experience",
"and",
"engagement",
"on",
"social",
"media",
"platforms",
".",
"previous",
... |
ACL | A Cross-Domain Transferable Neural Coherence Model | Coherence is an important aspect of text quality and is crucial for ensuring its readability. One important limitation of existing coherence models is that training on one domain does not easily generalize to unseen categories of text. Previous work advocates for generative models for cross-domain generalization, because for discriminative models, the space of incoherent sentence orderings to discriminate against during training is prohibitively large. In this work, we propose a local discriminative neural model with a much smaller negative sampling space that can efficiently learn against incorrect orderings. The proposed coherence model is simple in structure, yet it significantly outperforms previous state-of-art methods on a standard benchmark dataset on the Wall Street Journal corpus, as well as in multiple new challenging settings of transfer to unseen categories of discourse on Wikipedia articles. | 8452682ec7a8fed90d74ce15605fe03a | 2,019 | [
"coherence is an important aspect of text quality and is crucial for ensuring its readability .",
"one important limitation of existing coherence models is that training on one domain does not easily generalize to unseen categories of text .",
"previous work advocates for generative models for cross - domain ge... | [
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
6,
7
],
"text": "text quality",
"tokens": [
"text",
"quality"
]
}
],
"event_type": "ITT",
"trigger": {
"o... | [
"coherence",
"is",
"an",
"important",
"aspect",
"of",
"text",
"quality",
"and",
"is",
"crucial",
"for",
"ensuring",
"its",
"readability",
".",
"one",
"important",
"limitation",
"of",
"existing",
"coherence",
"models",
"is",
"that",
"training",
"on",
"one",
"do... |
ACL | Let Me Choose: From Verbal Context to Font Selection | In this paper, we aim to learn associations between visual attributes of fonts and the verbal context of the texts they are typically applied to. Compared to related work leveraging the surrounding visual context, we choose to focus only on the input text, which can enable new applications for which the text is the only visual element in the document. We introduce a new dataset, containing examples of different topics in social media posts and ads, labeled through crowd-sourcing. Due to the subjective nature of the task, multiple fonts might be perceived as acceptable for an input text, which makes this problem challenging. To this end, we investigate different end-to-end models to learn label distributions on crowd-sourced data, to capture inter-subjectivity across all annotations. | 39550027daca869d23eaa6a3ed82bf16 | 2,020 | [
"in this paper , we aim to learn associations between visual attributes of fonts and the verbal context of the texts they are typically applied to .",
"compared to related work leveraging the surrounding visual context , we choose to focus only on the input text , which can enable new applications for which the t... | [
{
"arguments": [
{
"argument_type": "Researcher",
"nugget_type": "OG",
"offsets": [
4
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "TAK",
"offsets": [
... | [
"in",
"this",
"paper",
",",
"we",
"aim",
"to",
"learn",
"associations",
"between",
"visual",
"attributes",
"of",
"fonts",
"and",
"the",
"verbal",
"context",
"of",
"the",
"texts",
"they",
"are",
"typically",
"applied",
"to",
".",
"compared",
"to",
"related",
... |
ACL | Maria: A Visual Experience Powered Conversational Agent | Arguably, the visual perception of conversational agents to the physical world is a key way for them to exhibit the human-like intelligence. Image-grounded conversation is thus proposed to address this challenge. Existing works focus on exploring the multimodal dialog models that ground the conversation on a given image. In this paper, we take a step further to study image-grounded conversation under a fully open-ended setting where no paired dialog and image are assumed available. Specifically, we present Maria, a neural conversation agent powered by the visual world experiences which are retrieved from a large-scale image index. Maria consists of three flexible components, i.e., text-to-image retriever, visual concept detector and visual-knowledge-grounded response generator. The retriever aims to retrieve a correlated image to the dialog from an image index, while the visual concept detector extracts rich visual knowledge from the image. Then, the response generator is grounded on the extracted visual knowledge and dialog context to generate the target response. Extensive experiments demonstrate Maria outperforms previous state-of-the-art methods on automatic metrics and human evaluation, and can generate informative responses that have some visual commonsense of the physical world. | 73d60b2d72d21ceb399119a7cdc1aedf | 2,021 | [
"arguably , the visual perception of conversational agents to the physical world is a key way for them to exhibit the human - like intelligence .",
"image - grounded conversation is thus proposed to address this challenge .",
"existing works focus on exploring the multimodal dialog models that ground the conver... | [
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
26,
27,
28,
29
],
"text": "image - grounded conversation",
"tokens": [
"image",
"-",
"grounded",
... | [
"arguably",
",",
"the",
"visual",
"perception",
"of",
"conversational",
"agents",
"to",
"the",
"physical",
"world",
"is",
"a",
"key",
"way",
"for",
"them",
"to",
"exhibit",
"the",
"human",
"-",
"like",
"intelligence",
".",
"image",
"-",
"grounded",
"conversa... |
ACL | Data Programming for Learning Discourse Structure | This paper investigates the advantages and limits of data programming for the task of learning discourse structure. The data programming paradigm implemented in the Snorkel framework allows a user to label training data using expert-composed heuristics, which are then transformed via the “generative step” into probability distributions of the class labels given the training candidates. These results are later generalized using a discriminative model. Snorkel’s attractive promise to create a large amount of annotated data from a smaller set of training data by unifying the output of a set of heuristics has yet to be used for computationally difficult tasks, such as that of discourse attachment, in which one must decide where a given discourse unit attaches to other units in a text in order to form a coherent discourse structure. Although approaching this problem using Snorkel requires significant modifications to the structure of the heuristics, we show that weak supervision methods can be more than competitive with classical supervised learning approaches to the attachment problem. | 0fa8ddb2699465b9ef137aad72c7f04e | 2,019 | [
"this paper investigates the advantages and limits of data programming for the task of learning discourse structure .",
"the data programming paradigm implemented in the snorkel framework allows a user to label training data using expert - composed heuristics , which are then transformed via the “ generative step... | [
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "APP",
"offsets": [
8,
9
],
"text": "data programming",
"tokens": [
"data",
"programming"
]
}
],
"event_type": "ITT",
"trigger": {
... | [
"this",
"paper",
"investigates",
"the",
"advantages",
"and",
"limits",
"of",
"data",
"programming",
"for",
"the",
"task",
"of",
"learning",
"discourse",
"structure",
".",
"the",
"data",
"programming",
"paradigm",
"implemented",
"in",
"the",
"snorkel",
"framework",... |
ACL | MSP: Multi-Stage Prompting for Making Pre-trained Language Models Better Translators | Prompting has recently been shown as a promising approach for applying pre-trained language models to perform downstream tasks. We present Multi-Stage Prompting, a simple and automatic approach for leveraging pre-trained language models to translation tasks. To better mitigate the discrepancy between pre-training and translation, MSP divides the translation process via pre-trained language models into three separate stages: the encoding stage, the re-encoding stage, and the decoding stage. During each stage, we independently apply different continuous prompts for allowing pre-trained language models better shift to translation tasks. We conduct extensive experiments on three translation tasks. Experiments show that our method can significantly improve the translation performance of pre-trained language models. | f29bea4f8fe9b7344e4468b86d190de6 | 2,022 | [
"prompting has recently been shown as a promising approach for applying pre - trained language models to perform downstream tasks .",
"we present multi - stage prompting , a simple and automatic approach for leveraging pre - trained language models to translation tasks .",
"to better mitigate the discrepancy be... | [
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "APP",
"offsets": [
0
],
"text": "prompting",
"tokens": [
"prompting"
]
}
],
"event_type": "ITT",
"trigger": {
"offsets": [
4
],
... | [
"prompting",
"has",
"recently",
"been",
"shown",
"as",
"a",
"promising",
"approach",
"for",
"applying",
"pre",
"-",
"trained",
"language",
"models",
"to",
"perform",
"downstream",
"tasks",
".",
"we",
"present",
"multi",
"-",
"stage",
"prompting",
",",
"a",
"... |
ACL | Evaluating Explanation Methods for Neural Machine Translation | Recently many efforts have been devoted to interpreting the black-box NMT models, but little progress has been made on metrics to evaluate explanation methods. Word Alignment Error Rate can be used as such a metric that matches human understanding, however, it can not measure explanation methods on those target words that are not aligned to any source word. This paper thereby makes an initial attempt to evaluate explanation methods from an alternative viewpoint. To this end, it proposes a principled metric based on fidelity in regard to the predictive behavior of the NMT model. As the exact computation for this metric is intractable, we employ an efficient approach as its approximation. On six standard translation tasks, we quantitatively evaluate several explanation methods in terms of the proposed metric and we reveal some valuable findings for these explanation methods in our experiments. | 268425488840dce7a37f4c7bd1339d6f | 2,020 | [
"recently many efforts have been devoted to interpreting the black - box nmt models , but little progress has been made on metrics to evaluate explanation methods .",
"word alignment error rate can be used as such a metric that matches human understanding , however , it can not measure explanation methods on thos... | [
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
9,
10,
11,
12,
13
],
"text": "black - box nmt models",
"tokens": [
"black",
"-",
"box",
... | [
"recently",
"many",
"efforts",
"have",
"been",
"devoted",
"to",
"interpreting",
"the",
"black",
"-",
"box",
"nmt",
"models",
",",
"but",
"little",
"progress",
"has",
"been",
"made",
"on",
"metrics",
"to",
"evaluate",
"explanation",
"methods",
".",
"word",
"a... |
ACL | Predicting Degrees of Technicality in Automatic Terminology Extraction | While automatic term extraction is a well-researched area, computational approaches to distinguish between degrees of technicality are still understudied. We semi-automatically create a German gold standard of technicality across four domains, and illustrate the impact of a web-crawled general-language corpus on technicality prediction. When defining a classification approach that combines general-language and domain-specific word embeddings, we go beyond previous work and align vector spaces to gain comparative embeddings. We suggest two novel models to exploit general- vs. domain-specific comparisons: a simple neural network model with pre-computed comparative-embedding information as input, and a multi-channel model computing the comparison internally. Both models outperform previous approaches, with the multi-channel model performing best. | c665391ac49bb3b04f57e521866222ba | 2,020 | [
"while automatic term extraction is a well - researched area , computational approaches to distinguish between degrees of technicality are still understudied .",
"we semi - automatically create a german gold standard of technicality across four domains , and illustrate the impact of a web - crawled general - lang... | [
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
1,
2,
3
],
"text": "automatic term extraction",
"tokens": [
"automatic",
"term",
"extraction"
]
}
]... | [
"while",
"automatic",
"term",
"extraction",
"is",
"a",
"well",
"-",
"researched",
"area",
",",
"computational",
"approaches",
"to",
"distinguish",
"between",
"degrees",
"of",
"technicality",
"are",
"still",
"understudied",
".",
"we",
"semi",
"-",
"automatically",
... |
ACL | A Novel Graph-based Multi-modal Fusion Encoder for Neural Machine Translation | Multi-modal neural machine translation (NMT) aims to translate source sentences into a target language paired with images. However, dominant multi-modal NMT models do not fully exploit fine-grained semantic correspondences between semantic units of different modalities, which have potential to refine multi-modal representation learning. To deal with this issue, in this paper, we propose a novel graph-based multi-modal fusion encoder for NMT. Specifically, we first represent the input sentence and image using a unified multi-modal graph, which captures various semantic relationships between multi-modal semantic units (words and visual objects). We then stack multiple graph-based multi-modal fusion layers that iteratively perform semantic interactions to learn node representations. Finally, these representations provide an attention-based context vector for the decoder. We evaluate our proposed encoder on the Multi30K datasets. Experimental results and in-depth analysis show the superiority of our multi-modal NMT model. | 37d1e2b11cb44f56ddf7b0fd2359674e | 2,020 | [
"multi - modal neural machine translation ( nmt ) aims to translate source sentences into a target language paired with images .",
"however , dominant multi - modal nmt models do not fully exploit fine - grained semantic correspondences between semantic units of different modalities , which have potential to refi... | [
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
0,
1,
2,
3,
4,
5
],
"text": "multi - modal neural machine translation",
"tokens": [
"multi",
"-",... | [
"multi",
"-",
"modal",
"neural",
"machine",
"translation",
"(",
"nmt",
")",
"aims",
"to",
"translate",
"source",
"sentences",
"into",
"a",
"target",
"language",
"paired",
"with",
"images",
".",
"however",
",",
"dominant",
"multi",
"-",
"modal",
"nmt",
"model... |
ACL | Just Rank: Rethinking Evaluation with Word and Sentence Similarities | Word and sentence embeddings are useful feature representations in natural language processing. However, intrinsic evaluation for embeddings lags far behind, and there has been no significant update since the past decade. Word and sentence similarity tasks have become the de facto evaluation method. It leads models to overfit to such evaluations, negatively impacting embedding models’ development. This paper first points out the problems using semantic similarity as the gold standard for word and sentence embedding evaluations. Further, we propose a new intrinsic evaluation method called EvalRank, which shows a much stronger correlation with downstream tasks. Extensive experiments are conducted based on 60+ models and popular datasets to certify our judgments. Finally, the practical evaluation toolkit is released for future benchmarking purposes. | 1e98104fed0ee812bd2c5f1e6aef0500 | 2,022 | [
"word and sentence embeddings are useful feature representations in natural language processing .",
"however , intrinsic evaluation for embeddings lags far behind , and there has been no significant update since the past decade .",
"word and sentence similarity tasks have become the de facto evaluation method .... | [
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
9,
10,
11
],
"text": "natural language processing",
"tokens": [
"natural",
"language",
"processing"
]
}... | [
"word",
"and",
"sentence",
"embeddings",
"are",
"useful",
"feature",
"representations",
"in",
"natural",
"language",
"processing",
".",
"however",
",",
"intrinsic",
"evaluation",
"for",
"embeddings",
"lags",
"far",
"behind",
",",
"and",
"there",
"has",
"been",
"... |
ACL | Multi-Task Retrieval for Knowledge-Intensive Tasks | Retrieving relevant contexts from a large corpus is a crucial step for tasks such as open-domain question answering and fact checking. Although neural retrieval outperforms traditional methods like tf-idf and BM25, its performance degrades considerably when applied to out-of-domain data. Driven by the question of whether a neural retrieval model can be _universal_ and perform robustly on a wide variety of problems, we propose a multi-task trained model. Our approach not only outperforms previous methods in the few-shot setting, but also rivals specialised neural retrievers, even when in-domain training data is abundant. With the help of our retriever, we improve existing models for downstream tasks and closely match or improve the state of the art on multiple benchmarks. | e838393ab8c4b4570a3d88b5ffb9947a | 2,021 | [
"retrieving relevant contexts from a large corpus is a crucial step for tasks such as open - domain question answering and fact checking .",
"although neural retrieval outperforms traditional methods like tf - idf and bm25 , its performance degrades considerably when applied to out - of - domain data .",
"drive... | [
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
15,
16,
17,
18,
19,
20,
21,
22
],
"text": "open - domain question answering and fact checking",
"... | [
"retrieving",
"relevant",
"contexts",
"from",
"a",
"large",
"corpus",
"is",
"a",
"crucial",
"step",
"for",
"tasks",
"such",
"as",
"open",
"-",
"domain",
"question",
"answering",
"and",
"fact",
"checking",
".",
"although",
"neural",
"retrieval",
"outperforms",
... |
ACL | Factorising Meaning and Form for Intent-Preserving Paraphrasing | We propose a method for generating paraphrases of English questions that retain the original intent but use a different surface form. Our model combines a careful choice of training objective with a principled information bottleneck, to induce a latent encoding space that disentangles meaning and form. We train an encoder-decoder model to reconstruct a question from a paraphrase with the same meaning and an exemplar with the same surface form, leading to separated encoding spaces. We use a Vector-Quantized Variational Autoencoder to represent the surface form as a set of discrete latent variables, allowing us to use a classifier to select a different surface form at test time. Crucially, our method does not require access to an external source of target exemplars. Extensive experiments and a human evaluation show that we are able to generate paraphrases with a better tradeoff between semantic preservation and syntactic novelty compared to previous methods. | 5238b0b4feeff43bce8d266e7f914e8d | 2,021 | [
"we propose a method for generating paraphrases of english questions that retain the original intent but use a different surface form .",
"our model combines a careful choice of training objective with a principled information bottleneck , to induce a latent encoding space that disentangles meaning and form .",
... | [
{
"arguments": [
{
"argument_type": "Proposer",
"nugget_type": "OG",
"offsets": [
0
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
... | [
"we",
"propose",
"a",
"method",
"for",
"generating",
"paraphrases",
"of",
"english",
"questions",
"that",
"retain",
"the",
"original",
"intent",
"but",
"use",
"a",
"different",
"surface",
"form",
".",
"our",
"model",
"combines",
"a",
"careful",
"choice",
"of",... |
ACL | Will-They-Won’t-They: A Very Large Dataset for Stance Detection on Twitter | We present a new challenging stance detection dataset, called Will-They-Won’t-They (WT--WT), which contains 51,284 tweets in English, making it by far the largest available dataset of the type. All the annotations are carried out by experts; therefore, the dataset constitutes a high-quality and reliable benchmark for future research in stance detection. Our experiments with a wide range of recent state-of-the-art stance detection systems show that the dataset poses a strong challenge to existing models in this domain. | 1588afe5dba733abed2dcbac40fc84da | 2,020 | [
"we present a new challenging stance detection dataset , called will - they - won ’ t - they ( wt - - wt ) , which contains 51 , 284 tweets in english , making it by far the largest available dataset of the type .",
"all the annotations are carried out by experts ; therefore , the dataset constitutes a high - qua... | [
{
"arguments": [
{
"argument_type": "Proposer",
"nugget_type": "OG",
"offsets": [
0
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "DST",
"offsets": [
... | [
"we",
"present",
"a",
"new",
"challenging",
"stance",
"detection",
"dataset",
",",
"called",
"will",
"-",
"they",
"-",
"won",
"’",
"t",
"-",
"they",
"(",
"wt",
"-",
"-",
"wt",
")",
",",
"which",
"contains",
"51",
",",
"284",
"tweets",
"in",
"english"... |
ACL | Novel Slot Detection: A Benchmark for Discovering Unknown Slot Types in the Task-Oriented Dialogue System | Existing slot filling models can only recognize pre-defined in-domain slot types from a limited slot set. In the practical application, a reliable dialogue system should know what it does not know. In this paper, we introduce a new task, Novel Slot Detection (NSD), in the task-oriented dialogue system. NSD aims to discover unknown or out-of-domain slot types to strengthen the capability of a dialogue system based on in-domain training data. Besides, we construct two public NSD datasets, propose several strong NSD baselines, and establish a benchmark for future work. Finally, we conduct exhaustive experiments and qualitative analysis to comprehend key challenges and provide new guidance for future directions. | ffddbdedad011dd4d33b3f5901710527 | 2,021 | [
"existing slot filling models can only recognize pre - defined in - domain slot types from a limited slot set .",
"in the practical application , a reliable dialogue system should know what it does not know .",
"in this paper , we introduce a new task , novel slot detection ( nsd ) , in the task - oriented dial... | [
{
"arguments": [
{
"argument_type": "Subject",
"nugget_type": "APP",
"offsets": [
0,
1,
2,
3
],
"text": "existing slot filling models",
"tokens": [
"existing",
"slot",
"filling",
... | [
"existing",
"slot",
"filling",
"models",
"can",
"only",
"recognize",
"pre",
"-",
"defined",
"in",
"-",
"domain",
"slot",
"types",
"from",
"a",
"limited",
"slot",
"set",
".",
"in",
"the",
"practical",
"application",
",",
"a",
"reliable",
"dialogue",
"system",... |
ACL | HyperCore: Hyperbolic and Co-graph Representation for Automatic ICD Coding | The International Classification of Diseases (ICD) provides a standardized way for classifying diseases, which endows each disease with a unique code. ICD coding aims to assign proper ICD codes to a medical record. Since manual coding is very laborious and prone to errors, many methods have been proposed for the automatic ICD coding task. However, most of existing methods independently predict each code, ignoring two important characteristics: Code Hierarchy and Code Co-occurrence. In this paper, we propose a Hyperbolic and Co-graph Representation method (HyperCore) to address the above problem. Specifically, we propose a hyperbolic representation method to leverage the code hierarchy. Moreover, we propose a graph convolutional network to utilize the code co-occurrence. Experimental results on two widely used datasets demonstrate that our proposed model outperforms previous state-of-the-art methods. | 5cba0b148b13c26580447864c46bbb91 | 2,020 | [
"the international classification of diseases ( icd ) provides a standardized way for classifying diseases , which endows each disease with a unique code .",
"icd coding aims to assign proper icd codes to a medical record .",
"since manual coding is very laborious and prone to errors , many methods have been pr... | [
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
25,
26
],
"text": "icd coding",
"tokens": [
"icd",
"coding"
]
}
],
"event_type": "ITT",
"trigger": {
"off... | [
"the",
"international",
"classification",
"of",
"diseases",
"(",
"icd",
")",
"provides",
"a",
"standardized",
"way",
"for",
"classifying",
"diseases",
",",
"which",
"endows",
"each",
"disease",
"with",
"a",
"unique",
"code",
".",
"icd",
"coding",
"aims",
"to",... |
ACL | Employing the Correspondence of Relations and Connectives to Identify Implicit Discourse Relations via Label Embeddings | It has been shown that implicit connectives can be exploited to improve the performance of the models for implicit discourse relation recognition (IDRR). An important property of the implicit connectives is that they can be accurately mapped into the discourse relations conveying their functions. In this work, we explore this property in a multi-task learning framework for IDRR in which the relations and the connectives are simultaneously predicted, and the mapping is leveraged to transfer knowledge between the two prediction tasks via the embeddings of relations and connectives. We propose several techniques to enable such knowledge transfer that yield the state-of-the-art performance for IDRR on several settings of the benchmark dataset (i.e., the Penn Discourse Treebank dataset). | e7a28870a5d5e7d4ceef91c9de6694ec | 2,019 | [
"it has been shown that implicit connectives can be exploited to improve the performance of the models for implicit discourse relation recognition ( idrr ) .",
"an important property of the implicit connectives is that they can be accurately mapped into the discourse relations conveying their functions .",
"in ... | [
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
18,
19,
20,
21
],
"text": "implicit discourse relation recognition",
"tokens": [
"implicit",
"discourse",
"... | [
"it",
"has",
"been",
"shown",
"that",
"implicit",
"connectives",
"can",
"be",
"exploited",
"to",
"improve",
"the",
"performance",
"of",
"the",
"models",
"for",
"implicit",
"discourse",
"relation",
"recognition",
"(",
"idrr",
")",
".",
"an",
"important",
"prope... |
ACL | Towards Comprehensive Description Generation from Factual Attribute-value Tables | The comprehensive descriptions for factual attribute-value tables, which should be accurate, informative and loyal, can be very helpful for end users to understand the structured data in this form. However previous neural generators might suffer from key attributes missing, less informative and groundless information problems, which impede the generation of high-quality comprehensive descriptions for tables. To relieve these problems, we first propose force attention (FA) method to encourage the generator to pay more attention to the uncovered attributes to avoid potential key attributes missing. Furthermore, we propose reinforcement learning for information richness to generate more informative as well as more loyal descriptions for tables. In our experiments, we utilize the widely used WIKIBIO dataset as a benchmark. Besides, we create WB-filter based on WIKIBIO to test our model in the simulated user-oriented scenarios, in which the generated descriptions should accord with particular user interests. Experimental results show that our model outperforms the state-of-the-art baselines on both automatic and human evaluation. | 9f65ac5f8fa1e376f625846721541a98 | 2,019 | [
"the comprehensive descriptions for factual attribute - value tables , which should be accurate , informative and loyal , can be very helpful for end users to understand the structured data in this form .",
"however previous neural generators might suffer from key attributes missing , less informative and groundl... | [
{
"arguments": [
{
"argument_type": "Fault",
"nugget_type": "WEA",
"offsets": [
42,
43,
44
],
"text": "key attributes missing",
"tokens": [
"key",
"attributes",
"missing"
]
},
{
... | [
"the",
"comprehensive",
"descriptions",
"for",
"factual",
"attribute",
"-",
"value",
"tables",
",",
"which",
"should",
"be",
"accurate",
",",
"informative",
"and",
"loyal",
",",
"can",
"be",
"very",
"helpful",
"for",
"end",
"users",
"to",
"understand",
"the",
... |
ACL | Matching the Blanks: Distributional Similarity for Relation Learning | General purpose relation extractors, which can model arbitrary relations, are a core aspiration in information extraction. Efforts have been made to build general purpose extractors that represent relations with their surface forms, or which jointly embed surface forms with relations from an existing knowledge graph. However, both of these approaches are limited in their ability to generalize. In this paper, we build on extensions of Harris’ distributional hypothesis to relations, as well as recent advances in learning text representations (specifically, BERT), to build task agnostic relation representations solely from entity-linked text. We show that these representations significantly outperform previous work on exemplar based relation extraction (FewRel) even without using any of that task’s training data. We also show that models initialized with our task agnostic representations, and then tuned on supervised relation extraction datasets, significantly outperform the previous methods on SemEval 2010 Task 8, KBP37, and TACRED | 7895c9bafb91784ddb6a20ae83b8482d | 2,019 | [
"general purpose relation extractors , which can model arbitrary relations , are a core aspiration in information extraction .",
"efforts have been made to build general purpose extractors that represent relations with their surface forms , or which jointly embed surface forms with relations from an existing know... | [
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "MOD",
"offsets": [
0,
1,
2,
3
],
"text": "general purpose relation extractors",
"tokens": [
"general",
"purpose",
"relation",
... | [
"general",
"purpose",
"relation",
"extractors",
",",
"which",
"can",
"model",
"arbitrary",
"relations",
",",
"are",
"a",
"core",
"aspiration",
"in",
"information",
"extraction",
".",
"efforts",
"have",
"been",
"made",
"to",
"build",
"general",
"purpose",
"extrac... |
ACL | Beyond the Granularity: Multi-Perspective Dialogue Collaborative Selection for Dialogue State Tracking | In dialogue state tracking, dialogue history is a crucial material, and its utilization varies between different models. However, no matter how the dialogue history is used, each existing model uses its own consistent dialogue history during the entire state tracking process, regardless of which slot is updated. Apparently, it requires different dialogue history to update different slots in different turns. Therefore, using consistent dialogue contents may lead to insufficient or redundant information for different slots, which affects the overall performance. To address this problem, we devise DiCoS-DST to dynamically select the relevant dialogue contents corresponding to each slot for state updating. Specifically, it first retrieves turn-level utterances of dialogue history and evaluates their relevance to the slot from a combination of three perspectives: (1) its explicit connection to the slot name; (2) its relevance to the current turn dialogue; (3) Implicit Mention Oriented Reasoning. Then these perspectives are combined to yield a decision, and only the selected dialogue contents are fed into State Generator, which explicitly minimizes the distracting information passed to the downstream state prediction. Experimental results show that our approach achieves new state-of-the-art performance on MultiWOZ 2.1 and MultiWOZ 2.2, and achieves superior performance on multiple mainstream benchmark datasets (including Sim-M, Sim-R, and DSTC2). | 0327bd3e55803a43b3cd1e2f542d5bf4 | 2,022 | [
"in dialogue state tracking , dialogue history is a crucial material , and its utilization varies between different models .",
"however , no matter how the dialogue history is used , each existing model uses its own consistent dialogue history during the entire state tracking process , regardless of which slot is... | [
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "FEA",
"offsets": [
5,
6
],
"text": "dialogue history",
"tokens": [
"dialogue",
"history"
]
}
],
"event_type": "ITT",
"trigger": {
... | [
"in",
"dialogue",
"state",
"tracking",
",",
"dialogue",
"history",
"is",
"a",
"crucial",
"material",
",",
"and",
"its",
"utilization",
"varies",
"between",
"different",
"models",
".",
"however",
",",
"no",
"matter",
"how",
"the",
"dialogue",
"history",
"is",
... |
ACL | Balancing Training for Multilingual Neural Machine Translation | When training multilingual machine translation (MT) models that can translate to/from multiple languages, we are faced with imbalanced training sets: some languages have much more training data than others. Standard practice is to up-sample less resourced languages to increase representation, and the degree of up-sampling has a large effect on the overall performance. In this paper, we propose a method that instead automatically learns how to weight training data through a data scorer that is optimized to maximize performance on all test languages. Experiments on two sets of languages under both one-to-many and many-to-one MT settings show our method not only consistently outperforms heuristic baselines in terms of average performance, but also offers flexible control over the performance of which languages are optimized. | c23c0a8fd5125ad78c56e58e76952dca | 2,020 | [
"when training multilingual machine translation ( mt ) models that can translate to / from multiple languages , we are faced with imbalanced training sets : some languages have much more training data than others .",
"standard practice is to up - sample less resourced languages to increase representation , and th... | [
{
"arguments": [
{
"argument_type": "Aim",
"nugget_type": "TAK",
"offsets": [
92
],
"text": "performance",
"tokens": [
"performance"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"o... | [
"when",
"training",
"multilingual",
"machine",
"translation",
"(",
"mt",
")",
"models",
"that",
"can",
"translate",
"to",
"/",
"from",
"multiple",
"languages",
",",
"we",
"are",
"faced",
"with",
"imbalanced",
"training",
"sets",
":",
"some",
"languages",
"have... |
ACL | Politeness Transfer: A Tag and Generate Approach | This paper introduces a new task of politeness transfer which involves converting non-polite sentences to polite sentences while preserving the meaning. We also provide a dataset of more than 1.39 instances automatically labeled for politeness to encourage benchmark evaluations on this new task. We design a tag and generate pipeline that identifies stylistic attributes and subsequently generates a sentence in the target style while preserving most of the source content. For politeness as well as five other transfer tasks, our model outperforms the state-of-the-art methods on automatic metrics for content preservation, with a comparable or better performance on style transfer accuracy. Additionally, our model surpasses existing methods on human evaluations for grammaticality, meaning preservation and transfer accuracy across all the six style transfer tasks. The data and code is located at https://github.com/tag-and-generate. | 9f9c847cba0591c493c02969a13efdc8 | 2,020 | [
"this paper introduces a new task of politeness transfer which involves converting non - polite sentences to polite sentences while preserving the meaning .",
"we also provide a dataset of more than 1 . 39 instances automatically labeled for politeness to encourage benchmark evaluations on this new task .",
"we... | [
{
"arguments": [
{
"argument_type": "Content",
"nugget_type": "TAK",
"offsets": [
4,
5,
6,
7,
8
],
"text": "new task of politeness transfer",
"tokens": [
"new",
"task",
"of",
... | [
"this",
"paper",
"introduces",
"a",
"new",
"task",
"of",
"politeness",
"transfer",
"which",
"involves",
"converting",
"non",
"-",
"polite",
"sentences",
"to",
"polite",
"sentences",
"while",
"preserving",
"the",
"meaning",
".",
"we",
"also",
"provide",
"a",
"d... |
ACL | Discrete Latent Variable Representations for Low-Resource Text Classification | While much work on deep latent variable models of text uses continuous latent variables, discrete latent variables are interesting because they are more interpretable and typically more space efficient. We consider several approaches to learning discrete latent variable models for text in the case where exact marginalization over these variables is intractable. We compare the performance of the learned representations as features for low-resource document and sentence classification. Our best models outperform the previous best reported results with continuous representations in these low-resource settings, while learning significantly more compressed representations. Interestingly, we find that an amortized variant of Hard EM performs particularly well in the lowest-resource regimes. | 6273754d62a21d32543783658ced63f4 | 2,020 | [
"while much work on deep latent variable models of text uses continuous latent variables , discrete latent variables are interesting because they are more interpretable and typically more space efficient .",
"we consider several approaches to learning discrete latent variable models for text in the case where exa... | [
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "FEA",
"offsets": [
15,
16,
17
],
"text": "discrete latent variables",
"tokens": [
"discrete",
"latent",
"variables"
]
}
... | [
"while",
"much",
"work",
"on",
"deep",
"latent",
"variable",
"models",
"of",
"text",
"uses",
"continuous",
"latent",
"variables",
",",
"discrete",
"latent",
"variables",
"are",
"interesting",
"because",
"they",
"are",
"more",
"interpretable",
"and",
"typically",
... |
ACL | Collocation Classification with Unsupervised Relation Vectors | Lexical relation classification is the task of predicting whether a certain relation holds between a given pair of words. In this paper, we explore to which extent the current distributional landscape based on word embeddings provides a suitable basis for classification of collocations, i.e., pairs of words between which idiosyncratic lexical relations hold. First, we introduce a novel dataset with collocations categorized according to lexical functions. Second, we conduct experiments on a subset of this benchmark, comparing it in particular to the well known DiffVec dataset. In these experiments, in addition to simple word vector arithmetic operations, we also investigate the role of unsupervised relation vectors as a complementary input. While these relation vectors indeed help, we also show that lexical function classification poses a greater challenge than the syntactic and semantic relations that are typically used for benchmarks in the literature. | dadeb13cd8678ad2dc45758372790232 | 2,019 | [
"lexical relation classification is the task of predicting whether a certain relation holds between a given pair of words .",
"in this paper , we explore to which extent the current distributional landscape based on word embeddings provides a suitable basis for classification of collocations , i . e . , pairs of ... | [
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
0,
1,
2
],
"text": "lexical relation classification",
"tokens": [
"lexical",
"relation",
"classification"
]
... | [
"lexical",
"relation",
"classification",
"is",
"the",
"task",
"of",
"predicting",
"whether",
"a",
"certain",
"relation",
"holds",
"between",
"a",
"given",
"pair",
"of",
"words",
".",
"in",
"this",
"paper",
",",
"we",
"explore",
"to",
"which",
"extent",
"the"... |
ACL | DuReader_robust: A Chinese Dataset Towards Evaluating Robustness and Generalization of Machine Reading Comprehension in Real-World Applications | Machine reading comprehension (MRC) is a crucial task in natural language processing and has achieved remarkable advancements. However, most of the neural MRC models are still far from robust and fail to generalize well in real-world applications. In order to comprehensively verify the robustness and generalization of MRC models, we introduce a real-world Chinese dataset – DuReader_robust . It is designed to evaluate the MRC models from three aspects: over-sensitivity, over-stability and generalization. Comparing to previous work, the instances in DuReader_robust are natural texts, rather than the altered unnatural texts. It presents the challenges when applying MRC models to real-world applications. The experimental results show that MRC models do not perform well on the challenge test set. Moreover, we analyze the behavior of existing models on the challenge test set, which may provide suggestions for future model development. The dataset and codes are publicly available at https://github.com/baidu/DuReader. | 6eeb7160d2c0970739e89262e09c45bc | 2,021 | [
"machine reading comprehension ( mrc ) is a crucial task in natural language processing and has achieved remarkable advancements .",
"however , most of the neural mrc models are still far from robust and fail to generalize well in real - world applications .",
"in order to comprehensively verify the robustness ... | [
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
0,
1,
2
],
"text": "machine reading comprehension",
"tokens": [
"machine",
"reading",
"comprehension"
]
... | [
"machine",
"reading",
"comprehension",
"(",
"mrc",
")",
"is",
"a",
"crucial",
"task",
"in",
"natural",
"language",
"processing",
"and",
"has",
"achieved",
"remarkable",
"advancements",
".",
"however",
",",
"most",
"of",
"the",
"neural",
"mrc",
"models",
"are",... |
ACL | Unsupervised Corpus Aware Language Model Pre-training for Dense Passage Retrieval | Recent research demonstrates the effectiveness of using fine-tuned language models (LM) for dense retrieval. However, dense retrievers are hard to train, typically requiring heavily engineered fine-tuning pipelines to realize their full potential. In this paper, we identify and address two underlying problems of dense retrievers: i) fragility to training data noise and ii) requiring large batches to robustly learn the embedding space. We use the recently proposed Condenser pre-training architecture, which learns to condense information into the dense vector through LM pre-training. On top of it, we propose coCondenser, which adds an unsupervised corpus-level contrastive loss to warm up the passage embedding space. Experiments on MS-MARCO, Natural Question, and Trivia QA datasets show that coCondenser removes the need for heavy data engineering such as augmentation, synthesis, or filtering, and the need for large batch training. It shows comparable performance to RocketQA, a state-of-the-art, heavily engineered system, using simple small batch fine-tuning. | a027fcef382109ce96d7092f1ade3b1f | 2,022 | [
"recent research demonstrates the effectiveness of using fine - tuned language models ( lm ) for dense retrieval .",
"however , dense retrievers are hard to train , typically requiring heavily engineered fine - tuning pipelines to realize their full potential .",
"in this paper , we identify and address two und... | [
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "APP",
"offsets": [
7,
8,
9,
10,
11
],
"text": "fine - tuned language models",
"tokens": [
"fine",
"-",
"tuned",
... | [
"recent",
"research",
"demonstrates",
"the",
"effectiveness",
"of",
"using",
"fine",
"-",
"tuned",
"language",
"models",
"(",
"lm",
")",
"for",
"dense",
"retrieval",
".",
"however",
",",
"dense",
"retrievers",
"are",
"hard",
"to",
"train",
",",
"typically",
... |
ACL | Learning Dense Representations of Phrases at Scale | Open-domain question answering can be reformulated as a phrase retrieval problem, without the need for processing documents on-demand during inference (Seo et al., 2019). However, current phrase retrieval models heavily depend on sparse representations and still underperform retriever-reader approaches. In this work, we show for the first time that we can learn dense representations of phrases alone that achieve much stronger performance in open-domain QA. We present an effective method to learn phrase representations from the supervision of reading comprehension tasks, coupled with novel negative sampling methods. We also propose a query-side fine-tuning strategy, which can support transfer learning and reduce the discrepancy between training and inference. On five popular open-domain QA datasets, our model DensePhrases improves over previous phrase retrieval models by 15%-25% absolute accuracy and matches the performance of state-of-the-art retriever-reader models. Our model is easy to parallelize due to pure dense representations and processes more than 10 questions per second on CPUs. Finally, we directly use our pre-indexed dense phrase representations for two slot filling tasks, showing the promise of utilizing DensePhrases as a dense knowledge base for downstream tasks. | ad3e6fb102f854dba1f9a8ffff73a75f | 2,021 | [
"open - domain question answering can be reformulated as a phrase retrieval problem , without the need for processing documents on - demand during inference ( seo et al . , 2019 ) .",
"however , current phrase retrieval models heavily depend on sparse representations and still underperform retriever - reader appr... | [
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
0,
1,
2,
3,
4
],
"text": "open - domain question answering",
"tokens": [
"open",
"-",
"domain",
... | [
"open",
"-",
"domain",
"question",
"answering",
"can",
"be",
"reformulated",
"as",
"a",
"phrase",
"retrieval",
"problem",
",",
"without",
"the",
"need",
"for",
"processing",
"documents",
"on",
"-",
"demand",
"during",
"inference",
"(",
"seo",
"et",
"al",
"."... |
ACL | Challenges and Strategies in Cross-Cultural NLP | Various efforts in the Natural Language Processing (NLP) community have been made to accommodate linguistic diversity and serve speakers of many different languages. However, it is important to acknowledge that speakers and the content they produce and require, vary not just by language, but also by culture. Although language and culture are tightly linked, there are important differences. Analogous to cross-lingual and multilingual NLP, cross-cultural and multicultural NLP considers these differences in order to better serve users of NLP systems. We propose a principled framework to frame these efforts, and survey existing and potential strategies. | 6241011ac9fa80bf9a0ca27a2760b2fc | 2,022 | [
"various efforts in the natural language processing ( nlp ) community have been made to accommodate linguistic diversity and serve speakers of many different languages .",
"however , it is important to acknowledge that speakers and the content they produce and require , vary not just by language , but also by cul... | [
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
4,
5,
6
],
"text": "natural language processing",
"tokens": [
"natural",
"language",
"processing"
]
}
... | [
"various",
"efforts",
"in",
"the",
"natural",
"language",
"processing",
"(",
"nlp",
")",
"community",
"have",
"been",
"made",
"to",
"accommodate",
"linguistic",
"diversity",
"and",
"serve",
"speakers",
"of",
"many",
"different",
"languages",
".",
"however",
",",... |
ACL | LeeBERT: Learned Early Exit for BERT with cross-level optimization | Pre-trained language models like BERT are performant in a wide range of natural language tasks. However, they are resource exhaustive and computationally expensive for industrial scenarios. Thus, early exits are adopted at each layer of BERT to perform adaptive computation by predicting easier samples with the first few layers to speed up the inference. In this work, to improve efficiency without performance drop, we propose a novel training scheme called Learned Early Exit for BERT (LeeBERT). First, we ask each exit to learn from each other, rather than learning only from the last layer. Second, the weights of different loss terms are learned, thus balancing off different objectives. We formulate the optimization of LeeBERT as a bi-level optimization problem, and we propose a novel cross-level optimization (CLO) algorithm to improve the optimization results. Experiments on the GLUE benchmark show that our proposed methods improve the performance of the state-of-the-art (SOTA) early exit methods for pre-trained models. | 54ea31dee745901248cf1efc599f8ed8 | 2,021 | [
"pre - trained language models like bert are performant in a wide range of natural language tasks .",
"however , they are resource exhaustive and computationally expensive for industrial scenarios .",
"thus , early exits are adopted at each layer of bert to perform adaptive computation by predicting easier samp... | [
{
"arguments": [
{
"argument_type": "Concern",
"nugget_type": "APP",
"offsets": [
0,
1,
2,
3,
4
],
"text": "pre - trained language models",
"tokens": [
"pre",
"-",
"trained",
... | [
"pre",
"-",
"trained",
"language",
"models",
"like",
"bert",
"are",
"performant",
"in",
"a",
"wide",
"range",
"of",
"natural",
"language",
"tasks",
".",
"however",
",",
"they",
"are",
"resource",
"exhaustive",
"and",
"computationally",
"expensive",
"for",
"ind... |
ACL | Improving Disfluency Detection by Self-Training a Self-Attentive Model | Self-attentive neural syntactic parsers using contextualized word embeddings (e.g. ELMo or BERT) currently produce state-of-the-art results in joint parsing and disfluency detection in speech transcripts. Since the contextualized word embeddings are pre-trained on a large amount of unlabeled data, using additional unlabeled data to train a neural model might seem redundant. However, we show that self-training — a semi-supervised technique for incorporating unlabeled data — sets a new state-of-the-art for the self-attentive parser on disfluency detection, demonstrating that self-training provides benefits orthogonal to the pre-trained contextualized word representations. We also show that ensembling self-trained parsers provides further gains for disfluency detection. | c7e3c66a81015e2ed85c638b3dfb8c02 | 2,020 | [
"self - attentive neural syntactic parsers using contextualized word embeddings ( e . g . elmo or bert ) currently produce state - of - the - art results in joint parsing and disfluency detection in speech transcripts .",
"since the contextualized word embeddings are pre - trained on a large amount of unlabeled d... | [
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
36,
37
],
"text": "speech transcripts",
"tokens": [
"speech",
"transcripts"
]
}
],
"event_type": "ITT",
"trigge... | [
"self",
"-",
"attentive",
"neural",
"syntactic",
"parsers",
"using",
"contextualized",
"word",
"embeddings",
"(",
"e",
".",
"g",
".",
"elmo",
"or",
"bert",
")",
"currently",
"produce",
"state",
"-",
"of",
"-",
"the",
"-",
"art",
"results",
"in",
"joint",
... |
ACL | Toward Interpretable Semantic Textual Similarity via Optimal Transport-based Contrastive Sentence Learning | Recently, finetuning a pretrained language model to capture the similarity between sentence embeddings has shown the state-of-the-art performance on the semantic textual similarity (STS) task. However, the absence of an interpretation method for the sentence similarity makes it difficult to explain the model output. In this work, we explicitly describe the sentence distance as the weighted sum of contextualized token distances on the basis of a transportation problem, and then present the optimal transport-based distance measure, named RCMD; it identifies and leverages semantically-aligned token pairs. In the end, we propose CLRCMD, a contrastive learning framework that optimizes RCMD of sentence pairs, which enhances the quality of sentence similarity and their interpretation. Extensive experiments demonstrate that our learning framework outperforms other baselines on both STS and interpretable-STS benchmarks, indicating that it computes effective sentence similarity and also provides interpretation consistent with human judgement. | 17c84a88de68aeca966790f0bde272e4 | 2,022 | [
"recently , finetuning a pretrained language model to capture the similarity between sentence embeddings has shown the state - of - the - art performance on the semantic textual similarity ( sts ) task .",
"however , the absence of an interpretation method for the sentence similarity makes it difficult to explain... | [
{
"arguments": [
{
"argument_type": "BaseComponent",
"nugget_type": "APP",
"offsets": [
4,
5,
6
],
"text": "pretrained language model",
"tokens": [
"pretrained",
"language",
"model"
]
... | [
"recently",
",",
"finetuning",
"a",
"pretrained",
"language",
"model",
"to",
"capture",
"the",
"similarity",
"between",
"sentence",
"embeddings",
"has",
"shown",
"the",
"state",
"-",
"of",
"-",
"the",
"-",
"art",
"performance",
"on",
"the",
"semantic",
"textua... |
ACL | Prompt for Extraction? PAIE: Prompting Argument Interaction for Event Argument Extraction | In this paper, we propose an effective yet efficient model PAIE for both sentence-level and document-level Event Argument Extraction (EAE), which also generalizes well when there is a lack of training data. On the one hand, PAIE utilizes prompt tuning for extractive objectives to take the best advantages of Pre-trained Language Models (PLMs). It introduces two span selectors based on the prompt to select start/end tokens among input texts for each role. On the other hand, it captures argument interactions via multi-role prompts and conducts joint optimization with optimal span assignments via a bipartite matching loss. Also, with a flexible prompt design, PAIE can extract multiple arguments with the same role instead of conventional heuristic threshold tuning. We have conducted extensive experiments on three benchmarks, including both sentence- and document-level EAE. The results present promising improvements from PAIE (3.5% and 2.3% F1 gains in average on three benchmarks, for PAIE-base and PAIE-large respectively). Further analysis demonstrates the efficiency, generalization to few-shot settings, and effectiveness of different extractive prompt tuning strategies. Our code is available at https://github.com/mayubo2333/PAIE. | aa8dcb455038ef2a2a744c58a1972d2e | 2,022 | [
"in this paper , we propose an effective yet efficient model paie for both sentence - level and document - level event argument extraction ( eae ) , which also generalizes well when there is a lack of training data .",
"on the one hand , paie utilizes prompt tuning for extractive objectives to take the best advan... | [
{
"arguments": [
{
"argument_type": "Proposer",
"nugget_type": "OG",
"offsets": [
4
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "APP",
"offsets": [
... | [
"in",
"this",
"paper",
",",
"we",
"propose",
"an",
"effective",
"yet",
"efficient",
"model",
"paie",
"for",
"both",
"sentence",
"-",
"level",
"and",
"document",
"-",
"level",
"event",
"argument",
"extraction",
"(",
"eae",
")",
",",
"which",
"also",
"genera... |
ACL | Slot-consistent NLG for Task-oriented Dialogue Systems with Iterative Rectification Network | Data-driven approaches using neural networks have achieved promising performances in natural language generation (NLG). However, neural generators are prone to make mistakes, e.g., neglecting an input slot value and generating a redundant slot value. Prior works refer this to hallucination phenomenon. In this paper, we study slot consistency for building reliable NLG systems with all slot values of input dialogue act (DA) properly generated in output sentences. We propose Iterative Rectification Network (IRN) for improving general NLG systems to produce both correct and fluent responses. It applies a bootstrapping algorithm to sample training candidates and uses reinforcement learning to incorporate discrete reward related to slot inconsistency into training. Comprehensive studies have been conducted on multiple benchmark datasets, showing that the proposed methods have significantly reduced the slot error rate (ERR) for all strong baselines. Human evaluations also have confirmed its effectiveness. | 24f9be0c5791a1ad22b59b6ddd62b825 | 2,020 | [
"data - driven approaches using neural networks have achieved promising performances in natural language generation ( nlg ) .",
"however , neural generators are prone to make mistakes , e . g . , neglecting an input slot value and generating a redundant slot value .",
"prior works refer this to hallucination ph... | [
{
"arguments": [
{
"argument_type": "Concern",
"nugget_type": "MOD",
"offsets": [
21,
22
],
"text": "neural generators",
"tokens": [
"neural",
"generators"
]
},
{
"argument_type": "Fault",
... | [
"data",
"-",
"driven",
"approaches",
"using",
"neural",
"networks",
"have",
"achieved",
"promising",
"performances",
"in",
"natural",
"language",
"generation",
"(",
"nlg",
")",
".",
"however",
",",
"neural",
"generators",
"are",
"prone",
"to",
"make",
"mistakes"... |
ACL | Zero-Shot Entity Linking by Reading Entity Descriptions | We present the zero-shot entity linking task, where mentions must be linked to unseen entities without in-domain labeled data. The goal is to enable robust transfer to highly specialized domains, and so no metadata or alias tables are assumed. In this setting, entities are only identified by text descriptions, and models must rely strictly on language understanding to resolve the new entities. First, we show that strong reading comprehension models pre-trained on large unlabeled data can be used to generalize to unseen entities. Second, we propose a simple and effective adaptive pre-training strategy, which we term domain-adaptive pre-training (DAP), to address the domain shift problem associated with linking unseen entities in a new domain. We present experiments on a new dataset that we construct for this task and show that DAP improves over strong pre-training baselines, including BERT. The data and code are available at https://github.com/lajanugen/zeshel. | ba6095d846d233ba05c7fb89a50343f3 | 2,019 | [
"we present the zero - shot entity linking task , where mentions must be linked to unseen entities without in - domain labeled data .",
"the goal is to enable robust transfer to highly specialized domains , and so no metadata or alias tables are assumed .",
"in this setting , entities are only identified by tex... | [
{
"arguments": [
{
"argument_type": "Proposer",
"nugget_type": "OG",
"offsets": [
0
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "TAK",
"offsets": [
... | [
"we",
"present",
"the",
"zero",
"-",
"shot",
"entity",
"linking",
"task",
",",
"where",
"mentions",
"must",
"be",
"linked",
"to",
"unseen",
"entities",
"without",
"in",
"-",
"domain",
"labeled",
"data",
".",
"the",
"goal",
"is",
"to",
"enable",
"robust",
... |