id | document | challenge | approach | outcome |
|---|---|---|---|---|
P06-1112 | In this paper , we explore correlation of dependency relation paths to rank candidate answers in answer extraction . Using the correlation measure , we compare dependency relations of a candidate answer and mapped question phrases in sentence with the corresponding relations in question . Different from previous studie... | A generally accessible NER system for QA systems produces a larger answer candidate set which would be hard for current surface word-level ranking methods. | They propose a statistical method which takes correlations of dependency relation paths computed by the Dynamic Time Warping algorithm into account for ranking candidate answers. | The proposed method outperforms state-of-the-art syntactic relation-based methods by up to 20% and shows it works even better on harder questions where NER performs poorly. |
2020.acl-main.528 | Recently , many works have tried to augment the performance of Chinese named entity recognition ( NER ) using word lexicons . As a representative , Lattice-LSTM ( Zhang and Yang , 2018 ) has achieved new benchmark results on several public Chinese NER datasets . However , Lattice-LSTM has a complex model architecture .... | Named entity recognition in Chinese requires either word segmentation, which causes errors, or a character-level model with lexical features, which is complex and expensive. | They propose to encode lexicon features into character representations so that the system stays simpler and achieves faster inference than previous models. | The proposed efficient character-based LSTM method with lexical features achieves 6.15 times faster inference speed and better performance than previous models. |
P19-1352 | Word embedding is central to neural machine translation ( NMT ) , which has attracted intensive research interest in recent years . In NMT , the source embedding plays the role of the entrance while the target embedding acts as the terminal . These layers occupy most of the model parameters for representation learning ... | Word embeddings occupy a large amount of memory, and weight tying does not mitigate this issue for distant language pairs on translation tasks. | They propose a language-independent method where a model shares embeddings between source and target only when words have some common characteristics. | Experiments on machine translation datasets involving multiple language families and scripts show that the proposed model outperforms baseline models while using fewer parameters. |
D12-1061 | This paper explores log-based query expansion ( QE ) models for Web search . Three lexicon models are proposed to bridge the lexical gap between Web documents and user queries . These models are trained on pairs of user queries and titles of clicked documents . Evaluations on a real world data set show that the lexicon... | Term mismatches between a query and documents hinder retrieval of relevant documents, and black-box statistical machine translation models are used to expand queries. | They propose to train lexicon query expansion models by using transaction logs that contain pairs of queries and titles of clicked documents. | The proposed query expansion model enables retrieval systems to significantly outperform those using previous expansion models while being more transparent. |
N07-1011 | Traditional noun phrase coreference resolution systems represent features only of pairs of noun phrases . In this paper , we propose a machine learning method that enables features over sets of noun phrases , resulting in a first-order probabilistic model for coreference . We outline a set of approximations that make t... | Existing approaches treat noun phrase coreference resolution as a set of independent binary classifications, limiting the features to pairs of noun phrases. | They propose a machine learning method that uses sets of noun phrases as features that are coupled with a sampling method to enable scalability. | In an evaluation on the ACE coreference dataset, the proposed method achieves a 45% error reduction over a previous method. |
2021.acl-long.67 | Bilingual lexicons map words in one language to their translations in another , and are typically induced by learning linear projections to align monolingual word embedding spaces . In this paper , we show it is possible to produce much higher quality lexicons with methods that combine ( 1 ) unsupervised bitext mining ... | Existing methods to induce bilingual lexicons use linear projections to align word embeddings that are based on unrealistic simplifying assumptions. | They propose to use both unsupervised bitext mining and unsupervised word alignment methods to produce higher quality lexicons. | The proposed method achieves the state of the art in the bilingual lexicon induction task while keeping the interpretability of the pipeline. |
D18-1065 | In this paper we show that a simple beam approximation of the joint distribution between attention and output is an easy , accurate , and efficient attention mechanism for sequence to sequence learning . The method combines the advantage of sharp focus in hard attention and the implementation ease of soft attention . O... | Softmax attention models are popular because of their differentiable and easy-to-implement nature, while hard attention models outperform them when successfully trained. | They propose a method to approximate the joint attention-output distribution which provides attention as sharp as hard attention and implementation as easy as soft attention. | The proposed approach outperforms soft attention models and recent hard attention and Sparsemax models on five translation tasks and also on morphological inflection tasks. |
2022.acl-long.304 | Contrastive learning has achieved impressive success in generation tasks to militate the " exposure bias " problem and discriminatively exploit the different quality of references . Existing works mostly focus on contrastive learning on the instance-level without discriminating the contribution of each word , while key... | Existing works on contrastive learning for text generation focus only on instance-level while word-level information such as keywords is also of great importance. | They propose a CVAE-based hierarchical contrastive learning within instance and keyword-level using a keyword graph which iteratively polishes the keyword representations. | The proposed model outperforms CVAE and baselines on storytelling, paraphrasing, and dialogue generation tasks. |
2020.emnlp-main.384 | Word embedding models are typically able to capture the semantics of words via the distributional hypothesis , but fail to capture the numerical properties of numbers that appear in a text . This leads to problems with numerical reasoning involving tasks such as question answering . We propose a new methodology to assi... | Existing word embeddings treat numbers like words failing to capture numeration and magnitude properties of numbers which is problematic for tasks such as question answering. | They propose a deterministic technique to learn numerical embeddings where cosine similarity reflects the actual distance and a regularization approach for a contextual setting. | A Bi-LSTM network initialized with the proposed embedding shows the ability to capture numeration and magnitude and to perform list maximum, decoding, and addition. |
P12-1103 | We propose a novel approach to improve SMT via paraphrase rules which are automatically extracted from the bilingual training data . Without using extra paraphrase resources , we acquire the rules by comparing the source side of the parallel corpus with the target-to-source translations of the target side . Besides the... | Incorporating paraphrases improves statistical machine translation; however, no prior work investigates sentence-level paraphrases. | They propose to use bilingual training data to obtain paraphrase rules on word, phrase and sentence levels to rewrite inputs to be MT-favored. | The acquired paraphrase rules improve translation quality in the oral and news domains. |
N09-1072 | Automatically extracting social meaning and intention from spoken dialogue is an important task for dialogue systems and social computing . We describe a system for detecting elements of interactional style : whether a speaker is awkward , friendly , or flirtatious . We create and use a new spoken corpus of 991 4-minut... | Methods to extract social meanings such as engagement from speech remain unknown, while such extraction is important in sociolinguistics and for developing socially aware computing systems. | They create a spoken corpus from conversations in speed-dating and perform analysis using extracted dialogue features with a focus on gender. | They found several gender-dependent and gender-independent phenomena in conversations, related to the speed of speaking, laughing, or asking questions. |
P18-1256 | We introduce the task of predicting adverbial presupposition triggers such as also and again . Solving such a task requires detecting recurring or similar events in the discourse context , and has applications in natural language generation tasks such as summarization and dialogue systems . We create two new datasets f... | Adverbial triggers indicate event recurrence, continuation, or termination in the discourse context and are frequent in English, but there are few related works. | They introduce an adverbial presupposition trigger prediction task and datasets and propose an attention mechanism that augments a recurrent neural network without additional trainable parameters. | The proposed model outperforms baselines including an LSTM-based language model on most of the triggers on the two datasets. |
P08-1116 | This paper proposes a novel method that exploits multiple resources to improve statistical machine translation ( SMT ) based paraphrasing . In detail , a phrasal paraphrase table and a feature function are derived from each resource , which are then combined in a log-linear SMT model for sentence-level paraphrase gener... | Paraphrase generation requires monolingual parallel corpora, which are not easily obtainable, and few works focus on using the extracted phrasal paraphrases in sentence-level paraphrase generation. | They propose to exploit six paraphrase resources to extract phrasal paraphrase tables that are further used to build a log-linear statistical machine translation-based paraphrasing model. | They show that using multiple resources enhances paraphrase generation precision at the phrase and sentence levels, especially when they are similar to user queries. |
P08-1027 | There are many possible different semantic relationships between nominals . Classification of such relationships is an important and difficult task ( for example , the well known noun compound classification task is a special case of this problem ) . We propose a novel pattern clusters method for nominal relationship (... | Using annotated data or semantic resources such as WordNet for relation classification introduces errors, and such data is not available in many domains and languages. | They propose an unsupervised pattern clustering method for nominal relation classification using a large generic corpus, enabling scaling across domains and languages. | Experiments on the ACL SemEval-07 dataset show the proposed method performs better than existing methods that do not use disambiguation tags. |
2021.emnlp-main.185 | Learning sentence embeddings from dialogues has drawn increasing attention due to its low annotation cost and high domain adaptability . Conventional approaches employ the siamese-network for this task , which obtains the sentence embeddings through modeling the context-response semantic relevance by applying a feed-fo... | Existing methods to learn representations from dialogues have a similarity-measurement gap between training and evaluation time and do not exploit the multi-turn structure of data. | They propose a dialogue-based contrastive learning approach to learn sentence embeddings from dialogues by modelling semantic matching relationships between the context and response implicitly. | The proposed approach outperforms baseline methods on two newly introduced tasks coupled with three multi-turn dialogue datasets in terms of MAP and Spearman's correlation measures. |
P02-1051 | Named entity phrases are some of the most difficult phrases to translate because new phrases can appear from nowhere , and because many are domain specific , not to be found in bilingual dictionaries . We present a novel algorithm for translating named entity phrases using easily obtainable monolingual and bilingual re... | Translating named entities is challenging since they can appear from nowhere, and cannot be found in bilingual dictionaries because they are domain specific. | They propose an algorithm for Arabic-English named entity translation which uses easily obtainable monolingual and bilingual resources and a limited amount of hard-to-obtain bilingual resources. | The proposed algorithm is compared with human translators and a commercial system, and it performs at near-human translation quality. |
E06-1014 | Probabilistic Latent Semantic Analysis ( PLSA ) models have been shown to provide a better model for capturing polysemy and synonymy than Latent Semantic Analysis ( LSA ) . However , the parameters of a PLSA model are trained using the Expectation Maximization ( EM ) algorithm , and as a result , the trained model is d... | EM algorithm-based probabilistic latent semantic analysis models show high variance in performance, and models with different initializations are not comparable. | They propose to use Latent Semantic Analysis to initialize probabilistic latent semantic analysis models; the EM algorithm is then used to refine the initial estimate. | They show that the model initialized with the proposed method always outperforms existing methods. |
2021.naacl-main.34 | We rely on arguments in our daily lives to deliver our opinions and base them on evidence , making them more convincing in turn . However , finding and formulating arguments can be challenging . In this work , we present the Arg-CTRL-a language model for argument generation that can be controlled to generate sentence-l... | Argumentative content generation can support humans, but current models produce lengthy texts and offer users little controllability over aspects of the argument. | They train a controllable language model on a corpus annotated with control codes provided by a stance detection model and introduce a dataset for evaluation. | The proposed model can generate arguments that are genuine, argumentative, and grammatically correct, as well as counter-arguments, in a transparent and interpretable way. |
N16-1181 | We describe a question answering model that applies to both images and structured knowledge bases . The model uses natural language strings to automatically assemble neural networks from a collection of composable modules . Parameters for these modules are learned jointly with network-assembly parameters via reinforcem... | Existing works on visual learning use manually-specified modular structures. | They propose a question-answering model trained jointly to translate questions into dynamically assembled neural networks and then produce answers using images or knowledge bases. | The proposed model achieves state-of-the-art results on visual and structured domain datasets, showing that continuous representations improve the expressiveness and learnability of semantic parsers. |
2020.aacl-main.88 | Large pre-trained language models reach state-of-the-art results on many different NLP tasks when fine-tuned individually ; they also come with significant memory and computational requirements , calling for methods to reduce model sizes ( green AI ) . We propose a two-stage model-compression method to reduce a model '... | Existing coarse-grained approaches for reducing the inference time of pre-trained models remove layers, posing a trade-off between compression and the accuracy of a model. | They propose a model-compression method which decomposes the matrix and performs feature distillation on the internal representations to recover from the decomposition. | The proposed method reduces the model size by 0.4x and increases inference speed by 1.45x while keeping the performance degradation minimal on the GLUE benchmark. |
D16-1205 | Several studies on sentence processing suggest that the mental lexicon keeps track of the mutual expectations between words . Current DSMs , however , represent context words as separate features , thereby losing important information for word expectations , such as word interrelations . In this paper , we present a D... | Providing richer contexts to Distributional Semantic Models by taking word interrelations into account improves them, but it would suffer from data sparsity. | They propose a Distributional Semantic Model that incorporates verb contexts as joint syntactic dependencies so that it emulates knowledge about event participants. | They show that representations obtained by the proposed model outperform more complex models on two verb similarity datasets with a limited training corpus. |
2021.acl-long.57 | In this paper , we propose Inverse Adversarial Training ( IAT ) algorithm for training neural dialogue systems to avoid generic responses and model dialogue history better . In contrast to standard adversarial training algorithms , IAT encourages the model to be sensitive to the perturbation in the dialogue history and... | Neural end-to-end dialogue models generate fluent yet dull and generic responses without taking dialogue histories into account due to the over-simplified maximum likelihood estimation objective. | They propose an algorithm which encourages the model to be sensitive to perturbations in dialogue histories and to generate more diverse and consistent responses by applying penalization. | The proposed approach can model dialogue history better and generate more diverse and consistent responses on OpenSubtitles and DailyDialog. |
D09-1065 | demonstrated that corpus-extracted models of semantic knowledge can predict neural activation patterns recorded using fMRI . This could be a very powerful technique for evaluating conceptual models extracted from corpora ; however , fMRI is expensive and imposes strong constraints on data collection . Following on expe... | The high cost of fMRI hinders studies on the relationship between corpus-extracted models of semantic knowledge and neural activation patterns. | They propose to use EEG activation patterns instead of fMRI to reduce the cost. | They show that using EEG signals with corpus-based models, they can predict word-level distinctions significantly above chance. |
D09-1085 | This paper introduces a new parser evaluation corpus containing around 700 sentences annotated with unbounded dependencies , from seven different grammatical constructions . We run a series of off-the-shelf parsers on the corpus to evaluate how well state-of-the-art parsing technology is able to recover such dependencie... | While recent statistical parsers perform well on the Penn Treebank, the results can be misleading due to several reasons originating from evaluation and datasets. | They propose a new corpus with unbounded dependencies from different grammatical constructions. | Their evaluation of existing parsers with the proposed corpus shows lower scores than reported in previous works, indicating a poor ability to recover unbounded dependencies. |
P12-1013 | Learning entailment rules is fundamental in many semantic-inference applications and has been an active field of research in recent years . In this paper we address the problem of learning transitive graphs that describe entailment rules between predicates ( termed entailment graphs ) . We first identify that entailmen... | Current algorithms for obtaining entailment rules for semantic inference are inefficient, hindering the use of large resources. | They propose an efficient polynomial approximation algorithm that exploits their observation that entailment graphs have a "tree-like" property. | Their iterative algorithm runs orders of magnitude faster than current exact state-of-the-art solutions while maintaining close quality. |
D15-1054 | Sponsored search is at the center of a multibillion dollar market established by search technology . Accurate ad click prediction is a key component for this market to function since the pricing mechanism heavily relies on the estimation of click probabilities . Lexical features derived from the text of both the query ... | Conventional word embeddings with a simple integration of click feedback information and averaging to obtain sentence representations do not work well for ad click prediction. | They propose several joint word embedding methods to leverage positive and negative click feedback which put query vectors close to relevant ad vectors. | The use of features obtained from the new models improves performance on large sponsored search data from the commercial Yahoo! search engine. |
D09-1072 | We propose a new model for unsupervised POS tagging based on linguistic distinctions between open and closed-class items . Exploiting notions from current linguistic theory , the system uses far less information than previous systems , far simpler computational methods , and far sparser descriptions in learning context... | Current approaches tackle unsupervised POS tagging as a sequential labelling problem and require a complete knowledge of the lexicon. | They propose to first identify functional syntactic contexts and then use them to make predictions for POS tagging. | The proposed method achieves equivalent performance by using 0.6% of the lexical knowledge used in baseline models. |
2021.naacl-main.458 | Non-autoregressive Transformer is a promising text generation model . However , current non-autoregressive models still fall behind their autoregressive counterparts in translation quality . We attribute this accuracy gap to the lack of dependency modeling among decoder inputs . In this paper , we propose CNAT , which ... | Non-autoregressive translation models fall behind their autoregressive counterparts in translation quality due to the lack of dependency modelling for the target outputs. | They propose a non-autoregressive transformer-based model which implicitly learns categorical codes as latent variables in the decoding process to complement missing dependencies. | The proposed model achieves state-of-the-art performance without knowledge distillation and a competitive decoding speedup over an iterative-based model when coupled with knowledge distillation and reranking techniques. |
2021.emnlp-main.765 | The clustering-based unsupervised relation discovery method has gradually become one of the important methods of open relation extraction ( OpenRE ) . However , high-dimensional vectors can encode complex linguistic information which leads to the problem that the derived clusters can not explicitly align with the relat... | High-dimensional vectors can encode complex information for relation extraction, but they are not guaranteed to be consistent with relational semantic similarity. | They propose to use available relation-labeled data to obtain relation-oriented representations by minimizing the distance between instances of the same relation. | The proposed approach can reduce error rates significantly compared with the best models for open relation extraction. |
# ACLSum: A New Dataset for Aspect-based Summarization of Scientific Publications
This repository contains data for our paper "ACLSum: A New Dataset for Aspect-based Summarization of Scientific Publications" and a small utility class to work with it.
## Hugging Face Datasets
You can also use Hugging Face Datasets to load ACLSum (dataset link). This is convenient if you want to train transformer models using our dataset.

Just do:
```python
from datasets import load_dataset

dataset = load_dataset("sobamchan/aclsum", "challenge", split="train")
```
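Each record pairs a paper's text with an aspect-specific summary. Below is a minimal sketch for inspecting all three aspects; it assumes the field names match the columns in the preview table above ("document", "challenge", "approach", "outcome") and that "approach" and "outcome" are also valid configuration names alongside the confirmed "challenge" — check the dataset card if these differ.

```python
from datasets import load_dataset

# "challenge" is confirmed by the snippet above; "approach" and "outcome"
# are assumed configuration names, mirroring the preview table's columns.
for aspect in ["challenge", "approach", "outcome"]:
    dataset = load_dataset("sobamchan/aclsum", aspect, split="train")
    example = dataset[0]
    # Field names are assumed from the preview table, not from the docs.
    print(aspect, "fields:", list(example.keys()))
    print(example["document"][:200], "...")
```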