_id | text | title |
|---|---|---|
d222140906 | Discourse relations describe how two propositions relate to one another, and identifying them automatically is an integral part of natural language understanding. However, annotating discourse relations typically requires expert annotators. Recently, different semantic aspects of a sentence have been represented and crowd-sourced via question-and-answer (QA) pairs. This paper proposes a novel representation of discourse relations as QA pairs, which in turn allows us to crowd-source wide-coverage data annotated with discourse relations, via an intuitively appealing interface for composing such questions and answers. Based on our proposed representation, we collect a novel and wide-coverage QADiscourse dataset, and present baseline algorithms for predicting QADiscourse relations. | QADiscourse - Discourse Relations as QA Pairs: Representation, Crowdsourcing and Baselines |
d222140938 | Few-shot Intent Detection is challenging due to the scarcity of available annotated utterances. Although recent works demonstrate that multi-level matching plays an important role in transferring learned knowledge from seen training classes to novel testing classes, they rely on a static similarity measure and overly fine-grained matching components. These limitations inhibit generalizing capability towards Generalized Few-shot Learning settings where both seen and novel classes are co-existent. In this paper, we propose a novel Semantic Matching and Aggregation Network where semantic components are distilled from utterances via multi-head self-attention with additional dynamic regularization constraints. These semantic components capture high-level information, resulting in more effective matching between instances. Our multi-perspective matching method provides a comprehensive matching measure to enhance representations of both labeled and unlabeled instances. We also propose a more challenging evaluation setting that considers classification on the joint all-class label space. Extensive experimental results demonstrate the effectiveness of our method. Our code and data are publicly available. | Dynamic Semantic Matching and Aggregation Network for Few-shot Intent Detection |
d222141016 | While monolingual data has been shown to be useful in improving bilingual neural machine translation (NMT), effectively and efficiently leveraging monolingual data for Multilingual NMT (MNMT) systems is a less explored area. In this work, we propose a multi-task learning (MTL) framework that jointly trains the model with the translation task on bitext data and two denoising tasks on the monolingual data. We conduct extensive empirical studies on MNMT systems with 10 language pairs from WMT datasets. We show that the proposed approach can effectively improve the translation quality for both high-resource and low-resource languages by a large margin, achieving significantly better results than the individual bilingual models. We also demonstrate the efficacy of the proposed approach in the zero-shot setup for language pairs without bitext training data. Furthermore, we show the effectiveness of MTL over pre-training approaches for both NMT and cross-lingual transfer learning NLU tasks; the proposed approach outperforms massive-scale models trained on a single task. | Multi-task Learning for Multilingual Neural Machine Translation |
d222141121 | Large-scale training datasets lie at the core of the recent success of neural machine translation (NMT) models. However, the complex patterns and potential noises in the large-scale data make training NMT models difficult. In this work, we explore identifying the inactive training examples which contribute less to the model performance, and show that the existence of inactive examples depends on the data distribution. We further introduce data rejuvenation to improve the training of NMT models on large-scale datasets by exploiting inactive examples. The proposed framework consists of three phases. First, we train an identification model on the original training data, and use it to distinguish inactive examples from active examples by their sentence-level output probabilities. Then, we train a rejuvenation model on the active examples, which is used to re-label the inactive examples with forward-translation. Finally, the rejuvenated examples and the active examples are combined to train the final NMT model. Experimental results on WMT14 English-German and English-French datasets show that the proposed data rejuvenation consistently and significantly improves performance for several strong NMT models. Extensive analyses reveal that our approach stabilizes and accelerates the training process of NMT models, resulting in final models with better generalization capability. | Data Rejuvenation: Exploiting Inactive Training Examples for Neural Machine Translation |
d221340971 | Social media data can be a very salient source of information during crises. User-generated messages provide a window into people's minds during such times, allowing us insights about their moods and opinions. Due to the vast amounts of such messages, a large-scale analysis of population-wide developments becomes possible. In this paper, we analyze Twitter messages (tweets) collected during the first months of the COVID-19 pandemic in Europe with regard to their sentiment. This is implemented with a neural network for sentiment analysis using multilingual sentence embeddings. We separate the results by country of origin, and correlate their temporal development with events in those countries. This allows us to study the effect of the situation on people's moods. We see, for example, that lockdown announcements correlate with a deterioration of mood in almost all surveyed countries, which recovers within a short time span. | Cross-language sentiment analysis of European Twitter messages during the COVID-19 pandemic |
d9464771 | Neural network-based dialog systems are attracting increasing attention in both academia and industry. Recently, researchers have begun to realize the importance of speaker modeling in neural dialog systems, but established tasks and datasets are lacking. In this paper, we propose speaker classification as a surrogate task for general speaker modeling, and collect massive data to facilitate research in this direction. We further investigate temporal-based and content-based models of speakers, and propose several hybrids of them. Experiments show that speaker classification is feasible, and that hybrid models outperform each single component. | Towards Neural Speaker Modeling in Multi-Party Conversation: The Task, Dataset, and Models |
d1349048 | We describe a mechanism which generates rebuttals to a user's rejoinders in the context of arguments generated from Bayesian networks. This mechanism is implemented in an interactive argumentation system. Given an argument generated by the system and an interpretation of a user's rejoinder, the generation of the rebuttal takes into account the intended effect of the user's rejoinder, determined on a model of the user's beliefs, and its actual effect, determined on a model of the system's beliefs. We consider three main rebuttal strategies: refute the user's rejoinder, strengthen the argument goal, and dismiss the user's line of reasoning. | Towards the Generation of Rebuttals in a Bayesian Argumentation System |
d92327 | Current machine translation (MT) systems are still not perfect. In practice, the output from these systems needs to be edited to correct errors. A way of increasing the productivity of the whole translation process (MT plus human work) is to incorporate the human correction activities within the translation process itself, thereby shifting the MT paradigm to that of computer-assisted translation. This model entails an iterative process in which the human translator activity is included in the loop: In each iteration, a prefix of the translation is validated (accepted or amended) by the human and the system computes its best (or n-best) translation suffix hypothesis to complete this prefix. A successful framework for MT is the so-called statistical (or pattern recognition) framework. Interestingly, within this framework, the adaptation of MT systems to the interactive scenario affects mainly the search process, allowing a great reuse of successful techniques and models. In this article, alignment templates, phrase-based models, and stochastic finite-state transducers are used to develop computer-assisted translation systems. These systems were assessed in a European project (TransType2) on two real tasks: the translation of printer manuals and the translation of the Bulletin of the European Union. In each task, three pairs of languages were involved (in both translation directions). | Statistical Approaches to Computer-Assisted Translation |
d182952892 | Single document summarization has enjoyed renewed interest in recent years thanks to the popularity of neural network models and the availability of large-scale datasets. In this paper we develop an unsupervised approach, arguing that it is unrealistic to expect large-scale and high-quality training data to be available or created for different types of summaries, domains, or languages. We revisit a popular graph-based ranking algorithm and modify how node (aka sentence) centrality is computed in two ways: (a) we employ BERT, a state-of-the-art neural representation learning model, to better capture sentential meaning and (b) we build graphs with directed edges, arguing that the contribution of any two nodes to their respective centrality is influenced by their relative position in a document. Experimental results on three news summarization datasets representative of different languages and writing styles show that our approach outperforms strong baselines by a wide margin. | Sentence Centrality Revisited for Unsupervised Summarization |
d233209850 | Knowledge Graph Embeddings (KGEs) have been intensively explored in recent years due to their promise for a wide range of applications. However, existing studies focus on improving the final model performance without acknowledging the computational cost of the proposed approaches, in terms of execution time and environmental impact. This paper proposes a simple yet effective KGE framework which can reduce the training time and carbon footprint by orders of magnitude compared with state-of-the-art approaches, while producing competitive performance. We highlight three technical innovations: full batch learning via relational matrices, closed-form Orthogonal Procrustes Analysis for KGEs, and non-negative-sampling training. In addition, as the first KGE method whose entity embeddings also store full relation information, our trained models encode rich semantics and are highly interpretable. Comprehensive experiments and ablation studies involving 13 strong baselines and two standard datasets verify the effectiveness and efficiency of our algorithm. | Highly Efficient Knowledge Graph Embedding Learning with Orthogonal Procrustes Analysis |
d7663461 | Combining deep neural networks with structured logic rules is desirable to harness flexibility and reduce uninterpretability of the neural models. We propose a general framework capable of enhancing various types of neural networks (e.g., CNNs and RNNs) with declarative first-order logic rules. Specifically, we develop an iterative distillation method that transfers the structured information of logic rules into the weights of neural networks. We deploy the framework on a CNN for sentiment analysis, and an RNN for named entity recognition. With a few highly intuitive rules, we obtain substantial improvements and achieve state-of-the-art or comparable results to previous best-performing systems. | Harnessing Deep Neural Networks with Logic Rules |
d221738893 | This paper describes the details and results of our system (Solomon) in the SemEval 2020 Task 11 "Detection of Propaganda Techniques in News Articles" (Da San Martino et al., 2020). We participated in the "Technique Classification" (TC) task, which is a multi-class classification task. To address the TC task, we used a RoBERTa-based transformer architecture for fine-tuning on the propaganda dataset. The predictions of RoBERTa were further refined by class-dependent minority-class classifiers. A special classifier, which employs a dynamically adapted Least Common Sub-sequence algorithm, is used to adapt to the intricacies of the repetition class. Compared to the other participating systems, our submission is ranked 4th on the leaderboard. | Solomon at SemEval-2020 Task 11: Ensemble Architecture for Fine-Tuned Propaganda Detection in News Articles |
d233181798 | Many sequence-to-sequence tasks in natural language processing are roughly monotonic in the alignment between source and target sequence, and previous work has facilitated or enforced learning of monotonic attention behavior via specialized attention functions or pretraining. In this work, we introduce a monotonicity loss function that is compatible with standard attention mechanisms and test it on several sequence-to-sequence tasks: grapheme-to-phoneme conversion, morphological inflection, transliteration, and dialect normalization. Experiments show that we can achieve largely monotonic behavior. Performance is mixed, with larger gains on top of RNN baselines. General monotonicity does not benefit transformer multi-head attention; however, we see isolated improvements when only a subset of heads is biased towards monotonic behavior. | On Biasing Transformer Attention Towards Monotonicity |
d256461298 | This paper describes Global Tone Communication Co., Ltd.'s submission to the WMT22 general MT shared task. We participate in six directions: English to/from Ukrainian, Ukrainian to/from Czech, English to Croatian, and English to Chinese. Our submitted systems are unconstrained and focus on back-translation, multilingual translation models, and fine-tuning. The multilingual translation models focus on X-to-one and one-to-X directions. We also apply rules and a language model to filter monolingual, parallel, and synthetic sentences. | GTCOM Neural Machine Translation Systems for WMT22 |
d53223311 | In the era of Big Knowledge Graphs, Question Answering (QA) systems have reached a milestone in their performance and feasibility. However, their applicability, particularly in specific domains such as the biomedical domain, has not gained wide acceptance due to their "black box" nature, which hinders transparency, fairness, and accountability of QA systems. As a result, users are unable to understand how and why particular questions are answered while others fail. To address this challenge, in this paper we develop an automatic approach for generating explanations during various stages of a pipeline-based QA system. Our approach is supervised and automatic, considering three classes (i.e., success, no answer, and wrong answer) for annotating the output of the involved QA components. Based on the prediction, a template explanation is chosen and integrated into the output of the corresponding component. To measure the effectiveness of the approach, we conducted a user survey on how non-expert users perceive our generated explanations. The results of our study show a significant increase along four human-factor dimensions from the human-computer interaction community. | QA2Explanation: Generating and Evaluating Explanations for Question Answering Systems over Knowledge Graph |
d4903640 | We investigate the effect of various dependency-based word embeddings on distinguishing between functional and domain similarity, word similarity rankings, and two downstream tasks in English. Variations include word embeddings trained using context windows from Stanford and Universal dependencies at several levels of enhancement (ranging from unlabeled, to Enhanced++ dependencies). Results are compared to basic linear contexts and evaluated on several datasets. We found that embeddings trained with Universal and Stanford dependency contexts excel at different tasks, and that enhanced dependencies often improve performance. | A Deeper Look into Dependency-Based Word Embeddings |
d211011336 | The literature on structured prediction for NLP describes a rich collection of distributions and algorithms over sequences, segmentations, alignments, and trees; however, these algorithms are difficult to utilize in deep learning frameworks. We introduce Torch-Struct, a library for structured prediction designed to take advantage of and integrate with vectorized, auto-differentiation based frameworks. Torch-Struct includes a broad collection of probabilistic structures accessed through a simple and flexible distribution-based API that connects to any deep learning model. The library utilizes batched, vectorized operations and exploits auto-differentiation to produce readable, fast, and testable code. Internally, we also include a number of general-purpose optimizations to provide cross-algorithm efficiency. Experiments show significant performance gains over fast baselines. Case studies demonstrate the benefits of the library. Torch-Struct is available at https://github.com/harvardnlp/pytorch-struct. | Torch-Struct: Deep Structured Prediction Library |
d5955929 | The goal of semantic parsing is to map natural language to a machine-interpretable meaning representation language (MRL). One of the constraints that limits full exploration of deep learning technologies for semantic parsing is the lack of sufficient annotated training data. In this paper, we propose using sequence-to-sequence models in a multi-task setup for semantic parsing with a focus on transfer learning. We explore three multi-task architectures for sequence-to-sequence modeling and compare their performance with an independently trained model. Our experiments show that the multi-task setup aids transfer learning from an auxiliary task with large labeled data to a target task with smaller labeled data. We see absolute accuracy gains ranging from 1.0% to 4.4% on our in-house dataset, and we also see good gains ranging from 2.5% to 7.0% on the ATIS semantic parsing tasks with syntactic and semantic auxiliary tasks. | Transfer Learning for Neural Semantic Parsing |
d3517571 | We present a preliminary study on unsupervised preposition sense disambiguation (PSD), comparing different models and training techniques (EM, MAP-EM with L0 norm, Bayesian inference using Gibbs sampling). To our knowledge, this is the first attempt at unsupervised preposition sense disambiguation. Our best accuracy reaches 56%, a significant improvement (p < .001) of 16% over the most-frequent-sense baseline. | Models and Training for Unsupervised Preposition Sense Disambiguation |
d257767736 | Automation in the legal domain promises to be vital in helping solve the backlog that currently affects the Indian judiciary. For any system developed to aid such a task, it is imperative that it is informed by the choices legal professionals make in the real world to achieve the same task, while also ensuring that biases are eliminated. The task of legal case similarity is accomplished in this paper by extracting the thematic similarity of the documents based on their rhetorical roles. The similarity scores between the documents are calculated, keeping in mind the different amount of influence each of these rhetorical roles has in real-life practice over determining the similarity between two documents. Knowledge graphs are used to capture this information in order to facilitate the use of this method for applications like information retrieval and recommendation systems. | Knowledge Graph-based Thematic Similarity for Indian Legal Judgement Documents using Rhetorical Roles |
d102353644 | In a corpus of data, outliers are either errors, i.e., counterproductive mistakes in the data, or unique, informative samples that improve model robustness. Identifying outliers can lead to better datasets by (1) removing noise in datasets and (2) guiding collection of additional data to fill gaps. However, the problem of detecting both outlier types has received relatively little attention in NLP, particularly for dialog systems. We introduce a simple and effective technique for detecting both erroneous and unique samples in a corpus of short texts using neural sentence embeddings combined with distance-based outlier detection. We also present a novel data collection pipeline built atop our detection technique to automatically and iteratively mine unique data samples while discarding erroneous samples. Experiments show that our outlier detection technique is effective at finding errors while our data collection pipeline yields highly diverse corpora that in turn produce more robust intent classification and slot-filling models. | Outlier Detection for Improved Data Quality and Diversity in Dialog Systems |
d10561150 | Manually created large-scale ontologies are useful for organizing, searching, and repurposing content ranging from scientific papers and medical guidelines to images. However, maintenance of such ontologies is expensive. In this paper, we investigate the use of universal schemas (Riedel et al., 2013) as a mechanism for ontology maintenance. We apply this approach on top of two unique data sources: 14 million full-text scientific articles and chapters, plus a hand-curated medical ontology of 1 million concepts. We show that using a straightforward matrix factorization algorithm one can achieve an F1 measure of 0.7 on a link prediction task in this environment. Link prediction results can be used to suggest new relation types and relation type synonyms coming from the literature, as well as to predict specific new relation instances in the ontology. | Applying Universal Schemas for Domain Specific Ontology Expansion |
d10643243 | We present the results from the second shared task on multimodal machine translation and multilingual image description. Nine teams submitted 19 systems to two tasks. The multimodal translation task, in which the source sentence is supplemented by an image, was extended with a new language (French) and two new test sets. The multilingual image description task was changed such that at test time, only the image is given. Compared to last year, multimodal systems improved, but text-only systems remain competitive. | Findings of the Second Shared Task on Multimodal Machine Translation and Multilingual Image Description |
d6472470 | Grammatical tagging of the Lancaster-Oslo/Bergen Corpus: word-class determination by means of word endings. 1 Aim: In connection with the project "Grammatical Tagging of the Lancaster-Oslo/Bergen Corpus", we in Oslo have concentrated in particular on revising Greene and Rubin's suffix list and word list for the Brown Corpus. Our revised lists take both the Brown Corpus and the Lancaster-Oslo/Bergen (LOB) Corpus into account, i.e. approximately 2 million words of running text in total. Admittedly, this is a relatively modest amount of material compared with all existing English text, but we nevertheless believe that the results we have arrived at are fairly generally valid. 2 Problem: For anyone wishing to carry out automatic grammatical analysis, English poses particular problems, both because the language contains an unusually large number of homographs and because it almost completely lacks distinctive inflectional endings. Nevertheless, Greene and Rubin's work (1971) clearly shows that word classes can to a large extent be determined from derivational endings and other types of final sequences. (The terms "suffix" and "endings" will here be used in the sense of word ending or final sequence.) 3 Material and method: When Greene and Rubin (1971) compiled their suffix list, they used, in addition to a reverse-alphabetical word list for the Brown Corpus (approx. 50,000 word types), reverse-alphabetical word lists from Dolby and Resnikoff (1967). We have used a similar procedure in revising the suffix list. The following material was used: A: a reverse-alphabetical word list of the approximately 75,000 word types (graphic words) found in the Brown Corpus and the LOB Corpus combined; B: a frequency list of grammatical codes occurring with frequent endings (from one to five letters), based on the grammatically tagged version of the Brown Corpus (henceforth called the suffix/code list). Some examples: ... Mette-Cathrine Jahr, Stig Johansson. Proceedings of NODALIDA 1981, pages 125-136. | Grammatisk merking av The Lancaster-Oslo/Bergen Corpus: ordklassebestemmelse ved hjelp av ordslutt |
d220047937 | Language models keep track of complex linguistic information about the preceding context, including, e.g., syntactic relations in a sentence. We investigate whether they also capture information beneficial for resolving pronominal anaphora in English. We analyze two state-of-the-art models with LSTM and Transformer architectures, respectively, using probe tasks on a coreference-annotated corpus. Our hypothesis is that language models will capture grammatical properties of anaphora (such as agreement between a pronoun and its antecedent), but not semantico-referential information (the fact that pronoun and antecedent refer to the same entity). Instead, we find evidence that models capture referential aspects to some extent, though they are still much better at grammar. The Transformer outperforms the LSTM in all analyses, and exhibits in particular better semantico-referential abilities. | Probing for Referential Information in Language Models |
d209071986 | The problem of building a coherent and non-monotonous conversational agent with proper discourse and coverage is still an area of open research. Current architectures only take care of semantic and contextual information for a given query and fail to completely account for syntactic and external knowledge, which are crucial for generating responses in a chit-chat system. To overcome this problem, we propose an end-to-end multi-stream deep learning architecture which learns unified embeddings for query-response pairs by leveraging contextual information from memory networks and syntactic information by incorporating Graph Convolution Networks (GCN) over their dependency parse. A stream of this network also utilizes transfer learning by pre-training a bidirectional transformer to extract semantic representation for each input sentence and incorporates external knowledge through the neighborhood of the entities from a Knowledge Base (KB). We benchmark these embeddings on the next sentence prediction task and significantly improve upon the existing techniques. Furthermore, we use AMUSED to represent query and responses along with its context to develop a retrieval-based conversational agent which has been validated by expert linguists to have comprehensive engagement with humans. | AMUSED: A MULTI-STREAM VECTOR REPRESENTATION METHOD FOR USE IN NATURAL DIALOGUE |
d8867529 | This paper introduces SGNMT, our experimental platform for machine translation research. SGNMT provides a generic interface to neural and symbolic scoring modules (predictors) with left-to-right semantics, such as translation models like NMT, language models, translation lattices, n-best lists, or other kinds of scores and constraints. Predictors can be combined with other predictors to form complex decoding tasks. SGNMT implements a number of search strategies for traversing the space spanned by the predictors which are appropriate for different predictor constellations. Adding new predictors or decoding strategies is particularly easy, making it a very efficient tool for prototyping new research ideas. SGNMT is actively being used by students in the MPhil program in Machine Learning, Speech and Language Technology at the University of Cambridge for course work and theses, as well as for most of the research work in our group. | SGNMT - A Flexible NMT Decoding Platform for Quick Prototyping of New Models and Search Strategies |
d10182773 | In this paper I report on an investigation into the problem of assigning tones to pitch contours. The proposed model is intended to serve as a tool for phonologists working on instrumentally obtained pitch data from tone languages. Motivation and exemplification for the model is provided by data taken from my fieldwork on Bamileke Dschang (Cameroon). Following recent work by Liberman and others, I provide a parametrised F0 prediction function P which generates F0 values from a tone sequence, and I explore the asymptotic behaviour of downstep. Next, I observe that transcribing a sequence X of pitch (i.e. F0) values amounts to finding a tone sequence T such that P(T) ≈ X. This is a combinatorial optimisation problem, for which two non-deterministic search techniques are provided: a genetic algorithm and a simulated annealing algorithm. Finally, two implementations, one for each technique, are described and then compared using both artificial and real data for sequences of up to 20 tones. These programs can be adapted to other tone languages by adjusting the F0 prediction function. | AUTOMATED TONE TRANSCRIPTION |
d159041087 | A large body of research into semantic textual similarity has focused on constructing state-of-the-art embeddings using sophisticated modelling, careful choice of learning signals and many clever tricks. By contrast, little attention has been devoted to similarity measures between these embeddings, with cosine similarity being used unquestionably in the majority of cases. In this work, we illustrate that for all common word vectors, cosine similarity is essentially equivalent to the Pearson correlation coefficient, which provides some justification for its use. We thoroughly characterise cases where Pearson correlation (and thus cosine similarity) is unfit as a similarity measure. Importantly, we show that Pearson correlation is appropriate for some word vectors but not others. When it is not appropriate, we illustrate how common nonparametric rank correlation coefficients can be used instead to significantly improve performance. We support our analysis with a series of evaluations on word-level and sentence-level semantic textual similarity benchmarks. On the latter, we show that even the simplest averaged word vectors compared by rank correlation easily rival the strongest deep representations compared by cosine similarity. | Correlation Coefficients and Semantic Textual Similarity |
d15859165 | Geolocation prediction is vital to geospatial applications like localised search and local event detection. Predominantly, social media geolocation models are based on full text data, including common words with no geospatial dimension (e.g. today) and noisy strings (tmrw), potentially hampering prediction and leading to slower/more memory-intensive models. In this paper, we focus on finding location indicative words (LIWs) via feature selection, and establishing whether the reduced feature set boosts geolocation accuracy. Our results show that an information gain ratio-based approach surpasses other methods at LIW selection, outperforming state-of-the-art geolocation prediction methods by 10.6% in accuracy and reducing the mean and median of prediction error distance by 45km and 209km, respectively, on a public dataset. We further formulate notions of prediction confidence, and demonstrate that performance is even higher in cases where our model is more confident, striking a trade-off between accuracy and coverage. Finally, the identified LIWs reveal regional language differences, which could be potentially useful for lexicographers. | Geolocation Prediction in Social Media Data by Finding Location Indicative Words |
d222178118 | Existing persona-grounded dialog models often fail to capture simple implications of given persona descriptions, something which humans are able to do seamlessly. For example, state-of-the-art models cannot infer that interest in hiking might imply love for nature or longing for a break. In this paper, we propose to expand available persona sentences using existing commonsense knowledge bases and paraphrasing resources to imbue dialog models with access to an expanded and richer set of persona descriptions. Additionally, we introduce fine-grained grounding on personas by encouraging the model to make a discrete choice among persona sentences while synthesizing a dialog response. Since such a choice is not observed in the data, we model it using a discrete latent random variable and use variational learning to sample from hundreds of persona expansions. Our model outperforms competitive baselines on the PERSONA-CHAT dataset in terms of dialog quality and diversity while achieving persona-consistent and controllable dialog generation. | Like hiking? You probably enjoy nature: Persona-grounded Dialog with Commonsense Expansions |
d203902359 | Among the six challenges of neural machine translation (NMT) coined by Koehn and Knowles (2017), the rare-word problem is considered the most severe, especially in translation of low-resource languages. In this paper, we propose three solutions to address rare words in neural machine translation systems. First, we enhance the source context to predict target words by connecting the source embeddings directly to the output of the attention component in NMT. Second, we propose an algorithm to learn the morphology of unknown English words in a supervised way in order to minimize the adverse effect of the rare-word problem. Finally, we exploit synonymous relations from WordNet to overcome the out-of-vocabulary (OOV) problem of NMT. We evaluate our approaches on two low-resource language pairs: English-Vietnamese and Japanese-Vietnamese. In our experiments, we achieve significant improvements of up to roughly +1.0 BLEU points on both language pairs. | Overcoming the Rare Word Problem for Low-Resource Language Pairs in Neural Machine Translation
d9036227 | In this paper, we describe SemEval-2013 Task 4: the definition, the data, the evaluation and the results. The task is to capture some of the meaning of English noun compounds via paraphrasing. Given a two-word noun compound, the participating system is asked to produce an explicitly ranked list of its free-form paraphrases. The list is automatically compared and evaluated against a similarly ranked list of paraphrases proposed by human annotators, recruited and managed through Amazon's Mechanical Turk. The comparison of raw paraphrases is sensitive to syntactic and morphological variation. The "gold" ranking is based on the relative popularity of paraphrases among annotators. To make the ranking more reliable, highly similar paraphrases are grouped, so as to downplay superficial differences in syntax and morphology. Three systems participated in the task. They all beat a simple baseline on one of the two evaluation measures, but not on both measures. This shows that the task is difficult. | SemEval-2013 Task 4: Free Paraphrases of Noun Compounds |
d14519034 | There has been relatively little attention to incorporating linguistic priors into neural machine translation. Much of the previous work was further constrained to considering linguistic priors on the source side. In this paper, we propose a hybrid model, called NMT+RG, that learns to parse and translate by combining the recurrent neural network grammar with attention-based neural machine translation. Our approach encourages the neural machine translation model to incorporate linguistic priors during training, and lets it translate on its own afterward. Extensive experiments with four language pairs show the effectiveness of the proposed NMT+RG. | Learning to Parse and Translate Improves Neural Machine Translation
d203610250 | While recent work in abstractive summarization has resulted in higher scores in automatic metrics, there is little understanding on how these systems combine information taken from multiple document sentences. In this paper, we analyze the outputs of five state-of-the-art abstractive summarizers, focusing on summary sentences that are formed by sentence fusion. We ask assessors to judge the grammaticality, faithfulness, and method of fusion for summary sentences. Our analysis reveals that system sentences are mostly grammatical, but often fail to remain faithful to the original article. | Analyzing Sentence Fusion in Abstractive Summarization |
d259376900 | In this manuscript, we describe the participation of the UMUTeam in SemEval-2023 Task 5, namely, Clickbait Spoiling, a shared task on identifying the spoiler type (i.e., a phrase or a passage) and generating short texts that satisfy the curiosity induced by a clickbait post, i.e. generating spoilers for the clickbait post. Our participation in Task 1 is based on fine-tuning pre-trained models, which consists of taking a pre-trained model and tuning it to fit the spoiler classification task. Our system obtained excellent results in Task 1: we outperformed all proposed baselines, placing within the Top 10 for most measures. Moreover, we reached the Top 3 in F1 score in the passage spoiler ranking. | Chick Adams at SemEval-2023 Task 5: Using RoBERTa and DeBERTa to extract post and document-based features for Clickbait Spoiling
d259376915 | In this paper, we describe our approach to Task 4 in SemEval 2023. Our pipeline addresses the problem of multi-label text classification of human values in English-written arguments. We propose a label-aware system where we reframe the multi-label task into a binary task resembling an NLI task. We propose to include the semantic description of the human values by comparing each description to each argument and asking whether there is entailment or not. | Søren Kierkegaard at SemEval-2023 Task 4: Label-aware text classification using Natural Language Inference
d5570801 | This paper describes our submission for the *SEM shared task of Semantic Textual Similarity. We estimate the semantic similarity between two sentences using regression models with features: 1) n-gram hit rates (lexical matches) between sentences, 2) lexical semantic similarity between non-matching words, 3) string similarity metrics, 4) affective content similarity and 5) sentence length. Domain adaptation is applied in the form of independent models and a model selection strategy achieving a mean correlation of 0.47. | DeepPurple: Lexical, String and Affective Feature Fusion for Sentence-Level Semantic Similarity Estimation |
d212633786 | Text corpora annotated with language-related properties are an important resource for the development of Language Technology. The current work contributes a new resource for Chinese Language Technology and for Chinese-English translation, in the form of a set of TED talks (some originally given in English, some in Chinese) that have been annotated with discourse relations in the style of the Penn Discourse TreeBank, adapted to properties of Chinese text that are not present in English. The resource is currently unique in annotating discourse-level properties of planned spoken monologues rather than of written text. An inter-annotator agreement study demonstrates that the annotation scheme is able to achieve highly reliable results. | Shallow Discourse Annotation for Chinese TED Talks |
d7717275 | We revisit Levin's theory about the correspondence of verb meaning and syntax and infer semantic classes from a large syntactic classification of more than 600 German verbs taking clausal and non-finite arguments. Grasping the meaning components of Levin-classes is known to be hard. We address this challenge by setting up a multi-perspective semantic characterization of the inferred classes. To this end, we link the inferred classes and their English translation to independently constructed semantic classes in three different lexicons - the German wordnet GermaNet, VerbNet and FrameNet - and perform a detailed analysis and evaluation of the resulting German-English classification (available at www.ukp.tu-darmstadt.de/modality-verbclasses/). | Verbs Taking Clausal and Non-Finite Arguments as Signals of Modality - Revisiting the Issue of Meaning Grounded in Syntax
d218487733 | Recent advances in NLP demonstrate the effectiveness of training large-scale language models and transferring them to downstream tasks. Can fine-tuning these models on tasks other than language modeling further improve performance? In this paper, we conduct an extensive study of the transferability between 33 NLP tasks across three broad classes of problems (text classification, question answering, and sequence labeling). Our results show that transfer learning is more beneficial than previously thought, especially when target task data is scarce, and can improve performance even when the source task is small or differs substantially from the target task (e.g., part-of-speech tagging transfers well to the DROP QA dataset). We also develop task embeddings that can be used to predict the most transferable source tasks for a given target task, and we validate their effectiveness in experiments controlled for source and target data size. Overall, our experiments reveal that factors such as source data size, task and domain similarity, and task complexity all play a role in determining transferability. | Exploring and Predicting Transferability across NLP Tasks
d26227339 | In this article, some first elements of a computational modelling of the grammar of the Martiniquese French Creole dialect are presented. The sources of inspiration for the modelling are the functional description given by Damoiseau (1984) and Pinalie & Bernabé's (1999) grammar manual. Building on earlier work in text generation (Vaillant, 1997), a unification grammar formalism, namely Tree Adjoining Grammars (TAG), together with a modelling of lexical functional categories based on syntactic and semantic properties, is used to implement a grammar of Martiniquese Creole within a prototype text generation system. One of the main applications of the system could be its use as a tool supporting the task of learning Creole as a second language. | Une grammaire formelle du créole martiniquais pour la génération automatique
d218974194 | The knowledge acquisition bottleneck problem dramatically hampers the creation of sense-annotated data for Word Sense Disambiguation (WSD). Sense-annotated data are scarce for English and almost absent for other languages. This limits the range of action of deep-learning approaches, which today are at the base of any NLP task and are hungry for data. We mitigate this issue and encourage further research in multilingual WSD by releasing to the NLP community five large datasets annotated with word senses in five different languages, namely, English, French, Italian, German and Spanish, and 5 distinct datasets in English, each for a different semantic domain. We show that supervised WSD models trained on our data attain higher performance than when trained on other automatically created corpora. We release all our data, containing more than 15 million annotated instances in 5 different languages, at http://trainomatic.org/onesec. | Sense-Annotated Corpora for Word Sense Disambiguation in Multiple Languages and Domains
d52100878 | We present a neural framework for opinion summarization from online product reviews which is knowledge-lean and only requires light supervision (e.g., in the form of product domain labels and user-provided ratings). Our method combines two weakly supervised components to identify salient opinions and form extractive summaries from multiple reviews: an aspect extractor trained under a multi-task objective, and a sentiment predictor based on multiple instance learning. We introduce an opinion summarization dataset that includes a training set of product reviews from six diverse domains and human-annotated development and test sets with gold standard aspect annotations, salience labels, and opinion summaries. Automatic evaluation shows significant improvements over baselines, and a large-scale study indicates that our opinion summaries are preferred by human judges according to multiple criteria. | Summarizing Opinions: Aspect Extraction Meets Sentiment Prediction and They Are Both Weakly Supervised
d182952883 | Word embeddings typically represent different meanings of a word in a single conflated vector. Empirical analysis of embeddings of ambiguous words is currently limited by the small size of manually annotated resources and by the fact that word senses are treated as unrelated individual concepts. We present a large dataset based on manual Wikipedia annotations and word senses, where word senses from different words are related by semantic classes. This is the basis for novel diagnostic tests for an embedding's content: we probe word embeddings for semantic classes and analyze the embedding space by classifying embeddings into semantic classes. Our main findings are: (i) Information about a sense is generally represented well in a single-vector embedding -if the sense is frequent. (ii) A classifier can accurately predict whether a word is single-sense or multi-sense, based only on its embedding. (iii) Although rare senses are not well represented in single-vector embeddings, this does not have negative impact on an NLP application whose performance depends on frequent senses. | Probing for Semantic Classes: Diagnosing the Meaning Content of Word Embeddings |
d222090987 | The Long Short-Term Memory recurrent neural network (LSTM) is widely used and known to capture informative long-term syntactic dependencies. However, how such information is reflected in its internal vectors for natural text has not yet been sufficiently investigated. We analyze them by learning a language model where syntactic structures are implicitly given. We empirically show that the context update vectors, i.e. outputs of internal gates, are approximately quantized to binary or ternary values to help the language model count the depth of nesting accurately, as Suzgun et al. (2019) recently showed for synthetic Dyck languages. For some dimensions in the context vector, we show that their activations are highly correlated with the depth of phrase structures, such as VP and NP. Moreover, with an L1 regularization, we also found that whether a word is inside a phrase structure or not can be accurately predicted from a small number of components of the context vector. Even for the case of learning from raw text, context vectors are still shown to correlate well with the phrase structures. Finally, we show that natural clusters of the functional words and the parts of speech that trigger phrases are represented in a small but principal subspace of the context-update vector of LSTM. | How LSTM Encodes Syntax: Exploring Context Vectors and Semi-Quantization on Natural Text
d76660251 | Word meaning changes over time, depending on linguistic and extra-linguistic factors. Associating a word's correct meaning in its historical context is a critical challenge in diachronic research, and is relevant to a range of NLP tasks, including information retrieval and semantic search in historical texts. Bayesian models for semantic change have emerged as a powerful tool to address this challenge, providing explicit and interpretable representations of semantic change phenomena. However, while corpora typically come with rich metadata, existing models are limited by their inability to exploit contextual information (such as text genre) beyond the document timestamp. This is particularly critical in the case of ancient languages, where lack of data and long diachronic span make it harder to draw a clear distinction between polysemy and semantic change, and current systems perform poorly on these languages. We develop GASC, a dynamic semantic change model that leverages categorical metadata about the texts' genre information to boost inference and uncover the evolution of meanings in Ancient Greek corpora. In a new evaluation framework, we show that our model achieves improved predictive performance compared to the state of the art. | GASC: Genre-Aware Semantic Change for Ancient Greek
d52124023 | Scripts define knowledge about how everyday scenarios (such as going to a restaurant) are expected to unfold. One of the challenges to learning scripts is the hierarchical nature of the knowledge. For example, a suspect arrested might plead innocent or guilty, and a very different track of events is then expected to happen. To capture this type of information, we propose an autoencoder model with a latent space defined by a hierarchy of categorical variables. We utilize a recently proposed vector quantization based approach, which allows continuous embeddings to be associated with each latent variable value. This permits the decoder to softly decide what portions of the latent hierarchy to condition on by attending over the value embeddings for a given setting. Our model effectively encodes and generates scripts, outperforming a recent language modeling-based method on several standard tasks and achieving substantially lower perplexity scores than that method. | Hierarchical Quantized Representations for Script Generation
d235294145 | Language generation models' democratization benefits many domains, from answering health-related questions to enhancing education by providing AI-driven tutoring services. However, language generation models' democratization also makes it easier to generate human-like text at scale for nefarious activities, from spreading misinformation to targeting specific groups with hate speech. Thus, it is essential to understand how people interact with bots and develop methods to detect bot-generated text. This paper shows that bot-generated text detection methods are more robust across datasets and models if we use information about how people respond to it rather than using the bot's text directly. We also analyze linguistic alignment, providing insight into differences between human-human and human-bot conversations. | Detecting Bot-Generated Text by Characterizing Linguistic Accommodation in Human-Bot Interactions
d37501336 | The Practical Lexical Function (PLF) model is a model of computational distributional semantics that attempts to strike a balance between expressivity and learnability in predicting phrase meaning and shows competitive results. We investigate how well the PLF carries over to free word order languages, given that it builds on observations of predicate-argument combinations that are harder to recover in free word order languages. We evaluate variants of the PLF for Croatian, using a new lexical substitution dataset. We find that the PLF works about as well for Croatian as for English, but demonstrate that its strength lies in modeling verbs, and that the free word order affects the less robust PLF variant. | Does Free Word Order Hurt? Assessing the Practical Lexical Function Model for Croatian |
d618047 | This paper details the coreference resolution system submitted by Stanford at the CoNLL-2011 shared task. Our system is a collection of deterministic coreference resolution models that incorporate lexical, syntactic, semantic, and discourse information. All these models use global document-level information by sharing mention attributes, such as gender and number, across mentions in the same cluster. We participated in both the open and closed tracks and submitted results using both predicted and gold mentions. Our system was ranked first in both tracks, with a score of 57.8 in the closed track and 58.3 in the open track. | Stanford's Multi-Pass Sieve Coreference Resolution System at the CoNLL-2011 Shared Task |
d8377315 | We propose an abstraction-based multidocument summarization framework that can construct new sentences by exploring more fine-grained syntactic units than sentences, namely, noun/verb phrases. Different from existing abstraction-based approaches, our method first constructs a pool of concepts and facts represented by phrases from the input documents. Then new sentences are generated by selecting and merging informative phrases to maximize the salience of phrases and meanwhile satisfy the sentence construction constraints. We employ integer linear optimization for conducting phrase selection and merging simultaneously in order to achieve the global optimal solution for a summary. Experimental results on the benchmark data set TAC 2011 show that our framework outperforms the state-of-the-art models under automated pyramid evaluation metric, and achieves reasonably well results on manual linguistic quality evaluation. | Abstractive Multi-Document Summarization via Phrase Selection and Merging
d236459862 | Propaganda can be defined as a form of communication that aims to influence the opinions or the actions of people towards a specific goal; this is achieved by means of well-defined rhetorical and psychological devices. Propaganda, in the form we know it today, can be dated back to the beginning of the 17th century. However, it is with the advent of the Internet and social media that it has started to spread on a much larger scale than before, thus becoming a major societal and political issue. Nowadays, a large fraction of propaganda in social media is multimodal, mixing textual with visual content. With this in mind, here we propose a new multi-label multimodal task: detecting the type of propaganda techniques used in memes. We further create and release a new corpus of 950 memes, carefully annotated with 22 propaganda techniques, which can appear in the text, in the image, or in both. Our analysis of the corpus shows that understanding both modalities together is essential for detecting these techniques. This is further confirmed in our experiments with several state-of-the-art multimodal models. | Detecting Propaganda Techniques in Memes
d3957741 | We describe SOCKEYE, an open-source sequence-to-sequence toolkit for Neural Machine Translation (NMT). SOCKEYE is a production-ready framework for training and applying models as well as an experimental platform for researchers. Written in Python and built on MXNET, the toolkit offers scalable training and inference for the three most prominent encoder-decoder architectures: attentional recurrent neural networks, self-attentional transformers, and fully convolutional networks. SOCKEYE also supports a wide range of optimizers, normalization and regularization techniques, and inference improvements from current NMT literature. Users can easily run standard training recipes, explore different model settings, and incorporate new ideas. The SOCKEYE toolkit is free software released under the Apache 2.0 license. | The SOCKEYE Neural Machine Translation Toolkit at AMTA 2018
d199577761 | We investigate neg(ation)-raising inferences, wherein negation on a predicate can be interpreted as though in that predicate's subordinate clause. To do this, we collect a large-scale dataset of neg-raising judgments for effectively all English clause-embedding verbs and develop a model to jointly induce the semantic types of verbs and their subordinate clauses and the relationship of these types to neg-raising inferences. We find that some neg-raising inferences are attributable to properties of particular predicates, while others are attributable to subordinate clause structure. | The lexical and grammatical sources of neg-raising inferences
d189762556 | The Transformer is a sequence model that forgoes traditional recurrent architectures in favor of a fully attention-based approach. Besides improving performance, an advantage of using attention is that it can also help to interpret a model by showing how the model assigns weight to different input elements. However, the multi-layer, multi-head attention mechanism in the Transformer model can be difficult to decipher. To make the model more accessible, we introduce an open-source tool that visualizes attention at multiple scales, each of which provides a unique perspective on the attention mechanism. We demonstrate the tool on BERT and OpenAI GPT-2 and present three example use cases: detecting model bias, locating relevant attention heads, and linking neurons to model behavior. | A Multiscale Visualization of Attention in the Transformer Model |
d7562971 | Online debate forums present a valuable opportunity for the understanding and modeling of dialogue. To understand these debates, a key challenge is inferring the stances of the participants, all of which are interrelated and dependent. While collectively modeling users' stances has been shown to be effective (Walker et al., 2012c; Hasan and Ng, 2013), there are many modeling decisions whose ramifications are not well understood. To investigate these choices and their effects, we introduce a scalable unified probabilistic modeling framework for stance classification models that 1) are collective, 2) reason about disagreement, and 3) can model stance at either the author level or at the post level. We comprehensively evaluate the possible modeling choices on eight topics across two online debate corpora, finding accuracy improvements of up to 11.5 percentage points over a local classifier. Our results highlight the importance of making the correct modeling choices for online dialogues, and having a unified probabilistic modeling framework that makes this possible. | Joint Models of Disagreement and Stance in Online Debate
d85528601 | AMR-to-text generation is a problem recently introduced to the NLP community, in which the goal is to generate sentences from Abstract Meaning Representation (AMR) graphs. Sequence-to-sequence models can be used to this end by converting the AMR graphs to strings. Approaching the problem while working directly with graphs requires the use of graph-to-sequence models that encode the AMR graph into a vector representation. Such encoding has been shown to be beneficial in the past, and unlike sequential encoding, it allows us to explicitly capture reentrant structures in the AMR graphs. We investigate the extent to which reentrancies (nodes with multiple parents) have an impact on AMR-to-text generation by comparing graph encoders to tree encoders, where reentrancies are not preserved. We show that improvements in the treatment of reentrancies and long-range dependencies contribute to higher overall scores for graph encoders. Our best model achieves 24.40 BLEU on LDC2015E86, outperforming the state of the art by 1.1 points and 24.54 BLEU on LDC2017T10, outperforming the state of the art by 1.24 points. | Structural Neural Encoders for AMR-to-text Generation |
d1356246 | We present an LFG-DOP parser which uses fragments from LFG-annotated sentences to parse new sentences. Experiments with the Verbmobil and Homecentre corpora show that (1) Viterbi n-best search performs about 100 times faster than Monte Carlo search while both achieve the same accuracy; (2) the DOP hypothesis, which states that parse accuracy increases with increasing fragment size, is confirmed for LFG-DOP; (3) LFG-DOP's relative frequency estimator performs worse than a discounted frequency estimator; and (4) LFG-DOP significantly outperforms Tree-DOP if evaluated on tree structures only. | An Improved Parser for Data-Oriented Lexical-Functional Analysis
d1145506 | The ability to measure the quality of word order in translations is an important goal for research in machine translation. Current machine translation metrics do not adequately measure the reordering performance of translation systems. We present a novel metric, the LRscore, which directly measures reordering success. The reordering component is balanced by a lexical metric. Capturing the two most important elements of translation success in a simple combined metric with only one parameter results in an intuitive, shallow, language independent metric. | LRscore for Evaluating Lexical and Reordering Quality in MT |
d52155263 | Recent works in neural machine translation have begun to explore document translation. However, translating online multi-speaker conversations is still an open problem. In this work, we propose the task of translating Bilingual Multi-Speaker Conversations, and explore neural architectures which exploit both source and target-side conversation histories for this task. To initiate an evaluation for this task, we introduce datasets extracted from Europarl v7 and OpenSubtitles2016. Our experiments on four language-pairs confirm the significance of leveraging conversation history, both in terms of BLEU and manual evaluation. | Contextual Neural Model for Translating Bilingual Multi-Speaker Conversations
d203837062 | While translating between East Asian languages, many works have discovered clear advantages of using characters as the translation unit. Unfortunately, traditional recurrent neural machine translation systems hinder the practical usage of those character-based systems due to their architectural limitations. They are ill-suited to handling extremely long sequences and are highly restricted in parallelizing the computations. In this paper, we demonstrate that the new transformer architecture can perform character-based translation better than the recurrent one. We conduct experiments on a low-resource language pair: Japanese-Vietnamese. Our models considerably outperform the state-of-the-art systems which employ word-based recurrent architectures. | How Transformer Revitalizes Character-based Neural Machine Translation: An Investigation on Japanese-Vietnamese Translation Systems
d218901061 | Understanding predictions made by deep neural networks is notoriously difficult, but also crucial to their dissemination. Like all machine learning-based methods, they are only as good as their training data, and can also capture unwanted biases. While there are tools that can help understand whether such biases exist, they do not distinguish between correlation and causation, and might be ill-suited for text-based models and for reasoning about high-level language concepts. A key problem in estimating the causal effect of a concept of interest on a given model is that this estimation requires the generation of counterfactual examples, which is challenging with existing generation technology. To bridge that gap, we propose CausaLM, a framework for producing causal model explanations using counterfactual language representation models. Our approach is based on fine-tuning deep contextualized embedding models with auxiliary adversarial tasks derived from the causal graph of the problem. Concretely, we show that by carefully choosing auxiliary adversarial pre-training tasks, language representation models such as BERT can effectively learn a counterfactual representation for a given concept of interest, and be used to estimate its true causal effect on model performance. A byproduct of our method is a language representation model that is unaffected by the tested concept, which can be useful in mitigating unwanted bias ingrained in the data. | CausaLM: Causal Model Explanation Through Counterfactual Language Models
d226964709 | This paper describes our submission to the WMT20 sentence filtering task. We combine scores from (1) a custom LASER built for each source language, (2) a classifier built to distinguish positive and negative pairs by semantic alignment, and (3) the original scores included in the task devkit. For the mBART finetuning setup, provided by the organizers, our method shows 7% and 5% relative improvement over baseline, in sacreBLEU score on the test set for Pashto and Khmer respectively. | Score Combination for Improved Parallel Corpus Filtering for Low Resource Conditions |
d52302229 | We present three enhancements to existing encoder-decoder models for open-domain conversational agents, aimed at effectively modeling coherence and promoting output diversity: (1) We introduce a measure of coherence as the GloVe embedding similarity between the dialogue context and the generated response, (2) we filter our training corpora based on the measure of coherence to obtain topically coherent and lexically diverse context-response pairs, (3) we then train a response generator using a conditional variational autoencoder model that incorporates the measure of coherence as a latent variable and uses a context gate to guarantee topical consistency with the context and promote lexical diversity. Experiments on the OpenSubtitles corpus show a substantial improvement over competitive neural models in terms of BLEU score as well as metrics of coherence and diversity. | Better Conversations by Modeling, Filtering, and Optimizing for Coherence and Diversity |
d15734769 | Vernacular regions such as central Ohio are popularly used in everyday language, but their vague and indeterministic boundaries affect the clarity of communicating them over the geographic space. This paper introduces a context-based natural language processing approach to retrieve geographic entities. Geographic entities extracted from news articles were used as location-based behavioral samples to map out the vague region of central Ohio. In particular, part-of-speech tagging and parse tree generation were employed to filter out candidate entities from English sentences. Propositional logic of context (PLC) was introduced and adapted to build the contextual model for deciding the membership of named entities. Results were automatically generated and visualized in GIS using both symbol and density mapping. Final maps were consistent with our intuition and common-sense knowledge of the vague region. | Context-based Natural Language Processing for GIS-based Vague Region Visualization
d233296664 | Although research on emotion classification has progressed significantly in high-resource languages, it is still in its infancy for resource-constrained languages like Bengali. The unavailability of necessary language processing tools and the scarcity of benchmark corpora make the emotion classification task in Bengali more challenging and complicated. This work proposes a transformer-based technique to classify Bengali text into one of six basic emotions: anger, fear, disgust, sadness, joy, and surprise. A Bengali emotion corpus consisting of 6243 texts was developed for the classification task. Experiments were carried out using various machine learning (LR, RF, MNB, SVM), deep neural network (CNN, BiLSTM, CNN+BiLSTM) and transformer (Bangla-BERT, m-BERT, XLM-R) based approaches. Experimental outcomes indicate that XLM-R outperforms all other techniques, achieving the highest weighted F1-score of 69.73% on the test data. The dataset is publicly available at https://github.com/omar-sharif03/NAACL-SRW-2021. | Emotion Classification in a Resource Constrained Language Using Transformer-based Approach
d195766996 | Named entity recognition (NER) is one of the best studied tasks in natural language processing. However, most approaches are not capable of handling nested structures which are common in many applications. In this paper we introduce a novel neural network architecture that first merges tokens and/or entities into entities forming nested structures, and then labels each of them independently. Unlike previous work, our merge and label approach predicts real-valued instead of discrete segmentation structures, which allow it to combine word and nested entity embeddings while maintaining differentiability. We evaluate our approach using the ACE 2005 Corpus, where it achieves state-of-the-art F1 of 74.6, further improved with contextual embeddings (BERT) to 82.4, an overall improvement of close to 8 F1 points over previous approaches trained on the same data. Additionally we compare it against BiLSTM-CRFs, the dominant approach for flat NER structures, demonstrating that its ability to predict nested structures does not impact performance in simpler cases. 1 | Merge and Label: A novel neural network architecture for nested NER |
d52156056 | Measuring the domain relevance of data and identifying or selecting well-fit domain data for machine translation (MT) is a well-studied topic, but denoising is not. Denoising concerns a different type of data quality and tries to reduce the negative impact of data noise on MT training, in particular neural MT (NMT) training. This paper generalizes methods for measuring and selecting data for domain MT and applies them to denoising NMT training. The proposed approach uses trusted data and a denoising curriculum realized by online data selection. Intrinsic and extrinsic evaluations of the approach show its significant effectiveness for NMT training on data with severe noise. | Denoising Neural Machine Translation Training with Trusted Data and Online Data Selection
d36815393 | This study describes the structure, implementation and potential of a simple computer program that understands and answers questions in a humanoid manner. An emphasis has been placed on the creation of an extendible memory structure, one capable of supporting conversation in normal, unrestricted English on a variety of topics. An attempt has also been made to find procedures that can easily and accurately determine the meaning of input text. A parser using a combination of syntax and semantics has been developed for understanding YES-NO questions, in particular, DO-type (DO, DID, DOES, etc.) questions. A third and major emphasis has been on the development of procedures to allow the program to converse easily and "naturally" with a human. This general goal has been met by developing procedures that generate answers to DO-questions in a manner similar to the way a person might answer them. | American Journal of Computational Linguistics
d202539806 | Generating syntactically and semantically valid and relevant questions from paragraphs is useful in many applications. Manual generation is a labour-intensive task, as it requires the reading, parsing and understanding of long passages of text. A number of question generation models based on sequence-to-sequence techniques have recently been proposed. Most of them generate questions from sentences only, and none of them is publicly available as an easy-to-use service. In this paper, we demonstrate ParaQG, a Web-based system for generating questions from sentences and paragraphs. ParaQG incorporates a number of novel functionalities to make the question generation process user-friendly. It provides an interactive interface for a user to select answers, with visual insights on the generation of questions. It also employs various faceted views to group similar questions as well as filtering techniques to eliminate unanswerable questions. | ParaQG: A System for Generating Questions and Answers from Paragraphs
d3834355 | Reading a document and extracting an answer to a question about its content has attracted substantial attention recently. While most work has focused on the interaction between the question and the document, in this work we evaluate the importance of context when the question and document are processed independently. We take a standard neural architecture for this task, and show that by providing rich contextualized word representations from a large pre-trained language model as well as allowing the model to choose between context-dependent and context-independent word representations, we can obtain dramatic improvements and reach performance comparable to state-of-the-art on the competitive SQuAD dataset. | Contextualized Word Representations for Reading Comprehension
d259370844 | As e-commerce platforms develop different business lines, a special but challenging product categorization scenario emerges, where there are multiple domain-specific category taxonomies and each of them evolves dynamically over time. In order to unify the categorization process and ensure efficiency, we propose a two-stage taxonomy-agnostic framework that relies solely on calculating the semantic relatedness between product titles and category names in the vector space. To further enhance domain transferability and better exploit cross-domain data, we design two plugin modules: a heuristic mapping scorer and a pretrained contrastive ranking module with the help of "meta concepts", which represent keyword knowledge shared across domains. Comprehensive offline experiments show that our method outperforms strong baselines on three dynamic multi-domain product categorization (DMPC) tasks, and online experiments reconfirm its efficacy with a 5% increase on seasonal purchase revenue. Related datasets are released 1 . | Transferable and Efficient: Unifying Dynamic Multi-Domain Product Categorization |
d231741306 | The segmentation of emails into functional zones (also dubbed email zoning) is a relevant preprocessing step for most NLP tasks that deal with emails. However, despite the multilingual character of emails and their applications, previous literature regarding email zoning corpora and systems was developed essentially for English. In this paper, we analyse the existing email zoning corpora and propose a new multilingual benchmark composed of 625 emails in Portuguese, Spanish and French. Moreover, we introduce OKAPI, the first multilingual email segmentation model based on a language agnostic sentence encoder. Besides generalizing well for unseen languages, our model is competitive with current English benchmarks, and reached new state-of-the-art performances for domain adaptation tasks in English. | Multilingual Email Zoning
d10743051 | We present a joint model for biomedical event extraction and apply it to four tracks of the BioNLP 2011 Shared Task. Our model decomposes into three sub-models that concern (a) event triggers and outgoing arguments, (b) event triggers and incoming arguments and (c) protein-protein bindings. For efficient decoding we employ dual decomposition. Our results are very competitive: With minimal adaptation of our model we come in second for two of the tasks-right behind a version of the system presented here that includes predictions of the Stanford event extractor as features. We also show that for the Infectious Diseases task using data from the Genia track is a very effective way to improve accuracy. | Robust Biomedical Event Extraction with Dual Decomposition and Minimal Domain Adaptation |
d259370841 | Getting a good understanding of the user intent is vital for e-commerce applications to surface the right product to a given customer query. Query Understanding (QU) systems are essential for this purpose, and many e-commerce providers are working on complex solutions that need to be data efficient and able to capture early emerging market trends. Query Attribute Understanding (QAU) is a sub-component of QU that involves extracting named attributes from user queries and linking them to existing e-commerce entities such as brand, material, color, etc. While extracting named entities from text has been extensively explored in the literature, QAU requires specific attention due to the nature of the queries, which are often short, noisy, ambiguous, and constantly evolving. This paper makes three contributions to QAU. First, we propose a novel end-to-end approach that jointly solves Named Entity Recognition (NER) and Entity Linking (NEL) and enables open-world reasoning for QAU. Second, we introduce a novel method for utilizing product graphs to enhance the representation of query entities. Finally, we present a new dataset constructed from public sources that can be used to evaluate the performance of future QAU systems. | AVEN-GR: Attribute Value Extraction and Normalization using product GRaphs |
d36627602 | Comprehension and generation are the two complementary aspects of natural language processing (NLP). However, much of the research in NLP until recently has focussed on comprehension. Some of the reasons for this almost exclusive emphasis on comprehension are: (1) the belief that comprehension is harder than generation, (2) problems in comprehension could be formulated in the AI paradigm developed for problems in perception, (3) the potential areas of applications seemed to call for comprehension more than generation, e.g., question-answer systems, where the answers can be presented in some fixed format or even in some nonlinguistic fashion (such as tables), etc. Now there is a flurry of activity in generation, and we are definitely going to see a significant part of future NLP research devoted to generation. A key motivation for this interest in generation is the realization that many applications of NLP require that the response produced by a system must be flexible (i.e., not produced by filling in a fixed set of templates) and must often consist of a sequence of sentences (i.e., a text) which must have | GENERATION -A NEW FRONTIER OF NATURAL LANGUAGE PROCESSING?
d226262227 | Pre-trained contextual representations like BERT have achieved great success in natural language processing. However, the sentence embeddings from the pre-trained language models without fine-tuning have been found to poorly capture the semantic meaning of sentences. In this paper, we argue that the semantic information in the BERT embeddings is not fully exploited. We first reveal the theoretical connection between the masked language model pre-training objective and the semantic similarity task, and then analyze the BERT sentence embeddings empirically. We find that BERT always induces a non-smooth anisotropic semantic space of sentences, which harms its performance on semantic similarity. To address this issue, we propose to transform the anisotropic sentence embedding distribution to a smooth and isotropic Gaussian distribution through normalizing flows that are learned with an unsupervised objective. Experimental results show that our proposed BERT-flow method obtains significant performance gains over the state-of-the-art sentence embeddings on a variety of semantic textual similarity tasks. The code is available at https://github.com/bohanli/BERT-flow. | On the Sentence Embeddings from Pre-trained Language Models
d52070059 | There have been several attempts to define a plausible motivation for a chit-chat dialogue agent that can lead to engaging conversations. In this work, we explore a new direction where the agent specifically focuses on discovering information about its interlocutor. We formalize this approach by defining a quantitative metric. We propose an algorithm for the agent to maximize it. We validate the idea with human evaluation where our system outperforms various baselines. We demonstrate that the metric indeed correlates with the human judgments of engagingness. | Aiming to Know You Better Perhaps Makes Me a More Engaging Dialogue Partner |
d196206113 | This work presents our ongoing research on unsupervised pretraining in neural machine translation (NMT). In our method, we initialize the weights of the encoder and decoder with two language models that are trained with monolingual data and then fine-tune the model on parallel data using Elastic Weight Consolidation (EWC) to avoid forgetting the original language modeling tasks. We compare the regularization by EWC with previous work that focuses on regularization by language modeling objectives. The positive result is that using EWC with the decoder achieves BLEU scores similar to the previous work. However, the model converges 2-3 times faster and does not require the original unlabeled training data during the fine-tuning stage. In contrast, the regularization using EWC is less effective if the original and new tasks are not closely related. We show that initializing the bidirectional NMT encoder with a left-to-right language model and forcing the model to remember the original left-to-right language modeling task limits the learning capacity of the encoder for the whole bidirectional context. | Unsupervised Pretraining for Neural Machine Translation Using Elastic Weight Consolidation
d15842085 | We introduce a simple, non-linear mention-ranking model for coreference resolution that attempts to learn distinct feature representations for anaphoricity detection and antecedent ranking, which we encourage by pre-training on a pair of corresponding subtasks. Although we use only simple, unconjoined features, the model is able to learn useful representations, and we report the best overall score on the CoNLL 2012 English test set to date. | Learning Anaphoricity and Antecedent Ranking Features for Coreference Resolution |
d189897891 | In supervised event detection, most of the mislabeling occurs between a small number of confusing type pairs, including trigger-NIL pairs and sibling sub-types of the same coarse type. To address this label confusion problem, this paper proposes cost-sensitive regularization, which can force the training procedure to concentrate more on optimizing confusing type pairs. Specifically, we introduce a cost-weighted term into the training loss, which penalizes more on mislabeling between confusing label pairs. Furthermore, we also propose two estimators which can effectively measure such label confusion based on instance-level or population-level statistics. Experiments on TAC-KBP 2017 datasets demonstrate that the proposed method can significantly improve the performances of different models in both English and Chinese event detection. | Cost-sensitive Regularization for Label Confusion-aware Event Detection
d233365157 | Since the seminal work of Richard Montague in the 1970s, mathematical and logic tools have successfully been used to model several aspects of the meaning of natural language. However, visually impaired people continue to face serious difficulties in getting full access to this important instrument. Our paper aims to present a work in progress whose main goal is to provide blind students and researchers with an adequate method to deal with the different resources that are used in formal semantics. In particular, we intend to adapt the Portuguese Braille system in order to accommodate the most common symbols and formulas used in this kind of approach and to develop pedagogical procedures to facilitate its learnability. By making this formalisation compatible with the Braille coding (either traditional and electronic), we hope to help blind people to learn and use this notation, essential to acquire a better understanding of a great number of semantic properties displayed by natural language. | Adapting the Portuguese Braille System to Formal Semantics |
d11238153 | This paper discusses development and evaluation of a practical, valid and reliable instrument for evaluating the spoken language abilities of second-language (L2) learners of English. First we sketch the theory and history behind elicited imitation (EI) tests and the renewed interest in them. Then we present how we developed a new test based on various language resources, and administered it to a few hundred students of varying levels. The students were also scored using standard evaluation techniques, and the EI results were compared to more traditionally derived scores. We also sketch how we developed a new integrated tool that allows the session recordings of the EI data to be analyzed with a widely used automatic speech recognition (ASR) engine. We discuss the promising results of the ASR engine's processing of these files and how they correlated with human scoring of the same items. We indicate how the integrated tool will be used in the future. Further development plans and prospects for follow-on work round out the discussion. | Elicited Imitation as an Oral Proficiency Measure with ASR Scoring |
d215238677 | Natural language inference (NLI) and semantic textual similarity (STS) are key tasks in natural language understanding (NLU). Although several benchmark datasets for those tasks have been released in English and a few other languages, there are no publicly available NLI or STS datasets in the Korean language. Motivated by this, we construct and release new datasets for Korean NLI and STS, dubbed KorNLI and KorSTS, respectively. Following previous approaches, we machine-translate existing English training sets and manually translate development and test sets into Korean. To accelerate research on Korean NLU, we also establish baselines on KorNLI and KorSTS. Our datasets are publicly available at https://github.com/kakaobrain/KorNLUDatasets. | KorNLI and KorSTS: New Benchmark Datasets for Korean Natural Language Understanding
d253098240 | Cognitive psychologists have documented that humans use cognitive heuristics, or mental shortcuts, to make quick decisions while expending less effort. While performing annotation work on crowdsourcing platforms, we hypothesize that such heuristic use among annotators cascades on to data quality and model robustness. In this work, we study cognitive heuristic use in the context of annotating multiple-choice reading comprehension datasets. We propose tracking annotator heuristic traces, where we tangibly measure low-effort annotation strategies that could indicate usage of various cognitive heuristics. We find evidence that annotators might be using multiple such heuristics, based on correlations with a battery of psychological tests. Importantly, heuristic use among annotators determines data quality along several dimensions: (1) known biased models, such as partial input models, more easily solve examples authored by annotators that rate highly on heuristic use, (2) models trained on annotators scoring highly on heuristic use don't generalize as well, and (3) heuristic-seeking annotators tend to create qualitatively less challenging examples. Our findings suggest that tracking heuristic usage among annotators can potentially help with collecting challenging datasets and diagnosing model biases. | Cascading Biases: Investigating the Effect of Heuristic Annotation Strategies on Data and Models |
d171579599 | Sense Embeddings in Knowledge-Based Word Sense Disambiguation: In this paper, we develop a new way of creating sense vectors for any dictionary, by using an existing word embeddings model and summing the vectors of the terms inside a sense's definition, weighted as a function of their part of speech and their frequency. These vectors are then used to find the senses closest to any other sense, thus creating an automatically generated semantic network of related concepts. This network is then evaluated against the existing, manually built semantic network of WordNet, by comparing its contribution to a knowledge-based word sense disambiguation system based on the Lesk measure. The method can be applied to any language that lacks such a semantic network, as the creation of word vectors is fully unsupervised, and the creation of sense vectors only requires a traditional dictionary. The results show that our automatically generated semantic network greatly improves the baseline WSD system, reaching almost the quality of the manually created WordNet network. KEYWORDS: sense vector representations, word sense disambiguation. | Représentation vectorielle de sens pour la désambiguïsation lexicale à base de connaissances
d203736437 | We investigate the political roles of "Internet trolls" in social media. Political trolls, such as the ones linked to the Russian Internet Research Agency (IRA), have recently gained enormous attention for their ability to sway public opinion and even influence elections. Analysis of the online traces of trolls has shown different behavioral patterns, which target different slices of the population. However, this analysis is manual and labor-intensive, thus making it impractical as a first-response tool for newly-discovered troll farms. In this paper, we show how to automate this analysis by using machine learning in a realistic setting. In particular, we show how to classify trolls according to their political role (left, news feed, right) by using features extracted from social media, i.e., Twitter, in two scenarios: (i) in a traditional supervised learning scenario, where labels for trolls are available, and (ii) in a distant supervision scenario, where labels for trolls are not available, and we rely on more-commonly-available labels for news outlets mentioned by the trolls. Technically, we leverage the community structure and the text of the messages in the online social network of trolls represented as a graph, from which we extract several types of learned representations, i.e., embeddings, for the trolls. Experiments on the "IRA Russian Troll" dataset show that our methodology improves over the state-of-the-art in the first scenario, while providing a compelling case for the second scenario, which has not been explored in the literature thus far. | Predicting the Role of Political Trolls in Social Media
d52932143 | Emotion recognition in conversations is a challenging Artificial Intelligence (AI) task. It has recently gained popularity due to its potential applications in many interesting AI tasks such as empathetic dialogue generation, user behavior understanding, and so on. To the best of our knowledge, there is no multimodal multiparty conversational dataset available that contains more than two speakers in a dialogue. In this work, we propose the Multimodal EmotionLines Dataset (MELD), which we created by enhancing and extending the previously introduced EmotionLines dataset. MELD contains 13,708 utterances from 1433 dialogues of the Friends TV series. MELD is superior to other conversational emotion recognition datasets such as SEMAINE and IEMOCAP, as it consists of multiparty conversations, and the number of utterances in MELD is almost twice that of these two datasets. Every utterance in MELD is associated with an emotion and a sentiment label. Utterances in MELD are multimodal, encompassing audio and visual modalities along with the text. We have also addressed several shortcomings in EmotionLines and proposed a strong multimodal baseline. The baseline results show that both contextual and multimodal information play an important role in emotion recognition in conversations. | MELD: A Multimodal Multi-Party Dataset for Emotion Recognition in Conversations
d252847580 | Proverbs are unique linguistic expressions used by humans in the process of communication. They are frozen expressions and have the capacity to convey deep semantic aspects of a given language. This paper describes ProverbNet, a novel online multilingual database of proverbs and comprehensive metadata equipped with a multipurpose search engine to store, explore, understand, classify and analyse proverbs and their metadata. ProverbNet has immense applications, including machine translation, cognitive studies and learning tools. We have 2320 Sanskrit proverbs and 1136 Marathi proverbs and their metadata in ProverbNet and are adding more proverbs in different languages to the network. | Introduction to ProverbNet: An Online Multilingual Database of Proverbs and Comprehensive Metadata
d57189558 | We extend our previous work on constituency parsing (Kitaev and Klein, 2018) by incorporating pre-training for ten additional languages, and compare the benefits of no pre-training, ELMo (Peters et al., 2018), and BERT (Devlin et al., 2018). Pre-training is effective across all languages evaluated, and BERT outperforms ELMo in large part due to the benefits of increased model capacity. Our parser obtains new state-of-the-art results for 11 languages, including English (95.8 F1) and Chinese (91.8 F1). | Multilingual Constituency Parsing with Self-Attention and Pre-Training
d1470716 | We describe a Wizard-of-Oz experiment setup for the collection of multimodal interaction data for a Music Player application. This setup was developed and used to collect experimental data as part of a project aimed at building a flexible multimodal dialogue system which provides an interface to an MP3 player, combining speech and screen input and output. Besides the usual goal of WOZ data collection, namely to get realistic examples of the behavior and expectations of the users, an equally important goal for us was to observe natural behavior of multiple wizards in order to guide our system development. The wizards' responses were therefore not constrained by a script. One of the challenges we had to address was to allow the wizards to produce varied screen output in real time. Our setup includes a preliminary screen output planning module, which prepares several versions of possible screen output. The wizards were free to speak, and/or to select a screen output. | An Experiment Setup for Collecting Data for Adaptive Output Planning in a Multimodal Dialogue System
d3871844 | Ensemble techniques are powerful approaches that combine several weak learners to build a stronger one. As a meta learning framework, ensemble techniques can easily be applied to many machine learning techniques. In this paper we propose a neural network extended with an ensemble loss function for text classification. The weight of each weak loss function is tuned within the training phase through the gradient propagation optimization method of the neural network. The approach is evaluated on several text classification datasets. We also evaluate its performance in various environments with several degrees of label noise. Experimental results indicate an improvement of the results and strong resilience against label noise in comparison with other methods. | On Extending Neural Networks with Loss Ensembles for Text Classification |
d207916220 | Mental health poses a significant challenge for an individual's well-being. Text analysis of rich resources, like social media, can contribute to deeper understanding of illnesses and provide means for their early detection. We tackle a challenge of detecting social media users' mental status through deep learning-based models, moving away from traditional approaches to the task. In a binary classification task on predicting if a user suffers from one of nine different disorders, a hierarchical attention network outperforms previously set benchmarks for four of the disorders. Furthermore, we explore the limitations of our model and analyze phrases relevant for classification by inspecting the model's word-level attention weights. | Adapting Deep Learning Methods for Mental Health Prediction on Social Media
d231632895 | In this work, we explore joint energy-based model (EBM) training during the finetuning of pretrained text encoders (e.g., RoBERTa) for natural language understanding (NLU) tasks. Our experiments show that EBM training can help the model reach a better calibration that is competitive to strong baselines, with little or no loss in accuracy. We discuss three variants of energy functions (namely scalar, hidden, and sharp-hidden) that can be defined on top of a text encoder, and compare them in experiments. Due to the discreteness of text data, we adopt noise contrastive estimation (NCE) to train the energy-based model. To make NCE training more effective, we train an autoregressive noise model with the masked language model (MLM) objective. | Joint Energy-based Model Training for Better Calibrated Natural Language Understanding Models
d2255078 | Automatic image description systems are commonly trained and evaluated on large image description datasets. Recently, researchers have started to collect such datasets for languages other than English. An unexplored question is how different these datasets are from English and, if there are any differences, what causes them to differ. This paper provides a cross-linguistic comparison of Dutch, English, and German image descriptions. We find that these descriptions are similar in many respects, but the familiarity of crowd workers with the subjects of the images has a noticeable influence on description specificity. | Cross-linguistic differences and similarities in image descriptions
d198962591 | This paper presents a contrastive analysis between reading time and clause boundary categories in the Japanese language. We overlaid reading time data, made with BCCWJ Eye-Track, and clause boundary categories annotation on the Balanced Corpus of Contemporary Written Japanese. Statistical analysis based on the Bayesian linear mixed model shows that the reading time behaviours differ among the clause boundary categories. The result does not support the wrap-up effects of clause-final words. Another result we arrived at is that the predicate-argument relations facilitate the reading speed of native Japanese speakers. | Between Reading Time and Clause Boundaries in Japanese - Wrap-up Effect in a Head-Final Language
d226254579 | Neural sequence models can generate highly fluent sentences, but recent studies have shown that they are also prone to hallucinate additional content not supported by the input. This variety of fluent but wrong output is particularly problematic, as users cannot tell they are being presented incorrect content. To detect these errors, we propose a task to predict whether each token in the output sequence is hallucinated (not contained in the input) and collect new manually annotated evaluation sets for this task. We also introduce a method for learning to detect hallucinations using pretrained language models fine-tuned on synthetic data that includes automatically inserted hallucinations. Experiments on machine translation (MT) and abstractive summarization demonstrate that our proposed approach consistently outperforms strong baselines on all benchmark datasets. We further demonstrate how to use the token-level hallucination labels to define a fine-grained loss over the target sequence in low-resource MT and achieve significant improvements over strong baseline methods. We also apply our method to word-level quality estimation for MT and show its effectiveness in both supervised and unsupervised settings. | Detecting Hallucinated Content in Conditional Neural Sequence Generation
d189762081 | We introduce a novel method of generating synthetic question answering corpora by combining models of question generation and answer extraction, and by filtering the results to ensure roundtrip consistency. By pretraining on the resulting corpora we obtain significant improvements on SQuAD2 (Rajpurkar et al., 2018) and NQ (Kwiatkowski et al., 2019), establishing a new state-of-the-art on the latter. Our synthetic data generation models, for both question generation and answer extraction, can be fully reproduced by finetuning a publicly available BERT model (Devlin et al., 2018) on the extractive subsets of SQuAD2 and NQ. We also describe a more powerful variant that does full sequence-to-sequence pretraining for question generation, obtaining exact match and F1 at less than 0.1% and 0.4% from human performance on SQuAD2. | Synthetic QA Corpora Generation with Roundtrip Consistency
d4957826 | Semantic feature norms, originally utilized in the field of psycholinguistics as a tool for studying human semantic representation and computation, have recently attracted the attention of some NLP/IR researchers who wish to improve their task performances. However, currently available semantic feature norms are, by nature, not well-structured, making them difficult to integrate into existing resources of various kinds. In this paper, by examining an actual set of semantic feature norms, we investigate which types of semantic features should be migrated into Linked Data in Linguistics (LDL) and how the migration could be done. | Migrating Psycholinguistic Semantic Feature Norms into Linked Data in Linguistics |