_id | text | title |
|---|---|---|
d220403427 | ||
d108289542 | We propose a formal model for translating unranked syntactic trees, such as dependency trees, into semantic graphs. These tree-to-graph transducers can serve as a formal basis of transition systems for semantic parsing which recently have been shown to perform very well, yet hitherto lack formalization. Our model features "extended" rules and an arc-factored normal form, comes with an efficient translation algorithm, and can be equipped with weights in a straightforward manner. | Bottom-Up Unranked Tree-to-Graph Transducers for Translation into Semantic Graphs |
d16462827 | We present a hybrid machine learning approach for coreference resolution. In our method, we use CRFs as the basic training model and use active learning to generate combined features so that existing features are used more effectively; finally, we propose a novel clustering algorithm that uses both linguistic knowledge and statistical knowledge. We built a coreference resolution system based on the proposed method and evaluate its performance from three aspects: the contribution of active learning; the effect of different clustering algorithms; and the resolution performance on different kinds of NPs. Experimental results show that an additional performance gain can be obtained by using active learning; that the clustering algorithm has a great effect on coreference resolution performance and our clustering algorithm is very effective; and that the key to coreference resolution is to improve the performance of normal noun resolution, especially pronoun resolution. | An Effective Hybrid Machine Learning Approach for Coreference Resolution |
d222206204 | Can pretrained language models (PLMs) generate derivationally complex words? We present the first study investigating this question, taking BERT as the example PLM. We examine BERT's derivational capabilities in different settings, ranging from using the unmodified pretrained model to full finetuning. Our best model, DagoBERT (Derivationally and generatively optimized BERT), clearly outperforms the previous state of the art in derivation generation (DG). Furthermore, our experiments show that the input segmentation crucially impacts BERT's derivational knowledge, suggesting that the performance of PLMs could be further improved if a morphologically informed vocabulary of units were used. | DagoBERT: Generating Derivational Morphology with a Pretrained Language Model |
d1770944 | A good dictionary contains not only many entries and a lot of information concerning each one of them, but also adequate means to reveal the stored information. Information access depends crucially on the quality of the index. We will present here some ideas of how a dictionary could be enhanced to support a speaker/writer in finding the word s/he is looking for. To this end we suggest adding to an existing electronic resource an index based on the notion of association. We will also present preliminary work on how a subset of such associations, for example topical associations, can be acquired by filtering a network of lexical co-occurrences extracted from a corpus. | Enhancing electronic dictionaries with an index based on associations |
d218977376 | ||
d248780096 | These working notes summarise the participation of the UMUTeam in the TamilNLP (ACL 2022) shared task concerning emotion analysis in Tamil. We participated in the two multi-class classification challenges proposed, with a neural network that combines linguistic features with different feature sets based on contextual and non-contextual sentence embeddings. Our proposal achieved the first result for the second subtask, with an f1-score of 15.1% discerning among 30 different emotions. However, our results for the first subtask were not recorded in the official leaderboard. Accordingly, we report our results for this subtask on the validation split, reaching a macro f1-score of 32.360%. | UMUTeam@TamilNLP-ACL2022: Emotional Analysis in Tamil |
d864303 | Speech synthesis models are typically built from a corpus of speech that has accurate transcriptions. However, many of the languages of the world do not have a standardized writing system. This paper is an initial attempt at building synthetic voices for such languages. It may seem useless to develop a text-to-speech system when there is no text available, but we will discuss some well-defined use cases where we need these models. We will present our method to build synthetic voices from only speech data. We will present experimental results and oracle studies that show that we can automatically devise an artificial writing system for these languages, and build synthetic voices that are understandable and usable. | Text-To-Speech for Languages without an Orthography (title and abstract also provided in Marathi) |
d199453032 | In this paper we introduce a new natural language processing dataset and benchmark for predicting prosodic prominence from written text. To our knowledge this will be the largest publicly available dataset with prosodic labels. We describe the dataset construction and the resulting benchmark dataset in detail and train a number of different models ranging from feature-based classifiers to neural network systems for the prediction of discretized prosodic prominence. We show that pre-trained contextualized word representations from BERT outperform the other models even with less than 10% of the training data. Finally we discuss the dataset in light of the results and point to future research and plans for further improving both the dataset and methods of predicting prosodic prominence from text. The dataset and the code for the models are publicly available. | Predicting Prosodic Prominence from Text with Pre-trained Contextualized Word Representations |
d2446336 | In this paper, a new conceptual-hierarchy-based semantic similarity measure is presented and evaluated in word sense disambiguation using a well-known algorithm called Maximum Relatedness Disambiguation. In this study, WordNet's conceptual hierarchy is utilized as the data source, but the methods presented are suitable for other resources. | A new semantic similarity measure evaluated in word sense disambiguation |
d250150633 | In the last few years, several attempts have been made at extracting information from the material science research domain. Material science research articles are a rich source of information about various entities related to material science, such as the names of the materials used for experiments, the computational software used along with its parameters, the method used in the experiments, etc. But the distribution of these entities is not uniform across different sections of research articles: most of the sentences in the research articles do not contain any entity. In this work, we first use a sentence-level classifier to identify sentences containing at least one entity mention. Next, we apply the information extraction models only on the filtered sentences, to extract various entities of interest. Our experiments on named entity recognition in material science research articles show that this additional sentence-level classification step helps to improve the F1 score by more than 4%. | Using Sentence-level Classification Helps Entity Extraction from Material Science Literature |
d29120475 | People typically assume that killers are mentally ill or fundamentally different from the rest of humanity. Similarly, people often associate mental health conditions (such as schizophrenia or autism) with violence and otherness: treatable perhaps, but not empathically understandable. We take a dictionary approach to explore word use in a set of autobiographies, comparing the narratives of 2 killers (Adolf Hitler and Elliot Rodger) and 39 non-killers. Although results suggest several dimensions that differentiate these autobiographies, such as sentiment, temporal orientation, and references to death, they appear to reflect subject matter rather than psychology per se. Additionally, the Rodger text shows roughly typical developmental arcs in its use of words relating to friends, family, sex, and affect. From these data, we discuss the challenges of understanding killers and people in general. | A Dictionary-Based Comparison of Autobiographies by People and Murderous Monsters |
d8876593 | This paper describes ODL, a description language for lexical information that is being developed within the context of a national project called MLRS (Maltese Language Resource Server) whose goal is to create a national corpus and computational lexicon for the Maltese language. The main aim of ODL is to make the task of the lexicographer easier by allowing lexical specifications to be set out formally so that actual entries will conform to them. The paper describes some of the background motivation, the ODL language itself, and concludes with a short example of how lexical values expressed in ODL can be mapped to an existing tagset together with some speculations about future work. | ODL: An Object Description Language for Lexical Information |
d2503121 | In an effort to advance the state of the art in continuous speech recognition employing hidden Markov models (HMM), Segmental Neural Nets (SNN) were introduced recently to ameliorate the well-known limitations of HMMs, namely, the conditional-independence limitation and the relative difficulty with which HMMs can handle segmental features. We describe a hybrid SNN/HMM system that combines the speed and performance of our HMM system with the segmental modeling capabilities of SNNs. The integration of the two acoustic modeling techniques is achieved successfully via the N-best rescoring paradigm. The N-best lists are used not only for recognition, but also during training. This discriminative training using N-best is demonstrated to improve performance. When tested on the DARPA Resource Management speaker-independent corpus, the hybrid SNN/HMM system decreases the error by about 20% compared to the state-of-the-art HMM system. | Improving State-of-the-Art Continuous Speech Recognition Systems Using the N-Best Paradigm with Neural Networks |
d14110109 | We introduce TFS, a computer formalism in the class of logic formalisms which integrates a powerful type system. Its basic data structures are typed feature structures. The type system encourages an object-oriented approach to linguistic description by providing a multiple inheritance mechanism and an inference mechanism which allows the specification of relations between levels of linguistic description defined as classes of objects. We illustrate this approach starting from a very simple DCG, and show how to make use of the typing system to enforce general constraints and modularize linguistic descriptions, and how further abstraction leads to an HPSG-like grammar. | Typed Unification Grammars |
d17851936 | Existing sentence regression methods for extractive summarization usually model sentence importance and redundancy in two separate processes. They first evaluate the importance f(s) of each sentence s and then select sentences to generate a summary based on both the importance scores and the redundancy among sentences. In this paper, we propose to model importance and redundancy simultaneously by directly evaluating the relative importance f(s|S) of a sentence s given a set of selected sentences S. Specifically, we present a new framework to conduct regression with respect to the relative gain of s given S calculated by the ROUGE metric. Besides the single-sentence features, additional features derived from the sentence relations are incorporated. Experiments on the DUC 2001, 2002 and 2004 multi-document summarization datasets show that the proposed method outperforms state-of-the-art extractive summarization approaches. | A Redundancy-Aware Sentence Regression Framework for Extractive Summarization |
d250391098 | Online messaging is dynamic, influential, and highly contextual, and a single post may contain contrasting sentiments towards multiple entities, such as dehumanizing one actor while empathizing with another in the same message. These complexities are important to capture for understanding the systematic abuse voiced within an online community, or for determining whether individuals are advocating for abuse, opposing abuse, or simply reporting abuse. In this work, we describe a formulation of directed social regard (DSR) as a problem of multi-entity aspect-based sentiment analysis (ME-ABSA), which models the degree of intensity of multiple sentiments that are associated with entities described by a text document. Our DSR schema is informed by Bandura's psychosocial theory of moral disengagement and by recent work in ABSA. We present a dataset of over 2,900 posts and sentences, comprising over 24,000 entities annotated for DSR over nine psychosocial dimensions by three annotators. We present a novel transformer-based ME-ABSA model for DSR, achieving favorable preliminary results on this dataset. | Towards a Multi-Entity Aspect-Based Sentiment Analysis for Characterizing Directed Social Regard in Online Messaging |
d14471353 | An experiment with an Estonian Constraint Grammar based syntactic analyzer is conducted on transcribed speech. In this paper, the problems encountered when parsing disfluencies are analyzed. In addition, we measure how much the manual normalization of disfluencies improves recall and precision compared to non-normalized utterances. | Parsing Manually Detected and Normalized Disfluencies in Spoken Estonian |
d1113227 | VINCI is a Natural Language Generation environment designed for use in computer-aided second language instruction. It dynamically generates multiple parallel trees representing an initial text, questions on this text, and expected answers, and either orthographic or phonetic output. Analyses of a learner's answers to questions are used to diagnose comprehension and language skills and to adaptively control subsequent generation. The paper traces stages in the generation of short texts in English and French, and discusses issues of architecture, textual enrichment, and planning. | Generated Narratives for Computer-aided Language Teaching |
d235097511 | This paper provides a detailed overview of our system and its outcomes, produced as part of the NLP4IF Shared Task on Fighting the COVID-19 Infodemic at NAACL 2021. The task is tackled using a variety of techniques: we used state-of-the-art contextualized text representation models fine-tuned for the downstream task at hand. ARBERT, MARBERT, AraBERT, Arabic ALBERT and BERT-base-arabic were used. According to the results, BERT-base-arabic achieved the highest F1 score on the test set, 0.748. | iCompass at NLP4IF-2021: Fighting the COVID-19 Infodemic |
d221373817 | This paper proposes a model to obtain vector representations of pairs of words, derived from an adaptation of the Word2vec Skip-gram model. The model is used to generate embeddings for pairs of verbs, trained on the English corpus Ukwac. The pair embeddings are evaluated on the ConceptNet & EACL data, on a lexical relation classification task whose goal is to predict the lexical relation between the input pair of words. We compare the scores obtained with the pair embeddings to models using individual word embeddings, and test the evaluation with verbs both in their original form and in their lemmatized form. Finally, we present experiments in which these pair embeddings are used on the task of identifying the discourse relation between two text segments. Our results on the English Penn Discourse Treebank corpus demonstrate the relevance of verb information for this task, and the complementarity of these pair embeddings with the discourse connectives of the relations. Our objective is to build representations of relations between a pair of words, and to evaluate the quality of these representations on different tasks. To do so, we present a model for training vector representations of relations between word pairs. Vector representations of words have been widely used in natural language processing since the 1990s, the idea being that semantically close words end up close to one another in the vector space of words; the literature often uses the term "word embedding" for such representations. These representations are founded on the distributional hypothesis (Harris, 1954): "lexemes with similar linguistic contexts have similar meanings." One can, for example, obtain vector representations of words by building a co-occurrence matrix, counting the number of occurrences of words in the contexts of other words (Turney & Pantel, 2010). The drawback of this method, however, is the large size of the resulting vectors, whose components are mostly zero. | Verb-pair embeddings for discourse relation prediction (Représentation vectorielle de paires de verbes pour la prédiction de relations lexicales) |
d251402098 | Interest in argument mining has resulted in an increasing number of argument annotated corpora. However, most focus on English texts with explicit argumentative discourse markers, such as persuasive essays or legal documents. Conversely, we report on the first extensive and consolidated Portuguese argument annotation project focused on opinion articles. We briefly describe the annotation guidelines based on a multi-layered process and analyze the manual annotations produced, highlighting the main challenges of this textual genre. We then conduct a comprehensive inter-annotator agreement analysis, including argumentative discourse units, their classes and relations, and resulting graphs. This analysis reveals that each of these aspects tackles very different kinds of challenges. We observe differences in annotator profiles, motivating our aim of producing a non-aggregated corpus containing the insights of every annotator. We note that the interpretation and identification of token-level arguments is challenging; nevertheless, tasks that focus on higher-level components of the argument structure can obtain considerable agreement. We lay down perspectives on corpus usage, exploiting its multi-faceted nature. | Annotating Arguments in a Corpus of Opinion Articles |
d13616471 | We investigate an aspect of the relationship between parsing and corpus-based methods in NLP that has received relatively little attention: coverage augmentation in rule-based parsers. In the specific task of determining grammatical relations (such as subjects and objects) in transcribed spoken language, we show that a combination of rule-based and corpus-based approaches, where a rule-based system is used as the teacher (or an automatic data annotator) to a corpus-based system, outperforms either system in isolation. | COMBINING RULE-BASED AND DATA-DRIVEN TECHNIQUES FOR GRAMMATICAL RELATION EXTRACTION IN SPOKEN LANGUAGE |
d8885361 | In the last decade, the statistical approach has found widespread use in machine translation both for written and spoken language and has had a major impact on the translation accuracy. This paper will cover the principles of statistical machine translation and summarize the progress made so far. | One Decade of Statistical Machine Translation: 1996-2005 |
d21728392 | We present a reading corpus in Modern Standard Arabic to enrich the sparse collection of resources that can be leveraged for educational applications. The corpus consists of textbook material from the curriculum of the United Arab Emirates, spanning all 12 grades (1.4 million tokens) and a collection of 129 unabridged works of fiction (5.6 million tokens) all annotated with reading levels from Grade 1 to Post-secondary. We examine reading progression in terms of lexical coverage, and compare the two sub-corpora (curricular, fiction) to others from clearly established genres (news, legal/diplomatic) to measure representation of their respective genres. | A Leveled Reading Corpus of Modern Standard Arabic |
d8625842 | Finding quality descriptions on the web, such as those found in Wikipedia articles, of newer companies can be difficult: search engines show many pages with varying relevance, while multi-document summarization algorithms find it difficult to distinguish between core facts and other information such as news stories. In this paper, we propose an entity-focused, hybrid generation approach to automatically produce descriptions of previously unseen companies, and show that it outperforms a strong summarization baseline. | An Entity-Focused Approach to Generating Company Descriptions |
d239049598 | We consider the problem of multilingual unsupervised machine translation, translating to and from languages that only have monolingual data by using auxiliary parallel language pairs. For this problem, the standard procedure so far to leverage the monolingual data is back-translation, which is computationally costly and hard to tune. In this paper we propose instead to use denoising adapters, adapter layers with a denoising objective, on top of pre-trained mBART-50. In addition to the modularity and flexibility of such an approach, we show that the resulting translations are on par with back-translation as measured by BLEU, and furthermore the approach allows adding unseen languages incrementally. | Multilingual Unsupervised Neural Machine Translation with Denoising Adapters |
d7729831 | One way in which marketers gain insights about consumers is by identifying the occasions in which consumers use their products and which are invoked by their products. Identifying occasions helps in consumer segmentation, answering why consumers purchase a product, and where and when they use it. Additionally, the types of occasions a consumer participates in and the social settings surrounding those occasions provide insights into the consumer's personality and sociocultural self. Insights such as these are required for understanding consumer behavior, which marketers need to better design and sell their products. In this paper, we describe a methodology for extracting and categorizing occasions from product reviews, product descriptions, and forum posts. We examine using a maximum entropy Markov model (MEMM) and a linear-chain conditional random field (CRF) for extraction and find that the CRF achieves a 72.4% F1-measure. Extracted occasions are categorized as one of six high-level types (Celebratory, Special, Seasonal, Temporal, Weather-Related, and Other) using a support vector machine with an 88.5% macro-averaged F1-measure. | Long Nights, Rainy Days, and Misspent Youth: Automatically Extracting and Categorizing Occasions Associated with Consumer Products |
d10859860 | In this work we investigate commonalities and differences between the semantic representations of concrete and abstract words using human judgments and distributional semantics. We tackle the following questions: a) Does distributional similarity imply similarity in concreteness vs. abstractness? b) How do concrete and abstract context words co-occur with concrete and abstract words? c) Are our contextual models in line with existing theories of meaning representation? Our studies show that both distributionally similar words as well as distributionally co-occurring words come from the same range of concreteness vs. abstractness scores, partly challenging existing theories of semantic representation. | Contextual Characteristics of Concrete and Abstract Words |
d234763248 | Link prediction on knowledge graphs (KGs) is a key research topic. Previous work mainly focused on binary relations, paying less attention to higher-arity relations although they are ubiquitous in real-world KGs. This paper considers link prediction upon n-ary relational facts and proposes a graph-based approach to this task. The key to our approach is to represent the n-ary structure of a fact as a small heterogeneous graph, and model this graph with edge-biased fully-connected attention. The fully-connected attention captures universal inter-vertex interactions, with edge-aware attentive biases that particularly encode the graph structure and its heterogeneity. In this fashion, our approach fully models global and local dependencies in each n-ary fact, and hence can more effectively capture associations therein. Extensive evaluation verifies the effectiveness and superiority of our approach. It performs substantially and consistently better than the current state of the art across a variety of n-ary relational benchmarks. Our code is publicly available. | Link Prediction on N-ary Relational Facts: A Graph-based Approach |
d226262336 | Text classification is a fundamental problem in natural language processing. Recent studies applied graph neural network (GNN) techniques to capture global word co-occurrence in a corpus. However, previous works are not scalable to large-sized corpus and ignore the heterogeneity of the text graph. To address these problems, we introduce a novel Transformer based heterogeneous graph neural network, namely Text Graph Transformer (TG-Transformer). Our model learns effective node representations by capturing structure and heterogeneity from the text graph. We propose a mini-batch text graph sampling method that significantly reduces computing and memory costs to handle large-sized corpus. Extensive experiments have been conducted on several benchmark datasets, and the results demonstrate that TG-Transformer outperforms state-of-the-art approaches on text classification task. | Text Graph Transformer for Document Classification |
d12628474 | From the Automatic Language Processing Advisory Committee (ALPAC) (Pierce et al., 1966) machine translation (MT) evaluations of the '60s to the Defense Advanced Research Projects Agency (DARPA) Global Autonomous Language Exploitation (GALE) (Olive, 2008) and National Institute of Standards and Technology (NIST) (NIST, 2008) MT evaluations of today, the U.S. Government has been instrumental in establishing measurements and baselines for the state of the art in MT engines. In the same vein, the Automated Machine Translation Improvement Through Post-Editing Techniques (PEMT) project sought to establish a baseline of MT engines based on the perceptions of potential users. In contrast to these previous evaluations, the PEMT project's experiments also determined the minimal quality level that output needed to achieve before users found it acceptable. Based on these findings, the PEMT team investigated using post-editing techniques to achieve this level. This paper will present experiments in which analysts and translators were asked to evaluate MT output processed with varying post-editing techniques. The results show at what level the analysts and translators find MT useful and are willing to work with it. We also establish a ranking of the types of post-edits necessary to elevate MT output to the minimal acceptance level. | Automated Machine Translation Improvement Through Post-Editing Techniques: Analyst and Translator Experiments |
d6548598 | In this paper, we extend the work on using latent cross-language topic models for identifying word translations across comparable corpora. We present a novel precision-oriented algorithm that relies on per-topic word distributions obtained by the bilingual LDA (BiLDA) latent topic model. The algorithm aims at harvesting only the most probable word translations across languages in a greedy fashion, without any prior knowledge about the language pair, relying on a symmetrization process and the one-to-one constraint. We report our results for Italian-English and Dutch-English language pairs that outperform the current state-of-the-art results by a significant margin. In addition, we show how to use the algorithm for the construction of high-quality initial seed lexicons of translations. | Detecting Highly Confident Word Translations from Comparable Corpora without Any Prior Knowledge |
d257985684 | Metaphors are often perceived as being essential to the diachronic study of ideas formulated in scientific texts pertaining to the social sciences. However, very few research endeavours have tried to apply any existing NLP metaphor detection method to such texts. The present article describes an attempt to identify conceptual metaphors in geography texts written in English and in French, using an LDA-based method (Heintz et al., 2013). Although that particular method ultimately proved unsuitable for our final goal, it enabled us to circumscribe the specific challenges inherent to this type of project as well as future research perspectives. | Chronicle of a Failure: Identifying Metaphors in the Writings of Geographers (Chronique d'un échec : identification des métaphores dans les écrits des géographes) |
d5658338 | This article presents a comparison of the accuracy of a number of different approaches for identifying cross language term equivalents (translations). The methods investigated are on the one hand associative measures, commonly used in word-space models or in Information Retrieval and on the other hand a Statistical Machine Translation (SMT) approach. I have performed tests on six language pairs, using the JRC-Acquis parallel corpus as training material and Eurovoc as a gold standard. The SMT approach is shown to be more effective than the associative measures. The best results are achieved by taking a weighted average of the scores of the SMT approach and disparate associative measures. | Identifying Cross Language Term Equivalents Using Statistical Machine Translation and Distributional Association Measures |
d36653853 | The diverse approaches to translation quality in the industry can be grouped into two broad camps: top-down and bottom-up. The author has recently published a decade-long study of the language services (Quality in Professional Translation, Bloomsbury, 2013). Research for the study covered translation providers from individual freelance translators working at home to large-scale institutions, including the European Union Directorate-General for Translation, commercial translation companies and divisions, and not-for-profit translation groups. Within the two broad 'top-down' and 'bottom-up' camps, a range of further sub-models was identified and catalogued (e.g. 'minimalist' or 'experience-dependent'). The shared distinctive features of each sub-group were described, with a particular focus on their use of technologies. These different approaches have significant implications for, first, the integration of industry standards on quality and, second, the efficient harnessing of technology throughout the translation workflow. This contribution explains the range of industry approaches to translation quality, then asks how these map onto the successful integration of standards and onto features of the leading tools which are designed to support or enhance quality. Are standards and technologies inevitably experienced as an imposition by translators and others involved in the translation process? Significantly, no straightforward link was found between a 'top-down' or 'bottom-up' approach to assessing or improving translation quality and effective use of tools or standards. Instead, positive practice was identified across a range of approaches. | Translating and The Computer 36 |
d2433570 | We describe our analysis and modeling of the summarization process of Japanese broadcast news. We have studied the entire manual summarization process of the Japan Broadcasting Corporation (NHK). The staff of NHK has been making manual summarizations of news text on a daily basis since December 2000. We interviewed these professional abstractors and obtained a considerable amount of news summaries. We matched the summary with the original text, investigated the news text structure, and thereby analyzed the manual summarization process. We then developed a summarization model on which we intend to build a summarization system. | Analysis and modeling of manual summarization of Japanese broadcast news |
d2815754 | Current SMT systems usually decode with single translation models and cannot benefit from the strengths of other models in decoding phase. We instead propose joint decoding, a method that combines multiple translation models in one decoder. Our joint decoder draws connections among multiple models by integrating the translation hypergraphs they produce individually. Therefore, one model can share translations and even derivations with other models. Comparable to the state-of-the-art system combination technique, joint decoding achieves an absolute improvement of 1.5 BLEU points over individual decoding. | Joint Decoding with Multiple Translation Models |
d44081246 | In this paper, we propose a regression system to infer the emotion intensity of a tweet. We develop a multi-aspect feature learning mechanism to capture the most discriminative semantic features of a tweet as well as the emotion information conveyed by each word in it. We combine six types of feature groups: (1) a tweet representation learned by an LSTM deep neural network on the training data, (2) a tweet representation learned by an LSTM network on a large corpus of tweets that contain emotion words (a distant supervision corpus), (3) word embeddings trained on the distant supervision corpus and averaged over all words in a tweet, (4) word and character n-grams, (5) features derived from various sentiment and emotion lexicons, and (6) other hand-crafted features. As part of the word embedding training, we also learn the distributed representations of multi-word expressions (MWEs) and negated forms of words. An SVR regressor is then trained over the full set of features. We evaluate the effectiveness of our ensemble feature sets on the SemEval-2018 Task 1 datasets and achieve a Pearson correlation of 72% on the task of tweet emotion intensity prediction. | DeepMiner at SemEval-2018 Task 1: Emotion Intensity Recognition Using Deep Representation Learning |
d3932031 | Reference Diaries | |
d209083653 | ||
d9701937 | In this paper we present a recommender system, What To Write and Why (W3), capable of suggesting to a journalist, for a given event, the aspects still uncovered in news articles on which the readers focus their interest. The basic idea is to characterize an event according to the echo it receives in online news sources and associate it with the corresponding readers' communicative and informative patterns, detected through the analysis of Twitter and Wikipedia, respectively. Our methodology temporally aligns the results of this analysis and recommends the concepts that emerge as topics of interest from Twitter and Wikipedia, either not covered or poorly covered in the published news articles. | What to Write? A topic recommender for journalists |
d253762078 | Digital history is the application of computer science techniques to historical data in order to uncover insights into events occurring during specific time periods from the past. This relatively new interdisciplinary field can help identify and record latent information about political, cultural, and economic trends that are not otherwise apparent from traditional historical analysis. This paper presents a method that uses topic modeling and breakpoint detection to observe how extracted topics come in and out of prominence over various time periods. We apply our techniques on British parliamentary speech data from the 19th century. Findings show that some of the events produced are cohesive in topic content (religion, transportation, economics, etc.) and time period (events are focused in the same year or month). Topic content identified should be further analyzed for specific events and undergo external validation to determine the quality and value of the findings to historians specializing in 19th century Britain. | Enhancing Digital History -Event Discovery via Topic Modeling and Change Detection |
d17252009 | Key knowledge components of biological research papers are conveyed by structurally and rhetorically salient sentences that summarize the main findings of a particular experiment. In this article we define such sentences as Claimed Knowledge Updates (CKUs), and propose using them in text mining tasks. We provide evidence that CKUs convey the most important new factual information, and thus demonstrate that rhetorical salience is a systematic discourse structure indicator in biology articles along with structural salience. We assume that CKUs can be detected automatically with state-of-the-art text analysis tools, and suggest some applications for presenting CKUs in knowledge bases and scientific browsing interfaces. | Identifying Claimed Knowledge Updates in Biomedical Research Articles |
d7246877 | Common approaches to text categorization essentially rely either on n-gram counts or on word embeddings. This presents important difficulties in highly dynamic or quickly-interacting environments, where the appearance of new words and/or varied misspellings is the norm. A paradigmatic example of this situation is abusive online behavior, with social networks and media platforms struggling to effectively combat uncommon or non-blacklisted hate words. To better deal with these issues in those fast-paced environments, we propose using the error signal of class-based language models as input to text classification algorithms. In particular, we train a next-character prediction model for any given class, and then exploit the error of such class-based models to inform a neural network classifier. This way, we shift from the ability to describe seen documents to the ability to predict unseen content. Preliminary studies using out-of-vocabulary splits from abusive tweet data show promising results, outperforming competitive text categorization strategies by 4-11%. | Class-based Prediction Errors to Detect Hate Speech with Out-of-vocabulary Words |
d184482715 | This paper describes the system of NLP@UIT that participated in Task 4 of SemEval-2019. We developed a system that predicts whether an English news article follows a hyperpartisan argumentation. Paparazzo is the name of our system and is also the code name of our team in Task 4 of SemEval-2019. The Paparazzo system, in which we use tri-grams of words and hepta-grams of characters, officially ranks thirteenth with an accuracy of 0.747. Another system of ours, which utilizes tri-grams of words, tri-grams of characters, tri-grams of part-of-speech tags, syntactic dependency sub-trees, and named-entity recognition tags, achieved an accuracy of 0.787 and was proposed after the deadline of Task 4. | NLP@UIT at SemEval-2019 Task 4: The Paparazzo Hyperpartisan News Detector |
d15853332 | The dramatic increase in the amount of content available in digital form gives rise to the problem of managing this online textual data. As a result, it has become necessary to classify large texts (documents) into specific classes, and text classification is a text mining technique used to classify text documents into predefined classes. Most text classification techniques work on the principle of probabilities or of matching terms with class names in order to classify documents into classes. The objective of this work is to consider the relationships among terms, and for this a sports-specific ontology is manually created for the first time. Two new algorithms, Ontology Based Classification and a Hybrid Approach, are proposed for Punjabi text classification. The experimental results conclude that Ontology Based Classification (85%) and the Hybrid Approach (85%) provide better results. | Domain Based Classification of Punjabi Text Documents |
d226262299 | Being able to perform in-depth chat with humans in a closed domain is a precondition before an open-domain chatbot can ever be claimed. In this work, we take a close look at the movie domain and present a large-scale high-quality corpus with fine-grained annotations in the hope of pushing the limit of movie-domain chatbots. We propose a unified, readily scalable neural approach which reconciles all subtasks like intent prediction and knowledge retrieval. The model is first pretrained on huge general-domain data, then finetuned on our corpus. We show this simple neural approach trained on high-quality data is able to outperform commercial systems relying on complex rules. On both the static and interactive tests, we find responses generated by our system exhibit remarkably good engagement and sensibleness close to human-written ones. We further analyze the limits of our work and point out potential directions for future work. | MovieChats: Chat like Humans in a Closed Domain |
d7758787 | We present a proposal for the annotation of multi-word expressions in a 1M corpus of contemporary Portuguese. Our aim is to create a resource that allows us to study multi-word expressions (MWEs) in their context. The corpus will be a valuable additional resource next to the already existing MWE lexicon that was based on a much larger corpus of 50M words. In this paper we discuss the problematic cases for annotation and proposed solutions, focusing on the variational properties of MWEs. | Proposal for Multi-Word Expression Annotation in Running Text |
d245425026 | Running large-scale pre-trained language models in computationally constrained environments remains a challenging problem yet to be addressed, while transfer learning from these models has become prevalent in Natural Language Processing tasks. Several solutions, including knowledge distillation, network quantization, or network pruning have been previously proposed; however, these approaches focus mostly on the English language, thus widening the gap when considering low-resource languages. In this work, we introduce three light and fast versions of distilled BERT models for the Romanian language: Distil-BERT-base-ro, Distil-RoBERT-base, and DistilMulti-BERT-base-ro. The first two models resulted from the individual distillation of knowledge from two base versions of Romanian BERTs available in the literature, while the last one was obtained by distilling their ensemble. To our knowledge, this is the first attempt to create publicly available Romanian distilled BERT models, which were thoroughly evaluated on five tasks: part-of-speech tagging, named entity recognition, sentiment analysis, semantic textual similarity, and dialect identification. Our experimental results argue that the three distilled models offer performance comparable to their teachers, while being twice as fast on a GPU and ∼35% smaller. In addition, we further test the similarity between the predictions of our students and their teachers by measuring their label and probability loyalty, together with regression loyalty, a new metric introduced in this work. | Distilling the Knowledge of Romanian BERTs Using Multiple Teachers |
d227230408 | This paper presents a time-topic cohesive model describing the communication patterns on the coronavirus pandemic from three Asian countries. The strength of our model is two-fold. First, it detects contextualized events based on topical and temporal information via contrastive learning. Second, it can be applied to multiple languages, enabling a comparison of risk communication across cultures. We present a case study and discuss future implications of the proposed model. | A Risk Communication Event Detection Model via Contrastive Learning |
d16257303 | Even though NLP tools are widely used for contemporary text today, there is a lack of tools that can handle historical documents. Such tools could greatly facilitate the work of researchers dealing with large volumes of historical texts. In this paper we propose a method for extracting verbs and their complements from historical Swedish text, using NLP tools and dictionaries developed for contemporary Swedish and a set of normalisation rules that are applied before tagging and parsing the text. When evaluated on a sample of texts from the period 1550-1880, this method identifies verbs with an F-score of 77.2% and finds a partially or completely correct set of complements for 55.6% of the verbs. Although these results are in general lower than for contemporary Swedish, they are strong enough to make the approach useful for information extraction in historical research. Moreover, the exact match rate for complete verb constructions is in fact higher for historical texts than for contemporary texts (38.7% vs. 30.8%). | Parsing the Past -Identification of Verb Constructions in Historical Text |
d184482665 | In this paper, we present a system description for implementing a sentiment analysis agent capable of interpreting the state of an interlocutor engaged in short three message conversations. We present the results and observations of our work and which parts could be further improved in the future. | UAIC at SemEval-2019 Task 3: Extracting Much from Little |
d201711379 | This paper describes our trained models of phrase-based statistical machine translation (PBSMT) systems for Indic→English and English→Indic language pairs, which have been submitted to the WAT 2018 shared task. In addition, we have introduced many-to-one statistical machine translation (SMT). This new approach produced results comparable in translation accuracy to those of the baseline SMT. | Multilingual Indian Language Translation System at WAT 2018: Many-to-one Phrase-based SMT |
d18208705 | The Machine Translation course at Dublin City University is taught to undergraduate students in Applied Computational Linguistics, while Computer-Assisted Translation is taught on two translator-training programmes, one undergraduate and one postgraduate. Given the differing backgrounds of these sets of students, the course material, methods of teaching and assessment all differ. We report here on our experiences of teaching these courses over a number of years, which we hope will be of interest to lecturers of similar existing courses, as well as providing a reference point for others who may be considering the introduction of such material. | Teaching Machine Translation & Translation Technology: A Contrastive Study |
d579134 | In dependency parsing of long sentences with fewer subjects than predicates, it is difficult to recognize which predicate governs which subject. To handle such syntactic ambiguity between subjects and predicates, this paper proposes an "S-clause" segmentation method, where an S(ubject)-clause is defined as a group of words containing several predicates and their common subject. We propose an automatic S-clause segmentation method using decision trees. The S-clause information was shown to be very effective in analyzing long sentences, improving performance by 5 percent. | S-clause Segmentation for Efficient Syntactic Analysis Using Decision Trees |
d239890017 | This paper reports an approach for summarizing financial texts that combines genetic algorithms and neural document modeling. We treat summarization as the task of binary classification of sentences. Financial reports in the shared data of the FNS workshop are very long, have many sections, and are written in "financial" language using various special terms, numerical data, and tables. Our approach follows two main stages: (1) filtering out the most irrelevant information with the help of a supervised state-of-the-art summarizer and (2) extracting the most relevant sentences from the sentences selected in stage (1), using a novel deep neural model. Like all participants of the Financial Narrative Summarization (FNS 2021) shared task, we used the FNS 2021 dataset for training and evaluation. | Summarization of financial reports with AMUSE |
d11039292 | We propose and implement an alternative source of contextual features for word similarity detection based on the notion of lexicogrammatical construction. On the assumption that selectional restrictions provide indicators of the semantic similarity of words attested in selected positions, we extend the notion of selection beyond that of single selecting heads to multiword constructions exerting selectional preferences. Our model of 92 million cross-indexed hybrid n-grams (serving as our machine-tractable proxy for constructions) extracted from the BNC provides the source of contextual features. We compare results with those of a grammatical dependency approach (Lin 1998), testing both against WordNet-based similarity rankings (Lin 1998; Resnik 1995). Averaged over the entire set of target nouns and 10-best candidate similar words, Lin's approach gives overall similarity results closer to WordNet rankings than the constructional approach does, while the constructional approach overtakes Lin's in approximating WordNet similarity for target nouns with a frequency over 3000. While this suggests feature sparseness for constructions that resolves with higher-frequency nouns, constructions as shared contextual features render a much higher yield in similarity performance in approximating WordNet similarity than grammatical relations do. We examine some cases in detail, showing the sorts of similarity detected by a constructional approach that are undetected by a grammatical relations approach or by WordNet or both, and thus overlooked in benchmark evaluations. | Word Similarity Using Constructions as Contextual Features |
d2884643 | This paper addresses the problem of learning phrase patterns for unsupervised speaker role classification. Phrase patterns are automatically extracted from large corpora, and redundant patterns are removed via a graph pruning algorithm. In experiments on English and Mandarin talk shows, the use of phrase patterns results in an increase of role classification accuracy over n-gram lexical features, and more compact phrase pattern lists are obtained due to the redundancy removal. | Extracting Phrase Patterns with Minimum Redundancy for Unsupervised Speaker Role Classification |
d18481498 | Compounds occur very frequently in Indian languages. There are no strict orthographic conventions for compounds in modern Indian languages. In this paper, the Sanskrit compounding system is examined thoroughly and the insight gained from Sanskrit grammar is applied to the analysis of compounds in Hindi and Marathi. It is interesting to note that compounding in Hindi deviates from that in Sanskrit in two aspects. First, the data analysed for Hindi does not contain any instance of a Bahuvrīhi (exo-centric) compound. Second, the Hindi data presents many cases where quite a lot of compounds require a verb as well as a vibhakti (a case marker) for their paraphrasing. Compounds requiring a verb for paraphrasing are termed madhyama-pada-lopī in Sanskrit, and they are found to be rare in Sanskrit. | Semantic Processing of Compounds in Indian Languages |
d11366978 | Translation of discourse relations is one of the recent efforts of incorporating discourse information to statistical machine translation (SMT). While existing works focus on disambiguation of ambiguous discourse connectives, or transformation of discourse trees, only explicit discourse relations are tackled. A greater challenge exists in machine translation of Chinese, since implicit discourse relations are abundant and occur both inside and outside a sentence. This thesis proposal describes ongoing work on bilingual discourse annotation and plans towards incorporating discourse relation knowledge to a Chinese-English SMT system with consideration of implicit discourse relations. The final goal is a discourse-unit-based translation model unbounded by the traditional assumption of sentence-to-sentence translation. | Towards a Discourse Relation-aware Approach for Chinese-English Machine Translation |
d29042955 | This work presents a finite-state approach to the Kazakh nominal paradigm. We develop and implement a finite-state transducer for the nominal paradigm of Kazakh, an agglutinative language. The morphophonemic constraints that Kazakh synharmonism (vowel and consonant harmony) imposes on combinations of letters when affixes are joined, as well as morphotactics, are considered. The developed Kazakh finite-state transducer realizes some morphological analysis/generation functions. Preliminary testing was carried out on using the morphological analyzer after OCR preprocessing to correct errors in Kazakh texts. | Finite State Approach to the Kazakh Nominal Paradigm |
d252624606 | Humans' emotional perception is subjective by nature, in which each individual could express different emotions regarding the same textual content. Existing datasets for emotion analysis commonly depend on a single ground truth per data sample, derived from majority voting or averaging the opinions of all annotators. In this paper, we introduce a new non-aggregated dataset, namely StudEmo, that contains 5,182 customer reviews, each annotated by 25 people with intensities of eight emotions from Plutchik's model, extended with valence and arousal. We also propose three personalized models that use not only textual content but also the individual human perspective, providing the model with different approaches to learning human representations. The experiments were carried out as a multitask classification on two datasets: our StudEmo dataset and GoEmotions dataset, which contains 28 emotional categories. The proposed personalized methods significantly improve prediction results, especially for emotions that have low inter-annotator agreement. | StudEmo: A Non-aggregated Review Dataset for Personalized Emotion Recognition |
d5267356 | Named Entity Disambiguation (NED) refers to the task of resolving multiple named entity mentions in a document to their correct references in a knowledge base (KB) (e.g., Wikipedia). In this paper, we propose a novel embedding method specifically designed for NED. The proposed method jointly maps words and entities into the same continuous vector space. We extend the skip-gram model with two models: the KB graph model learns the relatedness of entities using the link structure of the KB, whereas the anchor context model aims to align vectors such that similar words and entities occur close to one another in the vector space, by leveraging KB anchors and their context words. By combining contexts based on the proposed embedding with standard NED features, we achieved state-of-the-art accuracy of 93.1% on the standard CoNLL dataset and 85.2% on the TAC 2010 dataset. | Joint Learning of the Embedding of Words and Entities for Named Entity Disambiguation |
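A minimal sketch of how such a shared space can be used at disambiguation time: represent the mention's context as the average of its word vectors and rank candidate KB entities by cosine similarity. The vectors below are random stand-ins and the entity names are illustrative; this is not the paper's trained model.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 8
# Stand-ins for jointly learned vectors: words and entities in one space.
word_vec = {w: rng.normal(size=DIM) for w in ["world", "cup", "football"]}
entity_vec = {e: rng.normal(size=DIM)
              for e in ["FIFA_World_Cup", "World_Cup_(film)"]}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def rank_candidates(context_words, candidates):
    """Rank candidate entities by similarity to the averaged context."""
    ctx = np.mean([word_vec[w] for w in context_words if w in word_vec], axis=0)
    return sorted(candidates, key=lambda e: cosine(ctx, entity_vec[e]),
                  reverse=True)

print(rank_candidates(["world", "cup", "football"], list(entity_vec)))
```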
d5359682 | Online forums are now one of the primary venues for public dialogue on current social and political issues. The related corpora are often huge, covering any topic imaginable. Our aim is to use these dialogue corpora to automatically discover the semantic aspects of arguments that conversants are making across multiple dialogues on a topic. We frame this goal as consisting of two tasks: argument extraction and argument facet similarity. We focus here on the argument extraction task, and show that we can train regressors to predict the quality of extracted arguments with RRSE values as low as .73 for some topics. A secondary goal is to develop regressors that are topic independent: we report results of cross-domain training and domain-adaptation with RRSE values for several topics as low as .72, when trained on topic independent features. | Argument Mining: Extracting Arguments from Online Dialogue |
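For readers unfamiliar with the reported metric, RRSE (root relative squared error) compares a regressor's squared error to that of always predicting the mean, so values below 1.0 beat the mean baseline. A small self-contained implementation of the standard formula (not the authors' code):

```python
import numpy as np

def rrse(y_true, y_pred):
    """Root Relative Squared Error: 1.0 matches the mean baseline; lower is better."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    num = np.sum((y_true - y_pred) ** 2)
    den = np.sum((y_true - y_true.mean()) ** 2)
    return float(np.sqrt(num / den))

# A regressor with RRSE below 1.0 beats always predicting the training mean.
print(rrse([3.0, 1.0, 4.0, 2.0], [2.8, 1.3, 3.5, 2.2]))
```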
d5459392 | This paper describes our contribution to the SemEval 2016 Workshop. We participated in the Shared Task 8 on Meaning Representation parsing using a transition-based approach, which builds upon the system of Wang et al. (2015a) and Wang et al. (2015b), with additions that utilize a Feedforward Neural Network classifier and an enriched feature set. We observed that exploiting Neural Networks in Abstract Meaning Representation parsing is challenging and we could not benefit from it, while the feature enhancements yielded an improved performance over the baseline model. | M2L at SemEval-2016 Task 8: AMR Parsing with Neural Networks |
d177971 | In the past few years, we have been developing a robust, wide-coverage, and cognitive load-sensitive spoken dialog interface, CHAT (Conversational Helper for Automotive Tasks). New progress has been made to address issues related to dynamic and attention-demanding environments, such as driving. Specifically, we try to address imperfect input and imperfect memory issues through robust understanding, knowledge-based interpretation, flexible dialog management, sensible information communication, and user-adaptive responses. In addition to the MP3 player and restaurant finder applications reported in previous publications, a third domain, navigation, has been developed, where one has to deal with dynamic information, domain switch, and error recovery. Evaluation in the new domain has shown a good degree of success: including high task completion rate, dialog efficiency, and improved user experience. | CHAT To Your Destination |
d47021019 | Recent literature has shown a wide variety of benefits to mapping traditional one-hot representations of words and phrases to lower-dimensional real-valued vectors known as word embeddings. Traditionally, most word embedding algorithms treat each word as the finest meaningful semantic granularity and perform embedding by learning distinct embedding vectors for each word. Contrary to this line of thought, technical domains such as scientific and medical literature compose words from subword structures such as prefixes, suffixes, and root words, as well as compound words. Treating individual words as the finest-granularity unit discards meaningful shared semantic structure between words sharing substructures. This not only leads to poor embeddings for text corpora that have long-tail distributions, but also necessitates heuristic methods for handling out-of-vocabulary words. In this paper we propose SubwordMine, an entropy-based subword mining algorithm that is fast, unsupervised, and fully data-driven. We show that this allows for strong cross-domain performance in identifying semantically meaningful subwords. We then investigate utilizing the mined subwords within the FastText embedding model and compare the performance of the learned representations in a downstream language modeling task. | Entropy-Based Subword Mining with an Application to Word Embeddings |
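The entropy criterion behind such subword mining can be illustrated with branching entropy: uncertainty about the next character spikes at morpheme boundaries. The sketch below is a generic version of this idea over a toy vocabulary, not the SubwordMine algorithm itself.

```python
import math
from collections import Counter

def branching_entropy(words, prefix):
    """Entropy of the next character after `prefix` across a vocabulary;
    peaks in this entropy often signal morpheme boundaries."""
    nxt = Counter(w[len(prefix)] for w in words
                  if w.startswith(prefix) and len(w) > len(prefix))
    total = sum(nxt.values())
    return -sum(c / total * math.log2(c / total)
                for c in nxt.values()) if total else 0.0

vocab = ["cardiology", "cardiogram", "cardiac", "neurology", "neuron", "neural"]
for p in ["cardio", "card", "neuro"]:
    # High entropy after "cardio"/"neuro" suggests a subword boundary there.
    print(p, round(branching_entropy(vocab, p), 3))
```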
d3190097 | The aim of the present study is to verify whether visual cues located on the neck can contribute to the visual perception of lexical tones in Mandarin. Unexpectedly, the study shows that tones can be read on the lips, even when the syllable is pronounced at the back of the oral cavity. It seems, on the one hand, that lip reading is possible for Mandarin tones and, on the other hand, that there are different perceptual profiles: some people seem more sensitive to lip reading, while others appear to rely on cues from the neck and show a lesser aptitude for lip reading. Keywords: Mandarin Chinese, tones, audiovisual perception, lip reading, multimodality. | Lire les tons sur les lèvres : perception(s) visuelle(s) des tons lexicaux en chinois mandarin |
d15623848 | In this paper we present a Basque coreference resolution system enriched with semantic knowledge. An error analysis revealed the system's deficiencies in resolving coreference cases where semantic or world knowledge is needed. We attempt to remedy these deficiencies using two semantic knowledge sources, specifically Wikipedia and WordNet. | Enriching Basque Coreference Resolution System using Semantic Knowledge sources |
d258765291 | This paper describes a collaborative European project whose aim was to gather open source Natural Language Processing (NLP) tools and make them accessible as running services and easy to try out in the European Language Grid (ELG). The motivation of the project was to increase accessibility for more European languages and make it easier for developers to use the underlying tools in their own applications. The project resulted in the containerization of 60 existing NLP tools for 16 languages, all of which are now currently running as easily testable services in the ELG platform. | Microservices at Your Service: Bridging the Gap between NLP Research and Industry |
d37880495 | | Evaluation of LTAG Parsing with Supertag Compaction |
d202766921 | Customers ask questions and customer service staff answer them; this is the basic service model of multi-turn customer service (CS) dialogues on e-commerce platforms. Existing studies fail to provide comprehensive service satisfaction analysis, namely satisfaction polarity classification (e.g., well satisfied, met and unsatisfied) and sentimental utterance identification (e.g., positive, neutral and negative). In this paper, we conduct a pilot study on the task of service satisfaction analysis (SSA) based on multi-turn CS dialogues. We propose an extensible Context-Assisted Multiple Instance Learning (CAMIL) model to predict the sentiments of all the customer utterances and then aggregate those sentiments into service satisfaction polarity. After that, we propose a novel Context Clue Matching Mechanism (CCMM) to enhance the representations of all customer utterances with their matched context clues, i.e., sentiment and reasoning clues. We construct two CS dialogue datasets from a top e-commerce platform. Extensive experimental results are presented and contrasted against several previous models to demonstrate the efficacy of our model. | Using Customer Service Dialogues for Satisfaction Analysis with Context-Assisted Multiple Instance Learning |
d846195 | The task of identifying proper names, unknown words and new terms is an important step in text processing systems. This paper describes a method of using mutual information to collect possible segments as candidates for these three feature types within a document. The construction and context of each candidate feature are then examined to determine its type, canonical form and meaning. Requiring very little domain-specific knowledge, this method adapts easily to various domains. | Using Mutual Information to Identify New Features for Text documents of Various Domains |
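A sketch of the core idea under simple assumptions (token bigrams, a toy corpus): adjacent units that co-occur far more often than chance, as measured by pointwise mutual information, become candidate segments. The paper's exact scoring and segment-growing procedure are not reproduced.

```python
import math
from collections import Counter

def pmi_bigrams(tokens, min_count=2):
    """Pointwise mutual information of adjacent token pairs; high-PMI pairs
    are candidate multi-token segments (names, terms)."""
    unigrams = Counter(tokens)
    bigrams = Counter(zip(tokens, tokens[1:]))
    n = len(tokens)
    scores = {}
    for (a, b), c in bigrams.items():
        if c >= min_count:
            scores[(a, b)] = math.log2((c / (n - 1)) /
                                       ((unigrams[a] / n) * (unigrams[b] / n)))
    return sorted(scores.items(), key=lambda kv: -kv[1])

toks = "new york times reported that new york mayor spoke in new york".split()
print(pmi_bigrams(toks))  # ("new", "york") surfaces as a segment candidate
```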
d202589255 | 'Spicy' 辣 and 'Numbing' 麻 have long been regarded as tastes by Chinese people, though neuroscientists have shown them to be chemesthesis. To examine the conceptualised perception of 'spicy' and 'numbing' among Chinese people, a corpus was compiled in Sketch Engine consisting of comments on spicy and numbing food from Dazhong Dianping, the most popular food review website in China. After analysing 'spicy' and 'numbing' words and their collocations, we found evidence that they are indeed perceived as chemesthesis by Chinese people. First, these two senses are closely related to hurt and irritation, which are among the properties of chemesthesis. Secondly, verbs that are semantically related to hurt and irritation collocate with 'spicy' and 'numbing', but not with the five basic taste properties. Thirdly, some collocations are found to be in accordance with the mechanisms of capsaicin in various respects. In addition, the semantic extension of the morphemes meaning 'spicy' and 'numbing' in Sinitic languages is mainly based on the meaning of irritation. Apart from that, according to the data, 'spicy' and 'numbing' interact with taste and smell sensations to some extent but have a loose relation with 'mouthfeel'. A synaesthetic account of transfer from taste to touch is provided to explain why 'spicy' and 'numbing' are deemed tastes while being perceived as chemesthesis. | How do non-tastes taste? A corpus-based study on Chinese people's perception of spicy and numbing food |
d798697 | A theoretically sound method for learning dependencies between case frame slots is proposed. In particular, the problem is viewed as that of estimating a probability distribution over the case slots represented by a dependency graph (a dependency forest). Experimental results indicate that the proposed method can bring about a small improvement in disambiguation, but the results are largely consistent with the assumption often made in practice that case slots are mutually independent, at least when the data size is at the level that is currently available. | Learning Dependencies between Case Frame Slots |
d250390753 | This paper describes our participation in SemEval-2022 Task 10, structured sentiment analysis. In this task, we have to parse opinions considering both structure- and context-dependent subjective aspects, which is different from typical dependency parsing. Some of the major parser types have recently been used for semantic and syntactic parsing, while it is still unknown which type can capture structured sentiments well due to their subjective aspects. To this end, we compared two different types of state-of-the-art parser, namely graph-based and seq2seq-based. Our in-depth analyses suggest that, even though the graph-based parser generally outperforms the seq2seq-based one, with strong pre-trained language models both parsers can essentially output acceptable and reasonable predictions. The analyses highlight that the difficulty derived from subjective aspects in structured sentiment analysis remains an essential challenge. | Hitachi at SemEval-2022 Task 10: Comparing Graph- and Seq2Seq-based Models Highlights Difficulty in Structured Sentiment Analysis |
d6049508 | The documentation of a care episode consists of clinical notes concerning patient care, concluded with a discharge summary. Care episodes are stored electronically and used throughout the health care sector by patients, administrators and professionals from different areas, primarily for clinical purposes, but also for secondary purposes such as decision support and research. A common use case is, given a possibly unfinished care episode, to retrieve the most similar care episodes among the records. This paper presents several methods for information retrieval, focusing on care episode retrieval, based on textual similarity, where similarity is measured through domain-specific modelling of the distributional semantics of words. Models include variants of random indexing and a semantic neural network model called word2vec. A novel method is introduced that utilizes the ICD-10 codes attached to care episodes to better induce domain specificity in the semantic model. We report on an experimental evaluation of care episode retrieval that circumvents the lack of human judgements regarding episode relevance by exploiting (1) ICD-10 codes of care episodes and (2) semantic similarity between their discharge summaries. Results suggest that several of the proposed methods outperform a state-of-the-art search engine (Lucene) on the retrieval task. | Care Episode Retrieval |
d216847732 | Political surveys have indicated a relationship between a sense of Scottish identity and voting decisions in the 2014 Scottish Independence Referendum. Identity is often reflected in language use, suggesting the intuitive hypothesis that individuals who support Scottish independence are more likely to use distinctively Scottish words than those who oppose it. In the first large-scale study of sociolinguistic variation on social media in the UK, we identify distinctively Scottish terms in a data-driven way, and find that these terms are indeed used at a higher rate by users of pro-independence hashtags than by users of anti-independence hashtags. However, we also find that in general people are less likely to use distinctively Scottish words in tweets with referendum-related hashtags than in their general Twitter activity. We attribute this difference to style-shifting relative to audience, aligning with previous work showing that Twitter users tend to use fewer local variants when addressing a broader audience. | Aye or naw, whit dae ye hink? Scottish independence and linguistic identity on social media |
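The central comparison can be pictured as a simple rate calculation over two user groups; the lexicon below is a hypothetical stand-in for the data-driven term list the paper induces.

```python
from collections import Counter

# Hypothetical lexicon of distinctively Scottish variants (illustrative only;
# the paper identifies such terms in a data-driven way).
SCOTTISH = {"aye", "naw", "bairn", "wee", "dinnae", "hink"}

def scottish_rate(tweets):
    """Fraction of tokens drawn from the Scottish lexicon across a user group."""
    counts = Counter(tok for t in tweets for tok in t.lower().split())
    total = sum(counts.values())
    return sum(counts[w] for w in SCOTTISH) / total if total else 0.0

pro = ["aye we will dae it", "wee country big future"]
anti = ["no thanks we are better together"]
print(scottish_rate(pro), scottish_rate(anti))
```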
d18579650 | | Some Technology Transfer: Observations from the TIPSTER Text Program |
d44071286 | Question answering (QA) and question generation (QG) are closely related tasks that could improve each other; however, the connection between these two tasks is not well explored in the literature. In this paper, we give a systematic study that seeks to leverage the connection to improve both QA and QG. We present a training algorithm that generalizes both Generative Adversarial Networks (GANs) and Generative Domain-Adaptive Nets (GDAN) under the question answering scenario. The two key ideas are improving the QG model with QA by incorporating an additional QA-specific signal into the loss function, and improving the QA model with QG by adding artificially generated training instances. We conduct experiments on both document-based and knowledge-based question answering tasks. We have two main findings. First, the performance of a QG model (e.g., in terms of BLEU score) can easily be improved by a QA model via policy gradient. Second, directly applying GAN training that regards all generated questions as negative instances does not improve the accuracy of the QA model; learning when to regard generated questions as positive instances brings a performance boost. | Learning to Collaborate for Question Answering and Asking |
d247447324 | Currently, Medical Subject Headings (MeSH) are manually assigned to every biomedical article published and subsequently recorded in the PubMed database to facilitate retrieving relevant information. With the rapid growth of the PubMed database, large-scale biomedical document indexing becomes increasingly important. MeSH indexing is a challenging task for machine learning, as it needs to assign multiple labels to each article from an extremely large, hierarchically organized collection. To address this challenge, we propose KenMeSH, an end-to-end model that combines new text features and a dynamic knowledge-enhanced mask attention that integrates document features with the MeSH label hierarchy and journal correlation features to index MeSH terms. Experimental results show the proposed method achieves state-of-the-art performance on a number of measures. | KenMeSH: Knowledge-enhanced End-to-end Biomedical Text Labelling |
d44075584 | This paper describes a warrant classification system for SemEval 2018 Task 12, that attempts to learn semantic representations of reasons, claims and warrants. The system consists of 3 stacked LSTMs: one for the reason, one for the claim, and one shared Siamese Network for the 2 candidate warrants. Our main contribution is to force the embeddings into a shared feature space using vector operations, semantic similarity classification, Siamese networks, and multi-task learning. In doing so, we learn a form of generative implication, in encoding implication interrelationships between reasons, claims, and the associated correct and incorrect warrants. We augment the limited data in the task further by utilizing WordNet synonym "fuzzing". When applied to SemEval 2018 Task 12, our system performs well on the development data, and officially ranked 8th among 21 teams. | UniMelb at SemEval-2018 Task 12: Generative Implication using LSTMs, Siamese Networks and Semantic Representations with Synonym Fuzzing |
d52142476 | Arabic plagiarism detection is a difficult task because of the richness of the Arabic language: it is a productive, derivational and inflectional language, and a word can have more than one lexical category depending on context, yielding different word meanings that change the meaning of the sentence. In this context, Arabic paraphrase identification quantifies how similar a suspect Arabic text and a source Arabic text are based on their contexts. In this paper, we propose a semantic similarity approach for paraphrase identification in Arabic texts that combines several Natural Language Processing (NLP) techniques: Term Frequency-Inverse Document Frequency (TF-IDF) to improve the identification of words that are highly descriptive in each sentence, and distributed word vector representations learned with the word2vec algorithm, which reduce computational complexity and optimize the probability of predicting context words given the current center word. These representations are subsequently used to generate sentence vectors, to which a similarity measurement is applied using metrics such as Cosine Similarity and Euclidean Distance. Our approach was evaluated on the Open Source Arabic Corpus (OSAC) and obtained a promising rate. | Semantic Similarity Analysis for Paraphrase Identification in Arabic Texts |
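One common way to combine TF-IDF with word2vec along these lines is a TF-IDF-weighted average of word vectors per sentence, compared with cosine similarity and Euclidean distance. The paper's exact composition may differ, and the vectors and IDF values below are toy stand-ins.

```python
import math
import numpy as np

def sentence_vector(tokens, word_vecs, idf):
    """TF-IDF-weighted average of word vectors: one simple way to build a
    sentence representation from word2vec-style vectors."""
    vecs, weights = [], []
    for w in set(tokens):
        if w in word_vecs:
            vecs.append(word_vecs[w])
            weights.append(tokens.count(w) * idf.get(w, 1.0))
    if not vecs:
        return np.zeros(next(iter(word_vecs.values())).shape)
    return np.average(vecs, axis=0, weights=weights)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

rng = np.random.default_rng(1)
wv = {w: rng.normal(size=16) for w in "the cat sat on mat a feline rested".split()}
idf = {w: math.log(10 / (1 + i)) for i, w in enumerate(wv)}  # toy IDF values
s1, s2 = "the cat sat on the mat".split(), "a feline rested on a mat".split()
v1, v2 = sentence_vector(s1, wv, idf), sentence_vector(s2, wv, idf)
print("cosine:", cosine(v1, v2), "euclidean:", float(np.linalg.norm(v1 - v2)))
```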
d229365833 | ||
d174798410 | Neural generative models have become increasingly popular when building conversational agents. They offer flexibility, can be easily adapted to new domains, and require minimal domain engineering. A common criticism of these systems is that they seldom understand or use the available dialog history effectively. In this paper, we take an empirical approach to understanding how these models use the available dialog history by studying the sensitivity of the models to artificially introduced unnatural changes or perturbations to their context at test time. We experiment with 10 different types of perturbations on 4 multi-turn dialog datasets and find that commonly used neural dialog architectures like recurrent and transformer-based seq2seq models are rarely sensitive to most perturbations such as missing or reordering utterances, shuffling words, etc. Also, by open-sourcing our code, we believe that it will serve as a useful diagnostic tool for evaluating dialog systems in the future. | Do Neural Dialog Systems Use the Conversation History Effectively? An Empirical Study |
d253762062 | This paper studies the effects of age of acquisition (AoA) and orthographic transparency on word retrieval in Persian, an understudied language. A naming task (both pictures and words) and a recall task (both pictures and words) were used to explore how lexical retrieval and verbal memory are affected by AoA and transparency. Seventy-two native speakers of Persian participated in two experiments. The results showed that early-acquired words are processed faster than late-acquired words only when pictures were used as stimuli. Transparency of the words was not an influential factor. However, in the recall experiment a three-way interaction was observed: early-acquired pictures and words were processed faster than late-acquired stimuli, except for words in the transparent condition. The findings underscore the importance of language-specific properties. | Do age of acquisition and orthographic transparency have the same effects in different modalities? |
d203648270 | Tests added to Kleene algebra (by Kozen and others) are considered within Monadic Second Order logic over strings, where they are likened to statives in natural language. Reducts are formed over tests and non-tests alike, specifying what is observable. Notions of temporal granularity are based on observable change, under the assumption that a finite set bounds what is observable (with the possibility of stretching such bounds by moving to a larger finite set). String projections at different granularities are conjoined by superpositions that provide another variant of concatenation for Booleans. | MSO with tests and reducts |
d16390819 | This paper describes an approach to treebank development which relies on the manual development of annotation tools. The overall process of tree annotation is described, with special emphasis on the most recently built tool, a dependency-based robust chunk parser. The modularization of the parser and the central role of verbal subcategorization are presented. The first experimental results, carried out on a corpus of 645 sentences, are reported and discussed. | Transformed Subcategorization Frames in Chunk Parsing |
d13896151 | This paper reports our submissions to the semantic textual similarity task, i.e., Task 2 in Semantic Evaluation 2015. We built our systems using various traditional features, such as string-based, corpus-based and syntactic similarity metrics, as well as novel similarity measures based on distributed word representations, which were trained using deep learning paradigms. Since the training and test datasets consist of instances collected from various domains, three different strategies for using the training datasets were explored: (1) use all available training datasets and build a unified supervised model for all test datasets; (2) select the most similar training dataset and separately construct an individual model for each test set; (3) adopt a multi-task learning framework to make full use of the available training sets. Results on the test datasets show that using all datasets as the training set achieves the best averaged performance, and our best system ranks 15th out of 73. | ECNU: Using Traditional Similarity Measurements and Word Embedding for Semantic Textual Similarity Estimation |
d250164306 | We apply Formal Concept Analysis (FCA) to organize and to improve the quality of Démonette2, a French derivational database, through a detection of both missing and spurious derivations in the database. We represent each derivational family as a graph. Given that the subgraph relation exists among derivational families, FCA can group families and represent them in a partially ordered set (poset). This poset is also useful for improving the database. A family is regarded as a possible anomaly (meaning that it may have missing and/or spurious derivations) if its derivational graph is almost, but not completely identical to a large number of other families. | Organizing and Improving a Database of French Word Formation Using Formal Concept Analysis |
d225063187 | Aspect-category sentiment classification (ACSC) aims to identify the sentiment polarities towards the aspect categories mentioned in a sentence. Because a sentence often mentions more than one aspect category and expresses different sentiment polarities to them, finding aspect category-related information from the sentence is the key challenge to accurately recognize the sentiment polarity. Most previous models take both sentence and aspect category as input and query aspect category-related information based on the aspect category. However, these models represent the aspect category as a context-independent vector called aspect embedding, which may not be effective enough as a query. In this paper, we propose two contextualized aspect category representations, Contextualized Aspect Vector (CAV) and Contextualized Aspect Matrix (CAM). Specifically, we use the coarse aspect category-related information found by the aspect category detection task to generate CAV or CAM. Then the CAV or CAM as queries are used to search for fine-grained aspect category-related information like aspect embedding by aspect-category sentiment classification models. In experiments, we integrate the proposed CAV and CAM into several representative aspect embedding-based aspect-category sentiment classification models. Experimental results on the SemEval-2014 Restaurant Review dataset and the Multi-Aspect Multi-Sentiment dataset demonstrate the effectiveness of CAV and CAM. | Better Queries for Aspect-Category Sentiment Classification |
d251406404 | Many applications crucially rely on the availability of high-quality word vectors. To learn such representations, several strategies based on language models have been proposed in recent years. While effective, these methods typically rely on a large number of contextualised vectors for each word, which makes them impractical. In this paper, we investigate whether similar results can be obtained when only a few contextualised representations of each word can be used. To this end, we analyze a range of strategies for selecting the most informative sentences. Our results show that with a careful selection strategy, high-quality word vectors can be learned from as few as 5 to 10 sentences. | Sentence Selection Strategies for Distilling Word Embeddings from BERT |
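A minimal sketch of the distillation step, assuming `transformers` and `torch` are available and the target word is a single WordPiece token: mean-pool the word's contextualised vectors over a few selected sentences. The paper's sentence selection strategies are not reproduced; the sentences below are illustrative.

```python
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

def word_vector(word, sentences):
    """Average the contextualised vectors of `word` over a few sentences.
    Assumes `word` is a single WordPiece in the tokenizer's vocabulary."""
    wid = tok.convert_tokens_to_ids(word)
    vecs = []
    for s in sentences:
        enc = tok(s, return_tensors="pt")
        with torch.no_grad():
            hidden = model(**enc).last_hidden_state[0]   # (seq_len, 768)
        for pos, tid in enumerate(enc["input_ids"][0].tolist()):
            if tid == wid:
                vecs.append(hidden[pos])
    return torch.stack(vecs).mean(dim=0) if vecs else None

v = word_vector("bank", ["He sat by the bank of the river.",
                         "She walked to the bank of the stream."])
print(None if v is None else v.shape)   # torch.Size([768])
```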
d2315102 | We describe a new modality/negation (MN) annotation scheme, the creation of a (publicly available) MN lexicon, and two automated MN taggers that we built using the annotation scheme and lexicon. Our annotation scheme isolates three components of modality and negation: a trigger (a word that conveys modality or negation), a target (an action associated with modality or negation), and a holder (an experiencer of modality). We describe how our MN lexicon was semi-automatically produced and we demonstrate that a structure-based MN tagger results in precision around 86% (depending on genre) for tagging of a standard LDC data set. We apply our MN annotation scheme to statistical machine translation using a syntactic framework that supports the inclusion of semantic annotations. Syntactic tags enriched with semantic annotations are assigned to parse trees in the target-language training texts through a process of tree grafting. Although the focus of our work is modality and negation, the tree grafting procedure is general and supports other types of semantic information. We exploit this capability by including named entities, produced by a pre-existing tagger, in addition to the MN elements produced by the taggers described here. The resulting system significantly outperformed a linguistically naive baseline model (Hiero), and reached the highest scores yet reported on the NIST 2009 Urdu-English test set. This finding supports the hypothesis that both syntactic and semantic information can improve translation quality. | Use of Modality and Negation in Semantically-Informed Syntactic MT |
d254275598 | This paper briefly presents an evaluation of three models: a domain-specific one based upon typed feature structures, a neural language model, and a mixture of the two, on an unseen but in-domain corpus of user queries in the context of a dialogue classification task. We find that the mixture performs the best, which opens the door to a potentially new application of neural language models. We also consider the inner workings of the domain-specific model in more detail, as well as how it came into being, from an ethnographic perspective. This has changed our perspective on the potential role of structured representations in the future of dialogue systems, and suggests that formal research in this area may have a new role to play in validating and coordinating ad hoc dialogue systems development. | Feature Structures in the Wild: A Case Study in Mixing Traditional Linguistic Knowledge Representation with Neural Language Models |
d8126147 | We describe a parser for robust and flexible interpretation of user utterances in a multi-modal system for web search in newspaper databases. Users can speak or type, and they can navigate and follow links using mouse clicks. Spoken or written queries may combine search expressions with browser commands and search space restrictions. In interpreting input queries, the system has to be fault-tolerant to account for spontaneous speech phenomena as well as typing or speech recognition errors, which often distort the meaning of the utterance and are difficult to detect and correct. Our parser integrates shallow parsing techniques with knowledge-based text retrieval to allow for robust processing and coordination of input modes. Parsing relies on a two-layered approach: typical meta-expressions like those concerning search, newspaper types and dates are identified and excluded from the search string to be sent to the search engine. The search terms which remain after preprocessing are then grouped according to co-occurrence statistics derived from a newspaper corpus. These co-occurrence statistics concern typical noun phrases as they appear in newspaper texts. | Robust Interpretation of User Requests for Text Retrieval in a Multimodal Environment |
d1917359 | This study is aimed at a better understanding of the perception of syllables. As the traditional view seems to associate syllable perception with segmental cues that result from local (i.e. present only within or adjacent to the syllable) supralaryngeal events, we are particularly interested in whether non-segmental and non-local laryngeal information contribute to syllable perception as well. Existing work on Indo-European languages shows that local stress patterns and global (i.e. non-local) speech rates provide perceptual cues to words and phonemes. While we believe that the effects of global speech rate hold across languages, based on the long-developed notion of language-specific perception, we expect that lexical tones, rather than stress patterns, serve as an important local non-segmental cue in tonal languages. We conducted a perception study on Mandarin to investigate whether tonal f0 patterns and speech rates interfere with spectral information in determining the number of syllables in an utterance. F0 contours were generated using the qTA model (Prom-on, Xu & Thipakorn, 2009). Our results show that the perceptual number of syllables depends on the perception of tonal f0 patterns and speech rates to a substantial extent. Combining our findings with prior claims (Olsberg, Xu & Green, 2007), it appears that a variety of cues (segments, lexical tones, and speech rate) compete in perceiving Mandarin syllables. Relating this study to existing work on word segmentation, lexical access, and phoneme identification, we find that the language comprehension system integrates local with global, supralaryngeal with laryngeal information, in perceiving linguistic units, not only words and phonemes, but also syllables. | Non-segmental Cues for Syllable Perception: the Role of Local Tonal f0 and Global Speech Rate in Syllabification |
d26543396 | This paper describes a system developed for a shared sentiment analysis task and its subtasks organized by SemEval-2017. A key feature of our system is its embedded ability to detect sarcasm in order to enhance the performance of sentiment classification. We first constructed an affect-cognition-sociolinguistics sarcasm feature model and trained an SVM-based classifier for detecting sarcastic expressions in general tweets. For sentiment prediction, we developed CrystalNest, a two-level cascade classification system using features that combine the sarcasm score derived from our sarcasm classifier with sentiment scores from Alchemy, the NRC lexicon, n-grams, word embedding vectors, and part-of-speech features. We found that the sarcasm-detection-derived features consistently benefited key sentiment analysis evaluation metrics, to different degrees, across the four subtasks A-D. | CrystalNest at SemEval-2017 Task 4: Using Sarcasm Detection for Enhancing Sentiment Classification and Quantification |
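The cascade design can be sketched as follows: a first-level classifier produces a sarcasm probability that is appended to the second-level sentiment classifier's features. The toy data and plain TF-IDF features below stand in for the paper's much richer feature set (Alchemy, NRC lexicon, embeddings).

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

tweets = ["great, another delay", "i love this phone", "worst service ever",
          "oh wonderful, it broke again", "fantastic experience", "truly awful"]
sarcastic = [1, 0, 0, 1, 0, 0]
sentiment = [0, 1, 0, 0, 1, 0]   # 1 = positive

vec = TfidfVectorizer()
X = vec.fit_transform(tweets).toarray()

# Level 1: a sarcasm detector whose probability becomes a feature.
sarcasm_clf = LogisticRegression(max_iter=1000).fit(X, sarcastic)
sarcasm_score = sarcasm_clf.predict_proba(X)[:, 1:]

# Level 2: the sentiment classifier sees TF-IDF features plus the score.
X2 = np.hstack([X, sarcasm_score])
sent_clf = SVC().fit(X2, sentiment)
print(sent_clf.predict(X2))
```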
d12271396 | Entity linking disambiguates a mention of an entity in text to a Knowledge Base (KB). Most previous studies disambiguate a mention of a name (e.g., "AZ") based on distribution knowledge learned from labeled instances, which are related to other names (e.g., "Hoffman", "Chad Johnson", etc.). The gaps among the distributions of instances related to different names hinder further improvement of these approaches. This paper proposes a lazy learning model, which allows us to improve the learning process with distribution information specific to the queried name (e.g., "AZ"). To obtain this distribution information, we automatically label some relevant instances for the queried name by leveraging its unambiguous synonyms. Another advantage is that our approach can still benefit from the labeled data related to other names (e.g., "Hoffman", "Chad Johnson", etc.), because our model is trained on the labeled data sets of both queried and other names by mining their shared predictive structure. | A Lazy Learning Model for Entity Linking Using Query-Specific Information |
d16091111 | We present a family of priors over probabilistic grammar weights, called the shared logistic normal distribution. This family extends the partitioned logistic normal distribution, enabling factored covariance between the probabilities of different derivation events in the probabilistic grammar, providing a new way to encode prior knowledge about an unknown grammar. We describe a variational EM algorithm for learning a probabilistic grammar based on this family of priors. We then experiment with unsupervised dependency grammar induction and show significant improvements using our model for both monolingual learning and bilingual learning with a non-parallel, multilingual corpus. | Shared Logistic Normal Distributions for Soft Parameter Tying in Unsupervised Grammar Induction |
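A worked sketch of the underlying distribution: a logistic normal draws a Gaussian vector and pushes it through a softmax, so a covariance term can tie the probabilities of related derivation events together, something a Dirichlet prior cannot express. The parameters below are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_rule_probs(mu, cov):
    """Draw multinomial weights for one grammar nonterminal from a logistic
    normal: a Gaussian draw in R^K mapped onto the simplex by softmax.
    Covariance lets correlated rules rise and fall together."""
    eta = rng.multivariate_normal(mu, cov)
    e = np.exp(eta - eta.max())
    return e / e.sum()

K = 3                       # three rules for one nonterminal (toy setting)
mu = np.zeros(K)
cov = np.array([[1.0, 0.8, 0.0],   # rules 1 and 2 positively correlated
                [0.8, 1.0, 0.0],
                [0.0, 0.0, 1.0]])
print(sample_rule_probs(mu, cov))
```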