_id | text | title |
|---|---|---|
d3527591 | VOYAGER is a speech understanding system currently under development at MIT. It provides information and navigational assistance for a geographical area within the city of Cambridge, Massachusetts. Recently, we have completed the initial implementation of the system. This paper describes the preliminary evaluation of VOYAGER using a spontaneous speech database that was also recently collected. | PRELIMINARY EVALUATION OF THE VOYAGER SPOKEN LANGUAGE SYSTEM* |
d16662635 | In this paper, we investigate four important issues together for explicit discourse relation labelling in Chinese texts: (1) discourse connective extraction, (2) linking ambiguity resolution, (3) relation type disambiguation, and (4) argument boundary identification. In a pipelined Chinese discourse parser, we identify potential connective candidates by string matching, eliminate non-discourse usages from them with a binary classifier, resolve linking ambiguities among connective components by ranking, disambiguate relation types by a multiway classifier, and determine the argument boundaries by conditional random fields. The experiments on Chinese Discourse Treebank show that the F1 scores of 0.7506, 0.7693, 0.7458, and 0.3134 are achieved for discourse usage disambiguation, linking disambiguation, relation type disambiguation, and argument boundary identification, respectively, in a pipelined Chinese discourse parser. | Detection, Disambiguation and Argument Identification of Discourse Connectives in Chinese Discourse Parsing |
d23962793 | Supervised approaches for text summarisation suffer from the problem of mismatch between the target labels/scores of individual sentences and the evaluation score of the final summary. Reinforcement learning can solve this problem by providing a learning mechanism that uses the score of the final summary as a guide to determine the decisions made at the time of selection of each sentence. In this paper we present a proof-of-concept approach that applies a policy-gradient algorithm to learn a stochastic policy using an undiscounted reward. The method has been applied to a policy consisting of a simple neural network and simple features. The resulting deep reinforcement learning system is able to learn a global policy and obtain encouraging results. | Towards the Use of Deep Reinforcement Learning with Global Policy For Query-based Extractive Summarisation * |
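A minimal sketch of the policy-gradient idea summarized in the row above (REINFORCE with a single undiscounted terminal reward), under our own simplifying assumptions: a linear-softmax policy stands in for the paper's neural network, a toy overlap reward stands in for a real summary metric such as ROUGE, and the k sentence draws are treated as independent when forming the gradient.

```python
import numpy as np

rng = np.random.default_rng(0)

def reinforce_step(features, select_k, reward_fn, w, lr=0.1):
    """One REINFORCE update: sample k sentences from a softmax policy,
    score the resulting summary once, and scale the log-prob gradient
    of the chosen sentences by that undiscounted reward."""
    scores = features @ w
    probs = np.exp(scores - scores.max())
    probs /= probs.sum()
    picked = rng.choice(len(probs), size=select_k, replace=False, p=probs)
    reward = reward_fn(picked)
    grad = np.zeros_like(w)
    for i in picked:  # d/dw log softmax_i, treating draws as independent (simplification)
        grad += features[i] - probs @ features
    return w + lr * reward * grad, reward

# toy document: 5 sentences with 3 features each; reward favours sentences 0 and 2
features = rng.normal(size=(5, 3))
reward_fn = lambda picked: len({0, 2} & set(picked)) / 2.0

w = np.zeros(3)
for _ in range(200):
    w, r = reinforce_step(features, select_k=2, reward_fn=reward_fn, w=w)
print("reward of last sampled summary:", r)
```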
d6786457 | The fine-grained task of automatically detecting all sentiment expressions within a given document and the aspects to which they refer is known as aspect-based sentiment analysis. In this paper we present the first full aspect-based sentiment analysis pipeline for Dutch and apply it to customer reviews. To this purpose, we collected reviews from two different domains, i.e. restaurant and smartphone reviews. Both corpora have been manually annotated using newly developed guidelines that comply to standard practices in the field. For our experimental pipeline we perceive aspect-based sentiment analysis as a task consisting of three main subtasks which have to be tackled incrementally: aspect term extraction, aspect category classification and polarity classification. First experiments on our Dutch restaurant corpus reveal that this is indeed a feasible approach that yields promising results. | Rude Waiter but Mouthwatering Pastries! An Exploratory Study into Dutch Aspect-Based Sentiment Analysis |
d8893912 | We study question answering as a machine learning problem, and induce a function that maps open-domain questions to queries over a database of web extractions. Given a large, community-authored, question-paraphrase corpus, we demonstrate that it is possible to learn a semantic lexicon and linear ranking function without manually annotating questions. Our approach automatically generalizes a seed lexicon and includes a scalable, parallelized perceptron parameter estimation scheme. Experiments show that our approach more than quadruples the recall of the seed lexicon, with only an 8% loss in precision. | Paraphrase-Driven Learning for Open Question Answering |
d5911629 | The ability to represent cross-serial dependencies is one of the central features of Tree Adjoining Grammar (TAG). The class of dependency structures representable by lexicalized TAG derivations can be captured by two graph-theoretic properties: a bound on the gap degree of the structures, and a constraint called well-nestedness. In this paper, we compare formalisms from two strands of extensions to TAG with respect to how they behave under these constraints. In particular, we show that multi-component TAG does not necessarily retain the well-nestedness constraint, while this constraint is inherent to Coupled Context-Free Grammar (Hotz and Pitsch, 1996). | Extended cross-serial dependencies in Tree Adjoining Grammars |
d9086181 | While substantial studies have been conducted on sentiment polarity classification to date, the lack of sufficient opinion-annotated corpora for reliable training is still a challenge. In this paper we propose to improve a support vector machine based polarity classifier by enriching both training data and test data via opinion paraphrasing. In particular, we first extract an equivalent set of attribute-evaluation pairs from the training data and then exploit it to generate opinion paraphrases in order to expand the training corpus or enrich opinionated sentences for polarity classification. We tested our system over two sets of online product reviews in the car and mobile phone domains. The experimental results show that using opinion paraphrases results in significant performance improvement in polarity classification. | Improving Chinese Sentence Polarity Classification via Opinion Paraphrasing |
d53087869 | The paper presents a study of linguistic features for characterizing a text according to its language register (formal, neutral, informal). This study aims at laying a first milestone for future work on this subject (e.g., classification, extraction of discriminating patterns). From a state of the art conducted on the notion of register in linguistics and sociolinguistics, we identified a list of 72 relevant features. In this paper, we present the first 30 of them, which we were able to validate on a corpus of French texts from distinct registers. KEYWORDS: language registers, linguistic feature, validation. | Identification de descripteurs pour la caractérisation de registres |
d7088626 | The multimodal presentation dashboard allows users to control and browse presentation content such as slides and diagrams through a multimodal interface that supports speech and pen input. In addition to control commands (e.g. "take me to slide 10"), the system allows multimodal search over content collections. For example, if the user says "get me a slide about internet telephony," the system will present a ranked series of candidate slides that they can then select among using voice, pen, or a wireless remote. As presentations are loaded, their content is analyzed and language and understanding models are built dynamically. This approach frees the user from the constraints of linear order allowing for a more dynamic and responsive presentation style. | The Multimodal Presentation Dashboard |
d16126163 | In this paper we present the implementation of definition extraction from multilingual corpora of scientific articles. We establish relations between the definitions and authors by using indexed references in the text. Our method is based on a linguistic ontology designed for this purpose. We propose two evaluations of the annotations. | Extraction of Author's Definitions Using Indexed Reference Identification |
d5510770 | In this paper, we present a formalization of grammatical role labeling within the framework of Integer Linear Programming (ILP). We focus on the integration of subcategorization information into the decision making process. We present a first empirical evaluation that achieves competitive precision and recall rates. | Grammatical Role Labeling with Integer Linear Programming |
d8764788 | In this paper we present LX-Suite, a set of tools for the shallow processing of Portuguese. This suite comprises several modules, namely: a sentence chunker, a tokenizer, a POS tagger, featurizers and lemmatizers. | A Suite of Shallow Processing Tools for Portuguese: LX-Suite |
d6346478 | This paper describes a demonstration of the WinkTalk system, which is a speech synthesis platform using expressive synthetic voices. With the help of a webcam and facial expression analysis, the system allows the user to control the expressive features of the synthetic speech for a particular utterance with their facial expressions. Based on a personalised mapping between three expressive synthetic voices and the user's facial expressions, the system selects a voice that matches their face at the moment of sending a message. The WinkTalk system is an early research prototype that aims to demonstrate that facial expressions can be used as a more intuitive control over expressive speech synthesis than manual selection of voice types, thereby contributing to an improved communication experience for users of speech generating devices. | WinkTalk: a demonstration of a multimodal speech synthesis platform linking facial expressions to expressive synthetic voices |
d2368628 | In this survey we overview graph-based clustering and its applications in computational linguistics. We summarize graph-based clustering as a five-part story: hypothesis, modeling, measure, algorithm and evaluation. We then survey three typical NLP problems in which graph-based clustering approaches have been successfully applied. Finally, we comment on the strengths and weaknesses of graph-based clustering and envision that graph-based clustering is a promising solution for some emerging NLP problems. | Graph-based Clustering for Computational Linguistics: A Survey |
d259126995 | Dependency annotation can be a laborious process for under-resourced languages. However, in some cases, other resources are available. We investigate whether we can leverage such resources in the case of Swahili: We use the annotations of the Helsinki Corpus of Swahili for creating a Universal Dependency treebank for Swahili. The Helsinki Corpus of Swahili provides word-level annotations for part of speech tags, morphological features, and functional syntactic tags. We train neural taggers for these types of annotations, then use those models to annotate our target corpus, the Swahili portion of the Global Voices Corpus. Based on the word-level annotations, we then manually create constraint grammar rules to annotate the target corpus for Universal Dependencies. In this paper, we describe the process, discuss the annotation decisions we had to make, and we evaluate the approach. | Towards a Swahili Universal Dependency Treebank: Leveraging the Annotations of the Helsinki Corpus of Swahili |
d15380259 | In this paper, we propose to build a repository of events and event references from clusters of news articles. We present an automated approach based on the hypothesis that if two sentences are (a) found in the same cluster of news articles and (b) contain temporal expressions that reference the same point in time, they are likely to refer to the same event. This allows us to group similar sentences together and apply open-domain Information Extraction (OpenIE) methods to extract lists of textual references for each detected event. We outline our proposed approach and present a preliminary evaluation in which we extract events and references from 20 clusters of online news. Our experiments indicate that our hypothesis holds true for the largest part, pointing to a strong potential for applying our approach to building an event repository. We illustrate cases in which our hypothesis fails and discuss ways of addressing sources of errors. | Extracting a Repository of Events and Event References from News Clusters |
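A toy sketch of the grouping hypothesis in the row above: within one news cluster, sentences whose temporal expressions normalize to the same date are grouped as candidate references to one event. The regex here is a naive stand-in for a real temporal tagger, and the example sentences are invented.

```python
import re
from collections import defaultdict

DATE = re.compile(r"\b(\d{4}-\d{2}-\d{2})\b")  # stand-in for a temporal tagger

def group_by_time(cluster_sentences):
    """Group sentences in one news cluster by the date they reference."""
    events = defaultdict(list)
    for sent in cluster_sentences:
        m = DATE.search(sent)
        if m:  # hypothesis: same cluster + same referenced date => same event
            events[m.group(1)].append(sent)
    return dict(events)

cluster = [
    "The summit opened on 2013-06-17 in Lough Erne.",
    "Leaders arrived for talks on 2013-06-17.",
    "A final communique is expected on 2013-06-18.",
]
print(group_by_time(cluster))
```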
d12233806 | In this paper we describe the prototype of a new grammar checker specifically geared to the needs of French speakers writing in English. Most commercial grammar checkers on the market today are meant to be used by native speakers of a language who have good intuitions about their own language competence. Non-native speakers of a language, however, have different intuitions and are very easily confused by false alarms, i.e. error messages given by the grammar checker when there is in fact no error in the text. In our project aimed at developing a complete writing tool for the non-native speaker, we concentrated on building a grammar checker that keeps the rate of over-flagging down and on developing a user-friendly writing environment which contains, among other things, a series of on-line helps. The grammar checking component, which is the focus of this paper, uses island processing (or chunking) rather than a full parse. This approach is both rapid and appropriate when a text contains many errors. We explain how we use automata to identify multi-word units, detect errors (which we first isolated in a corpus of errors) and interact with the user. We end with a short evaluation of our prototype and compare it to three currently available commercial grammar checkers. | Developing a new grammar checker for English as a second language |
d248780550 | Neural Machine Translation (NMT) systems exhibit problematic biases, such as stereotypical gender bias in the translation of occupation terms into languages with grammatical gender. In this paper we describe a new source of bias prevalent in NMT systems, relating to translations of sentences containing person names. To correctly translate such sentences, an NMT system needs to estimate the gender of names. We show that leading systems are particularly poor at this task, especially for female given names. This bias is deeper than given-name gender: we show that the translation of terms with ambiguous sentiment can also be affected by person names, and the same holds true for proper nouns denoting race. To mitigate these biases we propose a simple but effective data augmentation method based on randomly switching entities during translation, which effectively eliminates the problem without any effect on translation quality. | Measuring and Mitigating Name Biases in Neural Machine Translation |
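A minimal sketch of the entity-switching augmentation described in the row above, under our own simplifying assumptions: an invented name pool and plain string replacement stand in for properly aligned entity spans on the source and target sides.

```python
import random

NAMES = ["Alice", "Bob", "Maria", "Chen"]  # illustrative pool, not the paper's

def switch_names(src, tgt, names=NAMES, rng=random.Random(0)):
    """Replace every name appearing on both sides with a randomly chosen
    other name, so the model cannot tie gender or sentiment to one name."""
    for name in names:
        if name in src and name in tgt:
            new = rng.choice([n for n in names if n != name])
            src, tgt = src.replace(name, new), tgt.replace(name, new)
    return src, tgt

print(switch_names("Alice is a doctor.", "Alice ist Ärztin."))
```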
d9426034 | Previous work demonstrated that web counts can be used to approximate bigram frequencies, and thus should be useful for a wide variety of NLP tasks. So far, only two generation tasks (candidate selection for machine translation and confusion-set disambiguation) have been tested using web-scale data sets. The present paper investigates whether these results generalize to tasks covering both syntax and semantics, both generation and analysis, and a larger range of n-grams. For the majority of tasks, we find that simple, unsupervised models perform better when n-gram frequencies are obtained from the web rather than from a large corpus. However, in most cases, web-based models fail to outperform more sophisticated state-of-the-art models trained on small corpora. We argue that web-based models should therefore be used as a baseline for, rather than an alternative to, standard models. | The Web as a Baseline: Evaluating the Performance of Unsupervised Web-based Models for a Range of NLP Tasks |
d6214612 | Transliteration is the process of transcribing words from a source script to a target script. These words can be content words or proper nouns. They may be of local or foreign origin. In this paper we present a more discerning method which applies different techniques based on the word origin. The techniques used also take into account the properties of the scripts. Our approach does not require training data on the target side, while it uses more sophisticated techniques on the source side. Fuzzy string matching is used to compensate for lack of training on the target side. We have evaluated on two Indian languages and have achieved substantially better results (increase of up to 0.44 in MRR) than the baseline and comparable to the state of the art. Our experiments clearly show that word origin is an important factor in achieving higher accuracy in transliteration. | A More Discerning and Adaptable Multilingual Transliteration Mechanism for Indian Languages |
d227231039 | We present a novel method for embedding trees in a vector space based on Tensor-Product Representations (TPRs) which allows for inversion: the retrieval of the original tree structure and nodes from the vectorial embedding. Unlike previous attempts, this does not come at the cost of intractable representation size; we utilize a method for non-exact inversion, showing that it works well when there is sufficient randomness in the representation scheme for simple data, and providing an upper bound on its error. To handle the huge number of possible tree positions without memoizing position representation vectors, we present a method (Cryptographic Role Embedding) using cryptographic hashing algorithms that allows for the representation of unboundedly many positions. Through experiments on parse tree data, we show that a 30,000-dimensional Cryptographic Role Embedding of trees can provide invertibility with error < 1% where previous methods would require 8.6 × 10^57 dimensions. | Invertible Tree Embeddings using a Cryptographic Role Embedding Scheme |
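A small sketch of the two ingredients named in the row above, under simplifying assumptions of ours: role vectors for tree positions come from an RNG seeded by a cryptographic hash of the position string (so unboundedly many positions need no stored table), a tree is embedded as a sum of filler-role outer products, and non-exact inversion recovers a node label by projecting onto a role. The tiny dimension here makes retrieval noisier than in the paper.

```python
import hashlib
import numpy as np

D = 256  # role/filler dimension; the paper reports ~30k needed for <1% error

def role_vector(position, d=D):
    """Pseudo-random role vector seeded by a cryptographic hash of the position."""
    seed = int.from_bytes(hashlib.sha256(position.encode()).digest()[:8], "big")
    return np.random.default_rng(seed).normal(size=d) / np.sqrt(d)

# deterministic filler vectors for node labels (same hashing trick, for the demo)
fillers = {w: role_vector("filler:" + w) for w in ["S", "NP", "VP"]}

# embed a tiny tree: sum of outer(filler, role) over (position, label) pairs
tree = {"0": "S", "0.0": "NP", "0.1": "VP"}
tpr = sum(np.outer(fillers[label], role_vector(pos)) for pos, label in tree.items())

# non-exact inversion: project the TPR onto a role, pick the nearest filler
query = tpr @ role_vector("0.1")
best = max(fillers, key=lambda w: fillers[w] @ query)
print("label at position 0.1 ->", best)  # expect VP with high probability at this D
```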
d11692694 | We present a formal account of the meaning of vague scalar adjectives such as 'tall', formulated in Type Theory with Records. Our approach makes precise how perceptual information can be integrated into the meaning representation of these predicates; how an agent evaluates whether an entity counts as tall; and how the proposed semantics can be learned and dynamically updated through experience. | Vagueness and Learning: A Type-Theoretic Approach |
d34911213 | In this paper, we demonstrate three NLP applications of the BioLexicon, which is a lexical resource tailored to the biology domain. The applications consist of a dictionary-based POS tagger, a syntactic parser, and query processing for biomedical information retrieval. Biological terminology is a major barrier to the accurate processing of literature within the biology domain. In order to address this problem, we have constructed the BioLexicon using both manual and semi-automatic methods. We demonstrate the utility of the biology-oriented lexicon within three separate NLP applications. | Three BioNLP Tools Powered by a Biological Lexicon |
d53082905 | Attention-based neural models have achieved great success in natural language inference (NLI). In this paper, we propose the Convolutional Interaction Network (CIN), a general model to capture the interaction between two sentences, which can be an alternative to the attention mechanism for NLI. Specifically, CIN encodes one sentence with the filters dynamically generated based on another sentence. Since the filters may be designed to have various numbers and sizes, CIN can capture more complicated interaction patterns. Experiments on three very large datasets demonstrate CIN's efficacy. | Convolutional Interaction Network for Natural Language Inference |
d10700106 | Little research has been done to explore differences in the interactional aspects of dialogue between children with Autistic Spectrum Disorder (ASD) and those with typical development (TD). Quantifying the differences could aid in diagnosing ASD, understanding its nature, and better understanding the mechanisms of dialogue processing. In this paper, we report on a study of dialogues with children with ASD and TD. We find that the two groups differ substantially in how long they pause before speaking, and their use of fillers, acknowledgments, and discourse markers. | Autism and Interactional Aspects of Dialogue |
d16015588 | We present an episodic memory component for enhancing the dialogue of artificial companions with the capability to refer to, take up and comment on past interactions with the user, and to take into account in the dialogue long-term user preferences and interests. The proposed episodic memory is based on RDF representations of the agent's experiences and is linked to the agent's semantic memory containing the agent's knowledge base of ontological data and information about the user's interests. | Episodic Memory for Companion Dialogue |
d743925 | A challenging problem in open information extraction and text mining is learning the selectional restrictions of semantic relations. We propose a minimally supervised bootstrapping algorithm that uses a single seed and a recursive lexico-syntactic pattern to learn the arguments and the supertypes of a diverse set of semantic relations from the Web. We evaluate the performance of our algorithm on multiple semantic relations expressed using "verb", "noun", and "verb prep" lexico-syntactic patterns. Human-based evaluation shows that the accuracy of the harvested information is about 90%. We also compare our results with an existing knowledge base to outline the similarities and differences in the granularity and diversity of the harvested knowledge. | Learning Arguments and Supertypes of Semantic Relations using Recursive Patterns |
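A toy sketch of single-seed bootstrapping with one lexico-syntactic pattern, in the spirit of the row above: a "Y such as X" pattern harvests (supertype, argument) pairs, and harvested terms are fed back as new seeds. The corpus and pattern are illustrative; the paper queries the Web and uses several verb/noun/prep patterns.

```python
import re

corpus = (
    "Animals such as dogs are loyal. Pets such as cats sleep a lot. "
    "Animals such as cats hunt. Mammals such as dogs bark."
)

def bootstrap(seed, text, rounds=3):
    """Recursively harvest (supertype, argument) pairs from 'Y such as X'."""
    seeds, pairs = {seed}, set()
    for _ in range(rounds):
        new = set()
        for sup, arg in re.findall(r"(\w+) such as (\w+)", text):
            if sup.lower() in seeds or arg.lower() in seeds:
                pairs.add((sup.lower(), arg.lower()))
                new |= {sup.lower(), arg.lower()}
        if new <= seeds:  # fixed point: no new seeds harvested
            break
        seeds |= new
    return pairs

print(bootstrap("dogs", corpus))
```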
d6963336 | Meaningful evaluation of spoken language interfaces must be based on detailed comparisons with an alternate, well-understood input modality, such as the keyboard. This paper presents an empirical study in which users were asked to enter digit strings into the computer by voice and by keyboard. Two different ways of verifying and correcting the spoken input were also examined, using either voice or keyboard. Timing analyses were performed to determine which aspects of the interface were critical to speedy completion of the task. The results show that speech is preferable for strings that require more than a few keystrokes. The results emphasize the need for fast and accurate speech recognition, but also demonstrate how error correction and input validation are crucial components of a speech interface. Although the performance of continuous speech recognizers has improved significantly in recent years [6], few application programs using such technology have been built. This discrepancy is based on the fallacy of equating speech recognition performance with the usability of a spoken language application. Clearly, the accuracy of the speech recognition component is a key factor in the usability of a spoken language system. However, other factors come into play when we consider a recognition system in the context of live use. For example, system response time has direct consequences for system usability. Various studies have shown that the amount of delay introduced by a system significantly affects the characteristics of a task (such as throughput) as well as human performance (such as choice of task strategy) [3,15]. Less intuitive interface issues concern the control of the interaction. When does the system listen to the speaker, and when should it ignore speech as extraneous? How can the system best signal to the speaker that it is ready to listen? How can a user verify that the system understood the utterance correctly? How does the user correct any recognition errors quickly and efficiently? These and other questions are currently unanswered. While some researchers have found speech to be the best communication mode in human-human problem solving [1], results from evaluations of computer speech recognizers point in the opposite direction [10,11,16,9], with a few contrived exceptions [14]. The community has become aware that speech applications need more than good recognition to function adequately [13,4,8], but no systematic solutions have been offered. Our objectives in this paper are to clarify some of the tradeoffs involved when users are given the option of using either speech or typing as input to an application program. We deliberately chose the simplest possible task to avoid confusing task-related cognitive factors with the inherent advantages and disadvantages of the interface modes. Experimental Procedure: A study was conducted at Carnegie Mellon to contrast the input of numeric data through speech with data entry through a conventional keyboard. The study consisted of two essentially identical experiments, which differed only in the method of stimulus presentation. Both experiments required the subjects to enter three lists of 66 digit strings into the computer, using three different data entry modes. In the first experiment, the digit strings were presented on the screen, two lines above the area where either the speech recognition result or the typed input was displayed.
In the second experiment, the subjects had to read the digit strings from a sheet of paper placed next to the keyboard and monitor. We will refer to the first experiment as the screen experiment and to the second experiment as the paper experiment throughout this report. There were 3 lists of 66 digit strings to be entered. Each data set contained exactly 11 randomly generated digit strings of each of the lengths 1, 3, 5, 7, 9, and 11. The first six digit strings included one string of each length and were identical for all data sets. These first six digit strings were included for the purpose of familiarizing the subject with a particular condition and were consequently removed from the transcripts before data analysis. Three lists of randomized digit strings were generated once at the start of the experiment and used throughout. | A Comparison of Speech and Typed Input |
d250179944 | The detection of human values from texts is a task of interest to enterprises, as it is a way to create a more comprehensive profile of consumers. This task requires tools and methods from natural language processing (NLP) and relies on a psychological model. Very few works combine psychological models of human values with the extraction of their linguistic realization on social networks using NLP. In this article, after defining the Schwartz model of human values that we use, as well as the corpus being collected for the field of perfumery, we propose some possible directions for the implementation of technologies to find links between textual signals and human values. The Schwartz system of human values: human values are studied and used in the world of psychology. The notion of value that interests us is defined in the Larousse dictionary as "that which is held to be true, beautiful, or good, from a personal point of view or according to the criteria of a society, and which is given as an ideal to be attained, as something to be defended." Values can thus explain the choices that people make in their lives (Verplanken and Holland, 2002). For enterprises, the values associated with each consumer can help in understanding their product choices. This work builds on this hypothesis and seeks to detect signals of human values. Conclusion: over the course of our research project, we will draw on the various personality-detection methods cited in Section 3 and test some of them for our goal of detecting signals of human values. Beyond these methods, this research project also gives us the opportunity to test the performance of large pre-trained models such as GPT-3 (Brown et al., 2020) on abstract (here, psycholinguistic) tasks. To reach this goal, we will also build a text-analysis tool adapted to the perfumery domain, which will let us examine whether knowledge about perfumery can contribute to the performance of detecting values in text. Acknowledgements: this work is carried out under a CIFRE agreement, managed by the Association Nationale de la Recherche Technique (ANRT) and established between the ERTIM laboratory of Inalco and the company IFF. Many thanks to Dr Frédérique SEGOND and Dr Céline MANETTA for their proofreading of the article and their support of the research project. | Étapes préparatoires pour la détection des valeurs humaines dans des commentaires du domaine de la parfumerie |
d249204476 | This paper illustrates a new evaluation framework developed at Unbabel for measuring the quality of source language text and its effect on both Machine Translation (MT) and Human Post-Edition (PE) performed by non-professional post-editors. We examine both agent- and user-generated content from the Customer Support domain and propose that differentiating the two is crucial to obtaining high quality translation output. Furthermore, we present results of initial experimentation with a new evaluation typology based on the Multidimensional Quality Metrics (MQM) Framework (Lommel et al., 2014), specifically tailored toward the evaluation of source language text. We show how the MQM Framework can be adapted to assess errors of monolingual source texts and demonstrate how very specific source errors propagate to the MT and PE targets. Finally, we illustrate how MT systems are not robust enough to handle specific types of source noise in the context of Customer Support data. | Agent and User-Generated Content and its Impact on Customer Support MT |
d252819371 | Most comparative datasets of Chinese varieties are not digital; however, Wiktionary includes a wealth of transcriptions of words from these varieties. The usefulness of these data is limited by the fact that they use a wide range of variety-specific romanizations, making the data difficult to compare. The current work collects these data into a single consistent (IPA, or International Phonetic Alphabet) and structured (TSV) form for use in comparative linguistics and Chinese NLP. At the time of writing, the dataset contains 67,943 entries across 8 varieties and Middle Chinese. The dataset is validated on a protoform reconstruction task using an encoder-decoder cross-attention architecture (Meloni et al., 2021), achieving an accuracy of 54.11%, a PER (phoneme error rate) of 17.69%, and a FER (feature error rate) of 6.60%. | WikiHan: A New Comparative Dataset for Chinese Languages |
d6942230 | A language-independent framework for syntactic finite-state parsing is discussed. The article presents a framework, a formalism, a compiler and a parser for grammars written in this formalism. As a substantial example, fragments from a nontrivial finite-state grammar of English are discussed. The linguistic framework of the present approach is based on a surface syntactic tagging scheme by F. Karlsson. This representation is slightly less powerful than phrase structure tree notation, letting some ambiguous constructions be described more concisely. The finite-state rule compiler implements what was briefly sketched by Koskenniemi (1990). It is based on the calculus of finite-state machines. The compiler transforms rules into rule automata. The run-time parser exploits one of several alternative strategies in performing the effective intersection of the rule automata and the sentence automaton. Fragments of a fairly comprehensive finite-state grammar of English are presented here, including samples from non-finite constructions, as a demonstration of the capacity of the present formalism, which goes far beyond plain disambiguation or part-of-speech tagging. The grammar itself is directly related to a parser and tagging system for English created as a part of project SIMPR using Karlsson's CG (Constraint Grammar) formalism. | Compiling and Using Finite-State Syntactic Rules |
d203935 | Information of interest to users is often distributed over a set of documents. Users can specify their request for information as a query/topic: a set of one or more sentences or questions. Producing a good summary of the relevant information relies on understanding the query and linking it with the associated set of documents. To "understand" the query we expand it using encyclopedic knowledge in Wikipedia. The expanded query is linked with its associated documents through spreading activation in a graph that represents words and their grammatical connections in these documents. The topic-expanded words and activated nodes in the graph are used to produce an extractive summary. The method proposed is tested on the DUC summarization data. The system implemented ranks high compared to the systems participating in the DUC competitions, confirming our hypothesis that encyclopedic knowledge is a useful addition to a summarization system. | Topic-Driven Multi-Document Summarization with Encyclopedic Knowledge and Spreading Activation |
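A compact sketch of spreading activation over a word graph, as described in the row above: expanded-query words receive initial activation, which decays as it propagates along graph edges. The graph, decay factor, and iteration count here are illustrative choices of ours, not the paper's.

```python
from collections import defaultdict

def spread(graph, seeds, decay=0.5, iters=3):
    """Propagate activation from seed words along graph edges."""
    act = defaultdict(float, {w: 1.0 for w in seeds})
    for _ in range(iters):
        nxt = defaultdict(float, act)
        for node, nbrs in graph.items():
            if act[node] > 0:
                for nbr in nbrs:  # each neighbour receives a decayed share
                    nxt[nbr] += decay * act[node] / len(nbrs)
        act = nxt
    return dict(act)

# tiny word graph built from grammatical connections (illustrative)
graph = {"flood": ["river", "damage"], "river": ["bank"], "damage": ["cost"]}
print(spread(graph, seeds={"flood"}))
```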
d119117163 | In this paper we study how different ways of combining character and word-level representations affect the quality of both final word and sentence representations. We provide strong empirical evidence that modeling characters improves the learned representations at the word and sentence levels, and that doing so is particularly useful when representing less frequent words. We further show that a feature-wise sigmoid gating mechanism is a robust method for creating representations that encode semantic similarity, as it performed reasonably well on several word similarity datasets. Finally, our findings suggest that properly capturing semantic similarity at the word level does not consistently yield improved performance in downstream sentence-level tasks. Our code is available at https://github.com/jabalazs/gating. | Gating Mechanisms for Combining Character and Word-level Word Representations: An Empirical Study |
d53601909 | Recent work has shown that neural models can be successfully trained on multiple languages simultaneously. We investigate whether such models learn to share and exploit common syntactic knowledge among the languages on which they are trained. This extended abstract presents our preliminary results. | Does Syntactic Knowledge in Multilingual Language Models Transfer Across Languages? |
d48362220 | We present a variation of an incremental and memory-limited algorithm for Bayesian cross-situational word learning and evaluate the model in terms of its functional performance and its sensitivity to input order. We show that the functional performance of our sub-optimal model on corpus data is close to that of its optimal counterpart (Frank et al., 2009), while only the sub-optimal model is capable of predicting the input order effects reported in experimental studies. | Sensitivity to Input Order: Evaluation of an Incremental and Memory-Limited Bayesian Cross-Situational Word Learning Model |
d226283748 | The analogy task introduced by Mikolov et al. (2013) has become the standard metric for tuning the hyperparameters of word embedding models. In this paper, however, we argue that the analogy task is unsuitable for low-resource languages for two reasons: (1) it requires that word embeddings be trained on large amounts of text, and (2) analogies may not be well-defined in some low-resource settings. We solve these problems by introducing the OddOneOut and Topk tasks, which are specifically designed for model selection in the low-resource setting. We use these metrics to successfully tune hyperparameters for a low-resource emoji embedding task and word embeddings on 16 extinct languages. The largest of these languages (Ancient Hebrew) has a 41 million token dataset, and the smallest (Old Gujarati) has only a 1813 token dataset. | Evaluating Word Embeddings on Low-Resource Languages |
d39404464 | We investigated methods for the discovery of clichés from song lyrics. Trigrams and rhyme features were extracted from a collection of lyrics and ranked using term-weighting techniques such as tf-idf. These attributes were also examined over both time and genre. We present an application to produce a cliché score for lyrics based on these findings and show that number one hits are substantially more clichéd than the average published song. | In Your Eyes: Identifying Clichés in Song Lyrics |
d6468198 | Speaker identification and verification systems perform poorly when model training is done in one language while testing is done in another. This situation is not unusual in multilingual environments, where people should be able to access the system in whichever language they prefer at any moment, without noticing a performance drop. In this work we study the possibility of using features derived from prosodic parameters to reinforce the language robustness of these systems. First, the features' properties in terms of language and session variability are studied, predicting an increase in language robustness when frame-wise intonation and energy values are combined with traditional MFCC features. The experimental results confirm that these features provide an improvement in speaker recognition rates under language-mismatch conditions. The whole study was carried out in the Basque Country, a bilingual region in which the Basque and Spanish languages co-exist. | Text independent speaker identification in multilingual environments |
d6333924 | There are two main methodologies for constructing the knowledge base of a natural language analyser: the linguistic and the data-driven. Recent state-of-the-art part-of-speech taggers are based on the data-driven approach. Because of the known feasibility of the linguistic rule-based approach at related levels of description, the success of the data-driven approach in part-of-speech analysis may appear surprising. In this paper, a case is made for the syntactic nature of part-of-speech tagging. A new tagger of English that uses only linguistic distributional rules is outlined and empirically evaluated. Tested against a benchmark corpus of 38,000 words of previously unseen text, this syntax-based system reaches an accuracy of above 99%. Compared to the 95-97% accuracy of its best competitors, this result suggests the feasibility of the linguistic approach also in part-of-speech analysis. | A syntax-based part-of-speech analyser |
d13035084 | This paper describes an architecture to convert Sinhala Unicode text into a phonemic specification of pronunciation. The study mainly focused on disambiguating the schwa /ə/ versus /a/ vowel epenthesis for consonants, which is one of the significant problems found in Sinhala. This problem has been addressed by formulating a set of rules. The proposed set of rules was tested using 30,000 distinct words obtained from a corpus and compared with the same words manually transcribed to phonemes by an expert. The Grapheme-to-Phoneme (G2P) conversion model achieves 98% accuracy. | Sinhala Grapheme-to-Phoneme Conversion and Rules for Schwa Epenthesis |
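A skeletal illustration of how an ordered-rule G2P component like the one above can be organized: each rule is a regex over an intermediate transcription, applied in sequence, with a default at the end. The two rules shown are invented placeholders; the paper's actual Sinhala schwa/a rules are more numerous and language-specific.

```python
import re

# (pattern, replacement) pairs applied in order; placeholders, not the paper's rules
RULES = [
    (re.compile(r"k@r"), "kar"),  # e.g. force /a/ in a given consonant context
    (re.compile(r"@$"), "a"),     # e.g. resolve a word-final ambiguous vowel to /a/
]

def apply_rules(transcription):
    """Resolve the ambiguous vowel symbol '@' to schwa or /a/ by ordered rules."""
    for pat, rep in RULES:
        transcription = pat.sub(rep, transcription)
    return transcription.replace("@", "\u0259")  # any remaining '@' -> schwa /ə/

print(apply_rules("k@r@nn@"))  # ambiguous vowels resolved rule by rule
```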
d227230331 | A basic step in any annotation effort is the measurement of the Inter Annotator Agreement (IAA). An important factor that can affect the IAA is the presence of annotator bias. In this paper we introduce a new interpretation and application of the Item Response Theory (IRT) to detect annotators' bias. Our interpretation of IRT offers an original bias identification method that can be used to compare annotators' bias and characterise annotation disagreement. Our method can be used to spot outlier annotators, improve annotation guidelines and provide a better picture of the annotation reliability. Additionally, because scales for IAA interpretation are not generally agreed upon, our bias identification method is valuable as a complement to the IAA value which can help with understanding the annotation disagreement. | Identifying Annotator Bias: A new IRT-based method for bias identification |
d959085 | | A Method of Automatic Hypertext Construction from an Encyclopedic Dictionary of a Specific Field |
d2607435 | We present an efficient multi-level chart parser that was designed for syntactic analysis of closed captions (subtitles) in a real-time Machine Translation (MT) system. In order to achieve high parsing speed, we divided an existing English grammar into multiple levels. The parser proceeds in stages. At each stage, rules corresponding to only one level are used. A constituent pruning step is added between levels to ensure that constituents not likely to be part of the final parse are removed. This results in a significant reduction in parse time and ambiguity. Since the domain is unrestricted, out-of-coverage sentences are to be expected and the parser might not produce a single analysis spanning the whole input. Despite the incomplete parsing strategy and the radical pruning, the initial evaluation results show that the loss of parsing accuracy is acceptable. The parsing time compares favorably with the parsing times of a Tomita parser and a chart parser run on the same grammar and lexicon. | Efficient parsing strategies for syntactic analysis of closed captions |
d1323575 | This paper introduces a psycholinguistic model of sentence processing which combines a Hidden Markov Model noun phrase chunker with a co-reference classifier. Both models are fully incremental and generative, giving probabilities of lexical elements conditional upon linguistic structure. This allows us to compute the information-theoretic measure of surprisal, which is known to correlate with human processing effort. We evaluate our surprisal predictions on the Dundee corpus of eye-movement data and show that our model achieves a better fit with human reading times than a syntax-only model which does not have access to co-reference information. | A Model of Discourse Predictions in Human Sentence Processing |
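For reference, surprisal is the negative log-probability of a word given its preceding context; a minimal sketch under an assumed bigram language model (the paper conditions on richer structure, including co-reference), with invented probabilities:

```python
import math

# toy bigram probabilities P(w2 | w1); illustrative numbers only
bigram = {("the", "dog"): 0.1, ("dog", "barked"): 0.4}

def surprisal(prev, word, p=bigram):
    """Surprisal in bits: -log2 P(word | prev). Higher = more processing effort."""
    return -math.log2(p.get((prev, word), 1e-6))  # small floor for unseen bigrams

print(surprisal("the", "dog"), surprisal("dog", "barked"))
```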
d236459804 | Training datasets for semantic parsing are typically small due to the higher expertise required for annotation than for most other NLP tasks. As a result, models for this application usually need additional prior knowledge to be built into the architecture or algorithm. The increased dependency on human experts hinders automation and raises the development and maintenance costs in practice. This work investigates whether a generic transformer-based seq2seq model can achieve competitive performance with minimal code-generation-specific inductive bias design. By exploiting a relatively sizeable monolingual corpus of the target programming language, which is cheap to mine from the web, we achieved 81.03% exact match accuracy on Django and a 32.57 BLEU score on CoNaLa. Both are SOTA to the best of our knowledge. This positive evidence highlights a potentially easier path toward building accurate semantic parsers in practice. | Code Generation from Natural Language with Less Prior and More Monolingual Data |
d226283796 | Pre-trained language models that learn contextualized word representations from a large unannotated corpus have become a standard component of many state-of-the-art NLP systems. Despite their successful application to various downstream NLP tasks, the extent of contextual impact on the word representation has not been explored. In this paper, we present a detailed analysis of contextual impact in Transformer- and BiLSTM-based masked language models. We follow two different approaches to evaluate the impact of context: a masking-based approach that is architecture agnostic, and a gradient-based approach that requires back-propagation through networks. The findings suggest significant differences in contextual impact between the two model architectures. Through a further breakdown of the analysis by syntactic category, we find that the contextual impact in Transformer-based MLMs aligns well with linguistic intuition. We further explore Transformer attention pruning based on our findings in the contextual analysis. | Context Analysis for Pre-trained Masked Language Models |
d36827775 | In this paper, we describe an ongoing effort in collecting and annotating a multilingual speech database of natural stress emotion from university students. The goal is to detect natural stress emotions and study differences in stress expression across languages, which may help psychologists in the future. We designed a common questionnaire of stress-inducing and non-stress-inducing questions in English, Mandarin and Cantonese and collected a first-ever multilingual corpus of natural stress emotion. All of the students are native speakers of the corresponding language. We asked native speakers of each language to annotate recordings according to the participants' self-labeled states and obtained a very good kappa inter-labeler agreement. We carried out human perception tests where listeners who do not understand Chinese were asked to detect stress emotion from the Mandarin Chinese database. Compared to the annotation labels, these human-perceived emotions are of low accuracy, which shows a great need for research on natural stress detection. | A Multilingual Database of Natural Stress Emotion |
d232021754 | This paper studies auto-hyponymy and auto-troponymy relations (or vertical polysemy) in 11 wordnets available through the new Open Multilingual Wordnet (OMW) webpage. We investigate how vertical polysemy forms polysemy structures (or sense clusters) in the semantic hierarchies of the wordnets. Our main results and discoveries are new polysemy structures that have not previously been associated with vertical polysemy, along with some inconsistencies in the semantic-relation analyses of the studied wordnets. In a case study, we turn our attention to polysemy structures in the Estonian Wordnet (version 2.2.0), analyzing them and providing comments for the lexicographers. In addition, we describe the detection algorithm for polysemy structures and give an overview of the state of polysemy structures in the 11 wordnets. | New Polysemy Structures in Wordnets Induced by Vertical Polysemy |
d15009512 | This paper reports the highest results (95% in the MUC metric and 92% in the CoNLL metric) in the literature for Turkish named entity recognition; more specifically, for the task of detecting person, location and organization entities in general news texts. We give an in-depth analysis of previously reported results and make comparisons with them whenever possible. We use conditional random fields (CRFs) as our statistical model. The paper presents initial explorations of the usage of the rich morphological structure of the Turkish language as features for CRFs, together with the use of some basic and generative gazetteers. | Initial explorations on using CRFs for Turkish Named Entity Recognition |
d46941209 | In this paper, we describe our experiments for the Shared Task on Complex Word Identification (CWI) 2018 (Yimam et al., 2018), hosted by the 13th Workshop on Innovative Use of NLP for Building Educational Applications (BEA) at NAACL 2018. Our system for English builds on previous work for Swedish concerning the classification of words into proficiency levels. We investigate different features for English and compare their usefulness using feature selection methods. For the German, Spanish and French data we use simple systems based on character n-gram models and show that sometimes simple models achieve results comparable to fully feature-engineered systems. | SB@GU at the Complex Word Identification 2018 Shared Task |
d219306524 | Bipin Indurkhya has written a wide-ranging and interesting work that is easy to read. Although Indurkhya's starting point is the puzzle of similarity-creating metaphors, the book is really about cognition and conceptual structure. In particular, he is concerned with the philosophical problem of reconciling the constructive nature of our concepts with the notion of a pre-existing mind-independent structure of reality. While the book covers a great deal of ground and is well worth reading, I feel that the basic theory is flawed in that it rests on a common philosophical view of meaning that has been considered inadequate. | Metaphor and Cognition: An Interactionist Approach |
d40332352 | This paper describes an approach to translating course unit descriptions from Italian and German into English, using a phrase-based machine translation (MT) system. The genre is very prominent among those requiring translation by universities in European countries where English is a non-native language. For each language combination, an in-domain bilingual corpus including course unit and degree program descriptions is used to train an MT engine, whose output is then compared to a baseline engine trained on the Europarl corpus. In a subsequent experiment, a bilingual terminology database is added to the training sets of both engines and its impact on output quality is evaluated based on BLEU and post-editing score. Results suggest that the use of domain-specific corpora boosts the engines' quality for both language combinations, especially for German-English, whereas adding terminological resources does not seem to bring notable benefits. | Enhancing Machine Translation of Academic Course Catalogues with Terminological Resources |
d6677061 | Arguably, grammars which associate natural language expressions not only with a syntactic but also with a semantic representation should do so in a way that captures paraphrasing relations between sentences whose core semantics are equivalent. Yet existing semantic grammars fail to do so. In this paper, we describe an ongoing project whose aim is the production of a "paraphrastic grammar", that is, a grammar which associates paraphrases with identical semantic representations. We begin by proposing a typology of paraphrases. We then show how this typology can be used to simultaneously guide the development of a grammar and of a test suite designed to support the evaluation of this grammar. | Paraphrastic Grammars |
d52011179 | We developed three systems for generating pros and cons summaries of product reviews. Automating this task eases the writing of product reviews, and offers readers quick access to the most important information. We compared SynPat, a system based on syntactic phrases selected on the basis of valence scores, against a neural-network-based system trained to map bag-of-words representations of reviews directly to pros and cons, and the same neural system trained on clusters of word-embedding encodings of similar pros and cons. We evaluated the systems in two ways: first on held-out reviews with gold-standard pros and cons, and second by asking human annotators to rate the systems' output on relevance and completeness. In the second evaluation, the gold-standard pros and cons were assessed along with the system output. We find that the human-generated summaries are not deemed significantly more relevant or complete than those of the SynPat systems; the latter are scored higher than the human-generated summaries on a precision metric. The neural approaches yield lower performance in the human assessment and are outperformed by the baseline. | Aspect-based summarization of pros and cons in unstructured product reviews |
d2114517 | Traditional approaches to the task of ACE event extraction usually rely on sequential pipelines with multiple stages, which suffer from error propagation since event triggers and arguments are predicted in isolation by independent local classifiers. By contrast, we propose a joint framework based on structured prediction which extracts triggers and arguments together so that the local predictions can be mutually improved. In addition, we propose to incorporate global features which explicitly capture the dependencies of multiple triggers and arguments. Experimental results show that our joint approach with local features outperforms the pipelined baseline, and adding global features further improves the performance significantly. Our approach advances state-of-the-art sentence-level event extraction, and even outperforms previous argument labeling methods which use external knowledge from other sentences and documents. | Joint Event Extraction via Structured Prediction with Global Features |
d259833789 | In developing countries like India, doctors and healthcare professionals working in public health spend significant time answering health queries that are fact-based and repetitive. Therefore, we propose an automated way to answer maternal and child health-related queries. A database of Frequently Asked Questions (FAQs) and their corresponding answers generated by experts is curated from rural health workers and young mothers. We develop a Hindi chatbot that identifies k relevant Question and Answer (QnA) pairs from the database in response to a healthcare query (q) written in Devanagari script or Hindi-English (Hinglish) code-mixed script. The curated database covers 80% of all the queries that a user of our study is likely to ask. We experimented with (i) rule-based methods, (ii) sentence embeddings, and (iii) a paraphrasing classifier, to calculate the q-Q similarity. We observed that the paraphrasing classifier gives the best result when trained first on open-domain text and then on the healthcare domain. Our chatbot uses an ensemble of all three approaches. We observed that if a given q can be answered using the database, then our chatbot can provide at least one relevant QnA pair among its top three suggestions for up to 70% of the queries. | Hindi Chatbot for Supporting Maternal and Child Health Related Queries in Rural India |
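A small sketch of the retrieval core described in the row above: score a user query q against the FAQ questions Q and return the top-k QnA pairs. TF-IDF cosine stands in here for the paper's ensemble of rules, sentence embeddings, and a paraphrase classifier; the FAQ entries are invented.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

faq = [  # invented entries standing in for the curated Hindi FAQ database
    ("What vaccines does a newborn need?", "BCG, OPV-0 and Hep-B at birth."),
    ("How often should an infant be fed?", "Every 2-3 hours, on demand."),
]

vec = TfidfVectorizer().fit([q for q, _ in faq])
Q = vec.transform([q for q, _ in faq])

def top_k(query, k=1):
    """Rank FAQ questions by q-Q cosine similarity; return the best QnA pairs."""
    sims = cosine_similarity(vec.transform([query]), Q)[0]
    order = sims.argsort()[::-1][:k]
    return [(faq[i], float(sims[i])) for i in order]

print(top_k("which vaccines are needed for a new baby"))
```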
d33322722 | This article addresses the question of how to deal with text categorization when the set of documents to be classified belongs to different languages. The figures we provide demonstrate that cross-lingual classification, where a classifier is trained using one language and tested against another, is possible and feasible provided we translate a small number of words: the most relevant terms for class profiling. The experiments we report demonstrate that the translation of these most relevant words proves to be a cost-effective approach to cross-lingual classification. | Cost-effective cross-lingual document classification |
d15525085 | The syntactic analysis of languages with respect to Government-binding (GB) grammar is a problem that has received relatively little attention until recently. This paper describes an attribute grammar specification of the Government-binding theory. The paper focuses on the description of the attribution rules responsible for determining antecedent-trace relations in phrase-structure trees, and on some theoretical implications of those rules for the GB model. The specification relies on a transformation-less variant of Government-binding theory, briefly discussed by Chomsky (1981), in which the rule move-α is replaced by an interpretive rule. Here the interpretive rule is specified by means of attribution rules. The attribute grammar is currently being used to write an English parser which embodies the principles of GB theory. The parsing strategy and attribute evaluation scheme are cursorily described at the end of the paper. | An Attribute-Grammar Implementation of Government-binding Theory |
d8401287 | Web reviews have been intensively studied in argumentation-related tasks such as sentiment analysis. However, due to their focus on content-based features, many sentiment analysis approaches are effective only for reviews from those domains they have been specifically modeled for. This paper puts its focus on domain independence and asks whether a general model can be found for how people argue in web reviews. Our hypothesis is that people express their global sentiment on a topic with similar sequences of local sentiment independent of the domain. We model such sentiment flow robustly under uncertainty through abstraction. To test our hypothesis, we predict global sentiment based on sentiment flow. In systematic experiments, we improve over the domain independence of strong baselines. Our findings suggest that sentiment flow qualifies as a general model of web review argumentation. | Sentiment Flow - A General Model of Web Review Argumentation
d11415196 | This paper proposes new distortion models for phrase-based SMT. In decoding, a distortion model estimates the source word position to be translated next (NP) given the last translated source word position (CP). We propose a distortion model that can simultaneously consider the word at the CP, a word at an NP candidate, and the context of the CP and the NP candidate. Moreover, we propose a further improved model that considers richer context by discriminating label sequences that specify spans from the CP to NP candidates. This enables our model to learn the effect of relative word order among NP candidates as well as the effect of distances from the training data. In our experiments, our model improved translation quality by 2.9 BLEU points for Japanese-English and 2.6 BLEU points for Chinese-English compared to the lexical reordering models. | Distortion Model Considering Rich Context for Statistical Machine Translation
d16489464 | This paper describes the details of our system submitted to the SemEval 2015 shared task on sentiment analysis of figurative language on Twitter. We tackle the problem as a regression task and combine several base systems using stacked generalization (Wolpert, 1992). An initial analysis revealed that the data is heavily biased, and a general sentiment analysis system (GSA) performs poorly on it. However, GSA proved helpful on the test data, which contains an estimated 25% non-figurative tweets. Our best system, a stacking system with backoff to GSA, ranked 4th on the final test data (Cosine 0.661, MSE 3.404). | CPH: Sentiment analysis of Figurative Language on Twitter #easypeasy #not
d12700513 | Distributional word representations are widely used in NLP tasks. These representations are based on the assumption that words with similar contexts tend to have similar meanings. To improve the quality of context-based embeddings, many studies have explored how to make full use of existing lexical resources. In this paper, we argue that when we incorporate prior knowledge into context-based embeddings, words with different occurrences should be treated differently. Therefore, we propose to rely on the measurement of information content to control the degree to which prior knowledge is applied to context-based embeddings, so that different words have different learning rates when adjusting their embeddings. As a result, we demonstrate that our embeddings achieve significant improvements on two different tasks: Word Similarity and Analogical Reasoning. | Integrating Semantic Knowledge into Lexical Embeddings Based on Information Content Measurement
d16285911 | In contrast to the "designer logic" approach, this paper shows how the attribute-value feature structures of unification grammar and constraints on them can be axiomatized in classical first-order logic, which can express disjunctive and negative constraints. Because only quantifier-free formulae are used in the axiomatization, the satisfiability problem is NP-complete. | EXPRESSING DISJUNCTIVE AND NEGATIVE FEATURE CONSTRAINTS WITH CLASSICAL FIRST-ORDER LOGIC
d221373796 | Rate and vowel reduction: effects of speech task and speaker. On a varied group of 29 speakers, we investigated speaker-dependent responses to an explicit demand for an increase in speech rate, first in a fast repetition task, and then between a reading task and a self-paced repetition task. Responses are assessed in terms of articulation rate and temporal and/or spectral vowel reduction. Results show different patterns of response in the fast repetition task compared to the same task without a temporal constraint, and we observe that rate can be increased with or without spectral reduction. Strong inter-speaker variation in responses to the self-paced repetition task compared to reading is also shown, with clear-cut differences in spectro-temporal organization between the two tasks for some speakers. In this rather artificial repetition task, given without precise instructions, more individual differences emerge. Keywords: inter-speaker variation; acoustic phonetics; articulation rate; vowel reduction. | Débit et réduction vocalique : effets de la tâche de parole et du locuteur
d236486073 | ||
d16897771 | We analyse the computational complexity of phonological models as they have developed over the past twenty years. The major results are that generation and recognition are undecidable for segmental models, and that recognition is NP-hard for that portion of segmental phonology subsumed by modern autosegmental models. Formal restrictions are evaluated. | Computational structure of generative phonology and its relation to language comprehension
d219307097 | ||
d16449394 | We examine the task of separating types from brands in the food domain. Framing the problem as a ranking task, we convert simple textual features extracted from a domain-specific corpus into a ranker without the need for labeled training data. Such a method should rank brands (e.g. sprite) higher than types (e.g. lemonade). Apart from that, we also exploit knowledge induced by semi-supervised graph-based clustering for two different purposes. On the one hand, we produce an auxiliary categorization of food items according to the Food Guide Pyramid, and assume that a food item is a type when it belongs to a category unlikely to contain brands. On the other hand, we directly model the task of brand detection using seeds provided by the output of the textual ranking features. We also harness Wikipedia articles as an additional knowledge source. | Separating Brands from Types: an Investigation of Different Features for the Food Domain
d208513183 | We introduce NoReC_fine, a dataset for fine-grained sentiment analysis in Norwegian, annotated with respect to polar expressions, targets and holders of opinion. The underlying texts are taken from a corpus of professionally authored reviews from multiple news sources and across a wide variety of domains, including literature, games, music, products, movies and more. We here present a detailed description of this annotation effort. We provide an overview of the developed annotation guidelines, illustrated with examples, and present an analysis of inter-annotator agreement. We also report the first experimental results on the dataset, intended as a preliminary benchmark for further experiments. | A Fine-Grained Sentiment Dataset for Norwegian
d236917216 | ||
d9192200 | HITIQA is an interactive question answering technology designed to allow intelligence analysts and other users of information systems to pose questions in natural language and obtain relevant answers, or the assistance they require in order to perform their tasks. Our objective in HITIQA is to allow the user to submit exploratory, analytical, non-factual questions, such as "What has been Russia's reaction to U.S. bombing of Kosovo?" The distinguishing property of such questions is that one cannot generally anticipate what might constitute the answer. While certain types of things may be expected (e.g., diplomatic statements), the answer is heavily conditioned by what information is in fact available on the topic. From a practical viewpoint, analytical questions are often underspecified, thus casting a broad net on a space of possible answers. Therefore, clarification dialogue is often needed to negotiate with the user the exact scope and intent of the question. | HITIQA: An Interactive Question Answering System A Preliminary Report |
d9836852 | One of the questions asked of the participants in the workshop is: "What is the evidence for the existence of intentions? What types of intentions are useful to identify for communication?" The former part of the question has already been positively answered by many researchers, such as [GS86; GS90], [LC91], [LA90]; the same answer emerges from the abstracts submitted to the workshop, and I will therefore take it for granted. | Speaker's Intentions and Beliefs in Negative Imperatives
d256461417 | Flowchart grounded dialog systems converse with users by following a given flowchart and a corpus of FAQs. The existing state-of-the-art approach (Raghu et al., 2021) for learning such a dialog system, named FLONET, has two main limitations. (1) It uses a Retrieval Augmented Generation (RAG) framework which represents a flowchart as a bag of nodes. By doing so, it loses the connectivity structure between nodes which can aid in better response generation. (2) Typically dialogs progress with the agent asking polar (Y/N) questions, but users often respond indirectly without the explicit use of polar words. In such cases, it fails to understand the correct polarity of the answer. To overcome these issues, we propose Structure-Aware FLONET (SA-FLONET) which infuses structural constraints derived from the connectivity structure of flowcharts into the RAG framework. It uses natural language inference to better predict the polarity of indirect Y/N answers. We find that SA-FLONET outperforms FLONET, with success rate improvements of 68% and 123% in the flowchart grounded response generation and zero-shot flowchart grounded response generation tasks, respectively. | Structural Constraints and Natural Language Inference for End-to-End Flowchart Grounded Dialog Response Generation
d218974393 | ||
d16377642 | One of the main interests in the analysis of large document collections is to discover domains of discourse that are still actively developing, growing in interest and relevance at a given point in time, and to distinguish them from topics that are in stagnation or decline. The present paper describes a terminologically inspired approach to this kind of task. The inputs to the method are a corpus spanning several decades of research in computational linguistics and a set of single-word terms that frequently occur in that corpus. The diachronic development of these terms is modelled by means of term life cycle information, namely the parameters relative frequency and productivity. In a second step, k-means clustering is used to identify groups of terms with similar development patterns. The paper describes a mathematical approach to modelling term productivity and discusses what kind of information can be obtained from this measure. The results of the clustering experiment are promising and strongly motivate future research. | Brave New World: Uncovering Topical Dynamics in the ACL Anthology Reference Corpus Using Term Life Cycle Information
d219309487 | ||
d248780527 | We present a novel pipeline for the collection of parallel data for the detoxification task. We collect non-toxic paraphrases for over 10,000 English toxic sentences. We also show that this pipeline can be used to distill a large existing corpus of paraphrases to get toxic-neutral sentence pairs. We release two parallel corpora which can be used for the training of detoxification models. To the best of our knowledge, these are the first parallel datasets for this task. We describe our pipeline in detail to make it fast to set up for a new language or domain, thus contributing to faster and easier development of new parallel resources. We train several detoxification models on the collected data and compare them with several baselines and state-of-the-art unsupervised approaches. We conduct both automatic and manual evaluations. All models trained on parallel data outperform the state-of-the-art unsupervised models by a large margin. This suggests that our novel datasets can boost the performance of detoxification systems. | ParaDetox: Detoxification with Parallel Data
d17772255 | This paper consists of two parts. The first part discusses commonsense knowledge about events as manifested in language. Three kinds of knowledge are identified: compositional, durational, and aspectual. Compositional knowledge concerns the internal structuring of events into preparatory, initial, main (the body), final, and resulting stages. Durational knowledge concerns durational relations between events and stages of the same event. Durational knowledge can be expressed as qualitative dependencies among the parameters of the event and as its time scale. The notion of time scale is introduced and related to shared cyclical events (time units). In discussing aspectual knowledge, three notions are distinguished: aspect as a grammatical category of the verb, implemented by affixes, auxiliaries, and such; aspectual class, which is a characteristic of a lexical meaning; and the aspectual perspective of the sentence, determined by the position of the Reference Time (RT) with respect to the event described by a finite clause. I argue that an aspectual classification of situations evolving in time should be based on such considerations as the kinds of resources they consume and the goals they achieve. A detailed classification of instantaneous and non-instantaneous events is developed. The second part of the paper discusses how this knowledge is employed in understanding extended narratives. Temporal discontinuities, in conjunction with other kinds of discontinuities identified in the paper, signal boundaries between discourse segments; within each segment, all three varieties of temporal knowledge help establish the temporal relations among narrated events. | ASPECT, ASPECTUAL CLASS, AND THE TEMPORAL STRUCTURE OF NARRATIVE
d197640450 | Naming and titling have been discussed in sociolinguistics as markers of status or solidarity. However, these functions have not been studied on a larger scale or for social media data. We collect a corpus of tweets mentioning presidents of six G20 countries by various naming forms. We show that naming variation relates to stance towards the president in a way that is suggestive of a framing effect mediated by respectfulness. This confirms sociolinguistic theory of naming and titling as markers of status. | Not My President: How Names and Titles Frame Political Figures |
d208245087 | This paper describes our submission to the shared task on "Multi-hop Inference Explanation Regeneration" in the TextGraphs workshop at EMNLP 2019 (Jansen and Ustalov, 2019). Our system identifies chains of facts relevant to explain an answer to an elementary science examination question. To counter the problem of 'spurious chains' leading to 'semantic drifts', we train a ranker that uses contextualized representations of facts to score their relevance for explaining an answer to a question. Our system was ranked first w.r.t. the mean average precision (MAP) metric, outperforming the second best system by 14.95 points. | Chains-of-Reasoning at TextGraphs 2019 Shared Task: Reasoning over Chains of Facts for Explainable Multi-hop Inference
d44135464 | In this paper we describe the system used by the ValenTO team in the shared task on Irony Detection in English Tweets at SemEval 2018. The system takes as its starting point emotIDM, an irony detection model that explores the use of affective features based on a wide range of lexical resources available for English, reflecting different facets of affect. We experimented with different settings, exploiting different classifiers and features, and participated in both the binary irony detection task and the task devoted to distinguishing among different types of irony. We report on the results obtained by our system in both a constrained setting and an unconstrained setting, where we explored the impact of using additional data in the training phase, such as corpora annotated for the presence of irony or sarcasm from the state of the art. Overall, the performance of our system seems to validate the important role that affective information plays in identifying ironic content in Twitter. | ValenTO at SemEval-2018 Task 3: Exploring the Role of Affective Content for Detecting Irony in English Tweets
d2234587 | Topic modelling has been popularly used to discover latent topics from text documents. Most existing models work on individual words. That is, they treat each topic as a distribution over words. However, using only individual words has several shortcomings. First, it increases the co-occurrences of words which may be incorrect because a phrase with two words is not equivalent to two separate words. These extra and often incorrect co-occurrences result in poorer output topics. A multi-word phrase should be treated as one term by itself. Second, individual words are often difficult to use in practice because the meaning of a word in a phrase and the meaning of a word in isolation can be quite different. Third, topics represented as lists of individual words are also difficult to understand for users who are not domain experts and have no knowledge of topic models. In this paper, we aim to solve these problems by considering phrases in their natural form. One simple way to include phrases in topic modelling is to treat each phrase as a single term. However, this method is not ideal because the meaning of a phrase is often related to its composite words. That information is lost. This paper proposes to use the generalized Pólya Urn (GPU) model to solve the problem, which gives superior results. GPU enables the connection of a phrase with its content words naturally. Our experimental results using 32 review datasets show that the proposed approach is highly effective. | Review Topic Discovery with Phrases using the Pólya Urn Model
d39984436 | Detection and correction of Chinese grammatical errors have been two of the major challenges for Chinese automatic grammatical error diagnosis. This paper presents an N-gram model for automatic detection and correction of Chinese grammatical errors in the NLPTEA 2017 task. The experiment results show that the proposed method is good at correcting Chinese grammatical errors. | N-gram Model for Chinese Grammatical Error Diagnosis
d9720760 | In this paper we discuss the design, acquisition and preprocessing of a Czech audio-visual speech corpus. The corpus is intended for training and testing of an existing audio-visual speech recognition system. The name of the database is UWB-07-ICAVR, where ICAVR stands for Impaired Condition Audio Visual speech Recognition. The corpus consists of 10,000 utterances of continuous speech obtained from 50 speakers. The total length of the database is 25 hours. Each utterance is stored as a separate sentence. The corpus extends existing databases by covering conditions of variable illumination. We recorded 50 speakers, half of them men and half women. Recording was done with two cameras and two microphones. The database introduced in this paper can be used for testing of visual parameterization in audio-visual speech recognition (AVSR). The corpus can be easily split into training and testing parts. Each speaker pronounced 200 sentences: the first 50 were the same for all speakers, and the rest were different. Six types of illumination were covered. The session for one speaker fits on one DVD disk. All files are accompanied by visual labels. The labels specify a region of interest (the mouth and the area around it, specified by a bounding box). The actual pronunciation of each sentence is transcribed into a text file. | Design and Recording of Czech Audio-Visual Database with Impaired Conditions for Continuous Speech Recognition
d2382442 | Machine translation (MT) has recently been formulated in terms of constraint-based knowledge representation and unification theories, but it is becoming more and more evident that it is not possible to design a practical MT system without an adequate method of handling mismatches between semantic representations in the source and target languages. In this paper, we introduce the idea of "information-based" MT, which is considerably more flexible than interlingual MT or conventional transfer-based MT. | Tricolor DAGs for Machine Translation
d9778637 | Building machine translation (MT) systems for many minority languages in the world is a serious challenge. For many minor languages there is little machine-readable text, few knowledgeable linguists, and little money available for MT development. For these reasons, it becomes very important for an MT system to make the best use of its resources, both labeled and unlabeled, in building a quality system. In this paper we argue that the traditional active learning setup may not be the right fit for seeking the annotations required for building a syntax-based MT system for minority languages. We posit that a relatively new variant of active learning, Proactive Learning, is more suitable for this task. | Proactive Learning for Building Machine Translation Systems for Minority Languages
d262731766 | Recent years have seen the appearance of adaptive learning technologies that offer significant potential for bringing about fundamental improvements in education. A promising development in this arena is the emergence of narrative-centered learning environments, which integrate the inferential capabilities of intelligent tutoring systems with the rich gameplay supported by commercial game engines. While narrative-centered learning environments have demonstrated effectiveness in both student learning and engagement, their capabilities will increase dramatically with expressive NLG. In this talk we will introduce the principles motivating the design of narrative-centered learning environments, discuss the role of NLG in narrative-centered learning, consider the interaction of NLG, affect, and learning, and explore how next-generation learning environments will push the envelope in expressive NLG. The speaker's research focuses on intelligent tutoring systems, computational linguistics, and intelligent user interfaces, and has been recognized by several Best Paper awards. His research interests include intelligent game-based learning environments, computational models of narrative, affective computing, creativity-enhancing technologies, and tutorial dialogue. | Invited Speaker
d13441181 | When translating Japanese nouns into English, we face the problem of articles and numbers, which the Japanese language does not have but which are necessary for English composition. To solve this difficult problem, we classified the referential property and the number of nouns into three types each. This paper shows that the referential property and the number of nouns in a sentence can be estimated fairly reliably from the words in the sentence. Many rules for the estimation were written in forms similar to rewriting rules in expert systems. We obtained correct recognition scores of 85.5% and 89.0% in the estimation of the referential property and the number, respectively, for the sentences which were used for the construction of our rules. We tested these rules on some other texts, and obtained scores of 68.9% and 85.6%, respectively. | Determination of referential property and number of nouns in Japanese sentences for machine translation into English
d9613237 | Ontologies and datasets for the Semantic Web are encoded in OWL formalisms that are not easily comprehended by people. To make ontologies accessible to human domain experts, several research groups have developed ontology verbalisers using Natural Language Generation. In practice ontologies are usually composed of simple axioms, so that realising them separately is relatively easy; there remains however the problem of producing texts that are coherent and efficient. We describe in this paper some methods for producing sentences that aggregate over sets of axioms that share the same logical structure. Because these methods are based on logical structure rather than domain-specific concepts or language-specific syntax, they are generic both as regards domain and language. | Grouping axioms for more coherent ontology descriptions |
d8828965 | We present the situated reference generation module of a hybrid human-robot interaction system that collaborates with a human user in assembling target objects from a wooden toy construction set. The system contains a sub-symbolic goal inference system which is able to detect the goals and errors of humans by analysing their verbal and non-verbal behaviour. The dialogue manager and reference generation components then use situated references to explain the errors to the human users and provide solution strategies. We describe a user study comparing the results from subjects who heard constant references to those who heard references generated by an adaptive process. There was no difference in the objective results across the two groups, but the subjects in the adaptive condition gave higher subjective ratings to the robot's abilities as a conversational partner. An analysis of the objective and subjective results found that the main predictors of subjective user satisfaction were the user's performance at the assembly task and the number of times they had to ask for instructions to be repeated. | Situated Reference in a Hybrid Human-Robot Interaction System |
d1752194 | We discuss several feature sets for novelty detection at the sentence level, using the data and procedure established in task 2 of the TREC 2004 novelty track. In particular, we investigate feature sets derived from graph representations of sentences and sets of sentences. We show that a highly connected graph produced by using sentence-level term distances and pointwise mutual information can serve as a source to extract features for novelty detection. We compare several feature sets based on such a graph representation. These feature sets allow us to increase the accuracy of an initial novelty classifier which is based on a bag-of-words representation and KL divergence. The final result ties with the best system at TREC 2004. | Graph-Based Text Representation for Novelty Detection
d1762277 | We present a sentence compression system based on synchronous context-free grammars (SCFG), following the successful noisy-channel approach of (Knight and Marcu, 2000). We define a head-driven Markovization formulation of SCFG deletion rules, which allows us to lexicalize probabilities of constituent deletions. We also use a robust approach for tree-to-tree alignment between arbitrary document-abstract parallel corpora, which lets us train lexicalized models with much more data than previous approaches relying exclusively on scarcely available document-compression corpora. Finally, we evaluate different Markovized models, and find that our selected best model is one that exploits head-modifier bilexicalization to accurately distinguish adjuncts from complements, and that produces sentences that were judged more grammatical than those generated by previous work. | Lexicalized Markov Grammars for Sentence Compression *
d15386287 | This paper presents the system submitted by University of Wolverhampton for SemEval-2014 task 1. We proposed a machine learning approach which is based on features extracted using Typed Dependencies, Paraphrasing, Machine Translation evaluation metrics, Quality Estimation metrics and Corpus Pattern Analysis. Our system performed satisfactorily and obtained 0.711 Pearson correlation for the semantic relatedness task and 78.52% accuracy for the textual entailment task. | UoW: NLP Techniques Developed at the University of Wolverhampton for Semantic Similarity and Textual Entailment |