d53248322
Developing conventional natural language generation systems requires extensive attention from human experts in order to craft complex sets of sentence planning rules. We propose a Bayesian nonparametric approach to learn sentence planning rules by inducing synchronous tree substitution grammars for pairs of text plans and morphosyntactically-specified dependency trees. Our system is able to learn rules which can be used to generate novel texts after training on small datasets.
Toward Bayesian Synchronous Tree Substitution Grammars for Sentence Planning
d184483311
In this paper, we present a system description for the SemEval-2019 Task 6 submission by our team. Our system takes a tweet as input and determines whether the tweet is offensive or non-offensive (Sub-task A). If a tweet is offensive, our system identifies whether it is targeted (insult or threat) or non-targeted, such as swearing (Sub-task B). For targeted tweets, our system identifies the target as an individual or a group (Sub-task C). We used data pre-processing techniques such as splitting hashtags into words, removing special characters, stop-word removal, stemming, lemmatization, capitalization handling, and an offensive word dictionary. We then used the Keras tokenizer and word embeddings for feature extraction. For classification, we used an LSTM (Long Short-Term Memory) model in the Keras framework. Our accuracy scores for Sub-tasks A, B and C are 0.8128, 0.8167 and 0.3662, respectively. Our results indicate that fine-grained classification to identify the offense target was difficult for the system. Lastly, in the future scope section, we discuss ways to improve system performance.
USF at SemEval-2019 Task 6: Offensive Language Detection Using LSTM With Word Embeddings
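The abstract above names its concrete ingredients (Keras tokenizer, word embeddings, an LSTM classifier), so a minimal sketch of that kind of pipeline may be useful. The toy tweets, labels, and hyperparameters below are assumptions for illustration, not the team's actual configuration or data.

```python
# A sketch of the pipeline the abstract describes: Keras tokenizer,
# an embedding layer (could be seeded with pretrained word embeddings),
# and an LSTM binary classifier. Toy data and hyperparameters are assumed.
import numpy as np
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, LSTM, Dense

texts = ["have a nice day", "what a lovely idea",
         "you are awful", "this is an insult"]
labels = np.array([0, 0, 1, 1])          # 1 = offensive (toy labels)

tok = Tokenizer(num_words=5000, oov_token="<unk>")
tok.fit_on_texts(texts)
X = pad_sequences(tok.texts_to_sequences(texts), maxlen=20)

model = Sequential([
    Embedding(input_dim=5000, output_dim=64),
    LSTM(64),
    Dense(1, activation="sigmoid"),       # binary decision for Sub-task A
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit(X, labels, epochs=3, verbose=0)
print(model.predict(X, verbose=0).round(2))
```

For Sub-tasks B and C, the same skeleton would be retrained on the corresponding label sets.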
d18827045
The Active Listening Corpus (ALICO) is a multimodal database of spontaneous dyadic conversations with diverse speech and gestural annotations of both dialogue partners. The annotations consist of short feedback expression transcription with corresponding communicative function interpretation as well as segmentation of interpausal units, words, rhythmic prominence intervals and vowel-to-vowel intervals. Additionally, ALICO contains head gesture annotation of both interlocutors. The corpus contributes to research on spontaneous human-human interaction, on functional relations between modalities, and timing variability in dialogue. It also provides data that differentiates between distracted and attentive listeners. We describe the main characteristics of the corpus and present the most important results obtained from analyses in recent years.
ALICO: A multimodal corpus for the study of active listening
d6229041
This paper describes Vi-xfst, a visual interface and development environment for building finite state language processing applications with the Xerox Finite State Tool, xfst. Vi-xfst lets a user construct complex regular expressions via a drag-and-drop visual interface, treating simpler regular expressions as "Lego blocks." It also enables visualization of the structure of the regular expression components, providing a bird's-eye view of the overall system and enabling a user to easily understand and track the structural and functional relationships among the components involved. Since the structure of a large regular expression (built in terms of other regular expressions) is now transparent, users can also interact with regular expressions at any level of detail, easily navigating among them for testing. Vi-xfst also keeps track of dependencies among the regular expressions at a very fine-grained level. So when a certain regular expression is modified as a result of testing, only the dependent regular expressions are recompiled; this shortens development time by avoiding file-level recompiles, which usually cause redundant regular expression compilations.
Vi-xfst: A Visual Regular Expression Development Environment for Xerox Finite State Tool
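The fine-grained recompilation idea lends itself to a small illustration: keep an inverted dependency index and recompile only the transitive dependents of a changed definition. This is a hypothetical sketch of the bookkeeping, not Vi-xfst's actual implementation.

```python
# Track which regular expressions depend on which; when one definition
# changes, recompile only its transitive dependents. Names are invented.
from collections import defaultdict

deps = {                      # definition -> definitions it uses
    "Vowel": [],
    "Consonant": [],
    "Syllable": ["Vowel", "Consonant"],
    "Word": ["Syllable"],
}

dependents = defaultdict(set)   # inverted index: definition -> its users
for name, used in deps.items():
    for u in used:
        dependents[u].add(name)

def to_recompile(changed):
    """Return every definition that (transitively) uses `changed`."""
    stale, stack = set(), [changed]
    while stack:
        n = stack.pop()
        for d in dependents[n]:
            if d not in stale:
                stale.add(d)
                stack.append(d)
    return stale

print(to_recompile("Vowel"))   # {'Syllable', 'Word'}; 'Consonant' untouched
```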
d16940412
We study the problem of jointly aligning sentence constituents and predicting their similarities. While extensive sentence similarity data exists, manually generating reference alignments and labeling the similarities of the aligned chunks is comparatively onerous. This prompts the natural question of whether we can exploit easy-to-create sentence level data to train better aligners. In this paper, we present a model that learns to jointly align constituents of two sentences and also predict their similarities. By taking advantage of both sentence and constituent level data, we show that our model achieves state-of-the-art performance at predicting alignments and constituent similarities.
Exploiting Sentence Similarities for Better Alignments
d16987118
Personal writings have inspired researchers in the fields of linguistics and psychology to study the relationship between language and culture to better understand the psychology of people across different cultures. In this paper, we explore this relation by developing cross-cultural word models to identify words with cultural bias, i.e., words that are used in significantly different ways by speakers from different cultures. Focusing specifically on two cultures, the United States and Australia, we identify a set of words with significant usage differences, and further investigate these words through feature analysis and topic modeling, shedding light on the attributes of language that contribute to these differences.

Introduction. According to Shweder et al. (1998), "to be a member of a group is to think and act in a certain way, in the light of particular goals, values, pictures of the world; and to think and act so is to belong to a group." Culture can be defined as any characteristic of a group of people which can affect and shape their beliefs and behaviors (e.g., nationality, region, state, gender, or religion). It reflects itself in people's everyday thoughts, beliefs, ideas, and actions, and understanding what people say or write in their daily lives can help us understand and differentiate cultures. In this work, we use very large corpora of personal writings in the form of blogs from multiple cultures [1] to understand cultural differences in word usage. We find inspiration in a line of research in psychology that poses that people from different cultural backgrounds and/or speaking different languages perceive the world around them differently, which is reflected in their perception of time and space (Kern, 2003; Boroditsky, 2001), body shapes (Furnham and Alibhai, 1983), or surrounding objects (Boroditsky et al., 2003). As an example, consider the study described by Boroditsky et al. (2003), which showed how the perception of objects in different languages can be affected by their gender differences. For instance, one of the words used in their study is the word "bridge," which is masculine in Spanish and feminine in German: when asked about the descriptive properties of a bridge, Spanish speakers described bridges as being big, dangerous, long, strong, sturdy, and towering, while German speakers said they are beautiful, elegant, fragile, peaceful, pretty, and slender. While this previous research has the benefit of careful in-lab studies that explore differences in world view for one dimension (e.g., time, space) or word (e.g., bridge, sun) at a time, it also has limitations in terms of the number of experiments that can be run when subjects are being brought to the lab for every new question being asked. We aim to address this shortcoming by using the power of large-scale computational linguistics, which allows us to identify cultural differences in word usage in a data-driven, bottom-up fashion. We hypothesize that we can use computational models to identify differences in word usage between cultures, regarded as an approximation of their differences in world view. Rather than starting with predetermined hypotheses (e.g., that Spanish and German speakers would have a different way of talking about bridges), we can use computational linguistics to run experiments on hundreds of words.

[1] Throughout this paper, we use the term culture to represent the nationality (country) of a group of people.
Identifying Cross-Cultural Differences in Word Usage
d259833833
Natural language processing (NLP) has shown great potential for Alzheimer's disease (AD) detection, particularly due to the adverse effect of AD on spontaneous speech. The current body of literature has directed attention toward context-based models, especially Bidirectional Encoder Representations from Transformers (BERT), owing to their exceptional ability to integrate contextual information in a wide range of NLP tasks. This comes at the cost of added model opacity and computational requirements. Taking this into consideration, we propose a Word2Vec-based model for AD detection in 108 age- and sex-matched participants who were asked to describe the Cookie Theft picture. We also investigate the effectiveness of our model by fine-tuning BERT-based sequence classification models, as well as incorporating linguistic features. Our results demonstrate that our lightweight and easy-to-implement model outperforms some of the state-of-the-art models available in the literature, as well as BERT models.
Who needs context? Classical techniques for Alzheimer's disease detection
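As a rough illustration of how lightweight such a classical pipeline can be, here is a hedged sketch: average Word2Vec vectors per transcript and fit a linear classifier. The toy transcripts, labels, and dimensions are invented; the paper's feature set and evaluation protocol are richer.

```python
# Average Word2Vec embeddings per transcript, then a linear classifier.
# Corpus, labels, and dimensions below are toy assumptions.
import numpy as np
from gensim.models import Word2Vec
from sklearn.linear_model import LogisticRegression

transcripts = [
    ["the", "boy", "is", "taking", "the", "cookie"],
    ["the", "water", "is", "overflowing", "in", "the", "sink"],
    ["the", "the", "um", "thing", "there"],
    ["um", "uh", "the", "boy", "uh"],
]
labels = np.array([0, 0, 1, 1])   # 0 = control, 1 = AD (toy labels)

# min_count=1 only because the toy corpus is tiny
w2v = Word2Vec(transcripts, vector_size=50, min_count=1, epochs=50, seed=0)

def embed(tokens):
    vecs = [w2v.wv[t] for t in tokens if t in w2v.wv]
    return np.mean(vecs, axis=0)

X = np.stack([embed(t) for t in transcripts])
clf = LogisticRegression().fit(X, labels)
print(clf.predict(X))
```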
d45491946
Comprehensive terminology is essential for a community to describe, exchange, and retrieve data. In many domains, the explosion of textual data makes automatic terminology extraction and enrichment mandatory. Automatic term extraction (or recognition) methods use natural language processing to this end. Methods combining linguistic and statistical aspects, as often proposed in the literature, address some of the problems of term extraction, such as low term frequency, the complexity of extracting multi-word terms, and the human effort needed to validate candidate terms. In this context, we present two new measures for extracting and ranking multi-word terms from domain-specific corpora, covering all of the problems mentioned. In addition, we demonstrate how using the Web to evaluate the significance of a multi-word term candidate improves precision over previously reported measures such as C-value, with experiments on the biomedical GENIA corpus. Keywords: Automatic Term Extraction, Web-based Measure, Linguistic Measure, Statistical Measure, Biomedical Natural Language Processing.
21ème Traitement Automatique des Langues Naturelles
d44056690
We describe the development of a prototype open source rule-based Icelandic→English MT system, based on the Apertium MT framework and IceNLP, a natural language processing toolkit for Icelandic. Our system, Apertium-IceNLP, is the first system in which the whole morphological and tagging component of Apertium is replaced by modules from an external system. Evaluation shows that the word error rate and the position-independent word error rate for our prototype are 50.6% and 40.8%, respectively. As expected, this is higher than the corresponding error rates of two publicly available MT systems that we used for comparison. Contrary to our expectations, the error rates of our prototype are also higher than those of a comparable system based solely on Apertium modules. Based on error analysis, we conclude that better translation quality may be achieved by replacing only the tagging component of Apertium with the corresponding module in IceNLP, while leaving morphological analysis to Apertium.
Apertium-IceNLP: A rule-based Icelandic to English machine translation system
d15035128
In this paper we describe a method for developing a virtual instructor for pedestrian navigation based on real interactions between a human instructor and a human pedestrian. A virtual instructor is an agent capable of fulfilling the role of a human instructor, and its goal is to assist a pedestrian in the accomplishment of different tasks within the context of a real city. The instructor decides what to say using a generation-by-selection algorithm, based on a corpus of real interactions generated within the world of interest. The instructor is able to react to different requests by the pedestrian. It is also aware of the pedestrian's position, with a certain degree of uncertainty, and it can use different city landmarks to guide him.
A Natural Language Instructor for pedestrian navigation based on generation by selection
d51884041
We present results from a project on sentiment analysis of drama texts, more concretely the plays of Gotthold Ephraim Lessing. We conducted an annotation study to create a gold standard for a systematic evaluation. The gold standard consists of 200 speeches of Lessing's plays and was manually annotated with sentiment information by five annotators. We use the gold standard data to evaluate the performance of different German sentiment lexicons and processing configurations like lemmatization, the extension of lexicons with historical linguistic variants, and stop words elimination, to explore the influence of these parameters and to find best practices for our domain of application. The best performing configuration accomplishes an accuracy of 70%. We discuss the problems and challenges for sentiment analysis in this area and describe our next steps toward further research.
An Evaluation of Lexicon-based Sentiment Analysis Techniques for the Plays of Gotthold Ephraim Lessing
d171755024
d15650401
Contrary to popular belief, idioms show a high degree of formal flexibility, ranging from word-like idioms to those which are almost like regular phrases. However, we argue that their meanings are not transparent, i.e., they are non-compositional, regardless of their syntactic flexibility. In this paper, we first introduce a framework to represent their syntactic flexibility, developed in Chae (2014), and observe some consequences of the framework for the lexicon and the set of rules. Secondly, there seem to be some phenomena which can only be handled under the assumption that the component parts of idioms have their own separate meanings. However, we show that all these phenomena, focusing on the behavior of idiom-internal adjectives, can be accounted for effectively without assuming separate meanings of the parts, which confirms the non-transparency of idioms.
Idioms: Formally Flexible but Semantically Non-transparent
d249204499
We address the task of automatically distinguishing between human-translated (HT) and machine translated (MT) texts. Following recent work, we fine-tune pretrained language models (LMs) to perform this task. Our work differs in that we use state-of-the-art pre-trained LMs, as well as the test sets of the WMT news shared tasks as training data, to ensure the sentences were not seen during training of the MT system itself. Moreover, we analyse performance for a number of different experimental setups, such as adding translationese data, going beyond the sentence-level and normalizing punctuation. We show that (i) choosing a state-of-the-art LM can make quite a difference: our best baseline system (DEBERTA) outperforms both BERT and ROBERTA by over 3% accuracy, (ii) adding translationese data is only beneficial if there is not much data available, (iii) considerable improvements can be obtained by classifying at the document-level and (iv) normalizing punctuation and thus avoiding (some) shortcuts has no impact on model performance.
Automatic Discrimination of Human and Neural Machine Translation: A Study with Multiple Pre-Trained Models and Longer Context
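Fine-tuning a pretrained LM for this binary task follows the standard Hugging Face recipe, and a sketch shows its shape. The model name, toy sentences, and hyperparameters are illustrative assumptions rather than the paper's exact setup; it assumes transformers with a PyTorch backend.

```python
# Sketch: fine-tune a pretrained LM for HT-vs-MT classification with the
# Hugging Face Trainer API. Data and settings are illustrative only.
import torch
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

tok = AutoTokenizer.from_pretrained("microsoft/deberta-v3-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "microsoft/deberta-v3-base", num_labels=2)   # 0 = human, 1 = machine

texts = ["A human wrote this sentence.",
         "This sentence was produced by a system."]
labels = [0, 1]
enc = tok(texts, truncation=True, padding=True, return_tensors="pt")

class HTMTDataset(torch.utils.data.Dataset):
    def __len__(self):
        return len(labels)
    def __getitem__(self, i):
        item = {k: v[i] for k, v in enc.items()}
        item["labels"] = torch.tensor(labels[i])
        return item

args = TrainingArguments(output_dir="htmt", num_train_epochs=1,
                         per_device_train_batch_size=2)
Trainer(model=model, args=args, train_dataset=HTMTDataset()).train()
```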
d324062
Even though historical texts reveal a lot of interesting information on culture and social structure in the past, information access is limited and in most cases the only way to find the information you are looking for is to manually go through large volumes of text, searching for interesting text segments. In this paper we will explore the idea of facilitating this time-consuming manual effort, using existing natural language processing techniques. Attention is focused on automatically identifying verbs in early modern Swedish texts (1550-1800). The results indicate that it is possible to identify linguistic categories such as verbs in texts from this period with a high level of precision and recall, using morphological tools developed for present-day Swedish, if the text is normalised into a more modern spelling before the morphological tools are applied.
Automatic Verb Extraction from Historical Swedish Texts
d51616502
Briefly Noted: Opinion Mining and Sentiment Analysis
d15230543
Published literature in molecular genetics may collectively provide much information on gene regulation networks. Dedicated computational approaches are required to sift through large volumes of text and infer gene interactions. We propose a novel sieve-based relation extraction system that uses linear-chain conditional random fields and rules. We also introduce a new skip-mention data representation to enable distant relation extraction using first-order models. To account for a variety of relation types, multiple models are inferred. The system was applied to the BioNLP 2013 Gene Regulation Network Shared Task. Our approach was ranked first of five, with a slot error rate of 0.73.
Extracting Gene Regulation Networks Using Linear-Chain Conditional Random Fields and Rules
d241583381
We propose a personalized dialogue scenario generation system which transmits efficient and coherent information using a real-time extractive summarization method optimized by an Ising machine. The summarization problem is formulated as a quadratic unconstrained binary optimization (QUBO) problem, which extracts sentences that maximize the sum of the degrees of the user's interest in the sentences of documents, with the discourse structure of each document and the total utterance time as constraints. To evaluate the proposed method, we constructed a news article corpus with annotations of discourse structure, users' profiles, and interest in sentences and topics. The experimental results confirmed that a Digital Annealer, a simulated annealing-based Ising machine, can solve our QUBO model in practical time without violating the constraints on this dataset.
Personalized Extractive Summarization Using an Ising Machine Towards Real-time Generation of Efficient and Coherent Dialogue Scenarios
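The QUBO framing can be made concrete with a toy objective: reward interesting sentences and penalize exceeding the utterance-time budget, then search with plain simulated annealing as a software stand-in for the Digital Annealer. All scores, times, and weights below are invented.

```python
# Toy extractive-summarization objective in the spirit of the QUBO
# formulation: binary sentence selection, interest reward, time penalty,
# optimized by vanilla simulated annealing (not a real Ising machine).
import math, random

interest = [0.9, 0.2, 0.7, 0.4, 0.8]   # per-sentence interest scores
seconds  = [8,   5,   7,   4,   9]     # utterance time per sentence
budget, lam = 20, 1.0                  # time budget and penalty weight

def energy(x):
    gain = sum(s * xi for s, xi in zip(interest, x))
    time = sum(t * xi for t, xi in zip(seconds, x))
    return -gain + lam * max(0, time - budget) ** 2

random.seed(0)
x = [0] * len(interest)
for step in range(5000):               # anneal: flip one bit per step
    temp = max(1e-3, 1.0 - step / 5000)
    y = x[:]
    y[random.randrange(len(y))] ^= 1
    e_x, e_y = energy(x), energy(y)
    if e_y < e_x or random.random() < math.exp((e_x - e_y) / temp):
        x = y
print(x, energy(x))                    # selected sentences, final objective
```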
d4948776
In this paper we present an iterative methodology to improve classifier performance by incorporating linguistic knowledge, and propose a way to incorporate domain rules into the learning process. We applied the methodology to the tasks of hedge cue recognition and scope detection and obtained competitive results on a publicly available corpus.
Improving Speculative Language Detection using Linguistic Knowledge
d902939
Completely data-driven grammar training is prone to over-fitting. Human-defined word class knowledge is useful to address this issue. However, a manual word class taxonomy may be unreliable and irrational for statistical natural language processing, aside from its insufficient coverage of linguistic phenomena and limited domain adaptability. In this paper, a formalized representation of function word subcategorization is developed for parsing in an automatic manner. The function word classification, representing intrinsic features of syntactic usage, is used to supervise the grammar induction, and the structure of the taxonomy is learned simultaneously. The grammar learning process is no longer unilaterally supervised by hierarchical knowledge, but is an interactive process between knowledge structure learning and grammar training. The established taxonomy implies the stochastic significance of the diversified syntactic features. Experiments on both the Penn Chinese Treebank and the Tsinghua Treebank show that the proposed method improves parsing performance by 1.6% and 7.6%, respectively, over the baseline.
Learning the Taxonomy of Function Words for Parsing
d12738488
This paper describes the input specification language of the WAG Sentence Generation system. The input is described in terms of Halliday's (1978) three meaning components: ideational meaning (the propositional content to be expressed), interactional meaning (what the speaker intends the listener to do in making the utterance), and textual meaning (how the content is structured as a message, in terms of theme, reference, etc.).
Input Specification in the WAG Sentence Generation System
d6605684
The ambiguity of person names in the Web has become a new area of interest for NLP researchers. This challenging problem has been formulated as the task of clustering Web search results (returned in response to a person name query) according to the individual they mention. In this paper we compare the coverage, reliability and independence of a number of features that are potential information sources for this clustering task, paying special attention to the role of named entities in the texts to be clustered. Although named entities are used in most approaches, our results show that, independently of the Machine Learning or Clustering algorithm used, named entity recognition and classification per se only make a small contribution to solve the problem.
The role of named entities in Web People Search
d249204470
The development of deep learning techniques has allowed Neural Machine Translation (NMT) models to become extremely powerful, given sufficient training data and training time. However, such translation models struggle when translating text of a new or unfamiliar domain (Koehn and Knowles, 2017). A domain may be a well-defined topic, text of a specific provenance, text of unknown provenance with an identifiable vocabulary distribution, or language with some other stylometric feature. NMT models can achieve good translation performance on domain-specific data via simple tuning on a representative training corpus. However, such data-centric approaches have negative side-effects, including over-fitting, brittleness on narrow-distribution samples, and catastrophic forgetting of previously seen domains.

This thesis focuses instead on more robust approaches to domain adaptation for NMT. We consider the case where a system is adapted to a specified domain of interest, but may also need to accommodate new language or domain-mismatched sentences. The thesis also highlights that lines of MT research other than performance on traditional 'domains' can be framed as domain adaptation problems. Techniques that are effective for, e.g., adapting machine translation to a biomedical domain can also be used when making use of language representations beyond the surface level, or when encouraging better machine translation of gendered terms. Over the course of the thesis we pose and answer five research questions:

How effective are data-centric approaches to NMT domain adaptation? We find that simply selecting domain-relevant training data and fine-tuning an existing model achieves strong results, especially when a domain-specific data curriculum is used during training. However, we also demonstrate the side-effects of exposure bias and catastrophic forgetting.

Given an adaptation set, what training schemes improve NMT quality? We investigate two variations on the NMT adaptation algorithm: regularized tuning, including Elastic Weight Consolidation, and a new variant of Minimum Risk Training. We show they can mitigate the pitfalls of data-centric adaptation. Aside from avoiding the failure modes of data-centric methods, these methods may also give better model convergence.

Can domain adaptation help when the test domain is unknown? Most approaches to domain adaptation in the literature assume any unseen test data of interest has a known, fixed domain, with a matching set of tuning data. This thesis works towards relaxing these assumptions. We show that adapting sequentially across domains with regularization can achieve good cross-domain performance without knowing the specific test domain. We also explore domain-adaptive model ensembling and automatic model selection. We find this can outperform oracle approaches, which select the best model for inference using known provenance labels.

Can changing data representation have similar effects to changing data domain? Unlike data domain, data representation (for example, the choice of subword granularity or the use of syntactic annotation) does not change meaning or correspond to provenance. However, like domain, it can affect the information available to the model, and therefore impacts NMT quality for a given input. We combine multiple representations in a single model or in ensembles in a way reminiscent of multi-domain translation. In particular, we develop a scheme for ensembles of models producing multiple target language representations, and show that multi-representation ensembles improve syntax-based NMT.

Can gender bias in NMT systems be mitigated by treating it as a domain? We show that translation of gendered language is strongly influenced by vocabulary distributions in the training data, a hallmark of a domain. We also show that data selection methods have a strong effect on apparent NMT gender bias. We apply techniques from elsewhere in the thesis to tune NMT on a 'gender' domain, specifically regularized adaptation and multi-domain inference. We show this can improve gendered language translation while maintaining generic translation quality.

Human language itself is constantly adapting, and people's interactions with and expectations of MT are likewise evolving. With this thesis we hope to draw attention to the possible benefits and applications of different approaches to adapting machine translation. We hope that future work on adaptive NMT will focus not only on the language of immediate interest but also on the machine translation abilities or tendencies that we wish to maintain or abandon.
Domain Adaptation for Neural Machine Translation
d6552261
Current work on automatic opinion mining has ignored opinion targets expressed by anaphoric pronouns, thereby missing a significant number of opinion targets. In this paper we empirically evaluate whether using an off-the-shelf anaphora resolution algorithm can improve the performance of a baseline opinion mining system. We present an analysis based on two different anaphora resolution systems. Our experiments on a movie review corpus demonstrate that an unsupervised anaphora resolution algorithm significantly improves opinion target extraction. We furthermore suggest domain- and task-specific extensions to an off-the-shelf algorithm which in turn yield significant improvements.
Using Anaphora Resolution to Improve Opinion Target Identification in Movie Reviews
d220047307
Despite the pervasiveness of clinical depression in modern society, professional help remains highly stigmatized, inaccessible, and expensive. Accurately diagnosing depression is difficult, requiring time-intensive interviews, assessments, and analysis. Hence, automated methods that can assess linguistic patterns in these interviews could help psychiatric professionals make faster, more informed decisions about diagnosis. We propose JLPC, a method that analyzes interview transcripts to identify depression while jointly categorizing interview prompts into latent categories. This latent categorization allows the model to identify high-level conversational contexts that influence patterns of language in depressed individuals. We show that the proposed model not only outperforms competitive baselines, but that its latent prompt categories provide psycholinguistic insights about depression.
Predicting Depression in Screening Interviews from Latent Categorization of Interview Prompts
d203200649
d8855547
Hebrew includes a very productive noun-compounding construction called smixut. Because smixut is marked morphologically and is restricted by many syntactic constraints, it has been the focus of many descriptive studies in Hebrew grammar. We present the treatment of smixut in HUGG, a FUF-based syntactic realization system capable of producing complex noun phrases in Hebrew. We contrast the treatment of smixut with noun-compounding in English and illustrate the potential for paraphrasing it introduces. We specifically address the issue of determining when a smixut construction can be generated as opposed to other semantically equivalent constructs. We investigate several competing hypotheses: smixut is lexically, semantically and/or pragmatically determined. For each hypothesis, we explain why the decision to produce a smixut construction cannot be reduced to a computation over features produced by an outside module that would not need to know about the smixut phenomenon. We conclude that smixut provides yet another theoretical example where the interface that a syntactic realization component presents to the other components of a generation architecture cannot be made as isolated as we would hope. While the syntactic constraints on smixut are encapsulated within HUGG, the input specification language to HUGG must contain a feature that specifies that smixut is requested if possible. However, because smixut accounts for close to half the cases of NP modifiers observed in a corpus of complex NPs, and it appears to be the unmarked realization form for some frequent semantic relations, we empirically evaluate a default setting strategy for the feature use-smixut based on a simple semantic classification of the head-modifier relations in the NP. This study provides a solid ground for the definition of a small set of predicates in the input specification language to HUGG that has applications beyond the selection of smixut (for the determination of the order of modifiers in the NP and the use of stacking vs. conjunction) and for the definition of a bilingual input specification language.
GENERATION OF NOUN COMPOUNDS IN HEBREW: CAN SYNTACTIC KNOWLEDGE BE FULLY ENCAPSULATED?
d36544987
We assume that unknown words with internal structure (affixed words or compounds) can provide speakers with linguistic cues as to their meaning, and thus help their decoding and understanding. To verify this hypothesis, we propose to work with a set of French medical words. These words are annotated by five annotators. Then, two kinds of analysis are performed: analysis of the evolution of understandable and non-understandable words (globally and according to some suffixes), and analysis of clusters created with unsupervised algorithms on the basis of linguistic and extralinguistic features of the studied words. Our results suggest that, depending on the linguistic sensitivity of annotators, technical words can be decoded and become understandable. As for the clusters, some of them distinguish between understandable and non-understandable words. Resources built in this work will be made freely available for research purposes.
Understanding of unknown medical words
d18782861
Usually, unsupervised dependency parsing tries to optimize the probability of a corpus by modifying the dependency model that was presumably used to generate the corpus. In this article we explore a different view, in which a dependency structure is, among other things, a partial order on the nodes in terms of centrality or saliency. Under this assumption we model the partial order directly and derive dependency trees from this order. The result is an approach to unsupervised dependency parsing that is very different from standard ones in that it requires no training data. Each sentence induces a model from which the parse is read off. Our approach is evaluated on data from 12 different languages. Two scenarios are considered: a scenario in which information about part-of-speech is available, and a scenario in which parsing relies only on word forms and distributional clusters. Our approach is competitive with the state-of-the-art in both scenarios.
From ranked words to dependency trees: two-stage unsupervised non-projective dependency parsing
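One simple way to read a dependency tree off a saliency ordering, in the spirit of the abstract, is to attach each word to the nearest word of higher rank, with the top-ranked word as root. The ranking below is hand-picked for illustration; the paper induces its ordering per sentence and handles ties and non-projectivity more carefully.

```python
# Derive a head assignment from a centrality ranking: every word attaches
# to the nearest higher-ranked word; the top-ranked word becomes the root.
sentence = ["the", "dog", "chased", "a", "cat"]
rank = {"chased": 3, "dog": 2, "cat": 2, "the": 1, "a": 1}

def heads(words):
    out = {}
    for i, w in enumerate(words):
        cands = [j for j, v in enumerate(words)
                 if j != i and rank[v] > rank[w]]
        out[i] = min(cands, key=lambda j: abs(j - i)) if cands else -1
    return out            # -1 marks the root

print(heads(sentence))    # {0: 1, 1: 2, 2: -1, 3: 2, 4: 2}
```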
d10767908
Just as programming is the traditional introduction to computer science, writing grammars by hand is an excellent introduction to many topics in computational linguistics. We present and justify a well-tested introductory activity in which teams of mixed background compete to write probabilistic context-free grammars of English. The exercise brings together symbolic, probabilistic, algorithmic, and experimental issues in a way that is accessible to novices and enjoyable.
Competitive Grammar Writing
d219310197
d29077435
Transcription of speech is an important part of language documentation, and yet speech recognition technology has not been widely harnessed to aid linguists. We explore the use of a neural network architecture with the connectionist temporal classification loss function for phonemic and tonal transcription in a language documentation setting. In this framework, we explore jointly modelling phonemes and tones versus modelling them separately, and assess the importance of pitch information versus phonemic context for tonal prediction. Experiments on two tonal languages, Yongning Na and Eastern Chatino, show the changes in recognition performance as training data is scaled from 10 minutes to 150 minutes. We discuss the findings from incorporating this technology into the linguistic workflow for documenting Yongning Na, which show the method's promise in improving efficiency, minimizing typographical errors, and maintaining the transcription's faithfulness to the acoustic signal, while highlighting phonetic and phonemic facts for linguistic consideration.
Phonemic transcription of low-resource tonal languages
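The training signal described here is the CTC loss over per-frame phoneme (or tone) posteriors; a minimal PyTorch sketch looks as follows. The shapes, label inventory, and random "acoustic" output are assumptions standing in for a real recurrent network over speech features.

```python
# Connectionist temporal classification over per-frame label posteriors.
import torch
import torch.nn as nn

T, N, C = 50, 1, 12    # frames, batch size, label classes (blank = 0)
# stand-in for a recurrent acoustic model's (log-)output over T frames
log_probs = torch.randn(T, N, C, requires_grad=True).log_softmax(dim=2)
targets = torch.tensor([[3, 7, 7, 2]])   # toy phonemic transcription
input_lengths = torch.tensor([T])
target_lengths = torch.tensor([4])

ctc = nn.CTCLoss(blank=0)
loss = ctc(log_probs, targets, input_lengths, target_lengths)
loss.backward()        # in practice this gradient trains the network
print(float(loss))
```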
d220059524
d44245205
The IATE project was launched in early 2000 for the creation of a single central terminology database for all the institutions, agencies and other bodies of the European Union. By mid-2001, it had reached the prototype phase. It is evident that the attempt to unite the terminology that has been created in different institutions, with different approaches to terminology and different working cultures, was not an easy task. Although the implementation of the system has, from a technical point of view, already reached a rather advanced stage, it is predictable that user feedback during the prototype and pilot phases will still lead to a number of changes. The biggest challenge of the project, however, lies in its introduction into the terminology and translation workflow of the participating bodies. This is illustrated in the second part of the paper by the example of the European Parliament's Translation Service.
The IATE Project -Towards a Single Terminology Database For the European Union PART ONE: CURRENT STATUS OF THE IATE PROJECT
d2294286
A Robust Retrieval Engine for Proximal and Structural Search
d2913101
We introduce a word segmentation approach to languages where word boundaries are not orthographically marked, with application to Phrase-Based Statistical Machine Translation (PB-SMT). Instead of using manually segmented monolingual domain-specific corpora to train segmenters, we make use of bilingual corpora and statistical word alignment techniques. First of all, our approach is adapted for the specific translation task at hand by taking the corresponding source (target) language into account. Secondly, this approach does not rely on manually segmented training data so that it can be automatically adapted for different domains. We evaluate the performance of our segmentation approach on PB-SMT tasks from two domains and demonstrate that our approach scores consistently among the best results across different data conditions.
Bilingually Motivated Domain-Adapted Word Segmentation for Statistical Machine Translation
d18968839
We construct a large corpus of Japanese predicate phrases for synonym-antonym relations. The corpus consists of 7,278 pairs of predicates such as "receive-permission (ACC)" vs. "obtain-permission (ACC)", in which each predicate pair is accompanied by a noun phrase and case information. The relations are categorized as synonyms, entailment, antonyms, or unrelated. Antonyms are further categorized into three different classes depending on their aspect of oppositeness. Using the data as a training corpus, we conduct the supervised binary classification of synonymous predicates based on linguistically-motivated features. Combining features that are characteristic of synonymous predicates with those that are characteristic of antonymous predicates, we succeed in automatically identifying synonymous predicates at the high F-score of 0.92, a 0.4 improvement over the baseline method of using the Japanese WordNet. The results of an experiment confirm that the quality of the corpus is high enough to achieve automatic classification. To the best of our knowledge, this is the first and the largest publicly available corpus of Japanese predicate phrases for synonym-antonym relations.
Constructing a Corpus of Japanese Predicate Phrases for Synonym/Antonym Relations
d252847514
Socially Assistive Robots (SARs) have the potential to play an increasingly important role in a variety of contexts including healthcare, but most existing systems have very limited interactive capabilities. We will demonstrate a robot receptionist that not only supports task-based and social dialogue via natural spoken conversation but is also capable of visually grounded dialogue; able to perceive and discuss the shared physical environment (e.g. helping users to locate personal belongings or objects of interest). Task-based dialogues include check-in, navigation and FAQs about facilities, alongside social features such as chit-chat, access to the latest news and a quiz game to play while waiting. We also show how visual context (objects and their spatial relations) can be combined with linguistic representations of dialogue context, to support visual dialogue and question answering. We will demonstrate the system on a humanoid ARI robot, which is being deployed in a hospital reception area.
A Visually-Aware Conversational Robot Receptionist
d12617502
The syntax and semantic analyses of natural language are described from the standpoint of man-machine communication. The knowledge-based system KAUS (Knowledge Acquisition and Utilization System), which has capabilities of deductive inference and automatic program generation for database access, is utilized for this purpose. We try to perform syntax and semantic analyses of English sentences more or less concurrently by defining the correspondence between the basic patterns of English and the extended atomic formula in the framework of KAUS. Knowledge representation based on sets and logic, and the sentence analysis utilizing this knowledge, are given with some examples.
PROCESSING OF SYNTAX AND SEMANTICS OF NATURAL LANGUAGE BY PREDICATE LOGIC
d257985687
Most sentiment analysis tools deal mainly with Modern Standard Arabic (MSA), and few of them take dialects into account. To our knowledge, no freely available tool exists for sentiment analysis of texts written in Algerian dialect. This article presents a tool for sentiment analysis of messages written in Algerian dialect. The tool is based on an approach combining the use of lexicons with a specific treatment of agglutination. We evaluated our approach using two sentiment lexicons and a test corpus containing 749 messages. The results obtained are encouraging and show continuous improvement after each step of the approach. Keywords: sentiment analysis, Algerian dialect, sentiment lexicon, agglutination.
A Lexicon-Based Approach to Sentiment Analysis of the Algerian Dialect
d258765271
Resource-efficiency is a growing concern in the NLP community. But what are the resources we care about and why? How do we measure efficiency in a way that is reliable and relevant? And how do we balance efficiency and other important concerns? Based on a review of the emerging literature on the subject, we discuss different ways of conceptualizing efficiency in terms of product and cost, using a simple case study on fine-tuning and knowledge distillation for illustration. We propose a novel metric of amortized efficiency that is better suited for life-cycle analysis than existing metrics.
On the Concept of Resource-Efficiency in NLP
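A back-of-the-envelope version of amortization makes the idea tangible: spread the one-off training (or distillation) cost over the model's expected lifetime of queries. The formula and the numbers below are illustrative assumptions, not the paper's actual metric.

```python
# Amortized cost accounting: one-off training cost is spread across the
# model's lifetime of inference calls. All numbers are invented.
def amortized_cost_per_query(train_cost_kwh, infer_cost_kwh, lifetime_queries):
    return train_cost_kwh / lifetime_queries + infer_cost_kwh

big = amortized_cost_per_query(train_cost_kwh=500.0,
                               infer_cost_kwh=0.002,
                               lifetime_queries=1_000_000)
small = amortized_cost_per_query(train_cost_kwh=600.0,   # distillation adds training cost
                                 infer_cost_kwh=0.0005,  # but inference is cheaper
                                 lifetime_queries=1_000_000)
print(big, small)
```

On these made-up numbers, the distilled model's extra training cost is recovered once enough queries are served, which is exactly the kind of trade-off a life-cycle view exposes.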
d6271137
The Representation of Derivable Information in Memory: When What Might Have Been Left Unsaid Is Said
d209058570
Task oriented language understanding (LU) in human-to-machine (H2M) conversations has been extensively studied for personal digital assistants. In this work, we extend the task oriented LU problem to human-to-human (H2H) conversations, focusing on the slot tagging task. Recent advances on LU in H2M conversations have shown accuracy improvements by adding encoded knowledge from different sources. Inspired by this, we explore several variants of a bidirectional LSTM architecture that relies on different knowledge sources, such as Web data, search engine click logs, expert feedback from H2M models, as well as previous utterances in the conversation. We also propose ensemble techniques that aggregate these different knowledge sources into a single model. Experimental evaluation on a four-turn Twitter dataset in the restaurant and music domains shows improvements in the slot tagging F1-score of up to 6.09% compared to existing approaches.
Slot Tagging for Task Oriented Spoken Language Understanding in Human-to-human Conversation Scenarios
d14274174
The first half of this general survey covers MT and translation tools in use, including translator workstations, software localisation, and recent commercial and in-house MT systems. The second half covers the research scene, multilingual projects supported by the European Union, networking and evaluation. In comparison with the United States and elsewhere, the distinctive features of activity in Europe in the field of machine translation and machine-aided translation are: (i) the development and popularity of translator workstations, (ii) the strong software localisation industry, (iii) the vigorous activity in the area of lexical resources and terminology, and (iv) the broad-based research on language engineering supported primarily by European Union funds.
THE STATE OF MACHINE TRANSLATION IN EUROPE
d15327919
Synthetic word analysis is a potentially important but relatively unexplored problem in Chinese natural language processing. Two issues with the conventional pipeline methods involving word segmentation are (1) the lack of a common segmentation standard and (2) the poor segmentation performance on OOV words. These issues may be circumvented if we adopt the view of character-based parsing, providing both internal structures to synthetic words and global structure to sentences in a seamless fashion. However, the accuracy of synthetic word parsing is not yet satisfactory, due to the lack of research. In view of this, we propose and present experiments on several synthetic word parsers. Additionally, we demonstrate the usefulness of incorporating large unlabelled corpora and a dictionary for this task. Our parsers significantly outperform the baseline (a pipeline method).
Parsing Chinese Synthetic Words with a Character-based Dependency Model
d10179143
We argue that it is useful for a machine translation system to be able to provide the user with an estimate of the translation quality for each sentence. This makes it possible for bad translations to be filtered out before post-editing, to be highlighted by the user interface, or to cause an interactive system to ask for a rephrasing. A system providing such an estimate is described, and examples from its practical application to an MT system are given.
A Confidence Index for Machine Translation
d225062764
d21723087
For the purpose of POS tagging noisy user-generated text, should normalization be handled as a preliminary task or is it possible to handle misspelled words directly in the POS tagging model? We propose in this paper a combined approach where some errors are normalized before tagging, while a Gated Recurrent Unit deep neural network based tagger handles the remaining errors. Word embeddings are trained on a large corpus in order to address both normalization and POS tagging. Experiments are run on Contact Center chat conversations, a particular type of formal Computer Mediated Communication data.
Handling Normalization Issues for Part-of-Speech Tagging of Online Conversational Text
d18508557
This paper presents a method for acquiring synonyms from monolingual comparable text (MCT). MCT denotes a set of monolingual texts whose contents are similar and which can be obtained automatically. Our acquisition method takes advantage of a characteristic of MCT: the included words and their relations are confined. Our method uses the contextual information of one word on each side of the target words. To improve acquisition precision, prevention of outside appearance is used. This method has the advantages that it requires only part-of-speech information and that it can acquire infrequent synonyms. We evaluated our method with two kinds of news article data: sentence-aligned parallel texts and document-aligned comparable texts. When applying the former data, our method acquires synonym pairs with 70.0% precision. Re-evaluation of incorrect word pairs against the source texts indicates that the method captures the appropriate parts of source texts with 89.5% precision. When applying the latter data, acquisition precision reaches 76.0% in English and 76.3% in Japanese.
Acquiring Synonyms from Monolingual Comparable Texts
d7378347
This demo abstract presents an interactive tool for supporting error analysis for text mining, which is situated within the Summarization Integrated Development Environment (SIDE). This freely downloadable tool was designed based on repeated experience teaching text mining over a number of years, and has been successfully tested in that context as a tool for students to use in conjunction with machine learning projects.
An Interactive Tool for Supporting Error Analysis for Text Mining
d8697151
We report on the first structured distributional semantic model for Croatian, DM.HR. It is constructed after the model of the English Distributional Memory (Baroni and Lenci, 2010), from a dependency-parsed Croatian web corpus, and covers about 2M lemmas. We give details on the linguistic processing and the design principles. An evaluation shows state-of-the-art performance on a semantic similarity task, with particularly good performance on nouns. The resource is freely available.
Building and Evaluating a Distributional Memory for Croatian
d14221574
Conventional topic models are ineffective for topic extraction from microblog messages, since the lack of structure and context among the posts results in poor message-level word co-occurrence patterns. In this work, we organize microblog posts as conversation trees based on reposting and replying relations, which enrich context information to alleviate data sparseness. Our model generates words according to topic dependencies derived from the conversation structures. Specifically, we differentiate messages as leader messages, which initiate key aspects of previously focused topics or shift the focus to different topics, and follower messages, which do not introduce any new information but simply echo topics from the messages that they repost or reply to. Our model captures the different extents to which leader and follower messages may contain the key topical words, thus further enhancing the quality of the induced topics. The results of thorough experiments demonstrate the effectiveness of our proposed model.
Topic Extraction from Microblog Posts Using Conversation Structures
d160948440
d20755967
Extraction of Polish Named-Entities - Jakub Piskorski
d11000999
The development of the Web 2.0 led to the birth of new textual genres such as blogs, reviews or forum entries. The increasing number of such texts and the highly diverse topics they discuss make blogs a rich source for analysis. This paper presents a comparative study on open domain and opinion QA systems. A collection of opinion and mixed fact-opinion questions in English is defined and two Question Answering systems are employed to retrieve the answers to these queries. The first one is generic, while the second is specific for emotions. We comparatively evaluate and analyze the systems' results, concluding that opinion Question Answering requires the use of specific resources and methods.
A Comparative Study of Open Domain and Opinion Question Answering Systems for Factual and Opinionated Queries
d219690993
One-to-one tutoring is often an effective means to help students learn, and recent experiments with neural conversation systems are promising. However, large open datasets of tutoring conversations are lacking. To remedy this, we propose a novel asynchronous method for collecting tutoring dialogue via crowdworkers that is both amenable to the needs of deep learning algorithms and reflective of pedagogical concerns. In this approach, extended conversations are obtained between crowdworkers role-playing as both students and tutors. The CIMA collection, which we make publicly available, is novel in that students are exposed to overlapping grounded concepts between exercises and multiple relevant tutoring responses are collected for the same input. CIMA contains several compelling properties from an educational perspective: student role-players complete exercises in fewer turns during the course of the conversation and tutor players adopt strategies that conform with some educational conversational norms, such as providing hints versus asking questions in appropriate contexts. The dataset enables a model to be trained to generate the next tutoring utterance in a conversation, conditioned on a provided action strategy.
CIMA: A Large Open Access Dialogue Dataset for Tutoring
d233809908
d904516
Annotation graphs provide an efficient and expressive data model for linguistic annotations of time-series data. This paper reports progress on a complete open-source software infrastructure supporting the rapid development of tools for transcribing and annotating time-series data. This general-purpose infrastructure uses annotation graphs as the underlying model, and allows developers to quickly create special-purpose annotation tools using common components. An application programming interface, an I/O library, and graphical user interfaces are described. Our experience has shown us that it is a straightforward task to create new special-purpose annotation tools based on this general-purpose infrastructure.
Annotation Tools Based on the Annotation Graph API
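A stripped-down version of the underlying data model is easy to state: time-anchored nodes plus labeled arcs between them. The field names and helper methods below are hypothetical, chosen only to illustrate the annotation-graph idea, not the actual API described in the paper.

```python
# Minimal annotation graph: anchors are (optionally) time-stamped nodes,
# and each labeled annotation is an arc between two anchors.
from dataclasses import dataclass, field

@dataclass
class AnnotationGraph:
    anchors: dict = field(default_factory=dict)   # node id -> time (or None)
    arcs: list = field(default_factory=list)      # (src, dst, type, label)

    def add_anchor(self, node_id, time=None):
        self.anchors[node_id] = time

    def annotate(self, src, dst, kind, label):
        self.arcs.append((src, dst, kind, label))

    def spans(self, kind):
        return [(self.anchors[s], self.anchors[d], lab)
                for s, d, k, lab in self.arcs if k == kind]

g = AnnotationGraph()
g.add_anchor(0, 0.00); g.add_anchor(1, 0.42); g.add_anchor(2, 0.97)
g.annotate(0, 1, "word", "hello")
g.annotate(1, 2, "word", "world")
g.annotate(0, 2, "utterance", "greeting")
print(g.spans("word"))   # [(0.0, 0.42, 'hello'), (0.42, 0.97, 'world')]
```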
d18198350
It is generally acknowledged that collocations, in the sense of idiosyncratic word co-occurrences, are a challenge in the context of second language learning. Advanced miscollocation correction is thus highly desirable. However, state-of-the-art "collocation checkers" are merely able to detect a possible miscollocation and then offer, as a correction suggestion, a list of collocations of the given keyword retrieved automatically from a corpus. No more targeted correction is possible, since state-of-the-art collocation checkers are not able to identify the type of the miscollocation. We suggest a classification of the main types of lexical miscollocations made by US American learners of Spanish and demonstrate its performance.
Classification of Lexical Collocation Errors in the Writings of Learners of Spanish
d9987841
We focus on the problem of building large repositories of lexical conceptual structure (LCS) representations for verbs in multiple languages. One of the main results of this work is the definition of a relation between broad semantic classes and LCS meaning components. Our acquisition program, LEXICALL, takes as input the result of previous work on verb classification and thematic grid tagging, and outputs LCS representations for different languages. These representations have been ported into English, Arabic and Spanish lexicons, each containing approximately 9000 verbs. We are currently using these lexicons in operational foreign language tutoring and machine translation applications.
Large-Scale Acquisition of LCS-Based Lexicons for Foreign Language Tutoring
d236477518
Chinese word segmentation (CWS) is a fundamental task for Chinese information processing, which always suffers from out-of-vocabulary word issues, especially when it is tested on data from different sources. Although one possible solution is to use more training data, in real applications these data are stored at different locations and thus are invisible and isolated from each other owing to privacy or legal issues (e.g., clinical reports from different hospitals). To address this issue and benefit from extra data, we propose a neural model for CWS with federated learning (FL) adopted to help CWS deal with data isolation, where a mechanism of global character associations is proposed to enhance FL to learn from different data sources. Experimental results on a simulated environment with five nodes confirm the effectiveness of our approach, where our approach outperforms different baselines including some well-designed FL frameworks.
Federated Chinese Word Segmentation with Global Character Associations
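The federated ingredient can be sketched independently of the CWS model: each node updates a copy of the shared parameters on data the others never see, and a server averages the updates (FedAvg-style). The bare weight-vector "model" below is a deliberate simplification; the paper's neural segmenter and global character associations are not modeled here.

```python
# FedAvg-style round: local updates on private data, central averaging.
import numpy as np

rng = np.random.default_rng(0)
global_w = np.zeros(8)                        # shared model parameters

def local_update(w, node_data, lr=0.1):
    # stand-in for a few epochs of local training on private data
    grad = w - node_data.mean(axis=0)
    return w - lr * grad

for round_ in range(10):
    node_weights = []
    for node in range(5):                     # five isolated data holders
        private = rng.normal(loc=node, size=(20, 8))
        node_weights.append(local_update(global_w.copy(), private))
    global_w = np.mean(node_weights, axis=0)  # server never sees raw data
print(global_w.round(2))
```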
d10924970
We propose new methods to take advantage of text in resource-rich languages to sharpen statistical language models in resource-deficient languages. We achieve this through an extension of the method of lexical triggers to the cross-language problem, and by developing a likelihood-based adaptation scheme for combining a trigger model with an n-gram model. We describe the application of such language models for automatic speech recognition. By exploiting a side-corpus of contemporaneous English news articles for adapting a static Chinese language model to transcribe Mandarin news stories, we demonstrate significant reductions in both perplexity and recognition errors. We also compare our cross-lingual adaptation scheme to monolingual language model adaptation, and to an alternative method for exploiting cross-lingual cues, via cross-lingual information retrieval and machine translation, proposed elsewhere.
Cross-Lingual Lexical Triggers in Statistical Language Modeling
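To make the combination idea concrete, here is a minimal sketch of interpolating a static n-gram model with a trigger-based unigram component, where English words in a contemporaneous side document boost associated Chinese words. The fixed interpolation weight `lam` stands in for the paper's likelihood-based adaptation scheme, and the toy vocabulary, trigger table, and function names are all invented for illustration.

```python
from collections import Counter

VOCAB = ["经济", "增长", "市场", "股票"]  # toy Mandarin vocabulary

def trigger_unigram(english_side_doc, trigger_table):
    """Re-weight a unigram distribution over VOCAB: an English word seen in
    the side document boosts its cross-lingually associated Chinese words."""
    weights = Counter({w: 1.0 for w in VOCAB})
    for en_word in english_side_doc:
        for zh_word, boost in trigger_table.get(en_word, []):
            weights[zh_word] += boost
    z = sum(weights.values())
    return {w: c / z for w, c in weights.items()}

def interpolated_prob(word, history, p_ngram, p_trig, lam=0.8):
    # Fixed-weight linear interpolation; the paper instead combines the
    # components with a likelihood-based adaptation scheme.
    return lam * p_ngram(word, history) + (1 - lam) * p_trig[word]

trigger_table = {"economy": [("经济", 5.0), ("增长", 2.0)],
                 "stocks": [("股票", 5.0)]}
p_trig = trigger_unigram(["economy", "stocks"], trigger_table)
p_ngram = lambda w, h: 1.0 / len(VOCAB)  # placeholder static n-gram model
print(interpolated_prob("经济", ("增长",), p_ngram, p_trig))
```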
d232021885
d18893263
We describe a nonparametric model and corresponding inference algorithm for learning Synchronous Context Free Grammar derivations for parallel text. The model employs a Pitman-Yor process prior which uses a novel base distribution over synchronous grammar rules. Through both synthetic grammar induction and statistical machine translation experiments, we show that our model learns complex translational correspondences, including discontiguous, many-to-many alignments, and produces competitive translation results. Further, inference is efficient and we present results on significantly larger corpora than prior work.
A Bayesian Model for Learning SCFGs with Discontiguous Rules
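The Pitman-Yor prior mentioned above can be illustrated through its Chinese-restaurant construction. The sketch below samples from a PYP with discount `d` and concentration `theta` over a toy base distribution; the paper's actual base distribution over synchronous grammar rules and its inference algorithm are far richer than this.

```python
import random

def pyp_sample(base_draws, n_draws, d=0.5, theta=1.0, seed=0):
    """Chinese-restaurant construction of a Pitman-Yor process: a draw joins
    an existing table with probability proportional to (count - d), or opens
    a new table (a fresh draw from the base distribution) with probability
    proportional to (theta + d * number_of_tables)."""
    rng = random.Random(seed)
    counts, dishes = [], []  # per-table customer counts and dishes
    for _ in range(n_draws):
        weights = [c - d for c in counts] + [theta + d * len(counts)]
        i = rng.choices(range(len(weights)), weights=weights)[0]
        if i == len(counts):          # open a new table
            counts.append(1)
            dishes.append(rng.choice(base_draws))
        else:
            counts[i] += 1
    return list(zip(dishes, counts))

# Toy "rules" as the base distribution; the real base measure is over
# synchronous grammar rules. A few tables accumulate most customers: the
# rich-get-richer behavior that encourages rule reuse.
print(pyp_sample(["rule_a", "rule_b", "rule_c"], n_draws=50))
```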
d219301678
d5739852
Discourse Models, Dialog Memories, and User Models
d13425358
This paper describes TwitterHawk, a system for sentiment analysis of tweets which participated in SemEval-2015 Task 10, Subtasks A through D. The system performed competitively, most notably placing 1st in topic-based sentiment classification (Subtask C) and ranking 4th out of 40 in identifying the sentiment of sarcastic tweets. Our submissions in all four subtasks used a supervised learning approach to perform three-way classification, assigning positive, negative, or neutral labels. Our system development efforts focused on text pre-processing and feature engineering, with a particular focus on handling negation, integrating sentiment lexicons, parsing hashtags, and handling expressive word modifications and emoticons. Two separate classifiers were developed for phrase-level and tweet-level sentiment classification. Our success in the aforementioned tasks came in part from leveraging the Subtask B data and building a single tweet-level classifier for Subtasks B, C and D.
TwitterHawk: A Feature Bucket Approach to Sentiment Analysis
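Two of the feature-engineering steps named in the abstract above, negation handling and hashtag parsing, are easy to sketch. The helpers below are hypothetical illustrations of common realizations of these steps, not TwitterHawk's actual code; the `_NEG` suffixing convention and the CamelCase splitter are assumptions.

```python
import re

NEGATORS = {"not", "no", "never", "n't", "cannot"}

def mark_negation(tokens):
    """Append a _NEG suffix to tokens following a negator, up to the next
    punctuation mark -- one common way to encode negation scope as features."""
    out, negated = [], False
    for tok in tokens:
        if tok in NEGATORS or tok.endswith("n't"):
            negated = True
            out.append(tok)
        elif re.fullmatch(r"[.,!?;:]", tok):
            negated = False
            out.append(tok)
        else:
            out.append(tok + "_NEG" if negated else tok)
    return out

def split_hashtag(tag):
    """Split a CamelCase hashtag like #SoHappyToday into words (heuristic)."""
    return re.findall(r"[A-Z]?[a-z]+|[A-Z]+(?![a-z])|\d+", tag.lstrip("#"))

print(mark_negation("i do n't like this movie .".split()))
print(split_hashtag("#SoHappyToday"))  # -> ['So', 'Happy', 'Today']
```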
d225062674
Event clustering on social streams aims to cluster short texts according to event content. Existing event clustering models are either unsupervised or supervised: the unsupervised models suffer from poor performance, while the supervised models require large amounts of labeled data. To address these issues, this paper proposes SemiEC, a semi-supervised incremental event clustering model based on a small-scale annotated dataset. The model encodes events with an LSTM and calculates text similarity with a linear model, then clusters short texts on social streams. In particular, it uses the samples generated by incremental clustering to retrain the model and redistribute the uncertain samples. Experimental results show that SemiEC outperforms traditional clustering algorithms.
Semi-supervised Method to Cluster Chinese Events on Social Streams
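The incremental-clustering core of such a system can be sketched as follows, assuming event encodings are already given as vectors; the LSTM encoder, the learned linear similarity, and SemiEC's retraining and redistribution steps are all omitted, and the cosine similarity and threshold are stand-ins.

```python
import numpy as np

def cosine(a, b):
    return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)

def incremental_cluster(encodings, threshold=0.7):
    """Assign each incoming text to its most similar cluster centroid,
    or open a new cluster if no centroid is similar enough."""
    centroids, sizes, labels = [], [], []
    for v in encodings:
        v = np.asarray(v, dtype=float)
        best, best_sim = -1, -1.0
        for i, c in enumerate(centroids):
            s = cosine(v, c)
            if s > best_sim:
                best, best_sim = i, s
        if best_sim < threshold:        # no cluster is similar enough:
            centroids.append(v.copy())  # open a new cluster
            sizes.append(1)
            labels.append(len(centroids) - 1)
        else:                           # assign and update the running mean
            sizes[best] += 1
            centroids[best] += (v - centroids[best]) / sizes[best]
            labels.append(best)
    return labels

texts = np.array([[1, 0], [0.9, 0.1], [0, 1], [0.1, 0.95]])
print(incremental_cluster(texts))  # -> [0, 0, 1, 1]
```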
d5566545
A long-standing issue regarding algorithms that manipulate context-free grammars (CFGs) in a "top-down", left-to-right fashion is that left recursion can lead to nontermination. An algorithm is known that transforms any CFG into an equivalent non-left-recursive CFG, but the resulting grammars are often too large for practical use. We present a new method for removing left recursion from CFGs that is both theoretically superior to the standard algorithm and produces very compact non-left-recursive CFGs in practice.
Removing Left Recursion from Context-Free Grammars
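For reference, the standard transformation the paper improves upon (often attributed to Paull) can be written compactly; the sketch below shows that textbook algorithm, not the paper's more compact method, and the grammar encoding is an assumption.

```python
def remove_left_recursion(grammar, order):
    """Standard left-recursion elimination. grammar maps a nonterminal to a
    list of right-hand sides, each a tuple of symbols; order fixes the
    nonterminal ordering used by the transformation."""
    g = {a: [tuple(r) for r in rhss] for a, rhss in grammar.items()}
    for i, ai in enumerate(order):
        # Expand rules A_i -> A_j gamma for earlier nonterminals A_j.
        for aj in order[:i]:
            expanded = []
            for rhs in g[ai]:
                if rhs and rhs[0] == aj:
                    expanded.extend(bj + rhs[1:] for bj in g[aj])
                else:
                    expanded.append(rhs)
            g[ai] = expanded
        # Remove immediate left recursion:
        #   A -> A alpha | beta  becomes  A -> beta A',  A' -> alpha A' | eps
        rec = [r[1:] for r in g[ai] if r and r[0] == ai]
        non = [r for r in g[ai] if not (r and r[0] == ai)]
        if rec:
            prime = ai + "'"
            g[ai] = [b + (prime,) for b in non]
            g[prime] = [a + (prime,) for a in rec] + [()]  # () is epsilon
    return g

# E -> E + T | T ;  T -> id   yields   E -> T E',  E' -> + T E' | eps
print(remove_left_recursion({"E": [("E", "+", "T"), ("T",)],
                             "T": [("id",)]}, order=["E", "T"]))
```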
d46338569
This paper proposes an automatic method to increase the number of possible prosodic variations in non-uniform unit-based speech synthesis. More specifically, we are interested in the production of interrogative sentences through the eLite text-to-speech synthesis system, which relies on the selection of non-uniform units but does not have interrogative units in its speech database. The purpose of this work was to make the system able to synthesize interrogative sentences without having to record a new, interrogative database. After a study of the syntactic and prosodic phenomena involved in the production of interrogative sentences, we present our two-step method: an adapted pre-processing of the unit selection itself, and a post-processing of the whole speech signal built by the system. A perceptual evaluation of sentences synthesized by our approach is then described, which points out both pros and cons of the method and highlights some issues in the very principles of the eLite system. KEYWORDS: non-uniform unit (NUU) synthesis, interrogative sentences, prosodic variations.
Prosodic Variations in Unit-Based Speech Synthesis: The Example of Interrogative Sentences
d8338820
Entity Resolution is the task of identifying which records in a database refer to the same entity. A standard machine learning pipeline for the entity resolution problem consists of three major components: blocking, pairwise linkage, and clustering. The blocking step groups records by shared properties to determine which pairs of records should be examined by the pairwise linker as potential duplicates. Next, the linkage step assigns a probability score to pairs of records inside each block. If a pair scores above a user-defined threshold, the records are presumed to represent the same entity. Finally, the clustering step turns the input records into clusters of records (or profiles), where each cluster is uniquely associated with a single real-world entity. This paper describes the blocking and clustering strategies used to deploy a massive database of organization entities to power a major commercial People Search Engine. We demonstrate the viability of these algorithms for large data sets on a 50-node Hadoop cluster.
Graph-based Approaches for Organization Entity Resolution in MapReduce
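A single-machine sketch of the blocking step may help fix ideas: records sharing any block key become candidate pairs for the pairwise linker. The MapReduce/Hadoop deployment and the linkage and clustering components are omitted, and the key functions and record fields below are invented for illustration.

```python
from collections import defaultdict
from itertools import combinations

def block_records(records, key_fns):
    """Group records by shared properties; only pairs that share at least
    one block key are handed to the expensive pairwise linker."""
    blocks = defaultdict(set)
    for rid, rec in records.items():
        for key_fn in key_fns:
            key = key_fn(rec)
            if key:
                blocks[key].add(rid)
    candidate_pairs = set()
    for ids in blocks.values():
        candidate_pairs.update(combinations(sorted(ids), 2))
    return candidate_pairs

records = {
    1: {"name": "Acme Corp", "zip": "10001"},
    2: {"name": "ACME Corporation", "zip": "10001"},
    3: {"name": "Globex", "zip": "94105"},
}
key_fns = [lambda r: r["zip"], lambda r: r["name"].lower().split()[0]]
print(block_records(records, key_fns))  # {(1, 2)}: the only pair compared
```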
d39065986
Identifying collocations in a text (e.g., break record) and correctly translating them (battre record vs. *casser record) represent key issues in machine translation, notably because of their prevalence in language and their syntactic flexibility. This article describes a method for discovering translation equivalents for collocations from parallel corpora, aimed at increasing the lexical coverage of a machine translation system. The method is based on a "deep" syntactic approach, in which collocations and candidate translations are identified from sentence-aligned text with the help of a multilingual parser. The article also introduces the tools on which this method relies. It focuses in particular on the efforts made to account for structural divergences between languages and to improve the method's performance in terms of coverage. KEYWORDS: collocations, translation equivalents, syntactic parsing, text alignment.
Extraction of Collocations and Their Translation Equivalents from Parallel Corpora
d5555941
Natural Language Processing (NLP) system developers face a number of new challenges. Interest is increasing for real-world systems that use NLP tools and techniques. The quantity of text now available for training and processing is increasing dramatically. Also, the range of languages and tasks being researched continues to grow rapidly. Thus it is an ideal time to consider the development of new experimental frameworks. We describe the requirements, initial design and exploratory implementation of a high performance NLP infrastructure, where "high performance" refers to both state-of-the-art performance and high runtime efficiency.
Blueprint for a High Performance NLP Infrastructure
d2031061
We report work in progress on adding affect detection to an existing program for virtual dramatic improvisation, monitored by a human director. To partially automate the director's functions, we have partially implemented the detection of emotions, etc. in users' text input, by means of pattern matching, robust parsing and some semantic analysis. The work also involves basic research into how affect is conveyed by metaphor.
Developments in Affect Detection in E-drama
d15744092
Mental Space Theory (Fauconnier, 1985) encompasses a wide variety of complex linguistic phenomena that are largely ignored in today's natural language processing systems. These phenomena include conditionals (e.g., "if" sentences), embedded discourse, and other natural language utterances whose interpretation depends on cognitive partitioning of contextual knowledge. A unification-based formalism, Embodied Construction Grammar (ECG) (Chang et al., 2002a), took initial steps to include space as a primitive type, but most of the details are yet to be worked out. The goal of this paper is to present a scalable computational account of mental spaces based on the Neural Theory of Language (NTL) simulation-based understanding framework (Narayanan, 1999; Chang et al., 2002b). We introduce a formalization of mental spaces based on ECG, and describe how this formalization fits into the NTL framework. We also use English conditionals as a case study to show how mental spaces can be parameterized from language.
Scaling Understanding up to Mental Spaces
d5636607
Induction of common sense knowledge about prototypical sequences of events has recently received much attention (e.g., Chambers and Jurafsky (2008); Regneri et al. (2010)). Instead of inducing this knowledge in the form of graphs, as in much of the previous work, our method computes distributed representations of event realizations from distributed representations of predicates and their arguments, and then uses these representations to predict prototypical event orderings. The parameters of the compositional process for computing the event representations and the ranking component of the model are jointly estimated. We show that this approach results in a substantial boost in performance on the event ordering task with respect to previous approaches, both on natural and crowdsourced texts.
Inducing Neural Models of Script Knowledge
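The shape of the computation described above, composing predicate and argument embeddings into an event vector and ranking events by a scalar score, can be sketched with untrained random parameters. The additive-linear composition below is a placeholder assumption; the paper estimates its composition and ranking parameters jointly from data, so the "prediction" here is meaningless and only illustrates the data flow.

```python
import numpy as np

rng = np.random.default_rng(1)
DIM = 8
emb = {w: rng.normal(size=DIM) for w in
       ["enter", "order", "pay", "restaurant", "food", "bill"]}

# Untrained placeholder parameters (the paper learns these jointly).
W_pred = rng.normal(size=(DIM, DIM))
W_arg = rng.normal(size=(DIM, DIM))
w_rank = rng.normal(size=DIM)

def event_vec(predicate, args):
    """Compose predicate and argument embeddings into one event vector."""
    v = W_pred @ emb[predicate]
    for a in args:
        v = v + W_arg @ emb[a]
    return np.tanh(v)

def position_score(event):
    """Scalar score; sorting events by it yields a predicted ordering
    (not meaningful here, since the parameters are random)."""
    return float(w_rank @ event_vec(*event))

events = [("pay", ["bill"]), ("enter", ["restaurant"]), ("order", ["food"])]
print(sorted(events, key=position_score))
```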
d236486266
d9554881
Metaphor is an important way of conveying affect; understanding how people use metaphors to convey affect is therefore important for communication between individuals, and increases cohesion when the perceived affect of the concrete example is the same for both individuals. Building computational models that can automatically identify the affect in metaphor-rich texts like "The team captain is a rock.", "Time is money.", or "My lawyer is a shark." is thus an important and challenging problem, which has been of great interest to the research community. To solve this task, we have collected and manually annotated the affect of metaphor-rich texts for four languages. We present novel algorithms that integrate triggers for cognitive, affective, perceptual and social processes with stylistic and lexical information. By running evaluations on datasets in English, Spanish, Russian and Farsi, we show that the developed affect polarity and valence prediction technology for metaphor-rich texts is portable and works equally well for different languages.
Multilingual Affect Polarity and Valence Prediction in Metaphor-Rich Texts
d208324656
d30831790
The aim of this short paper is to present the FLaReNet Thematic Network for Language Resources and Language Technologies to the Asian Language Resources Community. Creation of a wide and committed community and of a shared policy in the field of Language Resources is essential in order to foster a substantial advancement of the field. This paper presents the background, overall objectives and methodology of work of the project, as well as a set of preliminary results.
The FLaReNet Thematic Network: A Global Forum for Cooperation
d11918974
We examine the performance of three dependency parsing systems and, in particular, their performance variation across Wikipedia domains. We assess the performance variation of (i) Alpino, a deep grammar-based system coupled with statistical disambiguation, versus (ii) MST and Malt, two purely data-driven statistical dependency parsing systems. The question is how the performance of each parser correlates with simple statistical measures of the text (e.g., sentence length, unknown word rate, etc.). This gives us an idea of how sensitive the different systems are to domain shifts, i.e., which system is more in need of domain adaptation techniques. To this end, we extend the statistical measures used by Zhang and Wang (2009) for English and evaluate the systems on several Wikipedia domains, focusing on a freer word-order language, Dutch. The results confirm the general findings of Zhang and Wang (2009), i.e., different parsing systems have different sensitivity to various statistical measures of the text; the highest correlation with parsing accuracy was found for the measure we added, sentence perplexity.
Improved statistical measures to assess natural language parser performance across domains
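The kind of analysis described above, correlating a per-domain text statistic with per-domain parsing accuracy, reduces to a Pearson correlation. A minimal sketch with invented numbers (all values below are hypothetical, not the paper's results):

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation between a text statistic and parsing accuracy,
    computed over domains."""
    x = np.asarray(x, float) - np.mean(x)
    y = np.asarray(y, float) - np.mean(y)
    return float(x @ y / (np.linalg.norm(x) * np.linalg.norm(y)))

# Invented per-domain values: mean sentence perplexity vs. labeled accuracy.
perplexity = [120.5, 95.2, 210.8, 180.3, 140.0]
accuracy = [80.1, 83.4, 71.2, 74.8, 78.5]
print(pearson_r(perplexity, accuracy))  # strongly negative in this toy data:
                                        # harder text, lower parsing accuracy
```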
d2046924
In this paper we use the Reeks Nederlandse Dialectatlassen as a source for the reconstruction of a 'proto-language' of Dutch dialects. We used 360 dialects from locations in the Netherlands, the northern part of Belgium and French Flanders. The density of dialect locations is about the same everywhere. For each dialect we reconstructed 85 words. For the reconstruction of vowels we used knowledge of Dutch history, and for the reconstruction of consonants we used well-known tendencies found in most textbooks on historical linguistics. We validated results by comparing the reconstructed forms with pronunciations according to a proto-Germanic dictionary (Köbler, 2003). For 46% of the words we reconstructed the same vowel, or the closest possible vowel when the vowel to be reconstructed was not found in the dialect material. For 52% of the words, all consonants we reconstructed were the same; for another 42%, only one consonant was reconstructed differently. We measured the divergence of Dutch dialects from their 'proto-language': we measured pronunciation distances to the proto-language we reconstructed ourselves and correlated them with pronunciation distances measured to proto-Germanic based on the dictionary. Pronunciation distances were measured using Levenshtein distance, a string edit distance measure. We found a relatively strong correlation (r = 0.87).
The relative divergence of Dutch dialect pronunciations from their common source: an exploratory study
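The distance measure named above is standard and easy to reproduce. Below is a sketch of Levenshtein distance with a simple length normalization; the study's exact normalization and transcription alphabet may differ, and the example pronunciations are invented.

```python
def levenshtein(a, b):
    """Classic dynamic-programming string edit distance: the minimum number
    of insertions, deletions, and substitutions turning a into b."""
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[m][n]

# Toy example: distance between two pronunciations, normalized by length.
p1, p2 = "melk", "mɪlk"
print(levenshtein(p1, p2) / max(len(p1), len(p2)))  # -> 0.25
```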
d231642302
It took us nearly ten years to get from no wordnet for Polish to the largest wordnet ever built. We started small but quickly learned to dream big. Now we are about to release plWordNet 3.0-emo, complete with sentiment and emotion annotations, and a domestic version of Princeton WordNet, larger than WordNet 3.1 by nearly ten thousand newly added words. The paper retraces the road we travelled and talks a little about the future.
plWordNet 3.0 -Almost There