Dataset columns: _id (string, length 4 to 10), text (string, length 0 to 18.4k), title (string, length 0 to 8.56k)
d249204530
This paper reports on the implementation and deployment of an MT system in the Polish branch of EY Global Limited. The system supports standard CAT and MT functionalities such as translation memory fuzzy search, document translation and post-editing, and meets less common, customer-specific expectations. The deployment began in August 2018 with a Proof-of-Concept, and ended with the signing of the Final Version acceptance certificate in October 2021. We present the challenges that were faced during the deployment, particularly in relation to the security check and installation processes in the production environment.
nEYron: Implementation and Deployment of an MT System for a Large Audit & Consulting Corporation
d7251272
This paper proposes a method for incrementally understanding user utterances whose semantic boundaries are not known and responding in real time even before boundaries are determined. It is an integrated parsing and discourse processing method that updates the partial result of understanding word by word, enabling responses based on the partial result. This method incrementally finds plausible sequences of utterances that play crucial roles in the task execution of dialogues, and utilizes beam search to deal with the ambiguity of boundaries as well as syntactic and semantic ambiguities. The results of a preliminary experiment demonstrate that this method understands user utterances better than an understanding method that assumes pauses to be semantic boundaries.
Understanding Unsegmented User Utterances in Real-Time Spoken Dialogue Systems
d304033
Theoretical and methodological issues regarding the use of Language Technologies for patients with limited English proficiency
d51838647
Vector space word representations are typically learned using only co-occurrence statistics from text corpora. Although such statistics are informative, they disregard easily accessible (and often carefully curated) information archived in semantic lexicons such as WordNet, FrameNet, and the Paraphrase Database. This paper proposes a technique to leverage both distributional and lexicon-derived evidence to obtain better representations. We run belief propagation on a word type graph constructed from word similarity information from lexicons to encourage connected (related) words to have similar representations, and also to be close to the unsupervised vectors. Evaluated on a battery of standard lexical semantic evaluation tasks in several languages, using several different underlying word vector models, we obtain substantially improved vectors and consistently outperform existing approaches of incorporating semantic knowledge in word vectors.
Retrofitting Word Vectors to Semantic Lexicons
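As an illustration of the retrofitting idea described in the abstract above, the following is a minimal Python sketch of the iterative update popularized by this line of work: each word vector is repeatedly pulled toward the vectors of its lexicon neighbours while being anchored to its original distributional vector. The dictionary layout, hyper-parameters and uniform weighting are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

def retrofit(word_vecs, lexicon, n_iters=10, alpha=1.0, beta=1.0):
    """word_vecs: {word: np.ndarray}; lexicon: {word: [related words]} from WordNet/PPDB etc.
    Returns vectors nudged toward lexicon neighbours but anchored to the originals."""
    new_vecs = {w: v.copy() for w, v in word_vecs.items()}
    for _ in range(n_iters):
        for word, neighbours in lexicon.items():
            nbrs = [n for n in neighbours if n in new_vecs]
            if word not in new_vecs or not nbrs:
                continue
            # weighted average of the original vector and the current neighbour vectors
            total = alpha * word_vecs[word] + beta * sum(new_vecs[n] for n in nbrs)
            new_vecs[word] = total / (alpha + beta * len(nbrs))
    return new_vecs
```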
d252667572
Treebanks have traditionally included only text and were derived from written sources such as newspapers or the web. We introduce the Aligned Multimodal Movie Treebank (AMMT), an English-language treebank derived from dialog in Hollywood movies, which includes transcriptions of the audiovisual streams with word-level alignment, as well as part-of-speech tags and dependency parses in the Universal Dependencies (UD) formalism. AMMT consists of 31,264 sentences and 218,090 words, making it the third-largest UD English treebank and the only multimodal treebank in UD. We find that parsers often have difficulty with the conversational speech in this dataset and that they rely heavily on punctuation, which is typically not available from speech recognizers. To help with the web-based annotation effort, we also introduce the Efficient Audio Alignment Annotator (EAAA), a companion tool that enables annotators to significantly speed up their annotation process.
The Aligned Multimodal Movie Treebank: An audio, video, dependency-parse treebank
d38489946
Information extraction systems have always faced a twofold difficulty: on the one hand, they suffer from a strong dependence on the domain for which they were developed; on the other hand, their development cost for a given domain is high. The work presented in this article focuses on the second problem while proposing a solution that also bears on the first. More precisely, it addresses the task of event role labeling in the context of template filling, relying for this purpose on a neural distributed representation model. This model is learned from a corpus representative of the target domain without requiring elaborate upstream linguistic preprocessing, and it provides a representation space that allows a traditional supervised classifier to dispense with complex and varied features (morphosyntactic, syntactic or semantic). Through a series of experiments on the corpus of the MUC-4 evaluation campaign, we show in particular that this approach outperforms the state of the art, and that the difference is all the larger when the training corpus is small. We also show the benefit of adapting this type of model to the target domain, compared with using general-purpose distributed representations. Keywords: information extraction, event role extraction, neural language models.
21 ème Traitement Automatique des Langues Naturelles
d683878
Certain texts in a language can be considered as linguistic specifications capable of generating animations that simulate the comprehension of the input texts.
INTERACTION BETWEEN LEXICON AND IMAGE: LINGUISTIC SPECIFICATIONS OF ANIMATION
d241583386
Large annotated corpora for coreference resolution are available for few languages. For machine translation, however, strong black-box systems exist for many languages. We empirically explore the appealing idea of leveraging such translation tools for bootstrapping coreference resolution in languages with limited resources. Two scenarios are analyzed, in which a large coreference corpus in a high-resource language is used for coreference predictions in a smaller language, i.e., by machine translating either the training corpus, or the test data. In our empirical evaluation of coreference resolution using the two scenarios on several medium-resource languages, we find no improvement over monolingual baseline models. Our analysis of the various sources of error inherent to the studied scenarios, reveals that in fact the quality of contemporary machine translation tools is the main limiting factor.
Lazy Low-Resource Coreference Resolution: a Study on Leveraging Black-Box Translation Tools
d5885072
The paper presents the general architecture of an experimental system for English-Swedish written and spoken summarization of news reports and focuses on the information extraction component. Information extraction and information structuring is based on the notion of mental spaces, one of the central notions in cognitive semantics. Speech act phrases, epistemic verbs, tense forms and certain adverbs and subjunctions are identified by the semantico-syntactic parsing procedure as marking shifts between different mental spaces and the textual information is structured accordingly. Less salient mental spaces are omitted in the textual representation. The summary generation component has access to language specific ways of formulating news reports. The text generator also provides the syntactic structures with prosodic markers that modify the default prosodic rules of the text-to-speech system that reads the summary.
MENTAL SPACES, SPACE BUILDERS AND BILINGUAL SUMMARIZATION OF NEWS REPORTS
d38742558
This paper aims to implement what is referred to as the collocation of Arabic keywords approach for extracting formulaic sequences (FSs) in the form of high-frequency but semantically regular formulas that are not restricted to any syntactic construction or semantic domain. The study applies several distributional semantic models in order to automatically extract relevant FSs related to Arabic keywords. The data sets used in this experiment are drawn from a newly developed corpus-based Arabic wordlist consisting of 5,189 lexical items that represent a variety of Modern Standard Arabic (MSA) genres and regions; the wordlist is based on overlapping frequency counts from a comprehensive comparison of four large Arabic corpora with a total size of over 8 billion running words. Empirical n-best precision evaluation methods are used to determine the best association measures (AMs) for extracting high-frequency and meaningful FSs. The gold standard reference FS list was developed in previous studies and manually evaluated against well-established quantitative and qualitative criteria. The results demonstrate that the MI.log_f AM achieved the highest results in extracting significant FSs from the large MSA corpus, while the T-score association measure achieved the worst results.
An Empirical Study of Arabic Formulaic Sequence Extraction Methods
d6305097
We study distributional similarity measures for the purpose of improving probability estimation for unseen cooccurrences. Our contributions are three-fold: an empirical comparison of a broad range of measures; a classification of similarity functions based on the information that they incorporate; and the introduction of a novel function that is superior at evaluating potential proxy distributions.
Measures of Distributional Similarity
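To make the notion of a distributional similarity measure concrete, here is a small sketch (not taken from the paper) computing two representative functions over co-occurrence count vectors: cosine similarity, and the Jensen-Shannon divergence as an example of the information-theoretic family such comparisons typically include. The toy vectors are illustrative.

```python
import numpy as np

def cosine(p, q):
    """Cosine similarity between two co-occurrence count vectors."""
    return float(p @ q / (np.linalg.norm(p) * np.linalg.norm(q)))

def jensen_shannon(p, q):
    """Symmetric divergence between the two normalized co-occurrence distributions."""
    p = p / p.sum()
    q = q / q.sum()
    m = 0.5 * (p + q)
    def kl(a, b):
        mask = a > 0
        return float(np.sum(a[mask] * np.log2(a[mask] / b[mask])))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# toy co-occurrence counts for two words over the same context vocabulary
apple = np.array([4.0, 1.0, 0.0, 3.0])
pear = np.array([3.0, 0.0, 1.0, 2.0])
print(cosine(apple, pear), jensen_shannon(apple, pear))
```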
d30823
This paper addresses the issue of cluster labeling and presents a method for assigning labels by using concepts in a machine-readable dictionary. We assume that salient terms in the cluster content share the same hypernym, because the hypernymic semantic relation represents a generalization that goes from specific to generic. Our experimental results reveal that hypernymic semantic relations can be exploited to increase labeling accuracy, with an F-score of 0.441 that improves over the two baselines.
Cluster Labeling based on Concepts in a Machine-Readable Dictionary
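The hypernym-based labeling idea can be sketched with WordNet standing in for the machine-readable dictionary used in the paper: the function below simply counts the direct hypernyms shared by the salient cluster terms and proposes the most frequent ones as labels. This is an illustrative simplification, not the authors' exact procedure, and it assumes the NLTK WordNet data has been downloaded.

```python
from collections import Counter
from nltk.corpus import wordnet as wn  # requires: nltk.download("wordnet")

def label_cluster(salient_terms, top_k=3):
    """Propose cluster labels by voting over the direct hypernyms of the terms' noun senses."""
    votes = Counter()
    for term in salient_terms:
        for synset in wn.synsets(term, pos=wn.NOUN):
            for hyper in synset.hypernyms():
                votes[hyper.lemma_names()[0]] += 1
    return [name for name, _ in votes.most_common(top_k)]

print(label_cluster(["apple", "banana", "pear"]))  # e.g. ['edible_fruit', ...]
```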
d6622975
In this paper, we propose adding long-term grammatical information to a Whole Sentence Maximum Entropy Language Model (WSME) in order to improve the performance of the model. The grammatical information was added to the WSME model as features, which were obtained from a Stochastic Context-Free Grammar. Finally, experiments using a part of the Penn Treebank corpus were carried out and significant improvements were achieved.
Improvement of a Whole Sentence Maximum Entropy Language Model Using Grammatical Features
d18443059
This paper presents CELI's participation in the SemEval Cross-lingual Textual Entailment for Content Synchronization task.
CELI: An Experiment with Cross Language Textual Entailment
d237366092
Zipf's law is a succinct yet powerful mathematical law in linguistics. However, the meaningfulness and units of the law have remained controversial. The current study uses online video comments, called "danmu comments", to investigate these two questions. The results are consistent with previous studies arguing that Zipf's law is subject to topical coherence. Specifically, danmu comments sampled from a single video follow Zipf's law better than danmu comments sampled from a collection of videos. The results also suggest the existence of multiple units of Zipf's law. When different units, including words, n-grams, and danmu comments, are compared, both words and danmu comments obey Zipf's law, and words may be a better fit. Issues with combined n-grams in the literature are also discussed.
Meaningfulness and unit of Zipf's law: evidence from danmu comments
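One standard way to quantify how well a sample "follows Zipf's law", as the comparison above requires, is to fit a line to the rank-frequency relation in log-log space and inspect the slope and fit quality. The helper below is a generic sketch of that procedure and is not taken from the paper.

```python
import numpy as np
from collections import Counter

def zipf_fit(units):
    """Fit log(freq) = intercept + slope * log(rank) for a list of units
    (words, n-grams, or whole danmu comments); slope near -1 suggests Zipfian behaviour."""
    freqs = np.array(sorted(Counter(units).values(), reverse=True), dtype=float)
    ranks = np.arange(1, len(freqs) + 1, dtype=float)
    slope, intercept = np.polyfit(np.log(ranks), np.log(freqs), deg=1)
    predicted = intercept + slope * np.log(ranks)
    r2 = 1 - np.sum((np.log(freqs) - predicted) ** 2) / np.sum((np.log(freqs) - np.log(freqs).mean()) ** 2)
    return slope, r2

print(zipf_fit("the cat sat on the mat the cat sat".split()))
```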
d259370521
Abductive reasoning has long been considered a core ability of humans, enabling us to infer the most plausible explanation of incompletely observed phenomena in daily life. However, this critical reasoning capability is rarely investigated for contemporary AI systems under such limited observations. To facilitate research in this direction, this paper sheds new light on abductive reasoning by studying a new vision-language task, Multi-modal Action chain abductive Reasoning (MAR), together with a large-scale abductive reasoning dataset: given an incomplete set of language-described events, MAR aims to imagine the most plausible event by spatio-temporal grounding in past video and then infer the hypothesis of the subsequent action chain that best explains the language premise. To solve this task, we propose a strong baseline model that realizes MAR from two perspectives: (i) we first introduce a transformer that learns to encode the observation and imagine the plausible event, with explicitly interpretable event grounding in the video based on commonsense knowledge recognition; (ii) to complete the hypothesized follow-up action chain, we design a novel symbolic module that derives the progressive action chain layer by layer. We conducted extensive experiments on the proposed dataset, and the experimental study shows that the proposed model significantly outperforms existing video-language models on our newly created MAR dataset. Our dataset is available.
Multi-modal Action Chain Abductive Reasoning
d11403175
A metaphor is a figure of speech that refers to one concept in terms of another, as in "He is such a sweet person". Metaphors are ubiquitous and present NLP with a range of challenges for WSD, IE, etc. Identifying metaphors is thus an important step in language understanding. However, since almost any word can serve as a metaphor, they are impossible to list. To identify metaphorical use, we assume that it results in unusual semantic patterns between the metaphor and its dependencies. To identify these cases, we use SVMs with tree kernels on a balanced corpus of 3,872 instances, created by bootstrapping from available metaphor lists. We outperform two baselines, a sequential and a vector-based approach, and achieve an F1-score of 0.75.
Identifying Metaphorical Word Use with Tree Kernels
d220650600
Consistent unsupervised estimators for anchored PCFGs
d8619716
NESPOLE! is a EU/NSF jointly funded project exploring multilingual (speech-to-speech translation) and multimodal communication in e-services. The current system allows users speaking different languages (English, French, German and Italian) to interact on the tourism domain through the Internet using thin terminals (PCs with sound and video cards and H323 video-conferencing software). Web pages and maps can be shared among users, by means of a special White Board. NESPOLE! provides for multimodal communication by allowing users to perform gestures on displayed maps, by means of a tablet and a pen. To test the integration of multilinguality with multimodality, and the impact of the latter on the former, we designed and executed an experiment, involving 35 subjects, 28 playing the role of customers (English and German) and 7 playing the role of agents (Italian). Subjects communicated through the NESPOLE! system to accomplish an assigned task (booking an hotel), meeting specific constraints as to available budget, location, distance from relevant spots, etc. Two experimental conditions were considered and compared, differing as to whether multimodal resources were available: a speech-only condition (SO), and a multimodal condition (MM). This paper reports on the resulting corpus, and on the results of the experiment.
NESPOLE!'s Multilingual and Multimodal Corpus
d40503511
For many applications of question answering (QA), being able to explain why a given model chose an answer is critical. However, the lack of labeled data for answer justifications makes learning this difficult and expensive. Here we propose an approach that uses answer ranking as distant supervision for learning how to select informative justifications, where justifications serve as inferential connections between the question and the correct answer while often containing little lexical overlap with either. We propose a neural network architecture for QA that reranks answer justifications as an intermediate (and human-interpretable) step in answer selection. Our approach is informed by a set of features designed to combine both learned representations and explicit features to capture the connection between questions, answers, and answer justifications. We show that with this end-to-end approach we are able to significantly improve upon a strong IR baseline in both justification ranking (+9% rated highly relevant) and answer selection (+6% P@1).
Tell Me Why: Using Question Answering as Distant Supervision for Answer Justification
d829361
This paper focuses on the task of collocation polarity disambiguation. A collocation refers to a binary tuple of a polarity word and a target (such as ⟨long, battery life⟩ or ⟨long, startup⟩), in which the sentiment orientation of the polarity word ("long") changes with different targets ("battery life" or "startup"). To disambiguate a collocation's polarity, previous work investigated the polarities of its surrounding contexts and then assigned the majority polarity to the collocation. However, these contexts are limited, so the resulting polarity is not sufficiently reliable. We therefore propose an unsupervised three-component framework that expands pseudo contexts from the web to help disambiguate a collocation's polarity. Without using any additional labeled data, experiments show that our method is effective.
Collocation Polarity Disambiguation Using Web-based Pseudo Contexts
d151872921
This paper describes a method for building a sentiment/emotion lexicon. Its originality is to combine crowdsourcing via a GWAP (Game With A Purpose) with an automated propagation algorithm, both using the JeuxDeMots lexical network as data source and substratum. We describe the game designed to collect sentiment information, as well as the principles and assumptions underlying the algorithm that propagates this information within the network. Finally, we give quantitative results and explain the methods used for the qualitative evaluation of the data obtained both from the game and from the propagation algorithm. These methods include a comparison with Emolex, another sentiment/emotion resource.
Construire un lexique de sentiments par crowdsourcing et propagation
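As a rough illustration of the propagation step described above (the game supplies seed judgements, the algorithm spreads them over the lexical network), here is a generic label-propagation sketch in Python. The graph format, damping factor and fixed-seed policy are assumptions for illustration, not the algorithm actually used over JeuxDeMots.

```python
def propagate_sentiment(graph, seeds, n_iters=20, damping=0.5):
    """graph: {node: [(neighbour, weight), ...]}; seeds: {node: polarity in [-1, 1]}.
    Non-seed nodes receive a weighted average of their neighbours' current scores."""
    scores = dict(seeds)
    for _ in range(n_iters):
        updated = {}
        for node, edges in graph.items():
            if node in seeds or not edges:      # crowd-sourced judgements stay fixed
                continue
            total_w = sum(w for _, w in edges)
            neigh = sum(w * scores.get(nb, 0.0) for nb, w in edges) / total_w
            updated[node] = damping * neigh + (1 - damping) * scores.get(node, 0.0)
        scores.update(updated)
    return scores

lexicon_graph = {"joie": [("bonheur", 1.0)],
                 "bonheur": [("joie", 1.0), ("peine", 0.2)],
                 "peine": [("bonheur", 0.2)]}
print(propagate_sentiment(lexicon_graph, seeds={"joie": 1.0, "peine": -1.0}))
```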
d5654020
We present a Korean question answering framework for restricted domains, called K-QARD. K-QARD is developed to achieve domain portability and robustness, and the framework is successfully applied to build question answering systems for several domains.
K-QARD: A Practical Korean Question Answering Framework for Restricted Domain
d8467411
This paper introduces a dialogue-based ontology authoring interface. The aim of this interface is to simplify the ontology authoring process for users. The design of the interface is based on insights that have emerged from research into human language and from explorations of authoring activities in Protégé. We discuss our initial findings regarding ontology authoring patterns and how we aim to model users' goals and intentions. We also discuss the challenges that arise when generating interesting and comprehensible feedback.
Ontology Authoring Inspired By Dialogue
d52917952
We present a study on predicting the factuality of reporting and bias of news media. While previous work has focused on studying the veracity of claims or documents, here we are interested in characterizing entire news media. These are under-studied but arguably important research problems, both in their own right and as a prior for fact-checking systems. We experiment with a large list of news websites and with a rich set of features derived from (i) a sample of articles from the target news medium, (ii) its Wikipedia page, (iii) its Twitter account, (iv) the structure of its URL, and (v) information about the Web traffic it attracts. The experimental results show sizable performance gains over the baselines, and confirm the importance of each feature type.
Predicting Factuality of Reporting and Bias of News Media Sources
d7893375
In this paper, we focus on the topic-based microblog sentiment classification task, which classifies a microblog's sentiment polarity toward a specific topic. Most existing approaches for sentiment analysis adopt a target-independent strategy, which may assign sentiments irrelevant to the given topic. We leverage non-negative matrix factorization to obtain the relevant topic words and then further incorporate target-dependent features for topic-based microblog sentiment classification. According to the experimental results, our system (NDMSCS) achieved good performance in the SIGHAN 8 Task 2.
NDMSCS: A Topic-Based Chinese Microblog Polarity Classification System
d14071324
In cross-database retrieval, the domain of queries differs from that of the retrieval target in the distribution of term occurrences. This causes incorrect term weighting in the retrieval system, which assigns to each term a retrieval weight based on the distribution of term occurrences. To resolve the problem, we propose "term distillation", a framework for query term selection in cross-database retrieval. The experiments using the NTCIR-3 patent retrieval test collection demonstrate that term distillation is effective for cross-database retrieval.
Term Distillation in Patent Retrieval
d252624438
Mastering a foreign language like English can bring better opportunities. Although multiword expressions (MWE) are associated with proficiency, they are usually neglected in work on automatically scoring language learners. We therefore study MWE-based features (i.e., occurrence and concreteness) in this work, aiming to assess their relevance for automated essay scoring. To achieve this goal, we also compare MWE features with other classic features, such as length-based, graded-resource, orthographic-neighbor, part-of-speech, morphology, dependency-relation, verb-tense, language-development, and coherence features. Although the results indicate that classic features are more significant than MWE features for automatic scoring, we observed encouraging results when looking at MWE concreteness across proficiency levels.
MWE for Essay Scoring English as a Foreign Language
d245838346
This paper is an exploration of three types of dependency distances of the sentences in a multilingual parallel corpus, in order to verify the expectation that we can obtain an objective, quantitative measure to indicate cross-linguistic variation in the syntactic-structural settings of human languages. The results indicated that pair-wise average dependency distances seem to categorize languages into several groups, and type-wise average dependency distances seem to provide us with fine-grained quantification of the syntactic properties of individual natural languages.
Three Types of Average Dependency Distances of Sentences in a Multilingual Parallel Corpus
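For readers unfamiliar with the metric, the average dependency distance of a sentence is simply the mean absolute difference between the linear positions of each dependent and its head. A minimal sketch is given below, assuming 1-based head indices as in CoNLL-U, with 0 marking the root; the example sentence and indices are illustrative.

```python
def average_dependency_distance(heads):
    """heads[i] is the 1-based head index of token i+1; 0 marks the root, which is skipped."""
    distances = [abs(head - (i + 1)) for i, head in enumerate(heads) if head != 0]
    return sum(distances) / len(distances) if distances else 0.0

# "She gave him a book": heads for (She, gave, him, a, book) with 'gave' as root
print(average_dependency_distance([2, 0, 2, 5, 2]))  # (1 + 1 + 1 + 3) / 4 = 1.5
```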
d15606320
SRI has developed a speaker-independent, continuous speech, large vocabulary speech recognition system, DECIPHER, that provides state-of-the-art performance on the DARPA standard speaker-independent resource management training and testing materials. SRI's approach is to integrate speech and linguistic knowledge into the HMM framework. This paper describes performance improvements arising from detailed phonological modeling and from the incorporation of cross-word coarticulatory constraints.
SRI's DECIPHER System
d10250647
Category Cooccurrence Restrictions and the Elimination of Metarules
d232021864
d11958702
This paper studies generation of descriptive sentences from densely annotated images. Previous work studied generation from automatically detected visual information but produced a limited class of sentences, hindered by currently unreliable recognition of activities and attributes. Instead, we collect human annotations of objects, parts, attributes and activities in images. These annotations allow us to build a significantly more comprehensive model of language generation and to study what visual information is required to generate human-like descriptions. Experiments demonstrate high-quality output and show that activity annotations and the relative spatial location of objects contribute most to producing high-quality sentences.
See No Evil, Say No Evil: Description Generation from Densely Labeled Images
d252818940
Distance metric learning has become a popular solution for few-shot Named Entity Recognition (NER). The typical setup aims to learn a similarity metric for measuring the semantic similarity between test samples and referents, where each referent represents an entity class. The effect of this setup may, however, be compromised for two reasons. First, there is typically only limited optimization of the representations of entity tokens after initialization by pre-trained language models. Second, the referents may be far from representing the corresponding entity classes due to label scarcity in the few-shot setting. To address these challenges, we propose a novel approach named COntrastive learning with Prompt guiding for few-shot NER (COPNER). We introduce a novel prompt composed of class-specific words to COPNER to serve as 1) supervision signals for conducting contrastive learning to optimize token representations; 2) metric referents for distance-metric inference on test samples. Experimental results demonstrate that COPNER outperforms state-of-the-art models by a significant margin in most cases. Moreover, COPNER shows great potential in the zero-shot setting. The source code is available at: https://github.com/AndrewHYC/COPNER.
COPNER: Contrastive Learning with Prompt Guiding for Few-shot Named Entity Recognition
d258486932
This paper presents the results of the UNLP 2023 shared task, the first Shared Task on Grammatical Error Correction for the Ukrainian language. The task included two tracks: GEC-only and GEC+Fluency. The dataset and evaluation scripts were provided to the participants, and the final results were evaluated on a hidden test set. Six teams submitted their solutions before the deadline, and four teams submitted papers that were accepted to appear in the UNLP workshop proceedings and are referred to in this report. The CodaLab leaderboard is left open for further submissions.
The UNLP 2023 Shared Task on Grammatical Error Correction for Ukrainian
d259370506
Hierarchical topic models, which can extract semantically meaningful topics from a text corpus in an unsupervised manner and automatically organise them into a topic hierarchy, have been widely used to discover the underlying semantic structure of documents. However, the existing models often assume in the prior that the topic hierarchy is a tree structure, ignoring symmetrical dependencies between topics at the same level. Moreover, the sparsity of text data often complicates the analysis. To address these issues, we propose NSEM-GMHTM as a deep topic model, with a Gaussian mixture prior distribution to improve the model's ability to adapt to sparse data, which explicitly models hierarchical and symmetric relations between topics through dependency matrices and nonlinear structural equations. Experiments on widely used datasets show that our NSEM-GMHTM generates more coherent topics and a more rational topic structure when compared to state-of-the-art baselines. Our code is available at https://github.com/nbnbhwyy/NSEM-GMHTM.
Nonlinear Structural Equation Model Guided Gaussian Mixture Hierarchical Topic Modeling
d259376483
This paper describes the system we submitted to the IWSLT 2023 multilingual speech translation track, where the input is speech in one language and the output is text in 10 target languages. Our system consists of a CNN and a Transformer: convolutional neural networks downsample the speech features and extract local information, while the Transformer extracts global features and produces the final outputs. In our system, we use speech recognition tasks to pre-train the encoder parameters, and then use a speech translation corpus to train the multilingual speech translation model. We have also adopted other methods to optimize the model, such as data augmentation and model ensembling. Our system obtains satisfactory results on the test sets of 10 languages in the MuST-C corpus.
BIT's System for Multilingual Track
d5180810
This paper presents a comparison of open source search engine development frameworks in the context of their malleability for constructing a multilingual search index. The comparison study reveals that none of these frameworks are designed for this task. This paper elicits the challenges involved in building a multilingual index. We also discuss the policy decisions and the implementation changes made to an open source framework for building such an index. As the main contribution of this work, we propose an architecture that can be used for building a multilingual index, and we list some of the open research challenges involved.
Building Multilingual Search Index using open source framework
d51867076
Successful evidence-based medicine (EBM) applications rely on answering clinical questions by analyzing large medical literature databases. In order to formulate a well-defined, focused clinical question, a framework called PICO is widely used, which identifies the sentences in a given medical text that belong to the four components: Participants/Problem (P), Intervention (I), Comparison (C) and Outcome (O). In this work, we present a Long Short-Term Memory (LSTM) neural network based model to automatically detect PICO elements. By jointly classifying subsequent sentences in the given text, we achieve state-of-the-art results on PICO element classification compared to several strong baseline models. We also make our curated data public as a benchmarking dataset so that the community can benefit from it.
PICO Element Detection in Medical Text via Long Short-Term Memory Neural Networks
d1437405
The noisy channel model approach has been successfully applied to various natural language processing tasks. The main research focus of this approach is currently adaptation methods: how to capture the characteristics of words and expressions in a target domain given example sentences in that domain. As a solution, we describe a method that enlarges the vocabulary of a language model to an almost infinite size and captures the context information of the new words. The method is especially suitable for languages in which words are not delimited by whitespace. We applied our method to a phoneme-to-text transcription task in Japanese and reduced the errors of an existing method by about 10%.
Phoneme-to-Text Transcription System with an Infinite Vocabulary
d14017344
In this paper, we present our system description for the CoNLL-2012 coreference resolution task on English, Chinese and Arabic. We investigate a projection-based model in which we first translate Chinese and Arabic into English, run a publicly available coreference system, and then use a new projection algorithm to map the coreferring entities back from English into mention candidates detected in the Chinese and Arabic source. We compare to a baseline that just runs the English coreference system on the supplied parses for Chinese and Arabic. Because our method does not beat the baseline system on the development set, we submit outputs generated by the baseline system as our final submission.
ICT: System Description for CoNLL-2012
d25281454
The use of computational solutions to problems related to authorship identification and verification has grown steadily in areas such as computing, linguistics and law. This article provides a method for identifying the authors of texts, based on a set of stylometric attributes that draw on characteristics of the Portuguese language.
Identificação de Autoria de Textos através do uso de Classes Linguísticas da Língua Portuguesa
d252819255
Being able to infer possible events related to a specific target is critical to natural language processing. One challenging task in this line is event sequence prediction, which aims at predicting a sequence of events given a goal. The existing approach models this task as a statistical induction problem, predicting a sequence of events by exploring the similarity between the given goal and known sequences of events. However, this statistical approach is complex and predicts only a limited variety of events; it also ignores the rich knowledge of external events that is important for predicting event sequences. In this paper, in order to predict more diverse events, we first reformulate the event sequence prediction problem as a sequence generation problem. Then, to leverage external event knowledge, we propose a three-stage model comprising augmentation, retrieval and generation. Experimental results on the event sequence prediction dataset show that our model outperforms existing methods, demonstrating its effectiveness.
Augmentation, Retrieval, Generation: Event Sequence Prediction with a Three-Stage Sequence-to-Sequence Approach
d243865138
Quality estimation (QE) of machine translation (MT) aims to evaluate the quality of machine-translated sentences without references and is important in practical applications of MT. Training QE models requires massive parallel data with hand-crafted quality annotations, which are time-consuming and labor-intensive to obtain. To address the absence of annotated training data, previous studies attempted to develop unsupervised QE methods. However, very few of them can be applied to both sentence- and word-level QE tasks, and they may suffer from noise in the synthetic data. To reduce the negative impact of noise, we propose a self-supervised method for both sentence- and word-level QE, which performs quality estimation by recovering masked target words. Experimental results show that our method outperforms previous unsupervised methods on several QE tasks in different language pairs and domains.
Self-Supervised Quality Estimation for Machine Translation
d748769
Operational intelligence applications in specific domains are developed using numerous natural language processing technologies and tools. A challenge for this integration is to take into account the limitations of each of these technologies in the global evaluation of the application. We present in this article a complex intelligence application for the gathering of information from the Web about recent seismic events. We present the different components needed for the development of such system, including Information Extraction, Filtering and Clustering, and the technologies behind each component. We also propose an independent evaluation of each component and an insight of their influence in the overall performance of the system.
Evaluation of a Complex Information Extraction Application in Specific Domain
d21677829
Natural Language Inference is a challenging task that has received substantial attention, and state-of-the-art models now achieve impressive test set performance in the form of accuracy scores. Here, we go beyond this single evaluation metric to examine robustness to semantically valid alterations of the input data. We identify three factors (insensitivity, polarity and unseen pairs) and compare their impact on three SNLI models under a variety of conditions. Our results demonstrate a number of strengths and weaknesses in the models' ability to generalise to new in-domain instances. In particular, while strong performance is possible on unseen hypernyms, unseen antonyms are more challenging for all the models. More generally, the models suffer from an insensitivity to certain small but semantically significant alterations, and are also often influenced by simple statistical correlations between words and training labels. Overall, we show that evaluations of NLI models can benefit from studying the influence of factors intrinsic to the models or found in the dataset used.
Behavior Analysis of NLI Models: Uncovering the Influence of Three Factors on Robustness
d10990582
This work extends the study of Germann et al. (2010) in investigating the lexical organization of verbs. Particularly, we look at the influence of frequency on the process of lexical acquisition and use. We examine data obtained from psycholinguistic action naming tasks performed by children and adults (speakers of Brazilian Portuguese), and analyze some characteristics of the verbs used by each group in terms of similarity of content, using Jaccard's coefficient, and of topology, using graph theory. The experiments suggest that younger children tend to use more frequent verbs than adults to describe events in the world.
An Investigation on the Influence of Frequency on the Lexical Organization of Verbs
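Jaccard's coefficient, mentioned above as the content-similarity measure, is straightforward to compute over the verb sets produced by the two groups. The snippet below is a generic illustration with made-up Portuguese verbs, not the study's actual data.

```python
def jaccard(verbs_a, verbs_b):
    """Similarity between two verb inventories: |intersection| / |union|."""
    a, b = set(verbs_a), set(verbs_b)
    return len(a & b) / len(a | b) if a | b else 0.0

# hypothetical verbs naming the same action, from children and from adults
children = {"pegar", "comer", "abrir"}
adults = {"segurar", "comer", "abrir", "destampar"}
print(jaccard(children, adults))  # 2 shared / 5 total = 0.4
```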
d241583427
d12418427
With performance above 97% accuracy for newspaper text, part of speech (POS) tagging might be considered a solved problem. Previous studies have shown that allowing the parser to resolve POS tag ambiguity does not improve performance. However, for grammar formalisms which use more fine-grained grammatical categories, for example TAG and CCG, tagging accuracy is much lower. In fact, for these formalisms, premature ambiguity resolution makes parsing infeasible. We describe a multi-tagging approach which maintains a suitable level of lexical category ambiguity for accurate and efficient CCG parsing. We extend this multi-tagging approach to the POS level to overcome errors introduced by automatically assigned POS tags. Although POS tagging accuracy seems high, maintaining some POS tag ambiguity in the language processing pipeline results in more accurate CCG supertagging.
Multi-Tagging for Lexicalized-Grammar Parsing
d13573624
In this paper, we present a novel approach for mining opinions from product reviews, which converts the opinion mining task into identifying product features, expressions of opinions, and relations between them. By taking advantage of the observation that many product features are phrases, a concept of phrase dependency parsing is introduced, which extends traditional dependency parsing to the phrase level. This concept is then implemented for extracting relations between product features and expressions of opinions. Experimental evaluations show that the mining task can benefit from phrase dependency parsing. Example: "I highly [recommend](1) the Canon SD500(1) to anybody looking for a compact camera that can take [good](2) pictures(2). This camera takes [amazing](3) image qualities(3) and its size(4) [cannot be beat](4)."
Phrase Dependency Parsing for Opinion Mining
d390966
This paper presents a Function Word centered, Syntax-based (FWS) solution to address phrase ordering in the context of statistical machine translation (SMT). Motivated by the observation that function words often encode grammatical relationship among phrases within a sentence, we propose a probabilistic synchronous grammar to model the ordering of function words and their left and right arguments. We improve phrase ordering performance by lexicalizing the resulting rules in a small number of cases corresponding to function words. The experiments show that the FWS approach consistently outperforms the baseline system in ordering function words' arguments and improving translation quality in both perfect and noisy word alignment scenarios.
Ordering Phrases with Function Words
d9142373
SEGMENTING NATURAL LANGUAGE BY ARTICULATORY FEATURES
d13225085
A new approach to structure-driven generation is presented that is based on a separate semantics as input structure. For the first time, a GPSG-based formalism is complemented with a system of pattern-action rules that relate the parts of a semantics to appropriate syntactic rules. This way a front-end generator can be adapted to some application system (such as a machine translation system) more easily than would be possible with many previous generators based on modern grammar formalisms.
STRUCTURE-DRIVEN GENERATION FROM SEPARATE SEMANTIC REPRESENTATIONS
d2453283
Transition-based dependency parsers generally use heuristic decoding algorithms but can accommodate arbitrarily rich feature representations. In this paper, we show that we can improve the accuracy of such parsers by considering even richer feature sets than those employed in previous systems. In the standard Penn Treebank setup, our novel features improve attachment score from 91.4% to 92.9%, giving the best results so far for transition-based parsing and rivaling the best results overall. For the Chinese Treebank, they give a significant improvement over the state of the art. An open-source release of our parser is freely available.
Transition-based Dependency Parsing with Rich Non-local Features
d2487982
Financial investors trade on the basis of information, and in particular, on the likelihood that a piece of information will impact the market. The ability to predict this within milliseconds of the information being released would be useful in applications such as algorithmic trading. We present a solution for classifying investor sentiment on internet stock message boards. Our solution builds upon prior work and examines several approaches for selecting features in a messy and sparse data set. Using a variation of the Bayes classifier with feature selection methods allows us to produce a system with better accuracy, execution performance and precision than conventional Naïve Bayes and SVM classifiers. Evaluation against author-selected sentiment labels yields an accuracy of 78.72%, compared to 57% for human annotation and 65.63% for conventional Naïve Bayes.
A Sentiment Detection Engine for Internet Stock Message Boards
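A minimal sketch of the kind of pipeline the abstract describes, a Bayes classifier combined with feature selection over sparse bag-of-words features, using scikit-learn; the toy messages, labels and the chi-squared selector are illustrative assumptions, not the authors' exact feature-selection variant.

```python
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.naive_bayes import MultinomialNB

# toy stand-ins for message-board posts and author-selected sentiment labels
messages = ["going long on this stock", "sell now before it tanks",
            "strong buy imo", "bearish on weak earnings"]
labels = ["bullish", "bearish", "bullish", "bearish"]

clf = Pipeline([
    ("bow", CountVectorizer(ngram_range=(1, 2))),
    ("select", SelectKBest(chi2, k=8)),   # keep only the most class-discriminative features
    ("nb", MultinomialNB()),
])
clf.fit(messages, labels)
print(clf.predict(["thinking of going long here"]))
```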
d219303905
d52111710
We provide a comprehensive analysis of the interactions between pre-trained word embeddings, character models and POS tags in a transition-based dependency parser. While previous studies have shown POS information to be less important in the presence of character models, we show that in fact there are complex interactions between all three techniques. In isolation each produces large improvements over a baseline system using randomly initialised word embeddings only, but combining them quickly leads to diminishing returns. We categorise words by frequency, POS tag and language in order to systematically investigate how each of the techniques affects parsing quality. For many word categories, applying any two of the three techniques is almost as good as the full combined system. Character models tend to be more important for low-frequency open-class words, especially in morphologically rich languages, while POS tags can help disambiguate high-frequency function words. We also show that large character embedding sizes help even for languages with small character sets, especially in morphologically rich languages.
An Investigation of the Interactions Between Pre-Trained Word Embeddings, Character Models and POS Tags in Dependency Parsing
d38132419
Native Language Identification (NLI) is the task of automatically identifying the native language (L1) of an individual based on their language production in a learned language. It is typically framed as a classification task where the set of L1s is known a priori. Two previous shared tasks on NLI have been organized where the aim was to identify the L1 of learners of English based on essays (2013) and spoken responses (2016) they provided during a standardized assessment of academic English proficiency. The 2017 shared task combines the inputs from the two prior tasks for the first time. There are three tracks: NLI on the essay only, NLI on the spoken response only (based on a transcription of the response and i-vector acoustic features), and NLI using both responses. We believe this makes for a more interesting shared task while building on the methods and results from the previous two shared tasks. In this paper, we report the results of the shared task. A total of 19 teams competed across the three different sub-tasks. The fusion track showed that combining the written and spoken responses provides a large boost in prediction accuracy. Multiple classifier systems (e.g. ensembles and meta-classifiers) were the most effective in all tasks, with most based on traditional classifiers (e.g. SVMs) with lexical/syntactic features.
A Report on the 2017 Native Language Identification Shared Task
d15684367
This paper compares two approaches to lexical compound word reconstruction from a speech recognizer output where compound words are decomposed. The first method has been proposed earlier and uses a dedicated language model that models compound tails in the context of the preceding words and compound heads only in the context of the tail. A novel approach models imaginable compound particle connectors as hidden events and predicts such events using a simple N-gram language model. Experiments on two Estonian speech recognition tasks show that the second approach performs consistently better and achieves high accuracy.
Automatic Compound Word Reconstruction for Speech Recognition of Compounding Languages
d19038119
Question processing is a key step in Question Answering systems. For this task, it has been shown that a good syntactic analysis of questions helps to improve the results. However, general parsers seem to present some disadvantages in question analysis. We present a specific tool under development for Spanish question analysis in a QA context: SpQA. SpQA is a parser designed to deal with the special syntactic features of Spanish questions and to cover some needs of question analysis in QA systems such as target identification. The system has been evaluated together with three Spanish general parsers. In this comparative evaluation, SpQA shows the best results in Spanish question analysis.
Question Parsing for QA in Spanish
d5246978
This paper describes our approaches to Native Language Identification (NLI) for the NLI shared task 2013. NLI as a sub area of author profiling focuses on identifying the first language of an author given a text in his second language. Researchers have reported several sets of features that have achieved relatively good performance in this task. The type of features used in such works are: lexical, syntactic and stylistic features, dependency parsers, psycholinguistic features and grammatical errors. In our approaches, we selected lexical and syntactic features based on n-grams of characters, words, Penn TreeBank (PTB) and Universal Parts Of Speech (POS) tagsets, and perplexity values of character of n-grams to build four different models. We also combine all the four models using an ensemble based approach to get the final result. We evaluated our approach over a set of 11 native languages reaching 75% accuracy.
Native Language Identification: a Simple n-gram Based Approach
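The character n-gram component of such a system can be sketched in a few lines with scikit-learn. The two toy essays, L1 labels and hyper-parameters below are purely illustrative; a real system would add the word, POS-tag and perplexity features the authors describe and combine several such models in an ensemble.

```python
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

# toy stand-ins for learner essays and their authors' native languages
essays = ["I am agree with the statement because ...",
          "In my country we have make this experience ..."]
l1_labels = ["Spanish", "German"]

nli_model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4), sublinear_tf=True),
    LinearSVC(),
)
nli_model.fit(essays, l1_labels)
print(nli_model.predict(["I am agree that this is a good idea"]))
```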
d202772508
Twitter is used for various applications such as disaster monitoring and news material gathering. In these applications, each Tweet is classified into pre-defined classes. These classes have a semantic relationship with each other and can be classified into a hierarchical structure, which is regarded as important information. Label texts of pre-defined classes themselves also include important clues for classification. Therefore, we propose a method that can consider the hierarchical structure of labels and label texts themselves. We conducted evaluation over the Text REtrieval Conference (TREC) 2018 Incident Streams (IS) track dataset, and we found that our method outperformed the methods of the conference participants.
Label Embedding using Hierarchical Structure of Labels for Twitter Classification
d251487293
Legislative debate transcripts provide citizens with information about the activities of their elected representatives, but are difficult for people to process. We propose the task of policy-focused stance detection, in which both the policy proposals under debate and the position of the speakers towards those proposals are identified. We adapt a previously existing dataset to include manual annotations of policy preferences, an established schema from political science. We evaluate a range of approaches to the automatic classification of policy preferences and speech sentiment polarity, including transformer-based text representations and a multi-task learning paradigm. We find that it is possible to identify the policies under discussion using features derived from the speeches, and that incorporating motion-dependent debate modelling, previously used to classify speech sentiment, also improves performance in the classification of policy preferences. The proposed use of contextual embeddings and a multi-task learning paradigm do not perform as well as simpler approaches. We analyse the output of the best performing system, finding that discriminating features for the task are highly domain-specific, and that speeches that address policy preferences proposed by members of the same party can be among the most difficult to predict.
Policy-focused Stance Detection in Parliamentary Debate Speeches
d13243922
Online forum discussions often contain vast amounts of questions that are the focuses of discussions. Extracting contexts and answers together with the questions will yield not only a coherent forum summary but also a valuable QA knowledge base. In this paper, we propose a general framework based on Conditional Random Fields (CRFs) to detect the contexts and answers of questions from forum threads. We improve the basic framework by Skip-chain CRFs and 2D CRFs to better accommodate the features of forums for better performance. Experimental results show that our techniques are very promising.
Using Conditional Random Fields to Extract Contexts and Answers of Questions from Online Forums
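To make the basic (linear-chain) CRF framework concrete before the skip-chain and 2D extensions, here is a small sketch using the sklearn-crfsuite package: each sentence in a thread becomes a feature dictionary and receives a question/context/answer label. The features, labels and toy thread are invented for illustration and do not reproduce the paper's feature set.

```python
import sklearn_crfsuite

# one training "sequence" = the sentences of one forum thread, in order
X_train = [[
    {"has_question_mark": True,  "same_author_as_question": True,  "position": 0},
    {"has_question_mark": False, "same_author_as_question": True,  "position": 1},
    {"has_question_mark": False, "same_author_as_question": False, "position": 2},
]]
y_train = [["QUESTION", "CONTEXT", "ANSWER"]]

crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1,
                           max_iterations=100, all_possible_transitions=True)
crf.fit(X_train, y_train)
print(crf.predict(X_train))
```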
d9771696
This report consists of two documents describing the state of the art of computer generation of natural language text. Both were prepared by a panel of individuals who are active in research on text generation. The first document assesses the state of the art, identifying four kinds of technical developments which will shape the art in the coming decade: linguistically justified grammars, knowledge representation methods, models of the reader, and models of discourse. The second document is a comprehensive bibliography on text generation, the first of its kind. In addition to citations of documents, it includes descriptions of ongoing research efforts.

Assessing Text Generation Technology

Our goal here is to assess the state of the art of text generation for two purposes: to help people who intend to apply that art in the near future and to aid in the design or selection of appropriate research. This assessment covers all of the technical methods by which computer programs create and present English text in their outputs. (For simplicity we always call the output language English.) Because text generation has not always been taken seriously from a technical point of view, it has been actively pursued only recently as a topic in artificial intelligence. As a result of this late start, much of the technology available for application today is still rather superficial. However, text generation is now such an active research topic that this superficial technology will soon be surpassed. (The last part of this report contains an extensive bibliography on the subject.)

What Techniques Are Now Available for Use in System Designs?

Two kinds of practical text generation techniques are already in general use and fairly well understood. The first is displaying previously prepared text (or canned text), and the second is producing text by direct translation of knowledge structures.

The simplest and most commonly used way to have a computer system produce text is for the implementers of the system to figure out in advance what sorts of English output will be required and then store it as text strings. The computer merely displays the text that has been stored. (For example, almost all error messages are produced in this way.) It is relatively easy to have a program produce English in this way, and the text can be complex and elegantly written if desired. Unfortunately, because the text strings can be changed independently of any knowledge structures the program might use, there is no guarantee of consistency between what the program does and what it says it does. Another problem with canned text is that all questions and answers must be anticipated in advance; for large systems, that may prove to be impossible. Finally, since one text string looks like any other as far as the computer is concerned, the computer program cannot easily have a conceptual model of what it is saying. This means that one should not expect to see much closure: satisfying 100 needs for text will not make the second 100 much easier.

Another approach to providing English output produces text by translating knowledge structures of the program directly to English. This method overcomes many of the problems with canned text, while introducing some of its own. Since the structures being transformed (or translated) are the same ones used in the program's reasoning process, consistency can be assured. Closure can be realized because transformations are written to handle large classes of knowledge structures. However, since the transformations performed are usually relatively simple, the quality of the text depends to a great degree on how the knowledge is structured. If the text is to be understandable, the knowledge used by the program must be structured so
2. Text Generation
d208031598
Event trigger extraction is an information extraction task of practical utility, yet it is challenging due to the difficulty of disambiguating word sense meaning. Previous approaches rely extensively on hand-crafted language-specific features and are applied mainly to English for which annotated datasets and Natural Language Processing (NLP) tools are available. However, the availability of such resources varies from one language to another. Recently, contextualized Bidirectional Encoder Representations from Transformers (BERT) models have established state-of-the-art performance for a variety of NLP tasks. However, there has not been much effort in exploring language transfer using BERT for event extraction.
Contextualized Cross-Lingual Event Trigger Extraction with Minimal Resources
d225062758
Multi-turn conversational Question Answering (ConvQA) is a practical task that requires understanding of the conversation history, such as previous QA pairs, the passage context, and the current question. It can be applied to a variety of scenarios involving human-machine dialogue. The major challenge of this task is that the model must consider the relevant conversation history while understanding the passage. Existing methods usually simply prepend the history to the current question, or use complicated mechanisms to model the history. This article proposes an impression feature, which uses a word-level inter attention mechanism to learn multi-oriented information from the conversation history for the input sequence: attention from history tokens to each token of the input sequence, inter attention from different history turns to each token of the input sequence, and self-attention within the input sequence, where the input sequence contains the current question and a passage. A feature selection method is then designed to emphasize the useful history turns of the conversation and weaken unnecessary information. Finally, we demonstrate the effectiveness of the proposed method on the QuAC dataset, analyze the impact of different feature selection methods, and verify the validity and reliability of the proposed features through visualization and human evaluation.
Combining Impression Feature Representation for Multi-turn Conversational Question Answering
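For the word-level inter attention described in this abstract, a minimal sketch may help: each input token attends over all conversation-history tokens and receives a weighted sum of their embeddings. The plain dot-product scoring, dimensions, and function name are assumptions for illustration, not the paper's exact formulation.

```python
# A minimal sketch of word-level inter attention from conversation-history
# tokens to each token of the input sequence (current question + passage).
# Dimensions, names, and the plain dot-product scoring are illustrative
# assumptions, not the paper's exact model.
import numpy as np

def history_to_input_attention(history: np.ndarray, inputs: np.ndarray) -> np.ndarray:
    """history: (m, d) embeddings of history tokens; inputs: (n, d) input tokens.
    Returns an (n, d) history-aware feature for each input token."""
    scores = inputs @ history.T                      # (n, m) similarity scores
    scores -= scores.max(axis=1, keepdims=True)      # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)    # softmax over history tokens
    return weights @ history                         # weighted sum of history embeddings

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    hist, inp = rng.normal(size=(6, 8)), rng.normal(size=(10, 8))
    print(history_to_input_attention(hist, inp).shape)   # (10, 8)
```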
d33730973
d233365293
The aim of the paper is twofold: (1) to automatically predict the ratings assigned by viewers to 14 categories available for TED talks in a multi-label classification task and (2) to determine what types of features drive classification accuracy for each of the categories. The focus is on features of language usage from five groups pertaining to syntactic complexity, lexical richness, register-based n-gram measures, information-theoretic measures and LIWC-style measures. We show that a Recurrent Neural Network classifier trained exclusively on within-text distributions of such features can reach relatively high levels of overall accuracy (69%) across the 14 categories. We find that features from two groups are strong predictors of the affective ratings across all categories and that there are distinct patterns of language usage for each rating category.
Language that Captivates the Audience: Predicting Affective Ratings of TED Talks in a Multi-Label Classification Task
d18458924
Due to its complexity, meeting speech provides a challenge for both transcription and annotation. While Amazon's Mechanical Turk (MTurk) has been shown to produce good results for some types of speech, its suitability for transcription and annotation of spontaneous speech has not been established. We find that MTurk can be used to produce high-quality transcription and describe two techniques for doing so (voting and corrective). We also show that, using a similar approach, high-quality annotations useful for summarization systems can be produced. In both cases, accuracy is comparable to that obtained using trained personnel.
Using the Amazon Mechanical Turk to Transcribe and Annotate Meeting Speech for Extractive Summarization
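The "voting" technique mentioned above can be illustrated with a minimal sketch: several crowd transcriptions of the same utterance are merged by majority vote. For simplicity the transcripts are assumed to be already word-aligned and of equal length; the paper's actual alignment and correction steps are not reproduced here.

```python
# A minimal sketch of the voting idea: combine several crowd transcriptions
# of one utterance by position-wise majority vote. The pre-alignment
# assumption and the example data are illustrative, not the paper's pipeline.
from collections import Counter

def vote(transcripts: list[list[str]]) -> list[str]:
    """Position-wise majority vote over equally long, word-aligned transcripts."""
    assert len({len(t) for t in transcripts}) == 1, "transcripts must be aligned"
    merged = []
    for words_at_position in zip(*transcripts):
        merged.append(Counter(words_at_position).most_common(1)[0][0])
    return merged

if __name__ == "__main__":
    workers = [
        "we will meet at noon tomorrow".split(),
        "we will meet at noon tomorrow".split(),
        "we will meat at noon tomorrow".split(),
    ]
    print(" ".join(vote(workers)))  # -> we will meet at noon tomorrow
```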
d256461037
Relational triple extraction is a critical task for natural language processing. Existing methods mainly focus on capturing semantic information but suffer from ignoring the syntactic structure of the sentence, which has been shown in the relation classification task to contain rich relational information. This is due to the absence of entity locations, which are the prerequisite for pruning noisy edges from the dependency tree, when extracting relational triples. In this paper, we propose a unified framework to tackle this challenge and incorporate syntactic information for relational triple extraction. First, we propose to automatically contract the dependency tree into a core relational topology and eliminate redundant information with graph pooling operations. Then, we propose a symmetrical expanding path with graph unpooling operations to fuse the contracted core syntactic interactions with the original sentence context. We also propose a bipartite graph matching objective function to capture the reflections between the core topology and golden relational facts. Since our model shares similar contracting and expanding paths with encoder-decoder models like U-Net, we name our model Relation U-Net (RelU-Net). We conduct experiments on several datasets and the results demonstrate the effectiveness of our method.
RelU-Net: Syntax-aware Graph U-Net for Relational Triple Extraction
d248779860
Many e-commerce websites provide Product-related Question Answering (PQA) platforms where potential customers can ask questions related to a product, and other consumers can post an answer to that question based on their experience. Recently, there has been a growing interest in providing automated responses to product questions. In this paper, we investigate the suitability of the generative approach for PQA. We use state-of-the-art generative models proposed by Deng et al. (2020) and Lu et al. (2020) for this purpose. On closer examination, we find several drawbacks in this approach: (1) input reviews are not always utilized significantly for answer generation, (2) the performance of the models is abysmal when answering numerical questions, and (3) many of the generated answers contain phrases like "I do not know" which are taken from the reference answers in the training data, and these answers do not convey any information to the customer. Although these approaches achieve a high ROUGE score, the score does not reflect these shortcomings of the generated answers. We hope that our analysis will lead to more rigorous PQA approaches, and that future research will focus on addressing these shortcomings in PQA.
Investigating the Generative Approach for Question Answering in E-Commerce
d6940894
In this paper we investigate a number of questions relating to the identification of the domain of a term by domain classification of the document in which the term occurs. We propose and evaluate a straightforward method for domain classification of documents in 24 languages that exploits a multilingual thesaurus and Wikipedia. We investigate and provide quantitative results about the extent to which humans agree about the domain classification of documents and terms, as well as the extent to which terms are likely to "inherit" the domain of their parent document.
Assigning Terms to Domains by Document Classification
d17867477
Most NLP applications work under the assumption that a user input is error-free; thus, word segmentation (WS) for written languages that use word boundary markers (WBMs), such as spaces, has been regarded as a trivial issue. However, noisy real-world texts, such as blogs, e-mails, and SMS, may contain spacing errors that require correction before further processing may take place. For the Korean language, many researchers have adopted a traditional WS approach, which eliminates all spaces in the user input and re-inserts proper word boundaries. Unfortunately, such an approach often exacerbates the word spacing quality for user input, which has few or no spacing errors; such is the case, because a perfect WS model does not exist. In this paper, we propose a novel WS method that takes into consideration the initial word spacing information of the user input. Our method generates a better output than the original user input, even if the user input has few spacing errors. Moreover, the proposed method significantly outperforms a state-of-the-art Korean WS model when the user input initially contains less than 10% spacing errors, and performs comparably for cases containing more spacing errors. We believe that the proposed method will be a very practical pre-processing module.
A Novel Word Segmentation Approach for Written Languages with Word Boundary Markers
d164457536
d5090230
Students choose to use flashcard applications available on the Internet to help memorize word-meaning pairs. This is helpful for tests such as the GRE, TOEFL or IELTS, which emphasize verbal skills. However, the monotonous nature of flashcard applications can be diminished with the help of cognitive science through the testing effect. Experimental evidence has shown that memory tests are an important tool for long-term retention (Roediger and Karpicke, 2006). Based on this evidence, we developed a novel flashcard application called "V for Vocab" that implements short-answer-based tests for learning new words. Furthermore, we aid this by implementing our short answer grading algorithm, which automatically scores the user's answer. The algorithm makes use of an alternate thesaurus instead of the traditional WordNet and delivers state-of-the-art performance on popular word similarity datasets. We also look to lay the foundation for analysis based on implicit data collected from our application.
V for Vocab: An Intelligent Flashcard Application
d10464407
Lexical decay is the phenomenon underlying the dating techniques known as "glottochronology" and "lexicostatistics." Much of the controversial nature of work in this field is the result of extremely imprecise foundations and lack of attention to the underlying statistical and semantic models. A satisfactory semantic model can be found in the concept of the semantic atom. Notwithstanding a number of philosophical objections, the semantic atom is an operationally feasible support for a lexicon which is a semantic subset of all possible meanings and, at the same time, exhausts the vocabulary of a language. Lexical decay is the process by which the lexical item covering an atom is replaced by another lexical item. Exponential lexical preservation is, in this model, directly analogous to decay phenomena in nuclear physics. Consistency requires that the decay process involved in exponentially preserved vocabularies be a Poisson process. This shows how to form test vocabularies for dating and proves that presently used vocabularies are not correctly formed. Dialectation studies show that historically diverging populations must be modelled by correlated Poisson processes. Definitive statistical treatment of these questions is not possible at this time, but much desirable research can be indicated.
1965 International Conference on Computational Linguistics: Models of Lexical Decay
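The exponential-preservation model sketched in this abstract can be written out as a short worked formula. The notation below is ours, not the paper's, and assumes a constant replacement rate per semantic atom.

```latex
% Retention under a constant-rate (Poisson) replacement process;
% notation is illustrative, not Kleinecke's own.
% Probability that an atom is still covered by its original lexical item at time t:
\[
  P(\text{retained at } t) = e^{-\lambda t},
\]
% so for a test vocabulary of N independent atoms the expected retained share is
\[
  \frac{\mathbb{E}[N_{\text{retained}}(t)]}{N} = e^{-\lambda t},
\]
% and the number of replacements of a single atom in [0, t] is Poisson distributed:
\[
  \Pr[K = k] = \frac{(\lambda t)^{k} e^{-\lambda t}}{k!}.
\]
```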
d219300331
This paper describes the CUNI submission to WAT 2018 for the English-Hindi translation task using a transfer learning technique which has proven effective under low-resource conditions. We have used the Transformer model and utilized an English-Czech parallel corpus as an additional data source. Our simple transfer learning approach first trains a "parent" model for a high-resource language pair (English-Czech) and then continues the training on the low-resource (English-Hindi) pair by replacing the training corpus. This setup improves the performance compared with the baseline and, in combination with back-translation of Hindi monolingual data, it allowed us to win the English-Hindi task. The automatic scoring by BLEU did not correlate well with human judgments. Method Description: We utilize transfer learning based on the work of Kocmi and Bojar (2018). The method of training is
CUNI NMT System for WAT 2018 Translation Tasks
d852013
In this paper we examine how the differences in modelling between different data driven systems performing the same NLP task can be exploited to yield a higher accuracy than the best individual system. We do this by means of an experiment involving the task of morpho-syntactic wordclass tagging. Four well-known tagger generators (Hidden Markov Model, Memory-Based, Transformation Rules and Maximum Entropy) are trained on the same corpus data. After comparison, their outputs are combined using several voting strategies and second stage classifiers. All combination taggers outperform their best component, with the best combination showing a 19.1% lower error rate than the best individual tagger.
Improving Data Driven Wordclass Tagging by System Combination
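One of the voting strategies this abstract alludes to can be sketched as weighted voting, where each component tagger's vote counts in proportion to its held-out accuracy. The taggers, tags, and weights below are invented for illustration and do not reproduce the paper's exact combination schemes.

```python
# A minimal sketch of weighted voting over component taggers' outputs.
# Tagger names, tag sequences, and accuracy weights are hypothetical.
from collections import defaultdict

def weighted_vote(outputs: dict[str, list[str]], weights: dict[str, float]) -> list[str]:
    """outputs: tagger name -> tag sequence for the same tokens."""
    length = len(next(iter(outputs.values())))
    combined = []
    for i in range(length):
        scores = defaultdict(float)
        for name, tags in outputs.items():
            scores[tags[i]] += weights[name]       # each tagger votes with its weight
        combined.append(max(scores, key=scores.get))
    return combined

if __name__ == "__main__":
    outputs = {
        "hmm":    ["DT", "NN", "VBZ", "JJ"],
        "memory": ["DT", "NN", "VBZ", "RB"],
        "maxent": ["DT", "JJ", "VBZ", "JJ"],
    }
    weights = {"hmm": 0.96, "memory": 0.97, "maxent": 0.965}   # hypothetical accuracies
    print(weighted_vote(outputs, weights))  # -> ['DT', 'NN', 'VBZ', 'JJ']
```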
d8314368
Microblogs such as Twitter, Facebook, and Sina Weibo (China's equivalent of Twitter) are a remarkable linguistic resource. In contrast to content from edited genres such as newswire, microblogs contain discussions of virtually every topic by numerous individuals in different languages and dialects and in different styles. In this work, we show that some microblog users post "self-translated" messages targeting audiences who speak different languages, either by writing the same message in multiple languages or by retweeting translations of their original posts in a second language. We introduce a method for finding and extracting this naturally occurring parallel data. Identifying the parallel content requires solving an alignment problem, and we give an optimally efficient dynamic programming algorithm for this. Using our method, we extract nearly 3M Chinese-English parallel segments from Sina Weibo using a targeted crawl of Weibo users who post in multiple languages. Additionally, from a random sample of Twitter, we obtain substantial amounts of parallel data in multiple language pairs. Evaluation is performed by assessing the accuracy of our extraction approach relative to a manual annotation as well as *
Mining Parallel Corpora from Sina Weibo and Twitter
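The dynamic-programming alignment step described in this abstract can be illustrated with a small monotone-alignment sketch: given candidate source and target word sequences from one post, find the alignment that maximizes a similarity score. The toy bilingual lexicon, gap penalty, and scores below are assumptions for illustration, not the paper's algorithm or resources.

```python
# A minimal Needleman-Wunsch-style monotone alignment sketch; the lexicon and
# scores are toy assumptions, not the paper's extraction algorithm.

TOY_LEXICON = {("你好", "hello"), ("世界", "world")}

def sim(src: str, tgt: str) -> float:
    return 1.0 if (src, tgt) in TOY_LEXICON else -0.5

def align(src: list[str], tgt: list[str]) -> list[tuple[str, str]]:
    n, m, gap = len(src), len(tgt), -0.25
    score = [[0.0] * (m + 1) for _ in range(n + 1)]
    back = [[None] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        score[i][0], back[i][0] = score[i - 1][0] + gap, "up"
    for j in range(1, m + 1):
        score[0][j], back[0][j] = score[0][j - 1] + gap, "left"
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            candidates = [
                (score[i - 1][j - 1] + sim(src[i - 1], tgt[j - 1]), "diag"),
                (score[i - 1][j] + gap, "up"),
                (score[i][j - 1] + gap, "left"),
            ]
            score[i][j], back[i][j] = max(candidates)
    pairs, i, j = [], n, m
    while i > 0 or j > 0:                      # trace back the best path
        move = back[i][j]
        if move == "diag":
            pairs.append((src[i - 1], tgt[j - 1])); i, j = i - 1, j - 1
        elif move == "up":
            i -= 1
        else:
            j -= 1
    return pairs[::-1]

if __name__ == "__main__":
    print(align(["你好", "世界"], ["hello", "world"]))
```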
d235097647
d221845203
Transformer models have advanced the state of the art in many Natural Language Processing (NLP) tasks. In this paper, we present a new Transformer architecture, Extended Transformer Construction (ETC), that addresses two key challenges of standard Transformer architectures, namely scaling input length and encoding structured inputs. To scale attention to longer inputs, we introduce a novel global-local attention mechanism between global tokens and regular input tokens. We also show that combining global-local attention with relative position encodings and a Contrastive Predictive Coding (CPC) pretraining objective allows ETC to encode structured inputs. We achieve state-of-the-art results on four natural language datasets requiring long and/or structured inputs.
ETC: Encoding Long and Structured Inputs in Transformers
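The global-local attention pattern mentioned in this abstract can be visualized with a small mask-construction sketch: a few global tokens attend to (and are attended by) every position, while regular tokens only see a local window around themselves. The window size, token layout, and function name are illustrative assumptions, not ETC's actual configuration or implementation.

```python
# A minimal sketch of a global-local attention mask; parameters are illustrative.
import numpy as np

def global_local_mask(n_global: int, n_regular: int, radius: int) -> np.ndarray:
    n = n_global + n_regular
    mask = np.zeros((n, n), dtype=bool)
    mask[:n_global, :] = True          # global tokens attend everywhere
    mask[:, :n_global] = True          # every token attends to the global tokens
    for i in range(n_global, n):       # regular tokens: local window only
        lo = max(n_global, i - radius)
        hi = min(n, i + radius + 1)
        mask[i, lo:hi] = True
    return mask

if __name__ == "__main__":
    print(global_local_mask(n_global=2, n_regular=8, radius=1).astype(int))
```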
d11566764
This paper investigates using prosodic information in the form of ToBI break indexes for parsing spontaneous speech. We revisit two previously studied approaches, one that hurt parsing performance and one that achieved minor improvements, and propose a new method that aims to better integrate prosodic breaks into parsing. Although these approaches can improve the performance of basic probabilistic context free grammar (PCFG) parsers, they all fail to produce fine-grained PCFG models with latent annotations (PCFG-LA) (Matsuzaki et al., 2005; Petrov and Klein, 2007) that perform significantly better than the baseline PCFG-LA model that does not use break indexes, partially due to mis-alignments between automatic prosodic breaks and true phrase boundaries. We propose two alternative ways to restrict the search space of the prosodically enriched parser models to the n-best parses from the baseline PCFG-LA parser to avoid egregious parses caused by incorrect breaks. Our experiments show that all of the prosodically enriched parser models can then achieve significant improvement over the baseline PCFG-LA parser.
Appropriately Handled Prosodic Breaks Help PCFG Parsing
d6876666
Abduction is the inference to the best explanation. Many tasks in natural language understanding, such as word-sense disambiguation [1], local pragmatics [4], metaphor interpretation [3], and plan recognition [5, 8], can be viewed as abduction. NUBA (Natural-language Understanding By Abduction) is a natural language understanding system where syntactic, semantic, and discourse analysis are performed by abductive inference. The task of understanding is viewed as finding the structural relationships between unstructured inputs. That is, to understand is to seek the best explanation of how the inputs are coherently related. From this abductive perspective, the goal of the parser is to explain how the words in a sentence are structured according to syntactic relationships; the goal of the semantic interpreter is to find the semantic relationships among the content words in a sentence; the goal of the discourse analyzer is to show how the events mentioned in the sentences fit together to form a coherent plan. Although the abductive formulation of natural language understanding tasks results in significant simplifications [4], the computational complexity of abductive inference presents a serious problem. Our solution to this problem is obvious abduction [6], a model of abductive inference that covers the kinds of abductive inferences people perform without apparent effort, such as parsing, plan recognition, and diagnosis. Obvious abduction uses a network to represent the domain knowledge. Observations correspond to nodes in the network annotated with a set of attribute-value pairs. An explanation of the observations is a generalized subtree of the network that connects all the observations. This connection is a coherent set of relationships between the observations and therefore explains how they are related.
University of Manitoba: Description of the NUBA System as Used for MUC-5
d51878269
Topic models with sparsity enhancement have been proven to be effective at learning discriminative and coherent latent topics of short texts, which is critical to many scientific and engineering applications. However, the extensions of these models require carefully tailored graphical models and re-deduced inference algorithms, limiting their variations and applications. We propose a novel sparsity-enhanced topic model, Neural Sparse Topical Coding (NSTC), based on a sparsity-enhanced topic model called Sparse Topical Coding (STC). It focuses on replacing the complex inference process with back propagation, which makes the model easy to extend. Moreover, the external semantic information of words in word embeddings is incorporated to improve the representation of short texts. To illustrate the flexibility offered by the neural network based framework, we present three extensions based on NSTC without re-deduced inference algorithms. Experiments on Web Snippet and 20Newsgroups datasets demonstrate that our models outperform existing methods.
Neural Sparse Topical Coding
d881437
We propose a new approach to the task of fine-grained entity type classification based on label embeddings that allows for information sharing among related labels. Specifically, we learn an embedding for each label and each feature such that labels which frequently co-occur are close in the embedded space. We show that it outperforms state-of-the-art methods on two fine-grained entity classification benchmarks and that the model can exploit the finer-grained labels to improve classification of standard coarse types.
Embedding Methods for Fine Grained Entity Type Classification
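The shared embedding space described in this abstract can be sketched as a simple scoring rule: a mention's aggregated feature embedding is compared to each label embedding by a dot product. The feature names, label set, and random placeholder vectors below are assumptions for illustration; in the paper the embeddings are learned jointly from data.

```python
# A minimal scoring sketch in the spirit of the label-embedding approach;
# embeddings here are random placeholders rather than learned parameters.
import numpy as np

rng = np.random.default_rng(0)
FEATURES = ["head=einstein", "context=physicist", "shape=Xx"]
LABELS = ["/person", "/person/scientist", "/organization"]
feat_emb = {f: rng.normal(size=16) for f in FEATURES}
label_emb = {l: rng.normal(size=16) for l in LABELS}

def score_labels(mention_features: list[str]) -> dict[str, float]:
    # Aggregate the mention's feature embeddings, then score each label
    # by dot product in the shared space.
    mention_vec = np.mean([feat_emb[f] for f in mention_features], axis=0)
    return {label: float(mention_vec @ vec) for label, vec in label_emb.items()}

if __name__ == "__main__":
    scores = score_labels(["head=einstein", "context=physicist"])
    print(max(scores, key=scores.get), scores)
```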
d241583236
d244464114
Multi-label toxicity detection is highly prominent, with many research groups, companies, and individuals engaging with it through shared tasks and dedicated venues. This paper describes a cross-lingual approach to annotating multi-label text classification on a newly developed Dutch language dataset, using a model trained on English data. We present an ensemble model of one Transformer model and an LSTM using Multilingual embeddings. The combination of multilingual embeddings and the Transformer model improves performance in a cross-lingual setting.
A Dutch Dataset for Cross-lingual Multi-label Toxicity Detection
d1992250
We present an LSTM approach to deletion-based sentence compression where the task is to translate a sentence into a sequence of zeros and ones, corresponding to token deletion decisions. We demonstrate that even the most basic version of the system, which is given no syntactic information (no PoS or NE tags, or dependencies) or desired compression length, performs surprisingly well: around 30% of the compressions from a large test set could be regenerated. We compare the LSTM system with a competitive baseline which is trained on the same amount of data but is additionally provided with all kinds of linguistic features. In an experiment with human raters the LSTM-based model outperforms the baseline achieving 4.5 in readability and 3.8 in informativeness.
Sentence Compression by Deletion with LSTMs
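The task formulation in this abstract is easy to make concrete: the sentence is mapped to a keep/delete decision per token and the compression is read off by dropping deleted tokens. The decision sequence below is hand-picked for illustration; in the paper it is predicted token by token with an LSTM.

```python
# A minimal sketch of deletion-based compression: apply a keep/delete mask.
# The example sentence and mask are illustrative, not from the paper's data.
def apply_deletions(tokens: list[str], keep: list[int]) -> str:
    assert len(tokens) == len(keep)
    return " ".join(tok for tok, k in zip(tokens, keep) if k == 1)

if __name__ == "__main__":
    tokens = "the company said on Tuesday that profits rose sharply".split()
    keep   = [0, 1, 1, 0, 0, 0, 1, 1, 0]          # hypothetical model output
    print(apply_deletions(tokens, keep))           # -> company said profits rose
```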
d1263341
The growing interest in open-domain question answering is limited by the lack of evaluation and training resources. To overcome this resource bottleneck for German, we propose a novel methodology to acquire new question-answer pairs for system evaluation that relies on volunteer collaboration over the Internet. Utilizing Wikipedia, a popular free online encyclopedia available in several languages, we show that the data acquisition problem can be cast as a Web experiment. We present a Web-based annotation tool and carry out a distributed data collection experiment. The data gathered from the mostly anonymous contributors is compared to a similar dataset produced in-house by domain experts on the one hand, and the German questions from the CLEF QA 2004 effort on the other hand. Our analysis of the datasets suggests that using our novel method a medium-scale evaluation resource can be built at very small cost in a short period of time. The technique and software developed here are readily applicable to other languages where free online encyclopedias are available, and our resulting corpus is likewise freely available.
Building an Evaluation Corpus for German Question Answering by Harvesting Wikipedia
d10859435
We are interested in extracting semantic structures from spoken utterances generated within conversational systems. Current Spoken Language Understanding systems rely either on hand-written semantic grammars or on flat attribute-value sequence labeling. While the former approach is known to be limited in coverage and robustness, the latter lacks detailed relations amongst attribute-value pairs. In this paper, we describe and analyze the human annotation process of rich semantic structures in order to train semantic statistical parsers. We have annotated spoken conversations from both a human-machine and a human-human spoken dialog corpus. Given a sentence of the transcribed corpora, domain concepts and other linguistic features are annotated, ranging from e.g. part-of-speech tagging and constituent chunking, to more advanced annotations, such as syntactic, dialog act and predicate argument structure. In particular, the two latter annotation layers appear to be promising for the design of complex dialog systems. Statistics and mutual information estimates amongst such features are reported and compared across corpora.
Annotating Spoken Dialogs: from Speech Segments to Dialog Acts and Frame Semantics
d44091147
Supervised event extraction systems are limited in their accuracy due to the lack of available training data. We present a method for self-training event extraction systems by bootstrapping additional training data. This is done by taking advantage of the occurrence of multiple mentions of the same event instances across newswire articles from multiple sources. If our system can make a high-confidence extraction of some mentions in such a cluster, it can then acquire diverse training examples by adding the other mentions as well. Our experiments show significant performance improvements on multiple event extractors over ACE 2005 and TAC-KBP 2015 datasets.
Semi-Supervised Event Extraction with Paraphrase Clusters
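The bootstrapping loop described in this abstract can be sketched in a few lines: when the base extractor is highly confident about one mention of an event reported across a cluster of articles, the remaining mentions in that cluster are added as new training examples. The extractor interface, confidence threshold, and cluster format below are assumptions for illustration, not the paper's implementation.

```python
# A minimal sketch of self-training via clusters of co-reporting articles;
# the extractor signature, threshold, and data are hypothetical.
def bootstrap(extractor, clusters, threshold=0.9):
    """clusters: lists of sentences believed to report the same event.
    extractor(sentence) -> (event_label, confidence)."""
    new_examples = []
    for cluster in clusters:
        predictions = [extractor(sentence) for sentence in cluster]
        confident = [(s, lab) for s, (lab, conf) in zip(cluster, predictions)
                     if conf >= threshold]
        if confident:
            _, label = confident[0]
            # Project the confident label onto every mention in the cluster.
            new_examples.extend((sentence, label) for sentence in cluster)
    return new_examples

if __name__ == "__main__":
    def toy_extractor(sentence):
        return ("Attack", 0.95) if "bombed" in sentence else ("Attack", 0.40)

    clusters = [["Rebels bombed the convoy.",
                 "The convoy was hit in an explosion."]]
    for example in bootstrap(toy_extractor, clusters):
        print(example)
```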
d16456504
We investigate the combination of several sources of information for the purpose of subjectivity recognition and polarity classification in meetings. We focus on features from two modalities, transcribed words and acoustics, and we compare the performance of three different textual representations: words, characters, and phonemes. Our experiments show that character-level features outperform word-level features for these tasks, and that a careful fusion of all features yields the best performance. 1
Multimodal Subjectivity Analysis of Multiparty Conversation
d1589010
A content selection component determines which information should be conveyed in the output of a natural language generation system. We present an efficient method for automatically learning content selection rules from a corpus and its related database. Our modeling framework treats content selection as a collective classification problem, thus allowing us to capture contextual dependencies between input items. Experiments in a sports domain demonstrate that this approach achieves a substantial improvement over context-agnostic methods.
Collective Content Selection for Concept-To-Text Generation
d10887722
In this paper, we present MADAMIRA, a system for morphological analysis and disambiguation of Arabic that combines some of the best aspects of two previously commonly used systems for Arabic processing, MADA (Habash and Rambow, 2005; Habash et al., 2013) and AMIRA (Diab et al., 2007). MADAMIRA improves upon the two systems with a more streamlined Java implementation that is more robust, portable, extensible, and is faster than its ancestors by more than an order of magnitude. We also discuss an online demo (see http://nlp.ldeo.columbia.edu/madamira/) that highlights these aspects.
MADAMIRA: A Fast, Comprehensive Tool for Morphological Analysis and Disambiguation of Arabic
d236460191
In this work, we describe our system submission to the SemEval 2021 Task 11: NLP Contribution Graph Challenge. We attempt all three sub-tasks in the challenge and report our results. Subtask 1 aims to identify the contributing sentences in a given publication. Subtask 2 follows from Subtask 1 to extract the scientific term and predicate phrases from the identified contributing sentences. The final Subtask 3 entails extracting triples (subject, predicate, object) from the phrases and categorizing them under one or more defined information units. With the NLPContributionGraph Shared Task, the organizers formalized the building of a scholarly contributions-focused graph over NLP scholarly articles as an automated task. Our approaches include a BERT-based classification model for identifying the contributing sentences in a research publication, rule-based dependency parsing for phrase extraction, followed by a CNN-based model for information unit classification and a set of rules for triple extraction. The quantitative results show that we obtain the 5th, 5th, and 7th rank, respectively, in the three evaluation phases. We make our code available at https://github.com/HardikArora17/SemEval-2021-INNOVATORS
INNOVATORS at SemEval-2021 Task-11: A Dependency Parsing and BERT-based model for Extracting Contribution Knowledge from Scientific Papers
d3841188
Personal name disambiguation has become a popular research topic, as it provides a way to incorporate semantic understanding into information retrieval. In this campaign, we explore Chinese personal name disambiguation in news. In order to examine how well disambiguation technologies work, we concentrate on news articles, which are well formatted and whose genre is well studied. We then design a diagnosis test to explore the impact of Chinese word segmentation on personal name disambiguation.
The Chinese Persons Name Disambiguation Evaluation: Exploration of Personal Name Disambiguation in Chinese News
d220835108
d219310262
d5615997
A central topic in natural language processing is the design of lexical and syntactic features suitable for the target application. In this paper, we study convolution dependency tree kernels for automatic engineering of syntactic and semantic patterns exploiting lexical similarities. We define efficient and powerful kernels for measuring the similarity between dependency structures, whose surface forms of the lexical nodes are in part or completely different. The experiments with such kernels for question classification show unprecedented results, e.g., a 41% error reduction over the former state-of-the-art. Additionally, semantic role classification confirms the benefit of semantic smoothing for dependency kernels.
Structured Lexical Similarity via Convolution Kernels on Dependency Trees