d3075215
We introduce a novel precedence reordering approach based on a dependency parser for statistical machine translation (SMT) systems. Similar to other preprocessing reordering approaches, our method can efficiently incorporate linguistic knowledge into SMT systems without increasing the complexity of decoding. For a set of five subject-object-verb (SOV) order languages, we show significant improvements in BLEU scores when translating from English, compared to other reordering approaches, in state-of-the-art phrase-based SMT systems.
Using a Dependency Parser to Improve SMT for Subject-Object-Verb Languages
d6203313
Our research is concerned with the development of robotic systems which can support people in household environments, for example by taking care of elderly people. A central goal of our research is to create robot systems which are able to learn and communicate about a given environment without the need for a specially trained user. For communication with such users it is necessary that the robot is able to communicate multimodally, which especially includes the ability to communicate in natural language. We believe that the ability to communicate naturally in multimodal communication must be supported by the ability to access contextual information, with topical knowledge being an important aspect of this contextual information. Therefore, we are currently developing a topic tracking system for situated human-robot communication on our robot systems. This paper describes the BITT (Bielefeld Topic Tracking) corpus which we built in order to develop and evaluate our system. The corpus consists of human-robot communication sequences about a home-like environment, delivering access to the information sources a multimodal topic tracking system requires.
BITT: A Corpus for Topic Tracking Evaluation on Multimodal Human-Robot-Interaction
d763569
Recent "visual worlds" studies, wherein researchers study language in context by monitoring eye-movements in a visual scene during sentence processing, have revealed much about the interaction of diverse information sources and the time course of their influence on comprehension. In this study, five experiments that trade off scene context with a variety of linguistic factors are modelled with a Simple Recurrent Network modified to integrate a scene representation with the standard incremental input of a sentence. The results show that the model captures the qualitative behavior observed during the experiments, while retaining the ability to develop the correct interpretation in the absence of visual input.
A Connectionist Model of Anticipation in Visual Worlds
d410282
We propose a fast and reliable question-answering (QA) system in Korean, which uses a predictive answer indexer based on a 2-pass scoring method. The indexing process is as follows. The predictive answer indexer first extracts all answer candidates in a document. Then, using the 2-pass scoring method, it gives scores to the adjacent content words that are closely related with each answer candidate. Next, it stores the weighted content words with each candidate into a database. Using this technique, along with a complementary analysis of questions, the proposed QA system saves response time and enhances the precision.
A Reliable Indexing Method for a Practical QA System
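The indexing step above can be pictured with a small Python sketch. The helpers extract_candidates, content_words, and score stand in for the candidate extraction and the 2-pass scoring described in the abstract; they are hypothetical placeholders, not the authors' implementation.

```python
def build_predictive_index(documents, extract_candidates, content_words, score):
    """Predictive answer indexing sketch: for every answer candidate found in a
    document, store nearby content words with weights reflecting how strongly
    they are associated with that candidate (scoring function assumed given)."""
    index = {}  # candidate -> list of (content_word, weight, doc_id)
    for doc_id, text in documents.items():
        for candidate, cand_pos in extract_candidates(text):
            for word, word_pos in content_words(text):
                weight = score(candidate, cand_pos, word, word_pos)
                if weight > 0:
                    index.setdefault(candidate, []).append((word, weight, doc_id))
    return index
```

At query time, such an index would let the system score stored candidates directly against the question's content words instead of re-scanning documents, which is where the response-time savings would come from.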
d18949950
This paper addresses the problem of related entity extraction and focuses on extracting related persons as a case study. The proposed method builds on a search engine. Specifically, we mine candidate related persons for a query person q using q's search results and the query logs containing q. The acquired candidates are then automatically rated and ranked using a SVM regression model that investigates multiple features. Experimental results on a set of 200 randomly sampled query persons show that the precision of the extracted top-1, 5, and 10 related persons exceeds 91%, 90%, and 84%, respectively, which significantly outperforms a state-of-the-art baseline.
Harvesting Related Entities with a Search Engine *
d10226779
Morphological segmentation data for the METU-Sabancı Turkish Treebank is provided in this paper. The generalized lexical forms of the morphemes, which the treebank previously lacked, are added to the treebank. This data may be used to train POS taggers that use stemmer outputs to map these lexical forms to morphological tags.
Morpheme Segmentation in the METU-Sabancı Turkish Treebank
d220046499
Contextualized representations (e.g. ELMo, BERT) have become the default pretrained representations for downstream NLP applications. In some settings, this transition has rendered their static embedding predecessors (e.g. Word2Vec, GloVe) obsolete. As a side-effect, we observe that older interpretability methods for static embeddings -while more mature than those available for their dynamic counterparts -are underutilized in studying newer contextualized representations. Consequently, we introduce simple and fully general methods for converting from contextualized representations to static lookup-table embeddings which we apply to 5 popular pretrained models and 9 sets of pretrained weights. Our analysis of the resulting static embeddings notably reveals that pooling over many contexts significantly improves representational quality under intrinsic evaluation. Complementary to analyzing representational quality, we consider social biases encoded in pretrained representations with respect to gender, race/ethnicity, and religion and find that bias is encoded disparately across pretrained models and internal layers even for models that share the same training data. Concerningly, we find dramatic inconsistencies between social bias estimators for word embeddings.
Interpreting Pretrained Contextualized Representations via Reductions to Static Embeddings
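One way to make the reduction from contextualized to static embeddings concrete is to pool a word's contextualized vectors over many occurrences. The sketch below assumes an encode_fn that maps a token list to one vector per token (for example a wrapper around a pretrained model); it illustrates the pooling idea, not the exact procedure of the paper.

```python
import numpy as np

def contextual_to_static(contexts_per_word, encode_fn, pooling="mean"):
    """Build a static lookup-table embedding by pooling contextualized vectors
    of each word over many sentence contexts.

    contexts_per_word: dict word -> list of (sentence_tokens, token_index)
    encode_fn: assumed callable, tokens -> array of shape (len(tokens), dim)
    """
    static = {}
    for word, occurrences in contexts_per_word.items():
        vectors = [encode_fn(tokens)[idx] for tokens, idx in occurrences]
        stacked = np.stack(vectors)
        if pooling == "mean":
            static[word] = stacked.mean(axis=0)
        elif pooling == "max":
            static[word] = stacked.max(axis=0)
        else:
            raise ValueError(f"unknown pooling: {pooling}")
    return static
```

The abstract's finding that pooling over many contexts improves representational quality corresponds here to supplying more occurrences per word before the mean or max is taken.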
d218973978
d252357829
Korean is a language with complex morphology that uses spaces at larger-than-word boundaries, unlike other East-Asian languages. While morpheme-based text generation can provide significant semantic advantages compared to commonly used character-level approaches, Korean morphological analyzers only provide a sequence of morpheme-level tokens, losing information in the tokenization process. Two crucial issues are the loss of spacing information and subcharacter-level morpheme normalization, both of which make it difficult to reconstruct the original input string from the tokenization result, deterring application to generative tasks. As this problem originates from the conventional scheme used when creating a POS tagging corpus, we propose an improvement to the existing scheme, which makes it friendlier to generative tasks. On top of that, we suggest a fully automatic annotation of a corpus by leveraging public analyzers. We vote on the surface forms and POS tags from their outputs and fill the sequence with the selected morphemes, yielding tokenization of decent quality that incorporates spacing information. Our scheme is verified via an evaluation done on an external corpus, and subsequently, it is adapted to Korean Wikipedia to construct an open, permissive resource. We compare morphological analyzer performance trained on our corpus with existing methods, then perform an extrinsic evaluation on a downstream task.
OpenKorPOS: Democratizing Korean Tokenization with Voting-Based Open Corpus Annotation
d209461005
An increasing number of natural language processing papers address the effect of bias on predictions, introducing mitigation techniques at different parts of the standard NLP pipeline (data and models). However, these works have been conducted individually, without a unifying framework to organize efforts within the field. This situation leads to repetitive approaches, and focuses overly on bias symptoms/effects, rather than on their origins, which could limit the development of effective countermeasures. In this paper, we propose a unifying predictive bias framework for NLP. We summarize the NLP literature and suggest general mathematical definitions of predictive bias. We differentiate two consequences of bias: outcome disparities and error disparities, as well as four potential origins of biases: label bias, selection bias, model overamplification, and semantic bias. Our framework serves as an overview of predictive bias in NLP, integrating existing work into a single structure, and providing a conceptual baseline for improved frameworks.
Predictive Biases in Natural Language Processing Models: A Conceptual Framework and Overview
d198679280
Open-domain dialog systems are difficult to evaluate. The current best practice for analyzing and comparing these dialog systems is the use of human judgments. However, the lack of standardization in evaluation procedures, and the fact that model parameters and code are rarely published hinder systematic human evaluation experiments. We introduce a unified framework for human evaluation of chatbots that augments existing chatbot tools, and provides a web-based hub for researchers to share and compare their dialog systems. Researchers can submit their trained models to the ChatEval web interface and obtain comparisons with baselines and prior work. The evaluation code is open-source to ensure evaluation is performed in a standardized and transparent way. In addition, we introduce open-source baseline models and evaluation datasets. ChatEval can be found at https://chateval.org.
ChatEval: A Tool for the Systematic Evaluation of Chatbots
d13175590
In this document, we outline some elements related to sense variation and to sense delimitation within the perspective of the Generative Lexicon. We then show that, in some cases, the Qualia structure can be combined with or replaced by a small number of rules, which seem to capture more adequately the relationships between the predicator and one of its arguments. In this paper, we contrast a rule-based approach (also used by other authors such as (Copestake and Briscoe 95), (Ostler and Atkins 92), (Nunberg and Zaenen 79), with different perspectives) with the Qualia-based approach and comment on their respective advantages. We show how, in fact, they can cooperate. Another view is that presented in (Jackendoff 97, chapter 2) with his principle of enriched composition, which is in fact quite close to our view, but restricted to a few coercion situations (aspectual, mass-count, picture, begin-enjoy). As will be seen, these systems are not incompatible; they cover different forms of knowledge and may be useful in different situations. The rules we present here are not lexical rules, as in (Copestake and Briscoe 95), but they are part of the semantic composition system.
Sense Variation and Lexical Semantics Generative Operations
d1174710
In this paper we tackle the problem of automatically annotating, with both word senses and named entities, the MASC 3.0 corpus, a large English corpus covering a wide range of genres of written and spoken text. We use BabelNet 2.0, a multilingual semantic network which integrates both lexicographic and encyclopedic knowledge, as our sense/entity inventory together with its semantic structure, to perform the aforementioned annotation task. Word sense annotated corpora have been around for more than twenty years, helping the development of Word Sense Disambiguation algorithms by providing both training and testing grounds. More recently Entity Linking has followed the same path, with the creation of huge resources containing annotated named entities. However, to date, there has been no resource that contains both kinds of annotation. In this paper we present an automatic approach for performing this annotation, together with its output on the MASC corpus. We use this corpus because its goal of integrating different types of annotations goes exactly in our same direction. Our overall aim is to stimulate research on the joint exploitation and disambiguation of word senses and named entities. Finally, we estimate the quality of our annotations using both manually-tagged named entities and word senses, obtaining an accuracy of roughly 70% for both named entities and word sense annotations.
Annotating the MASC Corpus with BabelNet
d7287202
This paper describes our phrase-based Statistical Machine Translation (SMT) system for the WMT10 Translation Task. We submitted translations for the German to English and English to German translation tasks. Compared to state-of-the-art phrase-based systems we performed additional preprocessing and used a discriminative word alignment approach. The word reordering was modeled using POS information and we extended the translation model with additional features.
The Karlsruhe Institute for Technology Translation System for the ACL-WMT 2010
d15599540
The logistics of collecting resources for Machine Translation (MT) has always been a cause of concern for some of the resource-deprived languages of the world. The recent advent of crowdsourcing platforms provides an opportunity to explore the large-scale generation of resources for MT. However, before venturing into this mode of resource collection, it is important to understand the various factors such as task design, crowd motivation, quality control, etc. which can influence the success of such a crowdsourcing venture. In this paper, we present our experiences based on a series of experiments we performed. This is an attempt to provide a holistic view of the different facets of translation crowdsourcing and to identify key challenges which need to be addressed for building a practical crowdsourcing solution for MT.
Experiences in Resource Generation for Machine Translation through Crowdsourcing
d1897225
The paper reports on the development methodology of a system aimed at multi-domain multi-lingual recognition and classification of names in texts, the focus being on the linguistic resources used for training and testing purposes. The corpus presented here has been collected and annotated in the framework of different projects, the critical issue being the development of a final resource that is homogeneous, re-usable and adaptable to different domains and languages with a view to robust multi-domain and multi-lingual NERC.
Multi-domain Multi-lingual Named Entity Recognition: Revisiting & Grounding the resources issue
d16796502
A novel method is presented for compiling two-level rules which have multiple context parts. The same method can also be applied to the resolution of so-called right-arrow rule conflicts. The method makes use of the fact that one can efficiently compose sets of two-level rules with a lexicon transducer. By introducing variant characters and using simple pre-processing of multi-context rules, all rules can be reduced into single-context rules. After the modified rules have been combined with the lexicon transducer, the variant characters may be reverted back to the original surface characters. The proposed method appears to be efficient, but only partial evidence is presented so far.
A Method for Compiling Two-level Rules with Multiple Contexts
d3596773
We propose a novel method to construct semantic orientation lexicons using large data and a thesaurus. To deal with large data, we use a Count-Min sketch to store the approximate counts of all word pairs in a bounded space of 8GB. We use a thesaurus (like Roget) to constrain near-synonymous words to have the same polarity. This framework can easily scale to any language with a thesaurus and an unzipped corpus of size ≥ 50 GB (12 billion tokens). We evaluate these lexicons intrinsically and extrinsically, and they perform comparably to other existing lexicons.
Generating Semantic Orientation Lexicon using Large Data and Thesaurus
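The bounded-memory counting mentioned above relies on a Count-Min sketch. The toy Python version below illustrates the data structure; the width, depth, and hash function are illustrative choices, not the configuration used to fit the 8GB budget in the paper.

```python
import hashlib

class CountMinSketch:
    """Approximate counts for a stream of items (e.g. word pairs) in fixed memory."""

    def __init__(self, width=2**20, depth=4):
        self.width, self.depth = width, depth
        self.table = [[0] * width for _ in range(depth)]

    def _cells(self, key):
        for row in range(self.depth):
            digest = hashlib.md5(f"{row}:{key}".encode()).hexdigest()
            yield row, int(digest, 16) % self.width

    def add(self, key, count=1):
        for row, col in self._cells(key):
            self.table[row][col] += count

    def query(self, key):
        # the minimum over rows bounds the true count from above
        return min(self.table[row][col] for row, col in self._cells(key))

# usage: counting a co-occurring word pair from a corpus stream
cms = CountMinSketch()
cms.add(("good", "excellent"))
print(cms.query(("good", "excellent")))  # 1 (possibly overestimated on real data)
```

Because hash collisions only inflate counts, the structure trades a small, one-sided error for the ability to keep pair counts from an arbitrarily large corpus in a fixed amount of memory.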
d18074028
This paper describes a heuristic approach capable of automatically clustering senses in a machine-readable dictionary (MRD). Including these clusters in the MRD-based lexical database offers several positive benefits for word sense disambiguation (WSD). First, the clusters can be used as a coarser sense division, so unnecessarily fine sense distinction can be avoided. The clustered entries in the MRD can also be used as materials for supervised training to develop a WSD system. Furthermore, if the algorithm is run on several MRDs, the clusters also provide a means of linking different senses across multiple MRDs to create an integrated lexical database. An implementation of the method for clustering definition sentences in the Longman Dictionary of Contemporary English (LDOCE) is described. To this end, the topical word lists and topical cross-references in the Longman Lexicon of Contemporary English (LLOCE) are used. Nearly half of the senses in the LDOCE can be linked precisely to a relevant LLOCE topic using a simple heuristic. With the definitions of senses linked to the same topic viewed as a document, topical clustering of the MRD senses bears a striking resemblance to retrieval of relevant documents for a given query in information retrieval (IR) research. Relatively well-established IR techniques of weighting terms and ranking document relevancy are applied to find the topical clusters that are most relevant to the definition of each MRD sense. Finally, we describe an implemented version of the algorithms for the LDOCE and the LLOCE and assess the performance of the proposed approach in a series of experiments and evaluations.
Topical Clustering of MRD Senses Based on Information Retrieval Techniques
d42978253
This paper focuses on the linguistic analysis of economic information in French and English documents. Our objective is to establish domain-specific information schemes based on structural and conceptual information. At the structural level, we define linguistic triggers that take into account each language's specificity. At the conceptual level, analysis of concepts and relations between concepts result in a classification, prior to the representation of schemes. The final outcome of this study is a mapping between linguistic and conceptual structures in the field of economics.
Extraction of Concepts and Multilingual Information Schemes from French and English Economics Documents
d5349301
In this paper a discussion on multimodal referent resolution is presented. The discussion is centered on the analysis of how the referent of an expression in one modality can be found whenever the contextual information required for carrying out such an inference is expressed in one or more different modalities. In particular, a model for identifying the referent of a graphical expression when the relevant contextual information is expressed through natural language is presented. The model is also applied to the reciprocal problem of identifying the referent of a linguistic expression whenever a graphical context is given. In Section 1 of this paper the notion of modality in terms of which the theory is developed is presented. The discussion is motivated with a case study in multimodal reference resolution. In Section 2 a theory for multimodal representation along the lines of Montague's semiotic programme is presented. In Section 3, an incremental model for multimodal reference resolution is illustrated. In Section 4 a brief discussion of how the theory could be extended to handle multimodal discourse is advanced. Finally, in the conclusion of the paper, a reflection on the relation between spatial deixis and anaphora is advanced.
A Model for Multimodal Reference Resolution
d210722533
d16087823
We present a new multi-layered annotation scheme for orthographic errors in freely written German texts produced by primary school children. The scheme is closely linked to the German graphematic system and defines categories for both general structural word properties and error-related properties. Furthermore, it features multiple layers of information which can be used to evaluate an error. The categories can also be used to investigate properties of correctly-spelled words, and to compare them to the erroneous spellings. For data representation, we propose the XML format LearnerXML.
Annotating Spelling Errors in German Texts Produced by Primary School Children
d10412283
The lack of a sufficient amount of data tailored for a task is a well-recognized problem for many statistical NLP methods. In this paper, we explore whether data sparsity can be successfully tackled when classifying language proficiency levels in the domain of learner-written output texts. We aim at overcoming data sparsity by incorporating knowledge in the trained model from another domain consisting of input texts written by teaching professionals for learners. We compare different domain adaptation techniques and find that a weighted combination of the two types of data performs best, which can even rival systems based on considerably larger amounts of in-domain data. Moreover, we show that normalizing errors in learners' texts can substantially improve classification when in-domain data with annotated proficiency levels is not available.
Predicting proficiency levels in learner writings by transferring a linguistic complexity model from expert-written coursebooks
d13245940
We describe three analyses on the effects of spontaneous speech on continuous speech recognition performance. We have found that: (1) spontaneous speech effects significantly degrade recognition performance, (2) fluent spontaneous speech yields word accuracies equivalent to read speech, and (3) using spontaneous speech training data can significantly improve performance for recognizing spontaneous speech. We conclude that word accuracy can be improved by explicitly modeling spontaneous effects in the recognizer, and by using as much spontaneous speech training data as possible. Inclusion of read speech training data, even within the task domain, does not significantly improve performance.
Spontaneous Speech Effects In Large Vocabulary Speech Recognition Applications
d38406635
A note on the definition of semantic annotation languages
d437644
This article introduces a new speech corpus, the Nijmegen Corpus of Casual Czech (NCCCz), which contains more than 30 hours of high-quality recordings of casual conversations in Common Czech, among ten groups of three male and ten groups of three female friends. All speakers were native speakers of Czech, raised in Prague or in the region of Central Bohemia, and were between 19 and 26 years old. Every group of speakers consisted of one confederate, who was instructed to keep the conversations lively, and two speakers naive to the purposes of the recordings. The naive speakers were engaged in conversations for approximately 90 minutes, while the confederate joined them for approximately the last 72 minutes. The corpus was orthographically annotated by experienced transcribers and this orthographic transcription was aligned with the speech signal. In addition, the conversations were videotaped. This corpus can form the basis for all types of research on casual conversations in Czech, including phonetic research and research on how to improve automatic speech recognition. The corpus will be freely available.
The Nijmegen Corpus of Casual Czech
d10023806
Putting Meaning into Your Trees
d1898978
An attractive feature of the formalism of synchronous tree adjoining grammar (STAG) is its potential to handle linguistic phenomena whose syntactic and semantic derivations seem to diverge. Recent work has aimed at adapting STAG to capture such cases. Anaphors, including both reflexives and reciprocals, have presented a particular challenge due to the locality constraints imposed by the STAG formalism. Previous attempts to model anaphors in STAG have focused specifically on reflexives and have not expanded to incorporate reciprocals. We show how STAG can not only capture the syntactic distribution and semantic representation of both reflexives and reciprocals, but also do so in a unified way.
Reflexives and Reciprocals in Synchronous Tree Adjoining Grammar
d59660043
We describe a parser for lexicalized dependency grammar. The formalism is characterized by a factorization of the syntactic constraints, based on the separation between dependency and word order, the functional (rather than phrasal) specification of dependents, the distinction between valency and non-valency dependents, and the incremental saturation of the trees. These features enable a concise formulation of the grammar at a very abstract level and eliminate syntactic information redundancy due to alternative forms of dependents and word order. Each word form produces one or more elementary dependency trees. Trees, both elementary and derived, are combined by adjoining a saturated dependent to a governing tree, after unification of shared nodes and relations. This is achieved using a bi-directional chart parser. Keywords: parser, dependency.
Factorisation des contraintes syntaxiques dans un analyseur de dépendance
d2097631
Aspect-oriented opinion mining aims to identify product aspects (features of products) about which opinion has been expressed in the text. We present an approach for aspect-oriented opinion mining from user reviews in Croatian. We propose methods for acquiring a domain-specific opinion lexicon, linking opinion clues to product aspects, and predicting polarity and rating of reviews. We show that a supervised approach to linking opinion clues to aspects is feasible, and that the extracted clues and aspects improve polarity and rating predictions.
Aspect-Oriented Opinion Mining from User Reviews in Croatian
d256461454
Out-of-distribution (OOD) settings are used to measure a model's performance when the distribution of the test data is different from that of the training data. NLU models are known to suffer in OOD settings (Utama et al., 2020b). We study this issue from the perspective of causality, which sees confounding bias as the reason for models to learn spurious correlations. While a common solution is to perform intervention, existing methods handle only a single, known confounder (Pearl and Mackenzie, 2018), but in many NLU tasks the confounders can be both unknown and multifactorial. In this paper, we propose a novel interventional training method called Bottom-up Automatic Intervention (BAI) that performs multi-granular intervention with identified multifactorial confounders. Our experiments on three NLU tasks, namely natural language inference, fact verification and paraphrase identification, show the effectiveness of BAI for tackling different OOD settings.
Interventional Training for Out-Of-Distribution Natural Language Understanding
d52012926
Previous work on the problem of Arabic Dialect Identification typically targeted coarse-grained five dialect classes plus Standard Arabic (6-way classification). This paper presents the first results on a fine-grained dialect classification task covering 25 specific cities from across the Arab World, in addition to Standard Arabic, a very challenging task. We build several classification systems and explore a large space of features. Our results show that we can identify the exact city of a speaker at an accuracy of 67.9% for sentences with an average length of 7 words (a 9% relative error reduction over the state-of-the-art technique for Arabic dialect identification) and reach more than 90% when we consider 16 words. We also report on additional insights from a data analysis of similarity and difference across Arabic dialects.
Fine-Grained Arabic Dialect Identification
d227230559
The information shared on social media is increasingly important, both images and text, and perhaps the most popular combination of these two kinds of data is the meme. This manuscript describes our participation in the Memotion task at SemEval 2020. This task is about classifying memes into several categories related to their emotional content. For the construction of the proposed system, we used different strategies, and the best ones were based on deep neural networks and a text categorization algorithm. We obtained results analyzing the text and images separately, and also in combination. Our best performance was achieved in Task A, related to polarity classification.
Infotec + CentroGEO at SemEval-2020 Task 8: Deep Learning and Text Categorization approach for Memes classification
d2060590
The goal of this research is to build a model to predict stock price movement using sentiments on social media. A new feature which captures topics and their sentiments simultaneously is introduced in the prediction model. In addition, a new topic model TSLDA is proposed to obtain this feature. Our method outperformed a model using only historical prices by about 6.07% in accuracy. Furthermore, when comparing to other sentiment analysis methods, the accuracy of our method was also better than LDA and JST based methods by 6.43% and 6.07%. The results show that incorporation of the sentiment information from social media can help to improve the stock prediction.
Topic Modeling based Sentiment Analysis on Social Media for Stock Market Prediction
d234487231
d53244328
In this paper we describe how we mine interactions between a human guide and a human visitor to build a virtual guide. A virtual guide is an agent capable of fulfilling the role of a human guide. Its goal is to guide visitors to each booth of a virtual fair and to provide information about the company or organization through interactive objects located at the fair. The guide decides what to say using a graph search algorithm, and decides how to say it using generation by selection based on contextual features. The guide decides where to speak at the virtual fair by creating clusters using a data classification algorithm to learn in which positions the human guide decided to talk.
Mining human interactions to construct a virtual guide for a virtual fair
d255775288
d11407588
We describe the University of Heidelberg (UHD) system for the Cross-Lingual Word Sense Disambiguation SemEval-2010 task (CL-WSD). The system performs CL-WSD by applying graph algorithms previously developed for monolingual Word Sense Disambiguation to multilingual cooccurrence graphs. UHD has participated in the BEST and out-of-five (OOF) evaluations and ranked among the most competitive systems for this task, thus indicating that graph-based approaches represent a powerful alternative for this task.
UHD: Cross-Lingual Word Sense Disambiguation Using Multilingual Co-occurrence Graphs
d233365348
d7314679
A Noun Phrase Parser of English (Atro Voutilainen, Helsinki)
d13114546
A PRAGMATIC CONCEPT OF THEME AND RHEME FOR MACHINE TRANSLATION
d218974122
d10109001
A multiword is compositional if its meaning can be expressed in terms of the meaning of its constituents. In this paper, we collect and analyse the compositionality judgments for a range of compound nouns using Mechanical Turk. Unlike existing compositionality datasets, our dataset has judgments on the contribution of constituent words as well as judgments for the phrase as a whole. We use this dataset to study the relation between the judgments at constituent level and those for the whole phrase. We then evaluate two different types of distributional models for compositionality detection: constituent-based models and composition-function-based models. Both types of models show competitive performance, though the composition-function-based models perform slightly better. In both types, additive models perform better than their multiplicative counterparts.
An Empirical Study on Compositionality in Compound Nouns
d233189614
Question answering over knowledge bases (KBQA) usually involves three sub-tasks, namely topic entity detection, entity linking and relation detection. Due to the large number of entities and relations inside knowledge bases (KB), previous work usually utilized sophisticated rules to narrow down the search space and managed only a subset of KBs in memory. In this work, we leverage a retrieve-and-rerank framework to access KBs via a traditional information retrieval (IR) method, and re-rank retrieved candidates with more powerful neural networks such as the pre-trained BERT model. Considering the fact that directly assigning a different BERT model for each sub-task may incur prohibitive costs, we propose to share a BERT encoder across all three sub-tasks and define task-specific layers on top of the shared layer. The unified model is then trained under a multi-task learning framework. Experiments show that: (1) our IR-based retrieval method is able to collect high-quality candidates efficiently, thus enabling our method to adapt to large-scale KBs easily; (2) the BERT model improves the accuracy across all three sub-tasks; and (3) benefiting from multi-task learning, the unified model obtains further improvements with only 1/3 of the original parameters. Our final model achieves competitive results on the SimpleQuestions dataset and superior performance on the FreebaseQA dataset.
Retrieval, Re-ranking and Multi-task Learning for Knowledge-Base Question Answering
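The shared-encoder idea can be sketched as one encoder feeding three task-specific heads. The PyTorch fragment below is schematic: the encoder argument is assumed to be any module returning per-token hidden states (for instance a wrapper around a pretrained BERT), and the head shapes are illustrative rather than the paper's exact layers.

```python
import torch.nn as nn

class SharedEncoderKBQA(nn.Module):
    """One encoder shared across topic entity detection, entity linking,
    and relation detection, with small task-specific layers on top."""

    def __init__(self, encoder, hidden_size, num_relations):
        super().__init__()
        self.encoder = encoder                               # assumed: returns (batch, seq, hidden)
        self.entity_head = nn.Linear(hidden_size, 2)         # per-token span tagging
        self.linking_head = nn.Linear(hidden_size, 1)        # candidate ranking score
        self.relation_head = nn.Linear(hidden_size, num_relations)

    def forward(self, token_ids, attention_mask, task):
        hidden = self.encoder(token_ids, attention_mask)     # shared representation
        pooled = hidden[:, 0]                                 # [CLS]-style pooling
        if task == "entity_detection":
            return self.entity_head(hidden)
        if task == "entity_linking":
            return self.linking_head(pooled)
        if task == "relation_detection":
            return self.relation_head(pooled)
        raise ValueError(task)
```

Multi-task training would alternate batches across the three tasks so that only the thin heads are task-specific, which is consistent with the reported parameter reduction to roughly one third.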
d10170777
In this paper, we describe the design and deployment of KANT Controlled English (KCE) for knowledge-based machine translation in the KANT system. KCE combines three kinds of constraints: constraints on the lexicon; constraints on the complexity of sentences; and the use of generalized markup language. We describe how each of these types of language control are utilized in the implementation of a typical KANT application. The principles described are not specific to knowledge-based MT, and can be applied in the design of controlled languages for any kind of MT application.
Controlled English for Knowledge-Based MT: Experience with the KANT System
d233189653
Previous research on target-dependent sentiment classification (TSC) has mostly focused on reviews, social media, and other domains where authors tend to express sentiment explicitly. In this paper, we investigate TSC in news articles, a much less researched TSC domain despite the importance of news as an essential information source in individual and societal decision making. We introduce NewsMTSC, a high-quality dataset for TSC on news articles with key differences compared to established TSC datasets, including, for example, different means to express sentiment, longer texts, and a second test set to measure the influence of multi-target sentences. We also propose a model that uses a BiGRU to interact with multiple embeddings, e.g., from a language model and external knowledge sources. The proposed model improves the performance of the prior state-of-the-art from F1m = 81.7 to 83.1 (real-world sentiment distribution) and from F1m = 81.2 to 82.5 (multi-target sentences).
NewsMTSC: A Dataset for (Multi-)Target-dependent Sentiment Classification in Political News Articles
d15411367
Spoken language translation is a challenging new application that differs from written language translation in several ways; for instance, 1) human intervention (pre-edit or post-edit) should be avoided, and 2) a real-time response is desirable for success. Example-based approaches meet these requirements, that is, they realize accurate structural disambiguation and target word selection, and respond quickly. This paper concentrates on structural disambiguation, particularly English prepositional phrase attachment (pp-attachment). Usually, a pp-attachment is hard to determine by syntactic analysis alone and many candidates remain. In machine translation, if a pp-attachment is not likely, the translation of the preposition, and indeed the whole translation, is not likely. In order to select the most likely attachment from many candidates, various methods have been proposed. This paper proposes a new method, Example-Based Disambiguation (EBD) of pp-attachment, which 1) collects examples (prepositional phrase-attachment pairs) from a corpus; 2) computes the semantic distance between an input expression and the examples; 3) selects the most likely attachment based on the minimum-distance examples. Through experiments contrasting EBD and conventional methods, the authors show EBD's superiority from the standpoint of success rates. Prospects for and related research on EBD: EBD, which is based on semantic distance and frequency, is not a specialized method for English pp-attachment; it is also effective for other structural ambiguities caused by, for instance, infinitives, relative pronouns and subordinate conjunctions, and the authors have already implemented structural disambiguation based on semantic distance in TDMT. Additional notes: the causes of failures are intricate, and countermeasures are determined under the principle that any change in the thesaurus or the semantic distance definition is to be avoided; in general, the smaller the distance and the more examples available, the better the quality; for ambiguous scope caused by coordinate conjunctions, it is necessary to integrate EBD with a parsing method like the one proposed by Kurohashi, which is based on dynamic programming; and in TDMT, tiebreaking has not been implemented.
An Example-Based Disambiguation of Prepositional Phrase Attachment
d260063139
Consumers of services and products exhibit a wide range of behaviors on social networks when they are dissatisfied. In this paper, we consider three types of cynical expressions (negative feelings, specific reasons, and an attitude of being right) and annotate a corpus of 3189 comments in Spanish on car analysis channels from YouTube. We evaluate both token classification and text classification settings for this problem, and compare the performance of different pre-trained models including BETO, SpanBERTa, Multilingual BERT, and RoBERTuito. The results show that models achieve performance above 0.8 F1 for all types of cynical expressions in the text classification setting, but achieve lower performance (around 0.6-0.7 F1) for the harder token classification setting.
Transformer-based cynical expression detection in a corpus of Spanish YouTube reviews
d227231218
Product matching, i.e., being able to infer the product being sold for a merchant-created offer, is crucial for any e-commerce marketplace, enabling product-based navigation, price comparisons, product reviews, etc. This problem proves a challenging task, mostly due to the extent of product catalogs, data heterogeneity, missing product representatives, and varying levels of data quality. Moreover, new products are being introduced every day, making it difficult to cast the problem as a classification task. In this work, we apply BERT-based models in a similarity learning setup to solve the product matching problem. We provide a thorough ablation study, showing the impact of architecture and training objective choices. Application of transformer-based architectures and proper sampling techniques significantly boosts performance for a range of e-commerce domains, allowing for production deployment.
BERT-based similarity learning for product matching
d931147
To cluster textual sequence types (discourse types/modes) in French texts, a K-means algorithm with high-dimensional embeddings and a fuzzy clustering algorithm were applied to clauses whose POS (part-of-speech) n-gram profiles were previously extracted. Uni-, bi- and trigrams were used on four 19th-century French short stories by Maupassant. For high-dimensional embeddings, power transformations on the chi-squared distances between clauses were explored. Preliminary results show that high-dimensional embeddings improve the quality of clustering, in contrast to the use of bi- and trigrams, whose performance is disappointing, possibly because of feature space sparsity.
Discourse Type Clustering using POS n-gram Profiles and High-Dimensional Embeddings
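A compact way to picture the clustering pipeline is: build POS n-gram frequency profiles per clause, compute chi-squared distances, apply a power transformation, and run K-means. The sketch below treats each clause's vector of transformed distances to all clauses as its high-dimensional embedding; this reading of the abstract is an assumption, so the code is illustrative only.

```python
import numpy as np
from sklearn.cluster import KMeans

def chi2_distances(profiles):
    """Chi-squared distances between clause POS n-gram profiles
    (rows = clauses, columns = n-gram counts)."""
    f = profiles / (profiles.sum(axis=1, keepdims=True) + 1e-12)
    weights = 1.0 / (f.mean(axis=0) + 1e-12)
    n = f.shape[0]
    d = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            d[i, j] = np.sqrt(np.sum(weights * (f[i] - f[j]) ** 2))
    return d

def cluster_clauses(profiles, n_clusters=4, power=0.5):
    # power transformation of the chi-squared distances before clustering
    embedded = chi2_distances(np.asarray(profiles, float)) ** power
    return KMeans(n_clusters=n_clusters, n_init=10).fit_predict(embedded)
```

The power parameter is the knob the abstract refers to when exploring power transformations; exponents below 1 compress large distances and so reduce the influence of outlier clauses.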
d233365099
In this paper, we present the design of a tool for the visualisation of linguistic complexity in second language (L2) learner writings. We show how metrics can be exploited to visualise complexity in L2 writings in relation to CEFR levels.
Towards a Data Analytics Pipeline for the Visualisation of Complexity Metrics in L2 writings
d208134583
We investigate the impact of search strategies in neural dialogue modeling. We first compare two standard search algorithms, greedy and beam search, as well as our newly proposed iterative beam search which produces a more diverse set of candidate responses. We evaluate these strategies in realistic full conversations with humans and propose a model-based Bayesian calibration to address annotator bias. These conversations are analyzed using two automatic metrics: log-probabilities assigned by the model and utterance diversity. Our experiments reveal that better search algorithms lead to higher rated conversations. However, finding the optimal selection mechanism to choose from a more diverse set of candidates is still an open question.
Importance of Search and Evaluation Strategies in Neural Dialogue Modeling
d173611
We present a classifier-based parser that produces constituent trees in linear time. The parser uses a basic bottom-up shift-reduce algorithm, but employs a classifier to determine parser actions instead of a grammar. This can be seen as an extension of the deterministic dependency parser of Nivre and Scholz (2004) to full constituent parsing. We show that, with an appropriate feature set used in classification, a very simple one-path greedy parser can perform at the same level of accuracy as more complex parsers. We evaluate our parser on section 23 of the WSJ section of the Penn Treebank, and obtain precision and recall of 87.54% and 87.61%, respectively.
A Classifier-Based Parser with Linear Run-Time Complexity
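The greedy, classifier-driven control loop the abstract describes can be reduced to a few lines. In the sketch below, predict_action stands in for the trained classifier, and the tree representation (nested tuples, binary reductions only) is a deliberate simplification of a real constituent parser.

```python
def shift_reduce_parse(words, pos_tags, predict_action):
    """One-path greedy shift-reduce parsing driven by a classifier.

    predict_action(stack, queue) -> ("shift", None) or ("reduce", label)
    is assumed to be a trained model over features of the parser state."""
    stack, queue = [], list(zip(words, pos_tags))
    while queue or len(stack) > 1:
        action, label = predict_action(stack, queue)
        if action == "shift" and queue:
            stack.append(queue.pop(0))
        elif action == "reduce" and len(stack) >= 2:
            right, left = stack.pop(), stack.pop()
            stack.append((label, left, right))   # build a binary constituent
        else:
            break                                # classifier chose an impossible action
    return stack[0] if stack else None
```

Because each word is shifted once and each reduction consumes one stack item, the number of actions is linear in sentence length, which is where the linear run-time claim comes from.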
d13639078
Natural language questions have become popular in web search. However, various questions can be formulated to convey the same information need, which poses a great challenge to search systems. In this paper, we automatically mined 5w1h question reformulation patterns from large scale search log data. The question reformulations generated from these patterns are further incorporated into the retrieval model. Experiments show that using question reformulation patterns can significantly improve the search performance of natural language questions.
Automatically Mining Question Reformulation Patterns from Search Log Data
d44171985
Search task extraction in information retrieval is the process of identifying search intents over a set of queries relating to the same topical information need. Search tasks may potentially span across multiple search sessions. Most existing research on search task extraction has focused on identifying tasks within a single session, where the notion of a session is defined by a fixed-length time window. By contrast, in this work we seek to identify tasks that span across multiple sessions. To identify tasks, we conduct a global analysis of a query log in its entirety without restricting analysis to individual temporal windows. To capture inherent task semantics, we represent queries as vectors in an abstract space. We learn the embedding of query words in this space by leveraging the temporal and lexical contexts of queries. To evaluate the effectiveness of the proposed query embedding, we conduct experiments of clustering queries into tasks with a particular interest in measuring the cross-session search task recall. Results of our experiments demonstrate that task extraction effectiveness, including cross-session recall, is improved significantly with the help of our proposed method of embedding the query terms by leveraging the temporal and lexical contexts of queries.
Tempo-Lexical Context driven Word Embedding for Cross-Session Search Task Extraction
d40919282
Charles J. Fillmore died of brain cancer. He was 84 years old. Fillmore was one of the world's pre-eminent scholars of lexical meaning and its relationship with context, grammar, corpora, and computation, and his work had an enormous impact on computational linguistics. His early theoretical work in the 1960s, 1970s, and 1980s on case grammar and then frame semantics significantly influenced computational linguistics, AI, and knowledge representation. More recent work in the last two decades on FrameNet, a computational lexicon and annotated corpus, influenced corpus linguistics and computational lexicography, and led to modern natural language understanding tasks like semantic role labeling. Fillmore was born and raised in St. Paul, Minnesota, and studied linguistics at the University of Minnesota. As an undergraduate he worked on a pre-computational Latin corpus linguistics project, alphabetizing index cards and building concordances. During his service in the Army in the early 1950s he was stationed for three years in Japan. After his service he became the first US soldier to be discharged locally in Japan, and stayed for three years studying Japanese. He supported himself by teaching English, pioneering a way to make ends meet that afterwards became popular with generations of young Americans abroad. In 1957 he moved back to the United States to attend graduate school at the University of Michigan. At Michigan, Fillmore worked on phonetics, phonology, and syntax, first in the American Structuralist tradition of developing what were called "discovery procedures" for linguistic analysis, algorithms for inducing phones or parts of speech. Discovery procedures were thought of as a methodological tool, a formal procedure that linguists could apply to data to discover linguistic structure, for example inducing parts of speech from the slots in "sentence frames" informed by the distribution of surrounding words. Like many linguistic graduate students of the period, he also worked partly on machine translation, and was interviewed at the time by Yehoshua Bar-Hillel, who was touring US machine translation laboratories in preparation for his famous report on the state of MT (Bar-Hillel 1960). Early in his graduate career, however, Fillmore read Noam Chomsky's Syntactic Structures and became an immediate proponent of the new transformational grammar. He graduated with his PhD in 1962 and moved to the linguistics department at Ohio State University. In his early work there Fillmore developed a number of early formal properties of generative grammar, such as the idea that rules would re-apply to representations in iterative stages called cycles (Fillmore 1963), a formal mechanism that still plays a role in modern theories of generative grammar.
Obituary
d11693295
This paper describes our system in the Bake-Off 2013 task of SIGHAN 7. We illustrate that Chinese spell checking and correction can be efficiently tackled by utilizing a word segmenter. A graph model is used to represent the sentence and a single-source shortest path (SSSP) algorithm is performed on the graph to correct spell errors. Our system achieves 4 first ranks out of 10 metrics on the standard test set.
Graph Model for Chinese Spell Checking *
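The graph decoding step can be sketched as a single-source shortest path over a lattice of candidate corrections. In the code below, candidates and edge_cost are assumed to come from a word segmenter, a confusion set, and a scoring model; the cost design is illustrative, not the system's actual weights.

```python
import heapq

def correct_sentence(chars, candidates, edge_cost):
    """Pick the lowest-cost path through a correction lattice.

    candidates(i): yields (word, span_length) options starting at position i
    edge_cost(word): assumed cost from a language/segmentation model."""
    n = len(chars)
    dist, back = {0: 0.0}, {}
    heap = [(0.0, 0)]
    while heap:
        d, i = heapq.heappop(heap)
        if i == n:
            break
        if d > dist.get(i, float("inf")):
            continue
        for word, span in candidates(i):
            j, nd = i + span, d + edge_cost(word)
            if nd < dist.get(j, float("inf")):
                dist[j], back[j] = nd, (i, word)
                heapq.heappush(heap, (nd, j))
    words, i = [], n
    while i > 0:                      # follow back-pointers to recover the path
        i, word = back[i]
        words.append(word)
    return list(reversed(words))
```

A spelling error is corrected whenever the cheapest path prefers a substituted word over the literal characters at that position.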
d226283960
Sarcasm detection in social media with text and image is becoming more challenging. Previous work on image-text sarcasm detection mainly fused the summaries of text and image: different sub-models read the text and image respectively to obtain summaries, which are then fused. Recently, some multimodal models based on the architecture of BERT have been proposed, such as ViLBERT. However, they can only be pretrained on image-text data. In this paper, we propose an image-text model for sarcasm detection using the pretrained BERT and ResNet without any further pretraining. BERT and ResNet have been pretrained on much larger text or image data than image-text data. We connect the vector spaces of BERT and ResNet to utilize more data. We use the pretrained Multi-Head Attention of BERT to model the text and image. Besides, we propose a 2D-Intra-Attention to extract the relationships between words and images. In experiments, our model outperforms the state-of-the-art model.
Building a Bridge: A Method for Image-Text Sarcasm Detection Without Pretraining on Image-Text Data
d7523960
Paradigms provide an inherent organizational structure to natural language morphology. ParaMor, our minimally supervised morphology induction algorithm, retrusses the word forms of raw text corpora back onto their paradigmatic skeletons, performing on par with state-of-the-art minimally supervised morphology induction algorithms at morphological analysis of English and German. ParaMor consists of two phases. Our algorithm first constructs sets of affixes closely mimicking the paradigms of a language. And with these structures in hand, ParaMor then annotates word forms with morpheme boundaries. To set ParaMor's few free parameters we analyze a training corpus of Spanish. Without adjusting parameters, we induce the morphological structure of English and German. Adopting the evaluation methodology of Morpho Challenge 2007 (Kurimo et al., 2007), we compare ParaMor's morphological analyses with Morfessor (Creutz, 2006), a modern minimally supervised morphology induction system. ParaMor consistently achieves competitive F1 measures.
ParaMor: Minimally Supervised Induction of Paradigm Structure and Morphological Analysis
d222225275
News recommendation aims to display news articles to users based on their personal interest. Existing news recommendation methods rely on centralized storage of user behavior data for model training, which may lead to privacy concerns and risks due to the privacy-sensitive nature of user behaviors. In this paper, we propose a privacy-preserving method for news recommendation model training based on federated learning, where the user behavior data is locally stored on user devices. Our method can leverage the useful information in the behaviors of a massive number of users to train accurate news recommendation models and meanwhile removes the need for centralized storage of these behaviors. More specifically, on each user device we keep a local copy of the news recommendation model, and compute gradients of the local model based on the user behaviors in this device. The local gradients from a group of randomly selected users are uploaded to the server, where they are aggregated to update the global model. Since the model gradients may contain some implicit private information, we apply local differential privacy (LDP) to them before uploading for better privacy protection. The updated global model is then distributed to each user device for local model update. We repeat this process for multiple rounds. Extensive experiments on a real-world dataset show the effectiveness of our method in news recommendation model training with privacy protection.
Privacy-Preserving News Recommendation Model Learning
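One round of the protocol described above can be written out in a few lines. The clipping threshold, the noise distribution, and the flat-list parameter representation below are toy assumptions for illustration; the paper's actual LDP mechanism and model are more involved.

```python
import random

def local_update(params, user_data, compute_grad, clip=0.1, noise_scale=0.01):
    """On-device step: compute gradients on local behaviors, clip them, and add
    noise before upload (stand-in for the LDP perturbation)."""
    noisy = []
    for g in compute_grad(params, user_data):
        g = max(-clip, min(clip, g))              # bound each coordinate
        noisy.append(g + random.gauss(0.0, noise_scale))
    return noisy

def federated_round(params, selected_users, compute_grad, lr=0.1):
    """Server step: average the uploaded gradients of a random user group
    and update the global model, which is then redistributed."""
    agg = [0.0] * len(params)
    for user_data in selected_users:
        for k, g in enumerate(local_update(params, user_data, compute_grad)):
            agg[k] += g / len(selected_users)
    return [p - lr * g for p, g in zip(params, agg)]
```

Raw behavior data never leaves the device in this loop; only clipped, noised gradients are shared, which is the privacy argument the abstract makes.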
d221097247
d7165936
This article reports the results of a preliminary analysis of translation equivalents in four languages from different language families, extracted from an on-line parallel corpus of George Orwell's Nineteen Eighty-Four. The goal of the study is to determine the degree to which translation equivalents for different meanings of a polysemous word in English are lexicalized differently across a variety of languages, and to determine whether this information can be used to structure or create a set of sense distinctions useful in natural language processing applications. A coherence index is computed that measures the tendency for different senses of the same English word to be lexicalized differently, and from this data a clustering algorithm is used to create sense hierarchies.
Parallel Translations as Sense Discriminators
d969555
The problem of rare and unknown words is an important issue that can potentially affect the performance of many NLP systems, including both traditional count-based and deep learning models. We propose a novel way to deal with rare and unseen words in neural network models with attention. Our model uses two softmax layers in order to predict the next word in conditional language models: one of the softmax layers predicts the location of a word in the source sentence, and the other softmax layer predicts a word in the shortlist vocabulary. The decision of which softmax layer to use at each timestep is adaptively made by an MLP which is conditioned on the context. We motivate this work from the psychological evidence that humans naturally have a tendency to point towards objects in the context or the environment when the name of an object is not known. Using our proposed model, we observe improvements in two tasks, neural machine translation on the Europarl English to French parallel corpora and text summarization on the Gigaword dataset.
Pointing the Unknown Words
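The two-softmax architecture can be sketched as a small decoder head: one softmax over source positions (pointing), one over the shortlist vocabulary, and an MLP switch deciding which to trust. Tensor shapes and the way the two distributions are combined at training time are simplified assumptions in the PyTorch fragment below.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PointerSoftmaxHead(nn.Module):
    """Location softmax + shortlist softmax with a context-conditioned switch."""

    def __init__(self, hidden_size, shortlist_size):
        super().__init__()
        self.vocab_proj = nn.Linear(hidden_size, shortlist_size)
        self.switch = nn.Sequential(nn.Linear(hidden_size, hidden_size),
                                    nn.Tanh(),
                                    nn.Linear(hidden_size, 1))

    def forward(self, decoder_state, attention_scores):
        # attention_scores: (batch, src_len) unnormalized scores over source tokens
        p_point = F.softmax(attention_scores, dim=-1)                # where to copy from
        p_vocab = F.softmax(self.vocab_proj(decoder_state), dim=-1)  # what to generate
        z = torch.sigmoid(self.switch(decoder_state))                # probability of pointing
        return p_point, p_vocab, z
```

At training time the loss would mix the two paths with z, so the model learns to point when the target word falls outside the shortlist, mirroring the behavior the abstract motivates psychologically.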
d236460209
d40026059
We have built a simple corpus-based system to estimate words similarity in multiple languages with a count-based approach. After training on Wikipedia corpora, our system was evaluated on the multilingual subtask of SemEval-2017 Task 2 and achieved a good level of performance, despite its great simplicity. Our results tend to demonstrate the power of the distributional approach in semantic similarity tasks, even without knowledge of the underlying language. We also show that dimensionality reduction has a considerable impact on the results.
Jmp8 at SemEval-2017 Task 2: A simple and general distributional approach to estimate word similarity
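The count-based recipe the abstract relies on is short enough to spell out: build a word-by-context co-occurrence matrix, reduce its dimensionality, and compare words by cosine similarity. The sketch below omits any weighting scheme and picks an arbitrary number of dimensions, so it illustrates the general approach rather than the submitted system.

```python
import numpy as np

def similarity_from_counts(cooc, vocab_index, w1, w2, dims=300):
    """Cosine similarity between two words after truncated-SVD reduction
    of a raw co-occurrence matrix (weighting such as PPMI is omitted here)."""
    u, s, _ = np.linalg.svd(cooc, full_matrices=False)
    emb = u[:, :dims] * s[:dims]                  # reduced word vectors
    a, b = emb[vocab_index[w1]], emb[vocab_index[w2]]
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
```

The abstract's observation that dimensionality reduction has a considerable impact maps onto the choice of dims here: too few dimensions discard useful distinctions, while too many keep noise from rare contexts.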
d15744744
In this paper, we address the problem of extracting and integrating bilingual terminology into a Statistical Machine Translation (SMT) system for a Computer Aided Translation (CAT) tool scenario. We develop a framework that, taking as input a small amount of parallel in-domain data, gathers domain-specific bilingual terms and injects them in an SMT system to enhance the translation productivity. Therefore, we investigate several strategies to extract and align bilingual terminology, and to embed it into the SMT. We compare two embedding methods that can be easily used at run-time without altering the normal activity of an SMT system: XML markup and the cache-based model. We tested our framework on two different domains showing improvements up to 15% BLEU score points.
Enhancing Statistical Machine Translation with Bilingual Terminology in a CAT Environment
d252403111
We present Barch, a new English dataset of human-written summaries describing bar charts. This dataset contains 47 charts based on a selection of 18 topics. Each chart is associated with one of four intended messages expressed in the chart title. Using crowdsourcing, we collected around 20 summaries per chart, or one thousand in total. The text of the summaries is aligned with the chart data as well as with analytical inferences about the data drawn by humans. Our dataset is one of the first to explore the effect of intended messages on the data descriptions in chart summaries. Additionally, it lends itself well to the task of training data-driven systems for chart-to-text generation. We provide results on the performance of state-of-the-art neural generation models trained on this dataset and discuss the strengths and shortcomings of different models.
Barch: an English Dataset of Bar Chart Summaries
d60411758
In this article we address the task of automatically structuring text into linear, non-overlapping thematic episodes. Our investigation reports on the use of various lexical, acoustic and syntactic features, and compares how these features influence the performance of automatic topic segmentation. Using datasets containing multi-party meeting transcriptions, we base our experiments on a proven state-of-the-art approach using support vector classification. Keywords: automatic segmentation into thematic episodes, support vector machines, multi-party dialogues.
Exploiting structural meeting-specific features for topic segmentation
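The support-vector setup the entry above relies on can be pictured as a binary classifier over candidate boundaries between utterances. The features and labels below are invented for illustration; the actual system uses much richer lexical, acoustic and syntactic features.

```python
import numpy as np
from sklearn.svm import SVC

# Toy feature vectors for candidate boundaries between utterances:
# [lexical cohesion with previous utterance, pause length (s), speaker change]
X = np.array([
    [0.9, 0.2, 0],
    [0.8, 0.1, 0],
    [0.2, 1.5, 1],
    [0.3, 2.0, 1],
    [0.7, 0.3, 1],
    [0.1, 1.8, 0],
])
y = np.array([0, 0, 1, 1, 0, 1])   # 1 = thematic episode boundary

clf = SVC(kernel="rbf", C=1.0).fit(X, y)
print(clf.predict([[0.25, 1.7, 1], [0.85, 0.2, 0]]))
```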
d1017034
We describe recent work on MedSLT, a medium-vocabulary interlingua-based medical speech translation system, focussing on issues that arise when handling languages of which the grammar engineer has little or no knowledge. We describe how we can systematically create and maintain multiple forms of grammars, lexica and interlingual representations, with some versions being used by language informants, and some by grammar engineers. In particular, we describe the advantages of structuring the interlingua definition as a simple semantic grammar, which includes a human-readable surface form. We show how this allows us to rationalise the process of evaluating translations between languages lacking common speakers. The grammar-based interlingua definition can also be used in other ways. We describe two applications: a simple generic tool for debugging to-interlingua translation rules, and a method for improving speech understanding performance by rescoring N-best speech hypothesis lists. Examples presented focus on the concrete case of translation between Japanese and Arabic in both directions.
Developing Non-European Translation Pairs in a Medium-Vocabulary Medical Speech Translation System
d2161699
We present a Variational-Bayes model for learning rules for the hierarchical phrase-based model directly from the phrasal alignments. Our model is an alternative to heuristic rule extraction in hierarchical phrase-based translation (Chiang, 2007), which uniformly distributes the probability mass to the extracted rules locally. In contrast, in our approach the probability assigned to a rule is globally determined by its contribution towards all phrase pairs, resulting in a sparser rule set. We also propose a distributed framework for efficiently running inference on realistic MT corpora. Our experiments translating Korean, Arabic and Chinese into English demonstrate that our models are able to exceed or retain the performance of baseline hierarchical phrase-based models.
Scalable Variational Inference for Extracting Hierarchical Phrase-based Translation Rules
d208120872
Prior work on temporal relation classification has focused extensively on event pairs in the same or adjacent sentences (local), paying scant attention to discourse-level (global) pairs. This restricts the ability of systems to learn temporal links between global pairs, since reliance on local syntactic features suffices to achieve reasonable performance on existing datasets. However, systems should be capable of incorporating cues from document-level structure to assign temporal relations. In this work, we take a first step towards discourse-level temporal ordering by creating TDDiscourse, the first dataset focusing specifically on temporal links between event pairs which are more than one sentence apart. We create TDDiscourse by augmenting TimeBank-Dense, a corpus of English news articles, manually annotating global pairs that cannot be inferred automatically from existing annotations. Our annotations double the number of temporal links in TimeBank-Dense, while possessing several desirable properties such as focusing on long-distance pairs and not being automatically inferable. We adapt and benchmark the performance of three state-of-the-art models on TDDiscourse and observe that existing systems indeed find discourse-level temporal ordering harder.
TDDiscourse: A Dataset for Discourse-Level Temporal Ordering of Events
d75539
This paper addresses syntax-based paraphrasing methods for Recognizing Textual Entailment (RTE). In particular, we describe a dependency-based paraphrasing algorithm, using the DIRT data set, and its application in the context of a straightforward RTE system based on aligning dependency trees. We find a small positive effect of dependency-based paraphrasing on both the RTE3 development and test sets, but the added value of this type of paraphrasing deserves further analysis.
Dependency-based paraphrasing for recognizing textual entailment
d1471171
An algorithm based on the Generalized Hebbian Algorithm is described that allows the singular value decomposition of a dataset to be learned based on single observation pairs presented serially. The algorithm has minimal memory requirements, and is therefore interesting in the natural language domain, where very large datasets are often used, and datasets quickly become intractable. The technique is demonstrated on the task of learning word and letter bigram pairs from text.
Generalized Hebbian Algorithm for Incremental Singular Value Decomposition in Natural Language Processing
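A minimal sketch of a Generalized Hebbian (Sanger's rule) update of the kind the entry above builds on: components are learned from one observation at a time with constant memory. The data stream, learning rate, and dimensions are toy assumptions, and the sketch tracks principal directions of a single stream rather than the full paired-observation SVD described in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def gha_step(W, x, lr=0.01):
    """One Generalized Hebbian (Sanger's rule) update: the rows of W move
    towards the leading principal directions of the streamed data."""
    y = W @ x
    # Hebbian term minus lower-triangular coupling with "earlier" components.
    W += lr * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)
    return W

dim, k = 20, 3
W = rng.normal(scale=0.1, size=(k, dim))

# Stream single observations one at a time: constant memory, no stored matrix.
direction = rng.normal(size=dim)
direction /= np.linalg.norm(direction)
for _ in range(5000):
    x = direction * rng.normal() + 0.1 * rng.normal(size=dim)
    W = gha_step(W, x)

# The first learned row should align with the dominant direction of the data.
cos = abs(W[0] @ direction) / (np.linalg.norm(W[0]) * np.linalg.norm(direction))
print(round(float(cos), 3))
```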
d27080528
This paper discusses the creation of a semantically annotated corpus of questions about patient data in electronic health records (EHRs). The goal is to provide the training data necessary for semantic parsers to automatically convert EHR questions into a structured query. A layered annotation strategy is used which mirrors a typical natural language processing (NLP) pipeline. First, questions are syntactically analyzed to identify multi-part questions. Second, medical concepts are recognized and normalized to a clinical ontology. Finally, logical forms are created using a lambda calculus representation. We use a corpus of 446 questions asking for patient-specific information. From these, 468 specific questions are found containing 259 unique medical concepts and requiring 53 unique predicates to represent the logical forms. We further present detailed characteristics of the corpus, including inter-annotator agreement results, and describe the challenges automatic NLP systems will face on this task.
Annotating Logical Forms for EHR Questions
d4983428
Matching coreferent named entities without prior knowledge requires good similarity measures. Soft-TFIDF is a fine-grained measure which performs well in this task. We propose to enhance this kind of metrics, through a generic model in which measures may be mixed, and show experimentally the relevance of this approach.
Robust Similarity Measures for Named Entities Matching
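A simplified Soft-TFIDF in the spirit of the measure discussed above. This is a sketch under stated assumptions: the secondary similarity is difflib's SequenceMatcher rather than Jaro-Winkler, the IDF "corpus" is a toy list of mentions, and the threshold is arbitrary.

```python
import math
from collections import Counter
from difflib import SequenceMatcher

# Toy "corpus" of entity mentions, used only to estimate IDF weights.
corpus = [["barack", "obama"], ["barak", "obama"], ["george", "bush"], ["g", "bush"]]
df = Counter(tok for mention in corpus for tok in set(mention))
N = len(corpus)

def tfidf(mention):
    tf = Counter(mention)
    vec = {t: (1 + math.log(c)) * math.log(N / df.get(t, 1)) for t, c in tf.items()}
    norm = math.sqrt(sum(w * w for w in vec.values())) or 1.0
    return {t: w / norm for t, w in vec.items()}

def soft_tfidf(a, b, theta=0.8):
    """TFIDF-weighted match where a token of `a` may pair with its closest
    token of `b` if the secondary string similarity exceeds `theta`."""
    wa, wb = tfidf(a), tfidf(b)
    sim = lambda s, t: SequenceMatcher(None, s, t).ratio()
    score = 0.0
    for s in wa:
        best, best_sim = max(((t, sim(s, t)) for t in wb), key=lambda p: p[1])
        if best_sim >= theta:
            score += wa[s] * wb[best] * best_sim
    return score

print(soft_tfidf(["barack", "obama"], ["barak", "obama"]))   # expected: close to 1
print(soft_tfidf(["barack", "obama"], ["george", "bush"]))   # expected: 0
```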
d51874381
Nowadays, more and more people are learning Chinese as their second language, and establishing an automatic diagnosis system for Chinese grammatical errors has become an important challenge. In this paper, we propose a Chinese grammatical error diagnosis (CGED) model with contextualized character representations. Compared to the traditional model using LSTM (Long Short-Term Memory), our model achieves better performance and does not require many hand-crafted features.
Contextualized Character Representation for Chinese Grammatical Error Diagnosis
d202782172
We present 1) a work-in-progress method to visually segment key regions of scientific articles using an object detection technique augmented with contextual features, and 2) a novel dataset of region-labeled articles. A continuing challenge in scientific literature mining is the difficulty of consistently extracting high-quality text from formatted PDFs. To address this, we adapt the object-detection technique Faster R-CNN for document layout detection, incorporating contextual information that leverages the inherently localized nature of article contents to improve region detection performance. Due to the limited availability of high-quality region labels for scientific articles, we also contribute a novel dataset of region annotations, the first version of which covers 9 region classes and 822 article pages. Initial experimental results demonstrate a 23.9% absolute improvement in mean average precision over the baseline model by incorporating contextual features, and a processing speed 14x faster than a text-based technique. Ongoing work on further improvements is also discussed.
Visual Detection with Context for Document Layout Analysis
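The detection backbone the entry above adapts can be instantiated directly from torchvision. The sketch below shows only the plain Faster R-CNN part, without the contextual features the paper adds; the class count is a stand-in, and the constructor's keyword arguments vary across torchvision versions.

```python
import torch
import torchvision

# 9 region classes + background as a stand-in for the paper's label set.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights=None, num_classes=10)
model.eval()

# A fake page image: one 3 x 800 x 600 tensor with values in [0, 1].
page = [torch.rand(3, 800, 600)]
with torch.no_grad():
    detections = model(page)[0]
print(detections["boxes"].shape, detections["labels"].shape)
```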
d237510
This paper presents a reestimation algorithm and a best-first parsing (BFP) algorithm for probabilistic dependency grammars (PDG). The proposed reestimation algorithm is a variation of the inside-outside algorithm adapted to probabilistic dependency grammars. The inside-outside algorithm is a probabilistic parameter reestimation algorithm for phrase structure grammars in Chomsky Normal Form (CNF). Dependency grammar represents a sentence structure as a set of dependency links between any two words in the sentence, and cannot be reestimated by the inside-outside algorithm directly. In this paper, the non-constituent objects complete-link and complete-sequence are defined as the basic units of dependency structure, and their probabilities are reestimated. The reestimation and BFP algorithms utilize a CYK-style chart and the non-constituent objects as chart entries. Both algorithms have polynomial time complexity.
Reestimation and Best-First Parsing Algorithm for Probabilistic Dependency Grammars
d215764610
Tree-adjoining grammars (TAG) have been proposed as a formalism for generation based on the intuition that the extended domain of syntactic locality that TAGs provide should aid in localizing semantic dependencies as well, in turn serving as an aid to generation from semantic representations. We demonstrate that this intuition can be made concrete by using the formalism of synchronous tree-adjoining grammars. The use of synchronous TAGs for generation provides solutions to several problems with previous approaches to TAG generation. Furthermore, the semantic monotonicity requirement previously advocated for generation grammars as a computational aid is seen to be an inherent property of synchronous TAGs.
Generation and Synchronous Tree-Adjoining Grammars
d218977424
d227231268
d118684820
Text generation with generative adversarial networks (GANs) can be divided into the text-based and code-based categories according to the type of signals used for discrimination. In this work, we introduce a novel text-based approach called Soft-GAN to effectively exploit the GAN setup for text generation. We demonstrate how autoencoders (AEs) can be used for providing a continuous representation of sentences, which we will refer to as soft-text. This soft representation will be used in GAN discrimination to synthesize similar soft-texts. We also propose hybrid latent code and text-based GAN (LATEXT-GAN) approaches with one or more discriminators, in which a combination of the latent code and the soft-text is used for GAN discriminations. We perform a number of subjective and objective experiments on two well-known datasets (SNLI and Image COCO) to validate our techniques. We discuss the results using several evaluation metrics and show that the proposed techniques outperform the traditional GAN-based text-generation methods.
Latent Code and Text-based Generative Adversarial Networks for Soft-text Generation
d198163236
d1347118
In this paper we investigate a structured model for jointly classifying the sentiment of text at varying levels of granularity. Inference in the model is based on standard sequence classification techniques, using constrained Viterbi to ensure consistent solutions. The primary advantage of such a model is that it allows classification decisions from one level in the text to influence decisions at another. Experiments show that this method can significantly reduce classification error relative to models trained in isolation. Consider the sentence "My 11 year old daughter has also been using it and it is a lot harder than it looks." In isolation, this sentence appears to convey negative sentiment; however, it is part of a favorable review.
Structured Models for Fine-to-Coarse Sentiment Analysis
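A toy constrained-Viterbi decoder in the spirit of the entry above: per-position constraints stand in for forcing sentence-level labels to agree with a higher-level decision. The label set, scores, and constraint interface are invented for illustration, not the paper's model.

```python
import numpy as np

LABELS = ["neg", "neu", "pos"]

def viterbi(emissions, transitions, allowed=None):
    """Viterbi over sentence labels with optional per-position constraints
    (a dict mapping position -> set of allowed label indices)."""
    n, k = emissions.shape
    allowed = allowed or {}
    ok = lambda pos, lab: pos not in allowed or lab in allowed[pos]

    score = np.full((n, k), -np.inf)
    back = np.zeros((n, k), dtype=int)
    for lab in range(k):
        if ok(0, lab):
            score[0, lab] = emissions[0, lab]
    for pos in range(1, n):
        for lab in range(k):
            if not ok(pos, lab):
                continue
            cand = score[pos - 1] + transitions[:, lab]
            back[pos, lab] = int(np.argmax(cand))
            score[pos, lab] = cand[back[pos, lab]] + emissions[pos, lab]

    path = [int(np.argmax(score[-1]))]
    for pos in range(n - 1, 0, -1):
        path.append(int(back[pos, path[-1]]))
    return [LABELS[i] for i in reversed(path)]

emissions = np.log(np.array([[0.7, 0.2, 0.1],   # first sentence looks negative alone
                             [0.2, 0.2, 0.6],
                             [0.1, 0.2, 0.7]]))
transitions = np.log(np.full((3, 3), 1 / 3))
print(viterbi(emissions, transitions))            # unconstrained decode
print(viterbi(emissions, transitions, {0: {2}}))  # force sentence 0 to "pos"
```

Run as written, the unconstrained decode labels the first sentence negative; constraining it to the positive class (for instance because the document as a whole is a favorable review) flips that decision.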
d218974203
d236460136
This paper describes the system submitted to SemEval 2021 Task 5: Toxic Spans Detection. The task concerns evaluating systems that detect the spans that make a text toxic, when detecting such spans is possible. To address the possibly multi-span detection problem, we develop a start-to-end tagging framework on top of a RoBERTa-based language model. In addition, we design a custom loss function which takes distance into account. In comparison to other participating teams, our system achieved a 69.03% F1 score, which is slightly lower (-1.8 and -1.73) than the top 1 (70.83%) and top 2 (70.77%) systems, respectively.
MedAI at SemEval-2021 Task 5: Start-to-end Tagging Framework for Toxic Spans Detection
d245838263
d4341200
Like text in other domains, biomedical documents contain a range of terms with more than one possible meaning. These ambiguities form a significant obstacle to the automatic processing of biomedical texts. Previous approaches to resolving this problem have made use of a variety of knowledge sources including linguistic information (from the context in which the ambiguous term is used) and domain-specific resources (such as UMLS). In this paper we compare a range of knowledge sources which have been previously used and introduce a novel one: MeSH terms. The best performance is obtained using linguistic features in combination with MeSH terms. Results from our system outperform published results for previously reported systems on a standard test set (the NLM-WSD corpus).
Knowledge Sources for Word Sense Disambiguation of Biomedical Text
d219307068
d849831
This paper presents an attempt at building a large scale distributed composite language model that is formed by seamlessly integrating an n-gram model, a structured language model, and probabilistic latent semantic analysis under a directed Markov random field paradigm to simultaneously account for local word lexical information, mid-range sentence syntactic structure, and long-span document semantic content. The composite language model has been trained by performing a convergent N-best list approximate EM algorithm and a follow-up EM algorithm to improve word prediction power on corpora with up to a billion tokens and stored on a supercomputer. The large scale distributed composite language model gives drastic perplexity reduction over n-grams and achieves significantly better translation quality measured by the Bleu score and "readability" of translations when applied to the task of re-ranking the N-best list from a state-of-the-art parsing-based machine translation system.
A Scalable Distributed Syntactic, Semantic, and Lexical Language Model
d43986718
We propose the use of WordNet synsets in a syntax-based reordering model for hierarchical statistical machine translation (HPB-SMT) to enable the model to generalize to phrases not seen in the training data but that have equivalent meaning. We detail our methodology to incorporate synsets' knowledge in the reordering model and evaluate the resulting WordNet-enhanced SMT systems on the English-to-Farsi language direction. The inclusion of synsets leads to the best BLEU score, outperforming the baseline (standard HPB-SMT) by 0.6 points absolute.
Using Wordnet to Improve Reordering in Hierarchical Phrase-Based Statistical Machine Translation
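The synset lookup the entry above relies on can be sketched with NLTK's WordNet interface. This assumes nltk and its wordnet data package are installed, and the shared-synset test is a simplification of the paper's method.

```python
from nltk.corpus import wordnet as wn  # requires nltk plus the "wordnet" data package

def share_synset(w1, w2, pos=wn.VERB):
    """True if the two words occur in a common WordNet synset for this POS,
    i.e. they can be treated as interchangeable when matching a rule."""
    return bool(set(wn.synsets(w1, pos=pos)) & set(wn.synsets(w2, pos=pos)))

print(share_synset("buy", "purchase"))  # expected: True (both in buy.v.01)
print(share_synset("buy", "sell"))      # expected: False
```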
d18993073
The paper proposes a treatment of relative sentences within the framework of Head-driven Phrase Structure Grammar (HPSG). Relative sentences are considered a rather delicate linguistic phenomenon and have not been explored enough by Arabic researchers. To deal with this phenomenon, we propose a study of the different forms of relative clauses and of the interaction of relatives with other linguistic phenomena such as ellipsis and coordination. In addition, we shed light on recursion in Arabic relative sentences, which makes the treatment of this phenomenon even more delicate. This study is used for the construction of an HPSG grammar that can process relative sentences. The HPSG formalism is based on two fundamental components: features and AVMs (attribute-value matrices). An adaptation of HPSG for the Arabic language is made here in order to integrate features and rules of the Arabic language. The resulting HPSG grammar is specified in TDL (Type Description Language). This specification is used by the LKB (Linguistic Knowledge Building) platform in order to generate the parser.
Construction of an HPSG Grammar for the Arabic Relative Sentences
d19822724
We present texigt, a command-line tool for the extraction of structured linguistic data from LaTeX source documents, and a language resource that has been generated using this tool: a corpus of interlinear glossed text (IGT) extracted from open access books published by Language Science Press. Extracted examples are represented in a simple XML format that is easy to process and can be used to validate certain aspects of interlinear glossed text. The main challenge involved is the parsing of TeX and LaTeX documents. We review why this task is impossible in general and how the texhs Haskell library uses a layered architecture and selective early evaluation (expansion) during lexing and parsing in order to provide access to structured representations of LaTeX documents at several levels. In particular, its parsing modules generate an abstract syntax tree for LaTeX documents after expansion of all user-defined macros and lexer-level commands that serves as an ideal interface for the extraction of interlinear glossed text by texigt. This architecture can easily be adapted to extract other types of linguistic data structures from LaTeX source documents.
Extracting Interlinear Glossed Text from LaTeX Documents
d15990976
Distributional approaches are based on a simple hypothesis: the meaning of a word can be inferred from its usage. The application of that idea to the vector space model makes possible the construction of a WordSpace in which words are represented by mathematical points in a geometric space. Similar words are represented close in this space and the definition of "word usage" depends on the definition of the context used to build the space, which can be the whole document, the sentence in which the word occurs, a fixed window of words, or a specific syntactic context. However, in its original formulation WordSpace can take into account only one definition of context at a time. We propose an approach based on vector permutation and Random Indexing to encode several syntactic contexts in a single WordSpace. Moreover, we propose some operations in this space and report the results of an evaluation performed using the GEMS 2011 Shared Evaluation data.
Encoding syntactic dependencies by vector permutation
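A toy Random Indexing space with permutation-encoded syntactic relations, in the spirit of the entry above. The dimensions, the choice of rotation offsets per relation, and the miniature dependency data are assumptions for illustration, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(42)
DIM, NONZERO = 300, 10

def index_vector():
    """Sparse ternary random vector, the usual Random Indexing building block."""
    v = np.zeros(DIM)
    pos = rng.choice(DIM, size=NONZERO, replace=False)
    v[pos] = rng.choice([-1.0, 1.0], size=NONZERO)
    return v

index, space = {}, {}                  # word -> index vector, word -> WordSpace vector
RELATIONS = {"nsubj": 1, "dobj": 2}    # relation -> rotation offset

def get_index(word):
    if word not in index:
        index[word] = index_vector()
    return index[word]

def update(head, dependent, relation):
    """Add the dependent's index vector, rotated by a relation-specific offset
    (a simple stand-in for the permutations in the paper), to the head's vector."""
    space.setdefault(head, np.zeros(DIM))
    space[head] += np.roll(get_index(dependent), RELATIONS[relation])

for head, dep, rel in [("eats", "cat", "nsubj"), ("eats", "fish", "dobj"),
                       ("drinks", "dog", "nsubj"), ("drinks", "water", "dobj")]:
    update(head, dep, rel)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# Query the space: is "cat" the subject or the object of "eats"?
print(cosine(space["eats"], np.roll(get_index("cat"), RELATIONS["nsubj"])))  # expected: high
print(cosine(space["eats"], np.roll(get_index("cat"), RELATIONS["dobj"])))   # expected: near zero
```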
d218974091
Multiword Expressions (MWEs) are a frequently occurring phenomenon found in all natural languages that is of great importance to linguistic theory, natural language processing applications, and machine translation systems. Neural Machine Translation (NMT) architectures do not handle these expressions well and previous studies have rarely addressed MWEs in this framework. In this work, we show that annotation and data augmentation, using external linguistic resources, can improve both translation of MWEs that occur in the source, and the generation of MWEs on the target, and increase performance by up to 5.09 BLEU points on MWE test sets. We also devise a MWE score to specifically assess the quality of MWE translation which agrees with human evaluation. We make available the MWE score implementation -along with MWE-annotated training sets and corpus-based lists of MWEs -for reproduction and extension.
Multiword Expression aware Neural Machine Translation
d1649240
This paper describes a system for discriminating between factual and non-factual contexts, trained on weakly labeled data by taking advantage of information implicit in annotations of negated events. In addition to evaluating factuality detection in isolation, we also evaluate its impact on a system for event detection. The two components for factuality detection and event detection form part of a system for identifying negative factual events, or counterfacts, with top-ranked results in the *SEM 2012 shared task.
Factuality Detection on the Cheap: Inferring Factuality for Increased Precision in Detecting Negated Events
d18786150
This paper describes a small, structured English corpus that is designed for translation into Less Commonly Taught Languages (LCTLs), and a set of re-usable tools for the creation of similar corpora. The corpus systematically explores meanings that are known to affect morphology or syntax in the world's languages. Each sentence is associated with a feature structure showing the elements of meaning that are represented in the sentence. The corpus is highly structured so that it can support machine learning with only a small amount of data. As part of the REFLEX program, the corpus will be translated into multiple LCTLs, resulting in parallel corpora that can be used for training of MT and other language technologies. Only the untranslated English corpus is described in this paper.
The MILE Corpus for Less Commonly Taught Languages
d28688924
The emerging area of Geographic Information Systems (GIS) has proven to add an interesting dimension to many research projects. Within the language-sites initiative we have brought together a broad range of links to digital language corpora and resources. Via Google Earth's visually appealing 3D-interface users can spin the globe, zoom into an area they are interested in and access directly the relevant language resources. This paper focuses on several ways of relating the map and the online data (lexica, annotations, multimedia recordings, etc.). Furthermore, we discuss some of the implementation choices that have been made, including future challenges. In addition, we show how scholars (both linguists and anthropologists) are using GIS tools to fulfill their specific research needs by making use of practical examples. This illustrates how both scientists and the general public can benefit from geography-based access to digital language data.
Language-Sites: Accessing and Presenting Language Resources via Geographic Information Systems
d219307131