d65137767
Machine learning has become the predominant problem-solving strategy for computational linguistics problems in the last decade. Many researchers work on improving algorithms, developing new ones, testing feature representation issues, and so forth. Other researchers, however, apply machine-learning techniques as off-the-shelf implementations, often with little knowledge of the algorithms and the intricacies of data representation. In this book, Daelemans and van den Bosch provide an in-depth introduction to Memory-Based Language Processing (MBLP) that shows, for different problems in NLP, how the technique is successfully applied. Apart from the more practical issues, the book also explores the suitability of the chosen learning paradigm, memory-based learning (Stanfill and Waltz 1986), for NLP problems. Thus the book is a valuable source of information for a wide range of readers, from the linguist interested in applying machine-learning techniques or the machine-learning specialist with no prior experience in NLP to the expert in machine learning wanting to learn more about the appropriateness of the MBLP bias for NLP problems.

Memory-based learning is a machine-learning method based on the idea that examples can be re-used directly in processing natural language problems. Training examples are stored without modification or abstraction. During classification, the most similar examples from the training data are located, and their class is used to classify the new example.

The book addresses different levels of understanding and working with MBLP. On one level, it explains the theoretical concepts of memory-based learning; on another, it provides more practical information: the implementation of memory-based learning, TiMBL, is described, as well as extensions such as FamBL and MBT. On a further level, the application of these techniques is described for typical problems in natural language processing. The reader learns how to model standard classification problems such as POS tagging, as well as sequence-learning problems, which are more difficult to model as classification problems. Daelemans and van den Bosch also cover critical issues, such as problems that arise in the evaluation of such experiments and the automation of the search for suitable system parameter settings. On a more abstract level, they approach the question of how suitable the bias of MBLP is. In chapter 6, they compare memory-based learning, an instance of lazy learning, with an instance of eager learning, rule induction, with regard to classification accuracy when, for example, more abstraction is introduced. Since MBLP does not abstract over the training data, it is called a lazy learning approach. Rule induction, in contrast, learns rules and does not go back to the actual training data during classification.
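The classification step described above is essentially k-nearest-neighbor retrieval over stored examples. Below is a minimal Python sketch of that idea (not TiMBL's actual implementation; the overlap metric and the toy features are illustrative assumptions):

```python
from collections import Counter

def overlap_distance(x, y):
    # Overlap metric: count the feature positions where two examples disagree.
    return sum(1 for a, b in zip(x, y) if a != b)

def mbl_classify(memory, new_example, k=1):
    """Classify by retrieving the k most similar stored examples and
    voting over their classes; no abstraction over the training data."""
    neighbors = sorted(memory, key=lambda ex: overlap_distance(ex[0], new_example))[:k]
    return Counter(label for _, label in neighbors).most_common(1)[0][0]

# Toy POS-tagging-style usage: features are (previous word, word suffix).
memory = [(("the", "og"), "NN"), (("will", "un"), "VB"), (("a", "at"), "NN")]
print(mbl_classify(memory, ("the", "at")))  # -> 'NN'
```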
Memory-Based Language Processing
d16544299
Public speaking is a widely requested professional skill, and at the same time an activity that causes one of the most common adult phobias (Miller and Stone, 2009). It is also known that the study of stress under laboratory conditions, as it is most commonly done, may provide only limited ecological validity (Wilhelm and Grossman, 2010). Previously, we introduced an interdisciplinary methodology to enable collecting a large number of recordings under consistent conditions (Aguiar et al., 2013). This paper introduces the VOCE corpus of speech annotated with stress indicators in naturalistic public speaking (PS) settings. The novelty of this corpus is that the recordings are carried out in objectively stressful PS situations, as recommended by Zanstra and Johnston (2011). The current database contains a total of 38 recordings, 13 of which contain full psychological and physiological annotation. We show that the collected recordings validate the assumptions of the methodology, namely that participants experience stress during the PS events. We describe the various metrics that can be used for physiological and psychological annotation, and we characterise the sample collected so far, providing evidence that demographics do not affect the relevant psychological or physiological annotation. The collection activities are on-going, and we expect to increase the number of complete recordings in the corpus to 30 by June 2014.
VOCE Corpus: Ecologically Collected Speech Annotated with Physiological and Psychological Stress Assessments
d7925642
This paper presents a collaborative partitioning algorithm, a novel ensemble-based approach to coreference resolution. Starting from the all-singleton partition, we search for a solution close to the ensemble's outputs in terms of a task-specific similarity measure. Our approach assumes a loose integration of the individual components of the ensemble and can therefore combine arbitrary coreference resolvers, regardless of their models. Our experiments on the CoNLL dataset show that collaborative partitioning yields results superior to those attained by the individual components, for ensembles of both strong and weak systems. Moreover, by applying the collaborative partitioning algorithm on top of three state-of-the-art resolvers, we obtain the second-best coreference performance reported so far in the literature (MELA v08 score of 64.47).
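The search from the all-singleton partition can be pictured as greedy cluster merging guided by the ensemble's votes. A hedged sketch follows; the vote-based similarity measure and the greedy strategy are our own illustration, not necessarily the paper's exact algorithm:

```python
def ensemble_score(partition, link_votes, n_systems):
    # Hypothetical similarity to the ensemble outputs: a mention pair that
    # most systems link rewards being clustered together; a pair that few
    # systems link penalizes it.
    return sum(votes - n_systems / 2.0
               for (m1, m2), votes in link_votes.items()
               if partition[m1] == partition[m2])

def collaborative_partition(mentions, link_votes, n_systems):
    # Start from the all-singleton partition: every mention is its own entity.
    partition = {m: i for i, m in enumerate(mentions)}
    improved = True
    while improved:
        improved = False
        for (m1, m2) in link_votes:
            if partition[m1] == partition[m2]:
                continue
            # Tentatively merge the two clusters; keep the merge only if it
            # moves the partition closer to the ensemble's outputs.
            trial = {m: (partition[m2] if c == partition[m1] else c)
                     for m, c in partition.items()}
            if ensemble_score(trial, link_votes, n_systems) > \
               ensemble_score(partition, link_votes, n_systems):
                partition, improved = trial, True
    return partition

# Toy usage: 3 systems voted on links between four mentions.
votes = {("Obama", "he"): 3, ("Obama", "president"): 2, ("he", "city"): 0}
print(collaborative_partition(["Obama", "he", "president", "city"], votes, 3))
```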
Collaborative Partitioning for Coreference Resolution
d13529880
The paper studies the automatic extraction of diagnostic word endings for Slavonic languages, aimed at determining grammatical, morphological, and semantic properties of the underlying word. In particular, ending-guessing rules are learned from a large morphological dictionary of Bulgarian in order to predict POS, gender, number, article, and semantics. A simple exact high-accuracy algorithm is developed and compared to an approximate one, which uses a scoring function previously proposed by Mikheev for POS guessing. It is shown how the number of rules of the latter can be reduced by a factor of up to 35 without sacrificing performance. The evaluation demonstrates coverage close to 100% and precision of 97-99% for the approximate algorithm.
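The flavor of the exact rule-learning step can be sketched as follows (illustrative thresholds and data shapes, not the paper's precise algorithm): keep only endings that predict the target property nearly unambiguously in the dictionary, then guess with the longest matching ending.

```python
from collections import defaultdict

def learn_ending_rules(dictionary, max_len=5, min_support=10, min_precision=0.99):
    """dictionary: iterable of (word, tag) pairs, e.g. tag = 'POS:gender:number'.
    Returns {ending: tag} for endings that predict the tag almost exactly."""
    counts = defaultdict(lambda: defaultdict(int))
    for word, tag in dictionary:
        for k in range(1, min(max_len, len(word)) + 1):
            counts[word[-k:]][tag] += 1
    rules = {}
    for ending, tags in counts.items():
        total = sum(tags.values())
        best_tag, best_n = max(tags.items(), key=lambda t: t[1])
        if total >= min_support and best_n / total >= min_precision:
            rules[ending] = best_tag
    return rules

def guess(word, rules):
    # Apply the longest matching ending rule, the usual guesser convention.
    for k in range(len(word), 0, -1):
        if word[-k:] in rules:
            return rules[word[-k:]]
    return None
```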
Robust Ending Guessing Rules with Application to Slavonic Languages
d1805147
Argumentation schemes are structures or templates for various kinds of arguments. Given the text of an argument with premises and conclusion identified, we classify it as an instance of one of five common schemes, using features specific to each scheme. We achieve accuracies of 63-91% in one-against-others classification and 80-94% in pairwise classification (baseline = 50% in both cases).
Classifying Arguments by Scheme
d8406106
This volume contains 27 revised versions of papers and commentaries presented at the First Conference in Laboratory Phonology, held in June 1987. The editors lead off with an excellent introduction, including discussion of the motivation for the conference, explanation of its multi-disciplinary nature, and summaries of the contributions, showing their relations both to one another and to the structure of the conference as a whole.

From the point of view of many researchers in general natural language processing, much of the background assumed and many of the questions addressed are likely to seem somewhat arcane. Nevertheless, the book is valuable on two counts: first, as an (admittedly sophisticated) overview of current practical and theoretical concerns in phonetics and phonology; and second, as an exemplary effort to integrate perspectives from radically differing "scientific subcultures" (i.e., phonetics and phonology). Traditionally, phonology has concerned itself with symbolic representations of cognitive primitives and processes that are manipulated by native speakers and hearers in conjunction with grammatical systems, while phonetics has devoted itself to instrumental analysis of the articulatory organs and the speech signal. The conference and this book are landmark efforts in attempting to reconcile these intellectual streams and lay the foundations for a unified theory of speech production and perception. As neighboring subdisciplines in all areas of NLP bring themselves into ever greater proximity by virtue of their own progress, general and methodological issues such as those addressed in this work become ever more pressing, and will require constant attention and evaluation.
+ 506 pp. Hardbound, ISBN 0-521-36238-5, $69.50; Paperbound
d245838273
d18952981
1. The speed at which MT researchers are abandoning or supplementing their traditional rule-based paradigms in favor of corpus-based approaches provides striking proof of their general lack of enthusiasm for the results of the last three decades of MT research. No doubt, this feeling is fueled in part by the fact that these results have not allowed the MT business to penetrate more than a marginal segment of the global translation market.

2. Martin Kay [3] was already contemplating this failure when, in 1980, he forcefully urged the MT community to shift its near-term goal away from classical MT towards machine-aided human translation (MAHT). While many MT researchers probably felt that Kay's proposed program was a more sensible way to go, most of them declined to sign up for the journey and preferred to stick it out for another decade on their familiar path towards an improbable place.

3. This reluctance to give any serious consideration to MAHT clearly had something to do with the then-prevalent rule-based paradigms of MT. It is in fact very hard to single out, within three decades of rule-based MT research, a single result that has any obvious and immediate potential from the point of view of MAHT. There was just no straightforward way of applying the core techniques of rule-based MT systems to the development of translation support tools.

4. But just the opposite appears to be true of corpus-based approaches. For example, one of the very first results obtained within this new paradigm is the development of algorithms capable of aligning the sentences of bilingual texts. This simple result turns out to be of fundamental importance from the point of view of MAHT. It constitutes in itself a suitable foundation for many kinds of new translation support tools. More on this below.

5. Why should there be such a difference between the two paradigms? The explanation, I think, is as follows. Rule-based MT tends to focus exclusively on the translation production problem. In the rare cases where it is possible to define good and complete translation models, this approach yields effective MT systems. But in all other cases, those where MAHT is called for, it turns out to be very difficult to make any use of production-oriented models. For example, it is hard to see how the particular target text intended by some translator could be partially generated in advance. Corpus-based methods, on the other hand, start from translations that have already been produced by humans and seek to discover their structure, completely or partially. This analysis-oriented perspective lends itself naturally to the development of translator's aids, because in MAHT the machine is not expected to produce the translations, but rather to understand enough about them to become helpful.

6. Translation analysis is the process of making explicit some or all of the translation correspondences that link the segments of a source text with those of its translation. Sentence alignment is a
Machine-Aided Human Translation and the Paradigm Shift
d120786723
This paper proposes an algorithm for causality inference based on a set of lexical knowledge bases that contain information about such items as event role, is-a hierarchy, relevant relation, antonymy, and other features. These lexical knowledge bases mainly make use of lexical features and symbols in HowNet. Several types of questions are used in experiments to test the effectiveness of the proposed algorithm. In particular, this paper deals with the question form "why" to show how causality inference works.
Question-Answering Based on Virtually Integrated Lexical Knowledge Base
d14202905
In this paper, we compare three different approaches to building a probabilistic context-free grammar for natural language parsing from a treebank corpus: 1) a model that simply extracts the rules contained in the corpus and counts the number of occurrences of each rule; 2) a model that also stores information about the parent node's category; and 3) a model that estimates the probabilities according to a generalized k-gram scheme with k = 3. The last one allows for faster parsing and decreases the perplexity of test samples.
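For the first two models, relative-frequency estimation from a treebank is straightforward. A minimal sketch assuming trees as nested (label, children) tuples (our illustration, not the authors' code):

```python
from collections import defaultdict

def count_rules(tree, counts, parent=None, annotate_parent=False):
    label, children = tree
    lhs = f"{label}^{parent}" if annotate_parent and parent else label
    if children and isinstance(children[0], tuple):
        counts[lhs][tuple(c[0] for c in children)] += 1
        for child in children:
            count_rules(child, counts, parent=label, annotate_parent=annotate_parent)
    else:  # preterminal rewriting to a word
        counts[lhs][tuple(children)] += 1

def pcfg_from_treebank(trees, annotate_parent=False):
    counts = defaultdict(lambda: defaultdict(int))
    for t in trees:
        count_rules(t, counts, annotate_parent=annotate_parent)
    # Relative-frequency estimate: P(lhs -> rhs) = count / total count of lhs.
    return {lhs: {rhs: n / sum(rhss.values()) for rhs, n in rhss.items()}
            for lhs, rhss in counts.items()}

trees = [("S", [("NP", ["John"]), ("VP", [("V", ["runs"])])])]
grammar = pcfg_from_treebank(trees)
# grammar["S"] == {("NP", "VP"): 1.0}
```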
A Comparison of PCFG Models*
d8700750
We present FireCite, a Mozilla Firefox browser extension that helps scholars assess and manage scholarly references on the web by automatically detecting and parsing such reference strings in real time. FireCite has two main components: 1) a reference string recognizer that has a high recall of 96%, and 2) a reference string parser that can process HTML web pages with an overall F1 of 0.878 and plain-text reference strings with an overall F1 of 0.97. In our preliminary evaluation, we presented our FireCite prototype to four academics in separate unstructured interviews. Their positive feedback provides evidence of the desirability of FireCite's citation management capabilities.
FireCite: Lightweight real-time reference string extraction from webpages
d33175460
The Knowledge Representation (KR) community and the Natural Language Processing community, in our opinion, have common goals, yet finding a language that is expressive enough and capable of efficient reasoning remains a challenge. We have claimed elsewhere that having a Natural Language (NL)-like KR may be a step towards meeting that challenge. The NL-like KR we looked at defines a controlled subset of English that is not trivial and exhibits powerful reasoning properties. Controlled Language for Inference Purposes (CLIP) is a dialect of English that was considered while developing a domain-independent knowledge-based system. The system takes as input 'clippy' utterances, U_i, and uses an NL-like KR called McLogic[2] to deduce plausible inferences from U_i and give a justification for these deductions.

[1] http://www.cogsci.ed.ac.uk/ fracas/
[2] Acting at the same time as a semantic representation.
Mind your Language! Controlled Language for Inference Purposes
d2166932
This paper reports on an empirically based system that automatically resolves VP ellipsis in the 644 examples identified in the parsed Penn Treebank. The results reported here represent the first systematic corpus-based study of VP ellipsis resolution, and the performance of the system is comparable to the best existing systems for pronoun resolution. The methodology and utilities described can be applied to other discourse-processing problems, such as other forms of ellipsis and anaphora resolution.

The system determines potential antecedents for ellipsis by applying syntactic constraints, and these antecedents are ranked by combining structural and discourse preference factors such as recency, clausal relations, and parallelism. The system is evaluated by comparing its output to the choices of human coders. The system achieves a success rate of 94.8%, where success is defined as sharing of a head between the system choice and the coder choice, while a baseline recency-based scheme achieves a success rate of 75.0% by this measure. Other criteria for success are also examined. When success is defined as an exact, word-for-word match with the coder choice, the system performs with 76.0% accuracy, and the baseline approach achieves only 14.6% accuracy. Analysis of the individual components of the system shows that each of the structural and discourse constraints used is a strong predictor of the antecedent of VP ellipsis. The VP ellipsis resolution system (VPE-RES) operates on Penn Treebank parse trees to determine the antecedent for VPE occurrences. The system, implemented in Common LISP, uses a Syntactic Filter to eliminate candidate antecedents in impossible
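The ranking of filtered antecedent candidates by combined preference factors might look like the following sketch, where the field names and weights are hypothetical placeholders rather than the paper's tuned values:

```python
def rank_antecedents(candidates):
    """candidates: list of dicts with hypothetical fields such as
    'distance' (clauses back from the ellipsis), 'in_clausal_relation',
    and 'is_parallel'. Weights are illustrative, not the paper's."""
    def score(c):
        s = -1.0 * c["distance"]                       # recency: nearer is better
        s += 2.0 if c["in_clausal_relation"] else 0.0  # clausal relation bonus
        s += 3.0 if c["is_parallel"] else 0.0          # parallelism bonus
        return s
    return max(candidates, key=score)

best = rank_antecedents([
    {"distance": 2, "in_clausal_relation": True, "is_parallel": False},
    {"distance": 1, "in_clausal_relation": False, "is_parallel": True},
])
```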
An Empirical Approach to VP Ellipsis
d874460
Detecting opinion relations is a crucial step for fine-grained opinion summarization. A valid opinion relation has three requirements: a correct opinion word, a correct opinion target, and the linking relation between them. Previous work tends to verify only two of these requirements for opinion extraction, leaving the other requirement unverified. This can inevitably introduce noise terms. To tackle this problem, this paper proposes a joint approach, where all three requirements are simultaneously verified by a deep neural network in a classification scenario. Some seeds are provided as positive labeled data for the classifier. However, negative labeled data are hard to acquire for this task. We consequently cast the task as a one-class classification problem and develop a One-Class Deep Neural Network. Experimental results show that the proposed joint approach significantly outperforms state-of-the-art weakly supervised methods.
Joint Opinion Relation Detection Using One-Class Deep Neural Network
d51882382
The paper describes Tilde's work on developing a neural machine translation (NMT) tool for the 2017-2018 Presidency of the Council of the European Union. The tool was developed by combining the European Commission's eTranslation service with a set of customized, domain-adapted NMT systems built by Tilde. The central aim of the tool is to assist staff members, translators, EU delegates, journalists, and other visitors at EU Council Presidency events in Estonia, Bulgaria, and Austria. The paper provides details on the workflow used to collect, filter, clean, normalize, and pre-process data for the NMT systems, and the methods applied for training and adaptation of the NMT systems for the EU Council Presidency. The paper also compares the trained NMT systems to other publicly available MT systems for Estonian and Bulgarian, showing that the custom systems achieve better results than competing systems.
Developing a Neural Machine Translation Service for the 2017-2018 European Union Presidency
d1356737
In this paper, a Maximum Entropy Markov Model (MEMM) for dialog state tracking is proposed to efficiently handle the evolution of user goals in two steps. The system first predicts the occurrence of a user goal change based on linguistic features and dialog context for each dialog turn, and then the proposed model utilizes this goal change information to infer the most probable dialog state sequence underlying the evolution of the user goal during the dialog. With the suggested domain-independent feature functions, the proposed model can better exploit not only the intra-dependencies within long ASR N-best lists but also the inter-dependencies of the observations across dialog turns, which leads to more efficient and accurate dialog state inference.
User Goal Change Model for Spoken Dialog State Tracking
d16298796
This paper investigates the complexity of the satisfiability problem for feature logics strong enough to code entire grammars unaided. We show that feature logics capable of both enforcing re-entrancy and stating linguistic generalisations will have undecidable satisfiability problems even when most Boolean expressivity has been discarded. We exhibit a decidable fragment, but the restrictions imposed to ensure decidability render it unfit for stand-alone use. The import of these results is discussed, and we conclude that there is a need for feature logics that are less homogeneous in their treatment of linguistic structure.
Decidability and Undecidability in stand-alone Feature Logics
d248780384
Word identification from continuous input is typically viewed as a segmentation task. Experiments with human adults suggest that familiarity with syntactic structures in their native language also influences word identification in artificial languages; however, the relation between syntactic processing and word identification is yet unclear. This work takes one step forward by exploring a radically different approach of word identification, in which segmentation of a continuous input is viewed as a process isomorphic to unsupervised constituency parsing. Besides formalizing the approach, this study reports simulations of human experiments with DIORA (Drozdov et al., 2019), a neural unsupervised constituency parser. Results show that this model can reproduce human behavior in word identification experiments, suggesting that this is a viable approach to study word identification and its relation to syntactic processing.
Word Segmentation as Unsupervised Constituency Parsing
d250141107
We present a resource of German light verb constructions extracted from textual labels in graphical business process models. Such models depict the activities in an organization's processes in a semi-formal way. From a large range of sources, we compiled a repository of 2,301 business process models. Their textual labels (altogether 52,963 labels) were analyzed. This produced a list of 5,246 occurrences of 846 light verb constructions. We found that the light verb constructions that occur in business process models differ from light verb constructions that have been analyzed in other texts. Hence, we conclude that texts in graphical business process models represent a specific type of text that is worth studying in its own right. We consider our work a step towards better automatic analysis of business process models, because understanding the actual meaning of activity labels is a prerequisite for detecting certain types of modelling problems.
German Light Verb Constructions in Business Process Models
d6830189
Current Chinese event extraction systems suffer much from two problems in trigger identification: unknown triggers and word segmentation errors to known triggers. To resolve these problems, this paper proposes two novel inference mechanisms to explore special characteristics in Chinese via compositional semantics inside Chinese triggers and discourse consistency between Chinese trigger mentions. Evaluation on the ACE 2005 Chinese corpus justifies the effectiveness of our approach over a strong baseline.
Employing Compositional Semantics and Discourse Consistency in Chinese Event Extraction
d10011066
Parsing is one of the important processes for natural language processing, and, in general, a large-scale CFG is used to parse a wide variety of sentences. For many languages, a CFG is derived from a large-scale syntactically annotated corpus, and many parsing algorithms using CFGs have been proposed. However, they could not be applied to Japanese, since a Japanese syntactically annotated corpus has not been available until now. In order to solve this problem, we have been building a large-scale Japanese syntactically annotated corpus. In this paper, we show the evaluation results of a CFG derived from our corpus and compare it with the results of some Japanese dependency analyzers.
Evaluation of a Japanese CFG Derived from a Syntactically Annotated Corpus with Respect to Dependency Measures
d64055047
Temporal processing in texts is currently enjoying renewed interest in NLP. In this article, we present a proposal for characterizing and typing temporal expressions that takes into account the work done in this domain while seeking to remedy the gaps and incompletenesses of some of that work. We make explicit how we position ourselves with respect to existing work and the reasons why we sometimes depart from it. The typing we define highlights real differences in the interpretation and the mode of referential resolution of expressions that, on the surface, appear similar or identical. We propose a set of objective and linguistically motivated criteria for recognizing, segmenting, and typing these expressions. We will see that this cannot be achieved without considering the processes with which these expressions are associated, and a context that is sometimes distant.

Abstract: Temporal processing in texts is a topic of renewed interest in NLP. In this paper we present a new way of typing temporal expressions that takes into account the state of the art of this domain and also tries to be more precise and accurate than some of the current proposals. We explain to what extent our proposal is compatible and comparable with the state of the art and why we sometimes stray from it. The typing system that we define highlights real differences in the interpretation and reference calculus of these expressions. At the same time, by offering objective criteria, it fulfils the need for high inter-annotator agreement. After having defined what we consider to be temporal expressions, we show that tokenization, characterization, and typing of those expressions can only be done by taking into account the processes to which these expressions are linked.

Keywords: Temporality, typing and characterization of temporal expressions.
A Proposal for Characterizing and Typing Temporal Expressions in Context
d6551679
3. Concept Extraction
d16870294
We propose a novel approach to deciphering short monoalphabetic ciphers that combines both character-level and word-level language models. We formulate decipherment as tree search, and use Monte Carlo Tree Search (MCTS) as a fast alternative to beam search. Our experiments show a significant improvement over the state of the art on a benchmark suite of short ciphers. Our approach can also handle ciphers without spaces and ciphers with noise, which allows us to explore its applications to unsupervised transliteration and deniable encryption.
Solving Substitution Ciphers with Combined Language Models
d9710165
Deriving the emotion of a human speaker is a hard task, especially if only the audio stream is taken into account. While state-of-the-art approaches already provide good results, adaptive methods have been proposed in order to further improve the recognition accuracy. A recent approach is to add characteristics of the speaker, e.g., the gender of the speaker. In this contribution, we argue that adding information unique to each speaker, i.e., by using speaker identification techniques, improves emotion recognition simply by adding this additional information to the feature vector of the statistical classification algorithm. Moreover, we compare this approach to emotion recognition using only the speaker's gender, a non-unique speaker attribute. We justify this by performing adaptive emotion recognition using both gender and speaker information on four corpora of different languages containing acted and non-acted speech. The final results show that adding speaker information significantly outperforms both adding gender information and solely using a generic speaker-independent approach.
Comparison of Gender-and Speaker-adaptive Emotion Recognition
d52011439
In this article, we discuss which text, speech, and image technologies have been developed, and would be feasible to develop, for the approximately 60 Indigenous languages spoken in Canada. In particular, we concentrate on technologies that may be feasible to develop for most or all of these languages, not just those that may be feasible for the few most-resourced of these. We assess past achievements and consider future horizons for Indigenous language transliteration, text prediction, spell-checking, approximate search, machine translation, speech recognition, speaker diarization, speech synthesis, optical character recognition, and computer-aided language learning.
Indigenous language technologies in Canada: Assessment, challenges, and successes
d52012539
Recently, deep reinforcement learning (DRL) has been used for dialogue policy optimization. However, many DRL-based policies are not sample-efficient. Most recent advances focus on improving DRL optimization algorithms to address this issue. Here, we take the alternative route of designing a neural network structure that is better suited for DRL-based dialogue management. The proposed structured deep reinforcement learning is based on graph neural networks (GNNs) and consists of several sub-networks, one for each node of a directed graph. The graph is defined according to the domain ontology, and each node can be considered a sub-agent. During decision making, these sub-agents exchange internal messages with their neighbors on the graph. We also propose an approach to jointly optimize the graph structure as well as the parameters of the GNN. Experiments show that structured DRL significantly outperforms previous state-of-the-art approaches in almost all of the 18 tasks of the PyDial benchmark.
Structured Dialogue Policy with Graph Neural Networks
d9422008
We describe the Sheffield system used in TempEval-2007. Our system takes a machine-learning (ML) based approach, treating temporal relation assignment as a simple classification task and using features easily derived from the TempEval data, i.e. features that do not require 'deeper' NLP analysis. We aimed to explore three questions: (1) How well would a 'lite' approach of this kind perform? (2) Which features contribute positively to system performance? (3) Which ML algorithm is better suited for the TempEval tasks? We used the Weka ML workbench to facilitate experimenting with different ML algorithms. The paper describes our system and supplies preliminary answers to the above questions.
USFD: Preliminary Exploration of Features and Classifiers for the TempEval-2007 Tasks
d6660152
We present a multimodal dialogue system that allows doctors to interact with a medical decision support system in virtual reality (VR). We integrate an interactive visualization of patient records and radiology image data, as well as therapy predictions. Therapy predictions are computed in real time using a deep learning model.
A Multimodal Dialogue System for Medical Decision Support in Virtual Reality
d16378459
Language engineering implements functions of language and information via computers. The need for language engineering platforms has been generally recognized, and several research efforts are being undertaken around the world. Our goal is to establish a Korean information platform of linguistic resources and tools for the Korean language and information communities. The platform will support researchers and engineers with well-developed and standardized resources and application tools, thereby avoiding duplicated activities from scratch and amplifying overall effort in the domain. This paper reports the components and current status of the project, and the importance of the effort.
Korean Language Engineering: Current Status of the Information Platform *
d260440841
This paper presents a new web mining scheme for parallel data acquisition. Based on the Document Object Model (DOM), a web page is represented as a DOM tree. A DOM tree alignment model is then proposed to identify the translationally equivalent texts and hyperlinks between two parallel DOM trees. By tracing the identified parallel hyperlinks, parallel web documents are recursively mined. Benchmarks show that, compared with previous mining schemes, this new scheme improves the mining coverage, reduces mining bandwidth, and enhances the quality of mined parallel sentences.
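The core of the alignment step can be pictured as a recursive walk over two parallel DOM trees. A deliberately simplified sketch follows; the Node class and the tag-matching heuristic are our assumptions, and the paper's model additionally scores text and hyperlink similarity:

```python
class Node:
    def __init__(self, tag, text="", children=()):
        self.tag, self.text, self.children = tag, text, list(children)

def align_dom(a, b, pairs):
    """Recursively align two parallel DOM trees. Nodes whose tags match
    in the same child positions are treated as translation counterparts."""
    if a.tag != b.tag:
        return
    if a.text and b.text:
        pairs.append((a.text, b.text))      # candidate parallel text segment
    for ca, cb in zip(a.children, b.children):
        align_dom(ca, cb, pairs)

en = Node("html", children=[Node("p", "Hello world")])
zh = Node("html", children=[Node("p", "你好，世界")])
pairs = []
align_dom(en, zh, pairs)   # -> [("Hello world", "你好，世界")]
```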
A DOM Tree Alignment Model for Mining Parallel Data from the Web
d17585853
Manual annotation of large textual corpora can be cost-prohibitive, especially for rare and under-resourced languages. One potential solution is pre-annotation: asking human annotators to correct sentences that have already been annotated, usually by a machine. Another potential solution is correction propagation: using annotator corrections to dynamically improve the remaining pre-annotations within the current sentence. The research presented in this paper employs a controlled user study to discover under what conditions these two machine-assisted annotation techniques are effective in increasing annotator speed and accuracy, and thereby reducing cost, for the task of morphologically annotating texts written in classical Syriac. A preliminary analysis of the data indicates that pre-annotations improve annotator accuracy when they are at least 60% accurate, and annotator speed when they are at least 80% accurate. This research constitutes the first systematic evaluation of pre-annotation and correction propagation together in a controlled user study.
First Results in a Study Evaluating Pre-annotation and Correction Propagation for Machine-Assisted Syriac Morphological Analysis
d252819434
Multilingual Neural Machine Translation (MNMT) exhibits impressive performance by developing a single translation model for many languages. Previous studies on multilingual translation reveal that multilingual training is effective for languages with limited corpora. This paper presents our submission (Team Id: NITR) to the WAT 2022 "MultiIndicMT shared task", where the objective is translation between English and 5 Indic languages (newly added in the WAT 2022 corpus) using the corpus provided by the WAT organizers. Our system is based on transformer-based NMT using the fairseq modelling toolkit with ensemble techniques. Heuristic pre-processing approaches are applied before model training. Our multilingual NMT systems are trained with shared encoder and decoder parameters, with language embeddings assigned to each token in both encoder and decoder. Our final multilingual system was evaluated using BLEU and RIBES scores.
NIT Rourkela Machine Translation(MT) System Submission to WAT 2022 for MultiIndicMT: An Indic Language Multilingual Shared Task
d219300752
d248780349
Large-scale language modeling and natural language prompting have demonstrated exciting capabilities for few- and zero-shot learning in NLP. However, translating these successes to specialized domains such as biomedicine remains challenging, due in part to biomedical NLP's significant dataset debt: the technical costs associated with data that are not consistently documented or easily incorporated into popular machine learning frameworks at scale. To assess this debt, we crowdsourced curation of datasheets for 167 biomedical datasets. We find that only 13% of datasets are available via programmatic access and 30% lack any documentation on licensing and permitted reuse. Our dataset catalog is available at: https://tinyurl.com/bigbio22.
Dataset Debt in Biomedical Language Modeling
d1106388
d419627
d9955460
We aim to address two complementary deficiencies in Natural Language Processing (NLP) research: (i) Despite the importance and prevalence of metaphor across many discourse genres, and metaphor's many functions, applied NLP has mostly not addressed metaphor understanding. But, conversely, (ii) difficult issues in metaphor understanding have hindered large-scale application, extensive empirical evaluation, and the handling of the true breadth of metaphor types and interactions with other language phenomena. In this paper, abstracted from a recent grant proposal, a new avenue for addressing both deficiencies and for inspiring new basic research on metaphor is investigated: namely, placing metaphor research within the "Recognizing Textual Entailment" (RTE) task framework for evaluation of semantic processing systems.
Textual Entailment as an Evaluation Framework for Metaphor Resolution: A Proposal
d51868342
When learning Chinese as a foreign language, learners may make grammatical errors due to negative transfer from their native languages. However, few grammar checking applications have been developed to support these learners. The goal of this paper is to develop a tool to automatically diagnose four types of grammatical errors in Chinese sentences written by foreign learners: redundant words (R), missing words (M), bad word selection (S), and disordered words (W). In this paper, a conventional linear CRF model with specific feature engineering and an LSTM-CRF model are used to solve the CGED (Chinese Grammatical Error Diagnosis) task. We make improvements to both models, and the submitted results achieve better false positive rate and accuracy than the average of all runs from CGED 2018 at all three evaluation levels.
Chinese Grammatical Error Diagnosis Based on CRF and LSTM-CRF model
d8042124
This paper shows that incorporating linguistically motivated features to ensure correct animacy and number agreement in an averaged perceptron ranking model for CCG realization helps improve a state-of-the-art baseline even further. Traditionally, these features have been modelled using hard constraints in the grammar. However, given the graded nature of grammaticality judgements in the case of animacy, we argue a case for the use of a statistical model to rank competing preferences. Though subject-verb agreement is generally viewed to be syntactic in nature, a perusal of relevant examples discussed in the theoretical linguistics literature (Kathol, 1999; Pollard and Sag, 1994) points toward the heterogeneous nature of English agreement. Compared to writing grammar rules, our method is more robust and allows incorporating information from diverse sources in realization. We also show that the perceptron model can reduce balanced punctuation errors that would otherwise require a post-filter. The full model yields significant improvements in BLEU scores on Section 23 of the CCGbank and makes many fewer agreement errors.
Designing Agreement Features for Realization Ranking
d37440813
The University of Sheffield (USFD) participated in the International Workshop on Spoken Language Translation (IWSLT) in 2014. In this paper, we introduce the USFD SLT system for IWSLT. Automatic speech recognition (ASR) is achieved by two multi-pass deep neural network systems with adaptation and rescoring techniques. Machine translation (MT) is achieved by a phrase-based system. The USFD primary system incorporates state-of-the-art ASR and MT techniques and gives a BLEU score of 23.45 and 14.75 on the English-to-French and English-to-German speech-to-text translation tasks with the IWSLT 2014 data. The USFD contrastive systems explore the integration of ASR and MT by using a quality estimation system to rescore the ASR outputs, optimising towards better translation. This gives a further 0.54 and 0.26 BLEU improvement respectively on the IWSLT 2012 and 2014 evaluation data.
The USFD SLT System for IWSLT
d17585795
SYNOPSIS: This paper introduces the topic of evaluation of Natural Language Processing systems, and discusses the role of test suites in the linguistic evaluation of a system. The work on test suites that is being carried out within the framework of the TSNLP project is described in detail, and the relevance of the project to the evaluation of machine translation systems is considered.
Test Suites for Natural Language Processing
d2717698
We consider the problem of modeling the content structure of texts within a specific domain, in terms of the topics the texts address and the order in which these topics appear. We first present an effective knowledge-lean method for learning content models from unannotated documents, utilizing a novel adaptation of algorithms for Hidden Markov Models. We then apply our method to two complementary tasks: information ordering and extractive summarization. Our experiments show that incorporating content models in these applications yields substantial improvement over previously-proposed methods.
Catching the Drift: Probabilistic Content Models, with Applications to Generation and Summarization
d5575191
Weighted tree transducers have been proposed as useful formal models for representing syntactic natural language processing applications, but there has been little description of inference algorithms for these automata beyond formal foundations. We give a detailed description of algorithms for application of cascades of weighted tree transducers to weighted tree acceptors, connecting formal theory with actual practice. Additionally, we present novel on-the-fly variants of these algorithms, and compare their performance on a syntax machine translation cascade based on (Yamada and Knight, 2001).

Motivation

Weighted finite-state transducers have found recent favor as models of natural language (Mohri, 1997). In order to make actual use of systems built with these formalisms we must first calculate the set of possible weighted outputs allowed by the transducer given some input, which we call forward application, or the set of possible weighted inputs given some output, which we call backward application. After application we can do some inference on this result, such as determining its k highest weighted elements.

We may also want to divide up our problems into manageable chunks, each represented by a transducer. As noted by Woods (1980), it is easier for designers to write several small transducers where each performs a simple transformation, rather than painstakingly construct a single complicated device. We would like to know, then, the result of transformation of input or output by a cascade of transducers, one operating after the other. As we will see, there are various strategies for approaching this problem. We will consider offline composition, bucket brigade application, and on-the-fly application.

Application of cascades of weighted string transducers (WSTs) has been well studied (Mohri, 1997). Less well studied but of more recent interest is application of cascades of weighted tree transducers (WTTs). We tackle application of WTT cascades in this work, presenting:

• explicit algorithms for application of WTT cascades,
• novel algorithms for on-the-fly application of WTT cascades, and
• experiments comparing the performance of these algorithms.

Strategies for the string case

Before we discuss application of WTTs, it is helpful to recall the solution to this problem in the WST domain. We recall previous formal presentations of WSTs (Mohri, 1997) and note informally that they may be represented as directed graphs with designated start and end states and edges labeled with input symbols, output symbols, and weights.[1] Fortunately, the solution for WSTs is practically trivial: we achieve application through a series of embedding, composition, and projection operations. Embedding is simply the act of representing a string or regular string language as an identity WST. Composition of WSTs, that is, generating a single WST that captures the transformations of two input WSTs used in sequence, is not at all trivial, but has been well covered in, e.g., (Mohri, 2009), where directly implementable algorithms can be found. Finally, projection is another trivial operation: the domain or range language can be obtained from a WST by ignoring the output or input symbols, respectively, on its arcs, and summing weights on otherwise identical arcs. By embedding an input, composing the result with the given WST, and projecting the result, forward application is accomplished.[2] We are then left with a weighted string acceptor (WSA), essentially a weighted, labeled graph, which can be traversed.

[1] We assume throughout this paper that weights are in R+ ∪ {+∞}, that the weight of a path is calculated as the product of the weights of its edges, and that the weight of a (not necessarily finite) set T of paths is calculated as the sum of the weights of the paths of T.
[2] For backward applications, the roles of input and output are simply exchanged.
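The embed-compose-project recipe for forward application is compact enough to sketch directly. The toy Python below is our own illustration of the idea, omitting epsilon transitions and the summing of weights over otherwise identical arcs that projection requires in general:

```python
# A WST here is (arcs, start, finals), where arcs are tuples
# (src, insym, outsym, weight, dst).

def embed(string):
    """Represent a string as an identity WST."""
    arcs = [(i, s, s, 1.0, i + 1) for i, s in enumerate(string)]
    return arcs, 0, {len(string)}

def compose(t1, t2):
    """Product construction: run t2 on the output of t1."""
    arcs1, s1, f1 = t1
    arcs2, s2, f2 = t2
    arcs = []
    for (p, a, b, w1, q) in arcs1:
        for (r, c, d, w2, t) in arcs2:
            if b == c:  # output of t1 feeds the input of t2
                arcs.append(((p, r), a, d, w1 * w2, (q, t)))
    return arcs, (s1, s2), {(x, y) for x in f1 for y in f2}

def project_output(t):
    """Keep only output symbols: a weighted string acceptor (WSA)."""
    arcs, s, f = t
    return [(p, out, w, q) for (p, _, out, w, q) in arcs], s, f

# Forward application: embed the input, compose with the WST, project.
rule = ([(0, "a", "b", 0.5, 0), (0, "c", "c", 1.0, 0)], 0, {0})
wsa = project_output(compose(embed("ac"), rule))
```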
Efficient Inference Through Cascades of Weighted Tree Transducers
d155092969
Recent work establishes dataset difficulty and removes annotation artifacts via partial-input baselines (e.g., hypothesis-only models for SNLI or question-only models for VQA). When a partial-input baseline gets high accuracy, a dataset is cheatable. However, the converse is not necessarily true: the failure of a partial-input baseline does not mean a dataset is free of artifacts. To illustrate this, we first design artificial datasets which contain trivial patterns in the full input that are undetectable by any partial-input model. Next, we identify such artifacts in the SNLI dataset: a hypothesis-only model augmented with trivial patterns in the premise can solve 15% of the examples that are previously considered "hard". Our work provides a caveat for the use of partial-input baselines for dataset verification and creation.
Misleading Failures of Partial-input Baselines
d15861335
We report on recent advances in HPSG parsing of German with local ambiguity packing (Oepen and Carroll, 2000), achieving a speed-up factor of 2 on a balanced test-suite. In contrast to earlier studies carried out for English using the same packing algorithm, we show that restricting semantic features only is insufficient for achieving acceptable runtime performance with a German HPSG grammar. In a series of experiments relating to the three different types of discontinuities in German (head movement, extraction, extraposition), we examine the effects of restrictor choice, ultimately showing that extraction and head movement require partial restriction of the respective features encoding the dependency, whereas full restriction gives best results for extraposition.
Local ambiguity packing and discontinuity in German
d226238977
d226239152
d6528511
Language model (LM) adaptation is important for both speech and language processing. It is often achieved by combining a generic LM with a topic-specific model that is more relevant to the target document. Unlike previous work on unsupervised LM adaptation, this paper investigates how effectively using named entity (NE) information, instead of considering all the words, helps LM adaptation. We evaluate two latent topic analysis approaches in this paper, namely, clustering and Latent Dirichlet Allocation (LDA). In addition, a new dynamically adapted weighting scheme for topic mixture models is proposed based on LDA topic analysis. Our experimental results show that the NE-driven LM adaptation framework outperforms the baseline generic LM. The best result is obtained using the LDA-based approach by expanding the named entities with syntactically filtered words, together with using a large number of topics, which yields a perplexity reduction of 14.23% compared to the baseline generic LM.
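The combination of a generic LM with topic-specific models is typically a linear interpolation. A minimal sketch follows; the callables and the fixed interpolation weight are illustrative assumptions, whereas the paper's dynamic weighting scheme derives per-document topic weights from LDA analysis of the named entities:

```python
def adapted_prob(word, history, generic_lm, topic_lms, topic_weights, lam=0.7):
    """Linear interpolation of a generic LM with a topic mixture.
    generic_lm and topic_lms are hypothetical callables returning
    P(word | history); topic_weights are per-document mixture weights,
    e.g. LDA topic proportions. lam is an illustrative weight."""
    p_generic = generic_lm(word, history)
    p_topic = sum(w * lm(word, history)
                  for lm, w in zip(topic_lms, topic_weights))
    return lam * p_generic + (1 - lam) * p_topic
```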
Unsupervised Language Model Adaptation Incorporating Named Entity Information
d1588777
This paper investigates automatic identification of Information Structure (IS) in texts. The experiments use the Prague Dependency Treebank, which is annotated with IS following the Praguian approach of Topic-Focus Articulation. We automatically detect t(opic) and f(ocus), using node attributes from the treebank as basic features and derived features inspired by the annotation guidelines. We present the performance of decision tree (C4.5), maximum entropy, and rule induction (RIPPER) classifiers on all tectogrammatical nodes. We compare the results against a baseline system that always assigns f(ocus) and against a rule-based system. The best system achieves an accuracy of 90.69%, which is a 44.73% improvement over the baseline (62.66%). The notion 'kontrast' with a 'k' was introduced in (Vallduví and Vilkuna, 1998) to replace what Steedman calls 'focus', and to avoid confusion with other definitions of focus.
Data-driven Approaches for Information Structure Identification
d256461293
We describe TTIC's model submission to the WMT-SLT 2022 task (Müller et al., 2022) on sign language translation (Swiss-German Sign Language (DSGS) → German). Our model consists of an I3D backbone for image encoding and a Transformer-based encoder-decoder model for sequence modeling. The I3D is pretrained with isolated sign recognition using the WLASL dataset. The model is based on RGB images alone and does not rely on pre-extracted human pose. We explore a few different strategies for model training in this paper. Our system achieves a 0.3 BLEU score and 0.195 ChrF score on the official test set.
TTIC's WMT-SLT 22 Sign Language Translation System
d28805394
d18087464
This paper proposes a convolution tree kernel-based approach for relation extraction, where the parse tree is expanded with entity features such as entity type, subtype, and mention level. Our study indicates that our method not only effectively captures both the syntactic structure and entity information of relation instances, but also avoids the difficulty of tuning the parameters in composite kernels. We also demonstrate that predicate verb information can be used to further improve performance, though its enhancement is limited. Evaluation on the ACE 2004 benchmark corpus shows that our system slightly outperforms both the previous best-reported feature-based and kernel-based systems.
Relation Extraction Using Convolution Tree Kernel Expanded with Entity Features *
d2275946
Studies of the graph of dictionary definitions (DD) (Picard et al., 2009; Levary et al., 2012) have revealed strong semantic coherence of local topological structures. The techniques used in these papers are simple, and the main results are found by understanding the structure of cycles in the directed graph (where words point to definitions). Based on our earlier work (Levary et al., 2012), we study a different class of word definitions, namely those of the Free Association (FA) dataset. These are responses by subjects to a cue word, which are then summarized by a directed free association graph.
The Topology of Semantic Knowledge
d10976937
We report on the development of a new automatic feedback model to improve information retrieval in digital libraries. Our hypothesis is that some particular sentences, selected based on argumentative criteria, can be more useful than others for performing well-known feedback information retrieval tasks. The argumentative model we explore is based on four disjoint classes, which have been very regularly observed in scientific reports: PURPOSE, METHODS, RESULTS, CONCLUSION. To test this hypothesis, we use the Rocchio algorithm as baseline. While Rocchio selects the features to be added to the original query based on statistical evidence, we propose to base our feature selection also on argumentative criteria. Thus, we restrict the expansion to features appearing only in sentences classified into one of our argumentative categories. Our results, obtained on the OHSUMED collection, show a significant improvement when expansion is based on PURPOSE (mean average precision = +23%) and CONCLUSION (mean average precision = +41%) contents rather than on other argumentative contents. These results suggest that argumentation is an important linguistic dimension that could benefit information retrieval.
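The proposed restriction can be grafted onto Rocchio-style expansion as in the following sketch, where classify_sentence is a hypothetical argumentative classifier and the weighting scheme is a simplification of the standard Rocchio update:

```python
from collections import Counter

def rocchio_argumentative(query_terms, feedback_docs, classify_sentence,
                          allowed=("PURPOSE", "CONCLUSION"),
                          top_n=10, beta=0.75):
    """Rocchio-style query expansion restricted to sentences from selected
    argumentative categories. classify_sentence is a hypothetical classifier
    returning one of PURPOSE / METHODS / RESULTS / CONCLUSION."""
    weights = Counter({t: 1.0 for t in query_terms})
    pool = Counter()
    for doc in feedback_docs:                 # each doc: a list of sentences
        for sentence in doc:
            if classify_sentence(sentence) in allowed:
                pool.update(sentence.lower().split())
    if pool:
        top_freq = max(pool.values())
        for term, freq in pool.most_common(top_n):
            weights[term] += beta * freq / top_freq  # add weighted feedback terms
    return weights
```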
Argumentative Feedback: A Linguistically-motivated Term Expansion for Information Retrieval
d41480412
Out-of-vocabulary word translation is a major problem for the translation of low-resource languages that suffer from a lack of parallel training data. This paper evaluates the contributions of target-language context models towards the translation of OOV words, specifically in those cases where OOV translations are derived from external knowledge sources, such as dictionaries. We develop both neural and non-neural context models and evaluate them within both phrase-based and self-attention based neural machine translation systems. Our results show that neural language models that integrate additional context beyond the current sentence are the most effective in disambiguating possible OOV word translations. We present an efficient second-pass lattice-rescoring method for wide-context neural language models and demonstrate performance improvements over state-of-the-art self-attention based neural MT systems in five out of six low-resource language pairs.
Context Models for OOV Word Translation in Low-Resource Languages
d3067452
We describe a corpus of consumer health questions annotated with named entities. The corpus consists of 1548 de-identified questions about diseases and drugs, written in English. We defined 15 broad categories of biomedical named entities for annotation. A pilot annotation phase in which a small portion of the corpus was double-annotated by four annotators was followed by a main phase in which double annotation was carried out by six annotators, and a reconciliation phase in which all annotations were reconciled by an expert. We conducted the annotation in two modes, manual and assisted, to assess the effect of automatic pre-annotation and calculated inter-annotator agreement. We obtained moderate inter-annotator agreement; assisted annotation yielded slightly better agreement and fewer missed annotations than manual annotation. Due to complex nature of biomedical entities, we paid particular attention to nested entities for which we obtained slightly lower inter-annotator agreement, confirming that annotating nested entities is somewhat more challenging. To our knowledge, the corpus is the first of its kind for consumer health text and is publicly available.
Annotating Named Entities in Consumer Health Questions
d219303882
d3012890
We present a model and an experimental platform of a bootstrapping approach to statistical induction of natural language properties that is constraint-based with voting components. The system is incremental and unsupervised. In the following discussion we focus on the components for morphological induction. We show that, on the much harder problem of incremental unsupervised morphological induction, our system can outperform comparable all-at-once algorithms with respect to precision. We discuss how we use such systems to identify cues for induction in a cross-level architecture.
On Statistical Parameter Setting
d236486099
d15967673
In the present work we raise the hypothesis that eye-movements when reading texts reveal task performance, as measured by the level of understanding of the reader. With the objective of testing that hypothesis, we introduce a framework to integrate geometric information of eye-movements and text layout into natural language processing models via image processing techniques. We demonstrate patterns in reading behavior among subjects with similar task performance using principal component analysis, and quantify the likelihood of our hypothesis using the concept of linear separability. Finally, we point to potential applications that could benefit from these findings.
Recognizing personal characteristics of readers using eye-movements and text features
d8866717
This paper describes a multichannel acoustic data collection recorded under the European DICIT project, during the Wizard of Oz (WOZ) experiments carried out at FAU and FBK-irst laboratories. The scenario is a distant-talking interface for interactive control of a TV. The experiments involve the acquisition of multichannel data for signal processing front-end and were carried out due to the need to collect a database for testing acoustic pre-processing algorithms. In this way, realistic scenarios can be simulated at a preliminary stage, instead of real-time implementations, allowing for repeatable experiments. To match the project requirements, the WOZ experiments were recorded in three languages: English, German and Italian. Besides the user inputs, the database also contains non-speech related acoustic events, room impulse response measurements and video data, the latter used to compute 3D labels. Sessions were manually transcribed and segmented at word level, introducing also specific labels for acoustic events.
WOZ Acoustic Data Collection For Interactive TV
d2724633
In this paper, we propose a neural machine translation (NMT) with a key-value attention mechanism on the source-side encoder. The key-value attention mechanism separates the source-side content vector into two types of memory known as the key and the value. The key is used for calculating the attention distribution, and the value is used for encoding the context representation. Experiments on three different tasks indicate that our model outperforms an NMT model with a conventional attention mechanism. Furthermore, we perform experiments with a conventional NMT framework, in which a part of the initial value of a weight matrix is set to zero so that the matrix is at the same initial-state as the key-value attention mechanism. As a result, we obtain comparable results with the key-value attention mechanism without changing the network structure.
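The key-value split can be illustrated in a few lines of NumPy: each encoder state is divided in half, the key half scoring the attention distribution and the value half building the context vector. The dimensions and the dot-product scorer are our assumptions, not necessarily the paper's configuration:

```python
import numpy as np

def key_value_attention(hidden_states, decoder_state):
    """Split each encoder state into a key half (for the attention
    distribution) and a value half (for the context representation)."""
    d = hidden_states.shape[1] // 2
    keys, values = hidden_states[:, :d], hidden_states[:, d:]
    scores = keys @ decoder_state[:d]        # attention scores from keys only
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                 # softmax over source positions
    return weights @ values                  # context built from values only

enc = np.random.randn(6, 8)                  # 6 source positions, hidden size 8
ctx = key_value_attention(enc, np.random.randn(8))
```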
Key-value Attention Mechanism for Neural Machine Translation
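A minimal numpy sketch of the mechanism this record describes, assuming (for illustration) that the encoder state is simply split in half into a key part and a value part; the paper's actual network and dimensions may differ.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def key_value_attention(enc, dec_state):
    """Attention where keys score positions and values encode the context.

    enc: (src_len, 2d) encoder states, split into key / value halves.
    dec_state: (d,) decoder query vector.
    """
    d = enc.shape[1] // 2
    keys, values = enc[:, :d], enc[:, d:]   # the two separate memories
    alpha = softmax(keys @ dec_state)       # key half: attention distribution
    return alpha @ values                   # value half: context vector

# toy usage: 7 source positions, 4-dim keys + 4-dim values
enc = np.random.randn(7, 8)
context = key_value_attention(enc, np.random.randn(4))
print(context.shape)                        # (4,)
```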
d1797891
This paper addresses the issue of text normalization, an important yet often overlooked problem in natural language processing. By text normalization, we mean converting 'informally inputted' text into a canonical form by eliminating 'noises' in the text and detecting paragraph and sentence boundaries. Previously, text normalization issues were often addressed in an ad hoc fashion or studied separately. This paper first gives a formalization of the entire problem. It then proposes a unified tagging approach to perform the task using Conditional Random Fields (CRF). The paper shows that, with the introduction of a small set of tags, most text normalization tasks can be performed within the approach. The accuracy of the proposed method is high, because the subtasks of normalization are interdependent and should be performed together. Experimental results on email data cleaning show that the proposed method significantly outperforms the approach of using cascaded models and that of employing independent models.

Detection Task                | Method      | Prec. | Rec.  | F1    | Acc.
Extra Line Break              | Independent | 95.16 | 91.52 | 93.30 | 93.81
Extra Line Break              | Cascaded    | 95.16 | 91.52 | 93.30 | 93.81
Extra Line Break              | Unified     | 93.87 | 93.63 | 93.75 | 94.53
Extra Space                   | Independent | 91.85 | 94.64 | 93.22 | 99.87
Extra Space                   | Cascaded    | 94.54 | 94.56 | 94.55 | 99.89
Extra Space                   | Unified     | 95.17 | 93.98 | 94.57 | 99.90
Extra Punctuation Mark        | Independent | 88.63 | 82.69 | 85.56 | 99.66
Extra Punctuation Mark        | Cascaded    | 87.17 | 85.37 | 86.26 | 99.66
Extra Punctuation Mark        | Unified     | 90.94 | 84.84 | 87.78 | 99.71
Sentence Boundary             | Independent | 98.46 | 99.62 | 99.04 | 98.36
Sentence Boundary             | Cascaded    | 98.55 | 99.20 | 98.87 | 98.08
Sentence Boundary             | Unified     | 98.76 | 99.61 | 99.18 | 98.61
Unnecessary Token             | Independent | 72.51 | 100.0 | 84.06 | 84.27
Unnecessary Token             | Cascaded    | 72.51 | 100.0 | 84.06 | 84.27
Unnecessary Token             | Unified     | 98.06 | 95.47 | 96.75 | 96.18
Case Restoration (TrueCasing) | Independent | 27.32 | 87.44 | 41.63 | 96.22
Case Restoration (TrueCasing) | Cascaded    | 28.04 | 88.21 | 42.55 | 96.35
Case Restoration (CRF)        | Independent | 84.96 | 62.79 | 72.21 | 99.01
Case Restoration (CRF)        | Cascaded    | 85.85 | 63.99 | 73.33 | 99.07
Case Restoration (CRF)        | Unified     | 86.65 | 67.09 | 75.63 | 99.21
A Unified Tagging Approach to Text Normalization
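To make the unified-tagging formulation concrete, here is a small hand-rolled sketch. The tag inventory (PSV, DEL, SB, PB) is invented for illustration, not the paper's tag set; in the actual approach a CRF predicts the per-token tags, and a deterministic decoder like the one below applies them.

```python
# Hypothetical tag set: PSV (preserve), DEL (delete noise token),
# SB (token ends a sentence), PB (token ends a paragraph).
def apply_tags(tokens, tags):
    out = []
    for tok, tag in zip(tokens, tags):
        if tag == "DEL":
            continue                 # drop noise such as '>' quotation marks
        out.append(tok)
        if tag == "SB":
            out.append("<s/>")       # sentence boundary marker
        elif tag == "PB":
            out.append("<p/>")       # paragraph boundary marker
    return " ".join(out)

tokens = [">", "thanks", "for", "the", "patch", "!", ">", "see", "attached"]
tags   = ["DEL", "PSV", "PSV", "PSV", "PSV", "SB", "DEL", "PSV", "PSV"]
print(apply_tags(tokens, tags))
# thanks for the patch ! <s/> see attached
```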
d233365237
d502609
Automatic topic segmentation, the separation of a discourse stream into its constituent stories or topics, is a necessary preprocessing step for applications such as information retrieval, anaphora resolution, and summarization. While significant progress has been made in this area for text sources and for English audio sources, little work has been done in automatic segmentation of other languages using both text and acoustic information. In this paper, we focus on exploiting both textual and prosodic features for topic segmentation of Mandarin Chinese. As a tone language, Mandarin presents special challenges for the applicability of intonation-based techniques, since the pitch contour is also used to establish lexical identity. However, intonational cues such as reduction in pitch and intensity at topic boundaries and increases in duration and pause still provide significant contrasts in Mandarin Chinese. We first build a decision tree classifier that, based only on prosodic information, achieves boundary classification accuracy of 89-95.8% on a large standard test set. We then contrast these results with a simple text-similarity-based classification scheme. Finally, we build a merged classifier, finding the best effectiveness for systems integrating text and prosodic cues.
Assessing Prosodic and Text Features for Segmentation of Mandarin Broadcast News
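A minimal sketch of the prosody-only boundary classifier on synthetic data; the feature columns are assumptions loosely based on the cues the abstract names (pause, pitch and intensity reduction, duration), not the paper's exact feature set.

```python
# Decision tree over prosodic features, as in the abstract's first stage.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 200
# assumed columns: pause_sec, pitch_drop, intensity_drop, norm_word_duration
X = rng.normal(size=(n, 4))
y = (X[:, 0] + 0.5 * X[:, 1] > 1.0).astype(int)   # synthetic "boundary" labels

clf = DecisionTreeClassifier(max_depth=4).fit(X, y)
print(clf.predict(X[:5]))    # 1 = topic boundary, 0 = non-boundary
```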
d243865127
Aspect terms extraction (ATE) and aspect sentiment classification (ASC) are two fundamental, fine-grained sub-tasks in aspect-level sentiment analysis (ALSA). In textual analysis, jointly extracting both aspect terms and sentiment polarities has drawn much attention, since the joint task has better applications than either sub-task alone. In the multimodal scenario, however, existing studies handle each sub-task independently, which fails to model the innate connection between the two objectives and forgoes those better applications. In this paper, we are therefore the first to jointly perform multi-modal ATE (MATE) and multi-modal ASC (MASC), and we propose a multi-modal joint learning approach with auxiliary cross-modal relation detection for multi-modal aspect-level sentiment analysis (MALSA). Specifically, we first build an auxiliary text-image relation detection module to control the proper exploitation of visual information. Second, we adopt a hierarchical framework to bridge the multi-modal connection between MATE and MASC, with separate visual guidance for each sub-module. Finally, we obtain all aspect-level sentiment polarities for the jointly extracted aspects. Extensive experiments show the effectiveness of our approach against joint textual approaches, and against pipeline and collapsed multi-modal approaches.
Joint Multi-modal Aspect-Sentiment Analysis with Auxiliary Cross-modal Relation Detection
d6778398
Lexical Disambiguation Using Constraint Handling In Prolog (CHIP)
d52019251
Given a partial description like "she opened the hood of the car," humans can reason about the situation and anticipate what might come next ("then, she examined the engine"). In this paper, we introduce the task of grounded commonsense inference, unifying natural language inference and commonsense reasoning. We present Swag, a new dataset with 113k multiple choice questions about a rich spectrum of grounded situations. To address the recurring challenges of annotation artifacts and human biases found in many existing datasets, we propose Adversarial Filtering (AF), a novel procedure that constructs a de-biased dataset by iteratively training an ensemble of stylistic classifiers, and using them to filter the data. To account for the aggressive adversarial filtering, we use state-of-the-art language models to massively oversample a diverse set of potential counterfactuals. Empirical results demonstrate that while humans can solve the resulting inference problems with high accuracy (88%), various competitive models struggle on our task. We provide comprehensive analysis that indicates significant opportunities for future research.
Swag: A Large-Scale Adversarial Dataset for Grounded Commonsense Inference
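The adversarial-filtering loop can be sketched as follows; the stylistic classifier, feature vectors, and replacement rate here are stand-ins, not the paper's configuration.

```python
# Sketch of adversarial filtering: repeatedly train a simple stylistic
# classifier and replace the negatives it detects most easily with
# fresh oversampled candidates from a pool.
import numpy as np
from sklearn.linear_model import LogisticRegression

def adversarial_filter(pos, neg_pool, k, rounds=5, seed=0):
    rng = np.random.default_rng(seed)
    neg = neg_pool[rng.choice(len(neg_pool), k, replace=False)]
    for _ in range(rounds):
        X = np.vstack([pos, neg])
        y = np.r_[np.ones(len(pos)), np.zeros(len(neg))]
        clf = LogisticRegression(max_iter=1000).fit(X, y)
        p_neg = clf.predict_proba(neg)[:, 0]      # confidence it's a fake
        easy = np.argsort(p_neg)[-k // 4:]        # most easily detected fakes
        neg[easy] = neg_pool[rng.choice(len(neg_pool), len(easy))]
    return neg

pos = np.random.randn(100, 16)
filtered = adversarial_filter(pos, np.random.randn(1000, 16) + 0.5, k=100)
```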
d5304321
This work is in the context of TRANSTYPE, a system that observes its user as he or she types a translation and repeatedly suggests completions for the text already entered. The user may either accept, modify, or ignore these suggestions. We describe the design, implementation, and performance of a prototype which suggests completions of units of texts that are longer than one word.
Unit Completion for a Computer-aided Translation Typing System
d12226067
In this paper we describe our system submitted for evaluation in the CLTE-SemEval-2013 task, which achieved the best results in two of the four data sets and finished third on average. The system consists of an SVM classifier with features extracted from the texts (and their SMT translations) based on a cardinality function, namely soft cardinality. Furthermore, the system was simplified by providing a single model for the four language pairs, obtaining better (unofficial) results than separate models for each language pair. We also evaluated the use of additional circular-pivoting translations, achieving results 6.14% above the best official results.
SOFTCARDINALITY: Learning to Identify Directional Cross-Lingual Entailment from Cardinalities and SMT
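Soft cardinality, the core device in this record, counts similar elements as less than one full element each; with a crisp 0/1 similarity it reduces to ordinary set size. A minimal sketch, using character-bigram Dice similarity as an assumed word-level similarity function:

```python
# Soft cardinality: each word contributes 1 / (sum of its similarities
# to all words in the set), so near-duplicates share their "mass".
def bigrams(w):
    return {w[i:i+2] for i in range(len(w) - 1)}

def sim(a, b):
    A, B = bigrams(a), bigrams(b)
    return 2 * len(A & B) / (len(A) + len(B)) if A or B else 0.0

def soft_cardinality(words, p=2.0):
    return sum(1.0 / max(sum(sim(w, v) ** p for v in words), 1e-9)
               for w in words)

print(soft_cardinality(["card", "cards", "board"]))
# ~1.6: three overlapping words count as fewer than 3 distinct elements
```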
d3431022
Unsupervised learning of morphology is used for automatic affix identification, morphological segmentation of words, and generating paradigms that list all affixes that can combine with a list of stems. Various unsupervised approaches are used to segment words into stem and suffix. Most unsupervised methods for learning morphology assume that suffixes occur frequently in a corpus. We have observed that for morphologically rich Indian languages like Konkani, 31 percent of suffixes are not frequent. In this paper we report our framework for an Unsupervised Morphology Learner which works for less frequent suffixes. Less frequent suffixes can be identified using the p-similar technique, which has been used for suffix identification but cannot be used for segmentation of short-stem words. Using the proposed Suffix Association Matrix, our Unsupervised Morphology Learner can also segment short-stem words correctly. We tested our framework on learning derivational morphology for English and two Indian languages, namely Hindi and Konkani. Compared to other similar segmentation techniques, there was an improvement in precision and recall.
A Framework for Learning Morphology using Suffix Association Matrix
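One way to picture a suffix association matrix (a hedged sketch; the paper's exact construction is not spelled out in the abstract) is as co-occurrence counts of suffixes that attach to the same candidate stem, which can then vouch for the segmentation of a short stem whose suffix is rare.

```python
# Toy suffix association matrix from candidate stem/suffix splits.
from collections import defaultdict
from itertools import combinations

words = ["walks", "walked", "walking", "talks", "talked", "talking"]
stems = defaultdict(set)
for w in words:
    for i in range(1, len(w)):          # every split point is a candidate
        stems[w[:i]].add(w[i:])

assoc = defaultdict(int)                # the suffix association matrix
for stem, sufs in stems.items():
    for a, b in combinations(sorted(sufs), 2):
        assoc[(a, b)] += 1

print(assoc[("ed", "ing")])             # 2: co-occur under 'walk' and 'talk'
```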
d7626777
Animacy detection is a problem whose solution has been shown to be beneficial for a number of syntactic and semantic tasks. We present a state-of-the-art system for this task which uses a number of simple classifiers with heterogeneous data sources in a voting scheme. We show how this framework can give us direct insight into the behavior of the system, allowing us to more easily diagnose sources of error.
Animacy Detection with Voting Models
d28952320
The present article shows that the normalization of documents in limited domains, such as drug leaflets, can be improved over what can be done with current technologies, and without putting more constraints on the technical writer's work. We propose a novel approach to document normalization that uses well-formed content representations, expressed at the level of communicative goals, together with the associated normalized texts. This approach combines automatic analysis and interactive negotiation with a domain expert. Traditional document authoring techniques based on text input are reviewed and criticized, and the emerging paradigm of symbolic document authoring by content specification, to which our approach belongs, is introduced. KEYWORDS: technical documents; normalized documents; controlled document authoring; content analysis; document normalization.
Constraining content and form in a restricted domain: document normalization
d8326965
In this paper we propose an end-to-end neural CRF autoencoder (NCRF-AE) model for semi-supervised learning of sequential structured prediction problems. Our NCRF-AE consists of two parts: an encoder, which is a CRF model enhanced by deep neural networks, and a decoder, which is a generative model trying to reconstruct the input. Our model has a unified structure with different loss functions for labeled and unlabeled data and shared parameters. We developed a variation of the EM algorithm for optimizing both the encoder and the decoder simultaneously by decoupling their parameters. Our experimental results on the Part-of-Speech (POS) tagging task in eight different languages show that the NCRF-AE model can outperform competitive systems in both supervised and semi-supervised scenarios.
Semi-Supervised Structured Prediction with Neural CRF Autoencoder
d8294822
This paper discusses a text extraction approach to multi-document summarization that builds on single-document summarization methods by using additional available information about the document set as a whole and the relationships between the documents. Multi-document summarization differs from single-document summarization in that the issues of compression, speed, redundancy and passage selection are critical to the formation of useful summaries. Our approach addresses these issues by using domain-independent techniques based mainly on fast, statistical processing; a metric for reducing redundancy and maximizing diversity in the selected passages; and a modular framework that allows easy parameterization for different genres, corpus characteristics and user requirements.
Multi-Document Summarization By Sentence Extraction
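The "metric for reducing redundancy and maximizing diversity" in this line of work is commonly formulated as a greedy marginal-relevance selection; here is a minimal sketch with placeholder relevance and similarity scores (the weighting and scores are assumptions).

```python
# Greedy marginal-relevance selector: trade query relevance against
# similarity to passages already chosen.
def mmr_select(rel, sim, k, lam=0.7):
    """rel: {i: relevance}; sim: {(i, j): similarity}; pick k passages."""
    chosen, candidates = [], set(rel)
    while candidates and len(chosen) < k:
        def score(i):
            redundancy = max((sim[tuple(sorted((i, j)))] for j in chosen),
                             default=0.0)
            return lam * rel[i] - (1 - lam) * redundancy
        best = max(candidates, key=score)
        chosen.append(best)
        candidates.remove(best)
    return chosen

rel = {0: 0.9, 1: 0.85, 2: 0.4}
sim = {(0, 1): 0.95, (0, 2): 0.1, (1, 2): 0.2}
print(mmr_select(rel, sim, k=2, lam=0.5))   # [0, 2]: 1 is redundant with 0
```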
d6339908
SO uR "INFORMATIONAL" LANGUAGES AND MODELS - A SEMANTIC VIEW
d260063065
In this paper, we leverage the GPT-3.5 language model, both through the ChatGPT API and the GPT-3.5 API, to generate realistic examples of anti-vaccination tweets in Dutch, with the aim of augmenting an imbalanced multi-label vaccine-hesitancy argumentation classification dataset. In line with previous research, we devise a prompt that instructs the model, on the one hand, to generate realistic examples based on the human dataset (gold standard) and, on the other hand, to assign one or multiple labels to the generated instances. We then augment our gold-standard data with the generated examples and evaluate the impact thereof in a cross-validation setting with several state-of-the-art Dutch BERT models. This augmentation technique predominantly improves F1 for underrepresented classes and increases overall recall, paired with a slight decrease in precision for more common classes. Furthermore, we examine how well the synthetic data generalises to human data in the classification task. To our knowledge, we are the first to utilise ChatGPT and GPT-3.5 for augmenting a Dutch multi-label classification dataset.
Improving Dutch Vaccine Hesitancy Monitoring via Multi-Label Data Augmentation with GPT-3.5
d2043360
This paper presents an on-going effort to annotate the Wall Street Journal sections of the Penn Treebank with the help of a hand-written, large-scale and wide-coverage grammar of English. We not only describe the various stages of the semi-automated annotation process we have adopted, but also show that rich linguistic annotations, which apart from syntax can also incorporate semantics, ensure that the treebank is a truly sharable, re-usable and multi-functional linguistic resource.
Annotating Wall Street Journal Texts Using a Hand-Crafted Deep Linguistic Grammar
d6680780
Interactive Multimedia Explanation for Equipment Maintenance and Repair
d5550409
A proposal for recognizing coordinate structures using the 'reconnaissance-attack' model is presented. The approach concentrates on distinguishing predicate coordination from other types of coordination and suggests that low-level structural cues (such as the number of predicates, coordinators, and subordinators occurring in the input string) can be exploited at little cost during the early phase of the parse, with dramatic results. The method is tested on a text of 16,000 words. 0. Introduction. Coordinate structures are difficult to parse in part because of the problem of determining, in a given case, what kinds of constituents are being coordinated. The examples in (1) will illustrate: (1) a. John hits Fred and the other guys. b. John hits Fred and the other guys attack him. c. When John hits Fred and the other guys attack him. Many variations on this theme are possible, to the point where serious doubts are raised regarding the efficacy in this domain of conventional parsers of either the top-down or bottom-up variety. In such parsers, it is necessary either to invoke backtracking to undo the effects of incorrect hypotheses or to store large numbers of alternatives until local indeterminacies are resolved. In this paper, we will suggest an alternative approach based on the 'Reconnaissance-Attack' model described in Kac et al. 1986 (and more fully in Rindflesch forthcoming), designed to skirt many of the problems associated with more traditional designs. Our proposal is theoretical in two senses. On the one hand, it does not present a detailed picture of an actual parsing algorithm, being intended rather to show that a significant body of linguistic data supports the contention that rapid, early resolution of local structural indeterminacies of the kind exemplified in (1) is feasible in the vast majority of cases. On the other hand, it is also based on a significant idealization, namely that each word belongs to only one syntactic category. Our intent is, in part, to show the applicability to a difficult parsing problem of a technique which can be found in other AI domains (Kowalski 1979) but which seems to have been little exploited in work on natural language processing.[1] 1. Theoretical Background. In a Reconnaissance-Attack parser, no structure-building is attempted until after an initial 'overflight' of the entire sentence has been made, directed at obtaining information, provided by low-level structural cues, which can then be exploited in narrowing the range of available options at a later point. (We assume here that the cues used are present in a minimally analyzed string, by which we mean one about which the only structural information available concerns the relative order and category membership of the individual words.)
It is of the utmost importance to bear in mind that in this approach, if a given case cannot be resolved at a given point in the parse, there is no guessing as to which type of coordination might obtain, and hence no need to backtrack for the purpose of undoing the effects of erroneous hypotheses; rather, the parser simply defers the decision to a later phase at which more structural information is available. Note as well that this is not 'bottom-up' parsing in the usual sense either, since where more than one possibility is logically available, the parser makes no attempt to represent them all and cull out the false positives later on; there is a strict principle of 'altruism avoidance' (that is, never undertaking computational effort without a guaranteed payoff) which compels the parser to give no answer at all during reconnaissance. [1] The approach described in Sampson 1986, while quite different in its actual character, is nonetheless similar in spirit to what we are proposing.
COORDINATION IN RECONNAISSANCE-ATTACK PARSING
d14523174
Most existing knowledge base (KB) embedding methods solely learn from time-unknown fact triples but neglect the temporal information in the knowledge base. In this paper, we propose a novel time-aware KB embedding approach taking advantage of the happening time of facts. Specifically, we use temporal order constraints to model transformation between time-sensitive relations and enforce the embeddings to be temporally consistent and more accurate. We empirically evaluate our approach in two tasks of link prediction and triple classification. Experimental results show that our method outperforms other baselines on the two tasks consistently.
Encoding Temporal Information for Time-Aware Link Prediction
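The temporal order constraint in this record can be pictured as a matrix that transports an earlier relation's embedding toward a later one's, trained with a margin against the reversed order. A numpy sketch; the dimensions, margin, and scoring function are assumptions rather than the paper's exact formulation.

```python
# Temporal-order regularizer: M should map r_early close to r_late,
# and the reversed order serves as the negative example.
import numpy as np

def temporal_order_loss(r_early, r_late, M, margin=1.0):
    pos = np.linalg.norm(r_early @ M - r_late)   # correct temporal order
    neg = np.linalg.norm(r_late @ M - r_early)   # reversed order as negative
    return max(0.0, margin + pos - neg)

d = 8
M = np.eye(d) + 0.01 * np.random.randn(d, d)
r_born, r_died = np.random.randn(d), np.random.randn(d)
print(temporal_order_loss(r_born, r_died, M))    # e.g. wasBornIn precedes diedIn
```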
d5160989
When translating into a morphologically complex language, segmenting the target language can reduce data sparsity, while introducing the complication of desegmenting the system output. We present a method for decoder-integrated desegmentation, allowing features that consider the desegmented target, such as a word-level language model, to be considered throughout the entire search space. Our results on a large-scale, English to Arabic translation task show significant improvement over the 1-best desegmentation baseline.
Integrating Morphological Desegmentation into Phrase-based Decoding
d14846265
We present an evaluation of a spoken dialogue system that detects and adapts to user disengagement and uncertainty in real-time. We compare this version of our system to a version that adapts to only user disengagement, and to a version that ignores user disengagement and uncertainty entirely. We find a significant increase in task success when comparing both affect-adaptive versions of our system to our non-adaptive baseline, but only for male users.
Evaluating a Spoken Dialogue System that Detects and Adapts to User Affective States
d6169546
In this paper, we present a novel morphology preprocessing technique for Arabic-English translation. We exploit the alignment between Arabic morphemes and English words to learn a model for removing non-aligned Arabic morphemes. The model is an instance of the Conditional Random Field model (Lafferty et al., 2001); it deletes a morpheme based on the morpheme's context. We achieved around a two-point BLEU improvement over the original Arabic translation for both a travel-domain system trained on 20K sentence pairs and a news-domain system trained on 177K sentence pairs, and showed a potential improvement for a large-scale SMT system trained on 5 million sentence pairs.
Context-based Arabic Morphological Analysis for Machine Translation
d219302943
Treebanks are an essential resource for syntactic parsing. The available Paninian dependency treebank for Telugu is annotated only with inter-chunk dependency relations, so not all words of a sentence are part of the parse tree. In this paper, we automatically annotate the intra-chunk dependencies in the treebank using a Shift-Reduce parser based on Context Free Grammar rules for Telugu chunks. We also propose a few additional intra-chunk dependency relations for Telugu beyond the ones used in the Hindi treebank. Annotating intra-chunk dependencies provides a complete parse tree for every sentence in the treebank. Having a fully expanded treebank is crucial for developing end-to-end parsers which produce complete trees. We present a fully expanded dependency treebank for Telugu consisting of 3220 sentences. In this paper, we also convert the treebank from the Anncorra part-of-speech tagset to the latest BIS tagset, a hierarchical tagset adopted as a unified part-of-speech standard across all Indian languages. The final treebank is made publicly available.
A Fully Expanded Dependency Treebank for Telugu
d219309350
d220045828
This paper explores data augmentation methods for training Neural Machine Translation to make use of similar translations, in a way comparable to how a human translator employs fuzzy matches. In particular, we show how we can simply feed the neural model information on both the source and target sides of the fuzzy matches, and we extend the notion of similarity to include semantically related translations retrieved using distributed sentence representations. We show that translations based on fuzzy matching provide the model with "copy" information, while translations based on embedding similarities tend to extend the translation "context". Results indicate that the effects of both kinds of similar sentences add up to further boost accuracy, combine naturally with model fine-tuning, and provide dynamic adaptation for unseen translation pairs. Tests on multiple data sets and domains show consistent accuracy improvements. To foster research around these techniques, we also release an open-source toolkit with an efficient and flexible fuzzy-match implementation.
Boosting Neural Machine Translation with Similar Translations
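A minimal sketch of the fuzzy-match augmentation step this record describes: retrieve the most similar source sentence from a translation memory and append its target side to the input. The difflib ratio and the '|||' separator are stand-ins for the paper's fuzzy-match metric and input format.

```python
# Fuzzy-match augmentation: retrieve the closest TM entry above a
# threshold and concatenate its target to the source sentence.
import difflib

def augment_with_fuzzy_match(src, memory, threshold=0.6):
    """memory: list of (source, target) pairs."""
    best, best_score = None, threshold
    for mem_src, mem_tgt in memory:
        score = difflib.SequenceMatcher(None, src, mem_src).ratio()
        if score > best_score:
            best, best_score = mem_tgt, score
    return f"{src} ||| {best}" if best else src

memory = [("the cat sleeps", "le chat dort"), ("dogs bark", "les chiens aboient")]
print(augment_with_fuzzy_match("the cat sleeps here", memory))
# the cat sleeps here ||| le chat dort
```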
d17352106
Identification of prepositional phrases (PPs) is a long-standing issue in Natural Language Processing (NLP). In this paper, we present a rule-based method and a CRF-based method to identify PPs in Chinese patent texts. In the rule-based method, we manually write targeted formal identification rules according to the special features and expressions of PPs; in the CRF approach, after labelling the sentences with features, a standard CRF toolkit is used to train the model for identifying PPs. We then conduct experiments to test the performance of the two methods; final precision rates are over 90%, indicating that the proposed methods are effective and feasible.
Identifying Prepositional Phrases in Chinese Patent Texts with Rule-based and CRF Methods
d11036142
This paper proposes a dependency parsing method that uses bilingual constraints to improve the accuracy of parsing bilingual texts (bitexts). In our method, a target-side tree fragment that corresponds to a source-side tree fragment is identified via word alignment and automatically learned mapping rules. It is then verified by checking the subtree list collected from large-scale automatically parsed data on the target side. Our method thus requires gold-standard trees only on the source side of a bilingual corpus in the training phase, unlike the joint parsing model, which requires gold-standard trees on both sides. Compared to the reordering constraint model, which requires the same training data as ours, our method achieves higher accuracy because of richer bilingual constraints. Experiments on the translated portion of the Chinese Treebank show that our system outperforms monolingual parsers by 2.93 points for Chinese and 1.64 points for English.
Bitext Dependency Parsing with Bilingual Subtree Constraints
d1122650
A polyadic dynamic logic is introduced in which a model-theoretic version of non-local multicomponent tree-adjoining grammar (MCTAG) can be formulated. It is shown to have a low polynomial-time model checking procedure. This means that treebanks for non-local MCTAG, including all weaker extensions of TAG, can be efficiently corrected and queried. Our result is extended to HPSG treebanks (with some qualifications). The model checking procedures can also be used in heuristics-based parsing.
Verifying context-sensitive treebanks and heuristic parses in polynomial time
d252624574
We present the AGODA (Analyse sémantique et Graphes relationnels pour l'Ouverture des Débats à l'Assemblée nationale) project, which aims to create a platform for consulting and exploring digitised French parliamentary debates available in the digital library of the National Library of France. The project brings together historians and NLP specialists: parliamentary debates are an essential source for French history of the contemporary period, but also for linguistics. The project therefore aims to produce a corpus of texts that can be easily exploited with computational methods and that respects the TEI standard. Historical parliamentary debates are also an excellent case study for the development and application of tools for publishing and exploring large historical corpora. In this paper, we present the steps necessary to produce such a corpus. We detail the processing and publication chain for these documents, in particular the problems linked to extracting text from digitised images. We also introduce the first analyses we have carried out on this corpus with "bag-of-words" techniques that are not too sensitive to OCR quality (namely topic modelling and word embeddings).
d219299870
d15240818
This paper describes the structural annotation of a spoken dialogue corpus. By treating the corpus statistically, the automatic acquisition of dialogue-structural rules is achieved. The dialogue structure is expressed as a binary tree, and 789 dialogues consisting of 8150 utterances in the CIAIR speech corpus are annotated. To evaluate the scalability of the corpus for creating dialogue-structural rules, a dialogue parsing experiment was conducted.
Construction of Structurally Annotated Spoken Dialogue Corpus
d12409561
Tibetan word segmentation is essential for Tibetan information processing. At present, Tibetan words are mainly segmented with basic dictionary-based matching, because no segmented Tibetan corpus is available for training. Dictionary-based methods, however, are ill-suited to Tibetan number identification. This paper studies the characteristics of Tibetan numbers and then proposes a method to identify them based on the classification of number components. The method first tags every number component according to the class it belongs to while segmenting, and then updates the tag sequence according to predefined rules. Finally, adjacent number components are combined into a Tibetan number if they meet certain requirements. In tests on a 7938K Tibetan corpus, the identification accuracy is 99.21%.
Tibetan Number Identification Based on Classification of Number Components in Tibetan Word Segmentation
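A toy sketch of the classify-then-combine idea, using invented Latin-transliteration placeholders; the real component classes and combination rules are specific to Tibetan and are not reproduced here.

```python
# Tag each segmented token with a number-component class, then merge
# maximal runs of adjacent components into one number token.
DIGITS = {"gcig": 1, "gnyis": 2, "gsum": 3}   # placeholder digit words
CONNECTORS = {"dang"}                          # placeholder connector class

def merge_number_components(tokens):
    out, run = [], []
    for tok in tokens:
        if tok in DIGITS or (run and tok in CONNECTORS):
            run.append(tok)                    # extend the current number run
        else:
            if run:
                out.append(" ".join(run))      # close the merged number token
                run = []
            out.append(tok)
    if run:
        out.append(" ".join(run))
    return out

print(merge_number_components(["mi", "gsum", "dang", "gnyis", "yod"]))
# ['mi', 'gsum dang gnyis', 'yod']
```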
d221097209
d218977403
In recent years, there has been increasing interest in publishing lexicographic and terminological resources as linked data. The benefit of using linked data technologies to publish terminologies is that terminologies can be linked to each other, thus creating a cloud of linked terminologies that cross domains, languages and that support advanced applications that do not work with single terminologies but can exploit multiple terminologies seamlessly. We present Terme-à-LLOD (TAL), a new paradigm for transforming and publishing terminologies as linked data which relies on a virtualization approach. The approach rests on a preconfigured virtual image of a server that can be downloaded and installed. We describe our approach to simplifying the transformation and hosting of terminological resources in the remainder of this paper. We provide a proof-of-concept for this paradigm showing how to apply it to the conversion of the well-known IATE terminology as well as to various smaller terminologies. Further, we discuss how the implementation of our paradigm can be integrated into existing NLP service infrastructures that rely on virtualization technology. While we apply this paradigm to the transformation and hosting of terminologies as linked data, the paradigm can be applied to any other resource format as well.
Terme-à-LLOD: Simplifying the Conversion and Hosting of Terminological Resources as Linked Data
d27747034
Cross-lingual model transfer is a compelling and popular method for predicting annotations in a low-resource language, whereby parallel corpora provide a bridge to a high-resource language and its associated annotated corpora. However, parallel data is not readily available for many languages, limiting the applicability of these approaches. We address these drawbacks in a framework that takes advantage of cross-lingual word embeddings trained solely on a high-coverage bilingual dictionary. We propose a novel neural network model for joint training from both sources of data based on cross-lingual word embeddings, and show substantial empirical improvements over baseline techniques. We also propose several active learning heuristics, which result in improvements over competitive benchmark methods.
Model Transfer for Tagging Low-resource Languages using a Bilingual Dictionary
d218774130
d18616914
In this paper, the use of two modals (can and may) in four varieties of English (British, Indian, Philippine, and American) was compared, and the characteristics of each variety were statistically analyzed. After all the sample sentences were extracted from each component of the ICE corpus, a total of twenty linguistic factors were encoded. The collected data were then statistically analyzed with R. Through the analysis, the following facts were observed: (i) Indian and Philippine speakers used can more frequently than native speakers, (ii) three linguistic factors interacted with CORPUS, and (iii) the distinctions between American and British English were more influential than those between the Inner Circle and the Outer Circle.
The Inner Circle vs. the Outer Circle or