_id (string, 4–10 chars) | text (string, 0–18.4k chars) | title (string, 0–8.56k chars) |
|---|---|---|
d7760176 | This paper introduces a new approach for improving hypermedia design, by providing the author with a tool to visualise, examine, and analyse the structure of documents containing hypermedia links. Our proposal is a new representation method whose purpose is not to show the document structure in graphical form in order to enable users to know where they are, where they can go next, or to give an overview of their environment [12]. The purpose of our representation method is to depict what options the users are about to be offered, so that the author can examine the structure in order to better guide them around the information space. Our representation takes the form of a map consisting of a proposed classification of elements that define the composition of a hyperdocument. To extract and classify those attributes we have analysed in detail all three dimensions of the hypermedia cube: internal dynamics, external visual appearance and content synthesis. After introducing our diagrammatic notation we give a brief example of how it could be applied to an existing hypermedia application. Within the boundaries of this paper, we include the part of the method which represents the structure of just one hypermedia document and its linked branches. The paper concludes with a brief description of how the method is applied to represent groups of hypermedia documents, and a discussion of our plans for future work. | The Pausanian Notation: a method for representing the structure and the content of a hyperdocument
d219302547 | ||
d203610511 | ||
d225066960 | We consider a new perspective on dialog state tracking (DST), the task of estimating a user's goal through the course of a dialog. By formulating DST as a semantic parsing task over hierarchical representations, we can incorporate semantic compositionality, cross-domain knowledge sharing and co-reference. We present TreeDST, a dataset of 27k conversations annotated with tree-structured dialog states and system acts. We describe an encoder-decoder framework for DST with hierarchical representations, which leads to 20% improvement over state-of-the-art DST approaches that operate on a flat meaning space of slot-value pairs. | Conversational Semantic Parsing for Dialog State Tracking
d9816020 | In this paper we sketch an approach for Natural Language parsing. Our approach is an example-based approach, which relies mainly on examples that have already been parsed into their representation structures, and on the knowledge that these examples provide the information required to parse a new input sentence. In our approach, examples are annotated with the Structured String Tree Correspondence (SSTC) annotation schema, where each SSTC describes a sentence, a representation tree, as well as the correspondence between substrings in the sentence and subtrees in the representation tree. In the process of parsing, we first try to build subtrees for phrases in the input sentence which have been successfully found in the example base (a bottom-up approach). These subtrees are then combined to form a single rooted representation tree based on an example with a similar representation structure (a top-down approach). | A FLEXIBLE EXAMPLE-BASED PARSER BASED ON THE SSTC
d5480885 | This paper presents a morphological lexicon for English that handles more than 317,000 inflected forms derived from over 90,000 stems. The lexicon is available in two formats. The first can be used by an implementation of a two-level processor for morphological analysis (Karttunen and Wittenburg, 1983; Antworth, 1990). The second, derived from the first one for efficiency reasons, consists of a disk-based database using a UNIX hash table facility (Seltzer and Yigit, 1991). We also built an X Window tool to facilitate the maintenance and browsing of the lexicon. The package is ready to be integrated into a natural language application such as a parser through hooks written in Lisp and C. To our knowledge, this package is the only available free English morphological analyzer with very wide coverage. | A Freely Available Wide Coverage Morphological Analyzer for English
d7016289 | Modeling of foreign entity names is an important unsolved problem in morpheme-based modeling that is common in morphologically rich languages. In this paper we present an unsupervised vocabulary adaptation method for morph-based speech recognition. Foreign word candidates are detected automatically from in-domain text through the use of letter n-gram perplexity. Over-segmented foreign entity names are restored to their base forms in the morph-segmented in-domain text for easier and more reliable modeling and recognition. The adapted pronunciation rules are finally generated with a trainable grapheme-to-phoneme converter. In ASR performance the unsupervised method almost matches the ability of supervised adaptation in correctly recognizing foreign entity names. | Unsupervised Vocabulary Adaptation for Morph-based Language Models
d8249987 | In the paper we investigate the impact of data size on a Word Sense Disambiguation task (WSD). We question the assumption that the knowledge acquisition bottleneck, which is known as one of the major challenges for WSD, can be solved by simply obtaining more and more training data. Our case study on 1,000 manually annotated instances of the German verb drohen (threaten) shows that the best performance is not obtained when training on the full data set, but by carefully selecting new training instances with regard to their informativeness for the learning process (Active Learning). We present a thorough evaluation of the impact of different sampling methods on the data sets and propose an improved method for uncertainty sampling which dynamically adapts the selection of new instances to the learning progress of the classifier, resulting in more robust results during the initial stages of learning. A qualitative error analysis identifies problems for automatic WSD and discusses the reasons for the great gap in performance between human annotators and our automatic WSD system. | There's no Data like More Data? Revisiting the Impact of Data Size on a Classification Task |
d27598697 | We present a simple, broad coverage method for clarifying the meaning of sentences with coordination ambiguities, a frequent cause of parse errors. For each of the two most likely parses involving a coordination ambiguity, we produce a disambiguating paraphrase that splits the sentence in two, with one conjunct appearing in each half, so that the span of each conjunct becomes clearer. In a validation study, we show that the method enables meaning judgments to be crowd-sourced with good reliability, achieving 83% accuracy at 80% coverage. | A Simple Method for Clarifying Sentences with Coordination Ambiguities |
d9845387 | In this paper, we present an annotation tool developed specifically for manual sentiment analysis of social media posts. The tool provides facilities for general and target-based opinion marking on different types of posts (i.e. comparative, ironic, conditional) with a web-based UI which supports synchronous annotation. It is also designed as a SaaS (Software as a Service). The tool's outstanding features are an easy and fast annotation interface, detailed sentiment levels, multi-client support, easy-to-manage administrative modules and linguistic annotation capabilities. | TURKSENT: A Sentiment Annotation Tool for Social Media
d7157421 | Proc. of COLING-92, 1992 | 
d14101228 | This paper describes our submission for the CoNLL 2013 Shared Task, which aims to improve the detection and correction of the five most common grammatical error types in English text written by non-native speakers. Our system concentrates on only two of them; it employs machine learning classifiers for the ArtOrDet error type, and a fully deterministic rule-based workflow for the SVA error type. | UdS at the CoNLL 2013 Shared Task
d15587985 | Annotation of digitized pages from historical document collections is very important to research on automatic extraction of text blocks, lines, and handwriting recognition. We have recently introduced a new handwritten text database, GERMANA, which is based on a Spanish manuscript from 1891. To our knowledge, GERMANA is the first publicly available database mostly written in Spanish and comparable in size to standard databases. In this paper, we present another handwritten text database, RODRIGO, completely written in Spanish and comparable in size to GERMANA. However, RODRIGO comes from a much older manuscript, from 1545, where the typical difficult characteristics of historical documents are more evident. In particular, the writing style, which has clear Gothic influences, is significantly more complex than that of GERMANA. We also provide baseline results of handwriting recognition for reference in future studies, using standard techniques and tools for preprocessing, feature extraction, HMM-based image modelling, and language modelling. | The RODRIGO database |
d227905503 | ||
d171423914 | ||
d41796833 | In this paper we present SCALE, a new Python toolkit that contains two extensions to n-gram language models. The first extension is a novel technique to model compound words called Semantic Head Mapping (SHM). The second extension, Bag-of-Words Language Modeling (BagLM), bundles popular models such as Latent Semantic Analysis and Continuous Skip-grams. Both extensions scale to large data and allow the integration into first-pass ASR decoding. The toolkit is open source, includes working examples and can be found on http://github.com/jorispelemans/scale. | SCALE: A Scalable Language Engineering Toolkit |
d243865319 | In previous similarity-based WSD systems, much effort has been devoted to learning comprehensive sense embeddings using contextual representations and knowledge sources. However, the context embedding of an ambiguous word is learned using only the sentence where the word appears, neglecting its global context. In this paper, we investigate the contribution of both word-level and sense-level global context of an ambiguous word for disambiguation. Experiments have shown that the Context-Oriented Embedding (COE) can enhance a similarity-based system's performance on WSD by relatively large margins, achieving state-of-the-art on all-words WSD benchmarks in the knowledge-based category. | Enhancing the Context Representation in Similarity-based Word Sense Disambiguation
d3840201 | Analyses of filler-gap dependencies usually involve complex syntactic rules or heuristics; however, recent results suggest that filler-gap comprehension begins earlier than seemingly simpler constructions such as ditransitives or passives. Therefore, this work models filler-gap acquisition as a byproduct of learning word orderings (e.g. SVO vs OSV), which must be done at a very young age anyway in order to extract meaning from language. Specifically, this model, trained on part-of-speech tags, represents the preferred locations of semantic roles relative to a verb as Gaussian mixtures over real numbers. This approach learns role assignment in filler-gap constructions in a manner consistent with current developmental findings and is extremely robust to initialization variance. Additionally, this model is shown to be able to account for a characteristic error made by learners during this period ("A and B gorped" interpreted as "A gorped B"). | Bootstrapping into Filler-Gap: An Acquisition Story
d226283663 | ||
d5504657 | The translation of compound nouns is a major issue in machine translation due to their frequency of occurrence and high productivity. Various shallow methods have been proposed to translate compound nouns, notable amongst which are memory-based machine translation and word-to-word compositional machine translation. This paper describes the results of a feasibility study on the ability of these methods to translate Japanese and English noun-noun compounds. | Noun-Noun Compound Machine Translation: A Feasibility Study on Shallow Processing
d27929345 | Social localisation is a kind of community action, which matches communities and the content they need, and supports their localisation efforts. The goal of social localisation-based statistical machine translation (SL-SMT) is to support and bridge global communities exchanging any type of digital content across different languages and cultures. Trommons is an open platform maintained by The Rosetta Foundation to connect non-profit translation projects and organisations with the skills and interests of volunteer translators, where they can translate, post-edit or proofread different types of documents. Using Trommons as the experimental platform, this paper focuses on domain adaptation techniques to augment SL-SMT to facilitate translators/post-editors. Specifically, the Cross Entropy Difference algorithm is used to adapt Europarl data to the social localisation data. Experimental results on English-Spanish show that the domain adaptation techniques can significantly improve translation performance by 6.82 absolute BLEU points and 5.99 absolute TER points compared to the baseline. | Domain Adaptation for Social Localisation-based SMT: A Case Study Using the Trommons Platform |
d10912608 | Packing of Feature Structures for Optimizing the HPSG-style Grammar Translated from TAG | 
d7878494 | Statistical machine translation systems are normally optimised for a chosen gain function (metric) by using MERT to find the best model weights. This algorithm suffers from stability problems and cannot scale beyond 20-30 features. We present an alternative algorithm for discriminative training of phrase-based MT systems, SampleRank, which scales to hundreds of features, equals or beats MERT on both small and medium sized systems, and permits the use of sentence or document level features. SampleRank proceeds by repeatedly updating the model weights to ensure that the ranking of output sentences induced by the model is the same as that induced by the gain function. | SampleRank Training for Phrase-Based Machine Translation
d236477788 | ||
d221373768 | ||
d227217204 | ||
d15208133 | This paper proposes a simple and fast person-name filter, which plays an important role in automatic compilation of a large bilingual person-name lexicon. This filter is based on the pn score, which is the sum of two component scores, the score of the first name and that of the last name. Each score is calculated from two term sets: one is a dense set in which most of the members are person names; the other is a baseline set that contains fewer person names. The pn score takes one of five values, {+2, +1, 0, −1, −2}, which correspond to strong positive, positive, undecidable, negative, and strong negative, respectively. This pn score can be easily extended to a bilingual pn score that takes one of nine values, by summing the scores of the two languages. Experimental results show that our method works well for monolingual person names in English and Japanese; the F-score for each language is 0.929 and 0.939, respectively. The performance of the bilingual person-name filter is better; the F-score is 0.955. | A Person-Name Filter for Automatic Compilation of Bilingual Person-Name Lexicons
d2503061 | We propose Hidden Markov models with unsupervised training for extractive summarization. Extractive summarization selects salient sentences from documents to be included in a summary. Unsupervised clustering combined with heuristics is a popular approach because no annotated data is required. However, conventional clustering methods such as K-means do not take text cohesion into consideration. Probabilistic methods are more rigorous and robust, but they usually require supervised training with annotated data. Our method incorporates unsupervised training with clustering, into a probabilistic framework. Clustering is done by modified K-means (MKM)--a method that yields more optimal clusters than the conventional K-means method. Text cohesion is modeled by the transition probabilities of an HMM, and term distribution is modeled by the emission probabilities. The final decoding process tags sentences in a text with theme class labels. Parameter training is carried out by the segmental K-means (SKM) algorithm. The output of our system can be used to extract salient sentences for summaries, or used for topic detection.Content-based evaluation shows that our method outperforms an existing extractive summarizer by 22.8% in terms of relative similarity, and outperforms a baseline summarizer that selects the top N sentences as salient sentences by 46.3%. | Combining Optimal Clustering and Hidden Markov Models for Extractive Summarization |
d13515673 | We present a practical use case of knowledge base (KB) population at the French news agency AFP. The target KB instances are entities relevant for news production and content enrichment. In order to acquire uniquely identified entities over news wires, i.e. textual data, and integrate the resulting KB in the Linked Data framework, a series of data models need to be aligned: Web data resources are harvested for creating a wide coverage entity database, which is in turn used to link entities to their mentions in French news wires. Finally, the extracted entities are selected for instantiation in the target KB. We describe our methodology along with the resources created and used for the target KB population. | Population of a Knowledge Base for News Metadata from Unstructured Text and Web Data |
d195349840 | ||
d6439737 | The applicability of ontologies for natural language processing depends on the ability to link ontological concepts and relations to their realisations in texts. We present a general, resource-poor account to create such a linking automatically by extracting Wikipedia articles corresponding to ontology classes. We evaluate our approach in an experiment with the Music Ontology. We consider linking as a promising starting point for subsequent steps of information extraction. | A Resource-Poor Approach for Linking Ontology Classes to Wikipedia Articles |
d236898623 | ||
d33485732 | In this paper we describe the creation of large scale linguistic resources for Russian language. Internet/intranet system architecture was developed to make a large volume of Russian language lexical information, corpora (texts) and knowledge base (Russian WordNet) available to the system at development and/or run time. There are four linguistic counterparts, corresponding to the major categories of lexical information developed in our system: lexicon, knowledge base, corpora and Russian language processing software. | Integration of Russian Language Resources |
d219301263 | ||
d30556185 | This article presents the mechanisms for creating a hybrid treebank that enriches the FTB with annotations in the Property Grammars formalism. The process consists in acquiring a PG grammar from the source treebank and automatically generating the syntactic structures in the target formalism, relying on the specification of an adapted encoding scheme. The resulting resource, built from a version of the FTB corrected and modified for our needs, opens new perspectives for the processing and description of French. ABSTRACT: Enriching the French Treebank with Properties. We present in this paper the hybridization of the French Treebank with Property Grammars annotations. This process consists in acquiring a PG grammar from the source treebank and generating the new syntactic encoding on top of the original one. The result is a new resource for French, opening the way to new tools and descriptions. KEYWORDS: hybrid treebank, French Treebank, Property Grammars. | Enrichissement du FTB : un treebank hybride constituants/propriétés
d36771997 | This article proposes an approach to formalizing grammars for sign languages that accounts for their linguistic particularities. Comparable to generative grammars in terms of productive recursion, the system exhibits new properties such as multi-linearity, which allows the simultaneous specification of articulators. Because the approach is based on analysing the links between produced/observed forms and linguistic functions in the broad sense, the traditional levels of language construction become decompartmentalized, a consequence inherent to the methodology employed. We present a set of rules found by following this approach and conclude with an interesting prospect for machine translation into sign language. Abstract: This article presents a formal approach to Sign Language grammars, with the aim of capturing their specificities. The system is comparable to generative grammar in the sense that it is recursively productive, but it has quite different properties such as multi-linearity, enabling simultaneous articulator specification. As it is based on the analysis of systematic links between observable form features and interpreted linguistic functions in the general sense, the traditionally separate linguistic levels all end up covered by the same model. We present the results found for a set of linguistic structures, following the presented methodology, and conclude with an interesting prospect in the field of text-to-Sign machine translation. Keywords: formal grammar, multi-linearity, sign language. | 21ème Traitement Automatique des Langues Naturelles
d202541617 | ||
d5199024 | We address the problem of constructing hybrid translation systems by intersecting a Hiero-style hierarchical system with a phrase-based system and present formal techniques for doing so. We model the phrase-based component by introducing a variant of weighted finite-state automata, called σ-automata, provide a self-contained description of a general algorithm for intersecting weighted synchronous context-free grammars with finite-state automata, and extend these constructs to σ-automata. We end by briefly discussing complexity properties of the presented algorithms. | Intersecting Hierarchical and Phrase-Based Models of Translation: Formal Aspects and Algorithms |
d15481453 | In this paper, a new robust wavelet-based voice activity detection (VAD) algorithm derived from the discrete wavelet transform (DWT) and Teager energy operator (TEO) processing is presented. We decompose the speech signal into four subbands by using the DWT. By means of the multi-resolution analysis property of the DWT, the voiced, unvoiced, and transient components of speech can be distinctly discriminated. In order to develop a robust feature parameter called the speech activity envelope (SAE), the TEO is then applied to the DWT coefficients of each subband. The periodicity of the speech signal is further exploited by using the subband signal auto-correlation function (SSACF). Experimental results show that the proposed SAE feature parameter can extract the speech activity under poor SNR conditions and that it is also insensitive to variable levels of noise. | Voice Activity Detection Based on Auto-Correlation Function Using Wavelet Transform and Teager Energy Operator
d14679201 | This paper describes an unsupervised knowledge-lean methodology for automatically determining the number of senses in which an ambiguous word is used in a large corpus. It is based on the use of global criterion functions that assess the quality of a clustering solution. | Selecting the "Right" Number of Senses Based on Clustering Criterion Functions |
d16031673 | The goal of this paper is to compare a set of distance/similarity measures, some motivated statistically, others motivated stylistically, regarding their ability to reflect stylistic similarity between texts. To assess the ability of these distance/similarity functions to capture stylistic similarity between texts, we have tested them in the two most frequently employed multivariate statistical analysis settings: cluster analysis and (kernel) principal components analysis. | Comparing Statistical Similarity Measures for Stylistic Multivariate Analysis |
d14228845 | Freer-word-order languages such as German exhibit linguistic phenomena that present unique challenges to traditional CFG parsing. Such phenomena produce discontinuous constituents, which are not naturally modelled by projective phrase structure trees. In this paper, we examine topological field parsing, a shallow form of parsing which identifies the major sections of a sentence in relation to the clausal main verb and the subordinating heads. We report the results of topological field parsing of German using the unlexicalized, latent variable-based Berkeley parser (Petrov et al., 2006). Without any language- or model-dependent adaptation, we achieve state-of-the-art results on the TüBa-D/Z corpus, and a modified NEGRA corpus that has been automatically annotated with topological fields (Becker and Frank, 2002). We also perform a qualitative error analysis of the parser output, and discuss strategies to further improve the parsing results. | Topological Field Parsing of German
d53096578 | We describe the graph-based dependency parser in our system (AntNLP) submitted to the CoNLL 2018 UD Shared Task. We use a bidirectional LSTM to get the word representation, then a bi-affine pointer network to compute scores of candidate dependency edges and the MST algorithm to get the final dependency tree. From the official testing results, our system gets a 70.90 LAS F1 score (rank 9/26), 55.92 MLAS (10/26) and 60.91 BLEX (8/26). | Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies
d1454594 | We describe an automatic Word Sense Disambiguation (WSD) system that disambiguates verb senses using syntactic and semantic features that encode information about predicate arguments and semantic classes. Our system performs at the best published accuracy on the English verbs of Senseval-2. We also experiment with using the gold-standard predicate-argument labels from PropBank for disambiguating fine-grained WordNet senses and coarse-grained PropBank framesets, and show that disambiguation of verb senses can be further improved with better extraction of semantic roles. | The Role of Semantic Roles in Disambiguating Verb Senses
d53222541 | MorAz is an open-source morphological analyzer for Azerbaijani Turkish. The analyzer is available both as a website for interactive exploration and as a RESTful web service for integration into a natural language processing pipeline. MorAz implements the morphology of Azerbaijani Turkish following a two-level approach using the Helsinki finite-state transducer and wraps the analyzer with Python scripts in a Django instance. | MorAz: an Open-source Morphological Analyzer for Azerbaijani Turkish
d6039355 | The paper deals with the issue of how and to what extent WordNet-like resources provide the necessary information for an assessment of semantic similarity which is useful for practical applications. The general point is made that taxonomical information should be complemented with distributional evidence. The claim is substantiated through experimental data and an illustration of a word sense disambiguation system (SENSE) capable of using contextually-relevant semantic similarity. | Augmenting WordNet-like lexical resources with distributional evidence. An application-oriented perspective
d234344802 | ||
d52871779 | PROJECT GOALS: The goal of speech research at Carnegie Mellon continues to be the development of spoken language systems that effectively integrate speech processing into the human-computer interface in a way that facilitates the use of computers in the performance of practical tasks. Component technologies are being developed in the context of spoken language systems in two domains: the DARPA-standard ATIS travel planning task, and CMU's office management task. Research in spoken language is currently focused on the following areas: • Improved speech recognition technologies. Research is directed toward increasing the useful vocabulary of the speech recognizer, using better subword models and vocabulary-independent recognition techniques, providing for rapid configuration for new tasks. • Fluent human/machine interfaces. The goal of research in the spoken language interface is the development of an understanding of how people interact by voice with computer systems. Specific development systems such as the Office Manager are used to study this interaction. • Understanding spontaneous spoken language. Actual spoken language is ill-formed with respect to grammar, syntax, and semantics. We are analyzing many types of spontaneous speech phenomena and developing appropriate syntactic and semantic representations of language that enable spontaneous speech to be understood in a robust fashion. • Dialog modeling. The goal of this research is to identify invariant properties of spontaneous spoken dialog at both the utterance and dialog level, and to apply constraints based on dialog, semantic, and pragmatic knowledge to enhance speech recognition. These knowledge sources can also be used to learn new vocabulary items incrementally. • Acoustical and environmental robustness. The goal of this work is to make speech recognition robust with respect to variability in acoustical ambience and choice of microphone, so that recognition accuracy using desk-top or bezel-mounted microphones in office environments will become comparable to performance using close-talking microphones. RECENT RESULTS: • Incorporation of semi-continuous HMMs and speaker adaptation has produced speaker-adaptive recognition performance that is comparable to speaker-dependent performance reported previously by other sites. Speaker-adaptation algorithms using neural networks have also been developed, with encouraging preliminary results. • A vocabulary-independent speech recognition system has been developed. Improvements including the use of second order cepstra, between-word triphones and decision-tree clustering have produced a level of vocabulary-independent performance that is better than the corresponding vocabulary-dependent performance. • A dynamic recognition-knowledge base has been incorporated into the Office Manager system, as well as models of noise phenomena. The natural language and situational knowledge capabilities of the system have also been extended. • The ATIS system has been augmented by incorporating the use of padded bigrams and models for non-lexical events, providing for increased coverage at reduced perplexity. These changes have produced major improvements in accuracy using both speech and transcripts of ATIS dialogs as input. • Six principles of dialog that characterize spontaneous speech at the pragmatic and semantic levels were identified. Algorithms were developed to invoke these principles at the utterance level to constrain the search space for speech input and transcripts of ATIS dialogs. • Pre-processing algorithms that normalize cepstral coefficients to compensate for additive noise and spectral tilt have been made more efficient. PLANS FOR THE COMING YEAR: • We will continue to investigate neural-network-based speaker normalization and its application to speaker-independent speech recognition. • The vocabulary-independent system will be improved by refinements in decision-tree clustering, pruning strategies, and selection of contextual questions. Non-intrusive task and environmental normalization will be introduced. • We will continue refining the Office Manager system and begin using it as a testbed for the development of error repair strategies and intelligent interaction management. • The constraints imposed by dialog models will be extended to allow more dialog and pragmatic knowledge to be used by the ATIS system in the understanding process. The ATIS system will be improved by the addition of out-of-vocabulary models and an improved rejection capability and user interface. | SPOKEN-LANGUAGE RESEARCH AT CARNEGIE MELLON
d36857618 | The performance of Phrase-Based Statistical Machine Translation (PBSMT) systems mostly depends on training data. Many papers have investigated how to create new resources in order to increase the size of the training corpus in an attempt to improve PBSMT performance. In this work, we analyse and characterize the way in which the in-domain and out-of-domain performance of PBSMT is impacted when the amount of training data increases. Two different PBSMT systems, Moses and Portage, two of the largest parallel corpora, the Giga (French-English) and UN (Chinese-English) datasets, and several in- and out-of-domain test sets were used to build high quality learning curves showing consistent logarithmic growth in performance. These results are stable across language pairs, PBSMT systems and domains. We also analyse the respective impact of additional training data for estimating the language and translation models. Our proposed model approximates learning curves very well and indicates the translation model contributes about 30% more to the performance gain than the language model. | Learning Machine Translation from In-domain and Out-of-domain Data
d18551017 | The interlingua (IL) in machine translation (MT) systems can be defined in terms of two components: (i) "lexical IL forms" within language-specific lexicons where each lexical entry has associated with it one or more lexical representations, and (ii) algorithms for creating and decomposing the instantiated "pivot" representation. Within this framework, we examine five different approaches to the level of representation for the lexical IL forms and then discuss a tool, ILustrate, that we are building to develop and evaluate different IL representations coupled with their corresponding translation algorithms. | The Case for a MT Developers' Tool with a Two-Component View of the Interlingua
d5178123 | The purpose of this work is to investigate the use of machine learning approaches for confidence estimation within a statistical machine translation application. Specifically, we attempt to learn probabilities of correctness for various model predictions, based on the native probabilities (i.e. the probabilities given by the original model) and on features of the current context. Our experiments were conducted using three original translation models and two types of neural nets (single-layer and multilayer perceptrons) for the confidence estimation task. | Confidence Estimation for Translation Prediction
d15574471 | In the field of Natural Language Processing, in order to work out a thematic representation system of general knowledge, methods relying on thesauri have been used for about twenty years. A thesaurus consists of a set of concepts which define a generating system of a vector space modelling general knowledge. These concepts, often organized in a treelike structure, constitute a fundamental, but completely fixed tool. Even if the concepts themselves evolve (consider, for example, technical fields), a thesaurus can evolve only through a particularly heavy process, because it requires the collaboration of human experts. After detailing the characteristics which a generating system of the vector space of knowledge modelling must have, we define the "basic notions". Basic notions, whose construction is initially based on the concepts of a thesaurus, constitute another generating system of this vector space. We then approach the determination of the acceptions expressing the basic notions. Lastly, we clarify how, being freed from the concepts of the thesaurus, the basic notions evolve progressively with the analysis of new texts by an iterative process. | Evolutionary basic notions for a thematic representation of general knowledge
d30194818 | In this paper we present a semantic enrichment approach for linking two distinct data sets: the ÖBL (Austrian Biographical Dictionary) and the dbo@ema (Database of Bavarian Dialects in Austria electronically mapped). Although the data sets are different in their content and in the structuring of data, they contain similar common "entities" such as names of persons. Here we describe the semantic enrichment process of how these data sets can be inter-linked through URIs (Uniform Resource Identifiers) taking person names as a concrete example. Moreover, we also point to societal benefits of applying such semantic enrichment methods in order to open and connect our resources to various services. | Connecting people digitally -a semantic web based approach to linking heterogeneous data sets |
d235599182 | ||
d42356407 | We explore an extreme case of text classification. Short statements in micro-blogs were collected and associated with a category based on the sentiment indicated by the associated icons. We evaluated different methods that assigned the categories using just the wordings in the short statements. Short statements in micro-blogs are harder to classify because of the shortage of context, yet it is not rare for the statements to include words that may be linked to sentiments directly. In this work, we considered two polarities of sentiments: negative and positive. We employed statistical information about word usage, a dictionary of Chinese synonyms, and an emotional phrases dictionary to convert short statements into vectors, and applied techniques of support vector machines and probabilistic modeling for the classification task. The results of classification varied with the classification methods and experimental setups. The best one exceeded 80%, but the lowest just made 55%. Keywords: sentiment classification, text classification, support vector machines, naïve Bayes, feature selection. | 中文短句之情緒分類 (Sentiment Classification of Short Chinese Sentences)
d10271022 | I am concerned by a comment made in Julia Johnson's review of my book Adaptive Parsing (Kluwer Academic Publishers, 1992) in Computational Linguistics 18(3) (September 1992). In the review, Dr. Johnson poses a number of thought-provoking questions that underscore open issues in this research, and seems to have been, overall, a thoughtful and attentive reader. Toward the end of the review, however, she states, "The performance improvements realized with adaptive parsing over a particular kernel grammar without adaptation were not strong." This statement does not agree with the results in the book. As shown in the utility analysis on pages 194-200 and 207-210, performance of the system using the kernel grammar without adaptation gave an acceptance range from 7% to 24% of utterances; with adaptation, acceptance increased to 81% to 91%. I find it difficult to interpret this data as anything but a very strong performance improvement.Since the perceived usefulness of adaptation rests in great part on the performance improvements it affords over a static sublanguage, I am grateful for the opportunity to point out and correct this misperception. | Letters to the Editor Adaptive Parsing |
d18723951 | In this paper, we present a comparison between two corpora acquired by means of two different techniques. The first corpus was acquired by means of the Wizard of Oz technique. A dialog simulation technique has been developed for the acquisition of the second corpus. A random selection of the user and system turns has been used, defining stop conditions for automatically deciding if the simulated dialog is successful or not. We use several evaluation measures proposed in previous research to compare between our two acquired corpora, and then discuss the similarities and differences between the two corpora with regard to these measures. | Acquisition and Evaluation of a Dialog Corpus through WOz and Dialog Simulation Techniques |
d7133783 | SYSTEM SUPPORT IN CHINESE DATA ENTRY | |
d12446797 | Hantology, a character-based Chinese language resource is created to provide an infrastructure for language processing and research on the writing system. Unlike alphabetic or syllabic writing systems, the ideographic writing system of Chinese poses both a challenge and an opportunity. The challenge is that a totally different resources structure must be created to represent and process speaker's conventionalization of the language. The rare opportunity is that the structure itself is enriched with conceptual classification and can be utilized for ontology building. We describe the contents and possible applications of Hantology in this paper. The applications of Hantology include: (1) an account for the diachronic development of Chinese lexica (2) character-based language processing, (3) a study of conceptual structure differences in Chinese and English, and (4) comparisons of different ideographic writing systems. | Hantology-A Linguistic Resource for Chinese Language Processing and Studying |
d472215 | Most tools and resources developed for natural language processing of Arabic are designed for Modern Standard Arabic (MSA) and perform terribly on Arabic dialects, such as Egyptian Arabic. Egyptian Arabic differs from MSA phonologically, morphologically and lexically and has no standardized orthography. We present a linguistically accurate, large-scale morphological analyzer for Egyptian Arabic. The analyzer extends an existing resource, the Egyptian Colloquial Arabic Lexicon, and follows the part-of-speech guidelines used by the Linguistic Data Consortium for Egyptian Arabic. It accepts multiple orthographic variants and normalizes them to a conventional orthography. | A Morphological Analyzer for Egyptian Arabic |
d14597491 | In this paper we propose a rule-based approach to extract dependency and grammatical relations from the Venice Italian Treebank (VIT) (Delmonte et al., 2007), which has a bracketed tree structure. To our knowledge, the only dependency annotated corpus for Italian available is the Turin University Treebank (Lesmo et al., 2002), which has 25,000 tokens and is about 1/10 the size of VIT. As manual corpus annotation is expensive and time-consuming, we decided to exploit an existing constituency-based treebank, the VIT, to derive dependency structures with lower effort. After describing the procedure to extract heads and dependents, based on a head percolation table for Italian, we introduce the rules adopted to add grammatical relation labels. To this purpose, we manually relabeled all non-canonical arguments, which are very frequent in Italian, then we automatically labeled the remaining complements or arguments following some syntactic restrictions based on the position of the constituents w.r.t. parent and sibling nodes. The final section of the paper describes evaluation results, carried out in two steps, one for dependency relations and one for grammatical roles. Since results are promising, we plan to use the dependency treebank to train a dependency-based parser and eventually a semantic role labelling system. | Enriching the Venice Italian Treebank with dependency and grammatical relations
d219303097 | ||
d252762533 | Deep Learning (DL) is dominating the fields of Natural Language Processing (NLP) and Computer Vision (CV) in recent times. However, DL commonly relies on the availability of large data annotations, so other alternative or complementary pattern-based techniques can help to improve results. In this paper, we build upon Key Information Extraction (KIE) in purchase documents using both DL and rule-based corrections. Our system initially relies on Optical Character Recognition (OCR) and text understanding based on entity tagging to identify purchase facts of interest (e.g., product codes, descriptions, quantities, or prices). These facts are then linked to a same product group, which is recognized by means of line detection and some grouping heuristics. Once these DL approaches are processed, we contribute several mechanisms consisting of rule-based corrections for improving the baseline DL predictions. We prove the enhancements provided by these rule-based corrections over the baseline DL results in the presented experiments for purchase documents from public and NielsenIQ datasets. | Key Information Extraction in Purchase Documents using Deep Learning and Rule-based Corrections
d11226551 | We apply pattern-based methods for collecting hypernym relations from the web. We compare our approach with hypernym extraction from morphological clues and from large text corpora. We show that the abundance of available data on the web enables obtaining good results with relatively unsophisticated techniques. | Extracting Hypernym Pairs from the Web |
d234777930 | We curated WikiPII, an automatically labeled dataset composed of Wikipedia biography pages, annotated for personal information extraction. Although automatic annotation can lead to a high degree of label noise, it is an inexpensive process and can generate large volumes of annotated documents. We trained a BERT-based NER model with WikiPII and showed that with an adequately large training dataset, the model can significantly decrease the cost of manual information extraction, despite the high level of label noise. In a similar approach, organizations can leverage text mining techniques to create customized annotated datasets from their historical data without sharing the raw data for human annotation. Also, we explore collaborative training of NER models through federated learning when the annotation is noisy. Our results suggest that depending on the level of trust in the ML operator and the volume of the available data, distributed training can be an effective way of training a personal information identifier in a privacy-preserved manner. Research material is available at https://github.com/ratmcu/wikipiifed. | A Privacy-Preserving Approach to Extraction of Personal Information through Automatic Annotation and Federated Learning
d219299717 | ||
d1282002 | Question answering (QA) systems are sensitive to the many different ways natural language expresses the same information need. In this paper we turn to paraphrases as a means of capturing this knowledge and present a general framework which learns felicitous paraphrases for various QA tasks. Our method is trained end-to-end using question-answer pairs as a supervision signal. A question and its paraphrases serve as input to a neural scoring model which assigns higher weights to linguistic expressions most likely to yield correct answers. We evaluate our approach on QA over Freebase and answer sentence selection. Experimental results on three datasets show that our framework consistently improves performance, achieving competitive results despite the use of simple QA models. | Learning to Paraphrase for Question Answering
d260063138 | The automatic extraction of hypernym knowledge from large language models like BERT is an open problem, and it is unclear whether methods fail due to a lack of knowledge in the model or shortcomings of the extraction methods. In particular, methods fail on challenging cases which include rare or abstract concepts, and perform inconsistently under paraphrased prompts. In this study, we revisit the long line of work on pattern-based hypernym extraction, and use it as a diagnostic tool to thoroughly examine the hypernymy knowledge encoded in BERT and the limitations of hypernym extraction methods. We propose to construct prompts from established pattern structures: definitional (X is a Y); lexico-syntactic (Y such as X); and their anchored versions (Y such as X or Z). We devise an automatic method for anchor prediction, and compare different patterns in: (i) their effectiveness for hypernym retrieval from BERT across six English data sets; (ii) on challenge sets of rare and abstract concepts; and (iii) on consistency under paraphrasing. We show that anchoring is particularly useful for abstract concepts and in enhancing consistency across paraphrases, demonstrating how established methods in the field can inform prompt engineering. | Seeking Clozure: Robust Hypernym Extraction from BERT with Anchored Prompts
d5754528 | We reduce phrase-based parsing to dependency parsing. Our reduction is grounded on a new intermediate representation, "head-ordered dependency trees," shown to be isomorphic to constituent trees. By encoding order information in the dependency labels, we show that any off-the-shelf, trainable dependency parser can be used to produce constituents. When this parser is non-projective, we can perform discontinuous parsing in a very natural manner. Despite the simplicity of our approach, experiments show that the resulting parsers are on par with strong baselines, such as the Berkeley parser for English and the best non-reranking system in the SPMRL-2014 shared task. Results are particularly striking for discontinuous parsing of German, where we surpass the current state of the art by a wide margin. | Parsing as Reduction
d449533 | This paper presents a unified solution, which is based on the idea of "roles tagging", to the complicated problems of Chinese unknown words recognition. In our approach, an unknown word is identified according to its component tokens and context tokens. In order to capture the functions of tokens, we use the concept of roles. Roles are tagged through applying the Viterbi algorithm in the fashion of a POS tagger. In the resulting most probable roles sequence, all the eligible unknown words are recognized through a maximum patterns matching. We have got excellent precision and recall rates, especially for person names and transliterations. The results and experiments in our system ICTCLAS show that our approach based on roles tagging is simple yet effective. | Automatic Recognition of Chinese Unknown Words Based on Roles Tagging
d258959081 | ATOMIC is a large-scale commonsense knowledge graph (CSKG) containing everyday if-then knowledge triplets, i.e., {head event, relation, tail event}. The one-hop annotation manner made ATOMIC a set of independent bipartite graphs, which ignored the numerous links between events in different bipartite graphs and consequently caused shortages in knowledge coverage and multi-hop paths. In this work, we aim to construct Dense-ATOMIC with high knowledge coverage and massive multi-hop paths. The events in ATOMIC are normalized to a consistent pattern at first. We then propose a CSKG completion method called Rel-CSKGC to predict the relation given the head event and the tail event of a triplet, and train a CSKG completion model based on existing triplets in ATOMIC. We finally utilize the model to complete the missing links in ATOMIC and accordingly construct Dense-ATOMIC. Both automatic and human evaluation on an annotated subgraph of ATOMIC demonstrate the advantage of Rel-CSKGC over strong baselines. We further conduct extensive evaluations on Dense-ATOMIC in terms of statistics, human evaluation, and simple downstream tasks, all proving Dense-ATOMIC's advantages in knowledge coverage and multi-hop paths. | Dense-ATOMIC: Towards Densely-connected ATOMIC with High Knowledge Coverage and Massive Multi-hop Paths
d5063437 | Semantic representations have long been argued as potentially useful for enforcing meaning preservation and improving generalization performance of machine translation methods. In this work, we are the first to incorporate information about predicate-argument structure of source sentences (namely, semantic-role representations) into neural machine translation. We use Graph Convolutional Networks (GCNs) to inject a semantic bias into sentence encoders and achieve improvements in BLEU scores over the linguistic-agnostic and syntax-aware versions on the English-German language pair. | Exploiting Semantics in Neural Machine Translation with Graph Convolutional Networks
d234742319 | Many people aim for change, but not everyone succeeds. While there are a number of social psychology theories that propose motivation-related characteristics of those who persist with change, few computational studies have explored the motivational stage of personal change. In this paper, we investigate a new dataset consisting of the writings of people who manifest intention to change, some of whom persist while others do not. Using a variety of linguistic analysis techniques, we first examine the writing patterns that distinguish the two groups of people. Persistent people tend to reference more topics related to long-term self-improvement and use a more complicated writing style. Drawing on these consistent differences, we build a classifier that can reliably identify the people more likely to persist, based on their language. Our experiments provide new insights into the motivation-related behavior of people who persist with their intention to change. | Room to Grow: Understanding Personal Characteristics Behind Self Improvement Using Social Media |
d60031180 | Zarri's book summarizes more than a decade of his research on knowledge representation for narrative text. The centerpiece of Zarri's work is the Narrative Knowledge Representation Language (NKRL), which he describes and compares to other competing theories. In addition, he discusses how to model the meaning of narrative text by giving many real-world examples. NKRL provides three different components or capabilities: (a) a representation system, (b) inferencing, and (c) an implementation. It is implemented via a Java-based system that shows how a representational theory can be applied to narrative texts. The book consists of five chapters and two appendices. Chapter 1 introduces the basic principles of NKRL. The chapter first defines the focus on nonfiction narratives by contrasting the domain with fictional narratives, for example, a novel. Zarri chooses n-ary predicates in order to represent events formally. He argues for a neo-Davidsonian knowledge representation following Schank (1980), Schubert (1976), and others, and at the same time he sets his approach apart from the knowledge representation proposals one can find in Semantic Web representation languages such as RDF and OWL. However, Zarri emphasizes that NKRL, despite its similarity to conceptual graphs (Sowa 1999), is more focused on practical applications. The chapter concludes by introducing so-called templates in an attempt to demonstrate the practical usefulness of NKRL. Chapter 2 provides an in-depth description of NKRL. Four connected components are introduced: • The definitional component provides a hierarchy of abstract concepts (e.g., artifact, company, activity) called HClass (hierarchy of classes). • The descriptive component is a hierarchy of event types called HTemp (hierarchy of templates) commonly found in the domain of non-fiction narratives (e.g., moving an object, producing a task or activity). | Advanced Information and Knowledge Processing series
d260091296 | Although machine translation systems are mostly designed to serve in the general domain, there is a growing tendency to adapt these systems to other domains like literary translation. In this paper, we focus on English-Turkish literary translation and develop machine translation models that take into account the stylistic features of translators. We fine-tune a pre-trained machine translation model on the manually aligned works of a particular translator. We make a detailed analysis of the effects of manual and automatic alignments, data augmentation methods, and corpus size on the translations. We propose an approach based on stylistic features to evaluate the style of a translator in the output translations. We show that the human translator style can be highly recreated in the target machine translations by adapting the models to the style of the translator. | Incorporating Human Translator Style into English-Turkish Literary Machine Translation
d58224172 | This paper describes our system for generating Chinese aspect expressions. In the system, the semantics of different aspects is characterized by specific temporal and conceptual features. The semantic applicability conditions of each individual aspect are theoretically represented by an aspect selection function (ASF). The generation is realized by evaluating implemented inquiries which formally define the ASFs, traversing the grammatical network, and making aspect selections. | The Chinese Aspect Generation Based on Aspect Selection Functions |
d203692667 | Defining words in a textual context is a useful task both for practical purposes and for gaining insight into distributed word representations. Building on the distributional hypothesis, we argue here that the most natural formalization of definition modeling is to treat it as a sequence-to-sequence task, rather than a word-to-sequence task: given an input sequence with a highlighted word, generate a contextually appropriate definition for it. We implement this approach in a Transformer-based sequence-to-sequence model. Our proposal allows us to train contextualization and definition generation in an end-to-end fashion, which is a conceptual improvement over earlier works. We achieve state-of-the-art results both in contextual and non-contextual definition modeling. | Mark my Word: A Sequence-to-Sequence Approach to Definition Modeling
d244116777 | Automatic post-editing (APE) is an important remedy for reducing errors of raw translated texts that are produced by machine translation (MT) systems or software-aided translation. In this paper, we present a systematic approach to tackle the APE task for Vietnamese. Specifically, we construct the first large-scale dataset of 5M Vietnamese translated and corrected sentence pairs. We then apply strong neural MT models to handle the APE task, using our constructed dataset. Experimental results from both automatic and human evaluations show the effectiveness of the neural MT models in handling the Vietnamese APE task. | Automatic Post-Editing for Vietnamese |
d227231347 | Social media have become a valuable source of information. However, their power to shape public opinion can be dangerous, especially in the case of misinformation. Existing studies on misinformation detection hypothesise that the initial message is fake. In contrast, we focus on information distortion occurring in cascades as the initial message is quoted or receives a reply. We show a significant topic shift in information cascades on Twitter during the Covid-19 pandemic, providing valuable insights for the automatic analysis of information distortion. | Covid or not Covid? Topic Shift in Information Cascades on Twitter
d261711859 | For evaluation, patent summaries are machine-translated using bilingual term entries extracted from parallel texts. The result shows that bilingual term entries extracted from 2,000 pairs of parallel texts which share a specific domain with the input texts introduce more improvements than a technical term dictionary with 38,000 entries which covers a broader domain. The result also shows that only 10 pairs of parallel texts found by similar document retrieval have comparable effects to the technical term dictionary, suggesting that parallel texts to be used do not need to be classified into fields prior to term extraction. | Machine Translation Using Bilingual Term Entries Extracted from Parallel Texts
d239998365 | We present IndoNLI, the first human-elicited NLI dataset for Indonesian. We adapt the data collection protocol for MNLI and collect ∼18K sentence pairs annotated by crowd workers and experts. The expert-annotated data is used exclusively as a test set. It is designed to provide a challenging test-bed for Indonesian NLI by explicitly incorporating various linguistic phenomena such as numerical reasoning, structural changes, idioms, or temporal and spatial reasoning. Experiment results show that XLM-R outperforms other pretrained models in our data. The best performance on the expert-annotated data is still far below human performance (13.4% accuracy gap), suggesting that this test set is especially challenging. Furthermore, our analysis shows that our expert-annotated data is more diverse and contains fewer annotation artifacts than the crowd-annotated data. We hope this dataset can help accelerate progress in Indonesian NLP research. | IndoNLI: A Natural Language Inference Dataset for Indonesian |
d258378222 | jTLEX is a programming library that provides a Java implementation of the TimeLine EXtraction algorithm (TLEX;Finlayson et al., 2021), along with utilities for programmatic manipulation of TimeML graphs. Timelines are useful for a number of natural language understanding tasks, such as question answering, cross-document event coreference, and summarization & visualization. jTLEX provides functionality for (1) parsing TimeML annotations into Java objects, (2) construction of TimeML graphs from scratch, (3) partitioning of TimeML graphs into temporally connected subgraphs, (4) transforming temporally connected subgraphs into point algebra (PA) graphs, (5) extracting exact timeline of TimeML graphs, (6) detecting inconsistent subgraphs, and (7) calculating indeterminate sections of the timeline. The library has been tested on the entire TimeBank corpus, and comes with a suite of unit tests. We release the software as open source with a free license for non-commercial use. | jTLEX: a Java Library for TimeLine EXtraction |
d8024533 | This paper describes our deep learning-based approach to multilingual aspect-based sentiment analysis as part of SemEval-2016 Task 5. We use a convolutional neural network (CNN) for both aspect extraction and aspect-based sentiment analysis. We cast aspect extraction as a multi-label classification problem, outputting probabilities over aspects parameterized by a threshold. To determine the sentiment towards an aspect, we concatenate an aspect vector with every word embedding and apply a convolution over it. Our constrained system (unconstrained for English) achieves competitive results across all languages and domains, placing first or second in 5 and 7 out of 11 language-domain pairs for aspect category detection (slot 1) and sentiment polarity (slot 3) respectively, thereby demonstrating the viability of a deep learning-based approach for multilingual aspect-based sentiment analysis. | INSIGHT-1 at SemEval-2016 Task 5: Deep Learning for Multilingual Aspect-based Sentiment Analysis
d202712637 | Because it is not feasible to collect training data for every language, there is a growing interest in cross-lingual transfer learning. In this paper, we systematically explore zero-shot cross-lingual transfer learning on reading comprehension tasks with a language representation model pre-trained on a multi-lingual corpus. The experimental results show that with pre-trained language representations, zero-shot learning is feasible, and translating the source data into the target language is not necessary and even degrades the performance. We further explore what the model learns in the zero-shot setting. | Zero-shot Reading Comprehension by Cross-lingual Transfer Learning with Multi-lingual Language Representation Model
d235125766 | Although neural models have achieved competitive results in dialogue systems, they have shown limited ability in representing core semantics, such as ignoring important entities. To this end, we exploit Abstract Meaning Representation (AMR) to help dialogue modeling. Compared with the textual input, AMR explicitly provides core semantic knowledge and reduces data sparsity. We develop an algorithm to construct dialogue-level AMR graphs from sentence-level AMRs and explore two ways to incorporate AMRs into dialogue systems. Experimental results on both dialogue understanding and response generation tasks show the superiority of our model. To our knowledge, we are the first to leverage a formal semantic representation into neural dialogue modeling. | Semantic Representation for Dialogue Modeling |
d156053187 | Pinyin-to-character (P2C) conversion is the core component of a pinyin-based Chinese input method engine (IME). However, the conversion is seriously compromised by the ambiguities of Chinese characters corresponding to pinyin as well as the predefined fixed vocabularies. To alleviate such inconveniences, we propose a neural P2C conversion model augmented by an online updated vocabulary with a sampling mechanism to support open vocabulary learning while the IME is working. Our experiments show that the proposed method outperforms commercial IMEs and state-of-the-art traditional models on a standard corpus and a true input history dataset in terms of multiple metrics, and thus the online updated vocabulary indeed helps our IME effectively follow user input behavior. | Open Vocabulary Learning for Neural Chinese Pinyin IME
d218596002 | This paper proposes Dynamic Memory Induction Networks (DMIN) for few-shot text classification. The model utilizes dynamic routing to provide more flexibility to memory-based few-shot learning in order to better adapt to the support sets, which is a critical capacity of few-shot classification models. Based on that, we further develop induction models with query information, aiming to enhance the generalization ability of meta-learning. The proposed model achieves new state-of-the-art results on the miniRCV1 and ODIC datasets, improving the best performance (accuracy) by 2∼4%. Detailed analysis is further performed to show the effectiveness of each component. | Dynamic Memory Induction Networks for Few-Shot Text Classification
d289313 | Several techniques have recently been proposed for training "self-normalized" discriminative models. These attempt to find parameter settings for which unnormalized model scores approximate the true label probability. However, the theoretical properties of such techniques (and of self-normalization generally) have not been investigated. This paper examines the conditions under which we can expect self-normalization to work. We characterize a general class of distributions that admit self-normalization, and prove generalization bounds for procedures that minimize empirical normalizer variance. Motivated by these results, we describe a novel variant of an established procedure for training self-normalized models. The new procedure avoids computing normalizers for most training examples, and decreases training time by as much as a factor of ten while preserving model quality. | When and why are log-linear models self-normalizing?
d222208998 | Semantic parsing is one of the key components of natural language understanding systems. A successful parse transforms an input utterance to an action that is easily understood by the system. Many algorithms have been proposed to solve this problem, from conventional rule-based or statistical slot-filling systems to shift-reduce based neural parsers. For complex parsing tasks, the state-of-the-art method is based on autoregressive sequence-to-sequence models that generate the parse directly. This model is slow at inference time, generating parses in O(n) decoding steps (n is the length of the target sequence). In addition, we demonstrate that this method performs poorly in zero-shot cross-lingual transfer learning settings. In this paper, we propose a non-autoregressive parser based on the insertion transformer to overcome these two issues. Our approach 1) speeds up decoding by 3x while outperforming the autoregressive model and 2) significantly improves cross-lingual transfer in the low-resource setting by 37% compared to the autoregressive baseline. We test our approach on three well-known monolingual datasets: ATIS, SNIPS and TOP. For cross-lingual semantic parsing, we use the MultiATIS++ and the multilingual TOP datasets. | Don't Parse, Insert: Multilingual Semantic Parsing with Insertion Based Decoding
d248780533 | Ensuring relevance quality in product search is a critical task as it impacts the customer's ability to find intended products in the short term as well as the general perception and trust of the e-commerce system in the long term. In this work we leverage a high-precision cross-encoder BERT model for semantic similarity between customer query and products and survey its effectiveness for three ranking applications where offline-generated scores could be used: (1) as an offline metric for estimating relevance quality impact, (2) as a re-ranking feature covering head/torso queries, and (3) as a training objective for optimization. We present results on the effectiveness of this strategy for the large e-commerce setting, which has general applicability for the choice of other high-precision models and tasks in ranking. | Improving Relevance Quality in Product Search using High-Precision Query-Product Semantic Similarity
d11310392 | We motivate and describe a new freely available human-human dialogue data set for interactive learning of visually grounded word meanings through ostensive definition by a tutor to a learner. The data has been collected using a novel, character-by-character variant of the DiET chat tool (Healey et al., 2003; Mills and Healey, submitted) with a novel task, where a Learner needs to learn invented visual attribute words (such as "burchak" for square) from a tutor. As such, the text-based interactions closely resemble face-to-face conversation and thus contain many of the linguistic phenomena encountered in natural, spontaneous dialogue. These include self- and other-correction, mid-sentence continuations, interruptions, overlaps, fillers, and hedges. We also present a generic n-gram framework for building user (i.e. tutor) simulations from this type of incremental data, which is freely available to researchers. We show that the simulations produce outputs that are similar to the original data (e.g. 78% turn match similarity). Finally, we train and evaluate a Reinforcement Learning dialogue control agent for learning visually grounded word meanings, trained from the BURCHAK corpus. The learned policy shows comparable performance to a rule-based system built previously. | The BURCHAK corpus: a Challenge Data Set for Interactive Learning of Visually Grounded Word Meanings
d6526161 | Unsupervised learning algorithms based on Expectation Maximization (EM) are often straightforward to implement and provably converge on a local likelihood maximum. However, these algorithms often do not perform well in practice. Common wisdom holds that they yield poor results because they are overly sensitive to initial parameter values and easily get stuck in local (but not global) maxima. We present a series of experiments indicating that for the task of learning syllable structure, the initial parameter weights are not crucial. Rather, it is the choice of model class itself that makes the difference between successful and unsuccessful learning. We use a language-universal rule-based algorithm to find a good set of parameters, and then train the parameter weights using EM. We achieve word accuracy of 95.9% on German and 97.1% on English, as compared to 97.4% and 98.1% respectively for supervised training. | Representational Bias in Unsupervised Learning of Syllable Structure |
d13751870 | The field of machine translation faces an under-recognized problem because of inconsistency in the reporting of scores from its dominant metric. Although people refer to "the" BLEU score, BLEU is in fact a parameterized metric whose values can vary wildly with changes to these parameters. These parameters are often not reported or are hard to find, and consequently, BLEU scores between papers cannot be directly compared. I quantify this variation, finding differences as high as 1.8 between commonly used configurations. The main culprit is different tokenization and normalization schemes applied to the reference. Pointing to the success of the parsing community, I suggest machine translation researchers settle upon the BLEU scheme used by the annual Conference on Machine Translation (WMT), which does not allow for user-supplied reference processing, and provide a new tool, SACREBLEU, to facilitate this. | A Call for Clarity in Reporting BLEU Scores
d251622357 | Automatically summarizing patients' main problems from daily progress notes using natural language processing methods helps to battle against information and cognitive overload in hospital settings and potentially assists providers with computerized diagnostic decision support. Problem list summarization requires a model to understand, abstract, and generate clinical documentation. In this work, we propose a new NLP task that aims to generate a list of problems in a patient's daily care plan using input from the provider's progress notes during hospitalization. We investigate the performance of T5 and BART, two state-of-the-art seq2seq transformer architectures, in solving this problem. We provide a corpus built on top of progress notes from publicly available electronic health record progress notes in the Medical Information Mart for Intensive Care (MIMIC)-III. T5 and BART are trained on general domain text, and we experiment with a data augmentation method and a domain adaptation pre-training method to increase exposure to medical vocabulary and knowledge. Evaluation methods include ROUGE, BERTScore, cosine similarity on sentence embeddings, and F-score on medical concepts. Results show that T5 with domain adaptive pre-training achieves significant performance gains compared to a rule-based system and general domain pre-trained language models, indicating a promising direction for tackling the problem summarization task. | Summarizing Patients' Problems from Hospital Progress Notes Using Pre-trained Sequence-to-Sequence Models
d257663900 | Recent breakthroughs in self-supervised training have led to a new class of pretrained vision-language models. While there have been investigations of bias in multimodal models, they have mostly focused on gender and racial bias, giving much less attention to other relevant groups, such as minorities with regard to religion, nationality, sexual orientation, or disabilities. This is mainly due to a lack of suitable benchmarks for such groups. We seek to address this gap by providing a visual and textual bias benchmark called MMBias, consisting of around 3,800 images and phrases covering 14 population subgroups. We utilize this dataset to assess bias in several prominent self-supervised multimodal models, including CLIP, ALBEF, and ViLT. Our results show that these models demonstrate meaningful bias favoring certain groups. Finally, we introduce a debiasing method designed specifically for such large pretrained models that can be applied as a post-processing step to mitigate bias, while preserving the remaining accuracy of the model. | Multi-Modal Bias: Introducing a Framework for Stereotypical Bias Assessment beyond Gender and Race in Vision-Language Models
d252070474 | Task-oriented dialogue systems aim to fulfill user goals through natural language interactions. They are ideally evaluated with human users, which, however, is not feasible at every iteration of the development phase. Simulated users could be an alternative; however, their development is nontrivial. Therefore, researchers resort to offline metrics on existing human-human corpora, which are more practical and easily reproducible. They are unfortunately limited in reflecting the real performance of dialogue systems. BLEU, for instance, is poorly correlated with human judgment, and existing corpus-based metrics such as success rate overlook dialogue context mismatches. There is still a need for a reliable metric for task-oriented systems with good generalization and strong correlation with human judgements. In this paper, we propose the use of offline reinforcement learning for dialogue evaluation based on a static corpus. Such an evaluator is typically called a critic and utilized for policy optimization. We go one step further and show that offline RL critics can be trained on a static corpus for any dialogue system as external evaluators, allowing dialogue performance comparisons across various types of systems. This approach has the benefit of being corpus- and model-independent, while attaining strong correlation with human judgements, which we confirm via an interactive user trial. | Dialogue Evaluation with Offline Reinforcement Learning
d253098559 | Knowledge-grounded conversation (KGC) shows excellent potential to deliver an engaging and informative response. However, existing approaches emphasize selecting one golden knowledge given a particular dialogue context, overlooking the one-to-many phenomenon in dialogue. As a result, the existing paradigm limits the diversity of knowledge selection and generation. To this end, we establish a multi-reference KGC dataset and propose a series of metrics to systematically assess the one-to-many efficacy of existing KGC models. Furthermore, to extend the hypothesis space of knowledge selection to enhance the mapping relationship between multiple knowledge and multiple responses, we devise a span-based variational model and optimize the model in a wake-sleep style with an ameliorated evidence lower bound objective to learn the one-to-many generalization. Both automatic and human evaluations demonstrate the efficacy of our approach. | There Is No Standard Answer: Knowledge-Grounded Dialogue Generation with Adversarial Activated Multi-Reference Learning
d821034 | Most NLP tools are applied to text that is different from the kind of text they were evaluated on. Common evaluation practice prescribes significance testing across data points in available test data, but typically we only have a single test sample. This short paper argues that in order to assess the robustness of NLP tools we need to evaluate them on diverse samples, and we consider the problem of finding the most appropriate way to estimate the true effect size of our systems over their baselines across datasets. We apply meta-analysis and show experimentally, by comparing estimated error reduction with observed error reduction on held-out datasets, that this method is significantly more predictive of success than the usual practice of using macro- or micro-averages. Finally, we present a new parametric meta-analysis based on nonstandard assumptions that seems superior to standard parametric meta-analysis. | Estimating effect size across datasets
d235794903 | Lexical substitution is the task of generating meaningful substitutes for a word in a given textual context. Contextual word embedding models have achieved state-of-the-art results in the lexical substitution task by relying on contextual information extracted from the replaced word within the sentence. However, such models do not take into account structured knowledge that exists in external lexical databases. | LexSubCon: Integrating Knowledge from Lexical Resources into Contextual Embeddings for Lexical Substitution |
d32452685 | Paraphrases are sentences or phrases that convey the same meaning using different wording. Although the logical definition of paraphrases requires strict semantic equivalence, linguistics accepts a broader, approximate equivalence, thereby allowing far more examples of "quasi-paraphrase." But approximate equivalence is hard to define. Thus, the phenomenon of paraphrases, as understood in linguistics, is difficult to characterize. In this article, we list a set of 25 operations that generate quasi-paraphrases. We then empirically validate the scope and accuracy of this list by manually analyzing random samples of two publicly available paraphrase corpora. We provide the distribution of naturally occurring quasi-paraphrases in English text. | What Is a Paraphrase?