_id | text | title |
|---|---|---|
d6628496 | Machine Translation is the task of automatically translating text from a source language to a target language. Here, we describe a system that translates English text into Assamese, based on the phrase-based statistical translation technique. To overcome translation problems related to highly open word classes such as proper nouns, and to out-of-vocabulary words, we developed a transliteration system that is embedded in our translation system. We enhance the translation output by replacing words with their most appropriate synonyms for the particular context, with the help of the Assamese WordNet synsets. This machine translation system produces reasonable translation output when analyzed by linguists for Assamese, which is among the less computationally resourced Indian languages. | Assamese WordNet based Quality Enhancement of Bilingual Machine Translation System |
d18620281 | Information Extraction is an important task in Natural Language Processing, consisting of finding a structured representation for the information expressed in natural language text. Two key steps in information extraction are identifying the entities mentioned in the text, and the relations among those entities. In the context of Information Extraction for the World Wide Web, unsupervised relation extraction methods, also called Open Relation Extraction (ORE) systems, have become prevalent, due to their effectiveness without domain-specific training data. In general, these systems exploit part-of-speech tags or semantic information from the sentences to determine whether or not a relation exists, and if so, its predicate. This paper discusses some of the issues that arise when even moderately complex sentences are fed into ORE systems. A process for re-structuring such sentences is discussed and evaluated. The proposed approach replaces complex sentences by several others that, together, convey the same meaning and are more amenable to extraction by current ORE systems. The results of an experimental evaluation show that this approach succeeds in reducing the processing time and increasing the accuracy of the state-of-the-art ORE systems. | Improving Open Relation Extraction via Sentence Re-Structuring |
d46057261 | This paper describes the planning and creation of the Mixer and Transcript Reading corpora, their properties and yields, and reports on the lessons learned during their development. | The Mixer and Transcript Reading Corpora: Resources for Multilingual, Crosschannel Speaker Recognition Research * |
d7937869 | Text is composed of words and phrases. In the bag-of-words model, phrases in texts are split into words. This may discard the inner semantics of phrases, which in turn may give inconsistent relatedness scores between two texts. TrWP, an unsupervised text relatedness approach, combines both word and phrase relatedness. The word relatedness is computed using an existing unsupervised co-occurrence based method. The phrase relatedness is computed using an unsupervised phrase relatedness function f that adopts the Sum-Ratio technique, based on the statistics in the Google n-gram corpus of overlapping n-grams associated with the two input phrases. The second run of TrWP ranked 30th out of 73 runs in SemEval-2015 Task 2a (English STS). | |
d6996543 | Text categorization is an important field in automatic text information processing. Moreover, the authorship identification of a text can be treated as a special case of text categorization. This paper adopts the conceptual primitives' expression based on the Hierarchical Network of Concepts (HNC) theory, which can describe word meanings in hierarchical symbols, in order to avoid the data sparseness caused by natural language surface features in text categorization. The KNN algorithm is used as the classification element. An experiment was then conducted on Chinese text authorship identification. The experimental results show that the approach put forward in this paper achieves a high accuracy rate, so it is feasible for text authorship identification. | Text Categorization for Authorship based on the Features of Lingual Conceptual Expression * |
d41524632 | The paper defines the notion of "pedagogical stance", viewed as the type of position taken, the role assumed, the image projected and the types of social behaviours performed by a teacher in her teaching interaction with a pupil. Two aspects of pedagogical stance, "didactic" and "affective-relational", are distinguished and a hypothesis is put forward about their determinant factors (the teacher's personality, idea of one's role and of the learning process, and model of the pupil). Based on a qualitative analysis of the verbal and bodily behaviour of teachers in a corpus of teacher-pupil interactions, the paper singles out two didactic stances (maieutic and efficient) and four affective-relational ones (friendly, dominating, paternalistic, and secure base). Some examples of these stances are analysed in detail and the respective patterns of verbal and behavioural signals that typically characterize the six types of stances are outlined. | Pedagogical Stance and its multimodal expression |
d10738932 | Jens Erlandsen. GESA, a GEneral System for the Analysis of natural languages, designed as a translator-interpreter system with virtual intermediate code. Parsing systems for the automatic analysis of natural languages can be designed in countless ways, almost regardless of which method is chosen for the parsing process itself. Designing a parser for a reasonably large fragment of natural language is a major undertaking. As a rule, the design of such large systems proceeds in several stages: primarily in the requirements-specification phase, where a set of requirements for the system is drawn up on the basis of, among other things, user needs and input and output data; and secondarily in the design phase, where the system's structure, algorithms and data structures are finally determined. In the design phase there are typically several possible solutions to the stated requirements. In this article I briefly sketch four system models and then outline some of the design-phase considerations that led to the GESA system being designed as a translator-interpreter system with virtual intermediate code. I have thus adopted a typical "system designer's perspective" on the design problem, disregarding all considerations concerning the system's linguistic capacity. A more detailed description of GESA, which also includes these considerations, is given in SAML no. 10. The four design approaches, or rather system models, that I have chosen to include here can be arranged in a table that takes two factors into account: | |
d33901303 | This paper describes the procedure of semantic role labeling and the development of the first manually annotated Persian Proposition Bank (PerPB), which added a layer of predicate-argument information to the syntactic structures of the Persian Dependency Treebank (known as PerDT). Throughout the annotation process, the annotators could see the syntactic information of all the sentences, and on this basis they annotated 29982 sentences with more than 9200 unique verbs. In the annotation procedure, the direct syntactic dependents of the verbs were the first candidates for annotation; we did not annotate the other, indirect dependents unless their phrasal heads were propositional and had their own arguments or adjuncts. Hence, besides the semantic role labeling of verbs, the argument structures of 1300 unique propositional nouns and 300 unique propositional adjectives were annotated in the sentences, too. The accuracy of the annotation process was measured by double annotation of the data at two separate stages, and finally the data was prepared in the CoNLL dependency format. | Persian Proposition Bank |
d259212840 | Language documentation aims to collect a representative corpus of the language. Nevertheless, the question of how to quantify the comprehensiveness of the collection persists. We propose leveraging computational modelling to provide a supplementary metric to address this question in a low-resource language setting. We apply our proposed methods to the Papuan language Nen. Nen is actively in the process of being described and documented. Given the enormity of the task of language documentation, we focus on one subdomain, namely Nen verbal morphology. This study examines four verb types: copula, positional, middle, and transitive. We propose model-based paradigm generation for each verb type as a new way to measure completeness, where accuracy is analogous to the coverage of the paradigm. We contrast the paradigm attestation within the corpus (constructed from fieldwork data) with the accuracy of the paradigm generated by Transformer models trained for inflection. This analysis is extended by extrapolating from the established learning curve to provide predictions of the quantity of data required to generate a complete paradigm correctly. We also explore the correlation between high-frequency morphosyntactic features and model accuracy. We see a positive correlation between high-frequency feature combinations and model accuracy, but this is not always the case. We also see high accuracy for low-frequency morphosyntactic features. Our results show that model coverage is significantly higher for the middle and transitive verbs but not the positional verb. This is an interesting finding, as the positional verb paradigm is the smallest of the four. | A Quest for Paradigm Coverage: The Story of Nen |
d11611001 | While speech recognition systems have come a long way in the last thirty years, there is still room for improvement. Although readily available, these systems are sometimes inaccurate and insufficient. The research presented here outlines a technique called Distributed Listening which demonstrates noticeable improvements to existing speech recognition methods. The Distributed Listening architecture introduces the idea of multiple, parallel, yet physically separate automatic speech recognizers called listeners. Distributed Listening also uses a piece of middleware called an interpreter. The interpreter resolves multiple interpretations using the Phrase Resolution Algorithm (PRA). These efforts work together to increase the accuracy of the transcription of spoken utterances. | Distributed Listening: A Parallel Processing Approach to Automatic Speech Recognition |
d12461243 | A process that attempts to resolve abbreviation ambiguity is presented. Various context-related features and statistical features have been explored. Almost all features are domain independent and language independent. The application domain is Jewish Law documents written in Hebrew. Such documents are known to be rich in ambiguous abbreviations. Various implementations of the one sense per discourse hypothesis are used, improving the features with new variants. An accuracy of 96.09% was achieved with an SVM. | Combined One Sense Disambiguation of Abbreviations |
d259370884 | The bias-variance trade-off is the idea that learning methods need to balance model complexity with data size to minimize both under-fitting and over-fitting. Recent empirical work and theoretical analyses with over-parameterized neural networks challenge the classic bias-variance trade-off notion, suggesting that no such trade-off holds: as the width of the network grows, bias monotonically decreases while variance initially increases followed by a decrease. In this work, we first provide a variance-decomposition-based justification criterion to examine whether large pretrained neural models in a fine-tuning setting are generalizable enough to have low bias and variance. We then perform theoretical and empirical analysis using ensemble methods explicitly designed to decrease variance due to optimization. This results in essentially a two-stage fine-tuning algorithm that first ratchets down bias and variance iteratively, and then uses a selected fixed-bias model to further reduce variance due to optimization by ensembling. We also analyze the nature of variance change with the ensemble size in low- and high-resource classes. Empirical results show that this two-stage method obtains strong results on SuperGLUE tasks and clinical information extraction tasks. Code and settings are available: https://github.com/christa60/bias-var-fine-tuning-plms.git | Two-Stage Fine-Tuning for Improved Bias and Variance for Large Pretrained Language Models |
d261826438 | This paper describes the NiuTrans neural machine translation systems for the WMT20 news translation tasks. We participated in five tasks in total, Japanese↔English (both directions), English→Chinese, Inuktitut→English and Tamil→English, and ranked first in both Japanese↔English directions. We mainly utilized iterative back-translation, model architectures of different depths and widths, iterative knowledge distillation and iterative fine-tuning. We find that adequately widening and deepening the model simultaneously improves performance significantly. The iterative fine-tuning strategy we implemented is also effective for domain adaptation. For the Inuktitut→English and Tamil→English tasks, we built multilingual models separately and employed pretrained word embeddings to obtain better performance. | The NiuTrans Machine Translation Systems for WMT20 |
d30645217 | Recognizing the plan underlying a query aids in the generation of an appropriate response. In this paper, we address the problem of how to generate cooperative responses when the user's plan is ambiguous. We show that it is not always necessary to resolve the ambiguity, and provide a procedure that estimates whether the ambiguity matters to the task of formulating a response. If the ambiguity does matter, we propose to resolve the ambiguity by entering into a clarification dialogue with the user and provide a procedure that performs this task. Together, these procedures allow a question-answering system to take advantage of the interactive and collaborative nature of dialogue in recognizing plans and resolving ambiguity. | Resolving Plan Ambiguity for Response Generation |
d794289 | We present an educational tool that integrates computational linguistics resources for use in non-technical undergraduate language science courses. By using the tool in conjunction with evidence-driven pedagogical case studies, we strive to provide opportunities for students to gain an understanding of linguistic concepts and analysis through the lens of realistic problems in feasible ways. Case studies tend to be used in legal, business, and health education contexts, but less in the teaching and learning of linguistics. The approach introduced also has potential to encourage students across training backgrounds to continue on to computational language analysis coursework. | An Analysis and Visualization Tool for Case Study Learning of Linguistic Concepts |
d31892462 | Extracting information from semistructured text has been studied only for limited domain sources due to its heterogeneous formats. This paper proposes a Ripple-Down Rules (RDR) based approach to extract relations from both semistructured and unstructured text in open domain Web pages. We find that RDR's 'case-by-case' incremental knowledge acquisition approach provides practical flexibility for (1) handling heterogeneous formats of semi-structured text; (2) conducting knowledge engineering on any Web pages with minimum start-up cost and (3) allowing open-ended settings on relation schema. The efficacy of the approach has been demonstrated by extracting contact information from randomly collected open domain Web pages. The rGALA system achieved 0.87 F1 score on a testing dataset of 100 Web pages, after only 7 hours of knowledge engineering on a training set of 100 Web pages. | Incremental Knowledge Acquisition Approach for Information Extraction on both Semi-structured and Unstructured Text from the Open Domain Web |
d2675444 | Paraphrase generation is important in various applications such as search, summarization, and question answering due to its ability to generate textual alternatives while keeping the overall meaning intact. Clinical paraphrase generation is especially vital in building patient-centric clinical decision support (CDS) applications where users are able to understand complex clinical jargon via easily comprehensible alternative paraphrases. This paper presents Neural Clinical Paraphrase Generation (NCPG), a novel approach that casts the task as a monolingual neural machine translation (NMT) problem. We propose an end-to-end neural network built on an attention-based bidirectional Recurrent Neural Network (RNN) architecture with an encoder-decoder framework to perform the task. Conventional bilingual NMT models mostly rely on word-level modeling and are often limited by out-of-vocabulary (OOV) issues. In contrast, we represent the source and target paraphrase pairs as character sequences to address this limitation. To the best of our knowledge, this is the first work that uses attention-based RNNs for clinical paraphrase generation and also proposes end-to-end character-level modeling for this task. Extensive experiments on a large curated clinical paraphrase corpus show that the attention-based NCPG models achieve improvements of up to 5.2 BLEU points and 0.5 METEOR points over a non-attention based strong baseline for word-level modeling, whereas further gains of up to 6.1 BLEU points and 1.3 METEOR points are obtained by the character-level NCPG models over their word-level counterparts. Overall, our models demonstrate comparable performance relative to the state-of-the-art phrase-based non-neural models. | Neural Clinical Paraphrase Generation with Attention |
d14139497 | This paper reports an exploratory study of the grounding styles of older dyads, namely, the characteristic ways in which they mutually agree to have shared a piece of information in dialogue. On the basis of Traum's classification of grounding acts, we conducted an exploratory comparison of dialogue data on older and younger dyads, and found that a fairly clear contrast holds mainly in the types of acknowledgement utterances used by the two groups. We will discuss the implications of this contrast, concerning how some of the negative stereotypes about conversations with older people may arise from this difference in grounding styles. | Grounding styles of aged dyads: an exploratory study |
d53105871 | We introduce the tree-stack LSTM to model the state of a transition-based parser with recurrent neural networks. The tree-stack LSTM does not use any parse-tree-based or hand-crafted features, yet performs better than models with these features. We also develop a new set of embeddings from raw features to enhance performance. There are four main components in this model: the stack's σ-LSTM, the buffer's β-LSTM, the actions' LSTM and the tree-RNN. All LSTMs use continuous dense feature vectors (embeddings) as input. The tree-RNN updates these embeddings based on transitions. We show that our model improves performance on low-resource languages compared with its predecessors. We participated in the CoNLL 2018 UD Shared Task as the "KParse" team and ranked 16th in LAS and 15th in the BLAS and BLEX metrics, out of 27 participants parsing 82 test sets from 57 languages. | Tree-stack LSTM in Transition Based Dependency Parsing |
d6568971 | The Chinese aspect marker le has long been considered very difficult for CSL learners. We therefore created a computer-based interactive multimedia CSL program for the perfective le based on linguistic studies of the perfective le [3,25,26,28,29] and explored its effectiveness. The results of this study did not show that the multimedia program, as a self-learning tool, significantly outperformed the printed materials. Nevertheless, the results indicated that both the interactive multimedia program and the printed materials had significant effects within their own groups. This significance is evidence that the CSL program for le based on linguistic generalizations is effective. | On the Learning of Chinese Aspect Marker le through Interactive Multimedia Program |
d30226677 | Obsessive-compulsive disorder (OCD) is an anxiety-based disorder that affects around 2.5% of the population. A common treatment for OCD is exposure therapy, where the patient repeatedly confronts a feared experience, which has the long-term effect of decreasing their anxiety. Some exposures consist of reading and writing stories about an imagined anxiety-provoking scenario. In this paper, we present a technology that enables patients to interactively contribute to exposure stories by supplying natural language input (typed or spoken) that advances a scenario. This interactivity could potentially increase the patient's sense of immersion in an exposure and contribute to its success. We introduce the NLP task behind processing inputs to predict new events in the scenario, and describe our initial approach. We then illustrate the future possibility of this work with an example of an exposure scenario authored with our application. | Natural-language Interactive Narratives in Imaginal Exposure Therapy for Obsessive-Compulsive Disorder |
d53098332 | Named-entity Recognition (NER) is an important task in the NLP field, and is widely used to solve many challenges. However, in many scenarios, not all of the entities are explicitly mentioned in the text. Sometimes they can be inferred from the context or from other indicative words. Consider the following sentence: "CMA can easily hydrolyze into free acetic acid." Although water is not mentioned explicitly, one can infer that H2O is an entity involved in the process. In this work, we present the problem of Latent Entities Extraction (LEE). We present several methods for determining whether entities are discussed in a text, even though, potentially, they are not explicitly written. Specifically, we design a neural model that handles extraction of multiple entities jointly. We show that our model, along with a multi-task learning approach and a novel task grouping algorithm, reaches high performance in identifying latent entities. Our experiments are conducted on a large biological dataset from the biochemical field. The dataset contains text descriptions of biological processes, and for each process, all of the involved entities in the process are labeled, including implicitly mentioned ones. We believe LEE is a task that will significantly improve many NER and subsequent applications and improve text understanding and inference. | Latent Entities Extraction: How to Extract Entities that Do Not Appear in the Text? |
d245838303 | Since the inception of Generative Adversarial Networks (GANs), synthetic image generation has taken a giant leap because of the ability of these networks to generate high-quality images; however, the same cannot be said for text generation. A major challenge encountered in text generation using GANs is the non-differentiability of discrete text. Most previous studies of text generation using GANs focus on solving this, but none of them incorporate any additional features into the GAN. These features could be useful in training the models, especially in the case of low-resource languages. In this paper, we propose a novel model called POS-Senti-GAN (PS-GAN), in which we show that the use of part-of-speech tag and sentiment features aids the generation of better sentences. We also provide 'Pravar', a first-ever dataset consisting of stories from different categories that enables text/story generation for Telugu, a low-resource language. Finally, we show the performance of the proposed models on three datasets, namely Pravar, Telugu Wikipedia and Telugu News. | PS-GAN: Feature augmented text generation in Telugu |
d15853016 | In this paper we propose a domain-independent text segmentation method, which consists of three components. Latent Dirichlet allocation (LDA) is employed to compute the semantic distribution of words, semantic similarity is measured with the Fisher kernel, and the globally best segmentation is obtained by dynamic programming (a sketch of the dynamic-programming step appears after this table). Experiments on Chinese data sets show the technique to be effective. By introducing latent semantic information, our algorithm is robust to irregular-sized segments. | Text Segmentation with LDA-Based Fisher Kernel |
d14302031 | Domain-independent meaning representation of text has received a renewed interest in the NLP community. Comparison plays a crucial role in shaping objective and subjective opinion and measurement in natural language, and is often expressed in complex constructions including ellipsis. In this paper, we introduce a novel framework for jointly capturing the semantic structure of comparison and ellipsis constructions. Our framework models ellipsis and comparison as interconnected predicate-argument structures, which enables automatic ellipsis resolution. We show that a structured prediction model trained on our dataset of 2,800 gold annotated review sentences yields promising results. Together with this paper we release the dataset and an annotation tool which enables two-stage expert annotation on top of tree structures. | Learning to Jointly Predict Ellipsis and Comparison Structures |
d814656 | Naively collecting translations by crowdsourcing the task to non-professional translators yields disfluent, low-quality results if no quality control is exercised. We demonstrate a variety of mechanisms that increase the translation quality to near professional levels. Specifically, we solicit redundant translations and edits to them, and automatically select the best output among them. We propose a set of features that model both the translations and the translators, such as country of residence, LM perplexity of the translation, edit rate from the other translations, and (optionally) calibration against professional translators. Using these features to score the collected translations, we are able to discriminate between acceptable and unacceptable translations. We recreate the NIST 2009 Urdu-to-English evaluation set with Mechanical Turk, and quantitatively show that our models are able to select translations within the range of quality that we expect from professional translators. The total cost is more than an order of magnitude lower than professional translation. | Crowdsourcing Translation: Professional Quality from Non-Professionals |
d1389241 | This paper describes an approach to adapting an existing multilingual Open-Domain Question Answering system to the geographical domain using scope-based resources. | Experiments Adapting an Open-Domain Question Answering System to the Geographical Domain Using Scope-Based Resources |
d17649769 | There are many languages considered to be low-density languages, either because the population speaking the language is not very large, or because insufficient digitized text material is available in the language even though millions of people speak the language. Bangla is one of the latter ones. Readability classification is an important Natural Language Processing (NLP) application that can be used to judge the quality of documents and assist writers to locate possible problems. This paper presents a readability classifier of Bangla textbook documents based on information-theoretic and lexical features. The features proposed in this paper result in an F-score that is 50% higher than that for traditional readability formulas. | Text Readability Classification of Textbooks of a Low-Resource Language |
d2069120 | TRACTOR is the TELRI Research Archive of Computational Tools and Resources. | An archive for all of Europe: the TRACTOR initiative |
d9212247 | The RNC is now a 120-million-word collection of Russian text, making it the most representative and authoritative corpus of the Russian language. It is available on the Internet at www.ruscorpora.ru. The RNC contains texts of all genres and types, covering Russian from the 19th to the 21st century. The practice of constructing national corpora has shown that it is indispensable to include sub-corpora of spoken language in the RNC. Therefore, the constructors of the RNC intend to include in it about 10 million words of spoken Russian. Oral speech in the corpus is represented in standard Russian orthography. Although this decision makes any phonetic exploration of the Spoken Russian Corpus impossible, studying spoken Russian from any other linguistic point of view remains fully possible. In addition to the traditional annotations (metatextual and morphological), the Spoken Sub-corpus carries sociological annotation. Unlike standard oral speech, which is spontaneous and not intended to be reproduced, Multimedia Spoken Russian (MSR) is in large part premeditated and evidently meant to be reproduced. MSR is also to be included in the RNC: first of all, we plan to build this very interesting and provocative part of the RNC from the textual component of about 300 Russian films. | Spoken Russian in the Russian National Corpus (RNC) |
d6213728 | This paper presents a new model of anaphoric processing that utilizes the establishment of coherence relations between clauses in a discourse. We survey data that comprises a currently stalemated argument over whether VP-ellipsis is an inherently syntactic or inherently semantic phenomenon, and show that the data can be handled within a uniform discourse processing architecture. This architecture, which revises the dichotomy between ellipsis vs. Model Interpretive Anaphora given by Sag and Hankamer (1984), is also able to accommodate divergent theories and data for pronominal reference resolution. The resulting architecture serves as a baseline system for modeling the role of cohesive devices in natural language. | THE EFFECT OF ESTABLISHING COHERENCE IN ELLIPSIS AND ANAPHORA RESOLUTION |
d219310347 | ||
d261341914 | In recent years, large language models (LLMs) have garnered significant attention across various domains, resulting in profound impacts. In this paper, we aim to explore the potential of LLMs in the field of human-machine conversations. It begins by examining the rise and milestones of these models, tracing their origins from neural language models to the transformative impact of the Transformer architecture on conversation processing. Next, we discuss the emergence of large pre-training models and their utilization of contextual knowledge at a large scale, as well as the scaling to billion-parameter models that push the boundaries of language generation. We further highlight advancements in multi-modal conversations, showcasing how LLMs bridge the gap between language and vision. We also introduce various applications in human-machine conversations, such as intelligent assistant-style dialogues and emotionally supportive conversations, supported by successful case studies in diverse fields. Lastly, we explore the challenges faced by LLMs in this context and provide insights into future development directions and prospects. Overall, we offer a comprehensive overview of the potential and future development of LLMs in human-machine conversations, encompassing their milestones, applications, and the challenges ahead. | Unleashing the Power of Large Models: Exploring Human-Machine Conversations |
d245838300 | Near synonymy has become a central issue for the lexicon and for semantics because of nuances in meaning, distribution and context. This study, drawing upon the MARVS framework, aims to carry out a descriptive lexical-semantic analysis, elaborate the similarities and differences, and investigate the factors underlying the discrepancies between near synonyms. A mixed method of quantitative and qualitative analysis was used to identify the sense, role module and event module. The findings show that near-synonymous mental-state verbs vary in frequency, role module and role-internal attribution, and indicate that there is a positive relationship between event structure modules and the senses. The study should therefore be of theoretical, pragmatic and pedagogical value for a better understanding of mental-state verbs as well as the nature of cognition. | Study of Near Synonymous Mental-State Verbs: A MARVS Perspective |
d15692911 | On Statistical Methods in Natural Language Processing. Joakim Nivre. 1 Introduction | |
d713490 | We describe a statistical approach for modeling agreements and disagreements in conversational interaction. Our approach first identifies adjacency pairs using maximum entropy ranking based on a set of lexical, durational, and structural features that look both forward and backward in the discourse. We then classify utterances as agreement or disagreement using these adjacency pairs and features that represent various pragmatic influences of previous agreement or disagreement on the current utterance. Our approach achieves 86.9% accuracy, a 4.9% increase over previous work. | Identifying Agreement and Disagreement in Conversational Speech: Use of Bayesian Networks to Model Pragmatic Dependencies |
d252624774 | This paper introduces the National Corpus of Irish, an initiative to develop a large national corpus of written and spoken contemporary Irish as well as related specialised corpora. The newly-compiled corpora will be hosted at corpas.ie, in what will become a hub for corpus-based research on the Irish language. Users will be able to search the corpora and download data generated during the project from the corpas.ie website and appropriate third-party repositories. Corpus 1 will be a balanced general-purpose corpus containing c. 155m words. Corpus 2 will be a written corpus consisting of c. 100m words. Corpus 3 will be a spoken corpus containing 6.5m words. Corpus 4 will be a monitor corpus with a target size of 1m words per year from 2000 onwards. Token, lemma, and n-gram frequency lists will be published at regular intervals on the project website, and language models will be published there and on other appropriate platforms during the course of the project. This paper focuses on the background and crucial scoping stage of the project, and examines user needs as identified in a survey of potential users. | Introducing the National Corpus of Irish Project |
d8287503 | We propose a novel unsupervised extractive approach for summarizing online reviews by exploiting review helpfulness ratings. In addition to using the helpfulness ratings for review-level filtering, we suggest using them as the supervision of a topic model for sentence-level content scoring. The proposed method is metadata-driven, requiring no human annotation, and generalizable to different kinds of online reviews. Our experiment based on a widely used multi-document summarization framework shows that our helpfulness-guided review summarizers significantly outperform a traditional content-based summarizer in both human evaluation and automated evaluation. | Empirical analysis of exploiting review helpfulness for extractive summarization of online reviews |
d10157763 | The development of natural language processing (NLP) systems that perform machine translation (MT) and information retrieval (IR) has highlighted the need for the automatic recognition of proper names. While various name recognizers have been developed, they suffer from being too limited; some only recognize one name class, and all are language specific. This work develops an approach to multilingual name recognition that uses machine learning and a portable framework to simplify the porting task by maximizing reuse and automation. | A Synopsis of Learning to Recognize Names Across Languages |
d10121099 | This paper discusses the application of Unification Categorial Grammar (UCG) to the framework of Isomorphic Grammars for Machine Translation pioneered by Landsbergen. The Isomorphic Grammars approach to MT involves developing the grammars of the source and target languages in parallel, in order to ensure that SL and TL expressions which stand in the translation relation have isomorphic derivations. The principal advantage of this approach is that knowledge concerning the translation equivalence of expressions may be directly exploited, obviating the need for answers to semantic questions that we do not yet have. Semantic and other information may still be incorporated, but as constraints on the translation relation, not as levels of textual representation. After introducing this approach to MT system design, and the basics of monolingual UCG, we show how the two can be integrated, and present an example from an implemented bidirectional English-Spanish fragment. Finally we present some outstanding problems with the approach. | Machine Translation Using Isomorphic UCGs |
d229365786 | ||
d30817944 | Since the effectiveness of MT adaptation relies on text repetitiveness, the question of how to measure repetitions in a text naturally arises. This work looks for and evaluates text features that might help predict the impact of MT adaptation on translation quality. In particular, the repetition rate metric we recently proposed is compared to other features employed in closely related NLP tasks. The comparison is carried out through a regression analysis between feature values and the MT performance gains of dynamically adapted versus non-adapted MT engines, on five different translation tasks. The main outcome of the experiments is that the repetition rate correlates better than any other considered feature with the MT gains yielded by online adaptation, although using all features jointly results in better predictions than any single feature. In this paper the word repetitiveness is not used with a negative meaning, e.g. boring or unpleasant. Although they are sometimes given slightly different meanings, in this work we consider prediction and forecast to be synonyms. | The Repetition Rate of Text as a Predictor of the Effectiveness of Machine Translation Adaptation |
d53079155 | Most models for learning word embeddings are trained on the context information of words, more precisely on first-order co-occurrence relations. In this paper, a metric is designed to estimate second-order co-occurrence relations based on context overlap (a sketch of this estimation step appears after this table). The estimated values are then used as augmented data to enhance the learning of word embeddings by joint training with existing neural word embedding models. Experimental results show that the enhanced approach yields better word vectors for word similarity tasks and some downstream NLP tasks. | Quantifying Context Overlap for Training Word Embeddings |
d53079675 | Distributional semantic models (DSMs) generally require sufficient examples for a word in order to learn a high-quality representation. This is in stark contrast with humans, who can guess the meaning of a word from only one or a few referents. In this paper, we propose Mem2Vec, a memory-based embedding learning method capable of acquiring high-quality word representations from fairly limited context. Our method directly adapts the representations produced by a DSM with a long-term memory to guide its guess of a novel word. Based on a pre-trained embedding space, the proposed method delivers impressive performance on two challenging few-shot word similarity tasks. Embeddings learned with our method also lead to considerable improvements over strong baselines on NER and sentiment classification. | Memory, Show the Way: Memory Based Few Shot Word Representation Learning |
d2878849 | Collecting supervised training data for automatic speech recognition (ASR) systems is both time consuming and expensive. In this paper we use the notion of virtual evidence in a graphical-model based system to reduce the amount of supervisory training data required for sequence learning tasks. We apply this approach to a TIMIT phone recognition system, and show that our VE-based training scheme can, relative to a baseline trained with the full segmentation, yield similar results with only 15.3% of the frames labeled (keeping the number of utterances fixed). | Virtual Evidence for Training Speech Recognizers using Partially Labeled data |
d1070525 | ||
d6161073 | This paper describes an application of reinforcement learning to determine a dialog policy for a complex collaborative task where policies for both the system and a proxy for a user of the system are learned simultaneously. With this approach a useful dialog policy is learned without the drawbacks of other approaches that require significant human interaction. The specific task that the agents were trained on was chosen for its complexity and requirement that both conversants bring task knowledge to the interaction, thus ensuring its collaborative nature. The results of our experiment show that you can use reinforcement learning to create an effective dialog policy, which employs a mixed initiative strategy, without the drawbacks of large amounts of data or significant human input. | Learning Mixed Initiative Dialog Strategies By Using Reinforcement Learning On Both Conversants |
d252548357 | Due to the increasing use of service chatbots in E-commerce platforms in recent years, customer satisfaction prediction (CSP) is gaining more and more attention. CSP is dedicated to evaluating subjective customer satisfaction in conversational service and thus helps improve customer service experience. However, previous methods focus on modeling customer-chatbot interaction at different single turns, neglecting the important dynamic satisfaction states throughout the customer journey. In this work, we investigate the problem of satisfaction states tracking and its effects on CSP in E-commerce service chatbots. To this end, we propose a dialogue-level classification model named DialogueCSP to track satisfaction states for CSP. In particular, we explore a novel two-step interaction module to represent the dynamic satisfaction states at each turn. In order to capture dialogue-level satisfaction states for CSP, we further introduce dialogue-aware attentions to integrate historical informative cues into the interaction module. To evaluate the proposed approach, we also build a Chinese E-commerce dataset for CSP. Experiment results demonstrate that our model significantly outperforms multiple baselines, illustrating the benefits of satisfaction states tracking on CSP. | Tracking Satisfaction States for Customer Satisfaction Prediction in E-commerce Service Chatbots |
d235415386 | Domain adaptation methods often exploit domain-transferable input features, a.k.a. pivots. The task of Aspect and Opinion Term Extraction presents a special challenge for domain transfer: while opinion terms largely transfer across domains, aspects change drastically from one domain to another (e.g. from restaurants to laptops). In this paper, we investigate and establish empirically a prior conjecture, which suggests that the linguistic relations connecting opinion terms to their aspects transfer well across domains and therefore can be leveraged for cross-domain aspect term extraction. We present several analyses supporting this conjecture, via experiments with four linguistic dependency formalisms to represent relation patterns. Subsequently, we present an aspect term extraction method that drives models to consider opinion-aspect relations via explicit multitask objectives. This method provides significant performance gains, even on top of a prior state-of-the-art linguistically-informed model, which are shown in analysis to stem from the relational pivoting signal. | Opinion-based Relational Pivoting for Cross-domain Aspect Term Extraction |
d2606461 | This paper proposes how to automatically identify Korean comparative sentences in text documents. This paper first investigates many comparative sentences, referring to previous studies, and then defines a set of comparative keywords from them. A sentence which contains one or more elements of the keyword set is called a comparative-sentence candidate. Finally, we use machine learning techniques to eliminate non-comparative sentences from the candidates. As a result, we achieved significant performance, an F1-score of 88.54%, in our experiments on various web documents. | Extracting Comparative Sentences from Korean Text Documents Using Comparative Lexical Patterns and Machine Learning Techniques |
d2712224 | In this paper, we describe an approach to annotate the propositions in the Penn Chinese Treebank. We describe how diathesis alternation patterns can be used to make coarse sense distinctions for Chinese verbs as a necessary step in annotating the predicate-structure of Chinese verbs. We then discuss the representation scheme we use to label the semantic arguments and adjuncts of the predicates. We discuss several complications for this type of annotation and describe our solutions. We then discuss how a lexical database with predicate-argument structure information can be used to ensure consistent annotation. Finally, we discuss possible applications for this resource. | Annotating the Propositions in the Penn Chinese Treebank |
d259370746 | Automated Essay Scoring (AES) aims to score essays written in response to specific prompts. Many AES models have been proposed, but most of them are either prompt-specific or prompt-adaptive and cannot generalize well to "unseen" prompts. This work focuses on improving the generalization ability of AES models from the perspective of domain generalization, where the data of target prompts cannot be accessed during training. Specifically, we propose a prompt-aware neural AES model to extract comprehensive representations for essay scoring, including both prompt-invariant and prompt-specific features. To improve the generalization of the representations, we further propose a novel disentangled representation learning framework. In this framework, a contrastive norm-angular alignment strategy and a counterfactual self-training strategy are designed to disentangle the prompt-invariant information and prompt-specific information in the representation. Extensive experimental results on both the ASAP and TOEFL11 datasets demonstrate the effectiveness of our method under the domain generalization setting. | Improving Domain Generalization for Prompt-Aware Essay Scoring via Disentangled Representation Learning |
d259376475 | This paper describes our system submitted for SemEval Task 7, Multi-Evidence Natural Language Inference for Clinical Trial Data. The task consists of two subtasks. Subtask 1 is to determine the relationships between clinical trial data (CTR) and statements. Subtask 2 is to output a set of supporting facts extracted from the premises, given CTR premises and statements as input. Through experiments, we found that our GPT2-based pre-trained models can obtain good results on Subtask 2, so we fine-tune a GPT2-based pre-trained model for it. We transform the evidence retrieval task into a binary classification task by combining premises and statements as input, where the output is whether the premises and statements match. We obtained a top-5 score in the evaluation phase of Subtask 2. | CPIC at SemEval-2023 Task 7: GPT2-based Model for Multi-evidence Natural Language Inference for Clinical Trial Data |
d90750 | ||
d218974520 | ||
d14550638 | We introduce several ideas that improve the performance of supervised information extraction systems with a pipeline architecture, when they are customized for new domains. We show that: (a) a combination of a sequence tagger with a rule-based approach for entity mention extraction yields better performance for both entity and relation mention extraction; (b) improving the identification of syntactic heads of entity mentions helps relation extraction; and (c) a deterministic inference engine captures some of the joint domain structure, even when introduced as a postprocessing step to a pipeline system. All in all, our contributions yield a 20% relative increase in F1 score in a domain significantly different from the domains used during the development of our information extraction system. | Customizing an Information Extraction System to a New Domain |
d9985645 | The present paper describes the current release of the Bochum English Countability Lexicon (BECL 2.1), a large empirical database consisting of lemmata from Open ANC (http://www.anc.org) with added senses from WordNet (Fellbaum, 1998). BECL 2.1 contains ≈ 11,800 annotated noun-sense pairs, divided into four major countability classes and 18 fine-grained subclasses. In the current version, BECL also provides information on nouns whose senses occur in more than one class, allowing a closer look at polysemy and homonymy with regard to countability. Further included are sets of similar senses using the Leacock and Chodorow (LCH) score for semantic similarity (Leacock & Chodorow, 1998), information on orthographic variation and on the completeness of all WordNet senses in the database, and an annotated representation of different types of proper names. The further development of BECL will investigate the different countability classes of proper names and the general relation between semantic similarity and countability, as well as recurring syntactic patterns for noun-sense pairs. Our current work on those patterns concerning mass nouns is briefly discussed, pointing to further research. The BECL 2.1 database is also publicly available via http://count-and-mass.org. | A Sense-based Lexicon of Count and Mass Expressions: The Bochum English Countability Lexicon |
d11663521 | We describe in this paper how different learning strategies can be applied on the same NLP task, namely chunking. The reference corpus is extracted from the French Treebank, the symbolic learning strategy used is grammatical inference and the statistical one is CRFs (Conditional Random Fields). As expected, the symbolic approach allows readability but is less effective than the statistical one. We then propose two distinct ways to combine both approaches and show that in both cases they benefit from one another. | How Symbolic Learning Can Help Statistical Learning (and vice versa) |
d1437152 | As part of the 2016 Computational Linguistics and Clinical Psychology (CLPsych) shared task, participants were asked to construct systems to automatically classify mental health forum posts into four categories, representing how urgently posts require moderator attention. This paper details the system implementation from the University of Florida, in which we compare several distinct models and show that best performance is achieved with domain-specific preprocessing, n-gram feature extraction, and cross-validated linear models. | |
d236486139 | We participated in the CASE shared task at ACL-IJCNLP 2021. This paper is a summary of our experiments and ideas for this shared task. For each subtask we share our approach, successful and failed methods, and our thoughts about them. We submitted our results once for every subtask, except for Subtask 3, in the task submission system, and present scores in this paper based on a validation set formed from the given training samples. The techniques and models we discuss include BERT, Multilingual BERT, oversampling, undersampling, data augmentation, and their combinations with each other. Most of the experiments we came up with were not completed, as time did not permit, but we share them here and plan to carry them out as suggested in the future work section of this document. | ALEM at CASE 2021 Task 1: Multilingual Text Classification on News Articles |
d5840780 | A mixture of positive (friendly) and negative (antagonistic) relations exists among users in most social media applications. However, many such applications do not allow users to explicitly express the polarity of their interactions. As a result, most research has either ignored negative links or been limited to the few domains where such relations are explicitly expressed (e.g. Epinions trust/distrust). We study text exchanged between users in online communities. We find that the polarity of the links between users can be predicted with high accuracy given the text they exchange. This allows us to build a signed network representation of discussions, where every edge has a sign: positive to denote a friendly relation, or negative to denote an antagonistic relation. We also connect our analysis to social psychology theories of balance, and show that the automatically predicted networks are consistent with those theories. Inspired by that, we present a technique for identifying subgroups in discussions by partitioning signed networks representing them. | Detecting Subgroups in Online Discussions by Modeling Positive and Negative Relations among Participants |
d8216034 | News comment is a new text genre in the Web 2.0 era. Many people often write comments to express their opinions about recent news events or topics after they read news articles. Because news comments are freely written without checking, they are very different from formal news texts. In particular, named entities in news comments are usually composed of some wrongly written words, informal abbreviations or aliases, which brings great difficulties for machine detection and understanding. This paper addresses the task of named entity recognition in Chinese news comments on the Web. We propose to leverage the entity information in the referred news article to improve named entity recognition in the news comments. Three different schemes are investigated to find useful entities in the news article for new feature generation in the CRFs model. Finally, a dictionary-based correction step is employed to further improve the results. We manually labelled a benchmark dataset with 60 pieces of news and 6000 comments downloaded from a popular Chinese news portal, www.sina.com.cn. The experimental results on the dataset show that our method is effective for this special task. | Named Entity Recognition in Chinese News Comments on the Web |
d252624500 | The Qur'an QA 2022 shared task aims at assessing the possibility of building systems that can extract answers to religious questions given relevant passages from the Holy Qur'an. This paper describes SMASH's system that was used to participate in this shared task. Our experiments reveal a data leakage issue among the different splits of the dataset. This leakage problem hinders the reliability of using the models' performance on the development dataset as a proxy for the ability of the models to generalize to new unseen samples. After creating better faithful splits from the original dataset, the basic strategy of fine-tuning a language model pretrained on classical Arabic text yielded the best performance on the new evaluation split. The results achieved by the model suggests that the small scale dataset is not enough to fine-tune large transformer-based language models in a way that generalizes well. Conversely, we believe that further attention could be paid to the type of questions that are being used to train the models given the sensitivity of the data. | SMASH at Qur'an QA 2022: Creating Better Faithful Data Splits for Low-resourced Question Answering Scenarios |
d1428529 | For research and development of an approach for automatically answering why-questions (why-QA) a data collection was created. The data set was obtained by way of elicitation and comprises a total of 395 why-questions. For each question, the data set includes the source document and one or two user-formulated answers. In addition, for a subset of the questions, user-formulated paraphrases are available. All question-answer pairs have been annotated with information on topic and semantic answer type. The resulting data set is of importance not only for our research, but we expect it to contribute to and stimulate other research in the field of why-QA. | Data for question answering: The case of why |
d13567024 | We show that applying semantic role label constraints to bracketing ITG alignment to train MT systems improves the quality of MT output in comparison to the conventional BITG and GIZA alignments. Moreover, we show that applying soft constraints to SRL-constrained BITG alignment leads to a better translation system compared to using hard constraints which appear too harsh to produce meaningful biparses. We leverage previous work demonstrating that BITG alignments are able to fully cover cross-lingual semantic frame alternations, by using semantic role labeling to further narrow BITG constraints, in a soft fashion that avoids losing relevant portions of the search space. SRL-based evaluation metrics like MEANT have shown that tuning towards preserving the shallow semantic structure across translations, robustly improves translation performance. Our approach brings the same intuition into the training phase. We show that our new alignment outperforms both conventional Moses and BITG alignment baselines in terms of the adequacy-oriented MEANT scores, while still producing comparable results in terms of edit distance metrics. | Improving Semantic SMT via Soft Semantic Role Label Constraints on ITG Alignments |
d9111381 | Current approaches to semantic parsing are supervised, requiring a considerable amount of training data which is expensive and difficult to obtain. This supervision bottleneck is one of the major difficulties in scaling up semantic parsing. We argue that a semantic parser can be trained effectively without annotated data, and introduce an unsupervised learning algorithm. The algorithm takes a self-training approach driven by confidence estimation. Evaluated over Geoquery, a standard dataset for this task, our system achieved 66% accuracy, compared to 80% for its fully supervised counterpart, demonstrating the promise of unsupervised approaches for this task. | Confidence Driven Unsupervised Semantic Parsing
d219307891 | ||
d7715096 | Abbreviations are common in biomedical documents and many are ambiguous in the sense that they have several potential expansions. Identifying the correct expansion is necessary for language understanding and important for applications such as document retrieval. Identifying the correct expansion can be viewed as a Word Sense Disambiguation (WSD) problem. A WSD system that uses a variety of knowledge sources, including two types of information specific to the biomedical domain, is also described. This system was tested on a corpus of ambiguous abbreviations, created by automatically identifying the correct expansion in Medline abstracts, and found to identify the correct expansion with up to 99% accuracy. | Disambiguation of Biomedical Abbreviations |
d13450512 | We examine how the recently explored class of linear transductions relates to finite-state models. Linear transductions have been neglected historically, but have gained recent interest in statistical machine translation modeling, due to empirical studies demonstrating that their attractive balance of generative capacity and complexity characteristics leads to improved accuracy and speed in learning alignment and translation models. Such work has until now characterized the class of linear transductions in terms of either (a) linear inversion transduction grammars (LITGs), which are linearized restrictions of inversion transduction grammars, or (b) linear transduction grammars (LTGs), which are bilingualized generalizations of linear grammars. In this paper, we offer a new alternative characterization of linear transductions, as relating four finite-state languages to each other. We introduce the devices of zipper finite-state automata (ZFSAs) and zipper finite-state transducers (ZFSTs) in order to construct the bridge between linear transductions and finite-state models. | Linear Transduction Grammars and Zipper Finite-State Transducers
d1861417 | We present a data-driven approach which exploits word alignment in a large parallel corpus with the objective of identifying those verb- and adjective-preposition combinations which are difficult for L2 language learners. This allows us, on the one hand, to provide language-specific ranked lists in order to help learners focus on particularly challenging combinations given their native language (L1). On the other hand, we provide extensive statistics on such combinations with the objective of facilitating automatic error correction for preposition use in learner texts. We evaluate these lists, first manually, and secondly automatically by applying our statistics to an error-correction task. | Crossing the Border Twice: Reimporting Prepositions to Alleviate L1-Specific Transfer Errors
d248780221 | Dialogue agents can leverage external textual knowledge to generate responses of a higher quality. To the best of our knowledge, most existing work on knowledge-grounded dialogue settings assumes that the user intention is always answerable. Unfortunately, this is impractical, as there is no guarantee that the knowledge retriever can always retrieve the desired knowledge. It is therefore crucial to incorporate fallback responses that respond to unanswerable contexts appropriately while responding to answerable contexts in an informative manner. We propose a novel framework that automatically generates a control token with the generator to bias the succeeding response towards informativeness for answerable contexts and fallback for unanswerable contexts in an end-to-end manner. Since no existing knowledge-grounded dialogue dataset considers this aim, we augment an existing dataset with unanswerable contexts to conduct our experiments. Automatic and human evaluation results indicate that naively incorporating fallback responses with controlled text generation still hurts informativeness for answerable contexts. In contrast, our proposed framework effectively mitigates this problem while still appropriately presenting fallback responses to unanswerable contexts. Such a framework also reduces the extra burden of the additional classifier and the overheads introduced in previous works, which operate in a pipeline manner. | On Controlling Fallback Responses for Grounded Dialogue Generation
d15416634 | We present an experimental statistical tree-to-tree machine translation system based on the multi bottom-up tree transducer, including rule extraction, tuning and decoding. Thanks to input parse forests and a "no pruning" strategy during decoding, the obtained translations are competitive. The drawbacks are a restricted coverage of 70% on test data, in part due to exact input parse tree matching, and a relatively high runtime. Advantages include easy redecoding with a different weight vector, since the full translation forests can be stored after the first decoding pass. | Exact Decoding with Multi Bottom-Up Tree Transducers
d3832345 | The frequency of words and syntactic constructions has been observed to have a substantial effect on language processing. This raises the question of what causes certain constructions to be more or less frequent. A theory of grounding (Phillips, 2010) would suggest that cognitive limitations might cause languages to develop frequent constructions in such a way as to avoid processing costs. This paper studies how current theories of working memory fit into theories of language processing and what influence memory limitations may have over reading times. Measures of such limitations are evaluated on eye-tracking data and the results are compared with predictions made by different theories of processing. | An Analysis of Frequency- and Memory-Based Processing Costs
d219310267 | ||
d53145837 | Transfer learning approaches for Neural Machine Translation (NMT) train an NMT model on the assisting-target language pair (parent model), which is later fine-tuned for the source-target language pair of interest (child model), with the target language being the same. In many cases, the assisting language has a different word order from the source language. We show that divergent word order adversely limits the benefits from transfer learning when little to no parallel corpus between the source and target language is available. To bridge this divergence, we propose to pre-order the assisting language sentence to match the word order of the source language and train the parent model. Our experiments on many language pairs show that bridging the word order gap leads to significant improvements in translation quality. | Addressing word-order Divergence in Multilingual Neural Machine Translation for extremely Low Resource Languages
d43782875 | Parallel Natural Language Processing is an application-oriented text addressing parallel implementations of aspects of natural language understanding. Although the focus is not on any particular cognitive theory, attention is paid to human performance data throughout. The implementation issues and difficulties considered provide a grounding and a checkpoint to balance the existing body of more theoretical work. The book is a collection of papers, each chapter standing on its own yet each also tied in with several other chapters in the topics addressed and the aspects examined. In general, for each language aspect included, at least two papers cover the topic from slightly differing viewpoints. Additionally, most topics are balanced by a contrasting point of view; for example, formal context-free grammars versus parsing of free speech with mistakes, and complete story understanding versus single noun-phrase shades of meaning. Each chapter is focused and thus manageable, enabling the authors to make some useful observations about their topic. The book as a whole thus serves as a look at the entirety of natural language understanding, and the editors have done a good job indeed of encasing and covering this topic. The first, rather lengthy, chapter provides a good review and overview of the topic and issues at hand. Starting with a look at psycholinguistic and cognitive arguments both for and against autonomous components versus interactive models of natural language processing, the editors conclude that parallel implementations of language processing should be further explored. This is followed by a discussion of parallelism from a computer science perspective: computing models, architectures, operating systems, and programming languages. A review of parallelism in natural language processing follows, with an eye toward the issues raised in the computer science review. A 24-page bibliography provides a thorough reference for the plethora of topics covered in this chapter. The remaining 12 chapters of the book are individually authored papers covering parallel natural language processing. The first series of papers focuses on context-free parsing, starting with a theoretical account of the advantages obtained through parallelism, and results of implementations on parallel hardware. Further proposed implementations are presented using object-oriented systems and connectionist paradigms. Finally, parsing is considered as a constraint satisfaction and energy minimization problem. The next series of papers refocuses on the interaction perspective, considering various methodologies: connectionism, concurrent processes, frame-based actors and bulletin boards, and object-oriented functional programming. Finally, the book closes with an examination of language generation using connectionist and parallel unification models. The interested reader may refer to Chapter 1, Section 5.3 for a thorough overview of individual chapter contents. Due to the implementation focus, the reader may need a strong computer science background to get the most from this book, which assumes more than passing familiarity with a wide range of topics such as object-oriented and functional programming, deadlock avoidance techniques, and efficient implementations of Cocke-Younger-Kasami parsing. However, in general each concept is briefly introduced, so the attentive reader can follow the specific topic even without a formal understanding. -- Jeanne Milostan, University of California, San Diego | Briefly Noted: Parallel natural language processing, Geert Adriaens and Udo Hahn (editors)
d259370636 | Modeling political actors is at the core of quantitative political science. Existing works have incorporated contextual information to better learn the representation of political actors for specific tasks through graph models. However, they are limited by the structure and objective of the training settings and cannot be generalized to all politicians and other tasks. In this paper, we propose a Unified Pre-training Architecture for Political Actor Modeling based on language (UPPAM). In UPPAM, we aggregate statements to represent political actors and learn the mapping from language to representation, instead of learning the representation of particular persons. We further design structure-aware contrastive learning and behavior-driven contrastive learning tasks to inject multidimensional information from the political context into the mapping. In this framework, we can profile political actors from different aspects and solve various downstream tasks. Experimental results demonstrate the effectiveness and generalization capability of our method. | UPPAM: A Unified Pre-training Architecture for Political Actor Modeling based on Language
d317731 | This paper proposes a novel hierarchical learning strategy to deal with the data sparseness problem in relation extraction by modeling the commonality among related classes. For each class in the hierarchy, either manually predefined or automatically clustered, a linear discriminative function is determined in a top-down way using a perceptron algorithm, with the lower-level weight vector derived from the upper-level weight vector. As the upper-level class normally has many more positive training examples than the lower-level class, the corresponding linear discriminative function can be determined more reliably. The upper-level discriminative function can then effectively guide the discriminative function learning at the lower level, which otherwise might suffer from limited training data. Evaluation on the ACE RDC 2003 corpus shows that the hierarchical strategy improves performance by 5.6 and 5.1 in F-measure on least- and medium-frequent relations respectively. It also shows that our system outperforms the previous best-reported system by 2.7 in F-measure on the 24 subtypes using the same feature set. | Modeling Commonality among Related Classes in Relation Extraction
d32484755 | This work provides a TAG account of gapping in English, based on a novel deletion-like operation that is referred to as de-anchoring. De-anchoring applies to elementary trees, but it is licensed by the derivation tree in two ways. Firstly, de-anchored trees must be linked to the root of the derivation tree by a chain of adjunctions, and the sub-graph of de-anchored nodes in a derivation tree must satisfy certain internal constraints. Secondly, de-anchoring must be licensed by the presence of a homomorphic antecedent derivation tree. | Gapping through TAG Derivation Trees
d49664123 | Most of the neural sequence-to-sequence (seq2seq) models for grammatical error correction (GEC) have two limitations: (1) a seq2seq model may not be well generalized with only limited error-corrected data; (2) a seq2seq model may fail to completely correct a sentence with multiple errors through normal seq2seq inference. We attempt to address these limitations by proposing a fluency boost learning and inference mechanism. Fluency boost learning generates fluency-boost sentence pairs during training, enabling the error correction model to learn how to improve a sentence's fluency from more instances, while fluency boost inference allows the model to correct a sentence incrementally through multi-round seq2seq inference until the sentence's fluency stops increasing. Experiments show our approaches improve the performance of seq2seq models for GEC, achieving state-of-the-art results on both the CoNLL-2014 and JFLEG benchmark datasets. | Fluency Boost Learning and Inference for Neural Grammatical Error Correction
d9138951 | With the increasing automation of health care information processing, it has become crucial to extract meaningful information from textual notes in electronic medical records. One of the key challenges is to extract and normalize entity mentions. State-of-the-art approaches have focused on the recognition of entities that are explicitly mentioned in a sentence. However, clinical documents often contain phrases that indicate the entities but do not contain their names. We term those implicit entity mentions and introduce the problem of implicit entity recognition (IER) in clinical documents. We propose a solution to IER that leverages entity definitions from a knowledge base to create entity models, projects sentences to the entity models and identifies implicit entity mentions by evaluating semantic similarity between sentences and entity models. The evaluation with 857 sentences selected for 8 different entities shows that our algorithm outperforms the most closely related unsupervised solution. The similarity value calculated by our algorithm proved to be an effective feature in a supervised learning setting, helping it to improve over the baselines, and achieving F1 scores of .81 and .73 for different classes of implicit mentions. Our gold standard annotations are made available to encourage further research in the area of IER. | Implicit Entity Recognition in Clinical Documents |
d226283775 | Natural language processing covers a wide variety of tasks requiring token-level or sentence-level understanding. In this paper, we provide a simple insight: most tasks can be represented in a single universal extraction format. We introduce a prototype model and provide an open-source and extensible toolkit called OpenUE for various extraction tasks. OpenUE allows developers to train custom models to extract information from text and supports quick model validation for researchers. Besides, OpenUE provides various functional modules to maintain sufficient modularity and extensibility. In addition to the toolkit, we also deploy an online demo with RESTful APIs to support real-time extraction without training and deploying. The online system can extract information for various tasks, including relational triple extraction, slot & intent detection, event extraction, and so on. We release the source code, datasets, and pretrained models to promote future research. | OpenUE: An Open Toolkit of Universal Extraction from Text
d252819333 | Although automatic text summarization (ATS) has been researched for several decades, the application of graph neural networks (GNNs) to this task started relatively recently. In this survey, we provide an overview of the rapidly evolving approach of using GNNs for the task of automatic text summarization. In particular, we provide detailed information on the functionality of GNNs in the context of ATS and a comprehensive overview of models utilizing this approach. | A Survey of Automatic Text Summarization using Graph Neural Networks
d45287459 | Current research being undertaken at both Cambridge and IBM is aimed at the construction of substantial lexicons containing lexical semantic information capable of use in automated natural language processing (NLP) applications. This work extends previous research on the semi-automatic extraction of lexical information from machine-readable versions of conventional dictionaries (MRDs) (see e.g. the papers and references in Boguraev & Briscoe, 1989; Walker et al., 1988). The motivation for this and previous research using MRDs is that entirely manual development of lexicons for practical NLP applications is infeasible, given the labour-intensive nature of lexicography (e.g. Atkins, 1988) and the resources likely to be allocated to NLP in the foreseeable future. In this paper, we motivate a particular approach to lexical semantics, briefly demonstrate its computational tractability, and explore the possibility of extracting the lexical information this approach requires from MRDs and, to some extent, textual corpora. | Enjoy the Paper: Lexical Semantics via Lexicology
d9862431 | In this paper, we introduce our recent work on re-annotating the deep information, which includes both the grammatical functional tags and the traces, in a Chinese scientific treebank. The issues with regard to re-annotation and its corresponding solutions are discussed. Furthermore, the process of the re-annotation work is described. | The Deep Re-annotation in a Chinese Scientific Treebank |
d7036291 | Many sentiment-analysis methods for the classification of reviews use training and test data based on star ratings provided by reviewers. However, when reading reviews it appears that the reviewers' ratings do not always give an accurate measure of the sentiment of the review. We performed an annotation study which showed that reader perceptions can also be expressed in ratings in a reliable way, and that they are closer to the text than the reviewer ratings. Moreover, we applied two common sentiment-analysis techniques and evaluated them on both reader and reviewer ratings. We come to the conclusion that it would be better to train models on reader ratings, rather than on reviewer ratings (as is usually done). | Sentiment Analysis of Reviews: Should we analyze writer intentions or reader perceptions?
d14270923 | We explore how features based on syntactic dependency relations can be utilized to improve performance on opinion mining. Using a transformation of dependency relation triples, we convert them into "composite back-off features" that generalize better than the regular lexicalized dependency relation features. Experiments comparing our approach with several other approaches that generalize dependency features or n-grams demonstrate the utility of composite back-off features. | Generalizing Dependency Features for Opinion Mining
d9774571 | In this paper we report on the Flemish-Dutch Agency for Human Language Technologies (HLT Agency or TST-Centrale in Dutch) in the Low Countries. We present its activities in its first decade of existence. The main goal of the HLT Agency is to ensure the sustainability of linguistic resources for Dutch. 10 years after its inception, the HLT Agency faces new challenges and opportunities. An important contextual factor is the rise of the infrastructure networks and proliferation of resource centres. We summarise some lessons learnt and we propose as future work to define for Dutch (which by extension can apply to any national language) a set of Basic LAnguage Infrastructure SErvices (BLAISE). As a conclusion, we state that the HLT Agency, also by its peculiar institutional status, has fulfilled and still is fulfilling an important role in maintaining Dutch as a fully fledged digital functional language. | A decade of HLT Agency activities in the Low Countries: from resource maintenance (BLARK) to service offerings (BLAISE) |
d13433981 | Application of NLP technology to production of closed-caption TV programs in Japanese for the hearing impaired. Takahiro Wakao, Telecommunications Advancement Organization of Japan (TAO); Terumasa Ehara, NHK Science and Technical Research Labs. / TAO |
d14298737 | In recent years, microblogs such as Twitter have emerged as a new communication channel. Twitter in particular has become the target of a myriad of content-based applications including trend analysis and event detection, but there has been little fundamental work on the analysis of word usage patterns in this text type. In this paper, inspired by the one-sense-per-discourse heuristic of Gale et al. (1992), we investigate user-level sense distributions, and detect strong support for "one sense per tweeter". As part of this, we construct a novel sense-tagged lexical sample dataset based on Twitter and a web corpus. | One Sense per Tweeter ... and Other Lexical Semantic Tales of Twitter
d248780067 | The introduction of immensely large causal language models (CLMs) has rejuvenated the interest in open-ended text generation. However, controlling the generative process for these Transformer-based models remains largely an unsolved problem. Earlier work has explored either plug-and-play decoding strategies or more powerful but blunt approaches such as prompting. There hence currently exists a trade-off between fine-grained control and the capability for more expressive high-level instructions. To alleviate this trade-off, we propose an encoder-decoder architecture that enables intermediate text prompts at arbitrary time steps. We propose a resource-efficient method for converting a pre-trained CLM into this architecture and demonstrate its potential in various experiments, including the novel task of contextualized word inclusion. Our method provides strong results in multiple experimental settings, proving itself to be both expressive and versatile. | Fine-Grained Controllable Text Generation Using Non-Residual Prompting
d246904994 | This paper describes the ESPnet-ST group's IWSLT 2021 submission in the offline speech translation track. This year we made various efforts on training data, architecture, and audio segmentation. On the data side, we investigated sequence-level knowledge distillation (SeqKD) for end-to-end (E2E) speech translation. Specifically, we used multi-referenced SeqKD from multiple teachers trained on different amounts of bitext. On the architecture side, we adopted the Conformer encoder and the Multi-Decoder architecture, which equips dedicated decoders for speech recognition and translation tasks in a unified encoder-decoder model and enables search in both source and target language spaces during inference. We also significantly improved audio segmentation by using the pyannote.audio toolkit and merging multiple short segments for long context modeling. Experimental evaluations showed that each of them contributed to large improvements in translation performance. Our best E2E system combined all the above techniques with model ensembling and achieved 31.4 BLEU on the 2-ref of tst2021 and 21.2 BLEU and 19.3 BLEU on the two single references of tst2021. | ESPnet-ST IWSLT 2021 Offline Speech Translation System |
d218977385 | ||
d219309480 | ||
d7425534 | Establishing correspondences between wordnets of different languages is essential both to multilingual knowledge processing and to bootstrapping wordnets of low-density languages. We claim that such correspondences must be based on lexical semantic relations, rather than top ontology or word translations. In particular, we define a translation equivalence relation as a bilingual lexical semantic relation. Such relations can then be part of a logical entailment predicting whether source language semantic relations will hold in a target language or not. Our claim is tested with a study of 210 Chinese lexical lemmas and their possible semantic relation links bootstrapped from the Princeton WordNet. The results show that lexical semantic relation translations are indeed highly precise when they are logically inferable. | Translating Lexical Semantic Relations: The First Step Towards Multilingual Wordnets
d9227029 | We report on an investigation of the pragmatic category of topic in Danish dialog and its correlation to surface features of NPs. Using a corpus of 444 utterances, we trained a decision tree system on 16 features. The system achieved near-human performance with success rates of 84-89% and F1-scores of 0.63-0.72 in 10-fold cross-validation tests (human performance: 89% and 0.78). The most important features turned out to be preverbal position, definiteness, pronominalisation, and non-subordination. We discovered that NPs in epistemic matrix clauses (e.g. "I think . . . ") were seldom topics, and we suspect that this holds for other interpersonal matrix clauses as well. | A corpus-based approach to topic in Danish dialog
d5502205 | This paper describes a Verb Phrase Ellipsis (VPE) detection system, built for robustness, accuracy and domain independence. The system is corpus-based, and uses machine learning techniques on free text that has been automatically parsed. Tested on a mixed corpus comprising a range of genres, the system achieves a 70% F1-score. This system is designed as the first stage of a complete VPE resolution system that takes free text as input, detects VPEs, and then proceeds to find the antecedents and resolve them. | Robust VPE detection using Automatically Parsed Text
d12044970 | In this paper, we present preliminary work on corpus-based anaphora resolution of discourse deixis in German. Our annotation guidelines provide linguistic tests for locating the antecedent, and for determining the semantic types of both the antecedent and the anaphor. The corpus consists of selected speaker turns from the Europarl corpus. | Annotating Discourse Anaphora |
d248780200 | In this paper, we challenge the ACL community to reckon with historical and ongoing colonialism by adopting a set of ethical obligations and best practices drawn from the Indigenous studies literature. While the vast majority of NLP research focuses on a very small number of very high resource languages (English, Chinese, etc.), some work has begun to engage with Indigenous languages. No research involving Indigenous language data can be considered ethical without first acknowledging that Indigenous languages are not merely very low resource languages. The toxic legacy of colonialism permeates every aspect of interaction between Indigenous communities and outside researchers. Ethical research must actively challenge this colonial legacy by acknowledging and opposing its continuing presence, and by explicitly acknowledging and centering Indigenous community goals and Indigenous ways of knowing. To this end, we propose that the ACL draft and adopt an ethical framework for NLP researchers and computational linguists wishing to engage in research involving Indigenous languages. We included abstracts for papers published at ACL, EACL, AACL, NAACL, EMNLP, and the Computational Linguistics journal; see Appendix A for details. |
d15996543 | Machine comprehension tests a system's ability to understand a piece of text through a reading comprehension task. For this task, we propose an approach using the Abstract Meaning Representation (AMR) formalism. We construct meaning representation graphs for the given text and for each question-answer pair by merging the AMRs of comprising sentences using cross-sentential phenomena such as coreference and rhetorical structures. Then, we reduce machine comprehension to a graph containment problem. We posit that there is a latent mapping of the question-answer meaning representation graph onto the text meaning representation graph that explains the answer. We present a unified max-margin framework that learns to find this mapping (given a corpus of texts and question-answer pairs), and uses what it learns to answer questions on novel texts. We show that this approach leads to state-of-the-art results on the task. | Machine Comprehension using Rich Semantic Representations