_id | text | title |
|---|---|---|
d58514196 | This paper describes a unification-based dependency parsing method for governor-final languages. Our method can parse not only projective sentences but also non-projective sentences. The feature structures in the tradition of the unification-based formalism are used for writing dependency relations. We use structure sharing and local ambiguity packing to save storage. (This paper was supported in part by the NON DIRECTED RESEARCH FUND, Korea Research Foundation, 1989.) [...] and morphological markings are obviously contingent on relations between wordforms rather than on constituency (Mel'cuk, 1988). One approach to parsing free word order languages is principle-based parsing (Berwick, 1987). The other approach is dependency parsing (Mel'cuk, 1988). This paper describes a unification-based dependency parsing method for governor-final (head-final) languages like Korean and Japanese. We develop the parsing method with special reference to Korean, but the method can be adapted directly to Japanese parsing. Korean and Japanese are relatively free word order languages (Kwon, 1990). Although their word order is free except that dependents always precede their governor, word order variations lead to different emphasis on the topic and the focus. In contrast, their morpheme order is fixed at the level of words. | UNIFICATION-BASED DEPENDENCY PARSING OF GOVERNOR-FINAL LANGUAGES |
d46941044 | Second Language Acquisition Modeling is the task of predicting whether a second language learner will respond correctly in future exercises based on their learning history. In this paper, we propose a neural network based system to utilize rich contextual, linguistic and user information. Our neural model consists of a Context encoder, a Linguistic feature encoder, a User information encoder and a Format information encoder (CLUF). Furthermore, a decoder is introduced to combine such encoded features and make final predictions. Our system ranked first in the English track and second in the Spanish and French tracks, with AUROC scores of 0.861, 0.835 and 0.854 respectively. | CLUF: a Neural Model for Second Language Acquisition Modeling |
d226300380 | Cet article présente un travail de description phonotactique du russe basé sur une analyse de 15 000 lemmes transcrits phonologiquement et syllabés. Un ensemble de données quantitatives relatives aux structures syllabiques a été examiné dans une perspective typologique. À partir d'une analyse distributionnelle des segments consonantiques ±PAL, des probabilités phonotactiques ont été estimées. Les résultats montrent que le russe suit globalement les tendances générales observées dans les langues de la base de données G-ULSID (Vallée, Rousset & Rossato, 2009) et mettent en évidence des asymétries de distribution des consonnes ±PAL à l'intérieur de la syllabe. Le fait que le système consonantique du russe présente une distinctivité ±PAL étendue à tous les lieux d'articulation semble contraindre les cooccurrences entre consonne et voyelle d'une même syllabe prédites par la théorie Frame/Content (MacNeilage, 1998) et trouvées dans de nombreuses langues. ABSTRACT This paper presents a phonotactic description of Russian based on an analysis of 15,000 phonologically transcribed and syllabified lemmas. A set of quantitative data relating to the syllabic structures of Russian has been examined in a typological perspective. From a distributional analysis of ±PAL consonant segments, phonotactic probabilities were estimated. Our results show that Russian broadly follows the general trends observed in the languages of the G-ULSID database (Vallée, Rousset & Rossato, 2009) and highlight asymmetries in the distribution of ±PAL consonants within syllable units. The fact that Russian presents ±PAL distinctiveness extended to all its consonant places of articulation seems to constrain tautosyllabic consonant/vowel cooccurrences predicted by the Frame/Content Theory (MacNeilage, 1998) and overrepresented in many languages. | La phonotaxe du russe dans la typologie des langues : focus sur la palatalisation |
d233364976 | ||
d138976 | In spoken language translation, it is crucial that an automatic speech recognition (ASR) system produces outputs that can be adequately translated by a statistical machine translation (SMT) system. While word error rate (WER) is the standard metric of ASR quality, the assumption that each ASR error type is weighted equally is violated in a SMT system that relies on structured input. In this paper, we outline a statistical framework for analyzing the impact of specific ASR error types on translation quality in a speech translation pipeline. Our approach is based on linear mixed-effects models, which allow the analysis of ASR errors on a translation quality metric. The mixed-effects models take into account the variability of ASR systems and the difficulty of each speech utterance being translated in a specific experimental setting. We use mixed-effects models to verify that the ASR errors that compose the WER metric do not contribute equally to translation quality and that interactions exist between ASR errors that cumulatively affect a SMT system's ability to translate an utterance. Our experiments are carried out on the English to French language pair using eight ASR systems and seven post-edited machine translation references from the IWSLT 2013 evaluation campaign. We report significant findings that demonstrate differences in the contributions of specific ASR error types toward speech translation quality and suggest further error types that may contribute to translation difficulty. | Assessing the Impact of Speech Recognition Errors on Machine Translation Quality |
d12013822 | We describe our submission to the CoNLL 2017 shared task, which exploits the shared common knowledge of a language across different domains via a domain adaptation technique. Our approach is an extension to the recently proposed adversarial training technique for domain adaptation, which we apply on top of a graph-based neural dependency parsing model on bidirectional LSTMs. In our experiments, we find our baseline graph-based parser already outperforms the official baseline model (UDPipe) by a large margin. Further, by applying our technique to the treebanks of the same language with different domains, we observe an additional gain in the performance, in particular for the domains with less training data. | Adversarial Training for Cross-Domain Universal Dependency Parsing |
d12624704 | This paper describes a sentiment classification system designed for SemEval-2015, Task 10, Subtask B. The system employs a constrained, supervised text categorization approach. Firstly, since thorough preprocessing of tweet data was shown to be effective in previous SemEval sentiment classification tasks, various preprocessing steps were introduced to enhance the quality of lexical information. Secondly, a Naive Bayes classifier is used to detect tweet sentiment. The classifier is trained only on the training data provided by the task organizers. The system makes use of external human-generated lists of positive and negative words at several steps throughout classification. The system produced an overall F-score of 59.26 on the official test set. | SWASH: A Naive Bayes Classifier for Tweet Sentiment Identification |
d16117437 | In this paper, we present a new ranking scheme, collaborative ranking (CR). In contrast to traditional non-collaborative ranking scheme which solely relies on the strengths of isolated queries and one stand-alone ranking algorithm, the new scheme integrates the strengths from multiple collaborators of a query and the strengths from multiple ranking algorithms. We elaborate three specific forms of collaborative ranking, namely, micro collaborative ranking (MiCR), macro collaborative ranking (MaCR) and micro-macro collaborative ranking (MiMaCR). Experiments on entity linking task show that our proposed scheme is indeed effective and promising. | Collaborative Ranking: A Case Study on Entity Linking |
d13902850 | We present a novel approach for translation model (TM) adaptation using phrase training. The proposed adaptation procedure is initialized with a standard general-domain TM, which is then used to perform phrase training on a smaller in-domain set. This way, we bias the probabilities of the general TM towards the in-domain distribution. Experimental results on two different lectures translation tasks show significant improvements of the adapted systems over the general ones. Additionally, we compare our results to mixture modeling, where we report gains when using the suggested phrase training adaptation method. | Phrase Training Based Adaptation for Statistical Machine Translation |
d196184195 | The development of computational methods to detect abusive language in social media within variable and multilingual contexts has recently gained significant traction. The growing interest is confirmed by the large number of benchmark corpora for different languages developed in recent years. However, abusive language behaviour is multifaceted and available datasets feature different topical focuses. This makes abusive language detection a domain-dependent task, and building a robust system to detect general abusive content a first challenge. Moreover, most resources are available for English, which makes detecting abusive language in low-resource languages a further challenge. We address both challenges by considering ten publicly available datasets across different domains and languages. A hybrid approach with deep learning and a multilingual lexicon for cross-domain and cross-lingual detection of abusive content is proposed and compared with other simpler models. We show that training a system on general abusive language datasets will produce a cross-domain robust system, which can be used to detect other more specific types of abusive content. We also found that using the domain-independent lexicon HurtLex is useful to transfer knowledge between domains and languages. In the cross-lingual experiment, we demonstrate the effectiveness of our joint-learning model also in out-domain scenarios. | Cross-domain and Cross-lingual Abusive Language Detection: a Hybrid Approach with Deep Learning and a Multilingual Lexicon |
d59729889 | L'extraction et la valorisation de données biographiques contenues dans les dépêches de presse est un processus complexe. Pour l'appréhender correctement, une définition complète, précise et fonctionnelle de cette information est nécessaire. Or, la difficulté que l'on rencontre lors de l'analyse préalable de la tâche d'extraction réside dans l'absence d'une telle définition. Nous proposons ici des conventions dans le but d'en développer une. Le principal concept utilisé pour son expression est la structuration de l'information sous forme de triplets {sujet, relation, objet}. Le début de définition ainsi construit est exploité lors de l'étape d'extraction d'informations par transducteurs à états finis. Il permet également de suggérer une solution d'implémentation pour l'organisation des données extraites en base de connaissances. Mots-clés : information biographique, modélisation, extraction d'information, transducteur à états finis, entité nommée, relation, base de connaissances. Abstract: Extraction and valorization of biographical information from news wires is a complex task. In order to handle it correctly, it is necessary to have a complete, accurate and functional definition. The preliminary analysis of the extraction task reveals the lack of such a definition. This article proposes some conventions to develop it. Information modelling as triples {subject, relation, object} is the main concept used at this level. This incomplete definition can be used during the information extraction step. It also allows to suggest some implementation solutions for data organisation as a knowledge base. | L'information biographique : modélisation, extraction et organisation en base de connaissances |
d8983941 | In this paper we describe our participation in the SemEval 2007 Web People Search task. Our main aim in participating was to adapt language modeling tools for the task, and to experiment with various document representations. Our main finding is that single pass clustering, using title, snippet and body to represent documents, is the most effective setting. | UVA: Language Modeling Techniques for Web People Search |
d4640543 | In neural interactive translation prediction, a system provides translation suggestions ("auto-complete" functionality) for human translators. These translation suggestions may be rejected by the translator in predictable ways; being able to estimate confidence in the quality of translation suggestions could be useful in providing additional information for users of the system. We show that a very small set of features (which are already generated as byproducts of the process of translation prediction) can be used in a simple model to estimate confidence for interactive translation prediction. | Lightweight Word-Level Confidence Estimation for Neural Interactive Translation Prediction |
d37814695 | Although represented as such in wordnets, word senses are not discrete. To handle word senses as fuzzy objects, we exploit the graph structure of synonymy pairs acquired from different sources to discover synsets where words have different membership degrees that reflect confidence. Following this approach, a wide-coverage fuzzy thesaurus was discovered from a synonymy network compiled from seven Portuguese lexical-semantic resources. Based on a crowdsourcing evaluation, we can say that the quality of the obtained synsets is far from perfect but, as expected in a confidence measure, it increases significantly for higher cut-points on the membership and, at a certain point, reaches 100% correction rate. | Discovering Fuzzy Synsets from the Redundancy in Different Lexical-Semantic Resources |
d28775375 | Refer-iTTS: A System for Referring in Spoken Installments to Objects in Real-World Images | |
d28363891 | There is a rich variety of data sets for sentiment analysis (viz., polarity and subjectivity classification). For the more challenging task of detecting discrete emotions following the definitions of Ekman and Plutchik, however, there are much fewer data sets, and notably no resources for the social media domain. This paper contributes to closing this gap by extending the SemEval 2016 stance and sentiment dataset with emotion annotation. We (a) analyse annotation reliability and annotation merging; (b) investigate the relation between emotion annotation and the other annotation layers (stance, sentiment); (c) report modelling results as a baseline for future work. | Annotation, Modelling and Analysis of Fine-Grained Emotions on a Stance and Sentiment Detection Corpus |
d252763323 | ||
d44090948 | Words play a central role in language and thought. Factor analysis studies have shown that the primary dimensions of meaning are valence, arousal, and dominance (VAD). We present the NRC VAD Lexicon, which has human ratings of valence, arousal, and dominance for more than 20,000 English words. We use Best-Worst Scaling to obtain fine-grained scores and address issues of annotation consistency that plague traditional rating scale methods of annotation. We show that the ratings obtained are vastly more reliable than those in existing lexicons. We also show that there exist statistically significant differences in the shared understanding of valence, arousal, and dominance across demographic variables such as age, gender, and personality. | Obtaining Reliable Human Ratings of Valence, Arousal, and Dominance for 20,000 English Words |
d44104104 | Abstract Meaning Representation (AMR) parsing aims at abstracting away from the syntactic realization of a sentence, and denoting only its meaning in a canonical form. As such, it is ideal for paraphrase detection, a problem in which one is required to specify whether two sentences have the same meaning. We show that naïve use of AMR in paraphrase detection is not necessarily useful, and turn to describe a technique based on latent semantic analysis in combination with AMR parsing that significantly advances state-of-the-art results in paraphrase detection for the Microsoft Research Paraphrase Corpus. Our best results in the transductive setting are 86.6% for accuracy and 90.0% for F1 measure. | Abstract Meaning Representation for Paraphrase Detection |
d18324297 | This paper presents a collapsed variational Bayesian inference algorithm for PCFGs that has the advantages of two dominant Bayesian training algorithms for PCFGs, namely variational Bayesian inference and Markov chain Monte Carlo. In three kinds of experiments, we illustrate that our algorithm achieves close performance to the Hastings sampling algorithm while using an order of magnitude less training time; and outperforms the standard variational Bayesian inference and the EM algorithms with similar training time. | Collapsed Variational Bayesian Inference for PCFGs |
d11310214 | This paper deals with the derivational morphology of automatic word form recognition. It presents a set of declarative rules which augment lexical entries with information governing the allomorphic changes of derivation in addition to the existing allomorphy rules for inflection. The resulting component generates a single lexicon for derivational and inflectional allomorphy from an elementary base-form lexicon. Thereby our focus lies both on avoiding redundant allomorph entries and on the suitability of the resulting lexical entries for morphological analysis. We prove the usability of our approach by using the generated allomorphs as the lexicon for automatic wordform recognition. | A Multilayered Declarative Approach to Cope with Morphotactics and Allomorphy in Derivational Morphology |
d204800316 | The use of the internet as a fast medium of spreading fake news reinforces the need for computational tools that combat it. Techniques that train fake news classifiers exist, but they all assume an abundance of resources including large labeled datasets and expert-curated corpora, which low-resource languages may not have. In this work, we make two main contributions: First, we alleviate resource scarcity by constructing the first expertly-curated benchmark dataset for fake news detection in Filipino, which we call "Fake News Filipino." Second, we benchmark Transfer Learning (TL) techniques and show that they can be used to train robust fake news classifiers from little data, achieving 91% accuracy on our fake news dataset, reducing the error by 14% compared to established few-shot baselines. Furthermore, lifting ideas from multitask learning, we show that augmenting transformer-based transfer techniques with auxiliary language modeling losses improves their performance by adapting to writing style. Using this, we improve TL performance by 4-6%, achieving an accuracy of 96% on our best model. Lastly, we show that our method generalizes well to different types of news articles, including political news, entertainment news, and opinion articles. | Localization of Fake News Detection via Multitask Transfer Learning |
d219301287 | ||
d252186437 | Cognitive effort is the core element of translation and interpreting process studies, but theoretical and practical issues such as the concept, the characteristics and the measurement of cognitive effort still need to be clarified. This paper firstly analyzes the concept and the research characteristics of cognitive effort in translation and interpreting process studies. Then, based on the cost concept (internal cost, opportunity cost) and the reward concept (need for cognition, learned industriousness) of cognitive effort, it carries out multi-dimensional analysis of the characteristics of cognitive effort. Finally, it points out the enlightenment of multi-dimensional consideration of cognitive effort to translation and interpreting process studies. | Multi-dimensional Consideration of Cognitive Effort in Translation and Interpreting Process Studies |
d10724165 | ON THE RELATIONSHIP BETWEEN USER MODELS AND DISCOURSE MODELS | |
d13962642 | Coherence relations have usually been taken to link clauses and larger units. After arguing that some phrases can be seen as discourse units, a computational account for such phrases is presented that integrates surface-based criteria with inferential ones. This approach can be generalized to treat intra-sentential cue-phrases. Since cue-phrases are not always present, referential relations between nominal expressions are additionally used to derive a text's discourse structure. | From Elementary Discourse Units to Complex Ones |
d14565472 | This paper reports on a speech database that includes non-native pronunciation variants of city names/town names from several European languages. The database is designed as a research tool for the study of pronunciation variants in this specific domain that occur in different groups of non-native speakers. The ongoing data collection currently comprises 20 to 27 native speakers of 3 languages each who pronounce material from 5 languages. The languages covered are English, German, French, Italian, and Dutch. All languages are examined as the source language (L1) and as the target language (L2). For the first stage of the data collection, the targeted status is a collection of 5 x 5 language directions with at least 20 speakers per native language. After a brief overview of related studies and an outline of some specifics of proper names (place names in particular) in the context of speech technology applications, the database design and the current stage of the data collection is described. | A Database for the Analysis of Cross-Lingual Pronunciation Variants of European City Names |
d248780019 | This paper provides an overview of NVIDIA NeMo's speech translation systems for the IWSLT 2022 Offline Speech Translation Task. Our cascade system consists of 1) a Conformer RNN-T automatic speech recognition model, 2) a punctuation-capitalization model based on a pretrained T5 encoder, and 3) an ensemble of Transformer neural machine translation models fine-tuned on TED talks. Our end-to-end model has fewer parameters and consists of a Conformer encoder and a Transformer decoder. It relies on the cascade system by re-using its pre-trained ASR encoder and training on synthetic translations generated with the ensemble of NMT models. Our En→De cascade and end-to-end systems achieve 29.7 and 26.2 BLEU on the 2020 test set respectively, both outperforming the previous year's best of 26 BLEU. | NVIDIA NeMo Offline Speech Translation Systems for IWSLT 2022 |
d2626219 | This paper addresses the task of detecting identity deception in language. Using a novel identity deception dataset, consisting of real and portrayed identities from 600 individuals, we show that we can build accurate identity detectors targeting both age and gender, with accuracies of up to 88%. We also perform an analysis of the linguistic patterns used in identity deception, which lead to interesting insights into identity portrayers. | Identity Deception Detection |
d9898742 | We are annotating the complete 20 million Dutch PAROLE corpus with PoS and lemma. The morphosyntactic tagging of 250,000 words during the PAROLE project was the first confrontation of the fine-grained Dutch PAROLE tagset and its 'functional' mode of application, with real corpus data. The correction of the manual tagging and the compilation of a 100,000 words training corpus for the automatic tagger initiated the evaluation of the suitability of the tagset and the methodology of tag assignment, which topics will both be discussed in this paper. The reality of corpus data brought about a number of adaptations, linguistic restrictions and generalisations. The most salient tagger results will be presented. Our experience is relevant for a new project: the Integrated Language Database of 8th -21st Century Dutch (ILD), which will contain a text corpus covering all these centuries. The corpus will be annotated with lemma and PoS, in which process historical lexica will be used. Obviously, we will have to tailor tagset and methodology of tag assignment optimally to these purposes. | Implementation and Evaluation of PAROLE PoS in a National Context |
d16249912 | We report on the design and characteristics of the LAST MINUTE corpus. The recordings in this data collection are taken from a WOZ experiment that allows us to investigate how users interact with a companion system in a mundane situation with the need for planning, re-planning and strategy change. The resulting corpus is distinguished with respect to aspects of size (e.g. number of subjects, length of sessions, number of channels, total length of records) as well as quality (e.g. balancedness of cohort, well designed scenario, standard based transcripts, psychological questionnaires, accompanying in-depth interviews). | LAST MINUTE: a Multimodal Corpus of Speech-based User-Companion Interactions |
d829201 | A growing body of machine translation research aims to exploit lexical patterns (e.g., n-grams and phrase pairs) with gaps (Simard et al., 2005; Chiang, 2005; Xiong et al., 2011). Typically, these "gappy patterns" are discovered using heuristics based on word alignments or local statistics such as mutual information. In this paper, we develop generative models of monolingual and parallel text that build sentences using gappy patterns of arbitrary length and with arbitrarily many gaps. We exploit Bayesian nonparametrics and collapsed Gibbs sampling to discover salient patterns in a corpus. We evaluate the patterns qualitatively and also add them as features to an MT system, reporting promising preliminary results. | Generative Models of Monolingual and Bilingual Gappy Patterns |
d2253147 | We present a Bayesian formulation for weakly-supervised learning of a Combinatory Categorial Grammar (CCG) supertagger with an HMM. We assume supervision in the form of a tag dictionary, and our prior encourages the use of crosslinguistically common category structures as well as transitions between tags that can combine locally according to CCG's combinators. Our prior is theoretically appealing since it is motivated by language-independent, universal properties of the CCG formalism. Empirically, we show that it yields substantial improvements over previous work that used similar biases to initialize an EM-based learner. Additional gains are obtained by further shaping the prior with corpus-specific information that is extracted automatically from raw text and a tag dictionary. | Weakly-Supervised Bayesian Learning of a CCG Supertagger |
d236999909 | ||
d250390452 | We describe our system for the SemEval 2022 task on detecting misogynous content in memes. This is a pressing problem and we explore various methods ranging from traditional machine learning to deep learning models such as multimodal transformers. We propose a multimodal BERT architecture that uses information from both image and text. We further incorporate common world knowledge from pretrained CLIP and Urban Dictionary. We also provide qualitative analysis to support our model. Our best performing model achieves an F1 score of 0.679 on Task A (Rank 5) and 0.680 on Task B (Rank 13) of the hidden test set. Our code is available at https://github.com/paridhimaheshwari2708/MAMI. | TeamOtter at SemEval-2022 Task 5: Detecting Misogynistic Content in Multimodal Memes |
d13469071 | Current research on Question Answering concerns more complex questions than factoid ones. While definition questions have been investigated by many researchers, how to acquire accurate answers remains a core problem for definition QA. Although some systems use web knowledge bases to improve answer acquisition, we propose an approach that leverages them in a more effective way. After summarizing definitions from web knowledge bases and merging them into a definition set, a two-stage retrieval model based on Probabilistic Latent Semantic Analysis is used to seek documents and sentences whose topic is similar to those in the definition set. Then, an answer ranking model is employed to select sentences that are both statistically and semantically similar to the sentences in the definition set. Finally, sentences are ranked as answer candidates according to their scores. Experiments indicate the following conclusions: 1) specific summarization technologies improve the performance of definition QA systems; 2) topic-based models can be more helpful than centroid-based models for definition QA systems in solving synonym and data sparseness problems; 3) shallow semantic analysis is effective in finding discriminative characteristics of definitions automatically. | Finding Answers to Definition Questions Using Web Knowledge Bases |
d1458921 | In this paper, we propose a novel class of graphs, the tripartite directed acyclic graphs (tDAGs), to model first-order rule feature spaces for sentence pair classification. We introduce a novel algorithm for computing the similarity in first-order rewrite rule feature spaces. Our algorithm is extremely efficient and, as it computes the similarity of instances that can be represented in explicit feature spaces, it is a valid kernel function. | Efficient kernels for sentence pair classification |
d237204594 | ||
d15378527 | Spanish is the third-most used language on the Internet, after English and Chinese, with a total of 7.7% of Internet users (more than 277 million users) and a huge user growth of more than 1,400%. However, most work on sentiment analysis has focused on English. This paper describes a deep learning system for Spanish sentiment analysis. To the best of our knowledge, this is the first work that explores the use of a convolutional neural network for polarity classification of Spanish tweets. | Exploring Convolutional Neural Networks for Sentiment Analysis of Spanish tweets |
d16484985 | The rapid growth of Internet technology, especially its user-friendly approach, has helped increase the number of Internet users and the amount of information in cyberspace. There is a countless amount of information in many languages. This has spurred the development of MT systems. The focus of our approach is to increase the reusability of those MT systems by using cross-system machine translation. Using a natural language such as English as an intermediate language will help us use the information on the Internet qualitatively. In this paper, we point out some problems that may cause efficiency to decrease when a sentence is translated from a second language into a third language. A novel method is proposed to solve this problem. |
d229365797 | ||
d8360741 | We consider the problem of disambiguating concept mentions appearing in documents and grounding them in multiple knowledge bases, where each knowledge base addresses some aspects of the domain. This problem poses a few additional challenges beyond those addressed in the popular Wikification problem. Key among them is that most knowledge bases do not contain the rich textual and structural information Wikipedia does; consequently, the main supervision signal used to train Wikification rankers does not exist anymore. In this work we develop an algorithmic approach that, by carefully examining the relations between various related knowledge bases, generates an indirect supervision signal it uses to train a ranking model that accurately chooses knowledge base entries for a given mention; moreover, it also induces prior knowledge that can be used to support a global coherent mapping of all the concepts in a given document to the knowledge bases. Using the biomedical domain as our application, we show that our indirectly supervised ranking model outperforms other unsupervised baselines and that the quality of this indirect supervision scheme is very close to a supervised model. We also show that considering multiple knowledge bases together has an advantage over grounding concepts to each knowledge base individually. | Concept Grounding to Multiple Knowledge Bases via Indirect Supervision |
d252819308 | This paper presents a detailed foundational empirical case study of the nature of out-of-vocabulary words encountered in modern text in a moderate-resource language such as Bulgarian, and a multi-faceted distributional analysis of the underlying word-formation processes that can aid in their compositional translation, tagging, parsing, language modeling, and other NLP tasks. Given that out-of-vocabulary (OOV) words generally present a key open challenge to NLP and machine translation systems, especially toward the lower limit of resource availability, there are useful practical insights, as well as corpus-linguistic insights, from both a detailed manual and automatic taxonomic analysis of the types, multidimensional properties, and processing potential for multiple representative OOV data samples. | Deciphering and Characterizing Out-of-Vocabulary Words for Morphologically Rich Languages |
d235097573 | Although recent advances in abstractive summarization systems have achieved high scores on standard natural language metrics like ROUGE, their lack of factual consistency remains an open challenge for their use in sensitive real-world settings such as clinical practice. In this work, we propose a novel approach to improve the factual correctness of a summarization system by re-ranking the candidate summaries based on a factual vector of the summary. We applied this process during our participation in MEDIQA 2021 Task 3: Radiology Report Summarization, where the task is to generate an impression summary of a radiology report, given findings and background as inputs. In our system, we first used a transformer-based encoder-decoder model to generate the top N candidate impression summaries for a report, then trained another transformer-based model to predict a 14-observation vector of the impression based on the findings and background of the report, and finally utilized this vector to re-rank the candidate summaries. We also employed a source-specific ensembling technique to accommodate distinct writing styles from different radiology report sources. Our approach yielded 2nd place in the challenge. | IBMResearch at MEDIQA 2021: Toward Improving Factual Correctness of Radiology Report Abstractive Summarization |
d227320045 | ||
d11826614 | ADVANCED DECISION SYSTEMS : DESCRIPTION OF THE CODEX SYSTEM AS USED FOR MUC-3 | |
d226283986 | ||
d367687 | In this paper, we discuss techniques to combine an interlingua translation framework with phrase-based statistical methods, for translation from Chinese into English. Our goal is to achieve high-quality translation, suitable for use in language tutoring applications. We explore these ideas in the context of a flight domain, for which we have a large corpus of English queries, obtained from users interacting with a dialogue system. Our techniques exploit a pre-existing English-to-Chinese translation system to automatically produce a synthetic bilingual corpus. Several experiments were conducted combining linguistic and statistical methods, and manual evaluation was conducted for a set of 460 Chinese sentences. The best performance achieved an "adequate" or better analysis (3 or above rating) on nearly 94% of the 409 parsable subset. Using a Rover scheme to combine four systems resulted in an "adequate or better" rating for 88% of all the utterances. | Combining Linguistic and Statistical Methods for Bi-directional English Chinese Translation in the Flight Domain |
d6628930 | The role of lexical resources is often understated in NLP research. The complexity of Chinese, Japanese and Korean (CJK) poses special challenges to developers of NLP tools, especially in the area of word segmentation (WS), information retrieval (IR), named entity extraction (NER), and machine translation (MT). These difficulties are exacerbated by the lack of comprehensive lexical resources, especially for proper nouns, and the lack of a standardized orthography, especially in Japanese. This paper summarizes some of the major linguistic issues in the development NLP applications that are dependent on lexical resources, and discusses the central role such resources should play in enhancing the accuracy of NLP tools. | The Role of Lexical Resources in CJK Natural Language Processing |
d226283459 | ||
d209060776 | This paper discusses the development and evaluation of a Speech Synthesizer for Plains Cree, an Algonquian language of North America. Synthesis is achieved using Simple4All and evaluation was performed using a modified Cluster Identification task, a Semantically Unpredictable Sentence task, and a basic dichotomized judgment task. The resulting synthesis was not well received; however, observations regarding the process of speech synthesis evaluation in North American indigenous communities were made: chiefly, that tolerance for variation is often much lower in these communities than for majority languages. The evaluator did not recognize grammatically consistent but semantically nonsensical strings as licit language. As a result, monosyllabic clusters and semantically unpredictable sentences proved not to be the most appropriate evaluation tools. Alternative evaluation methods are discussed. | A Preliminary Plains Cree Speech Synthesizer |
d38227633 | This paper describes a repository of example sentences in three endangered Athabascan languages: Koyukon, Upper Tanana, Lower Tanana. The repository allows researchers or language teachers to browse the example sentence corpus to either investigate the languages or to prepare teaching materials. The originally heterogeneous text collection was imported into a SOLR store via the POIO bridge. This paper describes the requirements, implementation, advantages and drawbacks of this approach and discusses the potential to apply it for other languages of the Athabascan family or beyond. | The Alaskan Athabascan Grammar Database |
d41076434 | We showcase TODAY, a semantics-enhanced task-oriented dialogue translation system, whose novelties are: (i) task-oriented named entity (NE) definition and a hybrid strategy for NE recognition and translation; and (ii) a novel grounded semantic method for dialogue understanding and task-order management. TODAY is a case-study demo which can efficiently and accurately assist customers and agents in different languages to reach an agreement in a dialogue for hotel booking. | Semantics-Enhanced Task-Oriented Dialogue Translation: A Case Study on Hotel Booking |
d2445242 | An area of recent interest in crosslanguage information retrieval (CLIR) is the question of which parallel corpora might be best suited to tasks in CLIR, or even to what extent parallel corpora can be obtained or are necessary. One proposal, which in our opinion has been somewhat overlooked, is that the Bible holds a unique value as a multilingual corpus, being (among other things) widely available in a broad range of languages and having a high coverage of modern-day vocabulary. In this paper, we test empirically whether this claim is justified through a series of validation tests on various information retrieval tasks. Our results appear to indicate that our methodology may significantly outperform others recently proposed. | Evaluation of the Bible as a Resource for Cross-Language Information Retrieval |
d11512482 | This paper describes a comprehensive set of experiments conducted in order to classify Arabic Wikipedia articles into predefined sets of Named Entity classes. We tackle this task using four different classifiers, namely: Naïve Bayes, Multinomial Naïve Bayes, Support Vector Machines, and Stochastic Gradient Descent. We report on several aspects related to classification models in terms of feature representation, feature set and statistical modelling. The results reported show that we are able to correctly classify the articles with scores of 90% on Precision, Recall and balanced F-measure. | Mapping Arabic Wikipedia into the Named Entities Taxonomy |
d39379668 | So we need to be alert. It's not just that we may find ourselves putting the cart before the horse. We can get obsessed with the wheels, and finish up with uncritically reinvented, or square, or over-refined or otherwise unsatisfactory wheels, or even just unicycles. (Karen Spärck Jones) Résumé : papier de prise de position. Historiquement deux types de traitement de la langue ont été étudiés : le traitement par le cerveau (approche psycholinguistique) et le traitement par la machine (approche TAL). Nous pensons qu'il y a place pour un troisième type : le traitement interactif de la langue (TIL), l'ordinateur assistant le cerveau. Ceci correspond à un besoin réel dans la mesure où les gens n'ont souvent que des connaissances partielles par rapport au problème à résoudre. Le but du TIL est de construire des ponts entre ces connaissances momentanées d'un utilisateur et la solution recherchée. À l'aide de quelques exemples, nous essayons de montrer que ceci est non seulement faisable et souhaitable, mais également d'un coût très raisonnable. Abstract: position paper. Historically two types of NLP have been investigated: fully automated processing of language by machines (NLP) and autonomous processing of natural language by people, i.e. the human brain (psycholinguistics). We believe that there is room and need for another kind, INLP: interactive natural language processing. This intermediate approach starts from peoples' needs, trying to bridge the gap between their actual knowledge and a given goal. Given the fact that peoples' knowledge is variable and often incomplete, the aim is to build bridges linking a given knowledge state to a given goal. We present some examples, trying to show that this goal is worth pursuing, achievable and at a reasonable cost. Mots-clés : traitement interactif de la langue, prise en compte de l'usager, outils de traitement de la langue, apprentissage des langues, dictionnaires, livres de phrases, concordanciers, traduction. | Du TAL au TIL |
d711209 | The standard preference ordering on the well-known centering transitions Continue, Retain, Shift is argued to be unmotivated: a partial, context-dependent ordering emerges from the interaction between principles dubbed cohesion (maintaining the same center of attention) and salience (realizing the center of attention as the most prominent NP). A new formulation of Rule 2 of centering theory is proposed that incorporates these principles as well as a streamlined version of Strube and Hahn's (1999) notion of cheapness. It is argued that this formulation provides a natural way to handle "topic switches" that appear to violate the canonical preference ordering. | A Reformulation of Rule 2 of Centering Theory |
d220837054 | ||
d15798344 | In this paper we present a system for automatically producing multimedia pages of information that draws both from results in data-driven aggregation in information visualization and from results in communicative-goal oriented natural language generation. Our system constitutes an architectural synthesis of these two directions, allowing a beneficial cross-fertilization of research methods. We suggest that data-driven visualization provides a general approach to aggregation in NLG, and that text planning allows higher user-responsiveness in visualization via automatic diagram design. | COMMUNICATIVE GOAL-DRIVEN NL GENERATION AND DATA-DRIVEN GRAPHICS GENERATION: AN ARCHITECTURAL SYNTHESIS FOR MULTIMEDIA PAGE GENERATION |
d8251771 | Recent years have seen increasing attention in temporal processing of texts as well as a lot of standardization effort of temporal information in natural language. A central part of this information lies in the temporal relations between events described in a text, when their precise times or dates are not known. Reliable human annotation of such information is difficult, and automatic comparisons must follow procedures beyond mere precision-recall of local pieces of information, since a coherent picture can only be considered at a global level. We address the problem of evaluation metrics of such information, aiming at fair comparisons between systems, by proposing some measures taking into account the globality of a text. | Evaluation Metrics for Automatic Temporal Annotation of Texts |
d23892230 | Recent neural models have shown significant progress on the problem of generating short descriptive texts conditioned on a small number of database records. In this work, we suggest a slightly more difficult data-to-text generation task, and investigate how effective current approaches are on this task. In particular, we introduce a new, large-scale corpus of data records paired with descriptive documents, propose a series of extractive evaluation methods for analyzing performance, and obtain baseline results using current neural generation methods. Experiments show that these models produce fluent text, but fail to convincingly approximate human-generated documents. Moreover, even templated baselines exceed the performance of these neural models on some metrics, though copy- and reconstruction-based extensions lead to noticeable improvements. | Challenges in Data-to-Document Generation |
d219302619 | ||
d17588382 | a reflexive has a local antecedent or a long-distance antecedent. The result is that 'caki' is almost even in having local and long-distance antecedents, but 'casin' has more and 'cakicasin' has many more local antecedents. I also examined the thematic roles of the local antecedents of reflexives, which shows that 'casin' has relatively more Experiencer antecedents than 'caki' has, although in both cases Agent antecedents dominate. The outcome of this frequency analysis suggests that a tendency (probably not grammaticalized yet), or degree of "naturalness", is real and can be captured in the usage data, provided that we have a sizable amount of material manageable in an efficient way as provided by the corpus linguistic methods of the present day. | Three Kinds of Korean Reflexives: A Corpus Linguistic Investigation on Grammar and Usage |
d252818969 | With the surge in COVID-19, the number of social media postings related to the vaccine has grown, specifically tracing the confirmed reports by users regarding their COVID-19 vaccine dose, termed Vaccine Surveillance. To address this research problem, we present our novel ensemble approach for classifying self-reporting COVID-19 vaccination status tweets into two labels, namely Vaccine Chatter and Self Report. We utilize state-of-the-art models, namely BERT, RoBERTa, and XLNet. Our model provides promising results with 0.77, 0.93, and 0.66 as precision, recall, and F1-score (respectively), comparable to the corresponding median scores of 0.77, 0.9, and 0.68 (respectively). The model gave an overall accuracy of 93.43. We also present an empirical analysis of the results to show how well the tweets were classified and reported. We release our code base at https://github.com/Zohair0209/SMM4H-2022-Task6.git | Innovators@SMM4H'22: An Ensembles Approach for self-reporting of COVID-19 Vaccination Status Tweets |
d53081551 | Publication information in a researcher's academic homepage provides insights about the researcher's expertise, research interests, and collaboration networks. We aim to extract all the publication strings from a given academic homepage. This is a challenging task because the publication strings in different academic homepages may be located at different positions with different structures. To capture the positional and structural diversity, we propose an end-to-end hierarchical model named PubSE based on Bi-LSTM-CRF. We further propose an alternating training method for training the model. Experiments on real data show that PubSE outperforms the state-of-the-art models by up to 11.8% in F1-score. | PubSE: A Hierarchical Model for Publication Extraction from Academic Homepages |
d53082490 | We propose Labeled Anchors, an interactive and supervised topic model based on the anchor words algorithm (Arora et al., 2013). Labeled Anchors is similar to Supervised Anchors (Nguyen et al., 2014) in that it extends the vector-space representation of words to include document labels. However, our formulation also admits a classifier which requires no training beyond inferring topics, which means our approach is also fast enough to be interactive. We run a small user study that demonstrates that untrained users can interactively update topics in order to improve classification accuracy. | Labeled Anchors and a Scalable, Transparent, and Interactive Classifier |
d53121868 | The Winograd Schema Challenge targets pronominal anaphora resolution problems which require the application of cognitive inference in combination with world knowledge. These problems are easy to solve for humans but most difficult to solve for machines. Computational models that previously addressed this task rely on syntactic preprocessing and incorporation of external knowledge by manually crafted features. We address the Winograd Schema Challenge from a new perspective as a sequence ranking task, and design a Siamese neural sequence ranking model which performs significantly better than a random baseline, even when solely trained on sequences of words. We evaluate against a baseline and a state-of-the-art system on two data sets and show that anonymization of noun phrase candidates strongly helps our model to generalize. | Addressing the Winograd Schema Challenge as a Sequence Ranking Task |
d44147701 | We present a corpus of 240 argumentative essays written by non-native speakers of English annotated for metaphor. The corpus is made publicly available. We provide benchmark performance of state-of-the-art systems on this new corpus, and explore the relationship between writing proficiency and metaphor use. | A Corpus of Non-Native Written English Annotated for Metaphor |
d219307317 | ||
d208991947 | ||
d256461436 | Reflection is an essential counselling strategy, where the therapist listens actively and responds with their own interpretation of the client's words. Recent work leveraged pretrained language models (PLMs) to approach reflection generation as a promising tool to aid counsellor training. However, those studies used limited dialogue context for modelling and simplistic error analysis for human evaluation. In this work, we take the first step towards addressing those limitations. First, we fine-tune PLMs on longer dialogue contexts for reflection generation. Then, we collect free-text error descriptions from non-experts about generated reflections, identify common patterns among them, and accordingly establish discrete error categories using thematic analysis. Based on this scheme, we plan for future work a mass non-expert error annotation phase for generated reflections followed by an expert-based validation phase, namely "whether a coherent and consistent response is a good reflection". | Towards In-Context Non-Expert Evaluation of Reflection Generation for Counselling Conversations |
d8164563 | We describe an experiment carried out using a French version of CALL-SLT, a web-enabled CALL game in which students at each turn are prompted to give a semi-free spoken response which the system then either accepts or rejects. The central question we investigate is whether the response is appropriate; we do this by extracting pairs of utterances where both members of the pair are responses by the same student to the same prompt, and where one response is accepted and one rejected. When the two spoken responses are presented in random order, native speakers show a reasonable degree of agreement in judging that the accepted utterance is better than the rejected one. We discuss the significance of the results and also present a small study supporting the claim that native speakers are nearly always recognised by the system, while non-native speakers are rejected a significant proportion of the time. | Evaluating Appropriateness Of System Responses In A Spoken CALL Game |
d3196382 | Recent work in computer vision has aimed to associate image regions with keywords describing the depicted entities, but actual image 'understanding' would also require identifying their attributes, relations and activities. Since this information cannot be conveyed by simple keywords, we have collected a corpus of "action" photos each associated with five descriptive captions. In order to obtain a consistent semantic representation for each image, we need to first identify which NPs refer to the same entities. We present three hierarchical Bayesian models for cross-caption coreference resolution. We have also created a simple ontology of entity classes that appear in images and evaluate how well these can be recovered. | Cross-caption coreference resolution for automatic image understanding |
d9371149 | In this paper, we describe a method for automatic creation of a knowledge source for text generation using information extraction over the Internet. We present a prototype system called PROFILE which uses a client-server architecture to extract noun-phrase descriptions of entities such as people, places, and organizations. The system serves two purposes: as an information extraction tool, it allows users to search for textual descriptions of entities; as a utility to generate functional descriptions (FD), it is used in a functional-unification based generation system. We present an evaluation of the approach and its applications to natural language generation and summarization. | Building a Generation Knowledge Source using Internet-Accessible Newswire |
d6741598 | PROJECT GOALS: The primary long-term goal of the speech research at Dragon Systems is to develop algorithms that allow us to achieve high performance large vocabulary continuous speech recognition. At the same time, we are concerned to keep the computational demands of our algorithms as modest as possible, so that the results of our research can be incorporated into products that will run on modestly priced personal computers. | SPEECH RECOGNITION AT DRAGON SYSTEMS UNDER THE DARPA SLS PROGRAM |
d52158272 | The advent of social media | Neural Character-based Composition Models for Abuse Detection |
d15635475 | This system uses a background knowledge base to identify semantic relations between base noun phrases in English text, as evaluated in SemEval 2007, Task 4. Training data for each relation is converted to statements in the Scone Knowledge Representation Language. At testing time a new Scone statement is created for the sentence under scrutiny, and presence or absence of a relation is calculated by comparing the total semantic distance between the new statement and all positive examples to the total distance between the new statement and all negative examples. | CMU-AT: Semantic Distance and Background Knowledge for Identifying Semantic Relations |
d14029756 | Sentence pair modeling is critical for many NLP tasks, such as paraphrase identification, semantic textual similarity, and natural language inference. Most state-of-the-art neural models for these tasks rely on pretrained word embedding and compose sentence-level semantics in varied ways; however, few works have attempted to verify whether we really need pretrained embeddings in these tasks. In this paper, we study how effective subword-level (character and character n-gram) representations are in sentence pair modeling. Though it is well-known that subword models are effective in tasks with single sentence input, including language modeling and machine translation, they have not been systematically studied in sentence pair modeling tasks where the semantic and string similarities between texts matter. Our experiments show that subword models without any pretrained word embedding can achieve new state-of-the-art results on two social media datasets and competitive results on news data for paraphrase identification. | Character-based Neural Networks for Sentence Pair Modeling |
d12123445 | The conventional solution for handling sparsely labelled data is extensive feature engineering. This is time consuming and task and domain specific. We present a novel approach for learning embedded features that aims to alleviate this problem. Our approach jointly learns embeddings at different levels of granularity (word, sentence and document) along with the class labels. The intuition is that topic semantics represented by embeddings at multiple levels results in better classification. We evaluate this approach in unsupervised and semi-supervised settings on two sparsely labelled classification tasks, outperforming the handcrafted models and several embedding baselines. | Robust Text Classification for Sparsely Labelled Data Using Multi-level Embeddings |
d30239179 | This paper describes the use of a free, on-line language spelling and grammar checking aid as a vehicle for the collection of a significant (31 million words and rising) corpus of text for academic research in the context of less resourced languages where such data in sufficient quantities are often unavailable. It describes two versions of the corpus: the texts as submitted, prior to the correction process, and the texts following the user's incorporation of any suggested changes. An overview of the corpus' contents is given and an analysis of use including usage statistics is also provided. Issues surrounding privacy and the anonymization of data are explored as is the data's potential use for linguistic analysis, lexical research and language modelling. The method used for gathering this corpus is believed to be unique, and is a valuable addition to corpus studies in a minority language. | Cysill Ar-lein: A Corpus of Written Contemporary Welsh Compiled from an On-line Spelling and Grammar Checker |
d2008605 | | The Transfer Phase of Mu Machine
d8583707 | We present a method for inference in hierarchical phrase-based translation, where both optimisation and sampling are performed in a common exact inference framework related to adaptive rejection sampling. We also present a first implementation of that method along with experimental results shedding light on some fundamental issues. In hierarchical translation, inference needs to be performed over a high-complexity distribution defined by the intersection of a translation hypergraph and a target language model. We replace this intractable distribution by a sequence of tractable upper-bounds for which exact optimisers and samplers are easy to obtain. Our experiments show that exact inference is then feasible using only a fraction of the time and space that would be required by the full intersection, without recourse to pruning techniques that only provide approximate solutions. While the current implementation is limited in the size of inputs it can handle in reasonable time, our experiments provide insights towards obtaining future speedups, while staying in the same general framework. | Investigations in Exact Inference for Hierarchical Translation |
d15982624 | The goal of this article is to present our work on combining several syntactic parsers to produce a more robust parser. We have built a platform which allows us to compare syntactic parsers for a given language by splitting their results into elementary pieces, normalizing them, and comparing them with reference results. The same platform is used to combine several parsers to produce a dependency parser that has larger coverage and is more robust than its component parsers. In the future, it should be possible to "compile" the knowledge extracted from several analyzers into an autonomous dependency parser. | Syntactic parser combination for improved dependency analysis
d16092148 | Multigram language models have become important in Speech Recognition, Natural Language Processing and Information Retrieval. An essential task in multigram language modelling is to establish a set of significant multigram compounds. In Yamamoto and Church (2001), an O(N log N) time complexity method based on the Generalised Suffix Array (GSA) was presented, which computes tf (term frequency) and df (document frequency) over O(N) classes of substrings. The tf and df form the essential statistics on which metrics such as MI (Mutual Information) and RIDF (Residual Inverse Document Frequency) are based for multigram compound discovery. In this paper, it is shown that two data structures related to the GSA, the Generalised Suffix Tree (GST) and the Generalised Directed Acyclic Word Graph (GDAWG), afford even more efficient methods of multigram compound discovery than the GSA. Namely, O(N) algorithms for computing tf and df have been found for the GST and GDAWG. These data structures also exhibit a series of related and desirable properties, including an O(N) time complexity algorithm to classify O(N^2) substrings into O(N) classes. An experiment based on 6 million bytes of text demonstrates that our theoretical analysis is consistent with the empirical results observed. | Efficient Methods for Multigram Compound Discovery
d53622891 | This paper describes a method of creating synthetic treebanks for cross-lingual dependency parsing using a combination of machine translation (including pivot translation), annotation projection and the spanning tree algorithm. Sentences are first automatically translated from a lesser-resourced language to a number of related highly-resourced languages, parsed, and then the annotations are projected back to the lesser-resourced language, leading to multiple trees for each sentence from the lesser-resourced language. The final treebank is created by merging the possible trees into a graph and running the spanning tree algorithm to vote for the best tree for each sentence. We present experiments aimed at parsing Faroese using a combination of Danish, Swedish and Norwegian. In a similar experimental setup to the CoNLL 2018 shared task on dependency parsing, we report state-of-the-art results on dependency parsing for Faroese using an off-the-shelf parser. | Multi-source synthetic treebank creation for improved cross-lingual dependency parsing
d53590468 | While analysis of online explicit abusive language detection has lately seen an ever-increasing focus, implicit abuse detection remains a largely unexplored space. We carry out a study on a subcategory of implicit hate: euphemistic hate speech. We propose a method to assist in identifying unknown euphemisms (or code words) given a set of hateful tweets containing a known code word. Our approach leverages word embeddings and network analysis (through centrality measures and community detection) in a manner that can be generalized to identify euphemisms across contexts, not just hate speech. | Determining Code Words in Euphemistic Hate Speech Using Word Embedding Networks
d248780122 | Aspect-based sentiment analysis (ABSA) tasks aim to extract sentiment tuples from a sentence. Recent generative methods such as Seq2Seq models have achieved good performance by formulating the output as a sequence of sentiment tuples. However, the orders between the sentiment tuples do not naturally exist and the generation of the current tuple should not condition on the previous ones. In this paper, we propose Seq2Path to generate sentiment tuples as paths of a tree. A tree can represent "1-to-n" relations (e.g., an aspect term may correspond to multiple opinion terms) and the paths of a tree are independent and do not have orders. For training, we treat each path as an independent target, and we calculate the average loss of the ordinary Seq2Seq model over paths. For inference, we apply beam search with constrained decoding. By introducing an additional discriminative token and applying a data augmentation technique, valid paths can be automatically selected. We conduct experiments on five tasks including AOPE, ASTE, TASD, UABSA, ACOS. We evaluate our method on four common benchmark datasets including Laptop14, Rest14, Rest15, Rest16. Our proposed method achieves state-of-the-art results in almost all cases. | Seq2Path: Generating Sentiment Tuples as Paths of a Tree
d259376894 | Lexical simplification (LS) automatically replaces words that are deemed difficult to understand for a given target population with simpler alternatives, whilst preserving the meaning of the original sentence. The TSAR-2022 shared task on LS provided participants with a multilingual lexical simplification test set. It contained nearly 1,200 complex words in English, Portuguese, and Spanish and presented multiple candidate substitutions for each complex word. The competition did not make training data available; therefore, teams had to use either off-the-shelf pre-trained large language models (LLMs) or out-of-domain data to develop their LS systems. As such, participants were unable to fully explore the capabilities of LLMs by re-training and/or fine-tuning them on in-domain data. To address this important limitation, we present ALEXSIS+, a multilingual dataset in the aforementioned three languages, and ALEXSIS++, an English monolingual dataset, which together contain more than 50,000 unique sentences retrieved from news corpora and annotated with cosine similarities to the original complex word and sentence. Using these additional contexts, we are able to generate new high-quality candidate substitutions that improve LS performance on the TSAR-2022 test set regardless of the language or model. | ALEXSIS+: Improving Substitute Generation and Selection for Lexical Simplification with Information Retrieval
d219308959 | ||
d1837942 | Question-answering (QA) systems aim at providing either a small passage or just the answer to a question in natural language. We have developed several QA systems that work on both English and French. This way, we are able to provide answers to questions given in either language by searching documents in both languages. In this article, we present our French monolingual system FRASQUES, which participated in the EQueR evaluation campaign of QA systems for French in 2004. First, the QA architecture common to our systems is shown. Then, for every step of the QA process, we consider which steps are language-independent and, for those that are language-dependent, which tools or processes need to be adapted to switch from one language to another. Finally, our results at EQueR are given and commented on; an error analysis is conducted, and the kind of knowledge needed to answer a question is studied. | FRASQUES: A Question Answering system in the EQueR evaluation campaign
d252819093 | Recently, fine-tuning pre-trained language models (PrLMs) on labeled sentiment datasets has demonstrated impressive performance. However, collecting labeled sentiment datasets is time-consuming, and fine-tuning the whole PrLM incurs a large computation cost. To this end, we focus on the multi-source unsupervised sentiment adaptation problem with pre-trained features, which is more practical and challenging. We first design a dynamic feature network to fully exploit the extracted pre-trained features for efficient domain adaptation. Meanwhile, unlike traditional source-target domain alignment methods, we propose a novel asymmetric mutual learning strategy, which can robustly estimate the pseudo-labels of the target domain with the knowledge from all the other source models. Experiments on multiple sentiment benchmarks show that our method outperforms recent state-of-the-art approaches, and we also conduct extensive ablation studies to verify the effectiveness of each proposed module. | Asymmetric Mutual Learning for Multi-source Unsupervised Sentiment Adaptation with Dynamic Feature Network
d226239164 | ||
d52168364 | Linguistic Linked Open Data (LLOD) - Building the cloud |
d243865580 | Cross-document event coreference resolution (CDCR) is the task of identifying which event mentions refer to the same events throughout a collection of documents. Annotating CDCR data is an arduous and expensive process, explaining why existing corpora are small and lack domain coverage. To overcome this bottleneck, we automatically extract event coreference data from hyperlinks in online news: when referring to a significant real-world event, writers often add a hyperlink to another article covering this event. We demonstrate that collecting hyperlinks which point to the same article(s) produces extensive and high-quality CDCR data and create a corpus of 2M documents and 2.7M silver-standard event mentions called HyperCoref. We evaluate a state-of-the-art system on three CDCR corpora and find that models trained on small subsets of HyperCoref are highly competitive, with performance similar to models trained on gold-standard data. With our work, we free CDCR research from depending on costly human-annotated training data and open up possibilities for research beyond English CDCR, as our data extraction approach can be easily adapted to other languages. | Event Coreference Data (Almost) for Free: Mining Hyperlinks from Online News
d17830435 | In this paper, we propose a novel decoding algorithm for discriminative joint Chinese word segmentation, part-of-speech (POS) tagging, and parsing. Previous work often used a pipeline method, Chinese word segmentation followed by POS tagging and parsing, which suffers from error propagation and is unable to leverage information in later modules for earlier components. In our approach, we train the three individual models separately during training, and incorporate them together in a unified framework during decoding. We extend the CYK parsing algorithm so that it can deal with word segmentation and POS tagging features. As far as we know, this is the first work on joint Chinese word segmentation, POS tagging and parsing. Our experimental results on the Chinese Treebank 5 corpus show that our approach outperforms the state-of-the-art pipeline system. | Joint Chinese Word Segmentation, POS Tagging and Parsing
d7455198 | Human listeners can almost instantaneously judge whether or not another speaker is part of their speech community. The basis of this judgment is the speaker's accent. Even though humans judge speech accents with ease, it has been tremendously difficult to automatically evaluate and rate accents in any consistent manner. This paper describes an experiment using the Amazon Mechanical Turk to develop an automatic speech accent rating dataset. | The Wisdom of the Crowd's Ear: Speech Accent Rating and Annotation with Amazon Mechanical Turk |
d2271006 | A concrete project, like our "Concordanza lemmatizzata delle 'Operette morali' di G. Leopardi" (a lemmatized concordance of an Italian text of the 18th century with some archaic phenomena, of about 70,000 tokens and 9,500 types), is a good opportunity to introduce a new software package for linguistic data processing not as a mere cumulation of routines. | LDVLIB(LEM): A SYSTEM FOR INTERACTIVE LEMMATIZING AND ITS APPLICATION
d13320571 | The high dimensionality of lexical features in parsing can be memory-consuming and cause over-fitting problems. We propose a general framework to replace all lexical feature templates by low-dimensional features induced from word embeddings. Applied to a near state-of-the-art dependency parser (Huang et al., 2012), our method improves the baseline, performs better than using cluster bit string features, and outperforms a recent neural network based parser. A further analysis shows that our framework has the effect hypothesized by Andreas and Klein (2014), namely (i) connecting unseen words to known ones, and (ii) encouraging common behaviors among in-vocabulary words. | Reducing Lexical Features in Parsing by Word Embeddings
d5743807 | A ready set of commonly confused words plays an important role in spelling error detection and correction in texts. In this paper, we present a system named ACE (Automatic Confusion words Extraction), which takes a Chinese word as input (e.g., "不脛而走") and automatically outputs its easily confused words (e.g., "不徑而走", "不逕而走"). The purpose of ACE is similar to web-based set expansion, the problem of finding all instances (e.g. "Halloween", "Thanksgiving Day", "Independence Day", etc.) of a set given a small number of class names (e.g. "holidays"). Unlike set expansion, our system is used to produce commonly confused words of a given Chinese word. In brief, we use some hand-coded patterns to find a set of sentence fragments from a search engine, and then assign an array of tags to each character in each sentence fragment. Finally, these tagged fragments serve as inputs to a pre-learned conditional random fields (CRFs) model. We present experiment results on 3,211 test cases, showing that our system can achieve a 95.2% precision rate while maintaining a 91.2% recall rate. | Automatic Chinese Confusion Words Extraction Using Conditional Random Fields and the Web
d232021596 |