_id | text | title |
|---|---|---|
d2087764 | In this paper we present an API for programmatic access to BabelNet, a wide-coverage multilingual lexical knowledge base, and multilingual knowledge-rich Word Sense Disambiguation (WSD). Our aim is to provide the research community with easy-to-use tools to perform multilingual lexical semantic analysis and foster further research in this direction. | Multilingual WSD with Just a Few Lines of Code: the BabelNet API |
d15702213 | The BLEU scores and translation fluency for the current state-of-the-art SMT systems based on IBM models are still too low for publication purposes. The major issue is that stochastically generated sentence hypotheses, produced through a stack decoding process, may not strictly follow the natural target language grammar, since the decoding process is directed by a highly simplified translation model and n-gram language model, and a large number of noisy phrase pairs may introduce significant search errors. This paper proposes a statistical post-editing (SPE) model, based on a special monolingual SMT paradigm, to "translate" disfluent sentences into fluent sentences. However, instead of conducting a stack decoding process, the sentence hypotheses are searched from fluent target sentences in a large target language corpus or on the Web to ensure fluency. Phrase-based local editing, if necessary, is then applied to correct the weakest phrase alignments between the disfluent and searched hypotheses using fluent target language phrases; such phrases are segmented from a large target language corpus with a global optimization criterion to maximize the likelihood of the training sentences, instead of using noisy phrases combined from bilingually word-aligned pairs. With such search-based decoding, the absolute BLEU scores are much higher than those of automatic post-editing systems that conduct a classical SMT decoding process. We are also able to fully correct a significant number of disfluent sentences into completely fluent versions. The BLEU scores are significantly improved. The evaluation shows that on average 46% of translation errors can be fully recovered, and the BLEU score can be improved by about 26%. | Improving Translation Fluency with Search-Based Decoding and a Monolingual Statistical Machine Translation Model for Automatic Post-Editing |
d253628207 | Emojis have become essential components of our digital communication. Emojis, especially smiley face emojis and heart emojis, are considered the ones conveying the most emotion. In this paper, two functions of emoji usage are discussed across two languages, Taiwanese Mandarin and English. The first function discussed here is sentiment enhancement and the other is sentiment modification. A multilingual language model is adopted to estimate the probability distribution of text sentiment, and relative entropy is used to quantify the degree of change. The results support previous research showing that emojis are more frequently used in positive contexts and that smileys tend to be used for expressing emotions, and confirm the language-independent nature of emojis. | A Quantitative Analysis of Comparison of Emoji Sentiment: Taiwan Mandarin Users and English Users |
d53083161 | It is a challenging task to automatically compose poems with not only fluent expressions but also aesthetic wording. Although much attention has been paid to this task and promising progress has been made, notable gaps remain between automatically generated poems and those created by humans, especially with respect to term novelty and thematic consistency. Towards filling this gap, we propose a conditional variational autoencoder with adversarial training for classical Chinese poem generation, where the autoencoder part generates poems with novel terms and a discriminator is applied to adversarially learn their thematic consistency with their titles. Experimental results on a large poetry corpus confirm the validity and effectiveness of our model, where its automatic and human evaluation scores outperform those of existing models. | Generating Classical Chinese Poems via Conditional Variational Autoencoder and Adversarial Training |
d52131263 | In a dialog, there can be multiple valid next utterances at any point. Present end-to-end neural methods for dialog do not take this into account: they learn with the assumption that at any time there is only one correct next utterance. In this work, we focus on this problem in the goal-oriented dialog setting, where there are different paths to reach a goal. We propose a new method that uses a combination of supervised learning and reinforcement learning approaches to address this issue. We also propose a new and more effective testbed, permuted-bAbI dialog tasks, by introducing multiple valid next utterances to the original-bAbI dialog tasks, which allows evaluation of goal-oriented dialog systems in a more realistic setting. We show that there is a significant drop in performance of existing end-to-end neural methods, from 81.5% per-dialog accuracy on original-bAbI dialog tasks to 30.3% on permuted-bAbI dialog tasks. We also show that our proposed method improves the performance and achieves 47.3% per-dialog accuracy on permuted-bAbI dialog tasks. The permuted-bAbI dialog tasks are available at https://github.com/IBM/permuted-bAbI-dialog-tasks. | Learning End-to-End Goal-Oriented Dialog with Multiple Answers |
d21670015 | Cross-lingual word embeddings are representations of words from different languages in a shared continuous vector space. Cross-lingual word embeddings have been shown to be helpful in the development of cross-lingual natural language processing tools. When more than two languages are involved, we call them multilingual word embeddings. In this work, we introduce a multilingual word embedding corpus acquired by using neural machine translation. Unlike other cross-lingual embedding corpora, the embeddings can be learned from significantly smaller portions of data and for multiple languages at once. An intrinsic evaluation on monolingual tasks shows that our method is fairly competitive with the prevalent methods, and on the cross-lingual document classification task it obtains the best figures. We are in the process of producing the embeddings for more languages, especially languages which belong to the same family or are semantically close to each other, such as Japanese-Korean, Chinese-Vietnamese, German-Dutch, or Latin-based languages. Furthermore, the corpus is being analyzed regarding its usage and usefulness in other cross-lingual tasks. | KIT-Multi: A Translation-Oriented Multilingual Embedding Corpus |
d32406344 | This paper examines how Natural Language Processing (NLP) resources and online dialogue corpora can be used to extend the coverage of Information Extraction (IE) templates in a spoken dialogue system. IE templates are used as part of a Natural Language Understanding module for identifying meaning in a user utterance. The use of NLP tools in dialogue systems is a difficult task given that 1) spoken dialogue is often not well-formed and 2) there is a serious lack of dialogue data. In spite of that, we have devised a method for extending IE patterns using standard NLP tools and available dialogue corpora found on the web. In this paper, we explain our method, which includes using a set of NLP modules developed using GATE (a General Architecture for Text Engineering), as well as a general-purpose editing tool that we built to facilitate the IE rule creation process. Lastly, we present directions for future work in this area. | Using Dialogue Corpora to Extend Information Extraction Patterns for Natural Language Understanding of Dialogue |
d2991968 | We present Tightly Packed Tries (TPTs), a compact implementation of read-only, compressed trie structures with fast on-demand paging and short load times. We demonstrate the benefits of TPTs for storing n-gram back-off language models and phrase tables for statistical machine translation. Encoded as TPTs, these databases require less space than flat text file representations of the same data compressed with the gzip utility. At the same time, they can be mapped into memory quickly and be searched directly in time linear in the length of the key, without the need to decompress the entire file. The overhead for local decompression during search is marginal. | Tightly Packed Tries: How to Fit Large Models into Memory, and Make them Load Fast, Too |
d4380524 | MEANING: a Roadmap to Knowledge Technologies | |
d203705258 | ||
d201692196 | BabelDr is a medical speech-to-speech translator, where the doctor has to approve the sentence that will be translated for the patient before translation; this step is done using monolingual backtranslation, which converts the speech recognition result into a core sentence. In this work, we model this step as a simplification task and propose to use neural networks to perform the backtranslation by generating and choosing the best core sentence. Results of a task-based evaluation show that neural networks outperform previous versions of the system. | Monolingual backtranslation in a medical speech translation system for diagnostic interviews - a NMT approach |
d26147350 | A partir de l'évaluation d'extracteurs de termes menée initialement pour détecter le meilleur outil d'acquisition du lexique d'une langue contrôlée, nous proposons dans cet article une stratégie d'optimisation du processus d'extraction terminologique. Nos travaux, menés dans le cadre du projet ANR Sensunique, prouvent que la « multi-extraction », c'est-à-dire la coopération de plusieurs extracteurs de termes, donne des résultats significativement meilleurs que l'extraction via un seul outil. Elle permet à la fois de réduire le silence et de filtrer automatiquement le bruit grâce à la variation d'un indice relatif au potentiel terminologique. ABSTRACT: Multi-extraction as a strategy of optimized extraction of terminological and lexical resources. Based on the evaluation of terminological extractors, carried out initially to find the best tool for building a controlled language lexicon, we propose a strategy of optimized extraction of terminological resources. Our work highlights that the cooperation of several extraction tools gives better results than the use of a single one. It both reduces silence and automatically filters noise thanks to a variable related to termhood. MOTS-CLÉS : terminologie, extraction, langue contrôlée, potentiel terminologique, filtrage de termes. | La "multi-extraction" comme stratégie d'acquisition optimisée de ressources (non) terminologiques |
d202605494 | As is the case with many languages, research into code-switching in Modern Irish has, until recently, mainly been focused on the spoken language. Online user-generated content (UGC) is less restrictive than traditional written text, allowing for code-switching, and as such provides a new platform for text-based research in this field of study. This paper reports on the annotation of (English) code-switching in a corpus of 1496 Irish tweets and provides a computational analysis of the nature of code-switching amongst Irish-speaking Twitter users, with a view to providing a basis for future linguistic and socio-linguistic studies. (Note that English is the more dominant language, with only 17.4% of the population reporting use of Irish outside the education system.) | Code-switching in Irish tweets: A preliminary analysis |
d220060815 | ||
d164391651 | ||
d248780242 | Discovering Out-of-Domain(OOD) intents is essential for developing new skills in a taskoriented dialogue system. The key challenge is how to transfer prior IND knowledge to OOD clustering. Different from existing work based on shared intent representation, we propose a novel disentangled knowledge transfer method via a unified multi-head contrastive learning framework. We aim to bridge the gap between IND pre-training and OOD clustering. Experiments and analysis on two benchmark datasets show the effectiveness of our method. 1 | Disentangled Knowledge Transfer for OOD Intent Discovery with Unified Contrastive Learning |
d9918604 | In analyzing the formation of a given compound, both its internal syntactic structure and its semantic relations need to be considered. The Generative Lexicon Theory (GL Theory) provides us with an explanatory model of compounds that captures the qualia modification relations in the semantic composition within a compound, which can be applied to natural language processing tasks. In this paper, we primarily discuss the qualia structure of noun-noun compounds found in Chinese as well as in a couple of other languages, such as German, Spanish, Japanese and Italian. We briefly review the construction of compounds and focus on the noun-noun construction. While analyzing the semantic relationship between the words that compose a compound, we use the GL Theory to demonstrate that the proposed qualia structure enables compositional interpretation within the compound. In addition, we examine whether or not, for each semantic head, its modifier can fit into one of the four qualia. Finally, our analysis reveals the potential and limits of a qualia-based treatment of the composition of nominal compounds and suggests a path for future work. | Qualia Modification in Noun-Noun Compounds: A Cross-Language Survey |
d248780144 | Task-Oriented Dialogue (TOD) systems allow users to accomplish tasks by giving directions to the system using natural language utterances. With the widespread adoption of conversational agents and chat platforms, TOD has become mainstream in NLP research today. However, developing TOD systems requires massive amounts of data, and there has been limited work done for TOD in low-resource languages like Tamil. Towards this objective, we introduce TamilATIS, a TOD dataset for Tamil which contains 4874 utterances. We present a detailed account of the entire data collection and data annotation process. We train state-of-the-art NLU models and report their performances. The Joint BERT model with XLM-RoBERTa as utterance encoder achieved the highest score, with an intent accuracy of 96.26% and slot F1 of 94.01%. | TamilATIS: Dataset for Task-Oriented Dialog in Tamil |
d9739046 | Treebanks are key resources for developing accurate statistical parsers. However, building treebanks is expensive and time-consuming for humans. For domains requiring deep subject matter expertise, such as law and medicine, treebanking is even more difficult. To reduce annotation costs for these domains, we develop methods to improve cross-domain parsing inference using paraphrases. Paraphrases are easier to obtain than full syntactic analyses as they do not require deep linguistic knowledge, only linguistic fluency. A sentence and its paraphrase may have similar syntactic structures, allowing their parses to mutually inform each other. We present several methods to incorporate paraphrase information by jointly parsing a sentence with its paraphrase. These methods are applied to state-of-the-art constituency and dependency parsers and provide significant improvements across multiple domains. | Parsing Paraphrases with Joint Inference |
d209076846 | ||
d237332940 | In this study, we propose a model that extends the continuous space topic model (CSTM), which flexibly controls word probability in a document, using pre-trained word embeddings. To develop the proposed model, we pre-train word embeddings, which capture the semantics of words, and plug them into the CSTM. Intrinsic experimental results show that the proposed model exhibits a superior performance over the CSTM in terms of perplexity and convergence speed. Furthermore, extrinsic experimental results show that the proposed model is useful for a document classification task when compared with the baseline model. We qualitatively show that the latent coordinates obtained by training the proposed model are better than those of the baseline model. | Modeling Text using the Continuous Space Topic Model with Pre-Trained Word Embeddings |
d16347077 | We introduce an online framework for discriminative learning problems over hidden structures, where we learn both the latent structure and the classifier for a supervised learning task. Previous work on leveraging latent representations for discriminative learners has used batch algorithms that require multiple passes through the entire training data. Instead, we propose an online algorithm that efficiently and jointly learns the latent structures and the classifier. We further extend this to include multiple views on the latent structures with different representations. Our proposed online algorithm with multiple views significantly outperforms batch learning for latent representations with a single view on a grammaticality prediction task. | An Online Algorithm for Learning over Constrained Latent Representations using Multiple Views |
d9309095 | We re-assess the impact of a set of widely-used SMT models and techniques by means of human evaluation. These include different types of development sets (crowdsourced vs. professionally translated), reordering, operation sequence and bilingual neural language models, as well as common approaches to data selection and combination. In some cases our results corroborate previous findings in the literature, when those approaches were evaluated in terms of automatic metrics, but in other cases they do not. | Re-assessing the Impact of SMT Techniques with Human Evaluation: a Case Study on English↔Croatian |
d15658932 | We address appropriate user modeling in order to generate cooperative responses to each user in spoken dialogue systems. Unlike previous studies that focus on users' knowledge or typical user types, the user model we propose is more comprehensive. Specifically, we set up three dimensions of user models: skill level with the system, knowledge level of the target domain, and degree of hastiness. Moreover, the models are automatically derived by decision tree learning using real dialogue data collected by the system. We obtained reasonable classification accuracy for all dimensions. Dialogue strategies based on the user modeling are implemented in the Kyoto city bus information system that has been developed at our laboratory. Experimental evaluation shows that cooperative responses adapted to individual users serve as good guidance for novice users without increasing the dialogue duration for skilled users. | Flexible Guidance Generation using User Model in Spoken Dialogue Systems |
d249240915 | ||
d234103448 | How to apply for financial aid: Exploring perplexity and jargon in texts for non-expert audiences | |
d37387939 | In domain-specific NER, deep models usually perform poorly due to insufficient labeled training data. In this paper, we propose a novel Neural Inductive TEaching framework (NITE) to transfer knowledge from existing domain-specific NER models into an arbitrary deep neural network in a teacher-student training manner. NITE is a general framework that builds upon transfer learning and multiple instance learning, which collaboratively not only transfers knowledge to a deep student network but also reduces the noise from teachers. NITE can help deep learning methods effectively utilize existing resources (i.e., models, labeled and unlabeled data) in a small domain. Experiments on Disease NER showed that, without using any labeled data, NITE can significantly boost the performance of a CNN-bidirectional LSTM-CRF NER neural network by nearly 30% in terms of F1-score. | NITE: A Neural Inductive Teaching Framework for Domain-Specific NER |
d226283916 | Contextualized word representations encode rich information about syntax and semantics, alongside specificities of each context of use. While contextual variation does not always reflect actual meaning shifts, it can still reduce the similarity of embeddings for word instances having the same meaning. We explore the imprint of two specific linguistic alternations, namely passivization and negation, on the representations generated by neural models trained with two different objectives: masked language modeling and translation. Our exploration methodology is inspired by an approach previously proposed for removing societal biases from word vectors. We show that passivization and negation leave their traces on the representations, and that neutralizing this information leads to more similar embeddings for words that should preserve their meaning in the transformation. We also find clear differences in how the respective features generalize across datasets. | Tracking the Traces of Passivization and Negation in Contextualized Representations |
d15451282 | 摘要 主動式學習 (active learning) 在機器學習領域中越來越受到重視，因為它可以用來優化訓練的過程，讓結果更好[13]。主要的概念是假如學習演算法可以在學習的過程中選擇比較決定性的資料點而不是挑選全部資料來做學習。接著根據對於模型而言具有代表性的資料點做挑選，將會對於學習的效果更有幫助，獲得更佳的結果。換句話說，透過觀察已知的標記資料，主動地挑選未標記的資料，並藉此獲得比挑選全部資料或是隨機抽樣資料的監督式學習方式更高的準確率以及更少的資料量。對於任何監督式學習 (supervised learning) 來說，假如想要促使學習系統表現的更好，則需要大量的被標記的資料來做訓練。但是，在這些被標記的資料中，可能會存在著對於學習系統有著負面影響的資料，從而降低學習效果與準確率。在這篇論文中，我們將會應用主動式學習的概念在系統學習的過程上，藉此來分辨資料對於系統的好壞；並測試主動式學習在訓練過程中的實際效果[9]。 Abstract: Active learning is becoming more and more important in machine learning because it can optimize the learning process [13]. The main concept is that if the learning algorithm can choose the most informative data points from which it learns, instead of using all of them, it will perform better with less training. In other words, we recursively select unlabeled data instances by observing the known labeled data instances, obtaining higher recognition accuracy with a smaller amount of data (i.e., a subset of the whole dataset) than supervised learning that trains on all of the data or on randomly chosen data [9]. For any supervised learning system to perform well, it has to be trained on many labeled instances. However, among these labeled instances there might be some worthless instances which affect the learning system and raise the training cost. Therefore, we apply the active learning concept during the training process to discriminate whether a data instance is good for the learning system or not. In this work, we investigate whether this active learning approach to selecting training data works or not. 關鍵詞:主動式學習、資料選取、多模態訊號處理、機器學習 | 基於多模態主動式學習法進行需備標記樣本之挑選用於候用校長評鑑之自動化評分系統建置 A Multimodal Active Learning Approach toward Identifying Samples to Label during the Development of Automatic Oral Presentation Assessment System for Pre-service Principals Certification Program |
d16648418 | This paper shows the actual state of development of the manual annotation tool ELAN. It presents usage requirements from three different groups of users and how one annotation model and a number of generic design principles guided the choices made during the development process of ELAN. | Annotating Multi-media / Multi-modal resources with ELAN |
d219868422 | ||
d250390474 | Aspect-based sentiment analysis (ABSA) is a fine-grained sentiment classification task. Most recent efforts adopt pre-trained models to classify sentences with aspects. However, the aspect sentiment bias from the pre-trained model brings some noise to the ABSA task. Besides, traditional methods using cross-entropy loss struggle to find the potential associations between sentiment polarities. In this work, we analyze the ABSA task from a novel cognition perspective: humans can often judge the sentiment of an aspect even if they do not know what the aspect is. Moreover, it is easier for human beings to distinguish positive and negative sentiments than others, because positive and negative are two opposite sentiments. To this end, we propose a no-aspect differential sentiment (NADS) framework for the ABSA task. We first design a no-aspect template by replacing the aspect with a special unbiased character to eliminate the sentiment bias and obtain a stronger representation. To better benefit from the template, we adopt contrastive learning between the no-aspect template and the original sentence. Then we propose a differential sentiment loss instead of the cross-entropy loss to better classify the sentiments by distinguishing the different distances between sentiments. Our proposed model is a general framework and can be combined with almost all traditional ABSA methods. Experiments on SemEval 2014 show that our framework is still able to predict the sentiment of the aspect even when we do not know what the aspect is. Moreover, our NADS framework boosts three typical ABSA methods and achieves state-of-the-art performance. | Aspect Is Not You Need: No-aspect Differential Sentiment Framework for Aspect-based Sentiment Analysis |
d227231216 | ||
d203190957 | ||
d244054905 | This work investigates neural machine translation (NMT) systems for translating English user reviews into Croatian and Serbian, two similar morphologically complex languages. Two types of reviews are used for testing the systems: IMDb movie reviews and Amazon product reviews. Two types of training data are explored: large out-of-domain bilingual parallel corpora, as well as a small synthetic in-domain parallel corpus obtained by machine translation of monolingual English Amazon reviews into the target languages. Both automatic scores and human evaluation show that using the synthetic in-domain corpus together with a selected subset of out-of-domain data is the best option. Separate results on IMDb and Amazon reviews indicate that MT systems perform differently on different review types, so user reviews generally should not be considered a homogeneous genre. Nevertheless, more detailed research on a larger amount of different reviews covering different domains/topics is needed to fully understand these differences. | On Machine Translation of User Reviews |
d2489047 | We define a new formalism, based on Sikkel's parsing schemata for constituency parsers, that can be used to describe, analyze and compare dependency parsing algorithms. This abstraction allows us to establish clear relations between several existing projective dependency parsers and prove their correctness. | A Deductive Approach to Dependency Parsing |
d2545879 | Negative expressions are common in natural language text and play a critical role in information extraction. However, the performance of current systems is far from satisfactory, largely due to their focus on intra-sentence information and their failure to consider inter-sentence information. In this paper, we propose a graph model to enrich intra-sentence features with inter-sentence features from both lexical and topic perspectives. Evaluation on the *SEM 2012 shared task corpus indicates the usefulness of contextual discourse information in negation focus identification and justifies the effectiveness of our graph model in capturing such global information. | Negation Focus Identification with Contextual Discourse Information |
d708539 | Coreference resolution is the process of identifying expressions that refer to the same entity. This paper presents a clustering algorithm for unsupervised Chinese coreference resolution. We investigate why Chinese coreference is hard and demonstrate that techniques used in coreference resolution for English can be extended to Chinese. The proposed system exploits clustering as it has advantages over traditional classification methods, such as the fact that no training data is required and it is easily extended to accommodate additional features. We conduct a set of experiments to investigate how noun phrase identification and feature selection can contribute to coreference resolution performance. Our system is evaluated on an annotated version of the TDT3 corpus using the MUC-7 scorer, and obtains comparable performance. We believe that this is the first attempt at an unsupervised approach to Chinese noun phrase coreference resolution. | A Clustering Approach for Unsupervised Chinese Coreference Resolution |
d13482033 | Previously topic models such as PLSI (Probabilistic Latent Semantic Indexing) and LDA (Latent Dirichlet Allocation) were developed for modeling the contents of plain texts. Recently, topic models for processing hypertexts such as web pages were also proposed. The proposed hypertext models are generative models giving rise to both words and hyperlinks. This paper points out that to better represent the contents of hypertexts it is more essential to assume that the hyperlinks are fixed and to define the topic model as that of generating words only. The paper then proposes a new topic model for hypertext processing, referred to as Hypertext Topic Model (HTM). | HTM: A Topic Model for Hypertexts |
d226262292 | We present a scalable, low-bias, and low-cost method for building a commonsense inference dataset that combines automatic extraction from a corpus and crowdsourcing. Each problem is a multiple-choice question that asks about contingency between basic events. We applied the proposed method to a Japanese corpus and acquired 104k problems. While humans can solve the resulting problems with high accuracy (88.9%), the accuracy of a high-performance transfer learning model remains reasonably low (76.0%). We also confirmed through dataset analysis that the resulting dataset contains low bias. We released the dataset to facilitate language understanding research. | A Method for Building a Commonsense Inference Dataset based on Basic Events |
d240225780 | Code-mixing (CM) is a frequently observed phenomenon in which multiple languages are used in an utterance or sentence. There are no strict grammatical constraints observed in code-mixing, and it contains non-standard variations of spelling. The linguistic complexity resulting from the above factors makes the computational analysis of code-mixed language a challenging task. Language identification (LI) and part-of-speech (POS) tagging are the fundamental steps that help analyze the structure of code-mixed text. Often, the LI and POS tagging tasks are interdependent in the code-mixing scenario. We cast the problem of dealing with multilingualism and grammatical structure while analyzing code-mixed sentences as a joint learning task. In this paper, we jointly train and optimize language detection and part-of-speech tagging models in the code-mixed scenario. We use a Transformer with a convolutional neural network architecture. We train a joint learning method by combining POS tagging and LI models on code-mixed social media text obtained from the ICON shared task. | A Pre-trained Transformer and CNN model with Joint Language ID and Part-of-Speech Tagging for Code-Mixed Social-Media Text |
d8493310 | Statistical NLG has largely meant n-gram modelling which has the considerable advantages of lending robustness to NLG systems, and of making automatic adaptation to new domains from raw corpora possible. On the downside, n-gram models are expensive to use as selection mechanisms and have a built-in bias towards shorter realisations. This paper looks at treebank-training of generators, an alternative method for building statistical models for NLG from raw corpora, and two different ways of using treebank-trained models during generation. Results show that the treebank-trained generators achieve improvements similar to a 2-gram generator over a baseline of random selection. However, the treebank-trained generators achieve this at a much lower cost than the 2-gram generator, and without its strong preference for shorter realisations. | Statistical Generation: Three Methods Compared and Evaluated |
d8508974 | We describe the OntoNotes methodology and its result, a large multilingual richly-annotated corpus constructed at 90% interannotator agreement. An initial portion (300K words of English newswire and 250K words of Chinese newswire) will be made available to the community during 2007. | OntoNotes: The 90% Solution |
d244082155 | Combining the annotation strengths of PDTB and RST, this study constructs a specialized Chinese discourse corpus of "run-on" sentences. "Run-on" sentences are a typical and prevalent form of discourse/text in Chinese. Despite their widespread use in Chinese, previous studies have only explored "run-on" sentences using small-scale examples. In order to carry out computational tasks in a realistic context and increase the diversity of discourse corpora resources, we establish this discourse corpus. The present study selects 500 "run-on" sentences and annotates them on the levels of discourse, syntax and semantics. We mainly adopt an integrated annotation pipeline combining RST and PDTB to process these sentences. After that, three state-of-the-art discourse parsers are employed to test the feasibility of this corpus, and the results show that this corpus performs stably and can be used as a benchmark for evaluating discourse parsing. | Constructing the Corpus of Chinese Textual 'Run-on' Sentences (CCTRS): Discourse Corpus Benchmark with Multi-layer Annotations |
d6441832 | Recent years have seen a surge of interest in stance classification in online debates. Oftentimes, however, it is important to determine not only the stance expressed by an author in her debate posts, but also the reasons behind her supporting or opposing the issue under debate. We therefore examine the new task of reason classification in this paper. Given the close interplay between stance classification and reason classification, we design computational models for examining how automatically computed stance information can be profitably exploited for reason classification. Experiments on our reason-annotated corpus of ideological debate posts from four domains demonstrate that sophisticated models of stances and reasons can indeed yield more accurate reason and stance classification results than their simpler counterparts. | Why are You Taking this Stance? Identifying and Classifying Reasons in Ideological Debates |
d4954256 | In this position paper we argue that an adequate semantic model must account for language in use, taking into account how discourse context affects the meaning of words and larger linguistic units. Distributional semantic models are very attractive models of meaning mainly because they capture conceptual aspects and are automatically induced from natural language data. However, they need to be extended in order to account for language use in a discourse or dialogue context. We discuss phenomena that the new generation of distributional semantic models should capture, and propose concrete tasks on which they could be tested. | Distributional Semantics in Use |
d7943644 | In this paper we describe the system used to participate in subtask 5b of the Phrasal Semantics challenge (Task 5) in SemEval 2013. This subtask consists in discriminating literal and figurative usage of phrases with compositional and non-compositional meanings in context. The proposed approach is based on part-of-speech tags, stylistic features and distributional statistics gathered from the same development-training-test text collection. The system obtained a relative improvement in accuracy over the most-frequent-class baseline of 49.8% in the "unseen contexts" (LexSample) setting and 8.5% in "unseen phrases" (AllWords). | UNAL: Discriminating between Literal and Figurative Phrasal Usage Using Distributional Statistics and POS tags |
d6902253 | This study presents a new hybrid approach for translation equivalent selection within a transfer-based machine translation system using an intertwined net of traditional linguistic methods together with statistical techniques. Detailed evaluation reveals that the translation quality can be improved substantially in this way. | From Statistical Term Extraction to Hybrid Machine Translation |
d5197760 | We discuss the data sources available for utterance disambiguation in a bilingual dialogue system, distinguishing global, contextual, and user-specific domains, and syntactic and semantic levels. We propose a framework for combining the available information, and techniques for increasing a stochastic grammar's sensitivity to local context and a speaker's idiolect. | A framework for utterance disambiguation in dialogue |
d21695349 | Terms are notoriously difficult to identify, both automatically and manually. This complicates the evaluation of the already challenging task of automatic term extraction. With the advent of multilingual automatic term extraction from comparable corpora, accurate evaluation becomes increasingly difficult, since term linking must be evaluated as well as term extraction. A gold standard with manual annotations for a complete comparable corpus has been developed, based on a novel methodology created to accommodate for the intrinsic difficulties of this task. In this contribution, we show how the effort involved in the development of this gold standard resulted, not only in a tool for evaluation, but also in a rich source of information about terms. A detailed analysis of term characteristics illustrates how such knowledge about terms may inspire improvements for automatic term extraction. | A Gold Standard for Multilingual Automatic Term Extraction from Comparable Corpora: Term Structure and Translation Equivalents |
d201706782 | In this work, we customized a neural machine translation system for translation of subtitles in the domain of entertainment. The neural translation model was adapted to the subtitling content and style and extended by a simple, yet effective technique for utilizing inter-sentence context for short sentences such as dialog turns. The main contribution of the paper is a novel subtitle segmentation algorithm that predicts the end of a subtitle line given the previous word-level context using a recurrent neural network learned from human segmentation decisions. This model is combined with subtitle length and duration constraints established in the subtitling industry. We conducted a thorough human evaluation with two post-editors (English-to-Spanish translation of a documentary and a sitcom). It showed a notable productivity increase of up to 37% as compared to translating from scratch and significant reductions in human translation edit rate in comparison with the post-editing of the baseline non-adapted system without a learned segmentation model. | Customizing Neural Machine Translation for Subtitling |
d16617118 | Measuring transaction success and dialogue smoothness is extremely time consuming and costly when done manually and on many dialogues but is the only possibility today for spoken dialogue systems without a very clear success state. This paper investigates the possibility of automatic derivation of transaction success for task-oriented dialogues based on simple act-topic annotations. | From Acts and Topics to Transactions and Dialogue Smoothness |
d250391850 | We present an extension of the SynSemClass event-type ontology, originally conceived as a bilingual Czech-English resource. We added German entries to the classes representing the concepts of the ontology. Having a different starting point than the original work (unannotated parallel corpus without links to a valency lexicon and, of course, different existing lexical resources), it was a challenge to adapt the annotation guidelines, the data model and the tools used for the original version. We describe the process and results of working in such a setup. We also show the next steps to adapt the annotation process, data structures and formats and tools necessary to make the addition of a new language in the future more smooth and efficient, and possibly to allow for various teams to work on SynSemClass extensions to many languages concurrently. We also present the latest release which contains the results of adding German, freely available for download as well as for online access. | Making a Semantic Event-type Ontology Multilingual |
d220445698 | ||
d53083305 | We introduce an effective and efficient method that grounds (i.e., localizes) natural sentences in long, untrimmed video sequences. Specifically, a novel Temporal GroundNet (TGN) is proposed to temporally capture the evolving fine-grained frame-by-word interactions between video and sentence. TGN sequentially scores a set of temporal candidates ending at each frame based on the exploited frame-by-word interactions, and finally grounds the segment corresponding to the sentence. Unlike traditional methods treating the overlapping segments separately in a sliding window fashion, TGN aggregates the historical information and generates the final grounding result in one single pass. We extensively evaluate our proposed TGN on three public datasets with significant improvements over the state-of-the-art. We further show the consistent effectiveness and efficiency of TGN through an ablation study and a runtime test. Recently, several related works (Gao et al., 2017; Hendricks et al., 2017) leverage a temporal sliding window approach over video sequences to generate video segment candidates, which are then independently combined (Gao et al., 2017) or compared (Hendricks et al., 2017) with the given sentence to make the grounding prediction. Although the existing works have achieved promising performance, they still suffer from inferior effectiveness and efficiency. First, existing methods project the video segment and sentence into one common space, as shown in Figure 1(b) of the paper. | Temporally Grounding Natural Sentence in Video |
d8664382 | Langendoen (1977) advanced an argument against English being a context-free language involving cross-serial subject-verb agreement in respectively constructions such as (1): "The man and the women dances and sing, respectively." | SUBJECT-VERB AGREEMENT IN RESPECTIVE COORDINATIONS AND CONTEXT-FREENESS |
d2300698 | A growing body of work has highlighted the challenges of identifying the stance a speaker holds towards a particular topic, a task that involves identifying a holistic subjective disposition. We examine stance classification on a corpus of 4873 posts across 14 topics on ConvinceMe.net, ranging from the playful to the ideological. We show that ideological debates feature a greater share of rebuttal posts, and that rebuttal posts are significantly harder to classify for stance, for both humans and trained classifiers. We also demonstrate that the number of subjective expressions varies across debates, a fact correlated with the performance of systems sensitive to sentiment-bearing terms. We present results for identifying rebuttals with 63% accuracy, and for identifying stance on a per-topic basis that range from 54% to 69%, as compared to unigram baselines that vary between 49% and 60%. Our results suggest that methods that take into account the dialogic context of such posts might be fruitful. | Cats Rule and Dogs Drool!: Classifying Stance in Online Debate |
d14317748 | Keywords: Morphology, Morphotactics, Finite State, Separated Dependencies. This paper examines dependencies between separated (non-adjacent) morphemes in natural-language words and a variety of ways to constrain them in finite-state morphology. Methods include running separate constraining transducers at runtime, composing in constraints at compile time, feature unification, and the use of FLAG DIACRITICS. Examples are provided from Modern Standard Arabic. In choosing a practical solution, developers must weigh the size, performance and flexibility of the overall system. | Constraining Separated Morphotactic Dependencies in Finite-State Grammars |
d15230028 | Manually constructed taxonomies provide a crucial resource for many NLP technologies, yet these resources are often limited in their lexical coverage due to their construction procedure. While multiple approaches have been proposed to enrich such taxonomies with new concepts, these techniques are typically evaluated by measuring the accuracy at identifying relationships between words, e.g., that a dog is a canine, rather than relationships between specific concepts. Task 14 provides an evaluation framework for automatic taxonomy enrichment techniques by measuring the placement of a new concept into an existing taxonomy: given a new word and its definition, systems were asked to attach or merge the concept into an existing WordNet concept. Five teams submitted 13 systems to the task, all of which were able to improve over the random baseline system. However, only one participating system outperformed the second, more competitive baseline that attaches a new term to the first word in its gloss with the appropriate part of speech, which indicates that techniques must be adapted to exploit the structure of glosses. | SemEval-2016 Task 14: Semantic Taxonomy Enrichment |
d13986201 | We extend our prior work on speculative sentence recognition and speculation scope detection in biomedical text to the CoNLL-2010 Shared Task on Hedge Detection. In our participation, we sought to assess the extensibility and portability of our prior work, which relies on linguistic categorization and weighting of hedging cues and on syntactic patterns in which these cues play a role. For Task 1B, we tuned our categorization and weighting scheme to recognize hedging in biological text. By accommodating a small number of vagueness quantifiers, we were able to extend our methodology to detecting vague sentences in Wikipedia articles. We exploited constituent parse trees in addition to syntactic dependency relations in resolving hedging scope. Our results are competitive with those of closeddomain trained systems and demonstrate that our high-precision oriented methodology is extensible and portable. | A High-Precision Approach to Detecting Hedges and Their Scopes |
d220446107 | ||
d672076 | This paper describes an experiment comparing four tools for recognizing named entities in Portuguese texts. The experiment was carried out on the HAREM corpora, a gold standard for named entity recognition in Portuguese. The tools are based on natural language processing techniques and also on machine learning. Specifically, one of the tools is based on Conditional Random Fields, a machine learning model that has been used for named entity recognition in several languages, while the other tools follow more traditional natural language processing approaches. The comparison results indicate advantages for different tools according to the different classes of named entities. Despite this balance among tools, we conclude by pointing out foreseeable advantages of the machine learning based tool. | Comparative Analysis of Portuguese Named Entities Recognition Tools |
d236460343 | One of the main bottlenecks in developing discourse dependency parsers is the lack of annotated training data. A potential solution is to utilize abundant unlabeled data by using unsupervised techniques, but there is so far little research in unsupervised discourse dependency parsing. Fortunately, unsupervised syntactic dependency parsing has been studied for decades, which could potentially be adapted for discourse parsing. In this paper, we propose a simple yet effective method to adapt unsupervised syntactic dependency parsing methodology for unsupervised discourse dependency parsing. We apply the method to adapt two state-of-the-art unsupervised syntactic dependency parsing methods. Experimental results demonstrate that our adaptation is effective. Moreover, we extend the adapted methods to the semi-supervised and supervised setting and surprisingly, we find that they outperform previous methods specially designed for supervised discourse parsing. Further analysis shows our adaptations result in superiority not only in parsing accuracy but also in time and space efficiency. | Adapting Unsupervised Syntactic Parsing Methodology for Discourse Dependency Parsing |
d7058755 | The CSR (Connected Speech Recognition) corpus represents a new DARPA speech recognition technology development initiative to advance the state of the art in CSR. This corpus essentially supersedes the now old Resource Management (RM) corpus that has fueled DARPA speech recognition technology development for the past 5 years. The new CSR corpus supports research on major new problems including unlimited vocabulary, natural grammar, and spontaneous speech. This paper presents an overview of the CSR corpus, reviews the definition and development of the "CSR pilot corpus", and examines the dynamic challenge of extending the CSR corpus to meet future needs. | CSR Corpus Development |
d9351555 | Towards a Quality Improvement in Machine Translation: Modelling Discourse Structure and Including Discourse Development in the Determination of Translation Equivalents | |
d260434433 | We propose a fully Bayesian framework for learning ground truth labels from noisy annotators. Our framework ensures scalability by factoring a generative, Bayesian soft clustering model over label distributions into the classic Dawid and Skene joint annotator-data model. Earlier research along these lines has neither fully incorporated label distributions nor explored clustering by annotators only or data only. Our framework incorporates all of these properties within a graphical model designed to provide better ground truth estimates of annotator responses as input to any black-box supervised learning algorithm. We conduct supervised learning experiments with variations of our models and compare them to the performance of several baseline models. | Improving Label Quality by Joint Probabilistic Modeling of Items and Annotators |
d151775 | This paper introduces a method for learning to find translations of a given source term on the Web. In the approach, the source term is used as a query and as part of patterns to retrieve and extract translations in Web pages. The method involves using a bilingual term list to learn source-target surface patterns. At runtime, the given term is submitted to a search engine, then the candidate translations are extracted from the returned summaries and subsequently ranked based on the surface patterns, occurrence counts, and transliteration knowledge. We present a prototype called TermMine that applies the method to translate terms. Evaluation on a set of encyclopedia terms shows that the method significantly outperforms state-of-the-art online machine translation systems. | Learning Source-Target Surface Patterns for Web-based Terminology Translation |
d105006 | Discovering relations among Named Entities (NEs) from large corpora is both a challenging and a useful task in the domain of Natural Language Processing, with applications in Information Retrieval (IR), Summarization (SUM), Question Answering (QA) and Textual Entailment (TE). The work we present resulted from the attempt to solve practical issues we were confronted with while building systems for the tasks of Textual Entailment Recognition and Question Answering, respectively. The approach consists in applying grammar-induced extraction patterns on a large corpus, Wikipedia, for the extraction of relations between a given Named Entity and other Named Entities. The results obtained are high in precision, making the built resource reliable and useful in applications. | Named Entity Relation Mining Using Wikipedia |
d259376479 | The availability of annotated legal corpora is crucial for a number of tasks, such as legal search, legal information retrieval, and predictive justice. Annotation is mostly assumed to be a straightforward task: as long as the annotation scheme is well defined and the guidelines are clear, annotators are expected to agree on the labels. This is not always the case, especially in legal annotation, which can be extremely difficult even for expert annotators. We propose a legal annotation procedure that takes into account annotator certainty and improves it through negotiation. We also collect annotator feedback and show that our approach contributes to a positive annotation environment. Our work invites reflection on often neglected ethical concerns regarding legal annotation. | Annotators-in-the-loop: Testing a Novel Annotation Procedure on Italian Case Law |
d259376722 | In this manuscript, we describe the participation of UMUTeam in the Explainable Detection of Online Sexism shared task proposed at SemEval 2023. This task concerns the precise and explainable detection of sexist content on Gab and Reddit, i.e., developing detailed classifiers that not only identify what is sexist, but also explain why it is sexism. Our participation in the three EDOS subtasks is based on further pre-training a model such as RoBERTa-large on new unlabeled sexism data with the Masked Language Model objective, to improve its generalization capacity and its performance on classification tasks. Once the model has been pre-trained with the new data, it is fine-tuned for the different specific sexism classification tasks. Our system has achieved excellent results in this competitive task, reaching rank 24 (of 84) in Task A, rank 23 (of 69) in Task B, and rank 13 (of 63) in Task C. | UMUTeam at SemEval-2023 Task 10: Fine-grained detection of sexism in English |
d8601805 | In this paper, we investigate the acoustic-prosodic marking of demonstrative and personal pronouns in task-oriented dialog. Although it has been hypothesized that acoustic marking affects pronoun resolution, we find that the prosodic information extracted from the data is not sufficient to predict antecedent type reliably. Interspeaker variation accounts for much of the prosodic variation that we find in our data. We conclude that prosodic cues should be handled with care in robust, speaker-independent dialog systems. | Prosody and the Resolution of Pronominal Anaphora |
d198995979 | ||
d243601666 | Agriculture is an important aspect of India's economy, and the country currently has one of the highest rates of farm producers in the world. Farmers need hand-holding with the support of technology. A chatbot is a tool or assistant that you may communicate with via instant messages. The goal of this project is to create a chatbot that uses Natural Language Processing with a Deep Learning model. In this project we implement a Multi-Layer Perceptron model and Recurrent Neural Network models on the dataset. The RNN achieved an accuracy of 97.83%. | Text Based Smart Answering System in Agriculture using RNN |
d171550200 | ||
d236999867 | ||
d2430682 | Dans cet article, nous présentons la notion d'algorithme local et d'algorithme global pour la désambiguïsation lexicale de textes. Un algorithme local permet de calculer la proximité sémantique entre deux objets lexicaux. L'algorithme global permet de propager ces mesures locales à un niveau supérieur. Nous nous servons de cette notion pour confronter un algorithme à colonies de fourmis à d'autres méthodes issues de l'état de l'art, un algorithme génétique et un recuit simulé. En les évaluant sur un corpus de référence, nous montrons que l'efficacité temporelle des algorithmes à colonies de fourmis rend possible l'amélioration automatique du paramétrage et, en retour, leur amélioration qualitative. Enfin, nous étudions plusieurs stratégies de fusion tardive des résultats de nos algorithmes pour améliorer leurs performances. ABSTRACT. In this article, we present the notions of local and global algorithms for the word sense disambiguation of texts. A local algorithm allows us to calculate the semantic similarity between two lexical objects. Global algorithms propagate local measures at the upper level. We use this notion to compare an ant colony algorithm to other methods from the state of the art: a genetic algorithm and simulated annealing. Through their evaluation on a reference corpus, we show that the run-time efficiency of the ant colony algorithm makes the automated estimation of parameters possible and, in turn, the improvement of the quality of the results as well. Last, we study several late classifier fusion strategies over the results to improve the performance. MOTS-CLÉS : désambiguïsation lexicale fondée sur des similarités, algorithmes locaux/globaux, algorithmes à colonies de fourmis, algorithmes stochastiques d'optimisation. KEYWORDS: Word Sense Disambiguation based on similarity measures, local/global algorithms, Ant Colony Algorithms, Stochastic optimization algorithms. Notes : 1. L'ensemble des relations sémantiques présentes dans WordNet est utilisé. 2. Banerjee et Pedersen (2002) introduisent également une notion de sous-séquence identique dans les définitions. Nous n'avons pas encore testé cette variante dont la complexité algorithmique est nettement supérieure à celle de notre algorithme. | Désambiguïsation lexicale de textes : efficacité qualitative et temporelle d'un algorithme à colonies de fourmis |
d33547199 | This paper describes a system that extracts information from Hungarian descriptive texts of the medical domain. Texts of clinical narratives define a sublanguage that uses limited syntax but holds the main characteristics of the language, namely free word order and rich morphology. We offer a fairly general parsing method for free word order languages and show how to use it for parsing Hungarian clinical texts. The system can handle simple cases of ellipses, anaphora, unknown words and typical abbreviations of clinical practice. The system translates texts of anamneses, patient visits, laboratory tests, medical examinations and discharge summaries into an information format usable for a medical expert system. Similarly to this expert system, the information formatting program has been written in the MPROLOG language and its experimental version runs on PROPER-16, a Hungarian-made (IBM-XT compatible) microcomputer. 1. OVERVIEW In the past few years we have developed a computational system to analyze Hungarian texts using a morphological analyzer (Prószéky et al. 1982) and a general parsing program called ANAGRAMMA (=ANAlytic GRAMMAr) (Prószéky 1984). The whole system for information formatting is based on these modules and consists of five consequent parts: (i) morphological analysis, (ii) normalization, (iii) parsing, (iv) evaluation, (v) mapping into the information format. The last block is an operation that converts the output of ANAGRAMMA, which is (ii)+(iii)+(iv), to a format that can be used by a medical expert system. The approach leads to a structure shown by Figure 1. | PROCESSING CLINICAL NARRATIVES IN HUNGARIAN |
d3452903 | ||
d16870912 | The theoretical study of the range concatenation grammar [RCG] formalism has revealed many attractive properties which may be used in NLP. In particular, range concatenation languages [RCL] can be parsed in polynomial time and many classical grammatical formalisms can be translated into equivalent RCGs without increasing their worst-case parsing time complexity. For example, after translation into an equivalent RCG, any tree adjoining grammar can be parsed in O(n^6) time. In this paper, we study a parsing technique whose purpose is to improve the practical efficiency of RCL parsers. The non-deterministic parsing choices of the main parser for a language L are directed by a guide which uses the shared derivation forest output by a prior RCL parser for a suitable superset of L. The results of a practical evaluation of this method on a wide coverage English grammar are given. Last, in Section 4, we relate some experiments with a wide coverage tree-adjoining grammar [TAG] for English. | Guided Parsing of Range Concatenation Languages |
d53593895 | There are millions of articles in the PubMed database. To facilitate information retrieval, curators in the National Library of Medicine (NLM) assign a set of Medical Subject Headings (MeSH) to each article. MeSH is a hierarchically-organized vocabulary, containing about 28K different concepts, covering the fields from clinical medicine to information sciences. Several automatic MeSH indexing models have been developed to improve the time-consuming and financially expensive manual annotation, including the NLM official tool -Medical Text Indexer, and the winner of the BioASQ Task5a challenge -DeepMeSH. However, these models are complex and not interpretable. We propose a novel end-to-end model, AttentionMeSH, which utilizes deep learning and attention mechanism to index MeSH terms to biomedical text. The attention mechanism enables the model to associate textual evidence with annotations, thus providing interpretability at the word level. The model also uses a novel masking mechanism to enhance accuracy and speed. In the final week of the BioASQ Challenge Task6a, we ranked 2nd by average MiF using an on-construction model. After the contest, we achieve close to state-of-the-art MiF performance of ∼0.684 using our final model. Human evaluations show AttentionMeSH also provides a high level of interpretability, retrieving about 90% of all expert-labeled relevant words given a MeSH-article pair at top 20 output. | AttentionMeSH: Simple, Effective and Interpretable Automatic MeSH Indexer |
d226283658 | ||
d6899907 | Although research has been conducted on the polysemous nature of some Korean psych-adjectives, no consensus has been reached on the criteria used for evaluating the polysemy. Furthermore, few formalizations (semantic structures) have been proposed for the polysemous phenomena. The purpose of this paper is twofold: 1) to propose new criteria for distinguishing polysemous psych-adjectives from monosemous ones, and 2) to provide exact semantic structures for the polysemous psych-adjectives. For the second goal in particular, I will work under the framework of Jackendoff's Conceptual Semantics. | Semantic Structures of Polysemous Psych-adjectives in Korean: A Conceptual Semantics Approach |
d8191080 | INTRODUCTION This paper describes the system used for the University of Sussex team's participation in the MUC-5 message understanding trials. What is described below is the result of 12 person-months of intensive effort over six months to adapt a pre-existing system, designed with very different objectives and application in mind, to the MUC-5 English Joint Ventures task. This task, starting from cold, is colossal: the overhead of understanding the task, the training data, the scoring, the background resources, of developing a suitable harness for the system, not to mention sorting out contractual arrangements, leaves little time for even basic porting - actual development tailored to the task was a very remote prospect. So, despite the quirks and failings exposed by the discussion below of the 'walkthrough' example, we are pleased with our system's performance, and believe the effort to have been a worthwhile part of our ongoing research. | SUSSEX UNIVERSITY: DESCRIPTION OF THE SUSSEX SYSTEM USED FOR MUC-5 |
d250390627 | Kyle (1985) proposes two types of rumors: informed rumors that are based on some private information and uninformed rumors that are not based on any information (i.e. bluffing). Also, prior studies find that when people have a credible source of information, they are likely to use a more confident textual tone in their spreading of rumors. Motivated by these theoretical findings, we propose a double-channel structure to determine the ex-ante veracity of rumors on social media. Our ultimate goal is to classify each rumor into the true, false, or unverifiable category. We first assign each text into either the certain (informed rumor) or uncertain (uninformed rumor) category. Then, we apply a lie detection algorithm to informed rumors and a thread-reply agreement detection algorithm to uninformed rumors. Using the dataset of SemEval 2019 Task 7, which requires ex-ante threefold classification (true, false, or unverifiable) of social media rumors, our model yields a macro-F1 score of 0.4027, outperforming all the baseline models and the second-place winner (Gorrell et al., 2019). Furthermore, we empirically validate that the double-channel structure outperforms single-channel structures which apply either lie detection or agreement detection to all posts. | Detecting Rumor Veracity with Only Textual Information by Double-Channel Structure |
d5810690 | Automatic post-editors (APEs) enable the re-use of black box machine translation (MT) systems for a variety of tasks where different aspects of translation are important. In this paper, we describe APEs that target adequacy errors, a critical problem for tasks such as cross-lingual question-answering, and compare different approaches for post-editing: a rule-based system and a feedback approach that uses a computer in the loop to suggest improvements to the MT system. We test the APEs on two different MT systems and across two different genres. Human evaluation shows that the APEs significantly improve adequacy, regardless of approach, MT system or genre: 30-56% of the post-edited sentences have improved adequacy compared to the original MT. | Can Automatic Post-Editing Make MT More Meaningful? |
d2579165 | Large corpora are crucial resources for building many statistical language technology systems, and the Web is a readily-available source of vast amounts of linguistic data from which to construct such corpora. Nevertheless, little research has considered how to best build corpora from the Web. In this study we consider the importance of language identification in Web corpus construction. Beginning with a Web crawl consisting of documents identified as English using a standard language identification tool, we build corpora of varying sizes both with, and without, further filtering of non-English documents with a state-of-the-art language identifier. We show that the perplexity of a standard English corpus is lower under a language model trained from a Web corpus built with this extra language identification step, demonstrating the importance of state-of-the-art language identification in Web corpus construction. | langid.py for better language modelling |
d14483333 | In applications such as translation and paraphrase, operations are carried out on grammars at the meta level. This paper shows how a meta-grammar, defining structure at the meta level, is useful in the case of such operations; in particular, how it solves problems in the current definition of Synchronous TAG (Shieber, 1994) caused by ignoring such structure in mapping between grammars, for applications such as translation. Moreover, essential properties of the formalism remain unchanged. | A Meta-Level Grammar: Redefining Synchronous TAG for Translation and Paraphrase |
d42540340 | Corpus-based approaches to natural language analysis that utilize recent sophisticated machine learning algorithms now achieve very good performance. In this talk I will overview and categorize machine learning based natural language processing tasks and our experiences of applying machine learning to various tasks such as segmentation, POS tagging, phrase and NE chunking, and syntactic parsing. I then discuss pros and cons of machine learning approaches and future issues. Finally, I will introduce an ongoing project of annotated corpus maintenance tools for developing consistent data for corpus and machine learning based NLP research. | Machine Learning based NLP: Experiences and Supporting Tools |
d157618 | We report on an effort to build a corpus of Modern Hebrew tagged with parts of speech and morphology. We designed a tagset specific to Hebrew while focusing on four aspects: the tagset should be consistent with common linguistic knowledge; there should be maximal agreement among taggers as to the tags assigned to maintain consistency; the tagset should be useful for machine taggers and learning algorithms; and the tagset should be effective for applications relying on the tags as input features. In this paper, we illustrate these issues by explaining our decision to introduce a tag for beinoni forms in Hebrew. We explain how this tag is defined, and how it helped us improve manual tagging accuracy to a high level, while improving automatic tagging and helping in the task of syntactic chunking. | Tagging a Hebrew Corpus: The Case of Participles |
d5935518 | This paper deals with the task of finding generally applicable substitutions for a given input term. We show that the output of a distributional similarity system baseline can be filtered to obtain terms that are not simply similar but frequently substitutable. Our filter relies on the fact that when two terms are in a common entailment relation, it should be possible to substitute one for the other in their most frequent surface contexts. Using the Google 5-gram corpus to find such characteristic contexts, we show that for the given task, our filter improves the precision of a distributional similarity system from 41% to 56% on a test set comprising common transitive verbs. | Finding Word Substitutions Using a Distributional Similarity Baseline and Immediate Context Overlap |
d1898739 | We present an approach to learning a modeltheoretic semantics for natural language tied to Freebase. Crucially, our approach uses an open predicate vocabulary, enabling it to produce denotations for phrases such as "Republican front-runner from Texas" whose semantics cannot be represented using the Freebase schema. Our approach directly converts a sentence's syntactic CCG parse into a logical form containing predicates derived from the words in the sentence, assigning each word a consistent semantics across sentences. This logical form is evaluated against a learned probabilistic database that defines a distribution over denotations for each textual predicate. A training phase produces this probabilistic database using a corpus of entitylinked text and probabilistic matrix factorization with a novel ranking objective function. We evaluate our approach on a compositional question answering task where it outperforms several competitive baselines. We also compare our approach against manually annotated Freebase queries, finding that our open predicate vocabulary enables us to answer many questions that Freebase cannot. | Learning a Compositional Semantics for Freebase with an Open Predicate Vocabulary |
d17103462 | This paper presents a system which adopts a standard sequence labeling technique for hedge detection and scope finding. For the first task, hedge detection, we formulate it as a hedge labeling problem, while for the second task, we use a two-step labeling strategy, one for hedge cue labeling and the other for scope finding. In particular, various kinds of syntactic features are systematically exploited and effectively integrated using a large-scale normalized feature selection method. Evaluation on the CoNLL-2010 shared task shows that our system achieves stable and competitive results for all the closed tasks. Furthermore, post-deadline experiments show that the performance can be much further improved using sufficient feature selection. | Hedge Detection and Scope Finding by Sequence Labeling with Normalized Feature Selection |
d259833873 | Grapheme-to-phoneme (G2P) conversion is a task that is inherently related to both written and spoken language. Therefore, our submission to the G2P shared task builds off of mSLAM (Bapna et al., 2022), a 600M parameter encoder model pretrained simultaneously on text from 101 languages and speech from 51 languages. For fine-tuning a G2P model, we combined mSLAM's text encoder, which uses characters as its input tokens, with an uninitialized single-layer RNN-T decoder (Graves, 2012) whose vocabulary is the set of all 381 phonemes appearing in the shared task data. We took an explicitly multilingual approach to modeling the G2P tasks, fine-tuning and evaluating a single model that covered all the languages in each task, and adding language codes as prefixes to the input strings as a means of specifying the language of each example. Our models perform well in the shared task's "high" setting (in which they were trained on 1,000 words from each language), though they do poorly in the "low" task setting (training on only 100 words from each language). Our models also perform reasonably in the "mixed" setting (training on 100 words in the target language and 1000 words in a related language), hinting that mSLAM's multilingual pretraining may be enabling useful cross-lingual sharing. | Fine-tuning mSLAM for the SIGMORPHON 2022 Shared Task on Grapheme-to-Phoneme Conversion |
d14078717 | Psycho-acoustical research investigates how human listeners are able to separate sounds that stem from different sources. This ability might be one of the reasons that human speech processing is robust to noise, but methods that exploit this are, to our knowledge, not used in systems for automatic formant extraction or in modern speech recognition systems. Therefore we investigate the possibility of using harmonics that are consistent with a harmonic complex as the basis for a robust formant extraction algorithm. With this new method we aim to overcome limitations of most modern automatic speech recognition systems by taking advantage of the robustness of harmonics at formant positions. We tested the effectiveness of our formant detection algorithm on Hillenbrand's annotated American English Vowels dataset and found that in pink noise the results are competitive with existing systems. Furthermore, our method needs no training and is implementable as a real-time system, which contrasts with many of the existing systems. | Psycho-acoustically motivated formant feature extraction |
d237365379 | Metaphor detection plays an important role in tasks such as machine translation and human-machine dialogue. As more users express their opinions on products or other topics on social media through metaphorical expressions, this task is particularly topical. Most of the research in this field focuses on English, and there are few studies on minority languages that lack language resources and tools. Moreover, metaphorical expressions have different meanings in different language environments. We therefore established a deep neural network (DNN) framework for Uyghur metaphor detection tasks. The proposed method can focus on the multilevel semantic information of the text from word embedding, part of speech and location, which makes the feature representation more complete. We also use the emotional information of words to learn the emotional consistency features of metaphorical words and their context. A qualitative analysis further confirms the need for broader emotional information in metaphor detection. Our results indicate the performance of Uyghur metaphor detection can be effectively improved with the help of multi-attention and emotional information. | Uyghur Metaphor Detection Via Considering Emotional Consistency |
d235599183 | ||
d18012161 | This paper describes a deep-parsing approach to SemEval-2014 Task 6, a novel context-informed supervised parsing and semantic analysis problem in a controlled domain. The system comprises a hand-built rule-based solution based on a pre-existing broad coverage deep grammar of English, backed up by an off-the-shelf data-driven PCFG parser, and achieves the best score reported among the task participants. | UW-MRS: Leveraging a Deep Grammar for Robotic Spatial Commands |
d17306062 | Ambiguous Arabic Words Disambiguation: The results | |
d2507032 | In this paper, we present the Taalportaal project. Taalportaal will create an online portal containing an exhaustive and fully searchable electronic reference of Dutch and Frisian phonology, morphology and syntax. Its content will be in English. The main aim of the project is to serve the scientific community by organizing, integrating and completing the grammatical knowledge of both languages, and to make this data accessible in an innovative way. The project is carried out by a consortium of four universities and research institutions. Content is generated in two ways: (1) by a group of authors who, starting from existing grammatical resources, write text directly in XML, and (2) by integrating the full Syntax of Dutch into the portal, after an automatic conversion from Word to XML. We discuss the project's workflow, content creation and management, the actual web application, and the way in which we plan to enrich the portal's content, such as by crosslinking between topics and linking to external resources. | Taalportaal: an online grammar of Dutch and Frisian |
d1486279 | A probabilistic approach to lexical access from a recognized phone sequence is presented. Lexical access is seen as finding the word sequence that maximizes the lexical likelihood of a sequence of phones and durations as recognized by a phone recognizer. This is theoretically correct for minimum error rate recognition within the model presented and is intuitively pleasing since it means that the "confusion matrix" of the phone recognizer will be learned and its regularities exploited. The lexical likelihoods are estimated from training data provided by the phone recognizer using statistical decision trees. Classification trees are used to estimate the phone realization distributions and regression trees are used to estimate the phone duration distributions. We find they can effectively capture allophonic variation, alternative pronunciation, word co-articulation and segmental durations. We describe a simplified, but efficient implementation of these models for lexical access in the DARPA resource management recognition task. | LEXICAL ACCESS WITH A STATISTICALLY-DERIVED PHONETIC NETWORK |