_id | text | title |
|---|---|---|
d1922244 | Verb-particle combinations (VPCs) consist of a verbal and a preposition/particle component, and often carry some additional meaning beyond the meaning of their parts. If a data-driven morphological parser or a syntactic parser is trained on a dataset annotated with extra information for VPCs, it will be able to identify VPCs in raw texts. In this paper, we examine how syntactic parsers perform on this task and we introduce VPCTagger, a machine learning-based tool that is able to identify English VPCs in context. Our method consists of two steps: it first selects VPC candidates on the basis of syntactic information and then selects genuine VPCs among them by exploiting new features like semantic and contextual ones. Based on our results, we see that VPCTagger outperforms state-of-the-art methods in the VPC detection task. | VPCTagger: Detecting Verb-Particle Constructions With Syntax-Based Methods |
d14866455 | We describe an experiment on automatically collecting large language- and topic-specific corpora using a focused Web crawler. Our crawler combines efficient crawling techniques with a common text classification tool. Given a sample corpus of medical documents, we automatically extract query phrases and then acquire seed URLs with a standard search engine. Starting from these seed URLs, the crawler builds a new large collection consisting only of documents that satisfy both the language and the topic model. The manual analysis of the acquired English and German medicine corpora reveals the high accuracy of the crawler. However, there are significant differences between the two languages. | Language Specific and Topic Focused Web Crawling |
d1278819 | This paper explores the use of Propositional Dynamic Logic (PDL) as a suitable formal framework for describing Sign Language (SL), the language of deaf people, in the context of natural language processing. SLs are visual, complete, standalone languages which are just as expressive as oral languages. Signs in SL usually correspond to sequences of highly specific body postures interleaved with movements, which make reference to real world objects, characters or situations. Here we propose a formal representation of SL signs, that will help us with the analysis of automatically-collected hand tracking data from French Sign Language (FSL) video corpora. We further show how such a representation could help us with the design of computer aided SL verification tools, which in turn would bring us closer to the development of an automatic recognition system for these languages. | Sign Language Lexical Recognition With Propositional Dynamic Logic |
d20806800 | Electronic databases are increasingly popular tools in typological research. Despite the advantages of such tools, there are problems connected both with their construction and with their standardization. For instance, there is generally a considerable gap between the information stored in typological databases and primary data: primary morphosyntactic data are much more difficult to handle computationally than typological generalizations. Moreover, the need for standardization has led typologists to develop highly refined glossing practices and guidelines for collecting data, but there are still too few initiatives to increase standardization in typological databases. The aim of this paper is to suggest a radically new approach to the storage of data for typological analysis. The Med-Typ Database, which is currently being developed at the University of Pavia, has been providing us with concrete experience of the problems that need to be addressed when creating typological databases. This database uses XML annotation and aims to be both a collection of data for future analyses of areal distribution of features within the Mediterranean area and a tool for systematic analysis of the range of variation found in various typological domains. | MED-TYP: A Typological Database for Mediterranean Languages |
d221097972 | ||
d2533066 | Recent studies into Web retrieval have shown that word sense disambiguation can increase retrieval effectiveness. However, it remains unclear as to the minimum disambiguation accuracy required and the granularity with which one must define word sense in order to maximize these benefits. This study answers these questions using a simulation of the effects of ambiguity on information retrieval. It goes beyond previous studies by differentiating between homonymy and polysemy. Results show that retrieval is more sensitive to polysemy than homonymy and that, when resolving polysemy, accuracy as low as 55% can potentially lead to increased performance. | Differentiating Homonymy and Polysemy in Information Retrieval |
d238638465 | ||
d218973710 | ||
d529001 | Describing Discourse Semantics | |
d53106215 | This paper describes the system of our team Phoenix for participating in the CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies. Given the annotated gold-standard data in CoNLL-U format, we train the tokenizer, tagger and parser separately for each treebank based on an open-source pipeline tool, UDPipe. Our system reads plain text as input, performs the preprocessing steps (tokenization, lemmatization, morphology) and finally outputs the syntactic dependencies. For low-resource languages with no training data, we use cross-lingual techniques to build models with some close languages instead. In the official evaluation, our system achieves macro-averaged scores of 65.61%, 52.26% and 55.71% for LAS, MLAS and BLEX respectively. | Multilingual Universal Dependency Parsing from Raw Text with Low-resource Language Enhancement |
d9271055 | The ACL-2005 Workshop on Parallel Texts hosted a shared task on building statistical machine translation systems for four European language pairs: French-English, German-English, Spanish-English, and Finnish-English. Eleven groups participated in the event. This paper describes the goals, the task definition and resources, as well as results and some analysis. | Shared Task: Statistical Machine Translation between European Languages |
d18035473 | While automatic metrics of translation quality are invaluable for machine translation research, a deeper understanding of translation errors requires more focused evaluations designed to target specific aspects of translation quality. We show that Word Sense Disambiguation (WSD) can be used to evaluate the quality of machine translation lexical choice, by applying a standard phrase-based SMT system to the SemEval-2010 Cross-Lingual WSD task. This case study reveals that the SMT system does not perform as well as a WSD system trained on the exact same parallel data, and that local context models based on source phrases and target n-grams are much weaker representations of context than the simple templates used by the WSD system. | A Semantic Evaluation of Machine Translation Lexical Choice |
d17625735 | Lexical Resources are a critical component of Natural Language Processing applications. However, the high cost of comparing and merging different resources has been a bottleneck to obtaining richer resources and a broader range of potential uses for a significant number of languages. With the objective of reducing cost by eliminating human intervention, we present a new method for the automatic merging of resources. This method includes both the automatic mapping of the resources involved to a common format and their merging once in this format. This paper presents how we have addressed the merging of two verb subcategorization frame lexica for Spanish, but our method will be extended to cover other types of Lexical Resources. The achieved results, which almost replicate human work, demonstrate the feasibility of the approach. | A Method Towards the Fully Automatic Merging of Lexical Resources |
d16719235 | String comparison methods such as BLEU (Papineni et al., 2002) are the de facto standard in MT evaluation (MTE) and in MT system parameter tuning (Och, 2003). It is difficult for these metrics to recognize legitimate lexical and grammatical paraphrases, which is important for MT system tuning (Madnani, 2010). We present two methods to address this: a shallow lexical substitution technique and a grammar-driven paraphrasing technique. Grammatically precise paraphrasing is novel in the context of MTE, and demonstrating its usefulness is a key contribution of this paper. We use these techniques to paraphrase a single reference, which, when used for parameter tuning, leads to superior translation performance over baselines that use only human-authored references. | Shallow and Deep Paraphrasing for Improved Machine Translation Parameter Optimization |
d2260483 | Plug and Play Speech Understanding. Manny Rayner, Ian Lewin & Genevieve Gorrell. netdecisions Ltd, Wellington House, East Road, Cambridge CB1 1BH, UK | |
d11757102 | This paper describes our SeemGo system for the task of Aspect Based Sentiment Analysis in SemEval-2014. The subtask of aspect term extraction is cast as a sequence labeling problem modeled with Conditional Random Fields, which obtains F-scores of 0.683 for Laptops and 0.791 for Restaurants by exploiting both word-based features and context features. The other three subtasks are solved by the Maximum Entropy model, with the occurrence counts of unigram and bigram words of each sentence as features. The subtask of aspect category detection obtains the best result when applying the Boosting method on the Maximum Entropy model, with a precision of 0.869 for Restaurants. The Maximum Entropy model also shows good performance in the subtasks of both aspect term and aspect category polarity classification. | SeemGo: Conditional Random Fields Labeling and Maximum Entropy Classification for Aspect Based Sentiment Analysis |
d9512840 | Traditional accounts of quantifier scope employ qualitative constraints or rules to account for scoping preferences. This paper outlines a feature-based parsing algorithm for a grammar with multiple simultaneous levels of representation, one of which corresponds to a partial ordering among quantifiers according to scope. The optimal such ordering (as well as the ranking of other orderings) is determined in this grammar not by absolute constraints, but by stochastic heuristics based on the degree of alignment among the representational levels. A Prolog implementation is described and its accuracy is compared with that of other accounts. | Parsing Parallel Grammatical Representations |
d2306140 | In this work we present further development of the SpLaSH (Spoken Language Search Hawk) project. SpLaSH implements a data model for annotated speech corpora integrated with textual markup (e.g. POS tagging, syntax, pragmatics), including a toolkit used to perform complex queries across speech and text labels. The integration of time-aligned annotations (TMA), represented using Annotation Graphs, with text-aligned ones (TXA), stored in generic XML files, is provided by a data structure, the Connector Frame, acting as a look-up table linking temporal data to words in the text. SpLaSH imposes a very limited number of constraints on the data model design, allowing the integration of annotations developed separately within the same dataset and without any relative dependency. It also provides a GUI allowing three types of queries: simple queries on TXA or TMA structures, sequence queries on the TMA structure and cross queries on both TXA and TMA integrated structures. In this work new SpLaSH features will be presented: the SpLaSH Query Language (SpLaSHQL) and Query Sequence. | New features in Spoken Language Search Hawk (SpLaSH): Query Language and Query Sequence |
d8222729 | Compounds constitute a specific issue in search, in particular in languages where they are written in one word, as is the case for Danish and the other Scandinavian languages. For such languages, expansion of the query compound into separate lemmas is a way of finding the often frequent alternative synonymous phrases in which the content of a compound can also be expressed. However, it is crucial to note that the number of irrelevant hits is generally very high when using this expansion strategy. The aim of this paper is to examine how we can obtain better search results on split compounds, partly by looking at the internal structure of the original compound, partly by analyzing the context in which the split compound occurs. We perform an NP analysis and introduce a new, linguistically based threshold for retrieved hits. The results obtained by using this strategy demonstrate that compound splitting combined with a shallow linguistic analysis focusing on the recognition of NPs can improve search by bringing down the number of irrelevant hits. | Query Expansion on Compounds |
d172520 | Most evaluations of part-of-speech tagging compare the output of an automatic tagger to some established standard, define the differences as tagging errors and try to remedy them by, e.g., more training of the tagger. The present article is based on a manual analysis of a large number of tagging errors. Some clear patterns among the errors can be discerned, and the sources of the errors as well as possible alternative methods of remedy are presented and discussed. In particular, the problems with undecidable cases are treated. | Linguistic Indeterminacy as a Source of Errors in Tagging |
d21690111 | Medical texts such as electronic health records are necessary for medical AI development. Nevertheless, it is difficult to use such data directly because medical texts are written mostly in natural language, requiring natural language processing (NLP) for medical texts. To boost the fundamental accuracy of medical NLP, a high-coverage dictionary is required, especially one that fills the gap separating standard medical names and real clinical words. This study developed a Japanese disease name dictionary called "J-MeDic" to fill this gap. The names that comprise the dictionary were collected from approximately 45,000 manually annotated real clinical case reports. We allocated the standard disease code (ICD-10) to them with manual, semi-automatic, or automatic methods, in accordance with their frequency. J-MeDic covers 7,683 concepts (in ICD-10) and 51,784 written forms. Among the names covered by J-MeDic, 55.3% (6,391/11,562) were covered by standard disease names (SDNs); 44.7% (5,171/11,562) were covered by names added from the case report (CR) corpus. Among them, 8.4% (436/5,171) were basically coded by humans, and 91.6% (4,735/5,171) were basically coded automatically. We investigated the coverage of this resource using discharge summaries from a hospital; 66.2% of the names matched entries, revealing the practical feasibility of our dictionary. | J-MeDic: A Japanese Disease Name Dictionary based on Real Clinical Usage |
d6593598 | The temporal relation classification task suffers from having fourteen target relations, a skewed distribution of those relations, and a relatively small amount of data. To overcome these issues, methods such as merging target relations and increasing the data size with a closure algorithm have been used. However, the method using merged relations has the problem of how to recover the original relations. In this paper, a new reduced-relation method is proposed. The method decomposes a target relation into four pairs of endpoints with three target relations. After classifying the relation of each endpoint pair, the four classified relations are combined into one of the original fourteen target relations. In the combining step, two heuristics are examined. | Temporal Relation Identification with Endpoints |
d16480028 | All of BBN's research under the TIPSTER III program has focused on doing extraction by applying statistical models trained on annotated data, rather than by using programs that execute hand-written rules. Within the context of MUC-7, the SIFT system for extraction of template entities (TE) and template relations (TR) used a novel, integrated syntactic/semantic language model to extract sentence-level information, and then synthesized information across sentences using in part a trained model for cross-sentence relations. At the named entity (NE) level as well, in both MET-1 and MUC-7, BBN employed a trained, HMM-based model. The results in these TIPSTER evaluations are evidence that such trained systems, even at their current level of development, can perform roughly on a par with those based on rules hand-tailored by experts. In addition, such trained systems have some significant advantages: they can be easily ported to new domains by simply annotating fresh data, and the complex interactions that make rule-based systems difficult to develop and maintain can here be learned automatically from the training data. We believe that improved and extended versions of such trained models have the potential for significant further progress toward practical systems for information extraction. | ALGORITHMS THAT LEARN TO EXTRACT INFORMATION - BBN: TIPSTER PHASE III |
d18474269 | The main purpose of this paper is the exploitation and application of an audio and video bimodal corpus of the Chinese language in broadcasting. It deals with the design of the size and structure of speech samples according to radio and television program features. Secondly, it discusses the annotation method for broadcast speech, with achievements made and suggested future improvements. Finally, it presents an attempt to describe the distribution of annotated items in our corpus. | Broadcast Audio and Video Bimodal Corpus Exploitation and Application |
d6789293 | In this paper, we report on methods to detect and repair lexical errors for deep grammars. The lack of coverage has long been the major problem for deep processing. The existence of various errors in hand-crafted large grammars prevents their usage in real applications. The manual detection and repair of errors requires a significant amount of human effort. An experiment with the British National Corpus shows that about 70% of the sentences contain words unknown to the English Resource Grammar (ERG; Copestake and Flickinger, 2000). With the help of error mining methods, many lexical errors are discovered, which cause a large part of the parsing failures. Moreover, with a lexical type predictor based on a maximum entropy model, new lexical entries are automatically generated. The contributions of various features to the model are evaluated. With the disambiguated full parsing results, the precision of the predictor is enhanced significantly. | Automated Deep Lexical Acquisition for Robust Open Texts Processing |
d21714217 | Neural word embedding models trained on sizable corpora have proved to be a very efficient means of representing meaning. However, the abstract vectors representing words and phrases in these models are not interpretable for humans by themselves. In this paper we present the Thing Recognizer, a method that assigns explicit symbolic semantic features from a finite list of terms to words present in an embedding model, making the model interpretable for humans and covering the semantic space with a controlled vocabulary of semantic features. We do this in a cross-lingual manner, applying semantic tags taken from lexical resources in one language (English) to the embedding space of another (Hungarian). | Cross-Lingual Generation and Evaluation of a Wide-Coverage Lexical Semantic Resource |
d14158419 | We describe the FAUST entry to the BioNLP 2011 shared task on biomolecular event extraction. The FAUST system explores several stacking models for combination, using as base models the UMass dual decomposition (Riedel and McCallum, 2011) and Stanford event parsing (McClosky et al., 2011b) approaches. We show that stacking is a straightforward way of improving performance for event extraction and find that it is most effective when using a small set of stacking features and the base models use slightly different representations of the input data. The FAUST system obtained 1st place in three out of four tasks: 1st place in Genia Task 1 (56.0% f-score) and Task 2 (53.9%), 2nd place in the Epigenetics and Post-translational Modifications track (35.0%), and 1st place in the Infectious Diseases track (55.6%). | Model Combination for Event Extraction in BioNLP 2011 |
d218974383 | ||
d8764553 | We describe the word sense annotation layer in Eukalyptus, a freely available five-domain corpus of contemporary Swedish with several annotation layers. The annotation uses the SALDO lexicon to define the sense inventory, and allows word sense annotation of compound segments and multiword units. We give an overview of the new annotation tool developed for this project, and finally present an analysis of the inter-annotator agreement between two annotators. | A Multi-domain Corpus of Swedish Word Sense Annotation |
d28894878 | With the rapid development of the Internet and the arrival of the big-data era, automatic summarization has become a popular research topic in recent years. Extractive summarization selects, according to a predefined summarization ratio, important sentences that represent the gist or topics of the original document from text documents or spoken documents to serve as a summary. In related work, a framework that combines language modeling with Kullback-Leibler divergence to select important sentences has been shown to deliver good preliminary results on both text and spoken document summarization tasks. Building on this framework, this paper investigates the impact of sentence clarity information on the spoken document summarization task, and further uses clarity to reinterpret how important and representative sentences can be appropriately selected in automatic summarization. In addition, this paper studies adaptation methods for sentence models: applying the concept of relevance, we attempt to re-estimate and rebuild each sentence's language model from its own relevance information, so that it represents the semantic content of the sentence more precisely and improves summarization performance. The spoken document summarization experiments in this paper use the Public Television Service broadcast news corpus (MATBN); the results show that, compared with other existing unsupervised summarization methods, our novel summarization methods provide clear performance improvements. Keywords: extractive summarization, language modeling, Kullback-Leibler divergence, sentence clarity, relevance. 1. Introduction: With the arrival of the big-data era, vast amounts of text and multimedia audio-visual information are rapidly transmitted and shared around the world, giving rise to the problem of information overload. Enabling people to browse the ever-growing textual or multimedia information quickly and efficiently has become an urgent research topic, and among the many research directions, automatic summarization is regarded as an indispensable key technology [16]. The purpose of automatic summarization is to extract the important semantic and topic information from a single document or multiple documents, so that users can browse and understand the gist of documents more efficiently and quickly obtain the information they need without spending large amounts of time reviewing document content. On the other hand, speech is one of the most informative components of multimedia documents; how to use spoken document summarization techniques to automatically and efficiently process time-sequential multimedia audio-visual content, such as television news, radio news, mail, e-mail, meetings and lectures | Improved Sentence Modeling Techniques for Extractive Speech Summarization |
d53080683 | State-of-the-art networks that model relations between two pieces of text often use complex architectures and attention. In this paper, instead of focusing on architecture engineering, we take advantage of small amounts of labelled data that model semantic phenomena in text to encode matching features directly in the word representations. This greatly boosts the accuracy of our reference network, while keeping the model simple and fast to train. Our approach also beats a tree kernel model that uses similar input encodings, and neural models which use advanced attention and compare-aggregate mechanisms. | Semantic Linking in Convolutional Neural Networks for Answer Sentence Selection |
d12691266 | This paper extracts collocations based on semantic dependency parsing, and then constructs a collocation bank with two levels according to frequency: the instance level and the semantic level. Compared with conventional extraction methods, the collocations extracted in this paper have a closer relationship and higher quality in both lexical structure and semantic structure. | Construction of Semantic Collocation Bank Based on Semantic Dependency Parsing |
d13207981 | This paper presents development and test sets for machine translation of search queries in cross-lingual information retrieval in the medical domain. The data consists of the total of 1,508 real user queries in English translated to Czech, German, and French. We describe the translation and review process involving medical professionals and present a baseline experiment where our data sets are used for tuning and evaluation of a machine translation system. | Multilingual Test Sets for Machine Translation of Search Queries for Cross-Lingual Information Retrieval in the Medical Domain |
d245855885 | This paper describes the Huawei Artificial Intelligence Application Research Center's neural machine translation systems and submissions to the WMT21 biomedical translation shared task. Four of the submissions achieve state-of-the-art BLEU scores based on the officially released automatic evaluation results (EN→FR, EN↔IT and ZH→EN). We perform experiments to unveil practical insights into the domain adaptation techniques involved, including finetuning order, terminology dictionaries, and ensemble decoding. Issues associated with overfitting and under-translation are also discussed. | Huawei AARC's Submissions to the WMT21 Biomedical Translation Task: Domain Adaption from a Practical Perspective |
d17564610 | Chart parsing is directional in the sense that it works from a starting point (usually the beginning of the sentence), extending its activity usually in a rightward manner. We introduce the concept of a chart that works outward from islands, making sense of as much of the sentence as is actually possible, and then leads to predictions of missing fragments. So, for any place where easily identifiable fragments occur in the sentence, the process will extend to both the left and the right of the islands, until possibly completely missing fragments are reached. At that point, by virtue of the fact that both a left and a right context were found, heuristics can be introduced that predict the nature of the missing fragments. | Island Parsing and Bidirectional Charts |
d195740730 | ||
d227231502 | ||
d61325961 | We introduce a new generation of commercial translation software, based primarily on statistical learning and statistical language models. | Language Weaver: The Next Generation of Machine Translation |
d62314944 | Text normalization is an indispensable stage for natural language processing of social media data with available NLP tools. We divide the normalization problem into 7 categories, namely: letter case transformation, replacement rules & lexicon lookup, proper noun detection, deasciification, vowel restoration, accent normalization and spelling correction. We propose a cascaded approach where each ill-formed word passes through these 7 modules and is investigated for possible transformations. This paper presents the first results for the normalization of Turkish and tries to shed light on the different challenges in this area. We report a 40-percentage-point improvement over a lexicon lookup baseline and nearly 50 percentage points over available spelling correctors. | A Cascaded Approach for Social Media Text Normalization of Turkish |
d44173291 | Nowadays, social media have become a platform where people can easily express their opinions and emotions about any topic, such as politics, movies, music, electronic products and many others. On the other hand, politicians, companies, and businesses are interested in automatically analyzing people's opinions and emotions. In the last decade, a lot of effort has been put into extracting sentiment polarity from texts. Recently, the focus has expanded to also cover emotion recognition from texts. In this work, we expand an existing emotion lexicon, DepecheMood, by leveraging semantic knowledge from English WordNet (EWN). We create an expanded lexicon, EmoWordNet, consisting of 67K terms aligned with EWN, almost 1.8 times the size of DepecheMood. We also evaluate EmoWordNet in an emotion recognition task using the SemEval 2007 news headlines dataset and achieve an improvement compared to the use of DepecheMood. EmoWordNet is publicly available to speed up research in the field at http://oma-project.com. | EmoWordNet: Automatic Expansion of Emotion Lexicon Using English WordNet |
d8155280 | We present a novel translation model, which simultaneously exploits the constituency and dependency trees on the source side, to combine the advantages of both types of trees. We take the head-dependents relations of dependency trees as the backbone and incorporate phrasal nodes of constituency trees into the source side of our translation rules, with strings as the target side. Our rules retain the capacity for long-distance reordering and compatibility with phrases. Large-scale experimental results show that our model achieves significant improvements over the constituency-to-string (+2.45 BLEU on average) and dependency-to-string (+0.91 BLEU on average) models, which employ only a single type of tree, and significantly outperforms the state-of-the-art hierarchical phrase-based model (+1.12 BLEU on average), on three Chinese-English NIST test sets. | Translation with Source Constituency and Dependency Trees |
d234777813 | ||
d250390646 | The growing demand for learning English as a second language has led to an increasing interest in automatic approaches for assessing spoken language proficiency. One of the most significant challenges in this field is the lack of publicly available annotated spoken data. Another common issue is the lack of consistency and coherence in human assessment. To tackle both problems, in this paper we address the task of automatically predicting the scores of spoken test responses of English-as-a-second-language learners by training neural models on written data and using the presence of grammatical errors as a feature, as they can be considered consistent indicators of proficiency through their distribution and frequency. Specifically, we train a feature extractor on EF-CAMDAT, a large written corpus containing error annotations and proficiency levels assigned by human experts, in order to extract information related to grammatical errors and, in turn, we use the resulting model for inference on the CLC-FCE corpus, on the ICNALE corpus, and on the spoken section of the TLT-school corpus, a collection of proficiency tests taken by Italian students. The work investigates the impact of the feature extractor on spoken proficiency assessment as well as the written-to-spoken approach. We find that our error-based approach can be beneficial for assessing spoken proficiency. The results obtained on the considered datasets are discussed and evaluated with appropriate metrics. | Cross-corpora experiments of automatic proficiency assessment and error detection for spoken English |
d253762026 | The industrialization process associated with the so-called Industrial Revolution in 19th-century Great Britain was a time of profound changes, including in the English lexicon. An important yet understudied phenomenon is the semantic shift in the lexicon of mechanisation. In this paper we present the first large-scale analysis of terms related to mechanization over the course of the 19th century in English. We draw on a corpus of historical British newspapers comprising 4.6 billion tokens and train historical word embedding models. We test existing semantic change detection techniques and analyse the results in light of previous historical linguistic scholarship. | Machines in the media: semantic change in the lexicon of mechanization in 19th-century British newspapers |
d219301511 | ||
d1308273 | Languages are born, evolve and, eventually, die. During this evolution their spelling rules (and sometimes the syntactic and semantic ones) change, putting old documents out of use. In Portugal, a pair of political agreements with Brazil forced relevant changes in the way the Portuguese language is written. In this article we detail these two Orthographic Agreements (one in the thirties and the other more recent, in the nineties), and the challenges present in the automatic migration of old documents' spelling to the current one. We present Bigorna, a toolkit for the classification of language variants, their comparison and the conversion of texts between different language versions. These tools are explained together with examples of migration issues. As Bigorna relies on a set of conversion rules, we also discuss how to infer conversion rules from a set of documents (texts with different ages). The document concludes with a brief evaluation of the conversion and classification tool results and their relevance in the current Portuguese language scenario. | Bigorna - a toolkit for orthography migration challenges |
d5908935 | In this paper, we present a stochastic language model for Japanese that uses dependency. The prediction unit in this model is an attribute of a "bunsetsu", represented by the product of the head of its content words and that of its function words. The relations between the attributes of "bunsetsu" are governed by a context-free grammar. Word sequences are predicted from the attribute using a word n-gram model, and the spelling of unknown words is predicted using a character n-gram model. This model is robust in that it can compute the probability of an arbitrary string, and complete in that it models everything from unknown words to dependency at the same time. | A Stochastic Language Model using Dependency and Its Improvement by Word Clustering |
d236898629 | ||
d220327192 | ||
d6158215 | This paper describes the features and the machine learning methods used by Dublin City University (DCU) and SYMANTEC for the WMT 2012 quality estimation task. Two sets of features are proposed: one constrained, i.e. respecting the data limitation suggested by the workshop organisers, and one unconstrained, i.e. using data or tools trained on data that was not provided by the workshop organisers. In total, more than 300 features were extracted and used to train classifiers in order to predict the translation quality of unseen data. In this paper, we focus on a subset of our feature set that we consider to be relatively novel: features based on a topic model built using the Latent Dirichlet Allocation approach, and features based on source and target language syntax extracted using part-of-speech (POS) taggers and parsers. We evaluate nine feature combinations using four classification-based and four regression-based machine learning techniques. | DCU-Symantec Submission for the WMT 2012 Quality Estimation Task |
d6534419 | This paper introduces a spelling correction system which integrates seamlessly with morphological analysis using a multi-tape formalism. Handling of various Semitic error problems is illustrated, with reference to Arabic and Syriac examples. The model handles errors in vocalisation, diacritics, phonetic syncopation and morphographemic idiosyncrasies, in addition to Damerau errors. A complementary correction strategy for morphologically sound but morphosyntactically ill-formed words is outlined. | A Morphographemic Model for Error Correction in Nonconcatenative Strings |
d235097688 | ||
d31255536 | On what basis are the input processing capabilities of Natural Language software judged? That is, what are the capabilities to be described and measured, and what are the standards against which we measure them? Rome Laboratory is currently supporting an effort to develop a concise terminology for describing the linguistic processing capabilities of Natural Language Systems, and a uniform methodology for appropriately applying the terminology. This methodology is meant to produce quantitative, objective profiles of NL system capabilities without requiring system adaptation to a new test domain or text corpus. The effort proposes to develop a repeatable procedure that produces consistent results for independent evaluators. | NEAL-MONTGOMERY NLP SYSTEM EVALUATION METHODOLOGY |
d20132278 | HUGHES TRAINABLE TEXT SKIMMER: MUC-3 TEST RESULTS AND ANALYSIS | |
d12023372 | The new frontier of computer assisted translation technology is the effective integration of statistical MT within the translation workflow. In this respect, the SMT ability of incrementally learning from the translations produced by users plays a central role. A still open problem is the evaluation of SMT systems that evolve over time. In this paper, we propose a new metric for assessing the quality of an adaptive MT component that is derived from the theory of learning curves: the percentage slope. | Evaluating the Learning Curve of Domain Adaptive Statistical Machine Translation Systems |
d10830035 | Within scientific institutes there exist many language resources. These resources are often quite specialized and relatively unknown. The current infrastructural initiatives try to tackle this issue by collecting metadata about the resources and establishing centers with stable repositories to ensure the availability of the resources. It would be beneficial if the researcher could, by means of a simple query, determine which resources and which centers contain information useful to his or her research, or even work on a set of distributed resources as a virtual corpus. In this article we propose an architecture for a distributed search environment allowing researchers to perform searches in a set of distributed language resources. | Federated Search: Towards a Common Search Infrastructure |
d18542663 | We present an automatic text generation system (ATG) developed for the generation of natural language text for automatically produced test items. This ATG has been developed to work with an automatic item generation system for analytical reasoning items for use in tests with high-stakes outcomes (such as college admissions decisions). As such, the development and implementation of this ATG is couched in the context and goals of automated item generation for educational assessment. | Automatic Item Text Generation in Educational Assessment |
d227231119 | ||
d7352979 | We describe the WHY2-ATLAS intelligent tutoring system for qualitative physics that interacts with students via natural language dialogue. We focus on the issue of analyzing and responding to multisentential explanations. We explore an approach that combines a statistical classifier, multiple semantic parsers and a formal reasoner for achieving a deeper understanding of these explanations in order to provide appropriate feedback on them. | Understanding Complex Natural Language Explanations in Tutorial Applications |
d201741543 | ||
d235097609 | First-order meta-learning algorithms have been widely used in practice to learn initial model parameters that can be quickly adapted to new tasks, due to their efficiency and effectiveness. However, existing studies find that the meta-learner can overfit to some specific adaptation when we have heterogeneous tasks, leading to significantly degraded performance. In Natural Language Processing (NLP) applications, datasets are often diverse and each task has its unique characteristics. Therefore, to address the overfitting issue when applying first-order meta-learning to NLP applications, we propose to reduce the variance of the gradient estimator used in task adaptation. To this end, we develop a variance-reduced first-order meta-learning algorithm. The core of our algorithm is to introduce a novel variance reduction term into the gradient estimation when performing the task adaptation. Experiments on two NLP applications, few-shot text classification and multi-domain dialog state tracking, demonstrate the superior performance of our proposed method. | Variance-reduced First-order Meta-learning for Natural Language Processing Tasks |
d248780374 | With the increase of deception and misinformation especially in social media, it has become crucial to be able to develop machine learning methods to automatically identify deceptive language. In this proposal, we identify key challenges underlying deception detection in cross-domain, cross-lingual and multi-modal settings. To improve cross-domain deception classification, we propose to use inter-domain distance to identify a suitable source domain for a given target domain. We propose to study the efficacy of multilingual classification models vs translation for cross-lingual deception classification. Finally, we propose to better understand multi-modal deception detection and explore methods to weight and combine information from multiple modalities to improve multi-modal deception classification. | Improving Cross-domain, Cross-lingual and Multi-modal Deception Detection |
d17950214 | The purpose of this work is to propose a new methodology to improve the Markov Cluster (MCL) algorithm, which is well known as an efficient way of graph clustering (Van Dongen, 2000). MCL, when applied to a graph of word associations, has the effect of producing concept areas in which words are grouped into similar topics or similar meanings as paradigms. However, since a word is determined to belong to only one cluster that represents a concept, Markov clusters cannot show the polysemy or semantic indetermination among the properties of natural language. Our Recurrent MCL (RMCL) allows us to create a virtual adjacency relationship among the Markov hard clusters and produce a downsized and intrinsically informative semantic network of word association data. We applied one of the RMCL algorithms (stepping-stone type) to a Japanese associative concept dictionary and obtained a satisfactory level of performance in refining the semantic network generated from MCL. | Recurrent Markov Cluster (RMCL) Algorithm for the Refinement of the Semantic Network |
d6367073 | In this paper we describe the new developments brought to the LRE Map, especially in terms of the user interface of the Web application, the searching of the information therein, and updates to the data model. Users now have several new search facilities, such as faceted search and fuzzy textual search; they can now register, log in and store search bookmarks for further perusal. Moreover, the data model now includes the notion of paper and author, which allows for linking the resources to the scientific works. Also, users can now visualise author-provided field values and normalised values. The normalisation has been manual and enables a better grouping of the entries. Last but not least, provisions have been made towards linked open data (LOD) aspects, by exposing an RDF access point allowing queries on the authors, papers and resources. Finally, a complete technological overhaul of the whole application has been undertaken, especially in terms of the Web infrastructure and of the text search backend. | New Developments in the LRE Map |
d12942027 | This paper introduces a new technique for phrase-structure parser analysis, categorizing possible treebank structures by integrating regular expressions into derivation trees. We analyze the performance of the Berkeley parser on OntoNotes WSJ and the English Web Treebank. This provides some insight into the evalb scores, and the problem of domain adaptation with the web data. We also analyze a "test-on-train" dataset, showing a wide variance in how the parser is generalizing from different structures in the training material. | Parser Evaluation Using Derivation Trees: A Complement to evalb |
d51918715 | Having consistent personalities is important for chatbots if we want them to be believable. Typically, many question-answer pairs are prepared by hand to achieve consistent responses; however, the creation of such pairs is costly. In this study, our goal is to collect a large number of question-answer pairs for a particular character by using role play-based question-answering, in which multiple users play the roles of certain characters and respond to questions from online users. Focusing on two famous characters, we conducted a large-scale experiment to collect question-answer pairs from real users. We evaluated the effectiveness of role play-based question-answering and found that, by using our proposed method, the collected pairs lead to good-quality chatbots that exhibit consistent personalities. | Role play-based question-answering by real users for building chatbots with consistent personalities |
d207976984 | ||
d1685866 | This paper deals with the prepositions which introduce an adjunct of duration, such as the English "for" and "in". On the basis of both cross-lingual and monolingual evidence these adjuncts are argued to be ambiguous between a floating and an anchored interpretation. To capture the distinction in formal terms I employ the framework of HEAD-DRIVEN PHRASE STRUCTURE GRAMMAR, enriched with a number of devices which are familiar from DISCOURSE REPRESENTATION THEORY. The resulting analysis is demonstrated to be relevant for machine translation, natural language generation and natural language understanding. | On the prepositions which introduce an adjunct of duration |
d174799877 | ||
d16662659 | The semantic orientation of terms is fundamental for sentiment analysis in sentence and document levels. Although some Chinese sentiment dictionaries are available, how to predict the orientation of terms automatically is still important. In this paper, we predict the semantic orientation of terms of E-HowNet. We extract many useful features from different sources to represent a Chinese term in E-HowNet, and use a supervised machine learning algorithm to predict its orientation. Our experimental results showed that the proposed approach can achieve 92.33% accuracy. | Predicting the Semantic Orientation of Terms in E-HowNet |
d34207373 | This paper discusses the main characteristics of a possible unified framework for specifying annotation schemes dedicated to the task of reference identification and linking on linguistic corpora. Built upon the foundation principles of the Linguistic Annotation Framework, the model (RAF, Reference Annotation Framework) is based on the combination of a simple meta-model (expressing markables and links between them) and a selection of data categories representing the information actually attached to each component of the meta-model. Based on the observation of existing practices we show how this model can be used in a variety of practical and theoretical configurations. | Towards a Reference Annotation Framework |
d96338 | Recently there has been a growing interest in infrastructures for sharing NLP tools and resources. This paper presents SiSSA, a project that aims at developing an infrastructure for prototyping, editing and validation of NLP application architectures. The system will provide the user with a graphical environment for (1) selecting the NLP activities relevant for the particular NLP task and the associated linguistic processors that execute them; (2) connecting new linguistic processors to SiSSA; (3) checking that the chosen architectural hypothesis corresponds to the functional specifications of the given application. | SiSSA -An Infrastructure for NLP Application Development |
d18859174 | Propp's influential structural analysis of fairy tales created a powerful schema for representing storylines in terms of character functions, which is straightforward to exploit in computational semantic analysis and procedural generation of stories of this genre. We tackle two resources that draw on the Proppian model -one formalizes it as a semantic markup scheme and the other as an ontology -both lacking linguistic phenomena explicitly represented in them. The need for integrating linguistic information into structured semantic resources is motivated by the emergence of suitable standards that facilitate this, and the benefits such joint representation would create for transdisciplinary research across Digital Humanities, Computational Linguistics, and Artificial Intelligence. | Integration of Linguistic Markup into Semantic Models of Folk Narratives: The Fairy Tale Use Case |
d214667652 | ||
d16527671 | The development and proliferation of social media services has led to the emergence of new approaches for surveying the population and addressing social issues. One popular application of social media data is health surveillance, e.g., predicting the outbreak of an epidemic by recognizing diseases and symptoms from text messages posted on social media platforms. In this paper, we propose a novel task that is crucial and generic from the viewpoint of health surveillance: estimating a subject (carrier) of a disease or symptom mentioned in a Japanese tweet. By designing an annotation guideline for labeling the subject of a disease/symptom in a tweet, we perform annotations on an existing corpus for public surveillance. In addition, we present a supervised approach for predicting the subject of a disease/symptom. The results of our experiments demonstrate the impact of subject identification on the effective detection of an episode of a disease/symptom. Moreover, the results suggest that our task is independent of the type of disease/symptom. | Who caught a cold? - Identifying the subject of a symptom |
d208144896 | ||
d14631942 | This paper examines the effect of paraphrasing noun-noun compounds in statistical machine translation from Swedish to English. The paraphrases are meant to elicit the underlying relationship that holds between the compounding nouns, with the use of prepositional and verb phrases. Though some types of noun-noun compounds are too lexicalized, or have some other qualities that make them unsuitable for paraphrasing, a set of roughly two hundred noun-noun compounds are identified, split and paraphrased to be used in experiments on statistical machine translation. The results indicate a slight improvement in translation of the paraphrased compound nouns, with a minor loss in overall BLEU score. | Paraphrasing Swedish Compound Nouns in Machine Translation |
d12081317 | The TJP system is presented, which participated in SemEval 2014 Task 9, Part A: Contextual Polarity Disambiguation. Our system is 'constrained', using only data provided by the organizers. The goal of this task is to identify whether marked contexts are positive, negative or neutral. Our system uses a support vector machine with extensive pre-processing, and achieved an overall F-score of 81.96%. | TJP: Identifying the Polarity of Tweets from Context |
d231697118 | ||
d156043304 | In this paper, we present a framework for quantitative characterization of code-switching patterns in multi-party conversations, which allows us to compare and contrast the socio-cultural and functional aspects of code-switching within a set of cultural contexts. Our method applies some of the proposed metrics for quantification of code-switching (Gamback and Das, 2016; Guzman et al., 2017) at the level of entire conversations, dyads and participants. We apply this technique to analyze the conversations from 18 recent Hindi movies. In the process, we are able to tease apart the use of code-switching as a device for establishing identity, socio-cultural contexts of the characters and the events in a movie. | Quantitative Characterization of Code Switching Patterns in Complex Multi-Party Conversations: A Case Study on Hindi Movie Scripts |
d14690913 | Systems that locate mentions of concepts from ontologies in free text are known as ontology concept recognition systems. This paper describes an approach to the evaluation of the workings of ontology concept recognition systems through use of a structured test suite, and presents a publicly available test suite for this purpose. It is built using the principles of descriptive linguistic field work and of software testing. More broadly, we also seek to investigate what general principles might inform the construction of such test suites. The test suite was found to be effective in identifying performance errors in an ontology concept recognition system: the system failed to recognize 2.1% of all canonical forms and recognized no non-canonical forms at all. Regarding the question of general principles of test suite construction, we compared this test suite to a named entity recognition test suite constructor. We found that they had twenty features in total and that seven were shared between the two models, suggesting that there is a core of feature types that may be applicable to test suite construction for any similar type of application. | Test suite design for ontology concept recognition systems |
d868133 | In this paper we describe Mephisto, our system for Task 9 of the SemEval-2 workshop. Our approach to this task is to develop a machine learning classifier which determines for each verb pair describing a noun compound which verb should be ranked higher. These classifications are then combined into one ranking. Our classifier uses features from the Google Ngram Corpus, WordNet and the provided training data. | UvT: Memory-based pairwise ranking of paraphrasing verbs |
d246702334 | Music forms a big part of our identity and as such, people with a shared preference for certain kinds of music may also share similar traits. In this study, we explore differences in the emotional language of fan communities of different music genres. In focusing on Reddit, we analyze the utterances on online community forums of different music genres using lexicon-based sentiment (emotion) analysis. Upon clustering Subreddit forums, we obtained two clusters: forums discussing genres like Rock, RnB, Country, and Jazz were found to have a higher abundance of positively valenced emotions and a lower amount of negatively valenced emotions. Likewise, Subreddits discussing genres like Metal, Punk, and Rap had a lower amount of positively valenced emotions and a higher abundance of negatively valenced emotion. We observed a high correlation between counts in lyrics of a genre and counts in a fan community for the emotions of anger, disgust, fear, and joy. In sum, we found differences in the emotional language of fan utterances by genre, and these could be partially attributed to the emotions contained in the lyrics. | Are Metal Fans Angrier than Jazz Fans? A Genre-Wise Exploration of the Emotional Language of Music Listeners on Reddit |
d9330805 | We present a system for the generation of natural language instructions, as are found in instruction manuals for household appliances, that is able to automatically generate safety warnings to the user at appropriate points. Situations in which accidents and injuries to the user can occur are considered at every step in the planning of the normal operation of the device, and these "injury sub-plans" are then used to instruct the user to avoid these situations. | GENERATING WARNING INSTRUCTIONS BY PLANNING ACCIDENTS AND INJURIES |
d5221425 | SCFG-based statistical MT models have proven effective for modelling syntactic aspects of translation, but still suffer problems of overgeneration. The production of German verbal complexes is particularly challenging since highly discontiguous constructions must be formed consistently, often from multiple independent rules. We extend a strong SCFG-based string-to-tree model to incorporate a rich feature-structure based representation of German verbal complex types and compare verbal complex production against that of the reference translations, finding a high baseline rate of error. By developing model features that use source-side information to influence the production of verbal complexes we are able to substantially improve the type accuracy as compared to the reference. | Using Feature Structures to Improve Verb Translation in English-to-German Statistical MT |
d15736291 | It has been known that the syllable structures in Korean are different from those in English. The goal of this paper is to provide computational implementations for Korean syllable structures in the typed feature structure formalism. The system that we adopted in this paper is the Linguistic Knowledge Building system. We first implemented the type hierarchies and AVMs for segment and suprasegment. The types consonant and vowel were included under the type segment, and the various different types were included under the type suprasegment for syllable structures. Then, we provided the rules for syllable structures. Unlike English syllabification, it has been known that onset and nucleus form a unit in Korean, which is called core. Accordingly, we provided the rules for onset, nucleus, and coda; then, the rules for core and syllable to combine segments into syllable structures. This paper also employed the type nf to solve the ambiguity problems. | Implementation of Korean Syllable Structures in the Typed Feature Structure Formalism |
d198967887 | ||
d17610957 | We use a Combinatory Categorial Grammar (CCG) parser with a structured perceptron learner to address Shared Task 6 of SemEval-2014, Supervised Semantic Parsing of Robotic Spatial Commands. Our system reaches an accuracy of 79% ignoring spatial context and 87% using the spatial planner, showing that CCG can successfully be applied to the task. | RoBox: CCG with Structured Perceptron for Supervised Semantic Parsing of Robotic Spatial Commands |
d6324935 | Film scripts provide a means of examining generalized western social perceptions of accepted human behavior. In particular, we focus on how dialogue in films describes gender, identifying linguistic and structural differences in speech for men and women and in same- and different-gendered pairs. Using the Cornell Movie-Dialogs Corpus (Danescu-Niculescu-Mizil et al., 2012a), we identify significant linguistic and structural features of dialogue that differentiate genders in conversation and analyze how those effects relate to existing literature on gender in film. | Gender-Distinguishing Features in Film Dialogue |
d10753351 | This paper describes a set of experiments on two sub-tasks of Quality Estimation of Machine Translation (MT) output. Sentence-level ranking of alternative MT outputs is done with pairwise classifiers using Logistic Regression with black-box features originating from PCFG parsing, language models, and various counts. Post-editing time prediction uses regression models, additionally fed with new, more elaborate features from the Statistical MT decoding process. These seem to be better indicators of post-editing time than black-box features. Prior to training the models, feature scoring with ReliefF and Information Gain is used to choose feature sets of manageable size and avoid excessive computational complexity. | Machine learning methods for comparative and time-oriented Quality Estimation of Machine Translation output
d218973716 | ||
d10936879 | At a recent NSF workshop on speech understanding, Charles Wayne gave a presentation on Darpa's program in Speech and Language research. Towards the end of his presentation, he enlightened all of us with a brief outline of a possible new technology development program: High Performance Multi-lingual Speech and Text Processing. This is a very exciting and timely initiative, if it comes to that, as many of us in the federal government, NSF included, are preparing to launch new efforts to implement the President's High Performance Computing and Communications (HPCC) program beginning in this fiscal year. The HPCC program is a real challenge and opportunity for all of us. Many in the research community, however, perceive "High Performance" as merely providing supercomputer cycles and services or developing new computer architectures and software for scientific or engineering research. As such, they feel, language and speech research will have a very limited role to play. I believe this perception is incorrect and unhealthy. In fact, language and speech is one of the most critical elements of the HPCC program, for at least two reasons. First, one of HPCC's goals is to address "Grand Challenge" problems. These are the fundamental problems in science and engineering, with broad economic and scientific impact, whose solution could be advanced by applying high performance computing techniques and resources. In this context, "language and speech" has been identified as one of those grand challenge problems, ranking it among such problems of national concern as weather prediction, drug design, superconductivity, the human genome, and transportation systems [1,2]. The implication here is that computational speech and language is a critical problem for us to solve and its solution could be advanced by further research in high performance computing. Another and perhaps more important reason, in my view, is that speech and language can help refine and drive HPCC research, ranging from architectures, software, and algorithms to future novel applications as we move towards a knowledge-intensive society. Here again, Charles Wayne's list of possible objectives for future speech and language research tells us why: rapid, effortless human-machine interaction via speech, text, and other modalities; unlimited-vocabulary speech recognition; natural speech synthesis; real-time translation of speech and language; and so on. Few other topics in the computing and communications field offer an equally rich set of research opportunities, with measurable goals and potential benefits. At NSF, we are also beginning the implementation of the HPCC program. One of the components is a new "Grand Challenge" Applications Groups program focusing on cross-disciplinary research to address problems such as computational speech and language. Other similar HPCC initiatives may develop further, but their strategies and contents are likely to be driven more by the visions of the research community. Since HPCC is a multi-agency program, it will also provide a rare opportunity for Darpa and NSF to coordinate their goals and strategies to support research in speech and language. | Language and Speech Meets the HPCC Grand Challenge
d1661089 | NLP systems for tasks such as question answering and information extraction typically rely on statistical parsers. But the efficacy of such parsers can be surprisingly low, particularly for sentences drawn from heterogeneous corpora such as the Web. We have observed that incorrect parses often result in wildly implausible semantic interpretations of sentences, which can be detected automatically using semantic information obtained from the Web. Based on this observation, we introduce Web-based semantic filtering, a novel, domain-independent method for automatically detecting and discarding incorrect parses. We measure the effectiveness of our filtering system, called WOODWARD, on two test collections. On a set of TREC questions, it reduces error by 67%. On a set of more complex Penn Treebank sentences, the reduction in error rate was 20%. | Detecting Parser Errors Using Web-based Semantic Filters
d28742266 | This paper describes changing needs among the communities that exploit language resources and recent LDC activities and publications that support those needs by providing greater volumes of data and associated resources in a growing inventory of languages with ever more sophisticated annotation. Specifically, it covers the evolving role of data centers with specific emphasis on the LDC, the publications released by the LDC in the two years since our last report and the sponsored research programs that provide LRs initially to participants in those programs but eventually to the larger HLT research communities and beyond. | Adapting to Trends in Language Resource Development: A Progress Report on LDC Activities |
d12325165 | We introduce a dialogue task between a virtual patient and a doctor where the dialogue system, playing the patient part in a simulated consultation, must reconcile a specialized level, to understand what the doctor says, and a lay level, to output realistic patient-language utterances. This increases the challenges in the analysis and generation phases of the dialogue. This paper proposes methods to manage linguistic and terminological variation in that situation and illustrates how they help produce realistic dialogues. Our system makes use of lexical resources for processing synonyms, inflectional and derivational variants, and pronoun/verb agreement. Specialized knowledge is used for processing medical roots and affixes, ontological relations and concept mapping, and for generating lay variants of terms according to the patient's non-expert discourse. We report the results of an evaluation of the non-contextual analysis module, which supports the Spoken Language Understanding step, after 11 users interacted with the system. The annotation of domain entities obtained 91.8% Precision, 82.5% Recall, 86.9% F-measure, 19.0% Slot Error Rate, and 32.9% Sentence Error Rate. | Managing Linguistic and Terminological Variation in a Medical Dialogue System
d13507979 | Broad-coverage repositories of semantic relations between verbs could benefit many NLP tasks. We present a semi-automatic method for extracting fine-grained semantic relations between verbs. We detect similarity, strength, antonymy, enablement, and temporal happens-before relations between pairs of strongly associated verbs using lexico-syntactic patterns over the Web. On a set of 29,165 strongly associated verb pairs, our extraction algorithm yielded 65.5% accuracy. Analysis of error types shows that we achieved 75% accuracy on the strength relation. We provide the resource, called VERBOCEAN, for download at | VERBOCEAN: Mining the Web for Fine-Grained Semantic Verb Relations
d17553381 | A script is a type of knowledge representation in artificial intelligence (AI). This paper presents two methods for synthetically using collected scripts for story generation. The first method recursively generates long sequences of events and the second creates script networks. Although related studies generally use one or more scripts for story generation, this research synthetically uses many scripts to flexibly generate a diverse narrative. | Using Synthetically Collected Scripts for Story Generation |
d219302517 | ||
d44152919 | With the rise of e-commerce, people have become accustomed to writing reviews after receiving their goods. These comments are so important that a bad review can have a direct impact on other customers' purchases. Moreover, the abundant information within user reviews is very useful for extracting user preferences and item properties. In this paper, we investigate how to effectively utilize review information for recommender systems. The proposed model, named LSTM-Topic Matrix Factorization (LTMF), integrates both an LSTM and topic modeling for review understanding. In experiments on the popular Amazon review dataset, our LTMF model outperforms the previously proposed HFT and ConvMF models in rating prediction. Furthermore, LTMF shows a better ability at topic clustering than traditional topic-model-based methods, which implies that integrating information from deep learning and topic modeling is a meaningful way to gain a better understanding of reviews. | Combining Deep Learning and Topic Modeling for Review Understanding in Context-Aware Recommendation
d5611528 | Traditional approaches to information retrieval from texts depend on term frequency. A shortcoming of these schemes, which consider only occurrences of terms in a document, is that they are limited in extracting semantically exact indexes that represent the semantic content of a document. Moreover, a single word can represent more than one meaning, and word sense ambiguity also affects system behavior. To address this issue, we propose a new concept-extraction strategy that extracts the concept of a word and determines the semantic importance of concepts in sentences by analyzing the conceptual structures of the sentences. In this approach, a conceptual vector space model using auto-threshold detection is proposed to process the concepts, and a cluster searching model is also designed. The auto-threshold detection method helps the model obtain optimal settings of retrieval parameters automatically. An experiment on the TREC-6 collection shows that the proposed method outperforms two other information retrieval (IR) methods based on term frequency (TF), especially for lower-ranked documents. | An Information Retrieval Model Based On Word Concept