_id | text | title |
|---|---|---|
d17475367 | With recent developments in web technologies, the percentage of web content in the Hindi language is growing at lightning speed. Opinion classification research has gained tremendous momentum in recent times, mostly for the English language. However, there has been little work in this area for Indian languages. There is a need to analyse Hindi language content and gain insight into the opinions expressed by people and various communities. In this paper, a method is proposed to increase the coverage of the Hindi SentiWordNet for better classification results. In addition, the impact of negation and discourse rules is investigated for Hindi sentiment analysis. The proposed algorithm achieves an accuracy of 82.89% for positive reviews and 76.59% for negative reviews, and an overall accuracy of 80.21%. | Sentiment Analysis of Hindi Review based on Negation and Discourse Relation |
d62345967 | Translation and Communication: Translating and the Computer 6 | |
d11317625 | We propose a principled probabilistic framework which uses trees over the vocabulary to capture similarities among terms in an information retrieval setting. This allows the retrieval of documents based not just on occurrences of specific query terms, but also on similarities between terms (an effect similar to query expansion). Additionally, our principled generative model exhibits an effect similar to inverse document frequency. We give encouraging experimental evidence of the superiority of the hierarchical Dirichlet tree compared to standard baselines. | Hierarchical Dirichlet Trees for Information Retrieval |
d199373173 | ||
d11994194 | The development of the evaluation of domain-specific cross-language information retrieval (CLIR) is shown in the context of the Cross-Language Evaluation Forum (CLEF) campaigns from 2000 to 2003. The pre-conditions and the usable data and additionally available instruments are described. The main goals of this task of CLEF are to allow the evaluation of Cross-Language Information Retrieval (CLIR) systems in the context of structured data and in a domain-specific area (not in the more general context of floating, journalistic texts), and with the additional possibility to make use of thesauri which had been used for intellectual indexing of the documents and are provided with the data. The parallel German-English GIRT4 corpus is described and some of the results of the CLEF 2004 campaign are discussed. | Evaluation of Cross-Language Information Retrieval Using the Domain-Specific GIRT Data as Parallel German-English Corpus: Domain-Specific CLIR in the Context of CLEF |
d207908258 | ||
d4358375 | In this paper a multiclassifier based approach is presented for a word sense disambiguation (WSD) problem. A vector representation is used for training and testing cases and the Singular Value Decomposition (SVD) technique is applied to reduce the dimension of the representation. The approach we present consists in creating a set of k-NN classifiers and combining the predictions generated in order to give a final word sense prediction for each case to be classified. The combination is done by applying a Bayesian voting scheme. The approach has been applied to a database of 100 words made available by the lexical sample WSD subtask of SemEval-2007 (task 17) organizers. Each of the words was considered an independent classification problem. A methodological parameter tuning phase was applied in order to optimize parameter setting for each word. Results achieved are among the best and make the approach encouraging to apply to other WSD tasks. | A Multiclassifier based Approach for Word Sense Disambiguation using Singular Value Decomposition |
d18637494 | We review a number of different 'algebraic' perspectives on TAG and STAG in the framework of interpreted regular tree grammars (IRTGs). We then use this framework to derive a new parsing algorithm for TAGs, based on two algebras that describe strings and derived trees. Our algorithm is extremely modular, and can easily be adapted to the synchronous case. | Decomposing TAG Algorithms Using Simple Algebraizations |
d251464085 | This article presents the first results of the CLARIAH-funded project 'Patterns in Translation: Using Colibri Core for the Syriac Bible' (PaTraCoSy). This project seeks to use Colibri Core to detect translation patterns in the Peshitta, the Syriac translation of the Hebrew Bible. We first describe how we constructed word and phrase alignment between these two texts. This step is necessary to successfully implement the functionalities of Colibri Core. After this, we further describe our first investigations with the software. We describe how we use the built-in pattern modeller to detect n-gram and skipgram patterns in both Hebrew and Syriac texts. Colibri Core does not allow the creation of a bilingual model, which is why we compare the separate models. After a presentation of a few general insights on the overall translation behaviour of the Peshitta, we delve deeper into the concrete patterns we can detect by the n-gram/skipgram analysis. We provide multiple examples from the book of Genesis, a book which has been treated broadly in scholarly research into the Syriac translation, but which also appears to have interesting features based on our Colibri Core research. | From Pattern to Interpretation. Using Colibri Core to Detect Translation Patterns in the Peshitta |
d2388157 | Information about the lexical capacity of the speakers of a specific language is indispensable for empirical and experimental studies on the human behavior of using speech as a communicative means. Unlike the increasing number of gigantic text- or web-based corpora that have been developed in recent decades, publicly distributed spoken resources, especially conversations, are few in number. This article studies the lexical coverage of a corpus of Taiwan Mandarin conversations recorded in three speaking scenarios. A wordlist based on this corpus has been prepared and provides information about frequency counts of words and parts of speech processed by an automatic system. Manual post-editing of the results was performed to ensure the usability and reliability of the wordlist. Syllable information was derived by automatically converting the Chinese characters to a conventional romanization scheme, followed by manual correction of conversion errors and disambiguation of homographs. As a result, the wordlist contains 405,435 ordinary words and 57,696 instances of discourse particles, markers, fillers, and feedback words. Lexical coverage in Taiwan Mandarin conversation is revealed and is compared with a balanced corpus of texts in terms of words, syllables, and word categories. | Lexical Coverage in Taiwan Mandarin Conversation |
d14538981 | Language identification is a simple problem that becomes much more difficult when its usual assumptions are broken. In this paper we consider the task of classifying short segments of text in closely-related languages for the Discriminating Similar Languages shared task, which is broken into six subtasks, (A) Bosnian, Croatian, and Serbian, (B) Indonesian and Malay, (C) Czech and Slovak, (D) Brazilian and European Portuguese, (E) Argentinian and Peninsular Spanish, and (F) American and British English. We consider a number of different methods to boost classification performance, such as feature selection and data filtering, but we ultimately find that a simple naïve Bayes classifier using character and word n-gram features is a strong baseline that is difficult to improve on, achieving an average accuracy of 0.8746 across the six tasks. | Experiments in Sentence Language Identification with Groups of Similar Languages |
d39618232 | De in Chinese relative clauses is commonly analyzed as a complementizer signifying a relative clause. In this paper we argue that De has two roles in RCs. Besides being a relativization marker, which is its basic function, De can also mark the realis state of the event expressed by the RC. Being a realis state marker, De needs to bind an event variable, which can be supplied by the VP in the RC. When some other operator in RCs competes with De for such an event variable, the variable will first go into that operator's interpretation and De thus fails to bind an event variable. In such a case, the RC cannot express a realis event unless Le or Guo occurs. | The function of DE in Chinese RCs |
d15422102 | In this paper we present a maximum entropy Word Sense Disambiguation system we developed which performs competitively on SENSEVAL-2 test data for English verbs. We demonstrate that using richer linguistic contextual features significantly improves tagging accuracy, and compare the system's performance with human annotator performance in light of both fine-grained and coarse-grained sense distinctions made by the sense inventory. | Combining Contextual Features for Word Sense Disambiguation |
d9905604 | We present a fully unsupervised method for automated construction of WordNets based upon recent advances in distributional representations of sentences and word-senses combined with readily available machine translation tools. The approach requires very few linguistic resources and is thus extensible to multiple target languages. To evaluate our method we construct two 600-word test sets for word-to-synset matching in French and Russian using native speakers and evaluate the performance of our method along with several other recent approaches. Our method exceeds the best language-specific and multi-lingual automated WordNets in F-score for both languages. The databases we construct for French and Russian, both languages without large publicly available manually constructed WordNets, will be publicly released along with the test sets. | Automated WordNet Construction Using Word Embeddings |
d226283822 | ||
d102350407 | English verbs have multiple forms. For instance, talk may also appear as talks, talked or talking, depending on the context. The NLP task of lemmatization seeks to map these diverse forms back to a canonical one, known as the lemma. We present a simple joint neural model for lemmatization and morphological tagging that achieves state-of-the-art results on 20 languages from the Universal Dependencies corpora. Our paper describes the model in addition to training and decoding procedures. Error analysis indicates that joint morphological tagging and lemmatization is especially helpful in low-resource lemmatization and languages that display a larger degree of morphological complexity. Code and pre-trained models are available at https://sigmorphon.github.io/sharedtasks/2019/task2/. | A Simple Joint Model for Improved Contextual Neural Lemmatization |
d35284925 | The distinction between underlying and surface structure is more or less well established in contemporary grammatical analysis. The form and depth of the underlying structure and its relationship to observable language reality are, however, permanently in the focus of linguistic disputes. In the standard generative transformational model (Chomsky, 1965) the underlying structure was moderately deep. It reflected the surface structure of English and catered for semantic distinctions mainly through the inherent semantic features of the lexicon. The semantic component, to which the derived sentences were being sent for semantic processing, was not well defined. The generative semantics models deepened the underlying structure and imposed a considerable gap between the latter and the surface structure. This gap was to be bridged by transformations, which with McCawley included very extensive lexical changes. The functions of the underlying participants in the action or state came to differ significantly from those of the surface nominal constituents. In Fillmore's model (Fillmore, 1969) the subject of the surface structure correlated not only with the underlying "subject" or "actor" but also with an underlying patient, experiencer, locative... To provide for this correlation, Fillmore set up rules for systematic subjectivization of non-agentive "underlying cases". What's more, | THE RELATIONSHIP OF UNDERLYING AND SURFACE STRUCTURE IN GENERATIVE DESCRIPTION OF LANGUAGE |
d219300439 | ||
d6989088 | Event extraction generally suffers from the data sparseness problem. In this paper, we address this problem by utilizing the labeled data from two different languages. As a preliminary study, we mainly focus on the subtask of trigger type determination in event extraction. To make the training data in different languages help each other, we propose a uniform text representation with bilingual features to represent the samples and handle the difficulty of locating the triggers in the translated text from both monolingual and bilingual perspectives. Empirical studies demonstrate the effectiveness of the proposed approach to bilingual classification on trigger type determination. | Bilingual Event Extraction: a Case Study on Trigger Type Determination |
d250390962 | Writing the conclusion section of radiology reports is essential for communicating the radiology findings and their assessment to physicians in a condensed form. In this work, we employ a transformer-based Seq2Seq model for generating the conclusion section of German radiology reports. The model is initialized with the pretrained parameters of a German BERT model and fine-tuned in our downstream task on our domain data. We propose two strategies to improve the factual correctness of the model. In the first method, next to the abstractive learning objective, we introduce an extraction learning objective to train the decoder in the model to both generate one summary sequence and extract the key findings from the source input. The second approach is to integrate the pointer mechanism into the transformer-based Seq2Seq model. The pointer network helps the Seq2Seq model to choose between generating tokens from the vocabulary or copying parts from the source input during generation. The results of the automatic and human evaluations show that the enhanced Seq2Seq model is capable of generating human-like radiology conclusions and that the improved models effectively reduce the factual errors in the generations despite the small amount of training data. | Fine-tuning BERT Models for Summarizing German Radiology Findings |
d259376587 | In this paper, we describe our system for SemEval-2023 Task 7: Multi-evidence Natural Language Inference for Clinical Trial Data. Given a CTR premise and a statement, this task involves 2 sub-tasks: (i) identifying the inference relation between CTR-statement pairs (Task 1: Textual Entailment), and (ii) extracting a set of supporting facts from the premise to justify the label predicted in Task 1 (Task 2: Evidence Retrieval). We adopt an explanation-driven NLI approach to tackle the tasks. Given a statement to verify, the idea is to first identify relevant evidence from the target CTR(s), perform evidence-level inferences and then ensemble them to arrive at the final inference. We have experimented with various BERT-based models and T5 models. Our final model uses T5-base, which achieved better performance compared to BERT models. In summary, our system achieves an F1 score of 70.1% for Task 1 and 80.2% for Task 2. We ranked 8th under both tasks. Moreover, ours was one of the 5 systems that ranked within the Top 10 under both tasks. | I2R at SemEval-2023 Task 7: Explanations-driven Ensemble Approach for Natural Language Inference over Clinical Trial Data |
d259376602 | SemEval 2023 Task 9, Multilingual Tweet Intimacy Analysis (Pei et al., 2023), is a shared task for analysing the intimacy of tweets posted on Twitter. The dataset for this task, provided by Pei and Jurgens (2020), who are part of the task organizers, consists of tweets in various languages, such as Chinese, English, French, Italian, Portuguese, and Spanish. The test dataset also included unseen languages such as Hindi, Arabic, Dutch and Korean. The tweets may or may not be related to intimacy. The task was to score the intimacy of each tweet in the range of 0-5, based on its level of intimacy, using the provided dataset of tweets along with their scores. The intimacy score is used to indicate whether a tweet is intimate or not. Our team participated in the task and proposed using the RoBERTa model (Liu et al., 2019) to analyse the intimacy of the tweets. | CKingCoder at SemEval-2023 Task 9: Multilingual Tweet Intimacy Analysis |
d259376720 | Animate entities in narrative comics stories are expressed through a number of visual representations across panels. Identifying these entities is necessary for recognizing characters, analysing narrative affordances unique to comics, and integrating these with linguistic reference annotation; however, an annotation process for animate entity identification has not received adequate attention. This research explores methods for identifying animate entities visually in comics using annotation experiments. Two rounds of inter-annotator agreement experiments are run: the first asks annotators to outline areas on comic pages using a Polygon segmentation tool, and the second prompts annotators to assign each outlined entity's animacy type to derive a quantitative measure of agreement. The results of the first experiment show that Polygon-based outlines successfully produce a qualitative measure of agreement; the second experiment supports that animacy status is best conceptualised as a graded, rather than binary, concept. | Identifying Visual Depictions of Animate Entities in Narrative Comics: An Annotation Study |
d252847572 | As an important component of task-oriented dialogue systems, dialogue state tracking is designed to track the dialogue state through the conversations between users and systems. Multi-domain dialogue state tracking is a challenging task, in which the correlation among different domains and slots needs to be considered. Recently, slot self-attention has been proposed to provide a data-driven manner to handle it. However, a full-support slot self-attention may involve redundant information interchange. In this paper, we propose a top-k attention-based slot self-attention for multi-domain dialogue state tracking. In the slot self-attention layers, we force each slot to involve information from the other k prominent slots and mask the rest out. The experimental results on two mainstream multi-domain task-oriented dialogue datasets, MultiWOZ 2.0 and MultiWOZ 2.4, show that our proposed approach is effective in improving the performance of multi-domain dialogue state tracking. We also find that the best result is obtained when each slot interchanges information with only a few slots. | Multi-Domain Dialogue State Tracking with Top-K Slot Self Attention |
d396735 | We present a machine learning approach for the task of ranking previously answered questions in a question repository with respect to their relevance to a new, unanswered reference question. The ranking model is trained on a collection of question groups manually annotated with a partial order relation reflecting the relative utility of questions inside each group. Based on a set of meaning and structure aware features, the new ranking model is able to substantially outperform more straightforward, unsupervised similarity measures. | Learning the Relative Usefulness of Questions in Community QA |
d236460053 | Aspect-based sentiment analysis (ABSA) has received increasing attention recently. Most existing work tackles ABSA in a discriminative manner, designing various task-specific classification networks for the prediction. Despite their effectiveness, these methods ignore the rich label semantics in ABSA problems and require extensive task-specific designs. In this paper, we propose to tackle various ABSA tasks in a unified generative framework. Two types of paradigms, namely annotation-style and extraction-style modeling, are designed to enable the training process by formulating each ABSA task as a text generation problem. We conduct experiments on four ABSA tasks across multiple benchmark datasets where our proposed generative approach achieves new state-of-the-art results in almost all cases. This also validates the strong generality of the proposed framework, which can be easily adapted to arbitrary ABSA tasks without additional task-specific model design. | Towards Generative Aspect-Based Sentiment Analysis |
d221097974 | The present study aims to compare three systems: a generic statistical machine translation system, a generic neural machine translation system and a tailored-NMT system, focusing on the English to Greek language pair. The comparison is carried out following a mixed-methods approach, i.e. automatic metrics, as well as side-by-side ranking, adequacy and fluency rating, measurement of actual post-editing effort and human error analysis performed by 16 postgraduate Translation students. The findings reveal a higher score for both the generic NMT and the tailored-NMT outputs as regards automatic metrics and human evaluation metrics, with the tailored-NMT output faring even better than the generic NMT output. | Machine Translation Quality: A comparative evaluation of SMT, NMT and tailored-NMT outputs |
d10228521 | Encyclopedias, which describe general/technical terms, are valuable language resources (LRs). As with other types of LRs relying on human introspection and supervision, constructing encyclopedias is quite expensive. To resolve this problem, we automatically produced a large-scale encyclopedic corpus over the World Wide Web. We first searched the Web for pages containing a term in question. Then we used linguistic patterns and HTML structures to extract text fragments describing the term. Finally, we organized extracted term descriptions based on domains. The resultant corpus contains approximately 100,000 terms. We also evaluated the quality of 2,000 test terms, and found that correct descriptions were obtained for 65% of test terms. | Producing a Large-scale Encyclopedic Corpus over the Web |
d6756958 | This paper describes our syllable-based phrase transliteration system for the NEWS 2012 shared task on the English-Chinese track and its back-transliteration track. Grapheme-based transliteration maps the character(s) on the source side to the target character(s) directly. However, character-based segmentation on the English side will cause ambiguity in the alignment step. In this paper we utilize a phrase-based model to solve machine transliteration with a mapping between Chinese characters and English syllables rather than English characters. Two heuristic rule-based syllable segmentation algorithms are applied. This transliteration model also incorporates three phonetic features to enhance its discriminative ability for phrases. The primary system achieved 0.330 on Chinese-English and 0.177 on English-Chinese in terms of top-1 accuracy. | Syllable-based Machine Transliteration with Extra Phrase Features |
d8297210 | This paper outlines an approach to the unsupervised construction from unannotated parallel corpora of a lexical semantic resource akin to WordNet. The paper also describes how this resource can be used to add lexical semantic tags to the text corpus at hand. Finally, we discuss the possibility to add some of the predicates typical for WordNet to its automatically constructed multilingual version, and the ways in which the success of this approach can be measured. | Unsupervised Construction of a Multilingual WordNet from Parallel Corpora |
d16942880 | This paper explores the effect of contrastive focus on the binding possibilities of the Korean anaphor caki ('self'). Contrastive focus on caki has a special effect in that it improves the acceptability of an atypical binding pattern. To account for this fact, I propose (i) that caki with contrastive focus needs to be treated as an exempted anaphor in terms of Sag (1992, 1994), (ii) that the binding possibilities of the exempted caki are determined by a discourse constraint, not by a syntactic constraint, and (iii) that the discourse constraint needs to include the familiarity presupposition in Heim (1982) and linear order. | Contrastive Focus and Exempted Anaphor Caki in Korean |
d9491739 | The development of FrameNet, a large database of semantically annotated sentences, has primed research into statistical methods for semantic tagging. We advance previous work by adopting a Maximum Entropy approach and by using previous tag information to find the highest probability tag sequence for a given sentence. Further we examine the use of sentence level syntactic pattern features to increase performance. We analyze our strategy on both human annotated and automatically identified frame elements, and compare performance to previous work on identical test data. Experiments indicate a statistically significant improvement (p<0.01) of over 6%. | Maximum Entropy Models for FrameNet Classification |
d9787694 | In this paper we investigate the use of machine learning techniques to classify a wide range of non-sentential utterance types in dialogue, a necessary first step in the interpretation of such fragments. We train different learners on a set of contextual features that can be extracted from PoS information. Our results achieve an 87% weighted f-score, a 25% improvement over a simple rule-based algorithm baseline. | Using Machine Learning for Non-Sentential Utterance Classification |
d233199 | In this paper, we propose an automatic quantitative expansion method for a sentence set that contains sentences of the same meaning (called an equivalent sentence set). This task is regarded as paraphrasing. The features of our method are: 1) The paraphrasing rules are dynamically acquired by Hierarchical Phrase Alignment from the equivalent sentence set, and 2) A large equivalent sentence set is generated by substituting source syntactic structures. Our experiments show that 561 sentences on average are correctly generated from 8.48 equivalent sentences. | Automatic Expansion of Equivalent Sentence Set Based on Syntactic Substitution |
d259833801 | This work introduces a novel three-class annotation scheme for text-based dementia classification in patients, based on their recorded visit interactions. Multiple models were developed utilising BERT, RoBERTa and DistilBERT. Two approaches were employed to improve the representation of dementia samples: oversampling the underrepresented data points in the original Pitt dataset and combining the Pitt with the Holland and Kempler datasets. The DistilBERT models trained on either an oversampled Pitt dataset or the combined dataset performed best in classifying the dementia class. Specifically, the model trained on the oversampled Pitt dataset and the one trained on the combined dataset obtained state-of-the-art performance with 98.8% overall accuracy and 98.6% macro-averaged F1-score, respectively. The models' outputs were manually inspected through saliency highlighting, using Local Interpretable Model-agnostic Explanations (LIME), to provide a better understanding of their predictions. | Training Models on Oversampled Data and a Novel Multi-class Annotation Scheme for Dementia Detection |
d252763321 | ||
d7547384 | An appealing methodology for natural language generation in dialogue systems is to train the system to match a target corpus. We show how users can provide such a corpus as a natural side effect of interacting with a prototype system, when the system uses mixed-initiative interaction and a reversible architecture to cover a domain familiar to users. We experiment with integrated problems of sentence planning and realization in a referential communication task. Our model learns general and context-sensitive patterns to choose descriptive content, vocabulary, syntax and function words, and improves string match with user utterances to 85.8% from a handcrafted baseline of 54.4%. | Training an Integrated Sentence Planner on User Dialogue |
d7106458 | The Federal Open Market Committee (FOMC) is a committee within the central banking system of the US and decides on the target rate. Analyzing the positions of its members is a challenge even for experts with a deep knowledge of the financial domain. In our work, we aim at automatically determining opinion groups in transcriptions of the FOMC discussions. We face two main challenges: first, the positions of the members are more complex than in common opinion mining tasks because they have more dimensions than pro or contra. Second, they cannot be learned as there is no labeled data available. We address the challenge using graph clustering methods to group the members, incorporating the similarity of their speeches as well as the agreement and disagreement they show towards each other in discussions. We show that our approach produces stable opinion clusters throughout successive meetings and correlates with positions of speakers on a dove-hawk scale estimated by experts. | Lost in Discussion? Tracking Opinion Groups in Complex Political Discussions by the Example of the FOMC Meeting Transcriptions |
d4013259 | The project described in this paper, which is still in the preliminary phase, concerns the design and implementation of a computational lexicon for Maltese, a language very much in current use but so far lacking most of the infrastructure required for NLP. One of the main characteristics of Maltese, a source of many difficulties, is that it is an amalgam of different language types (chiefly Semitic and Romance), as illustrated in the first part of the paper. The latter part of the paper describes our general approach to the problem of constructing the lexicon. | Maltilex: A Computational Lexicon for Maltese |
d11172004 | Searching documents that are similar to a query document is an important component in modern information retrieval. Some existing hashing methods can be used for efficient document similarity search. However, unsupervised hashing methods cannot incorporate prior knowledge for better hashing. Although some supervised hashing methods can derive effective hash functions from prior knowledge, they are either computationally expensive or poorly discriminative. This paper proposes a novel (semi-)supervised hashing method named Semi-Supervised SimHash (S3H) for high-dimensional data similarity search. The basic idea of S3H is to learn the optimal feature weights from prior knowledge to relocate the data such that similar data have similar hash codes. We evaluate our method with several state-of-the-art methods on two large datasets. All the results show that our method gets the best performance. | Semi-Supervised SimHash for Efficient Document Similarity Search |
d44172406 | This paper presents our submissions to SemEval 2018 Task 12: the Argument Reasoning Comprehension Task. We investigate an end-to-end attention-based neural network to represent the two lexically close candidate warrants. On the one hand, we extract their different parts as attention vectors to obtain distinguishable representations. On the other hand, we use their surrounds (i.e., claim, reason, debate context) as other attention vectors to get contextual representations, which work as final clues to select the correct warrant. Our model achieves 60.4% accuracy and ranks 3rd among 22 participating systems. | ECNU at SemEval-2018 Task 12: An End-to-End Attention-based Neural Network for the Argument Reasoning Comprehension Task |
d15539230 | In this paper, we study the impact of using a domain-specific bilingual lexicon on the performance of an Example-Based Machine Translation system. We conducted experiments for the English-French language pair on in-domain texts from Europarl (European Parliament Proceedings) and out-of-domain texts from Emea (European Medicines Agency Documents), and we compared the results of the Example-Based Machine Translation system against those of the Statistical Machine Translation system Moses. The obtained results revealed that adding a domain-specific bilingual lexicon (extracted from a parallel domain-specific corpus) to the general-purpose bilingual lexicon of the Example-Based Machine Translation system improves translation quality for both in-domain as well as out-of-domain texts, and the Example-Based Machine Translation system outperforms Moses when texts to translate are related to the specific domain. | Improving the Performance of an Example-Based Machine Translation System Using a Domain-specific Bilingual Lexicon |
d4954835 | The concept of Linked Data has attracted increased interest in recent times due to its free and open availability and its sheer volume. We present a framework to generate patterns which can be used to lexicalize Linked Data. We use DBpedia as the Linked Data resource, which is one of the most comprehensive and fastest growing Linked Data resources available for free. The framework incorporates a text preparation module which collects and prepares the text, after which Open Information Extraction is employed to extract relations, which are then aligned with triples to identify patterns. The framework also uses lexical semantic resources to mine patterns utilizing VerbNet and WordNet. The framework achieved 70.36% accuracy and a Mean Reciprocal Rank value of 0.72 for five DBpedia ontology classes, generating 101 lexicalizations. | Generating Lexicalization Patterns for Linked Open Data |
d189125446 | ||
d7689518 | Typically, the lexicon models used in statistical machine translation systems do not include any kind of linguistic or contextual information, which often leads to problems in performing a correct word sense disambiguation. One way to deal with this problem within the statistical framework is to use maximum entropy methods. In this paper, we present how to use this type of information within a statistical machine translation system. We show that it is possible to significantly decrease training and test corpus perplexity of the translation models. In addition, we perform a rescoring of N-best lists using our maximum entropy model and thereby yield an improvement in translation quality. Experimental results are presented on the so-called "Verbmobil Task". | Refined Lexicon Models for Statistical Machine Translation using a Maximum Entropy Approach |
d1224 | The output of Chinese word segmentation can vary according to different linguistic definitions of words and different engineering requirements, and no single standard can satisfy all linguists and all computer applications. Most of the disagreements in language processing come from the segmentation of morphologically derived words (MDWs). This paper presents a system that can be conveniently customized to meet various user-defined standards in the segmentation of MDWs. In this system, all MDWs contain word trees where the root nodes correspond to maximal words and leaf nodes to minimal words. Each non-terminal node in the tree is associated with a resolution parameter which determines whether its daughters are to be displayed as a single word or separate words. Different outputs of segmentation can then be obtained from the different cuts of the tree, which are specified by the user through the different value combinations of those resolution parameters. We thus have a single system that can be customized to meet different segmentation specifications. While native speakers of Chinese are often able to agree on how to segment a string of characters into words, there are a substantial number of cases where no agreement can be reached [Sproat et al. 1996]. Besides, different natural language processing (NLP) applications may have different requirements that call for different definitions of words and different granularities of word segmentation. This presents a challenging problem for the development of annotated Chinese corpora that are expected to be useful for training multiple types of NLP systems. It is also a challenge to any Chinese word segmentation system that claims to be capable of supporting multiple user applications. In what follows, we will discuss this problem mainly from the viewpoint of NLP and propose a solution that we have implemented and evaluated in an existing Chinese NLP system. In Section 2, we will look at the problem areas where disagreements among different standards are most likely to arise. We will identify the alternatives in each case, discuss the computational motivation behind each segmentation option, and suggest possible solutions. This section can be skipped by readers who are already familiar with Chinese morphology and the associated segmentation problems. Section 3 presents a customizable system where most of the solutions suggested in Section 2 are implemented. The implementation will be described in detail and evaluation results will be presented. We also offer a proposal for the development of linguistic resources that can be customized for different purposes. In Section 4, we conclude that, with the preservation of word-internal structures and a set of resolution parameters, we can have a Chinese system or a single annotated corpus that can be conveniently customized to meet different word segmentation requirements. Target Areas for Customization: How to identify words in Chinese has been a long-standing research topic in Chinese linguistics and Chinese language processing. Many different criteria have been proposed and any serious discussion of this issue will take no less than a book such as [Packard 2000]. Among the reasons that make this a hard and intriguing problem are: • Chinese orthography has no indication of word boundaries except punctuation marks. • The criteria for wordhood can vary depending on whether we are talking about the phonological word, lexical word, morphological word, syntactic word, semantic word, or psychological word [Packard 2000]. | Customizable Segmentation of Morphologically Derived Words in Chinese |
d18843444 | We present an approach to model hidden attributes in the compositional semantics of adjective-noun phrases in a distributional model. For the representation of adjective meanings, we reformulate the pattern-based approach for attribute learning of Almuhareb (2006) in a structured vector space model (VSM). This model is complemented by a structured vector space representing attribute dimensions of noun meanings. The combination of these representations along the lines of compositional semantic principles exposes the underlying semantic relations in adjective-noun phrases. We show that our compositional VSM outperforms simple pattern-based approaches by circumventing their inherent sparsity problems. | A Structured Vector Space Model for Hidden Attribute Meaning in Adjective-Noun Phrases |
d226283768 | A noun compound is a sequence of contiguous nouns that acts as a single noun, although the predicate denoting the semantic relation between its components is dropped. Noun Compound Interpretation is the task of uncovering the relation, in the form of a preposition or a free paraphrase. Prepositional paraphrasing refers to the use of preposition to explain the semantic relation, whereas free paraphrasing refers to invoking an appropriate predicate denoting the semantic relation. In this paper, we propose an unsupervised methodology for these two types of paraphrasing. We use pre-trained contextualized language models to uncover the 'missing' words (preposition or predicate). These language models are usually trained to uncover the missing word/words in a given input sentence. Our approach uses templates to prepare the input sequence for the language model. The template uses a special token to indicate the missing predicate. As the model has already been pre-trained to uncover a missing word (or a sequence of words), we exploit it to predict missing words for the input sequence.Our experiments using four datasets show that our unsupervised approach (a) performs comparably to supervised approaches for prepositional paraphrasing, and (b) outperforms supervised approaches for free paraphrasing. Paraphrasing (prepositional or free) using our unsupervised approach is potentially helpful for NLP tasks like machine translation and information extraction. | Looking inside Noun Compounds: Unsupervised Prepositional and Free Paraphrasing |
d250551977 | We investigate model calibration in the setting of zero-shot cross-lingual transfer with largescale pre-trained language models. The level of model calibration is an important metric for evaluating the trustworthiness of predictive models. There exists an essential need for model calibration when natural language models are deployed in critical tasks. We study different post-training calibration methods in structured and unstructured prediction tasks. We find that models trained with data from the source language become less calibrated when applied to the target language and that calibration errors increase with intrinsic task difficulty and relative sparsity of training data. Moreover, we observe a potential connection between the level of calibration error and an earlier proposed measure of the distance from English to other languages. Finally, our comparison demonstrates that among other methods Temperature Scaling (TS) generalizes well to distant languages, but TS fails to calibrate more complex confidence estimation in structured predictions compared to more expressive alternatives like Gaussian Process Calibration. | Calibrating Zero-shot Cross-lingual (Un-)structured Predictions |
d4658133 | This paper describes the AMBRA system, entered in the SemEval-2015 Task 7: 'Diachronic Text Evaluation' subtasks one and two, which consist of predicting the date when a text was originally written. The task is valuable for applications in digital humanities, information systems, and historical linguistics. The novelty of this shared task consists of incorporating label uncertainty by assigning an interval within which the document was written, rather than assigning a clear time marker to each training document. To deal with non-linear effects and variable degrees of uncertainty, we reduce the problem to pairwise comparisons of the form is Document A older than Document B?, and propose a nonparametric way to transform the ordinal output into time intervals. | AMBRA: A Ranking Approach to Temporal Text Classification |
d252818906 | Dialogue systems that aim to acquire user models through interactions with users need to have interviewing functionality. In this study, we propose a method to generate interview dialogues to build a dialogue system that acquires user preferences for food. First, we collected 118 text-based dialogues between the interviewer and customer and annotated the communicative function and semantic content of the utterances. Next, using the corpus as training data, we created a classification model for the communicative function of the interviewer's next utterance and a generative model that predicts the semantic content of the utterance based on the dialogue history. By representing semantic content as a sequence of tokens, we evaluated the semantic content prediction model using BLEU. The results demonstrated that the semantic content produced by the proposed method was closer to the ground truth than the semantic content transformed from the output text generated by the retrieval model and GPT-2. Further, we present some examples of dialogue generation by applying model outputs to template-based sentence generation. | Semantic Content Prediction for Generating Interviewing Dialogues to Elicit Users' Food Preferences |
d14604012 | This paper proposes an extension of Dependency Tree Semantics (DTS), an underspecified logic originally proposed in [20], that uniformly implements constraints on Nested Quantification, Island Constraints and logical Redundancy. Unfortunately, this extension makes the complexity exponential in the number of NPs, in the worst cases. Nevertheless, we conducted an experiment on the Turin University Treebank [6], a Treebank of Italian sentences annotated in a syntactic dependency format, whose results seem to indicate that these cases are very rare in real sentences. | Disambiguating quantifier scope in DTS |
d1699492 | Improving the quality of lexical networks is an important issue in the creation process of these language resources. This can be done by automatically inferring new relations from already existing ones with the purpose of (1) densifying the relations to cover the eventual lack of information and (2) detecting errors. In this paper, we devise such an approach applied to the JeuxDeMots lexical network, which is a freely available lexical and semantic resource for French. We first present the principles behind lexical network construction with crowdsourcing and games with a purpose, and illustrate them with JeuxDeMots (JDM). Then, we present the outline of an elicitation engine based on an inference engine using schemes like deduction, induction and abduction, which are referenced and briefly presented, and we especially highlight the new scheme (Relation Inference Scheme with Refinements) added to our system. An experiment showing the relevance of this scheme is then presented. | Relation Inference in Lexical Networks ... with Refinements |
d8427158 | In the business world, analyzing and dealing with risk permeates all decisions and actions. However, to date, risk identification, the first step in the risk management cycle, has always been a manual activity with little to no intelligent software tool support. In addition, although companies are required to list risks to their business in their annual SEC filings in the USA, these descriptions are often very high-level and vague. In this paper, we introduce Risk Mining, which is the task of identifying a set of risks pertaining to a business area or entity. We argue that by combining Web mining and Information Extraction (IE) techniques, risks can be detected automatically before they materialize, thus providing valuable business intelligence. We describe a system that induces a risk taxonomy with concrete risks (e.g., interest rate changes) at its leaves and more abstract risks (e.g., financial risks) closer to its root node. The taxonomy is induced via a bootstrapping algorithm starting with a few seeds. The risk taxonomy is used by the system as input to a risk monitor that matches risk mentions in financial documents to the abstract risk types, thus bridging a lexical gap. Our system is able to automatically generate company-specific "risk maps", which we demonstrate for a corpus of earnings report conference calls. | Hunting for the Black Swan: Risk Mining from Text |
d8305946 | In this paper, we investigate the utility of unsupervised lexical acquisition techniques to improve the quality of Named Entity Recognition and Classification (NERC) for the resource poor languages. As it is not a priori clear which unsupervised lexical acquisition techniques are useful for a particular task or language, careful feature selection is necessary. We treat feature selection as a multiobjective optimization (MOO) problem, and develop a suitable framework that fits well with the unsupervised lexical acquisition. Our experiments show performance improvements for two unsupervised features across three languages. | Multiobjective Optimization and Unsupervised Lexical Acquisition for Named Entity Recognition and Classification |
d18932500 | This paper presents an unsupervised batch learner for the quantity-insensitive stress systems described in Gordon (2002). Unlike previous stress learning models, the learner presented here is neither cue based (Dresher and Kaye, 1990), nor reliant on a priori Optimality-theoretic constraints (Tesar, 1998). Instead, our learner exploits a property called neighborhood distinctness, which is shared by all of the target patterns. Some consequences of this approach include a natural explanation for the occurrence of binary and ternary rhythmic patterns, the lack of higher n-ary rhythms, and the fact that, in these systems, stress always falls within a certain window of word edges. | Learning Quantity Insensitive Stress Systems via Local Inference |
d15961108 | Morfessor is a family of probabilistic machine learning methods for finding the morphological segmentation from raw text data. Recent developments include semi-supervised methods for utilizing annotated data. Morfessor 2.0 is a rewrite of the original, widely-used Morfessor 1.0 software, with well-documented command-line tools and a library interface. It includes new features such as semi-supervised learning, online training, and integrated evaluation code. | Morfessor 2.0: Toolkit for statistical morphological segmentation |
d2088138 | Online shopping caters to the needs of millions of users on a daily basis. To build an accurate system that can retrieve relevant products for a query like "MB252 with travel bags" one requires product and query categorization mechanisms, which classify the text as Home&Garden>Kitchen&Dining>Kitchen Appliances>Blenders. One of the biggest challenges in e-Commerce is that providers like Amazon, e-Bay, Google, Yahoo! and Walmart organize products into different product taxonomies, making it hard and time-consuming for sellers to categorize goods for each shopping platform. To address this challenge, we propose an automatic product categorization mechanism, which for a given product title assigns the correct product category from a taxonomy. We conducted an empirical evaluation on 445,408 product titles and used a rich product taxonomy of 319 categories organized into 6 levels. We compared performance against multiple algorithms and found that the best performing system reaches a .88 f-score. | Everyone Likes Shopping! Multi-class Product Categorization for e-Commerce |
d52141050 | Language models have been used in many natural language processing applications. In recent years, recurrent neural network based language models have defeated the conventional n-gram based techniques. However, it is difficult for neural network architectures to use linguistic annotations. We try to incorporate part-of-speech features into a recurrent neural network language model and use them to predict the next word. Specifically, we propose a parallel structure which contains two recurrent neural networks, one for word sequence modeling and another for part-of-speech sequence modeling. The state of the part-of-speech network helps improve the word sequence prediction. Experiments show that the proposed method performs better than the traditional recurrent network on perplexity and is better at reranking machine translation outputs. | A Parallel Recurrent Neural Network for Language Modeling with POS Tags |
d198874926 | In this paper, we propose an end-to-end CNN-LSTM model for generating descriptions for sequential images with a local-object attention mechanism. To generate coherent descriptions, we capture global semantic context using a multilayer perceptron, which learns the dependencies between sequential images. A parallel LSTM network is exploited for decoding the sequence descriptions. Experimental results show that our model outperforms the baseline across three different evaluation metrics on the datasets published by Microsoft. | Generating Descriptions for Sequential Images with Local-Object Attention and Global Semantic Context Modelling |
d16351576 | Kintsch and van Dijk proposed a model of human comprehension and summarisation which is based on the idea of processing propositions on a sentence-bysentence basis, detecting argument overlap, and creating a summary on the basis of the best connected propositions. We present an implementation of that model, which gets around the problem of identifying concepts in text by applying coreference resolution, named entity detection, and semantic similarity detection, implemented as a two-step competition. We evaluate the resulting summariser against two commonly used extractive summarisers using ROUGE, with encouraging results. | A Summariser based on Human Memory Limitations and Lexical Competition |
d14432659 | Semi-supervised learning methods address the problem of building classifiers when labeled data is scarce. Text classification is often augmented by a rich set of labeled features representing a particular class. As tuple-level labeling is resource-consuming, semi-supervised and weakly supervised learning methods have been explored recently. Compared to labeling data instances (documents), feature labeling takes much less effort and time. Posterior regularization (PR) is a framework recently proposed for incorporating bias in the form of prior knowledge into the posterior for the label. Our work focuses on incorporating labeled features into a naive Bayes classifier in a semi-supervised setting using PR. Generative learning approaches utilize the unlabeled data more effectively compared to discriminative approaches in a semi-supervised setup. In the current study we formulate a classification method which uses the labeled features as constraints for the posterior in a semi-supervised generative learning setting. Our empirical study shows that performance gains are significant compared to an approach solely based on Generalized Expectation (GE) or a limited amount of labeled data alone. We also show an application of our framework in a transfer learning setup for text classification. As we allow labeled data as well as labeled features to be used, our setup allows the presence of a limited amount of labeled data on the target side of transfer learning, where feature constraints are used for transferring knowledge from the source domain to the target domain. | Semi-supervised Learning of Naive Bayes Classifier with feature constraints |
d11895889 | In this paper we investigate the phenomenon of verb-particle constructions, discussing their characteristics and their availability for use with NLP systems. We concentrate in particular on the coverage provided by some electronic resources. Given the constantly growing number of verb-particle combinations, possible ways of extending the coverage of the available resources are investigated, taking into account regular patterns found in some productive combinations of verbs and particles. We discuss, in particular, the use of Levin's (1993) classes of verbs as a means to obtain productive verb-particle constructions, and discuss the issues involved in adopting such an approach. | Verb-Particle Constructions and Lexical Resources |
d235097558 | ||
d241583404 | Every day, individuals post suicide notes on social media asking for support, resources, and reasons to live. Some posts receive few comments while others receive many. While prior studies have analyzed whether specific responses are more or less helpful, it is not clear if the quantity of comments received is beneficial in reducing symptoms or in keeping the user engaged with the platform and hence with life. In the present study, we create a large dataset of users' first r/SuicideWatch (SW) posts from Reddit (N=21,274) and collect the comments as well as the users' subsequent posts (N=1,615,699) to determine whether they post in SW again in the future. We use propensity score stratification, a causal inference method for observational data, and estimate whether the amount of comments, as a measure of social support, increases or decreases the likelihood of posting again on SW. One hypothesis is that receiving more comments may decrease the likelihood of the user posting in SW in the future, either by reducing symptoms or because comments from untrained peers may be harmful. On the contrary, we find that receiving more comments increases the likelihood a user will post in SW again. We discuss how receiving more comments is helpful, not by permanently relieving symptoms, since users make another SW post and their second posts have similar mentions of suicidal ideation, but rather by reinforcing users to seek support and remain engaged with the platform. Furthermore, since receiving only 1 comment, the most common case, decreases the likelihood of posting again by 14% on average depending on the time window, it is important to develop systems that encourage more commenting. | It's quality and quantity: the effect of the amount of comments on online suicidal posts |
d541460 | A distributed system is described that reliably mines parallel text from large corpora. The approach can be regarded as cross-language near-duplicate detection, enabled by an initial, low-quality batch translation. In contrast to other approaches which require specialized metadata, the system uses only the textual content of the documents. Results are presented for a corpus of over two billion web pages and for a large collection of digitized public-domain books. | Large Scale Parallel Document Mining for Machine Translation |
d12342168 | Conditional Random Fields (CRFs) are a popular formalism for structured prediction in NLP. It is well known how to train CRFs with certain topologies that admit exact inference, such as linear-chain CRFs. Some NLP phenomena, however, suggest CRFs with more complex topologies. Should such models be used, considering that they make exact inference intractable? Stoyanov et al. (2011) recently argued for training parameters to minimize the task-specific loss of whatever approximate inference and decoding methods will be used at test time. We apply their method to three NLP problems, showing that (i) using more complex CRFs leads to improved performance, and that (ii) minimum-risk training learns more accurate models. | Minimum-Risk Training of Approximate CRF-Based NLP Systems |
d1364249 | We present a generative model for the unsupervised learning of dependency structures. We also describe the multiplicative combination of this dependency model with a model of linear constituency. The product model outperforms both components on their respective evaluation metrics, giving the best published figures for unsupervised dependency parsing and unsupervised constituency parsing. We also demonstrate that the combined model works and is robust cross-linguistically, being able to exploit either attachment or distributional regularities that are salient in the data. | Corpus-Based Induction of Syntactic Structure: Models of Dependency and Constituency |
d18468911 | This paper reports on the design of a lexical database for English which is currently under construction ("FrameNet-2"), and describes the kinds of linguistic facts that the database is intended to make available, for both human and computer consumers. Building on a recently completed pilot study ("FrameNet-1"), it is centered on the nature of the relation between lexical meanings and the conceptual structures which underlie them (semantic frames). The database will show the semantic and syntactic combinatorial possibilities (based on frame membership) of the lexical items it includes, as these are documented through grammatical and semantic annotations of sentences extracted from a large corpus of contemporary written English. The notions of profiling within a frame, frame inheritance including multiple inheritance, frame blending, and frame composition will be explained and illustrated, and the manner of storing information about them in the database will be outlined. The building of the database, with its necessary labor-intensive manual component, will be explained. | Building a Large Lexical Databank Which Provides Deep Semantics* |
d10667603 | Dictionary on Computer, hereafter DOC, is part of an overall effort to harness an on-line computer for phonological research. For certain problems the linguist finds it necessary to organize large amounts of data, or to perform rather involved logical tasks -- such as checking out a body of rules with intricate ordering relations. In these situations a computer can be invaluable in that it forces the linguist to think through his problems with great precision and in that it can do certain jobs with a speed and accuracy not otherwise possible. The overt aim of DOC is to reconstruct the phonological histories of the major Chinese dialects. At a deeper level our interest is to find out more about how phonological structures change in general and the relation between these changes and the synchronic systems they lead to. To achieve these objectives we must attempt to account for oceans of data (the regular and irregular developments of thousands of morphemes in dozens of dialects). The hypotheses we posit, i.e., the reconstructed forms and the associated rules, are likewise numerous and complex. The project is further complicated by the tens | Project DOC: Its Methodological Basis |
d15951571 | An architecture for voice dialogue machines is described with emphasis on the problem solving and high level decision making mechanisms. The architecture provides facilities for generating voice interactions aimed at cooperative human-machine problem solving. It assumes that the dialogue will consist of a series of local self-consistent subdialogues each aimed at subgoals related to the overall task. The discourse may consist of a set of such subdialogues with jumps from one subdialogue to the other in a search for a successful conclusion. The architecture maintains a user model to assure that interactions properly account for the level of competence of the user, and it includes an ability for the machine to take the initiative or yield the initiative to the user. It uses expectation from the dialogue processor to aid in the correction of errors from the speech recognizer. | Efficient Collaborative Discourse: A Theory and Its Implementation |
d60258366 | The system demo introduces Grail, a general-purpose parser for multimodal categorial grammars, with special emphasis on recent research which makes Grail suitable for wide-coverage French syntax and semantics. These developments have been possible thanks to a categorial grammar which has been extracted semi-automatically from the Paris 7 treebank and a semantic lexicon which maps combinations of words, part-of-speech tags and formulas to Discourse Representation Structures. Résumé. This demonstration describes Grail: a parser for categorial grammars. It highlights recent research that has enabled Grail to produce syntactic and semantic analyses of French. These developments have been possible thanks to a grammar extracted semi-automatically from the Paris 7 corpus and a semantic lexicon which translates combinations of words, part-of-speech tags and formulas into Discourse Representation Structures. | Wide-Coverage French Syntax and Semantics using Grail * |
d243997694 | Recent task-oriented dialogue systems learn a model from annotated dialogues, and such dialogues are in turn collected and annotated so that they are consistent with certain domain knowledge. However, in real scenarios, domain knowledge is subject to frequent changes, and initial training dialogues may soon become obsolete, resulting in a significant decrease of the model performance. In this paper, we investigate the relationship between training dialogues and domain knowledge, and propose dialogue domain adaptation, a methodology aiming at adapting initial training dialogues to changes that have occurred in the domain knowledge. We focus on slot-value changes (e.g., when new slot values are available to describe domain entities) and define an experimental setting for dialogue domain adaptation. First, we show that current state-of-the-art models for dialogue state tracking are still poorly robust to slot-value changes of the domain knowledge. Then, we compare different domain adaptation strategies, showing that simple techniques are effective to reduce the gap between training dialogues and domain knowledge. | Addressing Slot-Value Changes in Task-oriented Dialogue Systems Through Dialogue Domain Adaptation |
d5882669 | This paper presents a bidirectional inference algorithm for sequence labeling problems such as part-of-speech tagging, named entity recognition and text chunking. The algorithm can enumerate all possible decomposition structures and find the highest probability sequence together with the corresponding decomposition structure in polynomial time. We also present an efficient decoding algorithm based on the easiest-first strategy, which gives comparably good performance to full bidirectional inference with significantly lower computational cost. Experimental results of part-of-speech tagging and text chunking show that the proposed bidirectional inference methods consistently outperform unidirectional inference methods and bidirectional MEMMs give comparable performance to that achieved by state-of-the-art learning algorithms including kernel support vector machines. | Bidirectional Inference with the Easiest-First Strategy for Tagging Sequence Data |
d12207704 | In this paper, we investigate the practical applicability of Co-Training for the task of building a classifier for reference resolution. We are concerned with the question of whether Co-Training can significantly reduce the amount of manual labeling work and still produce a classifier with acceptable performance. | Applying Co-Training to Reference Resolution |
d62005149 | This report documents the details of the NEWS 2009 Machine Transliteration Shared Task. | Report of NEWS 2009 Machine Transliteration Shared Task |
d1713067 | The first step in Chinese NLP is to tokenize or segment character sequences into words, since the text contains no word delimiters. Recent heavy activity in this area has shown the biggest stumbling block to be words that are absent from the lexicon, since successful tokenizers to date have been based on dictionary lookup (e.g., Chang & Chen 1993; Chiang et al. 1992; Lin et al. 1993; Wu & Tseng 1993; Sproat et al. 1994). We present empirical evidence for four points concerning tokenization of Chinese text: (1) More rigorous "blind" evaluation methodology is needed to avoid inflated accuracy measurements; we introduce the nk-blind method. (2) The extent of the unknown-word problem is far more serious than generally thought, when tokenizing unrestricted texts in realistic domains. (3) Statistical lexical acquisition is a practical means to greatly improve tokenization accuracy with unknown words, reducing error rates as much as 32.0%. (4) When augmenting the lexicon, linguistic constraints can provide simple inexpensive filters yielding significantly better precision, reducing error rates as much as 49.4%. | IMPROVING CHINESE TOKENIZATION WITH LINGUISTIC FILTERS ON STATISTICAL LEXICAL ACQUISITION |
d2478893 | Japanese backchannel utterances, aizuti, in a multi-party design conversation were examined, and aizuti functions were analyzed in comparison with their functions in two-party dialogues. In addition to the two major functions, signaling acknowledgment and turn-management, it was argued that aizuti in multi-party conversations are involved in the joint construction of design plans through management of the floor structure, and display of participants' readiness to engage in collaborative elaboration of jointly constructed proposals. | Aiduti in Japanese Multi-party Design Conversations |
d41665377 | In this article, we propose a user-centered evaluation of Citron, a French question-answering system able to extract answers to multiple-answer questions (questions with several different correct answers) in open domain from Web documents. We present the experimental protocol and the results of our two user experiments, which aim to (1) compare Citron's performance with that of a human on the multiple-answer extraction task and (2) assess user satisfaction with different answer presentation formats. Abstract. In this paper, we propose a user evaluation of Citron, a question-answering system in French which extracts answers for multiple answer questions (expecting different correct answers) in open domain from Web documents. We present here our experimental protocol and results for user evaluations which aim at (1) comparing multiple answer extraction performances of Citron and users, and (2) knowing user preferences about multiple answer presentation. Keywords: question-answering system, multiple answers, user evaluation. | 21ème Traitement Automatique des Langues Naturelles |
d202782487 | Aspect-level sentiment classification, which is a fine-grained sentiment analysis task, has received much attention in recent years. People sometimes express both positive and negative sentiments towards an aspect at the same time. Such opinions with conflicting sentiments, however, are ignored by existing studies, which design models under the assumption that they are absent. We argue that the exclusion of conflict opinions is problematic, because it represents an important style of human thinking - dialectic thinking. If a real-world sentiment classification system ignores the existence of conflict opinions when it is designed, it will incorrectly mix conflict opinions into other sentiment polarity categories in practice. Existing models have problems when recognizing conflicting opinions, such as data sparsity. In this paper, we propose a multi-label classification model with a dual attention mechanism to address these problems. | Recognizing Conflict Opinions in Aspect-level Sentiment Classification with Dual Attention Networks |
d236486167 | We improve the customer experience and gain customers' trust when their issues are resolved rapidly and with less friction. Existing work has focused on reducing the overall case resolution time by binning a case into predefined categories and routing it to the desired support engineer. However, the actions taken by the engineer during case analysis and resolution are altogether ignored, even though they form the bulk of the case resolution time. In this work, we propose two systems that enable support engineers to resolve cases faster. The first, a guidance extraction model, mines historical cases and provides technical guidance phrases to the support engineers. These phrases can then be used to educate the customer or to obtain critical information needed to resolve the case and thus minimize the number of correspondences between the engineer and customer. The second, a summarization model, creates an abstractive summary of a case to provide better context to the support engineer. Through quantitative evaluation we obtain an F1 score of 0.64 on the guidance extraction model and a BertScore (F1) of 0.55 on the summarization model. | SupportNet: Neural Networks for Summary Generation and Key Segment Extraction from Technical Support Tickets |
d60180900 | In this article, we present a contextual exploration model and a software platform that provide access to the semantic content of texts and extract particularly relevant passages from them. The objective is to develop and exploit linguistic resources in order to identify in texts, independently of the domains treated, some of the relations that organize knowledge as well as the discourse organizations put in place by the author. The semantic analysis of the text is guided by the detection of triggering linguistic cues whose use is representative of the notions under study. In this paper, we present a model of contextual exploration and a workstation dedicated to semantic filtering and relevant sentence extraction. The purpose is to develop and to exploit linguistic resources in order to identify in texts, independently of processed domains, some specific relations which organize knowledge and author discourse. Semantic analysis is driven by the identification of linguistic indicators which are relevant clues for the studied notions. | Modèle d'exploration contextuelle pour l'analyse sémantique de textes |
d2440471 | We describe here an algorithm for detecting subject boundaries within text based on a statistical lexical similarity measure. Hearst has already tackled this problem with good results (Hearst, 1994). One of her main assumptions is that a change in subject is accompanied by a change in vocabulary. Using this assumption, but by introducing a new measure of word significance, we have been able to build a robust and reliable algorithm which exhibits improved accuracy without sacrificing language independence. | Detecting Subject Boundaries Within Text: A Language Independent Statistical Approach |
d256461269 | Large multilingual language models generally demonstrate impressive results in zero-shot cross-lingual transfer, yet often fail to successfully transfer to low-resource languages, even for token-level prediction tasks like named entity recognition (NER). In this work, we introduce a simple yet highly effective approach for improving zero-shot transfer for NER to low-resource languages. We observe that NER fine-tuning in the source language decontextualizes token representations, i.e., tokens increasingly attend to themselves. This increased reliance on token information itself, we hypothesize, triggers a type of overfitting to properties that NE tokens within the source languages share, but are generally not present in NE mentions of target languages. As a remedy, we propose a simple yet very effective sliced fine-tuning for NER (SLICER) that forces stronger token contextualization in the Transformer: we divide the transformed token representations and classifier into disjoint slices that are then independently classified during training. We evaluate SLICER on two standard benchmarks for NER that involve low-resource languages, WikiANN and MasakhaNER, and show that it (i) indeed reduces decontextualization (i.e., extent to which NE tokens attend to themselves), consequently (ii) yielding consistent transfer gains, especially prominent for low-resource target languages distant from the source language. | SLICER: Sliced Fine-Tuning for Low-Resource Cross-Lingual Transfer for Named Entity Recognition |
d6486942 | In this article, we introduce a new technique for constructing wide-coverage morphological lexica from large corpora and morphological knowledge, with an application to French. Basically, it relies on the idea that the existence of a hypothetical lemma can be guessed if several different words found in the corpus are best interpreted as morphological variants of this lemma. We first validated our technique by extracting verbs and adjectives on a general French corpus of 25 million words. Compared with other lexical resources available for French, our results are very satisfying, since we cover many words, often derived words, that are not always present in other lexica. Application of our algorithm to the acquisition of domain-specific adjectives on a botanic corpus also gave very good results, thus demonstrating its usability for extracting domain-specific lexica. Moreover, it is generalizable to any language with a substantial morphology. Part of the resulting lexicon (currently verbal forms) is already freely available on http://www.lefff.net/. | Morphology based automatic acquisition of large-coverage lexica |
d2653374 | Morphologically complex terms composed from Greek or Latin elements are frequent in scientific and technical texts. Word-forming units are thus relevant cues for the identification of terms in domain-specific texts. This article describes a method for the automatic extraction of terms relying on the detection of classical prefixes and word-initial combining forms. Word-forming units are identified using a regular expression. The system then extracts terms by selecting words which either begin or coalesce with these elements. Next, terms are grouped in families which are displayed as a weighted list in HTML format. | Multilingual Term Extraction from Domain-specific Corpora Using Morphological Structure |
d2750870 | In this paper, we address the task of automatically aligning/detecting the bilingual documents that are translations of each other from a single web-domain as part of WMT 2016. Given the large amounts of data available in each web-domain, a brute force approach like finding similarities between every possible pair is a computationally expensive operation. Therefore, we start with a simple approach of matching just the web page URLs after some pre-processing to reduce the number of possible pairings to a small extent. This simple approach obtained a recall of 50% and the exact matches from this approach are removed from further consideration. We built on top of this using an n-gram based approach that uses the partial English translations of French web pages and achieved a recall of 93.71% on the training pairs provided. We also outline an IR-based approach that uses both the content and the metadata of each web page URL, thereby obtaining a recall of 56.31%. Our final submission to this shared task using the n-gram based approach achieved a recall of 93.92%. | Shared Task Papers |
d2813562 | Finite-state approaches have been highly successful at describing the morphological processes of many languages. Such approaches have largely focused on modeling the phone- or character-level processes that generate candidate lexical types, rather than tokens in context. For the full analysis of words in context, disambiguation is also required (Hakkani-Tür et al., 2000; Hajič et al., 2001). In this paper, we apply a novel source-channel model to the problem of morphological disambiguation (segmentation into morphemes, lemmatization, and POS tagging) for concatenative, templatic, and inflectional languages. The channel model exploits an existing morphological dictionary, constraining each word's analysis to be linguistically valid. The source model is a factored, conditionally-estimated random field (Lafferty et al., 2001) that learns to disambiguate the full sentence by modeling local contexts. Compared with baseline state-of-the-art methods, our method achieves statistically significant error rate reductions on Korean, Arabic, and Czech, for various training set sizes and accuracy measures. * This work was supported by a Fannie and John Hertz Foundation Fellowship, a NSF Fellowship, and a NDSEG Fellowship (sponsored by ARO and DOD). The views expressed are not necessarily endorsed by sponsors. We thank Eric Goldlust and Markus Dreyer for Dyna language support and Jason Eisner, David Yarowsky, and three anonymous reviewers for comments that improved the paper. We also thank Jan Hajič and Pavel Krbec for sharing their Czech tagger. | Context-Based Morphological Disambiguation with Random Fields * |
d177702 | Electronic patient records (EPRs) are a valuable resource for research but for confidentiality reasons they cannot be used freely. In order to make EPRs available to a wider group of researchers, sensitive information such as personal names has to be removed. Deidentification is a process that makes this possible. Both rule-based as well as statistical and machine learning based methods exist to perform de-identification, but the second method requires annotated training material which exists only very sparsely for patient names. It is therefore necessary to use rule-based methods for de-identification of EPRs. Not much is known, however, about the order in which the various rules should be applied and how the different rules influence precision and recall. This paper aims to answer this research question by implementing and evaluating four common rules for de-identification of personal names in EPRs written in Swedish: (1) dictionary name matching, (2) title matching, (3) common words filtering and (4) learning from previous modules. The results show that to obtain the highest recall and precision, the rules should be applied in the following order: title matching, common words filtering and dictionary name matching. | Influence of Module Order on Rule-Based De-identification of Personal Names in Electronic Patient Records Written in Swedish |
d5091430 | In statistical machine translation, an alignment defines a mapping between the words in the source and in the target sentence. Alignments are used, on the one hand, to train the statistical models and, on the other, during the decoding process to link the words in the source sentence to the words in the partial hypotheses generated. In both cases, the quality of the alignments is crucial for the success of the translation process. In this paper, we propose an algorithm based on an Estimation of Distribution Algorithm for computing alignments between two sentences in a parallel corpus. This algorithm has been tested on different tasks involving different pairs of languages. In the different experiments presented here for the two word-alignment shared tasks proposed in the HLT-NAACL 2003 and in the ACL 2005, the EDA-based algorithm outperforms the best participant systems. | Searching for alignments in SMT. A novel approach based on an Estimation of Distribution Algorithm * |
d8245223 | Maximum Entropy based Semantic Role Labeling | |
d8424384 | This paper describes Kind Types (KT), a system which uses commonsense knowledge to reason about natural language text. KT encodes some of the knowledge underlying natural language understanding, including category distinctions and descriptions differentiating real-world objects, states and events. It embeds an ontology reflecting the ordinary person's top-level cognitive model of real-world distinctions and a database of prototype descriptions of real-world entities. KT is transportable, empirically-based and constrained for efficient reasoning in ways similar to human reasoning processes. | KIND TYPES IN KNOWLEDGE REPRESENTATION |
d2085726 | In this paper we present BabelNet - a very large, wide-coverage multilingual semantic network. The resource is automatically constructed by means of a methodology that integrates lexicographic and encyclopedic knowledge from WordNet and Wikipedia. In addition, Machine Translation is applied to enrich the resource with lexical information for all languages. We conduct experiments on new and existing gold-standard datasets to show the high quality and coverage of the resource. | BabelNet: Building a Very Large Multilingual Semantic Network |
d282477 | In this paper we describe a database that consists of offline handwritten Spanish sentences from four different subtasks. The database includes 1 500 forms produced by the same number of writers. A total of around 100 000 word instances out of a vocabulary of around 3 300 words occur in the collection. This database is intended to be used for offline handwriting recognition tasks. However, this database is expected to be especially useful for recognition systems that may take advantage of language models of restricted-semantic tasks. The database also includes a few image-processing procedures for extraction of handwritten text images from the forms and segmentation of the images into lines and words. | The SPARTACUS-Database: a Spanish Sentence Database for Offline Handwriting Recognition |
d2116296 | Discriminative models such as logistic regression profit from the ability to incorporate arbitrary rich features; however, complex dependencies among overlapping features can often result in weight undertraining. One popular method that attempts to mitigate this problem is logarithmic opinion pools (LOP), which is a specialized form of product of experts model that automatically adjusts the weighting among experts. A major problem with LOP is that it requires significant amounts of domain expertise in designing effective experts. We propose a novel method that learns to induce experts - not just the weighting between them - through the use of a mixed ℓ2,1 norm as previously seen in elitist lasso. Unlike its more popular sibling, the ℓ1,2 norm (used in group lasso), which seeks feature sparsity at the group level, the ℓ2,1 norm encourages sparsity within feature groups. We demonstrate how this property can be leveraged as a competition mechanism to induce groups of diverse experts, and introduce a new formulation of elitist lasso Max-Ent in the FOBOS optimization framework (Duchi and Singer, 2009). Results on the Named Entity Recognition task suggest that this method gives consistent improvements over a standard logistic regression model, and is more effective than conventional induction schemes for experts. | Learning a Product of Experts with Elitist Lasso |
d1923291 | It has been known since Ide and Veronis [6] that it is impossible to automatically extract an ontology structure from a dictionary, because that information is simply not present. We attempt to extract structure elements from a dictionary using clues taken from a formal ontology, and use these elements to match dictionary definitions to ontology synsets; this allows us to enrich the ontology with dictionary definitions, assign ontological structure to the dictionary, and disambiguate elements of definitions and synsets. | Dictionary-Ontology Cross-Enrichment Using TLFi and WOLF to enrich one another |
d2823519 | We present an algorithm for generating referring expressions in open domains. Existing algorithms work at the semantic level and assume the availability of a classification for attributes, which is only feasible for restricted domains. Our alternative works at the realisation level, relies on Word-Net synonym and antonym sets, and gives equivalent results on the examples cited in the literature and improved results for examples that prior approaches cannot handle. We believe that ours is also the first algorithm that allows for the incremental incorporation of relations. We present a novel corpus-evaluation using referring expressions from the Penn Wall Street Journal Treebank. | Generating Referring Expressions in Open Domains |
d257154232 | The goal of the shared task is multi-label classification for biomedical records in English used for Evidence-Based Medicine. In this paper, we describe the Transformer-based model submitted by our team turkNLP for the shared task. Our model achieved a Micro ROC score of ≈ 0.93 on the shared task and ranked 5th on the leaderboard. | Automatic Classification of Evidence Based Medicine Using Transformers |
d16913198 | This paper describes the issues involved in extending a trans-lingual lexicon, the TextWise Conceptual Interlingua (CI), with Arabic terms. The Conceptual Interlingua is based on the Princeton English WordNet (Fellbaum, 1998). It is a central component in the cross-lingual information retrieval (CLIR) system CINDOR (Conceptual INterlingua for DOcument Retrieval). Arabic has a rich morphological system combining templatic and affixational paradigms for both inflectional and derivational morphology. This rich morphology poses a major challenge to the design and building of the Arabic CI and also its validation. This is because the available resources for Arabic, whether manually constructed bilingual lexicons or lexicons automatically derived from bilingual parallel corpora, exist at different levels of morphological representation. We describe here the issues and decisions made in the design and construction of the Arabic-English CI using different types of manual and automatic resources. We also present the results of an extensive validation of the Arabic CI and briefly discuss the evaluation of its use for CLIR on the TREC Arabic Benchmark collection. | Design, Construction and Validation of an Arabic-English Conceptual Interlingua for Cross-lingual Information Retrieval |
d15555787 | For a single semantic meaning, various linguistic expressions exist in the Mainland China, Hong Kong and Taiwan varieties of Mandarin Chinese, a.k.a. the Greater China Region (GCR). Differing from current bilingual word alignment corpora, in this paper we have constructed two monolingual GCR corpora. One is an 11,623-triple GCR word dictionary corpus which was automatically extracted and manually annotated from 30 million sentence pairs from Wikipedia. The other is a manually annotated 12,000-sentence-pair GCR word alignment corpus from Wikipedia and news websites. In addition, we present a rule-based word alignment model which systematically explores the different word alignment cases, e.g., 1-1, 1-n and m-n mappings, from Mainland China to Hong Kong or Taiwan. Evaluation results on our two different GCR word alignment corpora verify the effectiveness of our model, which significantly outperforms the current Hidden Markov Model (HMM) based method, GIZA++ and their enhanced versions. | Building Monolingual Word Alignment Corpus for the Greater China Region |