_id (string, 4–10 chars) | text (string, 0–18.4k chars) | title (string, 0–8.56k chars) |
|---|---|---|
d10456150 | A controlled use of omnipresent data can unlock a potential of services never reached before. In this paper, we propose a user-driven approach to take advantage of massive data streams. Our solution, named Stream2Text, relies on a personalized and continuous refinement of data to generate texts (in natural language) that provide a tailored synthesis of relevant data. It enables monitoring by a wide range of users, as text streams can be shared on social networks or used individually on mobile devices. | A personal storytelling about your favorite data |
d2201463 | Voice-Rate is an experimental dialog system through which a user can call to get product information. In this paper, we describe an optimal dialog management algorithm for Voice-Rate. Our algorithm uses a POMDP framework, which is probabilistic and captures uncertainty in speech recognition and user knowledge. We propose a novel method to learn a user knowledge model from a review database. Simulation results show that the POMDP system performs significantly better than a deterministic baseline system in terms of both dialog failure rate and dialog interaction time. To the best of our knowledge, our work is the first to show that a POMDP can be successfully used for disambiguation in a complex voice search domain like Voice-Rate. | Optimal Dialog in Consumer-Rating Systems using a POMDP Framework |
d8701766 | In this paper, we present a novel method based on CRFs in response to the two special characteristics of "contextual dependency" and "label redundancy" in sentence sentiment classification. We try to capture the contextual constraints on sentence sentiment using CRFs. Through introducing redundant labels into the original sentimental label set and organizing all labels into a hierarchy, our method can add redundant features into training for capturing the label redundancy. The experimental results prove that our method outperforms the traditional methods like NB, SVM, MaxEnt and standard chain CRFs. In comparison with the cascaded model, our method can effectively alleviate the error propagation among different layers and obtain better performance in each layer. | Adding Redundant Features for CRFs-based Sentence Sentiment Classification |
d236937164 | ||
d226283458 | ||
d6385943 | This paper argues for the development of parallel treebanks. It summarizes the work done in this area and reports on experiments for building a Swedish-German treebank. It also describes our approach for reusing resources from one language while annotating another language. | Bootstrapping Parallel Treebanks |
d2870061 | A growing body of research has recently been conducted on semantic textual similarity using a variety of neural network models. While recent research focuses on word-based representation for phrases, sentences and even paragraphs, this study considers an alternative approach based on character n-grams. We generate embeddings for character n-grams using a continuous-bag-of-n-grams neural network model. Three different sentence representations based on n-gram embeddings are considered. Results are reported for experiments with bigram, trigram and 4-gram embeddings on the STS Core dataset for SemEval-2016 Task 1. | ASOBEK at SemEval-2016 Task 1: Sentence Representation with Character N-gram Embeddings for Semantic Textual Similarity |
d6732044 | To deal with the sentiment-transfer problem, we propose a novel approach which integrates the sentiment orientations of documents into the graph-ranking algorithm. We apply the graph-ranking algorithm using the accurate labels of old-domain documents as well as the "pseudo" labels of new-domain documents. Experimental results show that the proposed algorithm dramatically improves the performance of baseline methods for sentiment transfer. | Graph Ranking for Sentiment Transfer |
d7031655 | This paper describes improvements to DECIPHER, the speech recognition component in SRI's Air Travel Information Systems (ATIS) and Resource Management systems. DECIPHER is a speaker-independent continuous speech recognition system based on hidden Markov model (HMM) technology. We show significant performance improvements in DECIPHER due to (1) the addition of tied-mixture HMM modeling, (2) rejection of out-of-vocabulary speech and background noise while continuing to recognize speech, (3) adapting to the current speaker, and (4) the implementation of N-gram statistical grammars with DECIPHER. Finally we describe our performance in the February 1991 DARPA Resource Management evaluation (4.8 percent word error) and in the February 1991 DARPA-ATIS speech and SLS evaluations (95 sentences correct, 15 wrong of 140). We show that, for the ATIS evaluation, a well-conceived system integration can be relatively robust to speech recognition errors and to linguistic variability and errors. | SPEECH RECOGNITION IN SRI'S RESOURCE MANAGEMENT AND ATIS SYSTEMS |
d28475531 | This paper describes a cognate identification method, used by a lexical alignment system for French and Romanian. We combine statistical techniques and linguistic information to extract cognates from lemmatized, tagged and sentence-aligned parallel corpora. We evaluate the cognate identification model and compare it to other methods using purely statistical techniques. We show that the use of linguistic information in the cognate identification system significantly improves the results. | Cognate Identification for a French-Romanian Lexical Alignment System: Empirical Study |
d988360 | ||
d219302917 | ||
d11074530 | Substantial research effort has been invested in recent decades into the computational study and automatic processing of multi-party conversation. While most aspects of conversational speech have benefited from a wide availability of analytic, computationally tractable techniques, only qualitative assessments are available for characterizing multi-party turn-taking. The current paper attempts to address this deficiency by first proposing a framework for computing turn-taking model perplexity, and then by evaluating several multi-participant modeling approaches. Experiments show that direct multi-participant models do not generalize to held out data, and likely never will, for practical reasons. In contrast, the Extended-Degree-of-Overlap model represents a suitable candidate for future work in this area, and is shown to successfully predict the distribution of speech in time and across participants in previously unseen conversations. | Modeling Norms of Turn-Taking in Multi-Party Conversation |
d218974184 | ||
d202541220 | ||
d14862422 | The DARPA Spoken Language effort has profited greatly from its emphasis on tasks and common evaluation metrics. Common, standardized evaluation procedures have helped the community to focus research effort, to measure progress, and to encourage communication among participating sites. The task and the evaluation metrics, however, must be consistent with the goals of the Spoken Language program, namely interactive problem solving. Our evaluation methods have evolved with the technology, moving from evaluation of read speech from a fixed corpus through evaluation of isolated canned sentences to evaluation of spontaneous speech in context in a canned corpus. A key component missed in current evaluations is the role of subject interaction with the system. Because of the great variability across subjects, however, it is necessary to use either a large number of subjects or a within-subject design. This paper proposes a within-subject design comparing the results of a software-sharing exercise carried out jointly by MIT and SRI. | SUBJECT-BASED EVALUATION MEASURES FOR INTERACTIVE SPOKEN LANGUAGE SYSTEMS |
d946674 | Use of Lambek's (1958) categorial grammar for linguistic work has generally been rather limited. There appear to be two main reasons for this: the notations most commonly used can sometimes obscure the structure of proofs and fail to clearly convey linguistic structure, and the calculus as it stands is apparently not powerful enough to describe many phenomena encountered in natural language. In this paper we suggest ways of dealing with both these deficiencies. Firstly, we reformulate Lambek's system using proof figures based on the 'natural deduction' notation commonly used for derivations in logic, and discuss some of the related proof-theory. Natural deduction is generally regarded as the most economical and comprehensible system for working on proofs by hand, and we suggest that the same advantages hold for a similar presentation of categorial derivations. Secondly, we introduce devices called structural modalities, based on the structural rules found in logic, for the characterization of commutation, iteration and optionality. This permits the description of linguistic phenomena which Lambek's system does not capture with the desired sensitivity and generality. LAMBEK CATEGORIAL GRAMMAR: PRELIMINARIES. Categorial grammar is an approach to language description in which the combination of expressions is governed not by specific linguistic rules but by general logical inference mechanisms. The point of departure can be seen as Frege's position that there are certain 'complete expressions' which are the primary bearers of meaning, and that the meanings of 'incomplete expressions' (including words) are derivative, being * We would like to thank Robin Cooper, Martin Pickering and Pete Whitelock for comments and discussion relating to this work. The authors were respectively supported by SERC Research Studentship 883069/1; ESRO Research Studentship C00428722003; ESPRIT Project 393 and Cognitive Science/HCI Research Initiative 89/CS01 and 89/CS25; SERC Postdoctoral Fellowship B/ITF/206. | PROOF FIGURES AND STRUCTURAL OPERATORS FOR CATEGORIAL GRAMMAR |
d248779955 | A stereotype is a positive or negative, generalized, and often widely shared belief about the attributes of certain groups of people, such as people with sensory disabilities. If stereotypes manifest in assistive technologies used by deaf or blind people, they can harm the user in a number of ways, especially considering the vulnerable nature of the target population. AI models underlying assistive technologies have been shown to contain biased stereotypes, including racial, gender, and disability biases. We build on this work to present a psychology-based stereotype assessment of the representation of disability, deafness, and blindness in BERT using the Stereotype Content Model. We show that BERT contains disability bias, and that this bias differs along established stereotype dimensions. | Applying the Stereotype Content Model to assess disability bias in popular pre-trained NLP models underlying AI-based assistive technologies |
d1228554 | In this paper, we address the problem of evaluating spontaneous speech using a combination of machine learning and crowdsourcing. Machine learning techniques inadequately solve the stated problem because automatic speaker-independent speech transcription is inaccurate. The features derived from it are also inaccurate and so is the machine learning model developed for speech evaluation. To address this, we post the task of speech transcription to a large community of online workers (crowd). We also get spoken English grades from the crowd. We achieve 95% transcription accuracy by combining transcriptions from multiple crowd workers. Speech and prosody features are derived by force-aligning the speech samples on these highly accurate transcriptions. Additionally, we derive surface and semantic level features directly from the transcription. To demonstrate the efficacy of our approach, we performed experiments on an expert-graded speech sample of 319 adult non-native speakers. Using these features in a regression model, we are able to achieve a Pearson correlation of 0.76 with expert grades, an accuracy much higher than any previously reported machine learning approach. Our approach has an accuracy that rivals that of expert agreement. This work is timely given the huge requirement for spoken English training and assessment. | Automatic Spontaneous Speech Grading: A Novel Feature Derivation Technique using the Crowd |
d7712278 | In this paper, we present a method for multi-label emotion prediction from conversation transcripts. The transcripts are from a movie dialog corpus and are partly annotated by 3 annotators. The method includes building an emotion lexicon bootstrapped from Wordnet following the notion of Plutchik's basic emotions and dyads. The lexicon is then adapted to the training data by using a simple Neural Network to fine-tune the weights toward each basic emotion. We then use the adapted lexicon to extract features for another Deep Network, which detects emotions in conversation transcripts. The experiments conducted confirm the effectiveness of the method, which turned out to be nearly as good as a human annotator. | Multiple Emotions Detection in Conversation Transcripts |
d15617679 | We describe a reusable and scalable dialogue toolbox and its application in multiple systems. Our main claim is that ends-based representation and processing throughout the complete dialogue backbone is essential to our approach. | Ends-based Dialogue Processing |
d5389622 | A GRAMMAR BASED APPROACH TO A GRAMMAR CHECKING OF FREE WORD ORDER LANGUAGES | |
d16510245 | Some authors (Simard et al.; Melamed; Danielsson & Mühlenbock) have suggested measures of similarity of words in different languages so as to find extra clues for alignment of parallel texts. Cognate words, like 'Parliament' and 'Parlement' in English and French respectively, provide extra anchors that help to improve the quality of the alignment. In this paper, we extend an alignment algorithm proposed by Ribeiro et al. using typical contiguous and non-contiguous sequences of characters extracted using a statistically sound method (Dias et al.). With these typical sequences, we are able to find more reliable correspondence points and improve the alignment quality without resorting to heuristics to identify cognates. | Cognates Alignment |
d235258298 | ||
d9738529 | This paper presents an open and flexible methodological framework for the automatic acquisition of multiword expressions (MWEs) from monolingual textual corpora. This research is motivated by the importance of MWEs for NLP applications. After briefly presenting the modules of the framework, the paper reports extrinsic evaluation results for two applications: computer-aided lexicography and statistical machine translation. Both applications can benefit from automatic MWE acquisition: the expressions acquired automatically from corpora can both speed these applications up and improve their quality. The promising results of previous and ongoing experiments encourage further investigation into the optimal way to integrate MWE treatment into these and many other applications. | A Generic Framework for Multiword Expressions Treatment: from Acquisition to Applications |
d169092704 | ||
d2847571 | This paper describes a project which has explored the feasibility of using a computer to perform a significant portion of the changes required to adapt text from one dialect to several others. This ongoing experiment has examined adaptation between various dialects of Quechua, finding that a computer program may be an important tool for adaptation. An experimental computer program was written and applied to text, and its output was field tested in five target dialects. Preliminary results indicate that preprocessing text with a computer may 1) enable informants who are not bi-dialectical (in the source and target dialects) to produce adequate adaptations without much coaching from the linguist/translator; 2) improve the quality of the resulting text; and 3) reduce time and effort, both in adaptation and in manuscript preparation. 2. To discover what kinds of dialect difference information are needed to support an effective dialect-adapting computer program. 3. To discover classes of dialect changes not accounted for in a particular first-draft computer program, thereby to provide data for a detailed examination of whether each class of changes is suitable for performance by a computer program. In pursuit of these goals, an experimental computer program was written and applied to text, and its output was field tested. In this paper the nature of the language situation is discussed first, followed by a description of the computer program. Then, procedures for checking the computer-adapted text are described, followed by a discussion of the results of this checking. Finally, conclusions are stated. The Nature of the Language Situation: The practical difficulty of dialect adaptation is primarily determined by the language situation. The General Nature of the Language(s): This experiment was carried out in the subgroup of Quechua called "central" Quechua by P. Landerman [1]. These languages/dialects have the following characteristics: 1. More of the structure of the language is in the morphology than in the syntax. 2. Much of the discourse structure involves the manipulation of the so-called "topic" marker and the "evidential" suffixes (the reportative, the assertative, ...). | Prospects for Computer-Assisted Dialect Adaptation |
d3889052 | I have organized my comments around some of the questions posed by the panel chair, Fernando Pereira. The key idea in the unification-based approaches to grammar is that we deal with informational structures (called feature structures) which encode a variety of linguistic information (lexical, syntactic, semantic, discourse, perhaps even cross-linguistic) in a uniform way and then manipulate (combine) these structures by means of a few (one, if possible) well-defined operations (unification being the primary one). The feature structures consist of features and associated values, which can be atomic or complex, i.e., feature structures themselves. In other words, the values can be from a structured set. The unification operation builds new structures and, together with some string combining operation (concatenation being the primary one), pairs the feature structures with strings (Shieber, 1986). | Unification and Some New Grammatical Formalisms |
d18140285 | Natural Language Generation (NLG) techniques can be applied to generating virtual documents dynamically using information from a database (Dale et al., 1999). One application of NLG techniques to generate documents dynamically is the web-based interactive virtual museum, VIGAN. NLG is used to generate the descriptions of the objects in a virtual museum dynamically, based on the profile and interests of the visitor. The focus of the research is on incorporating the user's interests, age group, and visit history in the generation of museum object descriptions. The descriptions vary not only with the user's profile but also in lexicalization. The facts describe not only the objects but also Ilocano personalities. User Acceptance Testing showed that object descriptions do vary based on age groups, category of interest, and lexicalization. Users commented that the descriptions are easy to understand, the user interface is user friendly and the suggested objects are appropriate. | Natural Language Generation of Museum Object Descriptions based on User Model |
d227230615 | ||
d12874888 | In this paper we present the creation of a corpus annotated with both semantic relatedness (SR) scores and textual entailment (TE) judgments. In building this corpus we aimed at discovering the relationship, if any, between these two tasks, for the mutual benefit of resolving one of them by relying on the insights gained from the other. We took a corpus already annotated with TE judgments and proceeded to annotate it manually with SR scores. The RTE 1-4 corpora used in the PASCAL competition fit our need. The annotators worked independently of each other and did not have access to the TE judgment during annotation. The intuition that the two annotations are correlated received major support from this experiment, and this finding led to a system that uses this information to revise the initial estimates of SR scores. As semantic relatedness is one of the most general and difficult tasks in natural language processing, we expect that future systems will combine different sources of information in order to solve it. Our work suggests that textual entailment plays a quantifiable role in addressing it. | Corpora for Learning the Mutual Relationship between Semantic Relatedness and Textual Entailment |
d10128350 | ||
d42190070 | ||
d12363636 | A framework for a structured representation of semantic knowledge (e.g. word-senses) has been defined at the IBM Scientific Center of Roma, as part of a project on Italian Text Understanding. This representation, based on the conceptual graphs formalism [SOW84], expresses deep (pragmatic) knowledge of word-senses. The knowledge base data structure is designed to provide easy access for the semantic verification algorithm. This paper discusses some important problems related to the definition of a semantic knowledge base, such as depth versus generality and hierarchical ordering of concept types, and describes the solutions adopted within the text understanding project. | A STRUCTURED REPRESENTATION OF WORD-SENSES FOR SEMANTIC ANALYSIS |
d8941547 | This paper details the system NILC USP that participated in the SemEval 2014 Aspect Based Sentiment Analysis task. This system uses a Conditional Random Field (CRF) algorithm for extracting the aspects mentioned in the text. Our work added semantic labels to a basic feature set in order to measure their efficiency for aspect extraction. We used the semantic roles and the highest verb frame as features for the machine learning. Overall, our results demonstrate that the system did not improve with the use of this semantic information, although its precision increased. | NILC USP: Aspect Extraction using Semantic Labels |
d67433799 | Book Reviews Learning to Classify Text Using Support Vector Machines: Methods, Theory, and Algorithms | |
d16861874 | MAE and MAI are lightweight annotation and adjudication tools for corpus creation. DTDs are used to define the annotation tags and attributes, including extent tags, link tags, and non-consuming tags. Both programs are written in Java and use a stand-alone SQLite database for storage and retrieval of annotation data. Output is in stand-off XML. | MAE and MAI: Lightweight Annotation and Adjudication Tools |
d226239382 | ||
d1297432 | ||
d2539019 | This paper presents X-Space, a system that follows the ISO-Space annotation scheme in order to capture spatial information as well as our contribution to the SemEval-2015 task 8 (SpaceEval). Our system is the only participant system that reported results for all three evaluation configurations in SpaceEval. | IXAGroupEHUSpaceEval: (X-Space) A WordNet-based approach towards the Automatic Recognition of Spatial Information following the ISO-Space Annotation Scheme |
d2597568 | Scripts represent knowledge of stereotypical event sequences that can aid text understanding. Initial statistical methods have been developed to learn probabilistic scripts from raw text corpora; however, they utilize a very impoverished representation of events, consisting of a verb and one dependent argument. We present a script learning approach that employs events with multiple arguments. Unlike previous work, we model the interactions between multiple entities in a script. Experiments on a large corpus using the task of inferring held-out events (the "narrative cloze evaluation") demonstrate that modeling multi-argument events improves predictive accuracy. | Statistical Script Learning with Multi-Argument Events |
d1935073 | Translation systems that automatically extract transfer mappings (rules or examples) from bilingual corpora have been hampered by the difficulty of achieving accurate alignment and acquiring high quality mappings. We describe an algorithm that uses a best-first strategy and a small alignment grammar to significantly improve the quality of the transfer mappings extracted. For each mapping, frequencies are computed and sufficient context is retained to distinguish competing mappings during translation. Variants of the algorithm are run against a corpus containing 200K sentence pairs and evaluated based on the quality of resulting translations. | A best-first alignment algorithm for automatic extraction of transfer mappings from bilingual corpora |
d12970341 | We present and compare two unsupervised approaches for inducing the main conceptual information in rather stereotypical summaries in two different languages. We evaluate the two approaches in two different information extraction settings: monolingual and cross-lingual information extraction. The extraction systems are trained on auto-annotated summaries (containing the induced concepts) and evaluated on human-annotated documents. Extraction results are promising, being close in performance to those achieved when the system is trained on human-annotated summaries. | Unsupervised Learning Summarization Templates from Concise Summaries |
d36790065 | A Scaleable Multi-document Centroid-based Summarizer (Dragomir Radev) | |
d32061952 | In this paper a tool to manage a dataset for a VerbNet-like verb lexicon is presented. It was designed to allow users to create a verb lexicon for another language than English and at the same time use the same data structure as the English VerbNet. We take a look at the most relevant requirements of the software and will give an overview of the functionality achieved so far. | VerbNet Workbench |
d5714733 | Statistical machine translation is often faced with the problem of combining training data from many diverse sources into a single translation model which then has to translate sentences in a new domain. We propose a novel approach, ensemble decoding, which combines a number of translation systems dynamically at the decoding step. In this paper, we evaluate performance on a domain adaptation setting where we translate sentences from the medical domain. Our experimental results show that ensemble decoding outperforms various strong baselines including mixture models, the current state-of-the-art for domain adaptation in machine translation. | Mixing Multiple Translation Models in Statistical Machine Translation |
d524886 | We present a working system for automated news analysis that ingests an average total of 7600 news articles per day in five languages. For each language, the system detects the major news stories of the day using a group-average unsupervised agglomerative clustering process. It also tracks, for each cluster, related groups of articles published over the previous seven days, using a cosine of weighted terms. The system furthermore tracks related news across languages, in all language pairs involved. The cross-lingual news cluster similarity is based on a linear combination of three types of input: (a) cognates, (b) automatically detected references to geographical place names and (c) the results of a mapping process onto a multilingual classification system. A manual evaluation showed that the system produces good results. Allan et al. (1998) identify new events and then track the topic, as in an information filtering task, by querying new documents against the profile of the topic. | Multilingual and cross-lingual news topic tracking |
d17471572 | We describe our efforts to apply the Penn Discourse Treebank guidelines on a Tamil corpus to create an annotated corpus of discourse relations in Tamil. After conducting a preliminary exploratory study on Tamil discourse connectives, we show our observations and results of a pilot experiment that we conducted by annotating a small portion of our corpus. Our ultimate goal is to develop a Tamil Discourse Relation Bank that will be useful as a resource for further research in Tamil discourse. Furthermore, a study of the behavior of discourse connectives in Tamil will also help in furthering the cross-linguistic understanding of discourse connectives. | Creating an Annotated Tamil Corpus as a Discourse Resource |
d232021544 | ||
d219309281 | ||
d14461875 | Coreference resolution (CR) is a key task in the automated analysis of characters in stories. Standard CR systems, usually trained on newspaper texts, have difficulties with literary texts, even with novels; a comparison with newspaper texts showed that average sentence length is greater in novels, and that the number of pronouns, as well as the percentage of direct speech, is higher. We report promising evaluation results for a rule-based system similar to [Lee et al. 2011], but tailored to the domain, which recognizes coreference chains in novels much better than CR systems like CorZu. Rule-based systems performed best on the CoNLL 2011 challenge [Pradhan et al. 2011]. Recent work in machine learning showed results similar to rule-based systems [Durrett et al. 2013]. The latter have the advantage that their explanation component facilitates a fine-grained error analysis for incremental refinement of the rules. | Rule-based Coreference Resolution in German Historic Novels |
d2884642 | Stemming from distributed representation theories, we investigate the interaction between distributed structure and distributional meaning. We propose a pure distributed tree (DT) and a distributional distributed tree (DDT). DTs and DDTs are exploited for defining distributed tree kernels (DTKs) and distributional distributed tree kernels (DDTKs). We compare DTKs and DDTKs on two tasks: approximating tree kernels (TKs) (Collins and Duffy, 2002), and recognizing textual entailment (RTE). Results show that DTKs correlate with TKs and perform better than DDTKs in RTE. Hence, including distributional vectors in distributed structures is a very difficult task. | Distributed Structures and Distributional Meaning |
d13278463 | Spoken queries are a natural medium for searching the Web in settings where typing on a keyboard is not practical. This paper describes a speech interface to the Google search engine. We present experiments with various statistical language models, concluding that a unigram model with collocations provides the best combination of broad coverage, predictive power, and real-time performance. We also report accuracy results of the prototype system. | Searching the Web by Voice |
d15138302 | This paper presents a new corpus project, aiming at building a national corpus of Polish. What makes it different from a typical YACP (Yet Another Corpus Project) is 1) the fact that all four partners in the project have in the past constructed corpora of Polish, sometimes in the spirit of collaboration, at other times -in the spirit of competition, 2) the partners bring into the project varying areas of expertise and experience, so the synergy effect is anticipated, 3) the corpus will be built with an eye on specific applications in various fields, including lexicography (the corpus will be the empirical basis of a new large general dictionary of Polish) and natural language processing (a number of NLP tools will be constructed within the project). | Towards the National Corpus of Polish |
d218973943 | ||
d11340898 | Given the large number of definite noun phrases that introduce an entity into the text for the first time, this paper presents a set of linguistic features that can be used to detect this type of definites in Spanish. The efficiency of the different features is tested by building a rule-based and a learning-based chain-starting classifier. Results suggest that the classifier, which achieves high precision at the cost of recall, can be incorporated as either a filter or an additional feature within a coreference resolution system to boost its performance. | A Chain-starting Classifier of Definite NPs in Spanish |
d227231221 | ||
d169100660 | ||
d2704506 | We introduce an approach to optimize a machine translation (MT) system on multiple metrics simultaneously. Different metrics (e.g. BLEU, TER) focus on different aspects of translation quality; our multi-objective approach leverages these diverse aspects to improve overall quality. Our approach is based on the theory of Pareto Optimality. It is simple to implement on top of existing single-objective optimization methods (e.g. MERT, PRO) and outperforms ad hoc alternatives based on linear combinations of metrics. We also discuss the issue of metric tunability and show that our Pareto approach is more effective in incorporating new metrics from MT evaluation for MT optimization. | Learning to Translate with Multiple Objectives |
d11644259 | In this paper, we describe a new model for word alignment in statistical translation and present experimental results. The idea of the model is to make the alignment probabilities dependent on the differences in the alignment positions rather than on the absolute positions. To achieve this goal, the approach uses a first-order Hidden Markov model (HMM) for the word alignment problem as they are used successfully in speech recognition for the time alignment problem. The difference to the time alignment HMM is that there is no monotony constraint for the possible word orderings. We describe the details of the model and test the model on several bilingual corpora. | HMM-Based Word Alignment in Statistical Translation |
d141112258 | We study the role of named entities and acknowledgment discourse markers in classifying and predicting user satisfaction from Human-Computer dialogs. Experiments on 1027 Human-Computer dialogs in the travel agency domain show that named entities and discourse markers do not significantly improve dialog classification accuracy. However, they allow a better prediction of user satisfaction from the first user turns. Keywords: user satisfaction prediction, Human-Computer dialog classification. | Detection and Prediction of User Satisfaction in Human-Computer Dialogs |
d475213 | In this paper we describe our entry to the BioNLP 2009 Shared Task on biomolecular event extraction. Our work can be described by three design decisions: (1) instead of building a pipeline using local classifier technology, we design and learn a joint probabilistic model over events in a sentence; (2) instead of developing specific inference and learning algorithms for our joint model, we apply Markov Logic, a general purpose Statistical Relational Learning language, to this task; (3) we represent events as relational structures over the tokens of a sentence, as opposed to structures that explicitly mention abstract event entities. Our results are competitive: we achieve the 4th best scores for task 1 (in close range of the 3rd place) and the best results for task 2, with a 13 percentage point margin. | A Markov Logic Approach to Bio-Molecular Event Extraction |
d10074544 | The computing cost of many NLP tasks increases faster than linearly with the length of the representation of a sentence. For parsing, the representation is tokens, while for operations on syntax and semantics it will be more complex. In this paper we propose a new task of sentence chunking: splitting sentence representations into coherent substructures. Its aim is to make further processing of long sentences more tractable. We investigate this idea experimentally using the Dependency Minimal Recursion Semantics (DMRS) representation. | Graph- and surface-level sentence chunking |
d1524104 | The MODELEXPLAINER | |
d18151307 | This paper presents a novel approach of incorporating fine-grained treebanking decisions made by human annotators as discriminative features for automatic parse disambiguation. To the best of our knowledge, this is the first work that exploits treebanking decisions for this task. The advantage of this approach is that it makes direct use of human judgements. The paper presents comparative analyses of the performance of discriminative models built using treebanking decisions and state-of-the-art features. We also highlight how differently these features scale when the models are tested on out-of-domain data. We show that features extracted using treebanking decisions are more efficient, informative and robust compared to traditional features. | Using Treebanking Discriminants as Parse Disambiguation Features |
d8494338 | In this paper, we investigate a novel approach to correcting grammatical and lexical errors in texts written by second language authors. Contrary to previous approaches which tend to use unilingual models of the user's second language (L2), this new approach uses a simple roundtrip Machine Translation method which leverages information about both the author's first (L1) and second languages. We compare the repair rate of this roundtrip translation approach to that of an existing approach based on a unilingual L2 model with shallow syntactic pruning, on a series of preposition choice errors. We find no statistically significant difference between the two approaches, but find that a hybrid combination of both does perform significantly better than either one in isolation. Finally, we illustrate how the translation approach has the potential of repairing very complex errors which would be hard to treat without leveraging knowledge of the author's L1. | Using First and Second Language Models to Correct Preposition Errors in Second Language Authoring |
d4036104 | ||
d1160947 | We present an adaptation technique for statistical machine translation, which applies the well-known Bayesian learning paradigm to adapting the model parameters. Since state-of-the-art statistical machine translation systems model the translation process as a log-linear combination of simpler models, we present the formal derivation of how to apply this paradigm to the weights of the log-linear combination. We show empirical results in which a small amount of adaptation data is able to improve both the non-adapted system and a system which optimises the above-mentioned weights on the adaptation set only, while gaining both in reliability and speed. | Log-linear weight optimisation via Bayesian Adaptation in Statistical Machine Translation |
d227230599 | ||
d225063166 | ||
d6349911 | Semantic parsers map natural language sentences to formal representations of their underlying meaning. Building accurate semantic parsers without prohibitive engineering costs is a longstanding, open research problem. | Semantic Parsing with Combinatory Categorial Grammars |
d190588587 | ||
d8560450 | A CLASSIFICATION METHOD FOR JAPANESE SIGNS USING MANUAL MOTION DESCRIPTIONS | |
d237010909 | ||
d6648670 | A model is presented to characterize the class of languages obtained by adding reduplication to context-free languages. The model is a pushdown automaton augmented with the ability to check reduplication by using the stack in a new way. The class of languages generated is shown to lie strictly between the context-free languages and the indexed languages. The model appears capable of accommodating the sort of reduplications that have been observed to occur in natural languages, but it excludes many of the unnatural constructions that other formal models have permitted. | A FORMAL MODEL FOR CONTEXT-FREE LANGUAGES AUGMENTED WITH REDUPLICATION |
d180566494 | ||
d199379828 | ||
d6305892 | Paraphrases are alternative syntactic forms in the same language expressing the same semantic content. Speakers of all languages are inherently familiar with paraphrases at different levels of granularity (lexical, phrasal, and sentential). For quite some time, the concept of paraphrasing has been receiving growing attention from the research community, and its potential use in several natural language processing applications (such as text summarization and machine translation) is being investigated. In this paper, we present, to the best of our knowledge, the first Turkish paraphrase corpus. The corpus is gleaned from four different sources and currently contains 1270 paraphrase pairs. All paraphrase pairs are carefully annotated by native Turkish speakers with the identified semantic correspondences between paraphrases. The work on expanding the corpus is still under way. | Turkish Paraphrase Corpus |
d20149764 | The Unicode standard identifies and provides representation of the vast majority of known characters used in today's writing systems. Many of these characters belong to the unified Han series, which encapsulates characters from writing systems used in languages such as Chinese, Japanese and Korean. These pictographic characters are often made up of smaller primitives, either other characters or more simplified pictography. This paper presents research findings on how the Unicode standard currently represents the primitives used in 4134 of the most common Han characters. Each character was somewhat similar in meaning across most data sets. For each entry, the primitives were then defined and described relative to their position. Character positions were broken up into four main directions: top (t), bottom (b), left (l), right (r), to describe where primitives belong visually within a parent character. | Investigation Into Using the Unicode Standard for Primitives of Unified Han Characters |
d21705295 | This paper presents a general use corpus for the Native American indigenous language Choctaw. The corpus contains audio, video, and text resources, with many texts also translated in English. The Oklahoma Choctaw and the Mississippi Choctaw variants of the language are represented in the corpus. The data set provides documentation support for the threatened language, and allows researchers and language teachers access to a diverse collection of resources. | Chahta Anumpa: A Multimodal Corpus of the Choctaw Language |
d6250952 | User-generated content presents many challenges for its automatic processing. While many of them do come from out-of-vocabulary effects, others spawn from different linguistic phenomena such as unusual syntax. In this work we present a French three-domain data set made up of question headlines from a cooking forum, game chat logs and associated forums from two popular online games (MINECRAFT & LEAGUE OF LEGENDS). We chose these domains because they encompass different degrees of lexical and syntactic compliance with canonical language. We conduct an automatic and manual evaluation of the difficulties of processing these domains for part-of-speech prediction, and introduce a pilot study to determine whether dependency analysis lends itself well to annotate these data. We also discuss the development cost of our data set. | From Noisy Questions to Minecraft Texts: Annotation Challenges in Extreme Syntax Scenarios |
d52010736 | This paper describes an unsupervised model for morphological segmentation that exploits the notion of paradigms, which are sets of morphological categories (e.g., suffixes) that can be applied to a homogeneous set of words (e.g., nouns or verbs). Our algorithm identifies statistically reliable paradigms from the morphological segmentation result of a probabilistic model, and chooses reliable suffixes from them. The new suffixes can be fed back iteratively to improve the accuracy of the probabilistic model. Finally, the unreliable paradigms are subjected to pruning to eliminate unreliable morphological relations between words. The paradigm-based algorithm significantly improves segmentation accuracy. Our method achieves state-of-the-art results in experiments using the Morpho-Challenge data, including English, Turkish, and Finnish. | Unsupervised Morphology Learning with Statistical Paradigms |
d189871792 | ||
d10008738 | In this paper, we introduce the Priberam Compressive Summarization Corpus, a new multi-document summarization corpus for European Portuguese. The corpus follows the format of the summarization corpora for English in recent DUC and TAC conferences. It contains 80 manually chosen topics referring to events occurred between 2010 and 2013. Each topic contains 10 news stories from major Portuguese newspapers, radio and TV stations, along with two human generated summaries up to 100 words. Apart from the language, one important difference from the DUC/TAC setup is that the human summaries in our corpus are compressive: the annotators performed only sentence and word deletion operations, as opposed to generating summaries from scratch. We use this corpus to train and evaluate learning-based extractive and compressive summarization systems, providing an empirical comparison between these two approaches. The corpus is made freely available in order to facilitate research on automatic summarization. | Priberam Compressive Summarization Corpus: A New Multi-Document Summarization Corpus for European Portuguese |
d38538269 | In this paper, we describe the approach of the ItaliaNLP Lab team to native language identification and discuss the results we submitted as participants in the essay track of the NLI Shared Task 2017. We introduce for the first time a 2-stacked sentence-document architecture for native language identification that is able to exploit both local sentence information and a wide set of general-purpose features qualifying the lexical and grammatical structure of the whole document. When evaluated on the official test set, our sentence-document stacked architecture obtained the best result among all the participants of the essay track, with an F1 score of 0.8818. | Stacked Sentence-Document Classifier Approach for Improving Native Language Identification |
d59922297 | In the domain of supervised and semi-supervised classification, this article presents a context favorable to the application of statistical classification methods. It shows an alternative strategy for the case where training data is insufficient but large amounts of unlabeled data are available: multi-classifier co-training. The two independent views of standard co-training are replaced by two classifiers based on different classification techniques: icsiboost, based on boosting, and LIBLINEAR, based on logistic regression. Classification and supervised learning: The classification problems presented here relate to supervised machine learning. The goal is to build a model representative of a set of data organized into classes (a set generally called the training corpus), and then to use this model to classify new data, that is, to predict their class from their characteristics (called parameters or features). Building the model is a supervised machine learning task, since every example in the training corpus is annotated, i.e., carries an a priori class label. The training corpus is usually produced by human annotators, who inspect the examples and assign each one a label according to their judgment. A major problem inherent to supervised classification in Natural Language Processing (NLP) is that obtaining a manually annotated training corpus is difficult, costly, and slow; such corpora are therefore often available only in limited quantities, while the quality of classifier models depends directly on their size. | Machine Learning and Co-training |
d201624805 | ||
d204904734 | ||
d16300227 | We describe a mechanism for the interpretation of arguments, which can cope with noisy conditions in terms of wording, beliefs and argument structure. This is achieved through the application of the Minimum Message Length Principle to evaluate candidate interpretations. Our system receives as input a quasi-Natural Language argument, where propositions are presented in English, and generates an interpretation of the argument in the form of a Bayesian network (BN). Performance was evaluated by distorting the system's arguments (generated from a BN) and feeding them to the system for interpretation. In 75% of the cases, the interpretations produced by the system matched precisely or almost-precisely the representation of the original arguments. | Towards a Noise-Tolerant, Representation-Independent Mechanism for Argument Interpretation |
d202577673 | ||
d1001286 | Books Received: Idioms: Processing, Structure, and Interpretation; Associative Engines: Connectionism, Concepts, and Representational Change; Computers in Context: The Philosophy and Practice of Systems Design; Second Language Acquisition: An Introductory Course; An MT Oriented Model of Aspect and Article Semantics | |
d62159671 | Natural language processing (NLP) is critical for improvement of the healthcare process because it has the potential to encode the vast amount of clinical data in textual patient reports. Many clinical applications require coded data to function appropriately, such as decision support and quality assurance applications. | Extracting Information on Pneumonia in Infants Using Natural Language Processing of Radiology Reports |
d6120409 | Modern statistical machine translation systems may be seen as using two components: feature extraction, that summarizes information about the translation, and a log-linear framework to combine features. In this paper, we propose to relax the linearity constraints on the combination, and hence relaxing constraints of monotonicity and independence of feature functions. We expand features into a non-parametric, non-linear, and high-dimensional space. We extend empirical Bayes reward training of model parameters to meta parameters of feature generation. In effect, this allows us to trade away some human expert feature design for data. Preliminary results on a standard task show an encouraging improvement. | Training Non-Parametric Features for Statistical Machine Translation |
d10108216 | Information and Deliberation in Discourse | |
d1593266 | The Papillon project is a collaborative project to establish a multilingual dictionary on the Web. This project started 4 years ago with French and Japanese. The partners are now also working on English, Chinese, Lao, Malay, Thai and Vietnamese. It aims to apply the LINUX cooperative construction paradigm to establish a broad-coverage multilingual dictionary. Users can contribute directly on the server by adding new data or correcting existing errors. Their contributions are stored in the user space until checked by a specialist before being fully integrated into the database. The resulting data is then publicly available and freely distributable. An essential condition for the success of the project is to find a handy solution that allows all participants to contribute online by editing dictionary entries. In this paper, we describe our solution for an online generic editor of dictionary entries based on the description of their structure. | Online Generic Editing of Heterogeneous Dictionary Entries in Papillon Project |
d15618972 | For a resource-poor language like Hindi, it is very difficult to bracket a noun sequence using approaches based only on a corpus or a lexical database. For semantic knowledge, the power of both types of resources needs to be combined. Therefore, the affinity between two nouns is measured using backoff association, which is the combination of lexical and conceptual association. Syntax is also important for this task, but syntactic rules do not work for compound nouns, a special case of noun sequences that may also occur as a sub-sequence. Using this hybrid approach, an accuracy of 86.33% has been obtained. We have explored different variations like smoothing and the frequency of synonyms and similar words for lexical association. For conceptual association, different possible noun classes have been used in experiments. Authors have their own ways of writing: sometimes two nouns are written together as a single word, or a dash is inserted between the two. This indicates that the two nouns tend to be grouped together, and hence this feature has been incorporated into the methods based on conceptual association. | A Hybrid Approach for Bracketing Noun Sequence |
d2777550 | In this paper, we address the issue of syntagmatic expressions from a computational lexical semantic perspective. From a representational viewpoint, we argue for a hybrid approach combining linguistic and conceptual paradigms, in order to account for the continuum we find in natural languages from freely combining words to frozen expressions. In particular, we focus on the place of lexical and semantic restricted co-occurrences. From a processing viewpoint, we show how to generate/analyze syntagmatic expressions by using an efficient constraint-based processor, well fitted for a knowledge-driven approach. | The Computational Lexical Semantics of Syntagmatic Relations |
d17580892 | This paper describes our system which solves simple arithmetic word problems. The system takes a word problem described in natural language, extracts the information required for representation, orders the facts presented, applies procedures and derives the answer. It then displays this answer in natural language. The emphasis of this paper is on the natural language processing (NLP) techniques used to retrieve the relevant information from the English word problem. The system shows improvements over existing systems. | Natural Language Processing for Solving Simple Word Problems |
d1209264 | The orthographical complexity of Chinese, Japanese and Korean (CJK) poses a special challenge to the developers of computational linguistic tools, especially in the area of intelligent information retrieval. These difficulties are exacerbated by the lack of a standardized orthography in these languages, especially the highly irregular Japanese orthography. This paper focuses on the typology of CJK orthographic variation, provides a brief analysis of the linguistic issues, and discusses why lexical databases should play a central role in the disambiguation process. | Lexicon-based Orthographic Disambiguation in CJK Intelligent Information Retrieval |
d13900148 | Languages other than English have received little attention as far as the application of natural language processing techniques to text composition is concerned. The present paper briefly describes work under development aiming at the design of an integrated environment for the construction and verification of documents written in Spanish. In a first phase, a dictionary of Spanish has been implemented, together with a synonym dictionary. The main features of both dictionaries are summarised, along with how they are applied in an environment for document verification and composition. | TOWARDS AN INTEGRATED ENVIRONMENT FOR SPANISH DOCUMENT VERIFICATION AND COMPOSITION |
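
The rows above follow a simple three-column schema (`_id`, `text`, `title`), with some rows carrying an `_id` only. Below is a minimal sketch of how an export of this table might be loaded and inspected with pandas; the file name `corpus.csv` is a hypothetical placeholder, not part of the dataset itself.

```python
# Minimal sketch: load and inspect a three-column corpus like the one
# previewed above. "corpus.csv" is a hypothetical placeholder for the
# actual export of this dataset.
import pandas as pd

# Read every column as a string and normalize missing cells to "",
# matching the empty text/title cells visible in the preview.
df = pd.read_csv("corpus.csv", dtype=str).fillna("")

# Count rows that carry an _id but no abstract or title.
empty_rows = (df["text"] == "") & (df["title"] == "")
print(f"{empty_rows.sum()} of {len(df)} rows have no abstract or title")

# Keep only rows with an abstract, e.g. for downstream indexing.
abstracts = df.loc[df["text"] != "", ["_id", "text", "title"]]
print(abstracts.head())
```

Filtering on the `text` column first is a pragmatic choice: several rows in the preview hold a title-like string in `text` with an empty `title` cell, so `text` is the more reliably populated field.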