| _id | text | title |
|---|---|---|
d2838830 | We describe our experiments with phrase-based machine translation for the WMT 2013 Shared Task. We trained one system for 18 translation directions between English or Czech on one side and English, Czech, German, Spanish, French or Russian on the other side. We describe a set of results with different training data sizes and subsets. For the pairs containing Russian, we describe a set of independent experiments with slightly different translation models. | CUni Multilingual Matrix in the WMT 2013 Shared Task |
d6230113 | In this paper we argue for the need of NLP-specific resources to support truly high level, semantically oriented applications. We describe what, in our experience, constitutes useful knowledge for such applications and why most extant resources are not sufficient for this purpose, leading our Ontological Semantics group to build its own. We suggest that extensive time and energy are being spent on resources for NLP, though not on developing ones of higher utility but, rather, on trying to discover ways of using less than ideal ones. We believe that a more useful long-term approach to the problem of knowledge acquisition for NLP would be to acquire what is needed from the outset, since it is likely that in the end such work will prove necessary anyway. | The Rationale for Building Resources Expressly for NLP |
d207756646 | ||
d199379345 | ||
d13134731 | A key aspect of cognitive diagnostic models is the specification of the Q-matrix associating the items and some underlying student attributes. In many data-driven approaches, test items are mapped to the underlying, latent knowledge components (KC) based on observed student performance, and with little or no input from human experts. As a result, these latent skills typically focus on modeling the data accurately, but may be hard to describe and interpret. In this paper, we focus on the problem of describing these knowledge components. Using a simple probabilistic model, we extract, from the text of the test items, some keywords that are most relevant to each KC. On a small dataset from the PSLC datashop, we show that this is surprisingly effective, retrieving unknown skill labels in close to 50% of cases. We also show that our method clearly outperforms typical baselines in specificity and diversity. | Towards Automatic Description of Knowledge Components |
d218652117 | ||
d14567685 | The huge amount of information available on the Web creates the need for effective information extraction systems that are able to produce metadata that satisfy users' information needs. The development of such systems, in the majority of cases, depends on the availability of an appropriately annotated corpus in order to learn or evaluate extraction models. The production of such corpora can be significantly facilitated by annotation tools, which provide user-friendly facilities and enable annotators to annotate documents according to a predefined annotation schema. However, the construction of annotation tools that operate in a distributed environment is a challenging task: the majority of these tools are implemented as Web applications, having to cope with the capabilities offered by browsers. This paper describes the NOMAD collaborative annotation tool, which implements an alternative architecture: it remains a desktop application, fully exploiting the advantages of desktop applications, but provides collaborative annotation through the use of a centralised server for storing both the documents and their metadata, and instant messaging protocols for communicating events among all annotators. The annotation tool is implemented as a component of the Ellogon language engineering platform, exploiting its extensive annotation engine, its cross-platform abilities and its linguistic processing components, if such a need arises. Finally, the NOMAD annotation tool is distributed with an open source license, as part of the Ellogon platform. | Annotating Arguments: The NOMAD Collaborative Annotation Tool |
d67020859 | La construction d'outils d'analyse linguistique pour les langues faiblement dotées est limitée, entre autres, par le manque de corpus annotés. Dans cet article, nous proposons une méthode pour construire automatiquement des outils d'analyse via une projection interlingue d'annotations linguistiques en utilisant des corpus parallèles. Notre approche n'utilise pas d'autres sources d'information, ce qui la rend applicable à un large éventail de langues peu dotées. Nous proposons d'utiliser les réseaux de neurones récurrents pour projeter les annotations d'une langue à une autre (sans utiliser d'information d'alignement des mots). Dans un premier temps, nous explorons la tâche d'annotation morpho-syntaxique. Notre méthode, combinée avec une méthode de projection d'annotation basique (utilisant l'alignement mot à mot), donne des résultats comparables à ceux de l'état de l'art sur une tâche similaire. Abstract. Use of Recurrent Neural Network for Part-Of-Speech tags projection from a parallel corpus. In this paper, we propose a method to automatically induce linguistic analysis tools for languages that have no labeled training data. This method is based on cross-language projection of linguistic annotations from parallel corpora. Our method does not assume any knowledge about foreign languages, making it applicable to a wide range of resource-poor languages. No word alignment information is needed in our approach. We use Recurrent Neural Networks (RNNs) as a cross-lingual analysis tool. To illustrate the potential of our approach, we first investigate Part-Of-Speech (POS) tagging. Combined with a simple projection method (using word alignment information), it achieves performance comparable to that of recently published approaches for cross-lingual projection. Mots-clés : multilinguisme, transfert crosslingue, étiquetage morpho-syntaxique, réseaux de neurones récurrents. La construction manuelle de ces ressources est lente et coûteuse, rendant ainsi l'utilisation des approches supervisées difficile voire impossible. Dans cet article, nous nous intéressons à l'induction de ressources linguistiques adéquates à moindre coût pour les langues faiblement dotées, ainsi qu'à la construction automatique d'outils d'analyse linguistique pour ces langues. Pour cela, nous proposons d'utiliser des approches fondées sur la projection interlingue d'annotations. Celles-ci s'articulent autour de l'exploitation de corpus parallèles multilingues entre une langue source richement dotée (disposant d'outils d'analyse linguistique) et une langue cible faiblement dotée. En partant d'un corpus parallèle dont les textes en langue source sont déjà annotés, les textes en langue cible sont annotés par projection des annotations à l'aide de techniques d'alignement automatique au niveau des mots. Bien que prometteuses, ces approches non supervisées ont des performances assez éloignées de celles des méthodes supervisées. Par exemple, pour une tâche d'analyse morpho-syntaxique supervisée, (Petrov et al., 2012) obtient une précision moyenne de 95.2% pour 22 langues richement dotées, tandis que les analyseurs morpho-syntaxiques non supervisés construits par (Das & Petrov, 2011 ; Duong et al., 2013) donnent une précision moyenne de 83.4% pour 8 langues européennes. Dans cet article, nous explorons la possibilité d'employer les réseaux de neurones récurrents (RNN) pour induire des outils multilingues d'analyse linguistique. Dans un premier temps, nous abordons la possibilité de les utiliser comme analyseurs morpho-syntaxiques. Pour cela, nous utilisons un corpus parallèle entre une langue bien dotée et une autre langue moins bien dotée, pour assigner aux mots du corpus parallèle (appartenant aux vocabulaires des langues source et cible) une représentation commune, obtenue à partir d'un alignement au niveau des phrases. Cette représentation commune permet d'apprendre, à partir d'une seule langue étiquetée parmi N, un seul analyseur multilingue capable de traiter N langues. Après un bref état de l'art présenté dans la section 2, notre modèle est décrit dans la partie 3 et son évaluation est présentée dans la partie 4 ; la partie 5 conclut notre étude et présente nos travaux futurs. | 22ème Traitement Automatique des Langues Naturelles |
d2439516 | ON TEXT COHERENCE PARSING | |
d26889702 | The aim of this paper is to show how an optimality- | On the Irregular Verbs in Korean |
d235097524 | ||
d172552 | We have participated in the Senseval-2 English tasks (all words and lexical sample) with an unsupervised system based on mutual information measured over a large corpus (277 million words) and some additional heuristics. A supervised extension of the system was also presented to the lexical sample task.Our system scored first among unsupervised systems in both tasks: 56.9% recall in all words, 40.2% in lexical sample. This is slightly worse than the first sense heuristic for all words and 3.6% better for the lexical sample, a strong indication that unsupervised Word Sense Disambiguation remains being a strong challenge. | The UNED systems at Senseval-2 |
d7080762 | Exhaustive extraction of semantic information from text is one of the formidable goals of state-of-the-art NLP systems. In this paper, we take a step closer to this objective. We combine the semantic information provided by different resources and extract new semantic knowledge to improve the performance of a recognizing textual entailment system. CLOTHING PARTS: F → PW(subpart, clothing); F → PW(material, subpart). Example: "Hello, Hank" they said from the depths of the [fur]Material [collars]Subpart,Target of [their]Wearer [coats]Clothing. PW(fur, collar) and PW(collar, coat). CLOTHING: F → PAH(descriptor, garment) ∨ PAH(descriptor, material). Example: She didn't bring heels with her so she decided on [gold]Descriptor [leather]Material [flip-flops]Garment,Target. PAH(gold, leather) ∨ PAH(gold, flip-flop). KINSHIP: F → KIN(ego, alter). Example: The new subsidiary is headed by [Rupert Soames]Alter, [son]Target [of the former British Ambassador to France and EC vice-president]Ego. KIN(Rupert Soames, the former British Ambassador to France and EC vice-president). GETTING: F → POS(recipient, theme); F → ¬POS(source, theme) (only if the source is a person). Example: In some cases, [the BGS libraries]Recipient had [obtained]Target [copies of theses]Theme [from the authors]Source [by purchase or gift]Means, and no loan records were available for such copies. POS(the BGS libraries, copies of theses) and ¬POS(authors, copies of theses). GETTING: F → SRC(theme, source) (if the source is not a person). Example: He also said that [Iran]Recipient [acquired]Target [fighter-bomber aircraft]Theme [from countries other than the USA and the Soviet Union]Source. SRC(fighter-bomber aircraft, countries other than the USA and the Soviet Union) | A Semantic Approach to Recognizing Textual Entailment |
d219300712 | ||
d220286777 | ||
d30485384 | This system's approach to the attribute selection task was to use a genetic programming algorithm to search for a solution to the task. The evolved programs for the furniture and people domain exhibit quite naive behavior, and the DICE and MASI scores on the training sets reflect the poor humanlikeness of the programs. | OSU-GP: Attribute Selection using Genetic Programming |
d15174121 | We present some new results for the reading comprehension task described in [3] that improve on the best published results, from 36% in [3] to 41% (the best of the systems described herein). We discuss a variety of techniques that tend to give small improvements, ranging from the fairly simple (give verbs more weight in answer selection) to the fairly complex (use specific techniques for answering specific kinds of questions). | Reading Comprehension Programs in a Statistical-Language-Processing Class |
d1661484 | This paper reports about our work in the NEWS 2009 Machine Transliteration Shared Task held as part of ACL-IJCNLP 2009. We submitted one standard run and two nonstandard runs for English to Hindi transliteration. The modified joint source-channel model has been used along with a number of alternatives. The system has been trained on the NEWS 2009 Machine Transliteration Shared Task datasets. For standard run, the system demonstrated an accuracy of 0.471 and the mean F-Score of 0.861. The non-standard runs yielded the accuracy and mean F-scores of 0.389 and 0.831 respectively in the first one and 0.384 and 0.828 respectively in the second one. The non-standard runs resulted in substantially worse performance than the standard run. The reasons for this are the ranking algorithm used for the output and the types of tokens present in the test set. | English to Hindi Machine Transliteration System at NEWS 2009 |
d1712533 | A lexical analogy is a pair of word-pairs that share a similar semantic relation. Lexical analogies occur frequently in text and are useful in various natural language processing tasks. In this study, we present a system that generates lexical analogies automatically from text data. Our system discovers semantically related pairs of words by using dependency relations, and applies novel machine learning algorithms to match these word-pairs to form lexical analogies. Empirical evaluation shows that our system generates valid lexical analogies with a precision of 70%, and produces quality output although not at the level of the best human-generated lexical analogies. | Generating Lexical Analogies Using Dependency Relations |
d249097845 | The last few years have witnessed an exponential rise in the propagation of offensive text on social media. Identification of this text with high precision is crucial for the well-being of society. Most of the existing approaches tend to give high toxicity scores to innocuous statements (e.g., "I am a gay man"). These false positives result from over-generalization on the training data where specific terms in the statement may have been used in a pejorative sense (e.g., "gay"). Emphasis on such words alone can lead to discrimination against the classes these systems are designed to protect. In this paper, we address the problem of offensive language detection on Twitter, while also detecting the type and the target of the offense. We propose a novel approach called SyLSTM, which integrates syntactic features in the form of the dependency parse tree of a sentence and semantic features in the form of word embeddings into a deep learning architecture using a Graph Convolutional Network. Results show that the proposed approach significantly outperforms the state-of-the-art BERT model with orders of magnitude fewer parameters. | Leveraging Dependency Grammar for Fine-Grained Offensive Language Detection using Graph Convolutional Networks |
d15664187 | Discovering contradicting protein-protein interactions in text | |
d12880318 | Instance sampling is a method to balance extremely skewed training sets as they occur, for example, in machine learning settings for anaphora resolution. Here, the number of negative samples (i.e. non-anaphoric pairs) is usually substantially larger than the number of positive samples. This causes classifiers to be biased towards negative classification, leading to suboptimal performance. In this paper, we explore how different techniques of instance sampling influence the performance of an anaphora resolution system for German given different classifiers. All sampling methods prove to increase the F-score for all classifiers, but the most successful method is random sampling. In the best setting, the F-score improves from 0.541 to 0.608 for memory-based learning, from 0.561 to 0.611 for decision tree learning and from 0.511 to 0.584 for maximum entropy learning. | Instance Sampling Methods for Pronoun Resolution |
d219861313 | ||
d14182246 | Faced with large and steadily increasing work volumes, the Patent Cooperation Treaty Translation Section at the World Intellectual Property Organization, Geneva, is looking for ways to improve the efficiency of its translation process. A terminology problem has been identified, and attention has turned to automatic bilingual terminology extraction as a possible means of solving that problem. A project has been defined and evaluation tests implemented with the aims of automatically capturing bilingual terminology from existing technical texts and their translations, validating the candidate term pairs generated, defining an appropriate database structure and generating terminological records in an automatic or semi-automatic manner. Benefits of this approach are becoming apparent and, as work progresses, the potential for extending the scope of the project to other related applications offers interesting prospects for the future. | AUTOMATIC BILINGUAL TERMINOLOGY EXTRACTION A Practical Approach |
d252819164 | Recently, with the advent of high-performance generative language models, artificial agents that communicate directly with the users have become more human-like. This development allows users to perform a diverse range of trials with the agents, and the responses are sometimes displayed online by users who share or show off their experiences. In this study, we explore dialogues with a social chatbot uploaded to an online community, with the aim of understanding how users game human-like agents and display their conversations. Having done this, we assert that user postings can be investigated from two aspects, namely conversation topic and purpose of testing, and suggest a categorization scheme for the analysis. We analyze 639 dialogues to develop an annotation protocol for the evaluation, and measure the agreement to demonstrate the validity. We find that the dialogue content does not necessarily reflect the purpose of testing, and also that users come up with creative strategies to game the agent without being penalized. | Evaluating How Users Game and Display Conversation with Human-Like Agents |
d5442132 | In this paper we describe the non-linear mappings we used with the Helsinki language identification method, HeLI, in the 4th edition of the Discriminating between Similar Languages (DSL) shared task, which was organized as part of the VarDial 2017 workshop. Our SUKI team participated in the closed track together with 10 other teams. Our system reached the 7th position in the track. We describe the HeLI method and the non-linear mappings in mathematical notation. The HeLI method uses a probabilistic model with character n-grams and word-based backoff. We also describe our trials using the non-linear mappings instead of relative frequencies and we present statistics about the back-off function of the HeLI method. | Evaluating HeLI with Non-linear Mappings |
d14392847 | Lexical knowledge bases (LKBs), such as WordNet, have been shown to be useful for a range of language processing tasks. Extending these resources is an expensive and time-consuming process. This paper describes an approach to address this problem by automatically generating a mapping from WordNet synsets to Wikipedia articles. A sample of synsets has been manually annotated with article matches for evaluation purposes. The automatic methods are shown to create mappings with precision of 87.8% and recall of 46.9%. These mappings can then be used as a basis for enriching WordNet with new relations based on Wikipedia links. The manual and automatically created data is available online. | Mapping WordNet synsets to Wikipedia articles |
d10429456 | The paper discusses several complex transfer problems and their prospective solutions within an English-to-Swedish spoken language translation system. The emphasis in the text is on transfer problems which are not lexically triggered, concentrating mainly on the translation of differences in mood and tense. Laying the groundwork for the translation part, the treatment of verb-phrase syntax and semantics is described in detail. The paper also briefly discusses some lexically triggered complex transfer problems. | COMPLEX VERB TRANSFER PHENOMENA IN THE SLT SYSTEM |
d28327553 | There are as many sign languages as there are deaf communities in the world. Linguists have been collecting corpora of different sign languages and annotating them extensively in order to study and understand their properties. On the other hand, the field of computer vision has approached the sign language recognition problem as a grand challenge and research efforts have intensified in the last 20 years. However, corpora collected for studying linguistic properties are often not suitable for sign language recognition as the statistical methods used in the field require large amounts of data. Recently, with the availability of inexpensive depth cameras, groups from the computer vision community have started collecting corpora with a large number of repetitions for sign language recognition research. In this paper, we present the BosphorusSign Turkish Sign Language corpus, which consists of 855 sign and phrase samples from the health, finance and everyday life domains. The corpus is collected using the state-of-the-art Microsoft Kinect v2 depth sensor, and will be the first of its kind in this field of sign language research. Furthermore, annotations rendered by linguists will be provided so that the corpus will appeal both to the linguistics and sign language recognition research communities. | BosphorusSign: A Turkish Sign Language Recognition Corpus in Health and Finance Domains |
d14046602 | Previous research has established several methods of online learning for latent Dirichlet allocation (LDA). However, streaming learning for LDA, allowing only one pass over the data and constant storage complexity, is not as well explored. We use reservoir sampling to reduce the storage complexity of a previously studied online algorithm, namely the particle filter, to constant. We then show that a simpler particle filter implementation performs just as well, and that the quality of the initialization dominates other factors of performance. | Particle Filter Rejuvenation and Latent Dirichlet Allocation |
d17865442 | This paper looks at the web as a corpus and at the effects of using web counts to model language, particularly when we consider them as a domain-specific versus a general-purpose resource. We first compare three vocabularies that were ranked according to frequencies drawn from general-purpose, specialised and web corpora. Then, we look at methods to combine heterogeneous corpora and evaluate the individual and combined counts in the automatic extraction of noun compounds from English general-purpose and specialised texts. Better n-gram counts can help improve the performance of empirical NLP systems that rely on n-gram language models. | Web-based and combined language models: a case study on noun compound identification |
d17162029 | We describe the work carried out by the DCU team on the Semantic Textual Similarity task at SemEval-2015. We learn a regression model to predict a semantic similarity score between a sentence pair. Our system exploits distributional semantics in combination with tried-and-tested features from previous tasks in order to compute sentence similarity. Our team submitted 3 runs for each of the five English test sets. For two of the test sets, belief and headlines, our best system ranked second and fourth, respectively, out of the 73 submitted systems. Averaged over all test sets, our best submission ranked 26th out of the 73 systems. | DCU: Using Distributional Semantics and Domain Adaptation for the Semantic Textual Similarity SemEval-2015 Task 2 |
d15303419 | Sentiment analysis has become an important classification task because a large amount of user-generated content is published over the Internet. Sentiment lexicons have been used successfully to classify the sentiment of user review datasets. More recently, microblogging services such as Twitter have become a popular data source in the domain of sentiment analysis. However, analyzing sentiments on tweets is still difficult because tweets are very short and contain slang, informal expressions, emoticons, mistyping and many words not found in a dictionary. In addition, more than 90 percent of the words in public sentiment lexicons, such as SentiWordNet, are objective words, which are often considered less important in a classification module. In this paper, we introduce a hybrid approach that incorporates sentiment lexicons into a machine learning approach to improve sentiment classification in tweets. We automatically construct an Add-on lexicon that compiles the polarity scores of objective words and out-of-vocabulary (OOV) words from tweet corpora. We also introduce a novel feature weighting method by interpolating the sentiment lexicon score into unigram vectors in the Support Vector Machine (SVM). Results of our experiment show that our method is effective and significantly improves the sentiment classification accuracy compared to a baseline unigram model. | Sentiment Lexicon Interpolation and Polarity Estimation of Objective and Out-Of-Vocabulary Words to Improve Sentiment Classification on Microblogging |
d218973952 | ||
d53234777 | We present a pilot study of machine translation of selected grammatical contrasts between Czech and English in WMT18 News Translation Task. For each phenomenon, we run a dedicated test which checks if the candidate translation expresses the phenomenon as expected or not. The proposed type of analysis is not an evaluation in the strict sense because the phenomenon can be correctly translated in various ways and we anticipate only one. What is nevertheless interesting are the differences between various MT systems and the single reference translation in their general tendency in handling the given phenomenon. | Testsuite on Czech-English Grammatical Contrasts |
d233365338 | ||
d16877592 | This paper presents a Japanese-to-English statistical machine translation system specialized for patent translation. Patents are practically useful technical documents, but their translation needs different efforts from general-purpose translation. There are two important problems in the Japanese-to-English patent translation: long distance reordering and lexical translation of many domain-specific terms. We integrated novel lexical translation of domain-specific terms with a syntax-based post-ordering framework that divides the machine translation problem into lexical translation and reordering explicitly for efficient syntax-based translation. The proposed lexical translation consists of a domain-adapted word segmentation and an unknown word transliteration. Experimental results show our system achieves better translation accuracy in BLEU and TER compared to the baseline methods. | Japanese-to-English Patent Translation System based on Domain-adapted Word Segmentation and Post-ordering |
d70292337 | Dans cet article, nous nous intéressons au titrage automatique des segments issus de la segmentation thématique de journaux télévisés. Nous proposons d'associer un segment à un article de presse écrite collecté le jour même de la diffusion du journal. La tâche consiste à apparier un segment à un article de presse à l'aide d'une mesure de similarité. Cette approche soulève plusieurs problèmes, comme la sélection des articles candidats, une bonne représentation du segment et des articles, le choix d'une mesure de similarité robuste aux imprécisions de la segmentation. Des expériences sont menées sur un corpus varié de journaux télévisés français collectés pendant une semaine, conjointement avec des articles aspirés à partir de la page d'accueil de Google Actualités. Nous introduisons une métrique d'évaluation reflétant la qualité de la segmentation, du titrage ainsi que la qualité conjointe de la segmentation et du titrage. L'approche donne de bonnes performances et se révèle robuste à la segmentation thématique. Abstract. Automatic Topic Segmentation and Title Assignment in TV Broadcast News. This paper addresses the task of assigning a title to topic segments automatically extracted from TV Broadcast News video recordings. We propose to associate to a topic segment the title of a newspaper article collected on the web at the same date. The task implies pairing newspaper articles and topic segments by maximising a given similarity measure. This approach raises several issues, such as the selection of candidate newspaper articles, the vectorial representation of both the segment and the articles, the choice of a suitable similarity measure, and the robustness to automatic segmentation errors. Experiments were made on various French TV Broadcast News shows recorded during one week, in conjunction with text articles collected through the Google News homepage at the same period. We introduce a full evaluation framework allowing us to measure the quality of topic segment retrieval, topic title assignment, and joint retrieval and titling. The approach yields good titling performance and proves to be robust to automatic segmentation. | 22ème Traitement Automatique des Langues Naturelles |
d15360016 | Word embeddings have recently seen a strong increase in interest as a result of strong performance gains on a variety of tasks. However, most of this research also underlined the importance of benchmark datasets, and the difficulty of constructing these for a variety of language-specific tasks. Still, many of the datasets used in these tasks could prove to be fruitful linguistic resources, allowing for unique observations into language use and variability. In this paper we demonstrate the performance of multiple types of embeddings, created with both count and prediction-based architectures on a variety of corpora, in two language-specific tasks: relation evaluation, and dialect identification. For the latter, we compare unsupervised methods with a traditional, hand-crafted dictionary. With this research, we provide the embeddings themselves, the relation evaluation task benchmark for use in further research, and demonstrate how the benchmarked embeddings prove a useful unsupervised linguistic resource, effectively used in a downstream task. | Evaluating Unsupervised Dutch Word Embeddings as a Linguistic Resource |
d52011109 | Neural sequence-to-sequence models have been successfully extended for summary generation. However, existing frameworks generate a single summary for a given input and do not tune the summaries towards any additional constraints/preferences. Such a tunable framework is desirable to account for linguistic preferences of the specific audience who will consume the summary. In this paper, we propose a neural framework to generate summaries constrained to vocabulary-defined linguistic preferences of a target audience. The proposed method accounts for the generation context by tuning the summary words at the time of generation. Our evaluations indicate that the proposed approach tunes summaries to the target vocabulary while still maintaining a superior summary quality against a state-of-the-art word embedding based lexical substitution algorithm, suggesting the feasibility of the proposed approach. We demonstrate two applications of the proposed approach: to generate understandable summaries with simpler words, and readable summaries with shorter words. | Vocabulary Tailored Summary Generation |
d12539736 | Existing evaluation metrics for machine translation lack crucial robustness: their correlations with human quality judgments vary considerably across languages and genres. We believe that the main reason is their inability to properly capture meaning: A good translation candidate means the same thing as the reference translation, regardless of formulation. We propose a metric that evaluates MT output based on a rich set of features motivated by textual entailment, such as lexical-semantic (in-)compatibility and argument structure overlap. We compare this metric against a combination metric of four state-of-the-art scores (BLEU, NIST, TER, and METEOR) in two different settings. The combination metric outperforms the individual scores, but is bested by the entailment-based metric. Combining the entailment and traditional features yields further improvements. | Robust Machine Translation Evaluation with Entailment Features |
d7357198 | We present results from a new Interagency Language Roundtable (ILR) based comprehension test. This new test design presents questions at multiple ILR difficulty levels within each document. We incorporated Arabic machine translation (MT) output from three independent research sites, arbitrarily merging these materials into one MT condition. We contrast the MT condition, for both text and audio data types, with high quality human reference Gold Standard (GS) translations. Overall, subjects achieved 95% comprehension for GS and 74% for MT, across 4 genres and 3 difficulty levels. Surprisingly, comprehension rates do not correlate highly with translation error rates, suggesting that we are measuring an additional dimension of MT quality. We observed that it takes 15% more time overall to read MT than GS. | ILR-Based MT Comprehension Test with Multi-Level Questions |
d245782 | We present a general and simple method to adapt an existing NLP tool in order to enable it to deal with historical varieties of languages. This approach consists basically in expanding the dictionary with the old word variants and in retraining the tagger with a small training corpus. We implement this approach for Old Spanish. The results of a thorough evaluation of the extended tool show that this method obtains almost state-of-the-art performance, adequate to carry out quantitative studies in the humanities: 94.5% accuracy for the main part of speech and 92.6% for lemma. To our knowledge, this is the first time that such a strategy is adopted to annotate historical language varieties and we believe that it could be used as well to deal with other non-standard varieties of languages. | Extending the tool, or how to annotate historical language varieties |
d18762115 | This short paper describes the use of the linguistic annotation available in parallel PropBanks (Chinese and English) for the enhancement of automatically derived word alignments. Specifically, we suggest ways to refine and expand word alignments for verb-predicates by using predicate-argument structures. Evaluations demonstrate improved alignment accuracies that vary by corpus type. | Using Parallel Propbanks to enhance Word-alignments |
d39868963 | La présente recherche cherche à réduire la taille de messages textuels sur la base de techniques de compression observées, pour la plupart, dans un corpus de sms. Ce papier explique la méthodologie suivie pour établir des règles de contraction. Il présente ensuite les 33 règles retenues, et illustre les quatre niveaux de compression proposés par deux exemples concrets, produits automatiquement par un premier prototype. Le but de cette recherche n'est donc pas de produire de "l'écrit-sms", mais d'élaborer un procédé de compression capable de produire des textes courts et compréhensibles à partir de n'importe quelle source textuelle en français. Le terme "d'essentialisation" est proposé pour désigner cette approche de réduction textuelle. Abstract. Textual Compression Based on Rules Arising from a Corpus of Text Messages. The present research seeks to reduce the size of text messages on the basis of compression techniques observed mostly in a corpus of sms. This paper explains the methodology followed to establish compression rules. It then presents the 33 considered rules, and illustrates the four suggested levels of compression with two practical examples, automatically generated by a first prototype. This research's main purpose is not to produce "sms-language", but consists in designing a textual compression process able to generate short and understandable texts from any textual source in French. The term of "essentialization" is proposed to describe this approach of textual reduction. Mots-clefs : résumé automatique, compression de texte, sms, lisibilité, essentialisation. | Compression textuelle sur la base de règles issues d'un corpus de sms |
d237055461 | ||
d17741516 | We present an automatic method for senselabeling of text in an unsupervised manner. The method makes use of distributionally similar words to derive an automatically labeled training set, which is then used to train a standard supervised classifier for distinguishing word senses. Experimental results on the Senseval-2 and Senseval-3 datasets show that our approach yields significant improvements over state-of-the-art unsupervised methods, and is competitive with supervised ones, while eliminating the annotation cost. | Good Neighbors Make Good Senses: Exploiting Distributional Similarity for Unsupervised WSD |
d12562601 | The primary objective of this project is to develop a robust, high-performance parser for English by automatically extracting a grammar from an annotated corpus of bracketed sentences, called the Treebank. The project is a collaboration between IBM and the University of Pennsylvania (Co-Principal Investigators: Mark Liberman and Mitchell Marcus). Our initial focus is the domain of computer manuals with a vocabulary of 3000 words. We use a Treebank that was developed jointly by IBM and the University of Lancaster, England. In this past year, we have demonstrated that our automatically built parser produces parses without crossing brackets for 78% of a blind test set. This improves on the 69% that our manually built grammar-based parser [1] produces. The grammar had been crafted by a grammarian by examining the same training set as the automatically built parser over a period of more than 3 years. Parsing Model: Traditionally, parsing relies on a grammar to determine a set of parse trees for a sentence and typically uses a scoring mechanism based on either rule preference or a probabilistic model to determine a preferred parse. In this conventional approach, a linguist must specify the basic constituents, the rules for combining basic constituents into larger ones, and the detailed conditions under which these rules may be used. Instead of using a grammar, we rely on a probabilistic model, p(T|W), for the probability that a parse tree, T, is a parse for sentence W. We use data from the Treebank, with appropriate statistical modeling techniques, to capture implicitly the plethora of linguistic details necessary to correctly parse most sentences. In our model of parsing, we associate with any parse tree a set of bottom-up derivations, each derivation describing a particular order in which the parse tree is constructed. Our parsing model assigns a probability to a derivation, denoted by p(d|W). The probability of a parse tree is the sum of the probabilities of all derivations leading to the parse tree. The probability of a derivation is a product of probabilities, one for each step of the derivation. These steps are of three types: a tagging step, where we want the probability of tagging a word with a tag in the context of the derivation up to that point; a labeling step, where we want the probability of assigning a nonterminal label to a node in the derivation; and an extension step, where we want to determine the probability that a labeled node is extended, for example, to the left or right (i.e. to combine with the preceding or following constituents). The probability of a step is determined by a decision tree appropriate to the type of the step. The three decision trees examine the derivation up to that point to determine the probability of any particular step. The parsing models were trained on 28,000 sentences from the Computer Manuals domain, and tested on 1100 unseen sentences of length 1-25 words. On this test set, the parser produced the correct parse, i.e. a parse which matched the treebank parse exactly, for 38% of the sentences. Ignoring part-of-speech tagging errors, it produced the correct parse tree for 47% of the sentences. Plans for the Coming Year: We plan to continue working with our new parser by completing the following tasks: implement a set of detailed questions to capture information about conjunction, prepositional attachment, etc.; and improve the speed of the search strategy of the parser. | Towards History-based Grammars: Using Richer Models for Probabilistic Parsing |
d2173771 | This paper identifies computational challenges in restructuring encyclopedic resources (like Wikipedia or thesauri) to reorder concepts with the goal of helping learners navigate through a concept network without getting trapped in circular dependencies between concepts. We present approaches that can help content authors identify regions in the concept network, that after editing, would have maximal impact in terms of enhancing the utility of the resource to learners. | Towards Creating Pedagogic Views from Encyclopedic Resources |
d1066131 | This paper investigates the use of a language independent model for named entity recognition based on iterative learning in a co-training fashion, using word-internal and contextual information as independent evidence sources. Its bootstrapping process begins with only seed entities and seed contexts extracted from the provided annotated corpus. F-measure exceeds 77 in Spanish and 72 in Dutch. | Language Independent NER using a Unified Model of Internal and Contextual Evidence |
d15539503 | The FASiL dialogue manager is described in the context of the commercial deployment of conversational interfaces. A practical dialogue model is presented that uses a list-like structure to manage mixed-initiative dialogue using highly modularised and independently specified dialogue components. | Conversational Dialogue Management in the FASiL project |
d8718951 | SESSION 3: MACHINE TRANSLATION | |
d15264642 | Accuracy of content has not been fully utilized in previous studies on automated speaking assessment. Compared to writing tests, responses in speaking tests are noisy (due to recognition errors), full of incomplete sentences, and short. To handle these challenges in content scoring for speaking tests, we propose two new methods based on information extraction (IE) and machine learning. Compared to an ordinary content-scoring method based on vector analysis, which is widely used for scoring written essays, our proposed methods provided content features with higher correlations to human holistic scores. | Scoring Spoken Responses Based on Content Accuracy |
d219309920 | ||
d5097932 | Automatic sign language recognition (ASLR) is a special case of automatic speech recognition (ASR) and computer vision (CV) and is currently evolving from using artificial lab-generated data to using 'real-life' data. Although ASLR still struggles with feature extraction, it can benefit from techniques developed for ASR. We present a large-vocabulary ASLR system that is able to recognize sentences in continuous sign language and uses features extracted from standard single-view video cameras without using additional equipment. ASR techniques such as the multi-layer perceptron (MLP) tandem approach, speaker adaptation, pronunciation modelling, and parallel hidden Markov models are investigated. We evaluate the influence of each system component on the recognition performance. On two publicly available large-vocabulary databases representing lab data (25 signers, 455-sign vocabulary, 19k sentences) and unconstrained 'real-life' sign language (1 signer, 266-sign vocabulary, 351 sentences) we achieve 22.1% and 38.6% WER, respectively. | Improving Continuous Sign Language Recognition: Speech Recognition Techniques and System Design |
d17808544 | Introduction: The Prague Dependency Treebank (PDT, as described, e.g., in (Hajič, 1998) or more recently in (Hajič, Pajas and Vidová Hladká, 2001)) is a project of linguistic annotation of an approx. 1.5 million word corpus of naturally occurring written Czech on three levels ("layers") of complexity and depth: morphological, analytical, and tectogrammatical. The aim of the project is to have a reference corpus annotated by using the accumulated findings of the Prague School as much as possible, while simultaneously showing (by experiments, mainly of statistical nature) that such a framework is not only theoretically interesting but possibly also of practical use. In this contribution we want to show that the deepest (tectogrammatical) layer of representation of sentence structure we use, which represents "linguistic meaning" as described in (Sgall, Hajičová and Panevová, 1986) and which also records certain aspects of discourse structure, has certain properties that can be effectively used in machine translation for languages of quite different nature at the transfer stage. We believe that such representation not only minimizes the "distance" between languages at this layer, but also delegates individual language phenomena where they belong, whether to the analysis, transfer or generation processes, regardless of the methods used for performing these steps. The Prague Dependency Treebank: The Prague Dependency Treebank is a manually annotated corpus of Czech. The corpus size is approx. 1.5 million words (tokens). Three main groups ("layers") of annotation are used: the morphological layer, where lemmas and tags are annotated based on their context; the analytical layer, which roughly corresponds to the surface syntax of the sentence; and the tectogrammatical layer, or linguistic meaning of the sentence in its context. In general, a unique annotation for every sentence (and thus within the sentence as well, i.e. for every token) is used on all three layers. Human judgment is required to interpret the text in question; in case of difficult decisions, certain "tie-breaking" rules are in effect (of a rather technical nature); no attempt has been made to define what type of disambiguation is "proper" or "improper" at what level. Technically, the PDT is distributed in text form, with an SGML markup throughout. Tools are provided for viewing, searching and editing the corpus, together with some basic Czech analysis tools (tokenization, morphology, tagging) suitable for various experiments. The data in the PDT are organized in such a way that statistical experiments can be easily compared between various systems: the data have been pre-divided into a training set and two sets of test data. In the present section, we describe briefly the Prague Dependency Treebank structure and its history. Brief History of the PDT: The Prague Dependency Treebank project started in 1996, formally as two projects, one for the specification of the annotation scheme, and another one for its immediate "validation" (i.e., the actual treebanking) in the | Tectogrammatical Representation: Towards a Minimal Transfer In Machine Translation |
d219305531 | ||
d39183834 | Evaluation Metric-related Optimization Methods for Mandarin Mispronunciation Detection | |
d1596838 | This paper presents the EPAC corpus, which is composed of a set of 100 hours of conversational speech manually transcribed and of the outputs of automatic tools (automatic segmentation, transcription, POS tagging, etc.) applied to the entire French ESTER 1 audio corpus: this concerns about 1700 hours of audio recordings from radiophonic shows. This corpus was built during the EPAC project funded by the French Research Agency (ANR) from 2007 to 2010. It significantly increases the amount of manually transcribed French audio recordings that are easily available, and it is now included as a part of the ESTER 1 corpus in the ELRA catalog without additional cost. By providing a large set of automatic outputs of speech processing tools, the EPAC corpus should be useful to researchers who want to work on such data without having to develop and deal with such tools. These automatic annotations are various: segmentation and speaker diarization, one-best hypotheses from the LIUM automatic speech recognition system with confidence measures, but also word lattices and confusion networks, named entities, part-of-speech tags, chunks, etc. The 100 hours of manually transcribed speech were split into three data sets in order to get an official training corpus, an official development corpus and an official test corpus. These data sets were used to develop and evaluate the automatic tools that were then used to process the 1700 hours of audio recordings. For example, on the EPAC test data set our ASR system yields a word error rate equal to 17.25%. | The EPAC corpus: manual and automatic annotations of conversational speech in French broadcast news |
d4899723 | This paper targets an understanding of how metadiscourse functions in spoken language. Starting from a metadiscourse taxonomy, a set of TED talks is annotated via crowdsourcing and then a lexical grade level predictor is used to map the distribution of the distinct discourse functions of the taxonomy across levels. The paper concludes showing how speakers use these functions in presentational settings. | Lexical Level Distribution of Metadiscourse in Spoken Language |
d3044536 | Discussion forums serve as a platform for student discussions in massive open online courses (MOOCs). Analyzing content in these forums can uncover useful information for improving student retention and help in initiating instructor intervention. In this work, we explore the use of topic models, particularly seeded topic models toward this goal. We demonstrate that features derived from topic analysis help in predicting student survival. | Understanding MOOC Discussion Forums using Seeded LDA |
d6470278 | Open-ended spoken interactions are typically characterised by both structural complexity and high levels of uncertainty, making dialogue management in such settings a particularly challenging problem. Traditional approaches have focused on providing theoretical accounts for either the uncertainty or the complexity of spoken dialogue, but rarely considered the two issues simultaneously. This paper describes ongoing work on a new approach to dialogue management which attempts to fill this gap. We represent the interaction as a Partially Observable Markov Decision Process (POMDP) over a rich state space incorporating dialogue, user, and environment models. The tractability of the resulting POMDP can be preserved using a mechanism for dynamically constraining the action space based on prior knowledge over locally relevant dialogue structures. These constraints are encoded in a small set of general rules expressed as a Markov Logic network. The first-order expressivity of Markov Logic enables us to leverage the rich relational structure of the problem and efficiently abstract over large regions of the state and action spaces. | Towards Relational POMDPs for Adaptive Dialogue Management |
d220811 | This paper describes the REGIS extended command language, a relational data language that allows users to name and describe database objects using natural language phrases. REGIS accepts multiple-word phrases as the names of tables and columns (unlike most systems, which restrict these names to a few characters). An extended command parser uses a network-structured dictionary to recognize multi-word names, even if some of the words are missing or out of order, and to prompt the user if an ambiguous name is entered. REGIS also provides facilities for attaching descriptive text to database objects, which can be displayed online or included in printed reports. Initial data from a few databases indicate that users choose to take advantage of the naturalness of multi-word descriptions when this option is available. | USING NATURAL LANGUAGE DESCRIPTIONS TO IMPROVE THE USABILITY OF DATABASES |
d15438691 | This paper presents an iterative algorithm for bilingual lexicon extraction from comparable corpora. It is based on a bag-of-words model generated at the level of sentences. We present our results of experimentation on corpora of multiple degrees of comparability derived from the FIRE 2010 dataset. Evaluation results on 100 nouns show that this method outperforms the standard context-vector based approaches. | Co-occurrence Graph Based Iterative Bilingual Lexicon Extraction From Comparable Corpora |
d52189218 | Industry datasets used for text classification are rarely created for that purpose. In most cases, the data and target predictions are a byproduct of accumulated historical data, typically fraught with noise in both the text-based documents and the targeted labels. In this work, we address the question of how well performance metrics computed on noisy, historical data reflect the performance on the intended future machine learning model input. The results demonstrate the utility of dirty training datasets used to build prediction models for cleaner (and different) prediction inputs. | Training and Prediction Data Discrepancies: Challenges of Text Classification with Noisy, Historical Data |
d10372851 | We report experiments with multi-modal neural machine translation models that incorporate global visual features in different parts of the encoder and decoder, and use the VGG19 network to extract features for all images. In our experiments, we explore both different strategies to include global image features and how ensembling different models at inference time impacts translations. Our submissions ranked 3rd best for translating from English into French, always improving considerably over a neural machine translation baseline across all language pairs evaluated, e.g. an increase of 7.0-9.2 METEOR points. | DCU System Report on the WMT 2017 Multi-modal Machine Translation Task |
d252819352 | Rather than continuing the conversation based on personalized or implicit information, existing conversation systems generate dialogue by focusing only on the superficial content. To solve this problem, FoCus was recently released. FoCus is a persona-knowledge grounded dialogue generation dataset that leverages Wikipedia knowledge and personal personas, focusing on the landmarks provided by Google, enabling user-centered conversation. However, a closer empirical study is needed since research in the field is still in its early stages. Therefore, we raise two research questions about FoCus: (i) "Is FoCus for conversation or question answering?", to identify the structural problems of the dataset; and (ii) "Does the FoCus model do real knowledge blending?", to demonstrate whether the model acquires actual knowledge. Our experiments show that the FoCus model could not correctly blend knowledge according to the input dialogue and that the dataset design is unsuitable for multi-turn conversation. | Focus on FoCus: Is FoCus focused on Context, Knowledge and Persona? |
d14098062 | ||
d165056331 | ||
d236459972 | ||
d247363304 | We describe here our Machine Translation (MT) model and the results we obtained for the IWSLT 2017 Multilingual Shared Task. Motivated by Zero Shot NMT [1], we trained a multilingual Neural Machine Translation system by combining all the training data into one single collection and appending the token "<2xx>" (where xx is the language code of the target language) to the source sentences in order to indicate the target language they should be translated into. We observed that even in a low-resource situation we were able to obtain translations whose quality surpasses that of Phrase-Based Statistical Machine Translation by several BLEU points. The most surprising result was in the zero-shot setting for Dutch-German and Italian-Romanian, where we observed that despite using no parallel corpora between these language pairs, the NMT model was able to translate between them, and the translations were either as good as or better (in terms of BLEU) than in the non-zero-resource setting. We also verify that NMT models that use feed-forward layers and self-attention instead of recurrent layers are extremely fast to train, which is useful in an NMT experimental setting. | Kyoto University MT System Description for IWSLT 2017 |
d237558703 | Conversational systems aim to generate responses that are accurate, relevant and engaging, either through utilising neural end-to-end models or through slot filling. Human-to-human conversations are enhanced not only by the latest utterance of the interlocutor, but also by recalling and referring to relevant information about concepts/objects covered in the conversation so far. Such information may contain recently referred concepts, commonsense knowledge and more. A concrete scenario of such dialogues is the cooking scenario, i.e. when an artificial agent (personal assistant, robot, chatbot) and a human converse about a recipe. We will demo a novel system for commonsense enhanced response generation in the scenario of cooking, where the conversational system is able to not only provide directions for cooking step-by-step, but also display commonsense capabilities such as offering explanations on object use and recommending replacements of ingredients. | Chefbot: A Novel Framework for the Generation of Commonsense-enhanced Responses for Task-based Dialogue Systems |
d14233496 | We present work in progress aiming to build tools for the normalization of User-Generated Content (UGC). As we will see, the task requires the revisiting of the initial steps of NLP processing, since UGC (micro-blog, blog, and, generally, Web 2.0 user texts) presents a number of non-standard communicative and linguistic characteristics, and is in fact much closer to oral and colloquial language than to edited text. We present and characterize a corpus of UGC text in Spanish from three different sources: Twitter, consumer reviews and blogs. We motivate the need for UGC text normalization by analyzing the problems found when processing this type of text through a conventional language processing pipeline, particularly in the tasks of lemmatization and morphosyntactic tagging, and finally we propose a strategy for automatically normalizing UGC using a selector of correct forms on top of a pre-existing spell-checker. | Holaaa!! writin like u talk is kewl but kinda hard 4 NLP |
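The "selector of correct forms on top of a pre-existing spell-checker" can be illustrated with a toy sketch. The lexicon, the edit-distance candidate generator and the frequency-based selector below are simplifications assumed for illustration, not the authors' system:

```python
# Toy illustration of normalizing user-generated content: a spell-checker-like
# candidate generator proposes known words within edit distance 2, and a
# selector picks the most frequent form.

from itertools import product

LEXICON = {"hola": 120, "escribir": 80, "genial": 60, "hablar": 90}

def edits1(word):
    """All strings one edit away (deletes, substitutions, inserts)."""
    letters = "abcdefghijklmnopqrstuvwxyz"
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = {l + r[1:] for l, r in splits if r}
    subs = {l + c + r[1:] for (l, r), c in product(splits, letters) if r}
    inserts = {l + c + r for (l, r), c in product(splits, letters)}
    return deletes | subs | inserts

def normalize(token):
    """Return the most frequent in-lexicon form within edit distance 2."""
    if token in LEXICON:
        return token
    candidates = edits1(token) & LEXICON.keys()
    if not candidates:
        candidates = {e2 for e1 in edits1(token) for e2 in edits1(e1)} & LEXICON.keys()
    return max(candidates, key=LEXICON.get) if candidates else token

print(normalize("holaaa"))  # -> "hola"
```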
d9897197 | Abstract: Many public emotion corpora have been used in research on speech emotion recognition, but none has been recorded in the languages commonly spoken in Taiwan. Speech emotion recognition is affected by the textual and linguistic content of the speech, so public foreign-language corpora are not necessarily suitable for emotion recognition research on Taiwanese languages. To address this, we recorded our own Taiwanese-language emotion corpus covering the three most widely used languages: Mandarin, Taiwanese and Hakka. The corpus follows the design of the public German corpus EMO-DB: for each language we used ten speakers, ten sentence texts and seven emotions, collecting 700 utterances per language. After recording, a human listening test was conducted, and a minimum human recognition rate served as the filtering criterion to ensure distinguishability. Finally, a large acoustic feature set combined with a support vector machine as the back-end classifier was used to establish baseline recognition rates for the three languages. Keywords: speech emotion recognition, emotional speech corpus construction. 1. Introduction: Verbal communication plays a very important role in human society; people exchange and convey information through language, and many human-computer interaction (HCI) technologies have been developed on this basis. Apple's "Siri" is one representative product in the speech domain, freeing users from keyboard-and-mouse input and enabling touchless voice control. Human communication involves not only words but also emotion; emotional expression reflects a person's state, which has motivated research on affective computing. Affective computing spans many fields, such as computer science, psychology, cognitive philosophy and engineering; in the speech domain the related topics are speech emotion recognition and emotional speech synthesis. Basic emotion recognition analyzes facial expressions and voice, while more advanced systems also analyze the semantics of the dialogue. [1] is the small book that began the era of affective computing: it defined the field, described its applications and importance from the perspective of an information engineering researcher, and explained basic concepts of signal processing and machine-learning classification. [2] gives an overview of speech emotion recognition: it first surveys the emotion corpora commonly used in research, discusses basic features such as pitch, energy, fundamental frequency (F0) and Mel-frequency cepstral coefficients (MFCC), and experiments with back-end classifiers including Gaussian Mixture Models (GMM), Support Vector Machines (SVM), Hidden Markov Models (HMM) and Artificial Neural Networks (ANN). | Construction and Recognition of a Taiwanese Emotional Speech Corpus (The Association for Computational Linguistics and Chinese Language Processing) |
d66021 | This paper describes the mutually beneficial relationship between a cultural heritage digital library and a historical treebank: an established digital library can provide the resources and structure necessary for efficiently building a treebank, while a treebank, as a language resource, is a valuable tool for audiences traditionally served by such libraries. | The Latin Dependency Treebank in a Cultural Heritage Digital Library |
d16907473 | This paper presents a sequential evaluation of the question answering system SQuaLIA. The system is based on the same sequential process as most statistical question answering systems, involving four main steps from question analysis to answer extraction. The evaluation is based on a corpus of 20 questions taken from an evaluation campaign set, all of which SQuaLIA originally answered correctly. Each of the 20 questions was typed by 17 participants: native speakers, non-native speakers and dyslexics, who received the target of each question as spoken instructions. Each of the four analysis steps of the system involves a loss of accuracy, down to an average of 60% correct answers at the end of the process. The main cause of this loss seems to be the spelling mistakes users make on nouns. | Evaluating Robustness Of A QA System Through A Corpus Of Real-Life Questions |
d8159149 | ||
d9928315 | We present a case study on applying common methods for the prediction of lexical properties to a low-resource language, namely Wambaya. Leveraging a small corpus leads to a typical high-precision, low-recall system; using the Web as a corpus has no utility for this language, but a machine learning approach seems to utilise the available resources most effectively. This motivates a semi-supervised approach to lexicon extension. | Deep Lexical Acquisition of Type Properties in Low-resource Languages: A Case Study in Wambaya |
d164447678 | ||
d16244024 | The Internet is an ever growing source of information stored in documents of different languages. Hence, cross-lingual resources are needed for more and more NLP applications. This paper presents (i) a graph-based method for creating one such resource and (ii) a resource created using the method, a cross-lingual relatedness thesaurus. Given a word in one language, the thesaurus suggests words in a second language that are semantically related. The method requires two monolingual corpora and a basic dictionary. Our general approach is to build two monolingual word graphs, with nodes representing words and edges representing linguistic relations between words. A bilingual dictionary containing basic vocabulary provides seed translations relating nodes from both graphs. We then use an inter-graph node-similarity algorithm to discover related words. Evaluation with three human judges revealed that 49% of the English and 57% of the German words discovered by our method are semantically related to the target words. We publish two resources in conjunction with this paper. First, noun coordinations extracted from the German and English Wikipedias. Second, the cross-lingual relatedness thesaurus which can be used in experiments involving interactive cross-lingual query expansion. | Building a Cross-lingual Relatedness Thesaurus using a Graph Similarity Measure |
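A toy sketch of the general approach described above, assuming tiny hand-made graphs and a seed dictionary; the paper's actual inter-graph node-similarity algorithm is more elaborate than the neighbour-overlap score used here:

```python
# Toy sketch of scoring cross-lingual relatedness: two monolingual word graphs
# are bridged by seed translations, and a candidate's score is the overlap
# between its neighbours (mapped via the seed dictionary) and the neighbours
# of the query word.

import networkx as nx

# Monolingual graphs: edges represent linguistic relations such as noun coordination.
en = nx.Graph([("car", "truck"), ("car", "bus"), ("car", "road")])
de = nx.Graph([("Auto", "Lastwagen"), ("Auto", "Bus"), ("Auto", "Straße")])

# Basic-vocabulary seed dictionary (German -> English).
seeds = {"Lastwagen": "truck", "Bus": "bus", "Straße": "road"}

def relatedness(query_en, candidate_de):
    """Fraction of the candidate's mapped neighbours shared with the query."""
    mapped = {seeds[n] for n in de.neighbors(candidate_de) if n in seeds}
    if not mapped:
        return 0.0
    return len(mapped & set(en.neighbors(query_en))) / len(mapped)

print(relatedness("car", "Auto"))  # 1.0: all mapped neighbours are shared
```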
d182953256 | Probabilistic topic modeling is a common first step in crosslingual tasks to enable knowledge transfer and extract multilingual features. Although many multilingual topic models have been developed, their assumptions about the training corpus are quite varied, and it is not clear how well the different models can be utilized under various training conditions. In this article, the knowledge transfer mechanisms behind different multilingual topic models are systematically studied, and through a broad set of experiments with four models on ten languages, we provide empirical insights that can inform the selection and future development of multilingual topic models. | An Empirical Study on Crosslingual Transfer in Probabilistic Topic Models |
d62134147 | Unification and Transduction in Computational Phonology | |
d202583779 | ||
d17577627 | In this paper we study several approaches to adapting a polarity lexicon to a specific domain: on the one hand, domain adaptation using Term Frequency (TF), and on the other hand, domain adaptation using pattern matching with a bootstrapping algorithm (BS). Both methods are corpus-based and start with the same polarity lexicon, but the first requires an annotated collection of documents while the second only needs a corpus in which it looks for linguistic patterns. Both methods outperform the baseline system that uses the general polarity lexicon iSOL. However, although the TF approach achieves very promising results, the BS strategy does not give as much improvement as we expected. For this reason, we have combined both methods in order to take advantage of the strengths of each. With this combined approach the results are even better than those of the systems applied individually: we achieved a significant improvement of 11.50% (in terms of accuracy) in the polarity classification of movie reviews with respect to the results achieved with the general-purpose lexicon iSOL. | Domain Adaptation of Polarity Lexicon combining Term Frequency and Bootstrapping |
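A minimal sketch of what TF-based lexicon adaptation could look like; the scoring formula and the toy document collections below are assumptions for illustration, not the paper's exact method:

```python
# Illustrative sketch of term-frequency based polarity-lexicon adaptation:
# a word's domain polarity is estimated from its relative frequency in
# positively vs. negatively labelled reviews.

from collections import Counter

positive_docs = [["great", "plot", "stunning"], ["stunning", "cast"]]
negative_docs = [["boring", "plot"], ["boring", "weak", "cast"]]

pos_tf = Counter(w for doc in positive_docs for w in doc)
neg_tf = Counter(w for doc in negative_docs for w in doc)

def adapted_polarity(word, smoothing=1.0):
    """Positive score > 0, negative < 0; magnitude reflects domain evidence."""
    p = pos_tf[word] + smoothing
    n = neg_tf[word] + smoothing
    return (p - n) / (p + n)

for w in ["stunning", "boring", "plot"]:
    print(w, round(adapted_polarity(w), 2))
# stunning 0.5, boring -0.5, plot 0.0
```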
d18823236 | This paper presents the results of applying transformation-based learning (TBL) to the problem of semantic role labeling. The great advantage of the TBL paradigm is that it provides a simple learning framework in which the parallel tasks of argument identification and argument labeling can mutually influence one another. Semantic role labeling nevertheless differs from other tasks in which TBL has been successfully applied, such as part-of-speech tagging and named-entity recognition, because of the large span of some arguments, the dependence of argument labels on global information, and the fact that core argument labels are largely arbitrary. Consequently, some care is needed in posing the task in a TBL framework. | A transformation-based approach to argument labeling |
d14299671 | Translation memories (TMs) used in computer-aided translation (CAT) systems are the highest-quality source of parallel texts, since they consist of segment translation pairs approved by professional human translators. The obvious problem is their size and their coverage of new document segments when compared with other parallel data. In this paper, we describe several methods for expanding translation memories using linguistically motivated segment combination approaches that concentrate on preserving high translational quality. The methods were evaluated on a medium-size real-world translation memory and documents provided by a Czech translation company, as well as on the large publicly available DGT translation memory published by the European Commission. The benefit of the TM expansion methods was evaluated through the pre-translation analysis of the widely used MemoQ CAT system, and the METEOR metric was used for measuring the quality of fully expanded new translation segments. | Increasing Coverage of Translation Memories with Linguistically Motivated Segment Combination Methods |
d15416674 | This paper describes the semantic annotations we are performing on the CallHome Japanese corpus of spontaneous, unscripted telephone conversations (LDC, 1996). Our annotations include (i) semantic classes for all nouns and verbs; (ii) verb senses for all main verbs; and (iii) relations between main verbs and their complements in the same utterance. Our semantic tagset is taken from NTT's Goi-Taikei semantic lexicon and ontology (Ikehara et al., 1997). A pilot study demonstrates that the verb sense tagging can be efficiently performed by native Japanese speakers using computer-generated HTML forms, and that good inter-annotator reliability can be obtained in the right conditions. | Semantic annotation of a Japanese speech corpus |
d681601 | Bag-of-words (BOW) is now the most popular way to model text in machine learning based sentiment classification. However, the performance of such an approach sometimes remains rather limited due to some fundamental deficiencies of the BOW model. In this paper, we focus on the polarity shift problem and propose a novel approach, called dual training and dual prediction (DTDP), to address it. The basic idea of DTDP is to first generate artificial samples that are polarity-opposite to the original samples by polarity reversion, and then leverage both the original and opposite samples for (dual) training and (dual) prediction. Experimental results on four datasets demonstrate the effectiveness of the proposed approach for polarity classification. | Dual Training and Dual Prediction for Polarity Classification |
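A minimal sketch of the DTDP idea, assuming a toy antonym lexicon as the "polarity reversion" rule; the paper's actual reversal procedure and classifier setup differ in detail:

```python
# Sketch of dual training / dual prediction (DTDP): reversed samples with
# flipped labels augment training, and prediction averages the original view
# with the flipped score of the reversed view.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

ANTONYMS = {"good": "bad", "bad": "good", "love": "hate", "hate": "love"}

def reverse(text):
    """Create a polarity-opposite sample by swapping antonyms."""
    return " ".join(ANTONYMS.get(w, w) for w in text.split())

train_texts = ["i love this good movie", "i hate this bad movie"]
train_labels = [1, 0]

# Dual training: original samples plus reversed samples with flipped labels.
dual_texts = train_texts + [reverse(t) for t in train_texts]
dual_labels = train_labels + [1 - y for y in train_labels]

vec = CountVectorizer()
X = vec.fit_transform(dual_texts)
clf = LogisticRegression().fit(X, dual_labels)

def predict(text):
    """Dual prediction: combine the original and the reversed view."""
    p_orig = clf.predict_proba(vec.transform([text]))[0, 1]
    p_rev = clf.predict_proba(vec.transform([reverse(text)]))[0, 1]
    return (p_orig + (1 - p_rev)) / 2

print(predict("i love this movie"))  # close to 1 (positive)
```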
d14857072 | Morphological processes in Semitic languages deliver space-delimited words which introduce multiple, distinct, syntactic units into the structure of the input sentence. These words are in turn highly ambiguous, breaking the assumption underlying most parsers that the yield of a tree for a given sentence is known in advance. Here we propose a single joint model for performing both morphological segmentation and syntactic disambiguation which bypasses the associated circularity. Using a treebank grammar, a data-driven lexicon, and a linguistically motivated unknown-tokens handling technique our model outperforms previous pipelined, integrated or factorized systems for Hebrew morphological and syntactic processing, yielding an error reduction of 12% over the best published results so far. | A Single Generative Model for Joint Morphological Segmentation and Syntactic Parsing |
d235097441 | ||
d1381650 | Although automated word sense disambiguation has become a popular activity within computational lexicology, evaluation of the accuracy of disambiguation systems is still mostly limited to manual checking by the developer. This paper describes our work in collecting data on the disambiguation behavior of human subjects, with the intention of providing (1) a norm against which dictionary-based systems (and perhaps others) can be evaluated, and (2) a source of psycholinguistic information about previously unobserved aspects of human disambiguation, for the use of both psycholinguists and computational researchers. We also describe two of our most important tools: a questionnaire of ambiguous test words in various contexts, and a hypertext user interface for efficient and powerful collection of data from human subjects. | Word Sense Disambiguation by Human Subjects: Computational and Psycholinguistic Applications |
d226239324 | ||
d16572349 | We describe a Chinese to English Machine Translation system developed at the Johns Hopkins University for the NIST 2003 MT evaluation. The system is based on a Weighted Finite State Transducer implementation of the alignment template translation model for statistical machine translation. The baseline MT system was trained using 100,000 sentence pairs selected from a static bitext training collection. Information retrieval techniques were then used to create specific training collections for each document to be translated. This document-specific training set included bitext and named entities that were then added to the baseline system by augmenting the library of alignment templates. We report translation performance of baseline and IR-based systems on two NIST MT evaluation test sets. | |
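A hedged sketch of the IR step: selecting document-specific training bitext by TF-IDF cosine similarity. The abstract does not specify the retrieval setup, so the sentence data and the `top_k` cutoff below are illustrative assumptions:

```python
# Illustrative document-specific training-data selection: retrieve the bitext
# sentences most similar (TF-IDF cosine) to the test document.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

bitext_sources = ["economic growth slowed", "the summit opened today", "rain expected tomorrow"]
test_document = "leaders met as the summit opened"

vec = TfidfVectorizer()
X = vec.fit_transform(bitext_sources + [test_document])
sims = cosine_similarity(X[-1], X[:-1]).ravel()

# Keep the top-k most relevant sentence pairs for adaptation.
top_k = sims.argsort()[::-1][:2]
print([bitext_sources[i] for i in top_k])
```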
d37962359 | NORGES ALMENVITENSKAPELIGE FORSKNINGSRÅD | |
d36175639 | In this paper we discuss the interpretation of adverbs in Japanese. We explore the division of labor among the syntactic requirements, semantic requirements, and discourse-contextual constraints involved in adverbial interpretation. We then argue that this inter-modular approach, utilizing LFG, explains various elusive paradigms of adverbs. | Towards a Proper Treatment of Adjuncts in Japanese |
d28019081 | ||
d14165269 | We describe a new task-based corpus in the Spanish language. The corpus consists of videos, transcripts, and annotations of the interaction between a naive speaker and a confederate listener. The speaker instructs the listener to MOVE, ROTATE, or PAINT objects on a computer screen. This resource can be used to study how participants produce instructions in a collaborative goal-oriented scenario, in Spanish. The data set is ideally suited for investigating incremental processes of the production and interpretation of language. We demonstrate here how to use this corpus to explore language-specific differences in utterance planning, for English and Spanish speakers. | A Database for the Exploration of Spanish Planning |
d463176 | Automatic creation of polarity lexicons is a crucial issue to be solved in order to reduce the time and effort spent in the first steps of Sentiment Analysis. In this paper we present a methodology based on linguistic cues that allows us to automatically discover, extract and label subjective adjectives that should be collected in a domain-based polarity lexicon. For this purpose, we designed a bootstrapping algorithm that, from a small set of seed polar adjectives, is capable of iteratively identifying, extracting and annotating positive and negative adjectives. Additionally, the method automatically creates lists of highly subjective elements that change their prior polarity even within the same domain. The proposed algorithm reached a precision of 97.5% for positive adjectives and 71.4% for negative ones in the semantic orientation identification task. | Automatic extraction of polar adjectives for the creation of polarity lexicons |
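A toy rendering of the bootstrapping loop, assuming coordination cues ("and" preserves polarity, "but" flips it) as the linguistic pattern; the seeds and corpus are invented for illustration:

```python
# Toy bootstrapping of polar adjectives: starting from seeds, adjectives
# conjoined by "and" inherit the known word's polarity, while "but" flips it.
# Iterates until no new adjectives are labelled.

import re

corpus = [
    "the room was clean and spacious",
    "the suite was spacious and quiet",
    "it was quiet but expensive",
]

lexicon = {"clean": "+"}  # seed polar adjectives

changed = True
while changed:
    changed = False
    for sentence in corpus:
        for a, conj, b in re.findall(r"(\w+) (and|but) (\w+)", sentence):
            for known, new in ((a, b), (b, a)):
                if known in lexicon and new not in lexicon:
                    pol = lexicon[known]
                    lexicon[new] = pol if conj == "and" else ("-" if pol == "+" else "+")
                    changed = True

print(lexicon)
# {'clean': '+', 'spacious': '+', 'quiet': '+', 'expensive': '-'}
```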
d241583263 | Event coreference resolution is critical for understanding events in the growing number of online news items that combine multiple modalities, including text, video and speech. However, the events and entities depicted in different modalities may not be perfectly aligned and can be difficult to annotate, which makes the task especially challenging when little supervision is available. To address these issues, we propose a supervised model based on an attention mechanism and an unsupervised model based on statistical machine translation, both capable of learning the relative importance of modalities for event coreference resolution. Experiments on a video multimedia event dataset show that our multimodal models outperform text-only systems on the event coreference resolution task. A careful analysis reveals that the performance gain of the multimodal model, especially in the unsupervised setting, comes from better learning of visually salient events. | Coreference by Appearance: Visually Grounded Event Coreference Resolution |
d14151772 | Arabic is a language known for its rich and complex morphology. Although many research projects have focused on the problem of Arabic morphological analysis using different techniques and approaches, very few have addressed the issue of generation of fully inflected words for the purpose of text authoring. Available open-source spell checking resources for Arabic are too small and inadequate. Ayaspell, for example, the official resource used with OpenOffice applications, contains only 300,000 fully inflected words. We try to bridge this critical gap by creating an adequate, open-source and large-coverage word list for Arabic containing 9,000,000 fully inflected surface words. Furthermore, from a large list of valid forms and invalid forms we create a character-based tri-gram language model to approximate knowledge about permissible character clusters in Arabic, creating a novel method for detecting spelling errors. Testing of this language model gives a precision of 98.2% at a recall of 100%. We take our research a step further by creating a context-independent spelling correction tool using a finite-state automaton that measures the edit distance between input words and candidate corrections, the Noisy Channel Model, and knowledge-based rules. Our system performs significantly better than Hunspell in choosing the best solution, but it is still below the MS Spell Checker. | Arabic Word Generation and Modelling for Spell Checking |
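A toy sketch of the character tri-gram detector described above, using transliterated stand-ins for Arabic strings; the padding scheme and threshold are assumptions for illustration:

```python
# Illustrative character tri-gram model for detecting impermissible character
# clusters: a word is flagged if any of its tri-grams was never observed in
# the list of valid forms.

from collections import Counter

valid_words = ["kitab", "kutub", "katib", "maktab"]  # transliterated toy lexicon

def char_trigrams(word):
    padded = f"##{word}#"  # boundary markers
    return [padded[i:i + 3] for i in range(len(padded) - 2)]

counts = Counter(t for w in valid_words for t in char_trigrams(w))

def looks_misspelled(word, min_count=1):
    """Flag a word if any of its character tri-grams was never observed."""
    return any(counts[t] < min_count for t in char_trigrams(word))

print(looks_misspelled("kitab"))  # False: all tri-grams attested
print(looks_misspelled("ktiab"))  # True: contains unseen clusters
```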