_id | text | title |
|---|---|---|
d2478928 | We prove a Chomsky-Schützenberger representation theorem for weighted multiple context-free languages. Example 1. We provide a list of complete commutative strong bimonoids (cf. Droste et al. (2010, Ex. 1)), some of which are relevant for natural language processing: • Any complete commutative semiring, e.g. the Boolean semiring B = ⟨{0, 1}, ∨, ∧, 0, 1⟩, the probability semiring Pr = ⟨ℝ≥0, +, ·, 0, 1⟩, the Viterbi semiring ⟨[0, 1], max, ·, 0, 1⟩, the tropical semiring ⟨ℝ ∪ {∞}, min, +, ∞, 0⟩, … | A Chomsky-Schützenberger Representation for Weighted Multiple Context-free Languages |
d1681736 | This paper is concerned with learning categorial grammars in Gold's model. In contrast to k-valued classical categorial grammars, k-valued Lambek grammars are not learnable from strings. This result was shown for several variants, but the question was left open for the weakest one, the non-associative variant NL. We show that the class of rigid and k-valued NL grammars is unlearnable from strings, for each k; this result is obtained by a specific construction of a limit point in the considered class that does not use the product operator. A further interest of our construction is that it provides limit points for the whole hierarchy of Lambek grammars, including the recent pregroup grammars. Such a result aims at clarifying the possible directions for future learning algorithms: it expresses the difficulty of learning categorial grammars from strings and the need for an adequate structure on examples. | k-valued Non-Associative Lambek Categorial Grammars are not Learnable from Strings |
d14686420 | This paper describes BABYLON, a system that attempts to overcome the shortage of parallel texts in low-density languages by supplementing existing parallel texts with texts gathered automatically from the Web. In addition to the identification of entire Web pages, we also propose a new feature specifically designed to find parallel text chunks within a single document. Experiments carried out on the Quechua-Spanish language pair show that the system is successful in automatically identifying a significant amount of parallel texts on the Web. Evaluations of a machine translation system trained on this corpus indicate that the Web-gathered parallel texts can supplement manually compiled parallel texts and perform significantly better than the manually compiled texts when tested on other Web-gathered data. | BABYLON Parallel Text Builder: Gathering Parallel Texts for Low-Density Languages |
d14308271 | This paper proposes a novel method of learning probabilistic subcategorization preference. In the method, for the purpose of coping with the ambiguities of case dependencies and noun class generalization of argument/adjunct nouns, we introduce a data structure which represents a tuple of independent partial subcategorization frames. Each collocation of a verb and argument/adjunct nouns is assumed to be generated from one of the possible tuples of independent partial subcategorization frames. Parameters of subcategorization preference are then estimated so as to maximize the subcategorization preference function for each collocation of a verb and argument/adjunct nouns in the training corpus. We also describe the results of the experiments on learning probabilistic subcategorization preference from the EDR Japanese bracketed corpus, as well as those on evaluating the performance of subcategorization preference. | Learning Probabilistic Subcategorization Preference by Identifying Case Dependencies and Optimal Noun Class Generalization Level* |
d252819426 | Essay exams have been attracting attention as a way of measuring the higher-order abilities of examinees, but they have two major drawbacks: grading them is expensive, and it raises questions about fairness. As an approach to overcoming these problems, there is an increasing need for automated essay scoring (AES). Many AES models based on deep neural networks have been proposed in recent years and have achieved high accuracy, but most of these models are designed to predict only a single overall score. However, to provide detailed feedback in practical situations, we often require not only the overall score but also analytic scores corresponding to various aspects of the essay. Several neural AES models that can predict both the analytic scores and the overall score have also been proposed for this very purpose. However, conventional models are designed to have complex neural architectures for each analytic score, which makes interpreting the score prediction difficult. To improve the interpretability of the prediction while maintaining scoring accuracy, we propose a new neural model for automated analytic scoring that integrates a multidimensional item response theory model, a popular psychometric model. | Analytic Automated Essay Scoring based on Deep Neural Networks Integrating Multidimensional Item Response Theory |
d248780366 | This paper describes the submissions of the UPC Machine Translation group to the IWSLT 2022 Offline Speech Translation and Speech-to-Speech Translation tracks. The offline task involves translating English speech to German, Japanese and Chinese text. Our Speech Translation systems are trained end-to-end and are based on large pretrained speech and text models. We use an efficient fine-tuning technique that trains only specific layers of our system, and explore the use of adapter modules for the non-trainable layers. We further investigate the suitability of different speech encoders (wav2vec 2.0, HuBERT) for our models and the impact of knowledge distillation from the Machine Translation model that we use for the decoder (mBART). For segmenting the IWSLT test sets we fine-tune a pretrained audio segmentation model and achieve improvements of 5 BLEU compared to the given segmentation. Our best single model uses HuBERT and parallel adapters and achieves 29.42 BLEU on English-German MuST-C tst-COMMON and 26.77 on the IWSLT 2020 test set. By ensembling many models, we further increase translation quality to 30.83 and 27.78 BLEU, respectively. Furthermore, our submission for English-Japanese achieves 15.85 and English-Chinese obtains 25.63 BLEU on the MuST-C tst-COMMON sets. Finally, we extend our system to perform English-German Speech-to-Speech Translation with a pretrained Text-to-Speech model. | Pretrained Speech Encoders and Efficient Fine-tuning Methods for Speech Translation: UPC at IWSLT 2022 |
d8024703 | Human sentence processing involves integrating probabilistic knowledge from a variety of sources in order to incrementally determine the hierarchical structure for the serial input stream. While a large number of sentence processing effects have been explained in terms of comprehenders' rational use of probabilistic information, effects of local coherences have not. We present here a new model of local coherences, viewing them as resulting from a belief-update process, and show that the relevant probabilities in our model are calculable from a probabilistic Earley parser. Finally, we demonstrate empirically that an implemented version of the model makes the correct predictions for the materials from the original experiment demonstrating local coherence effects. | A model of local coherence effects in human sentence processing as consequences of updates from bottom-up prior to posterior beliefs |
d241583804 | The COVID-19 pandemic has seen governments across the world implement exceptional measures to counteract its impact. This work presents the initial results of an on-going project, EXCEPTIUS, aiming to automatically identify, classify and compare exceptional measures against COVID-19 across 32 countries in Europe. To this goal, we created a corpus of legal documents with sentence-level annotations of eight different classes of exceptional measures that are implemented across these countries. We evaluated multiple multi-label classifiers on a manually annotated corpus at sentence level. The XLM-RoBERTa model achieves highest performance on this multilingual multi-label classification task, with a macro-average F1 score of 59.8%. | A Multilingual Approach to Identify and Classify Exceptional Measures Against COVID-19 |
d2295984 | An eggcorn is a type of linguistic error where a word is substituted with one that is semantically plausible; that is, the substitution is a semantic reanalysis of what may be a rare, archaic, or otherwise opaque term. We build a system that, given the original word and its eggcorn form, finds a semantic path between the two. Based on these paths, we derive a typology that reflects the different classes of semantic reinterpretation underlying eggcorns. | Understanding Eggcorns |
d9003449 | We present a novel method of statistical morphological generation, i.e. the prediction of inflected word forms given lemma, part-of-speech and morphological features, aimed at robustness to unseen inputs. Our system uses a trainable classifier to predict "edit scripts" that are then used to transform lemmas into inflected word forms. Suffixes of lemmas are included as features to achieve robustness. We evaluate our system on 6 languages with a varying degree of morphological richness. The results show that the system is able to learn most morphological phenomena and generalize to unseen inputs, producing significantly better results than a dictionary-based baseline. | Robust Multilingual Statistical Morphological Generation Models |
d14085759 | This paper proposes a method for identifying synonymous relations in a bilingual lexicon, which is a set of translation-equivalent term pairs. We train a classifier for identifying those synonymous relations by using spelling variations as main clues. We compared two approaches: the direct identification of bilingual synonym pairs, and the merger of two monolingual synonyms. We showed that our approach achieves a high pair-wise precision and recall, and outperforms the baseline method. | Bilingual Synonym Identification with Spelling Variations |
d250391031 | This work describes the classification system proposed for the Computational Linguistics and Clinical Psychology (CLPsych) Shared Task 2022. We propose the use of a multitask learning approach with a bidirectional long short-term memory (Bi-LSTM) model for predicting changes in a user's mood (Task A) and their suicidal risk level (Task B). The two classification tasks have previously been solved independently or in an augmented way, where the output of one task is leveraged for learning another task; this work, however, proposes an 'all-in-one' framework that jointly learns the related mental health tasks. Our experimental results (ranked top for Task A) suggest that the proposed multi-task framework outperforms the alternative single-task frameworks submitted to the challenge and evaluated via the timeline-based and coverage-based performance metrics shared by the organisers. We also assess the potential of using various types of feature embedding schemes that could prove useful in initialising the Bi-LSTM model for better multitask learning in the mental health domain. | Detecting Moments of Change and Suicidal Risks in Longitudinal User Texts Using Multi-task Learning |
d9747557 | A wide range of parser and/or grammar evaluation methods have been reported in the literature. However, in most cases these evaluations take the parsers independently (intrinsic evaluations), and only in a few cases has the effect of different parsers in real applications been measured (extrinsic evaluations). This paper compares two evaluations of the Link Grammar parser and the Conexor Functional Dependency Grammar parser. The parsing systems, despite both being dependency-based, return different types of dependencies, making a direct comparison impossible. In the intrinsic evaluation, the accuracy of the parsers is compared independently by converting the dependencies into grammatical relations and using the methodology of Carroll et al. (1998) for parser comparison. In the extrinsic evaluation, the parsers' impact in a practical application is compared within the context of answer extraction. The differences in the results are significant. | Intrinsic versus Extrinsic Evaluations of Parsing Systems |
d208193411 | ||
d2555606 | Extending phrase-based Statistical Machine Translation systems with a second, dynamic phrase table has been done for multiple purposes. Promising results have been reported for hybrid or multi-engine machine translation, i.e. building a phrase table from the knowledge of external MT systems, and for online learning. We argue that, in prior research, dynamic phrase tables are not scored optimally because they may be of small size, which makes the Maximum Likelihood Estimation of translation probabilities unreliable. We propose basing the scores on frequencies from both the dynamic corpus and the primary corpus instead, and show that this modification significantly increases performance. We also explore the combination of multiengine MT and online learning. | Combining Multi-Engine Machine Translation and Online Learning through Dynamic Phrase Tables |
d18948670 | Named-Entity Recognition (NER) plays a significant role in classifying or locating atomic elements in text into predefined categories such as the names of persons, organizations, locations, expressions of times, quantities, monetary values, temporal expressions and percentages. Several statistical methods with supervised and unsupervised learning have been applied successfully to English and some Indian languages. Malayalam, whose nouns show no subject-verb agreement and whose word order is free, makes NER identification a complex process. In this paper, a hybrid approach combining rule-based machine learning with a statistical approach is proposed and implemented, which shows 73.42% accuracy. | A Hybrid Statistical Approach for Named Entity Recognition for Malayalam Language |
d10702716 | This paper describes some aspects of a pronunciation dictionary for Swedish, "Svenskt Uttalslexikon" (SUL), which is presently being developed at our department. This dictionary provides, among other items, three kinds of information about Swedish pronunciation that are not included in standard dictionaries: information on variants, on inflected forms and compounds, and on proper names. SUL is organized as a machine-readable lexical database which in its present form contains approximately 100,000 headwords. The run-time system comprises four separate processing modules: an inflection engine, a compound engine, the transcription engine, and the dictionary search algorithm. In addition to phonetic and phonological information, SUL also aims to supply various kinds of paradigmatic, syntagmatic and statistical information, needed for the linguistic processing stages in text-to-speech synthesis and automatic speech recognition. | A new Dictionary of Swedish Pronunciation |
d1645458 | This paper introduces a new type of grammar learning algorithm, inspired by string edit distance (Wagner and Fischer, 1974). The algorithm takes a corpus of flat sentences as input and returns a corpus of labelled, bracketed sentences. The method works on pairs of unstructured sentences that have one or more words in common. When two sentences are divided into parts that are the same in both sentences and parts that are different, this information is used to find parts that are interchangeable. These parts are taken as possible constituents of the same type. After this alignment learning step, the selection learning step selects the most probable constituents from all possible constituents. This method was used to bootstrap structure on the ATIS corpus (Marcus et al., 1993) and on the OVIS corpus (Bonnema et al., 1997). While the results are encouraging (we obtained up to 89.25% non-crossing brackets precision), this paper will point out some of the shortcomings of our approach and will suggest possible solutions. | ABL: Alignment-Based Learning |
d14251686 | We report results of lexical accommodation studies involving three different interpretation settings: human-human monolingual; human-interpreted bilingual; and machine-interpreted bilingual. We found significant accommodation in all three conversational settings, with the highest rate in the human-interpreted setting. There is evidence for long-range mutual accommodation in that setting, as compared to short-range accommodation in the machine-interpreted setting. Motivations discussed in the accommodation literature, including speakers' concern for social standing and communicational efficiency, are examined in the light of the results obtained. Finally, we draw implications for the design of multimedia human-computer interfaces. | LEXICAL ACCOMMODATION IN MACHINE-MEDIATED INTERACTIONS |
d14683790 | This paper introduces a new implementation of the Canonical Text Services (CTS) protocol intended to be capable of handling thousands of editions. CTS was introduced for the Digital Humanities and is based on a hierarchical structuring of texts down to the level of individual words, mirroring traditional practices of citing. The paper gives an overview of CTS for those who are unfamiliar with it and establishes its place in Digital Humanities research. Some existing CTS implementations are discussed, and it is explained why there is a need for one that is able to scale to much larger text collections. Evaluations are given that illustrate the performance of the new implementation. | A New Implementation for Canonical Text Services |
d226284012 | ||
d15129940 | Post-Editing and machine translation are often studied from the viewpoint of efficiency (measured e.g. in words processed) or of quality (e.g. human judgement of fluency). Little is known, however, about how post-editing and machine translation change the linguistic profile of the texts produced in contrast to human translations. In this paper, we present a pilot study contrasting lexical profiles of a small collection of texts, focussing on two aspects: variation patterns in terminological translation in post-edited texts, and the translation of cognates in machine translation, both contrasted to purely human translations. The study was conducted for the translation direction English-German. | Patterns of Terminological Variation in Post-editing and of Cognate Use in Machine Translation in Contrast to Human Translation |
d2428094 | Agrammatic aphasia is a serious language impairment which can occur after a stroke or traumatic brain injury. We present an automatic method for analyzing aphasic speech using surface level parse features and context-free grammar production rules. Examining these features individually, we show that we can uncover many of the same characteristics of agrammatic language that have been reported in studies using manual analysis. When taken together, these parse features can be used to train a classifier to accurately predict whether or not an individual has aphasia. Furthermore, we find that the parse features can lead to higher classification accuracies than traditional measures of syntactic complexity. Finally, we find that a minimal amount of pre-processing can lead to better results than using either the raw data or highly processed data. | Using statistical parsing to detect agrammatic aphasia |
d484398 | Arguably, spoken dialogue systems are most often used not in hands/eyes-busy situations, but rather in settings where a graphical display is also available, such as a mobile phone. We explore the use of a graphical output modality for signalling incremental understanding and prediction state of the dialogue system. By visualising the current dialogue state and possible continuations of it as a simple tree, and allowing interaction with that visualisation (e.g., for confirmations or corrections), the system provides both feedback on past user actions and guidance on possible future ones, and it can span the continuum from slot filling to full prediction of user intent (such as GoogleNow). We evaluate our system with real users and report that they found the system intuitive and easy to use, and that incremental and adaptive settings enable users to accomplish more tasks. | Supporting Spoken Assistant Systems with a Graphical User Interface that Signals Incremental Understanding and Prediction State |
d9044768 | The applicability of many current information extraction techniques is severely limited by the need for supervised training data. We demonstrate that for certain field structured extraction tasks, such as classified advertisements and bibliographic citations, small amounts of prior knowledge can be used to learn effective models in a primarily unsupervised fashion. Although hidden Markov models (HMMs) provide a suitable generative model for field structured text, general unsupervised HMM learning fails to learn useful structure in either of our domains. However, one can dramatically improve the quality of the learned structure by exploiting simple prior knowledge of the desired solutions. In both domains, we found that unsupervised methods can attain accuracies with 400 unlabeled examples comparable to those attained by supervised methods on 50 labeled examples, and that semi-supervised methods can make good use of small amounts of labeled data. | Unsupervised Learning of Field Segmentation Models for Information Extraction |
d9726830 | We discuss the design and preliminary results of an experiment for modeling human-human multi-threaded dialogues. We found that participants tend to complete adjacency pairs in dialogues before switching to a new dialogue thread. We also have indications that, in the presence of a manual-visual task, the difficulty of the task influences switching between dialogue threads. | Experimental modeling of human-human multi-threaded dialogues in the presence of a manual-visual task |
d1812706 | Function tag assignment has been studied for English and Spanish. In this paper, we address the question of assigning function tags to parsed sentences in Chinese. We show that good performance for Chinese function tagging can be achieved by using a labeling method, extending the work of Blaheta (2004). In this method, the objects being modeled are syntax trees, which require some mechanism to convert them into feature vectors. To encode structural information of the complex inputs, we propose a set of new features. Experimental results show that these new features lead to significant improvements. | Chinese Function Tag Labeling |
d2685794 | This paper proposes a method of constructing a dictionary for a pair of languages from bilingual dictionaries between each of the languages and a third language. Such a method would be useful for language pairs for which wide-coverage bilingual dictionaries are not available, but it suffers from spurious translations caused by the ambiguity of intermediary third-language words. To eliminate spurious translations, the proposed method uses the monolingual corpora of the first and second languages, whose availability is not as limited as that of parallel corpora. Extracting word associations from the corpora of both languages, the method correlates the associated words of an entry word with its translation candidates. It then selects translation candidates that have the highest correlations with a certain percentage or more of the associated words. The method has the following features. First, it produces a domain-adapted bilingual dictionary. Second, the resulting bilingual dictionary, which not only provides translations but also associated words supporting each translation, enables contextually based selection of translations. Preliminary experiments using the EDR Japanese-English and LDC Chinese-English dictionaries together with Mainichi Newspaper and Xinhua News Agency corpora demonstrate that the proposed method is viable. The recall and precision could be improved by optimizing the parameters. | Automatic Construction of a Japanese-Chinese Dictionary via English |
d9600472 | We introduce a social media text normalization system that can be deployed as a preprocessing step for Machine Translation and various NLP applications to handle social media text. The proposed system is based on unsupervised learning of the normalization equivalences from unlabeled text. The proposed approach uses Random Walks on a contextual similarity bipartite graph constructed from n-gram sequences on a large unlabeled text corpus. We show that the proposed approach has a very high precision of 92.43 and a reasonable recall of 56.4. When used as a preprocessing step for a state-of-the-art machine translation system, the translation quality on social media text improved by 6%. The proposed approach is domain and language independent and can be deployed as a preprocessing step for any NLP application to handle social media text. | Social Text Normalization using Contextual Graph Random Walks |
d204916542 | ||
d35997179 | This paper presents my manual skeleton parsing on a sample text of approximately 100,000 word tokens (or about 2,500 sentences) taken from the PFR Chinese Corpus with a clearly defined parsing scheme of 17 constituent labels. The manually-parsed sample skeleton treebank is one of the very few extant Chinese treebanks. While Chinese part-of-speech tagging and word segmentation have been the subject of concerted research for many years, the syntactic annotation of Chinese corpora is a comparatively new field. The difficulties that I encountered in the production of this treebank demonstrate some of the peculiarities of Chinese syntax. A noteworthy syntactic property is that some serial verb constructions tend to be used as if they were compound verbs. The two transitive verbs in series, unlike common transitive verbs, do not take an object separately within the construction; rather, the serial construction as a whole is able to take the same direct object and the perfective aspect marker le. The skeleton-parsed sample treebank is evaluated against Eyes & Leech (1993)'s criteria and proves to be accurate, uniform and linguistically valid. | Skeleton Parsing in Chinese: Annotation Scheme and Guidelines |
d12710611 | We present a method for utilizing unannotated sentences to improve a semantic parser which maps natural language (NL) sentences into their formal meaning representations (MRs). Given NL sentences annotated with their MRs, the initial supervised semantic parser learns the mapping by training Support Vector Machine (SVM) classifiers for every production in the MR grammar. Our new method applies the learned semantic parser to the unannotated sentences and collects unlabeled examples which are then used to retrain the classifiers using a variant of transductive SVMs. Experimental results show the improvements obtained over the purely supervised parser, particularly when the annotated training set is small. | Semi-Supervised Learning for Semantic Parsing using Support Vector Machines |
d250164491 | In this paper we examine existing sentiment lexicons and sense-based sentiment-tagged corpora to find out how sense- and concept-based semantic relations affect sentiment scores (for polarity and valence). We show that some relations are good predictors of the sentiment of related words: antonyms have similar valence and opposite polarity, synonyms similar valence and polarity, as do many derivational relations. We use this knowledge and existing resources to build a sentiment-annotated wordnet of English, and show how it can be used to produce sentiment lexicons for other languages using the Open Multilingual Wordnet. [Footnote: a purely word-based system will not be able to distinguish different parts of speech, so will potentially include totally unrelated meanings.] [Footnote: https://blogs.ntu.edu.sg/chriskhoo/2017/07/wkwsci-sentiment-lexicon-v1-1-available-for-download/] | Sense and Sentiment |
d15458283 | In wide-coverage lexicalized grammars many of the elementary structures have substructures in common. This means that during parsing some of the computation associated with different structures is duplicated. This paper explores ways in which the grammar can be precompiled into finite state automata so that some of this shared structure results in shared computation at run-time. [Footnote 1: We use the term elementary structures to mean the basic components of grammatical description. In LTAG (Joshi and Schabes, 1991), for example, the elementary structures are initial and auxiliary trees. A grammar consists of a finite set of elementary structures that are built into derived structures using the composition operations of the formalism.] [Footnote 2: Cf. Alshawi (1996) for a similar approach, but with different objectives and a different style of formalism. The approach we present here is also reminiscent of LR parsing in that the FSA used by an LR parser to recognize viable prefixes can be viewed as the result of determinizing a nondeterministic finite state automaton constructed by linking with ε-transitions deterministic automata for the individual productions.] [Footnote 3: The second author is currently involved in the development of a wide-coverage grammar and automaton-based parser using the formalism described in Rambow, Vijay-Shanker, and Weir (1995).] | AUTOMATON-BASED PARSING FOR LEXICALIZED GRAMMARS |
d46195 | In this paper we present a methodology for extracting subcategorisation frames based on an automatic LFG f-structure annotation algorithm for the Penn-II Treebank. We extract abstract syntactic function-based subcategorisation frames (LFG semantic forms), traditional CFG category-based subcategorisation frames as well as mixed function/category-based frames, with or without preposition information for obliques and particle information for particle verbs. Our approach does not predefine frames, associates probabilities with frames conditional on the lemma, distinguishes between active and passive frames, and fully reflects the effects of long-distance dependencies in the source data structures. We extract 3586 verb lemmas, 14348 semantic form types (an average of 4 per lemma) with 577 frame types. We present a large-scale evaluation of the complete set of forms extracted against the full COMLEX resource. | Large-Scale Induction and Evaluation of Lexical Resources from the Penn-II Treebank |
d9548105 | Marathi and Hindi, both being Indo-Aryan family members and using the Devanagari script, are similar to a great extent. Both follow SOV sentence structure and are equally liberal in word order. Translation for this language pair thus appears easy, but experiments show it to be a significantly difficult task, primarily because Marathi is morphologically richer than Hindi. We propose a Marathi-to-Hindi Statistical Machine Translation (SMT) system which makes use of compound word splitting to tackle the morphological richness of Marathi. | SMT from Agglutinative Languages: Use of Suffix Separation and Word Splitting |
d227231607 | ||
d18125327 | Discourse connectives play an important role in making a text coherent and helping humans to infer relations between spans of text. Using the Penn Discourse Treebank, we investigate what information relevant to inferring discourse relations is conveyed by discourse connectives, and whether the specificity of discourse relations reflects general cognitive biases for establishing coherence. We also propose an approach to measure the effect of a discourse marker on sense identification according to the different levels of a relation sense hierarchy. This will open a way to the computational modeling of discourse processing. | On the Information Conveyed by Discourse Markers |
d171788196 | ||
d238251635 | ||
d29757414 | We describe a method to automatically extract hyponyms from Japanese newspapers. First, we discover patterns which can extract hyponyms of a noun, such as "A nado-no B (B such as A)", then we apply the patterns to the newspaper corpus to extract instances. The procedure works best to extract hyponyms of concrete things in the middle of the word hierarchies. The precision is 49-87 percent depending on the patterns. We compare the extracted hyponyms and those associated by humans. We find that the popular words in the associative concept dictionary are likely to be found in the corpus but also many additional hyponyms can be extracted from 32 years of newspaper articles. | Automatic Extraction of Hyponyms from Japanese Newspapers Using Lexico-syntactic Patterns |
d8730095 | This paper addresses issues that arose in applying the model for discourse entity (DE) generation in B. Webber's work (1978, 1983) to an interactive multimodal interface. Her treatment was extended in 4 areas: (1) the notion of context dependence of DEs was formalized in an intensional logic, (2) the treatment of DEs for indefinite NPs was modified to use Skolem functions, (3) the treatment of dependent quantifiers was generalized, and (4) DEs originating from non-linguistic sources, such as pointing actions, were taken into account. The discourse entities are used in intra- and extra-sentential pronoun resolution in BBN Janus. | DISCOURSE ENTITIES IN JANUS |
d9656700 | This paper reports the fully automatic compilation of parallel corpora for Brazilian Portuguese. Scientific news texts available in Brazilian Portuguese, English and Spanish are automatically crawled from a multilingual Brazilian magazine. The texts are then automatically aligned at document- and sentence-level. The resulting corpora contain about 2,700 parallel documents totaling over 150,000 aligned sentences each. The quality of the corpora and their usefulness are tested in an experiment with machine translation. | Fully Automatic Compilation of Portuguese-English and Portuguese-Spanish Parallel Corpora |
d8919638 | Building accurate knowledge graphs is essential for question answering systems. We suggest a crowd-to-machine relation extraction system to eventually fill a knowledge graph. To train a relation extraction model, training data first have to be prepared, either manually or automatically. A model trained on manually labeled data can show better performance; however, it is not scalable, because another set of training data has to be prepared. If a model is trained on automatically collected data, the performance may be rather low, but the scalability is excellent, since training data can easily be collected automatically. To expand a knowledge graph, we need not only a relation extraction model with high accuracy but also one that is scalable. We suggest a crowdsourcing system with a scalable relation extraction model to fill a knowledge graph. | Filling a Knowledge Graph with a Crowd |
d17926487 | We describe a system of reversible grammar in which, given a logic-grammar specification of a natural language, two efficient PROLOG programs are derived by an off-line compilation process: a parser and a generator for this language. The centerpiece of the system is the inversion algorithm designed to compute the generator code from the parser's PROLOG code, using the collection of minimal sets of essential arguments (MSEA) for predicates. The system has been implemented to work with Definite Clause Grammars (DCG) and is a part of an English-Japanese machine translation project currently under development at NYU's Courant Institute. | AUTOMATED INVERSION OF LOGIC GRAMMARS FOR GENERATION |
d8056720 | The standard design for a computer-assisted translation system consists of data entry of source text, machine translation, and revision of raw machine translation. This paper discusses this standard design and presents an alternative multilevel design consisting of integrated word processing, terminology aids, preprocessing aids and a link to an off-line machine translation system. Advantages of the new design are discussed. | COMPUTER-ASSISTED TRANSLATION SYSTEMS: The Standard Design and A Multi-level Design |
d2392146 | In this paper we present our approach to automatically identifying the subjectivity, polarity and irony of Italian tweets. Our system, which reaches and outperforms the state of the art in Italian, is well adapted to different domains since it uses abstract word features instead of bags of words. We also present experiments carried out to study how Italian Sentiment Analysis systems react to domain changes. We show that bag-of-words approaches commonly used in Sentiment Analysis do not adapt well to domain changes. | How Topic Biases Your Results? A Case Study of Sentiment Analysis and Irony Detection in Italian |
d3704131 | Recognizing similar or close meanings across different surface forms is a common challenge in various Natural Language Processing and Information Access applications. However, we identified multiple limitations in existing resources that can be used for solving this vocabulary mismatch problem. To this end, we propose the Diversifiable Bootstrapping algorithm, which can learn paraphrase patterns with high lexical coverage. The algorithm works in a lightly-supervised iterative fashion, where instance and pattern acquisition are interleaved, each using information provided by the other. By tweaking a parameter of the algorithm, the resulting patterns can be diversified to a degree one can control. | Diversifiable Bootstrapping for Acquiring High-Coverage Paraphrase Resource |
d3921718 | Introduction. Developing a computational account of event reference requires solutions to a number of difficult problems, not least of which is characterizing the phenomenon itself. From a listener's point of view, event reference encompasses both the task of building up a structured model of the events and situations underlying a given text and the task of interpreting subsequent references to these events and situations afterwards. A computational approach to these tasks requires at least (1) a characterization of the information that an individual clause may convey about an event or situation; (2) a characterization of explicit clues a text gives as to how the pieces described in individual clauses fit together (assuming, as I do, that this does not rely solely on world knowledge); (3) an account of what the listener does in processing an explicit event reference; (4) a characterization of what events and situations are available for explicit reference; and (5) a procedure for choosing among possible ways of resolving an explicit event reference. | Position Paper: Event Reference |
d35814904 | Social media texts, such as tweets from Twitter, contain many types of nonstandard tokens, and the number of normalization approaches for handling such noisy text has been increasing. We present a method for automatically extracting pairs of a variant word and its normal form from unsegmented text on the basis of a pair-wise similarity approach. We incorporated the acquired variant-normalization pairs into Japanese morphological analysis. The experimental results show that our method can extract widely covered variants from large Twitter data and improve the recall of normalization without degrading the overall accuracy of Japanese morphological analysis. | Automatically Extracting Variant-Normalization Pairs for Japanese Text Normalization |
d1137965 | SenseClusters is a freely available word sense discrimination system that takes a purely unsupervised clustering approach. It uses no knowledge other than what is available in a raw unstructured corpus, and clusters instances of a given target word based only on their mutual contextual similarities. It is a complete system that provides support for feature selection from large corpora, several different context representation schemes, various clustering algorithms, and evaluation of the discovered clusters. | SenseClusters -Finding Clusters that Represent Word Senses |
d250390633 | Identifying complex words in texts is an important first step in text simplification (TS) systems. In this paper, we investigate the performance of binary comparative Lexical Complexity Prediction (LCP) models applied to a popular benchmark dataset, the CompLex 2.0 dataset used in SemEval-2021 Task 1. With the data from CompLex 2.0, we create a new dataset containing 1,940 sentences, referred to as CompLex-BC. Using CompLex-BC, we train multiple models to differentiate which of two target words is more or less complex in the same sentence. A linear SVM model achieved the best performance in our experiments with an F1-score of 0.86. | An Evaluation of Binary Comparative Lexical Complexity Models |
d4891934 | We outline problems with the interpretation of accuracy in the presence of bias, arguing that the issue is a particularly pressing concern for RTE evaluation. Furthermore, we argue that average precision scores are unsuitable for RTE, and should not be reported. We advocate mutual information as a new evaluation measure that should be reported in addition to accuracy and confidence-weighted score. | A Proposal on Evaluation Measures for RTE |
d53411009 | The system presented here took part in the 2018 Trolling, Aggression and Cyberbullying shared task (Forest and Trees team) and uses a Gated Recurrent Neural Network architecture (Cho et al., 2014) in an attempt to assess whether combining pre-trained English and Hindi fastText (Mikolov et al., 2018) word embeddings as a representation of the sequence input would improve classification performance. The motivation for this comes from the fact that the shared task data for English contained many Hindi tokens and therefore some users might be doing code-switching: the alternation between two or more languages in communication. To test this hypothesis, we also aligned Hindi and English vectors using pre-computed SVD matrices that pull representations from different languages into a common space (Smith et al., 2017). Two conditions were tested: (i) one with standard pre-trained fastText word embeddings where each Hindi word is treated as an OOV token, and (ii) another where word embeddings for Hindi and English are loaded in a common vector space, so Hindi tokens can be assigned a meaningful representation. We submitted the second (i.e., multilingual) system and obtained the scores of 0.531 weighted F1 for the EN-FB dataset and 0.438 weighted F1 for the EN-TW dataset. | Aggression Identification and Multi-Lingual Word Embeddings |
d16044605 | We discuss part-of-speech (POS) tagging in presence of large, fine-grained label sets using conditional random fields (CRFs). We propose improving tagging accuracy by utilizing dependencies within sub-components of the fine-grained labels. These sub-label dependencies are incorporated into the CRF model via a (relatively) straightforward feature extraction scheme. Experiments on five languages show that the approach can yield significant improvement in tagging accuracy in case the labels have sufficiently rich inner structure. | Part-of-Speech Tagging using Conditional Random Fields: Exploiting Sub-Label Dependencies for Improved Accuracy |
d219300826 | Should writers "avoid clichés like the plague"? Clichés are said to be a prominent characteristic of "low brow" literature, and conversely, a negative marker of "high brow" literature. Clichés may concern the storyline, the characters, or the style of writing. We focus on cliché expressions, ready-made stock phrases which can be taken as a sign of uncreative writing. We present a corpus study in which we examine to what extent cliché expressions can be attested in a corpus of various kinds of contemporary fiction, based on a large, curated lexicon of cliché expressions. The results show to what extent the negative view on clichés is supported by data: we find a significant negative correlation of -0.48 between cliché density and literary ratings of texts. We also investigate interactions with genre and characterize the language of clichés with several basic textual features. Code used for this paper is available at https://github.com/andreasvc/litcliches/ Definitions & datasets: We define cliché expressions as follows: Definition. A cliché expression is a fixed, conventionalized multi-word expression which has become overused to the point of losing its original meaning or effect. | Cliché Expressions in Literary and Genre Novels |
d17906560 | There are some chronic critics who always complain about an entity in social media. We are working to automatically detect these chronic critics to prevent the spread of bad rumors about the entity's reputation. In social media, most comments are informal, and there are sarcastic and incomplete contexts. This means that it is difficult for current NLP technology, such as opinion mining, to recognize the complaints. As an alternative approach for social media, we can assume that users who share the same opinions will link to each other. Thus, we propose a method that combines opinion mining with graph analysis of the connections between users to identify the chronic critics. Our experimental results show that the proposed method outperforms analysis based only on opinion mining techniques. | Detecting Chronic Critics Based on Sentiment Polarity and User's Behavior in Social Media |
d46458006 | This talk addresses the current needs for so-called emotion in speech, but points out that the issue is better described as the expression of relationships and attitudes rather than the currently held raw (or big-six) emotional states. From an analysis of more than three years of daily conversational speech, we find the direct expression of emotion to be extremely rare, and contend that when speech technologists say that what we need now is more 'emotion' in speech, what they really mean is that the current technologies are too text-based, and that more expression of speaker attitude, affect, and discourse relationships is required. | Getting to the Heart of the Matter; Speech is more than just the Expression of Text or Language |
d2367192 | In this paper we describe the systems we developed for the English (lexical and all-words) and Basque tasks. They were all supervised systems based on Yarowsky's Decision Lists. We used Semcor for training in the English all-words task. We defined different feature sets for each language. For Basque, in order to extract all the information from the text, we defined features that have not been used before in the literature, using a morphological analyzer. We also implemented systems that selected good features automatically and were able to obtain a prefixed precision (85%) at the cost of coverage. The systems that used all the features were identified as BCU-ehu-dlist-all and the systems that selected some features as BCU-ehu-dlist-best. | Decision Lists for English and Basque |
d195848205 | ||
d1868213 | The English articles (the definite the, the indefinite a/an, and zero) can be troublesome for English language learners. Thomas [1] demonstrated that English second language (L2) learners from first languages (L1) that do not have the equivalent of an article system encounter problems using articles. Ionin and Wexler [2] found that such learners fluctuate between definiteness and specificity. This study examined English L2 article use with Taiwanese English learners to determine the potential factors influencing English article substitution and error patterns in their academic writing. The corpus-based analysis used natural data collected for the Academic Writing Textual Analysis (AWTA) corpus [3]. A detailed online corpus tagging system was developed to examine article use, covering semantic features (specificity and hearer knowledge) as well as the other features of the English article. The results indicated that learners overused both the definite and indefinite articles but underused the zero article. The definite article was substituted for the indefinite article in specific environments. Although no significant difference existed between specific and non-specific semantic environments in zero article errors, a significant difference emerged between plural and mass/non-count nouns. These results suggest that, in regard to writing, learners need to focus on the semantic/pragmatic relationships of specificity and hearer (or reader) knowledge. | |
d14386043 | The importance of dealing with unknown words in Natural Language Processing (NLP) is growing as NLP systems are used in more and more applications. One aid in predicting the lexical class of words that do not appear in the lexicon (referred to as unknown words) is the use of syntactic parsing rules. The distinction between closed-class and open-class words together with morphological recognition appears to be pivotal in increasing the ability of the system to predict the lexical categories of unknown words. An experiment is performed to investigate the ability of a parser to parse unknown words using morphology and syntactic parsing rules without human intervention. This experiment shows that the performance of the parser is enhanced greatly when morphological recognition is used in conjunction with syntactic rules to parse sentences containing unknown words from the TIMIT corpus. | Analysis of Unknown Lexical Items using Morphological and Syntactic Information with the TIMIT Corpus |
d30602676 | We present in this work a new approach to the automatic diacritization of Arabic texts, using three stages. During the first phase, we integrated a lexical database containing the most frequent words of Arabic with morphological analysis by Alkhalil Morpho Sys, which provides the possible diacritizations for each word. The objective of the second module is to eliminate ambiguity using a statistical approach whose learning phase was performed on a corpus composed of several Arabic books. This approach uses hidden Markov models (HMM), with undiacritized Arabic words taken as observed states and diacritized words considered as hidden states. The system uses smoothing techniques to circumvent the problem of word transitions unseen in the corpus, and the Viterbi algorithm to select the optimal solution. The third step uses a character-based HMM model to deal with the case of unanalyzed words. Keywords: Arabic language, automatic diacritization, morphological analysis, hidden Markov model, corpus, smoothing, Viterbi algorithm. | 21ème Traitement Automatique des Langues Naturelles |
d5756693 | The following are citations selected by title and abstract as being related to computational linguistics or knowledge representation, resulting from a computer search, using the BRS Information Technologies retrieval service, of the Dissertation Abstracts International (DAI) database produced by University Microfilms International.Included are the UM order number and year-month of entry into the database; author; university, degree, and, if available, number of pages; title; DAI subject category chosen by the author of the dissertation; and abstract. References are sorted first by DAI subject category and second by author. Citations denoted by an MAI reference do not yet have abstracts in the database and refer to abstracts in the published Masters Abstracts International.Unless otherwise specified, paper or microform copies of dissertations may be ordered from University Microfilms International, Dissertation Copies, Post Office Box 1764, Ann Arbor, MI 48106; telephone for U.S. (except Michigan, Hawaii, Alaska): 1-800-521-3042, for Canada: 1-800-268-6090. Price lists and other ordering and shipping information are in the introduction to the published DAI. An alternate source for copies is sometimes provided at the end of the abstract. | ABSTRACTS OF CURRENT LITERATURE Logic Programming Semantics: Techniques and Applications |
d5911867 | Discourse parsing is a challenging task and is crucial for discourse analysis. In this paper, we focus on labelling argument spans of discourse connectives and sense identification in the CoNLL-2015 shared task setting. We have used syntactic features and have also tried a few semantic features. We employ a pipeline of classifiers, where the best features and parameters were selected for each individual classifier, based on experimental evaluation. We could only get results somewhat better than of the baseline on the overall task, but the results over some of the sub-tasks are encouraging. Our initial efforts at using semantic features do not seem to help. | Shallow Discourse Parsing with Syntactic and (a Few) Semantic Features |
d37330432 | Abstract: The traditional signal-subspace speech enhancement method exploits the fact that noise energy is distributed uniformly over the vector space of the signal while speech energy is concentrated in a subspace, and uses eigen-decomposition to separate the speech signal from the background noise. In the in-car noise environment, however, noise energy is concentrated in the low frequency bands and gradually decreases towards the high bands, so a single signal-subspace enhancement method can no longer remove the low-band background noise effectively. This paper proposes a sub-band processing scheme based on the characteristics of human hearing, combined with the signal-subspace enhancement method, to overcome this problem. Experimental validation was carried out on the TAICAR in-car speech database; the results show that the proposed method is better suited to in-car noise reduction than the traditional signal-subspace method, and the removal of low-frequency noise is also more pronounced. 1. Introduction: With the growing popularity of car navigation systems, which, beyond providing driving information and entertainment, combine with the wireless communication functions of mobile phones, the car has effectively become a mobile center for obtaining all kinds of everyday information. The traditional in-car human-machine interface uses a touch screen, which is not safe enough while driving; as real-time speech recognition technology matures, human-machine interfaces are bound to move towards spoken dialogue control. The driving environment is full of all kinds of noise, and for a speech recognition system this background noise severely degrades recognition results. Consequently, typical recognition systems require handheld or head-mounted microphones for close-talking recording to avoid interference from background noise. However, such recording equipment is inconvenient for the driver and passengers, so there is a need for a noise-robust microphone system that supports distant recording in the driving environment. This paper proposes the use of wavelet auditory sub-band processing and signal-subspace decomposition to achieve in-car background noise reduction. Ephraim and Van Trees proposed a speech enhancement system based on signal-subspace decomposition in 1995 [1]; it exploits the fact that noise energy is distributed uniformly over the signal vector space while speech energy lies in a subspace, uses eigen-decomposition to separate the speech signal from the background noise, and further applies a linear estimator to obtain the enhanced speech. Because the computational complexity of eigen-decomposition is high, this work adopts subspace tracking for the eigen-decomposition, using the PAST algorithm (Projection Approximation Subspace Tracking) [2], so as to suit real-time applications. In the in-car noise environment, noise energy is greatest in the low bands and gradually decreases towards the high bands; our experiments found that a single signal-subspace enhancement method can no longer remove the low-band background noise effectively. This paper therefore proposes a sub-band processing scheme based on the characteristics of human hearing, combined with the signal-subspace … | In-Car Noise Cancellation Using Wavelet Auditory Sub-band Processing and Signal Subspace Decomposition (王駿發, 楊宗憲, 張凱行, Department of Electrical Engineering, National Cheng Kung University) |
d16555705 | This paper presents NomLex-BR, a lexical resource describing Brazilian Portuguese nominalizations, and its integration with OpenWordnet-PT. We first describe the original English NOMLEX lexical resource and how we used it to bootstrap a Portuguese version. Subsequently, we describe how this lexicon can be embedded into OpenWordnet-PT, which facilitates its use and helps spot-checking both the bigger integrated resource and the original lexicon. Lastly, we outline some of the other, more substantial work that we plan to engage for the project of using linguistic insights for knowledge representation in Portuguese. | Embedding NomLex-BR nominalizations into OpenWordnet-PT |
d226239124 | ||
d7650859 | We describe an approach to automatically detect and annotate definitions for technical terms in German text corpora. This approach focuses on verbs that typically appear in definitions (= definitor verbs). We specify search patterns based on the valency frames of these definitor verbs and use them (1) to detect and delimit text segments containing definitions and (2) to annotate their main functional components: the definiendum (the term that is defined) and the definiens (meaning postulates for this term). On the basis of these annotations we aim at automatically extracting WordNet-style semantic relations that hold between the head nouns of the definiendum and the head nouns of the definiens. In this paper, we describe our annotation scheme for definitions and report on two studies: (1) a pilot study that evaluates our definition extraction approach using a German corpus with manually annotated definitions as a gold standard, and (2) a feasibility study that evaluates the possibility of extracting hypernym, hyponym and holonym relations from these annotated definitions. | Automated detection and annotation of term definitions in German text corpora |
d236145015 | ||
d14025460 | Manually verified pitch data were compared with output from a commonly used pitch-tracking algorithm. The manual pitch data made statistically significantly better "final rise" predictions than the automatic pitch data, in spite of great similarity between the two sets of measurements. Pitch-tracker doubling/halving errors are described. | A Study of Automatic Pitch Tracker Doubling/Halving "Errors"
d6033525 | Many factors are thought to increase the chances of misrecognizing a word in ASR, including low frequency, nearby disfluencies, short duration, and being at the start of a turn. However, few of these factors have been formally examined. This paper analyzes a variety of lexical, prosodic, and disfluency factors to determine which are likely to increase ASR error rates. Findings include the following. (1) For disfluencies, effects depend on the type of disfluency: errors increase by up to 15% (absolute) for words near fragments, but decrease by up to 7.2% (absolute) for words near repetitions. This decrease seems to be due to longer word duration. (2) For prosodic features, there are more errors for words with extreme values than words with typical values. (3) Although our results are based on output from a system with speaker adaptation, speaker differences are a major factor influencing error rates, and the effects of features such as frequency, pitch, and intensity may vary between speakers. | Which words are hard to recognize? Prosodic, lexical, and disfluency factors that increase ASR error rates |
d12037515 | In this paper the development of an opinion summarization system that works on a Bengali news corpus is described. The system identifies the sentiment information in each document, aggregates it, and represents the summary information in text. The present system follows a topic-sentiment model for sentiment identification and aggregation. The topic-sentiment model is designed as discourse-level theme identification, and topic-sentiment aggregation is achieved by theme clustering (k-means) and a document-level theme relational graph representation. The document-level theme relational graph is finally used for candidate summary sentence selection by standard PageRank algorithms used in Information Retrieval (IR). As Bengali is a resource-constrained language, the building of an annotated gold standard corpus and the acquisition of linguistic tools for lexico-syntactic, syntactic and discourse-level feature extraction are described in this paper. The reported accuracy of the theme detection technique is 83.60% (precision), 76.44% (recall) and 79.85% (F-measure). The summarization system has been evaluated with a precision of 72.15%, recall of 67.32% and F-measure of 69.65%. | Topic-Based Bengali Opinion Summarization
d2963978 | In this work, we propose a method for automatic analysis of attitude (affect, judgment, and appreciation) in sentiment words. The first stage of the proposed method is an automatic separation of unambiguous affective and judgmental adjectives from miscellaneous ones that express appreciation or different attitudes depending on context. In our experiments with machine learning algorithms we employed three feature sets based on Pointwise Mutual Information, word-pattern co-occurrence, and minimal path length. The next stage of the proposed method is to estimate the potential of miscellaneous adjectives to convey affect, judgment, and appreciation. Based on the sentences automatically collected for each adjective, the algorithm analyses the context of phrases that contain the sentiment word by considering morphological tags, high-level concepts, and named entities, and then decides on contextual attitude labels. Finally, the appraisal potentials of a word are calculated based on the number of sentences related to each type of attitude. | Analyzing Sentiment Word Relations with Affect, Judgment, and Appreciation
d18642475 | User generated content often contains non-standard words that hinder effective automatic text processing. In this paper, we present a system we developed to perform lexical normalization for English Twitter text. It first generates candidates based on past knowledge and a novel string similarity measurement, and then selects a candidate using features learned from training data. The system has a constrained mode and an unconstrained mode. The constrained mode participated in the W-NUT noisy English text normalization competition (Baldwin et al., 2015) and achieved the best F1 score. | NCSU-SAS-Ning: Candidate Generation and Feature Engineering for Supervised Lexical Normalization
d16515476 | All systems for automatic sign language translation and recognition, in particular statistical systems, rely on adequately sized corpora. For this purpose, we created the Phoenix corpus that is based on German television weather reports translated into German Sign Language. It comes with a rich annotation of the video data, a bilingual text-based sentence corpus and a monolingual German corpus. | A German Sign Language Corpus of the Domain Weather Report |
d10436913 | Evaluation is a key part of any research and development effort, but the goals and focus of evaluations are often narrow in scope, addressing a specific algorithm or technique, or analyzing a single result. All of the evaluation work done to date on text summarization systems has been by the developers of individual systems, usually to study and improve sentence selection criteria. Under TIPSTER III, DARPA is sponsoring a task-based evaluation of multiple text summarization systems. The focus of this evaluation will be on user needs, and the feasibility of applying summarization technology to a variety of tasks. | A Proposal for Task-based Evaluation of Text Summarization Systems
d3871114 | In this paper we describe a person clustering system for a given document set and report the results we obtained on the test set of the Chinese personal name (CPN) disambiguation task of CIPS-SIGHAN 2010. This task consists of clustering a set of Xinhua news documents that mention an ambiguous CPN according to the real-world named entity referred to. Several features, including named entities (NE) and common nouns generated from the documents, and a variety of rules are employed in our system. The system achieves F = 86.36% with the B-Cubed scoring metric and F = 90.78% with the purity-based metric. | DLUT: Chinese Personal Name Disambiguation with Rich Features
d220445930 | ||
d19754679 | Semantic role labeling (SRL) needs not only lexical and syntactic information, but also word sense information. However, because of the lack of corpora annotated with both word senses and semantic roles, there has been little research on using word senses for SRL. The release of OntoNotes provides an opportunity for us to study how to use word senses for SRL. In this paper, we present some novel word sense features for SRL and find that they improve performance significantly. | Improving Semantic Role Labeling with Word Sense
d21715557 | WordNet-like resources are lexical databases containing highly relevant information and data that can be exploited in more complex computational linguistics research and applications. The building process requires manual and automatic tasks, which can be more arduous if the language is a minority one with few digital resources. This study focuses on the construction of an initial WordNet database for a low-resourced indigenous language of Peru: Shipibo-Konibo (shp). First, the stages of development from a scarce scenario (a bilingual shp-es dictionary) are described. Then, a synset alignment method is proposed that compares the definition glosses in the dictionary (written in Spanish) with the content of a Spanish WordNet, using word2vec similarity as the proximity measure. Finally, the synsets are evaluated against a manually annotated gold standard in Shipibo-Konibo. The results obtained are promising, and this resource is expected to serve well in further applications, such as word sense disambiguation and even machine translation for the shp-es language pair. | WordNet-Shp: Towards the Building of a Lexical Database for a Peruvian Minority Language
d229365915 | ||
d232021735 | ||
d203691203 | This paper reports on a Knowledge Transfer Partnership (KTP) project that aimed to implement machine translation technology at a Welsh language service provider, Cymen Cyf. The project involved leveraging the company's large supply of previous translations in order to train custom domain-specific translation engines for its various clients. BLEU scores achieved ranged from 59.06 for the largest domain-specific engine to 48.53 for the smallest. A small experiment using the TAUS DQF productivity evaluation tool (Görög, 2014) was also run on the highest-scoring translation engine, which showed an average productivity gain of 30% across all translators. Domain-specific engines were ultimately successfully introduced into the workflow for two main clients, although a lack of domain-specific data proved problematic for others. Various techniques, such as domain adaptation and improved tagging of previous translations, may ameliorate this situation in the future. | Embedding English to Welsh MT in a Private Company
d2071551 | There exist various different discourse annotation schemes that vary both in the perspectives of discourse structure considered and the granularity of textual units that are annotated. Comparison and integration of multiple schemes have the potential to provide enhanced information. However, the differing formats of corpora and tools that contain or produce such schemes can be a barrier to their integration. U-Compare is a graphical, UIMA-based workflow construction platform for combining interoperable natural language processing (NLP) resources, without the need for programming skills. In this paper, we present an extension of U-Compare that allows the easy comparison, integration and visualisation of resources that contain or output annotations based on multiple discourse annotation schemes. The extension works by allowing the construction of parallel subworkflows for each scheme within a single U-Compare workflow. The different types of discourse annotations produced by each sub-workflow can be either merged or visualised side-by-side for comparison. We demonstrate this new functionality by using it to compare annotations belonging to two different approaches to discourse analysis, namely discourse relations and functional discourse annotations. Integrating these different annotation types within an interoperable environment allows us to study the correlations between different types of discourse and report on the new insights that this allows us to discover. * The authors have contributed equally to the development of this work and production of the manuscript. | Towards a Better Understanding of Discourse: Integrating Multiple Discourse Annotation Perspectives Using UIMA |
d19330198 | This article presents the results of a quantitative semantic analysis of typical lexical units in a specialised technical corpus of metalworking machinery in French. The study aims to find out whether and to what extent the keywords of the technical corpus are monosemous. A simple regression analysis, used to examine the correlation between the typicality rank and the monosemy rank of the keywords, raises some statistical and methodological problems, notably a frequency bias. In order to overcome these problems, we adopt an alternative approach for the identification of typical lexical units, called Stable Lexical Marker Analysis (SLMA). We discuss the quantitative and statistical results of this approach with respect to the correlation between typicality rank and monosemy rank. Keywords: typical lexical units, keyword analysis, stable lexical marker analysis, quantitative semantics, regression analysis. | Etude sémantique des mots-clés et des marqueurs lexicaux stables dans un corpus technique (Semantic analysis of keywords and stable lexical markers in a technical corpus)
d7102676 | | GOAL ORIENTED PARSING: IMPROVING THE EFFICIENCY OF NATURAL LANGUAGE ACCESS TO RELATIONAL DATA BASES
d1208298 | This paper describes an approach to using semantic representations for learning information extraction (IE) rules with a type-oriented inductive logic programming (ILP) system. NLP components of a machine translation system are used to automatically generate semantic representations of a text corpus that can be given directly to an ILP system. The latest experimental results show high precision and recall for the learned rules. | Learning Semantic-Level Information Extraction Rules by Type-Oriented ILP
d225062746 | ||
d1230067 | We present UWN, a large multilingual lexical knowledge base that describes the meanings and relationships of words in over 200 languages. This paper explains how link prediction, information integration and taxonomy induction methods have been used to build UWN based on WordNet and extend it with millions of named entities from Wikipedia. We additionally introduce extensions to cover lexical relationships, frame-semantic knowledge, and language data. An online interface provides human access to the data, while a software API enables applications to look up over 16 million words and names. | UWN: A Large Multilingual Lexical Knowledge Base |
d241583568 | What is the first word that comes to your mind when you hear giraffe, or damsel, or freedom? Such free associations contain a huge amount of information on the mental representations of the corresponding concepts, and are thus an extremely valuable testbed for the evaluation of semantic representations extracted from corpora. In this paper, we present FAST (Free ASsociation Tasks), a free association dataset for English rigorously sampled from two standard free association norms collections (the Edinburgh Associative Thesaurus and the University of South Florida Free Association Norms), discuss two evaluation tasks, and provide baseline results. In parallel, we discuss methodological considerations concerning the desiderata for a proper evaluation of semantic representations. | FAST: A carefully sampled and cognitively motivated dataset for distributional semantic evaluation |
d5151805 | Online discussion forums are a valuable means for users to resolve specific information needs, both interactively for the participants and statically for users who search/browse over historical thread data. However, the complex structure of forum threads can make it difficult for users to extract relevant information. Automatically identifying whether the problem in a thread has been solved or not can help direct users to threads where the original problem has been solved, hence enhancing their prospects of solving their particular problem. In this paper, we investigate the task of Solvedness classification by exploiting the discourse structure of forum threads. Experimental results show that simple features derived from thread discourse structure can greatly boost the accuracy of Solvedness classification, which has been shown to be very difficult in previous research. | The Utility of Discourse Structure in Identifying Resolved Threads in Technical User Forums |
d600208 | The semantic behavior of derivational processes has been investigated with compositional distributional models relating the meaning of base, affix, and derivative (e.g., anti+capitalist → anticapitalist). While broadly successful, these approaches model only how distributional behavior is affected by derivation in general, and their predictions cannot be interpreted at the level of linguistic regularities. In this paper, we adopt an alternative approach and focus on the impact of derivation on finer-grained semantic properties of the base. We focus on (the psycholinguistically prominent) emotional valence, i.e., the speakers' positive/negative evaluation of the word referent. We present two case studies on German derivational patterns, combining distributional and regression analysis. We are able to establish the broad presence of valence effects in German derivation as well as strong interactions with concreteness. | Are doggies cuter than dogs? Emotional valence and concreteness in German derivational morphology
d21705101 | Web-based tools and workflow engines often cannot be applied to data with restrictive property rights or to big data. In both cases, it is better to move the tools to the data rather than having the data travel to the tools. In this paper, we report on progress in bringing together the CLARIN-based WebLicht workflow engine with the EUDAT-based Generic Execution Framework to address this issue. | Handling Big Data and Sensitive Data Using EUDAT's Generic Execution Framework and the WebLicht Workflow Engine
d7046575 | The weak equivalence of Combinatory Categorial Grammar (CCG) and Tree-Adjoining Grammar (TAG) is a central result of the literature on mildly context-sensitive grammar formalisms. However, the categorial formalism for which this equivalence has been established differs significantly from the versions of CCG that are in use today. In particular, it allows restriction of combinatory rules on a per grammar basis, whereas modern CCG assumes a universal set of rules, isolating all cross-linguistic variation in the lexicon. In this article we investigate the formal significance of this difference. Our main result is that lexicalized versions of the classical CCG formalism are strictly less powerful than TAG. | Lexicalization and Generative Power in CCG |
d10478932 | This paper is concerned with the standardisation of evaluation metrics for lexical acquisition over precision grammars, which are attuned to actual parser performance. Specifically, we investigate the impact that lexicons at varying levels of lexical item precision and recall have on the performance of pre-existing broad-coverage precision grammars in parsing, i.e., on their coverage and accuracy. The grammars used for the experiments reported here are the LinGO English Resource Grammar (ERG; Flickinger, 2000) and JACY (Siegel and Bender, 2002), precision grammars of English and Japanese, respectively. Our results show convincingly that traditional F-score-based evaluation of lexical acquisition does not correlate with actual parsing performance. What we argue for, therefore, is a recall-heavy interpretation of F-score in designing and optimising automated lexical acquisition algorithms. | The Corpus and the Lexicon: Standardising Deep Lexical Acquisition Evaluation
d226239134 | ||
d5375922 | This paper reports on LCC's participation in the Third PASCAL Recognizing Textual Entailment Challenge. First, we summarize our semantic, logic-based approach, which proved successful in the previous two challenges. Then we highlight this year's innovations, which contributed to an overall accuracy of 72.25% on the RTE 3 test data. The novelties include new resources, such as the eXtended WordNet KB, which provides a large number of world knowledge axioms; event and temporal information provided by the TARSQI toolkit; logic form representations of events, negation, coreference and context; and new improvements to lexical chain axiom generation. Finally, the system's performance and error analysis are discussed. | COGEX at RTE3
d232021723 | ||
d10500352 | We present a novel application of NLP and text mining to the analysis of financial documents. In particular, we describe an implemented prototype, Maytag, which combines information extraction and subject classification tools in an interactive exploratory framework. We present experimental results on their performance, as tailored to the financial domain, and some forward-looking extensions to the approach that enable users to specify classifications on the fly. | Maytag: A multi-staged approach to identifying complex events in textual data