Fields: _id (string, 4-10 chars) · text (string, 0-18.4k chars) · title (string, 0-8.56k chars)
d15730149
The following work describes a voting system to automatically classify the sense selection of the complex types Location/Organization and Container/Content, which depend on regular polysemy as described by the Generative Lexicon (Pustejovsky, 1995). This kind of sense alternation very often presents semantic underspecification between its two possible selected senses. Such underspecification is not traditionally contemplated in word sense disambiguation systems, which are still coping with the need for a representation and recognition of underspecification (Pustejovsky, 2009). The data are characterized by the morphosyntactic and lexical environment of the headwords and provided as input to a classifier. The baseline decision tree classifier is compared against an eight-member voting scheme obtained from variants of the training data generated by modifications of the class representation and from two different classification algorithms, namely decision trees and k-nearest neighbors. The voting system improves the accuracy for the non-underspecified senses, but the underspecified sense remains difficult to identify.
A voting scheme to detect semantic underspecification
d219307447
d1326531
Paraphrasing is an important aspect of language competence; however, EFL learners have long had difficulty paraphrasing in their writing owing to their limited language proficiency. Automatic paraphrase suggestion systems can therefore be useful for writers. In this paper, we present PREFER, a paraphrase reference tool for helping language learners improve their writing skills. We transform the paraphrase generation problem into a graph problem in which phrases are treated as nodes and translation similarities as edges, and we adopt the PageRank algorithm to rank and filter the paraphrases generated by the pivot-based paraphrase generation method. We manually evaluate the performance of our method and assess the effectiveness of PREFER in language learning. The results show that our method successfully preserves both the semantic meaning and syntactic structure of the query phrase. Moreover, the students' writing performance improves most with the assistance of PREFER.
PREFER: Using a Graph-Based Approach to Generate Paraphrases for Language Learning
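To make the graph formulation above concrete, here is a minimal sketch of ranking paraphrase candidates with PageRank over a phrase graph, assuming a precomputed list of translation-similarity edge weights (the phrases and weights below are illustrative, not PREFER's data):

```python
# Rank paraphrase candidates by PageRank over a phrase graph whose edge
# weights approximate translation similarity via a shared pivot language.
import networkx as nx

def rank_paraphrases(query_phrase, similarity_edges):
    """similarity_edges: iterable of (phrase_a, phrase_b, weight) triples."""
    g = nx.Graph()
    g.add_weighted_edges_from(similarity_edges)
    scores = nx.pagerank(g, alpha=0.85, weight="weight")
    candidates = [p for p in g.nodes if p != query_phrase]
    return sorted(candidates, key=scores.get, reverse=True)

edges = [("give up", "abandon", 0.8), ("give up", "quit", 0.7),
         ("abandon", "quit", 0.6), ("give up", "hand over", 0.1)]
print(rank_paraphrases("give up", edges))
```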
d9919437
This paper addresses the impact of the transmission channel on human-to-human speech communication quality as well as on ASR performance. Transmission channels include standard wireline or mobile telephone networks and IP-based networks, which can be operated via different types of user interfaces. In order to gain control over the transmission channel, a simulation model is developed. It implements all types of stationary impairments which can be found in the mentioned networks. Human-to-human speech communication quality in these situations is estimated using a network planning model. Experiments are carried out for assessing ASR performance over the same channel, with three different types of recognizers: two prototypical recognizers used in a telephone-based information server, and a standardized set-up developed under the AURORA framework for distributed ASR. It turns out that some interesting differences exist between the behavior of ASR system performance and speech quality in human-to-human communication. These differences should be taken into account by both developers of ASR systems and transmission network planners.
Diagnostic Assessment of Telephone Transmission Impact on ASR Performance and Human-to-Human Speech Quality
d12046735
Natural language understanding depends heavily on assessing veridicality (whether events mentioned in a text are viewed as happening or not), but little consideration is given to this property in current relation and event extraction systems. Furthermore, the work that has been done has generally assumed that veridicality can be captured by lexical semantic properties, whereas we show that context and world knowledge play a significant role in shaping veridicality. We extend the FactBank corpus, which contains semantically driven veridicality annotations, with pragmatically informed ones. Our annotations are more complex than the lexical assumption predicts but systematic enough to be included in computational work on textual understanding. They also indicate that veridicality judgments are not always categorical, and should therefore be modeled as distributions. We build a classifier to automatically assign event veridicality distributions based on our new annotations. The classifier relies not only on lexical features like hedges or negations, but also on structural features and approximations of world knowledge, thereby providing a nuanced picture of the diverse factors that shape veridicality.
Did It Happen? The Pragmatic Complexity of Veridicality Assessment
d16611324
Statistical n-gram language modeling is used in many domains, such as speech recognition, language identification, machine translation, character recognition, and topic classification. Most language modeling approaches work on n-grams of terms. This paper reports on ongoing research in the MEMPHIS project, which employs models based on character-level n-grams instead of term n-grams. The models are used for the multi-lingual classification of documents according to the topics of the MEMPHIS domains. We present methods capable of dealing robustly with large vocabularies and informal, erroneous texts in different languages. We also report on our results of using multi-lingual language models and experimenting with different classification parameters such as smoothing techniques and n-gram lengths.
N-Gram Language Modeling for Robust Multi-Lingual Document Classification
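A toy character-level n-gram classifier in the spirit of the approach above (not the MEMPHIS implementation): per-topic trigram counts with additive smoothing, scored by log-likelihood:

```python
import math
from collections import Counter

def char_ngrams(text, n=3):
    text = f"  {text.lower()} "          # light padding marks word boundaries
    return [text[i:i + n] for i in range(len(text) - n + 1)]

class NgramTopicModel:
    def __init__(self, n=3, alpha=0.5):
        self.n, self.alpha = n, alpha
        self.counts, self.totals = {}, {}

    def fit(self, labeled_docs):                  # [(topic, text), ...]
        for topic, text in labeled_docs:
            c = self.counts.setdefault(topic, Counter())
            c.update(char_ngrams(text, self.n))
        self.totals = {t: sum(c.values()) for t, c in self.counts.items()}

    def score(self, text, topic):
        c, total = self.counts[topic], self.totals[topic]
        vocab = len(c) + 1                        # crude additive smoothing
        return sum(math.log((c[g] + self.alpha) / (total + self.alpha * vocab))
                   for g in char_ngrams(text, self.n))

    def classify(self, text):
        return max(self.counts, key=lambda t: self.score(text, t))
```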
d14142473
We have created a synchronous corpus of acoustic and 3D facial marker data from multiple speakers for adaptive audio-visual text-to-speech synthesis. The corpus contains data from one female and two male speakers and amounts to 223 Austrian German sentences each. In this paper, we first describe the recording process, using professional audio equipment and a marker-based 3D facial motion capturing system for the audio-visual recordings. We then turn to post-processing, which incorporates forced alignment, principal component analysis (PCA) on the visual data, and some manual checking and corrections. Finally, we describe the resulting corpus, which will be released under a research license at the end of our project. We show that the standard PCA-based feature extraction approach also works on a multi-speaker database in the adaptation scenario, where there is no data from the target speaker available in the PCA step.
Building a Synchronous Corpus of Acoustic and 3D Facial Marker Data for Adaptive Audio-visual Speech Synthesis
d426998
Currently, Chinese event extraction systems suffer much from the low quality of annotated event corpora and the high ratio of pseudo trigger mentions to true ones. To resolve these two issues, this paper proposes a joint model of trigger identification and event type determination. In addition, several trigger filtering schemas are introduced to filter out as many pseudo trigger mentions as possible. Evaluation on the ACE 2005 Chinese corpus justifies the effectiveness of our approach over a strong baseline.
Joint Modeling of Trigger Identification and Event Type Determination in Chinese Event Extraction
d7424558
The objective of this paper is to formalize the intuition about the complexity of syntactic structures. We propose a definition of structural complexity such that sentences ranked by our definition as more complex are generally more difficult for humans to process. We justify the definition by showing how it is able to account for several seemingly unrelated phenomena in natural languages.
On the Structural Complexity of Natural Language Sentences
d15946286
We present an algorithm for automatic correction of spelling errors at the sentence level, which uses a noisy channel model and feature-based reranking of hypotheses. Our system is designed for Russian and clearly outperforms the winner of the SpellRuEval-2016 competition. We show that language model size has the greatest influence on spelling correction quality. We also experiment with different types of features and show that morphological and semantic information also improves the accuracy of spellchecking.
Spelling Correction for Morphologically Rich Language: a Case Study of Russian
d8651407
The paper describes a project for continuous data collection for a spoken dialogue system engaged in Question-Answering interactions in English. The Wizard-of-Oz method used in the bootstrap phase is presented, and several types of resulting dialogue annotations are described. The resulting corpus will be publicly released.
The DBOX Corpus Collection of Spoken Human-Human and Human-Machine Dialogues
d16453143
Honorific agreement is one of the main properties of languages like Korean or Japanese, playing an important role in appropriate communication. This makes the deep processing of honorific information crucial in various computational applications such as spoken language translation and generation. We argue that, contrary to the previous literature, an adequate analysis of Korean honorification involves a system that has access not only to morphosyntax but to semantics and pragmatics as well. Along these lines, we have developed a typed feature structure grammar of Korean (based on the framework of HPSG), and implemented it in the Linguistic Knowledge Builder System (LKB). The results of parsing our experimental test suites show that our grammar provides us with enriched grammatical information that can lead to the development of a robust dialogue system for the language.
Deep Processing of Honorification Phenomena in a Typed Feature Structure Grammar
d236478090
Multi-modal machine translation (MMT) aims at using images to help disambiguate the target during translation and to improve robustness, but some recent works have shown that the contribution of visual features is either negligible or incremental. In this paper, we show that incorporating vision-language pre-training (VLP) on the source side can improve multi-modal translation quality significantly. Motivated by BERT, VLP aims to learn better cross-modal representations that improve target sequence generation. We simply adapt BERT to a cross-modal domain for vision-language pre-training, and the downstream multi-modal machine translation can substantially benefit from the pre-training. We also introduce an attention-based modality loss to promote image-text alignment in the latent semantic space. An ablation study verifies that it is effective in further improving translation quality. Our experiments on the widely used Multi30K dataset show BLEU score increases of up to 6.2 points compared with the text-only model, achieving state-of-the-art results by a large margin in the semi-unconstrained scenario and indicating a possible direction to rejuvenate multi-modal machine translation.
Probing Multi-modal Machine Translation with Pre-trained Language Model
d16575107
This paper presents a method for identifying token instances of verb particle constructions (VPCs) automatically, based on the output of the RASP parser. The proposed method pools together instances of VPCs and verb-PPs from the parser output and uses the sentential context of each such instance to differentiate VPCs from verb-PPs. We show our technique to perform at an F-score of 97.4% at identifying VPCs in Wall Street Journal and Brown Corpus data taken from the Penn Treebank.
Automatic Identification of English Verb Particle Constructions using Linguistic Features
d199379328
d20124105
In this paper we present ongoing efforts to expand the depth and breadth of the Open Multilingual Wordnet's coverage by introducing two new classes of non-referential concepts to wordnet hierarchies: interjections and numeral classifiers. The lexical semantic hierarchy pioneered by Princeton WordNet has traditionally restricted its coverage to referential and contentful classes of words, such as nouns, verbs, adjectives and adverbs. Previous efforts have been made to enrich wordnet resources, including, for example, the inclusion of pronouns, determiners and quantifiers within their hierarchies. Following similar efforts, and motivated by the ongoing semantic annotation of the NTU-Multilingual Corpus, we decided that the four traditional classes of words present in wordnets were too restrictive. Though non-referential, interjections and classifiers possess interesting semantic features that can be well captured by lexical resources like wordnets. In this paper, we further motivate our decision to include non-referential concepts in wordnets and give an account of the current state of this expansion.
Wow! What a useful extension! Introducing Non-Referential Concepts to Wordnet
d10665510
This article describes a method for composing fluent and complex natural language questions, while avoiding the standard pitfalls of free text queries. The method, based on Conceptual Authoring, is targeted at question-answering systems where reliability and transparency are critical, and where users cannot be expected to undergo extensive training in question composition. This scenario is found in most corporate domains, especially in applications that are risk-averse. We present a proof-of-concept system we have developed: a question-answering interface to a large repository of medical histories in the area of cancer. We show that the method allows users to successfully and reliably compose complex queries with minimal training.
Composing Questions through Conceptual Authoring
d604983
This paper presents a joint model for template filling, where the goal is to automatically specify the fields of target relations such as seminar announcements or corporate acquisition events. The approach models mention detection, unification and field extraction in a flexible, feature-rich model that allows for joint modeling of interdependencies at all levels and across fields. Such an approach can, for example, learn likely event durations and the fact that start times should come before end times. While the joint inference space is large, we demonstrate effective learning with a Perceptron-style approach that uses simple, greedy beam decoding. Empirical results in two benchmark domains demonstrate consistently strong performance on both mention detection and template filling tasks.
Discriminative Learning for Joint Template Filling
d15141967
In order to help computational linguists, we have conceived and developed a linguistic software engineering environment whose goal is to set up reusable and extensible toolkits for natural language processing. This environment is based on a set of natural language processing components at the morphological, syntactic and semantic levels. These components are generic and extensible, and can be used separately or with specific problem-solving units in global strategies built for man-machine communication (according to the general model developed in the Language and Cognition group: Caramel). All these tools are complemented with graphical interfaces, allowing users outside the field of computer science to use them very easily. In this paper, we first present the syntactic analysis, based on a chart parser that uses an LFG grammar for French, and the semantic analysis, based on conceptual graphs. Then we show how these two analyses collaborate to produce semantic representations and sentences. Before concluding, we show how these modules are used through a distributed architecture based on CORBA (distributed Smalltalk) implementing the CARAMEL multi-agent architecture.
An Object-Oriented Linguistic Engineering Environment using LFG (Lexical Functional Grammar) and CG (Conceptual Graphs)
d3024142
This paper describes a dependency based tagging scheme for creating tree banks for Indian languages. The scheme has been so designed that it is comprehensive, easy to use with linear notation and economical in typing effort. It is based on Paninian grammatical model.
AnnCorra : Building Tree-banks in Indian Languages
d238638452
d14476916
We demonstrate a text to sign language translation system for investigating sign language (SL) structure and assisting in the production of sign narratives and informative presentations. The system is demonstrable on a conventional PC laptop computer.
A Prototype Text to British Sign Language (BSL) Translation System
d760575
We present an error mining tool that is designed to help human annotators to find errors and inconsistencies in their annotation. The output of the underlying algorithm is accessible via a graphical user interface, which provides two aggregate views: a list of potential errors in context and a distribution over labels. The user can always directly access the actual sentence containing the potential error, thus enabling annotators to quickly judge whether the found candidate is indeed incorrectly labeled.
A Graphical Interface for Automatic Error Mining in Corpora
d227231489
d2936965
This paper describes the system of the team Orange-Deskiñ used for the CoNLL 2017 UD Shared Task. We based our approach on an existing open-source tool (BistParser), which we modified in order to produce the required output, and additionally added a kind of pseudo-projectivisation. This was needed since some of the task's languages have a high percentage of non-projective dependency trees. In most cases we also employed word embeddings. For the 4 surprise languages, the data provided seemed too little to train on, so we decided to use the training data of typologically close languages instead. Our system achieved a macro-averaged LAS of 68.61% (10th in the overall ranking), which improved to 69.38% after bug fixes.
Multi-Model and Crosslingual Dependency Analysis
d11038905
Recently, monologue data such as lectures and commentary by professionals have come to be considered valuable intellectual resources and have been gathering attention. On the other hand, in order to use these monologue data effectively and efficiently, the data must not only be accumulated but also structured. This paper describes the construction of a Japanese spoken monologue corpus in which a dependency structure is given to each utterance. Spontaneous monologue includes many very long sentences composed of two or more clauses. In these sentences, a subject or adverb may be common to multiple clauses and can be considered to depend on multiple predicates. In order to represent such dependency information faithfully, our scheme allows a bunsetsu to depend on multiple bunsetsus.
A Syntactically Annotated Corpus of Japanese Spoken Monologue
d18682642
Terminological databases do not always provide detailed information on the linguistic behaviour of terms, although this is important for potential users such as translators or students. In this paper we describe a project that aims to fill this gap by proposing a method for annotating terms in sentences, based on the method developed within the FrameNet project (Ruppenhofer et al. 2010), and by implementing it in an online resource called DiCoInfo. We focus on the methodology we devised, and show with some preliminary results how similar actantial (i.e. argumental) structures can provide evidence for defining lexical relations in specific languages and capturing cross-linguistic equivalents. The paper argues that the syntactico-semantic annotation of the contexts in which the terms occur allows lexicographers to validate their intuitions concerning the linguistic behaviour of terms as well as interlinguistic relations between them. The syntactico-semantic annotation of contexts could, therefore, be considered a good starting point in terminology work that aims to describe the linguistic functioning of terms and to offer a sounder basis for defining interlinguistic relationships between terms that belong to different languages.
Capturing Syntactico-semantic Regularities among Terms: An Application of the FrameNet Methodology to Terminology
d1659910
Word graphs have various applications in the field of machine translation, so it is important for machine translation systems to produce compact word graphs of high quality. We describe the generation of word graphs for state-of-the-art phrase-based statistical machine translation and use these word graphs to provide an analysis of the search process. We evaluate the quality of the word graphs using the well-known graph word error rate, and additionally introduce two novel graph-to-string criteria: the position-independent graph word error rate and the graph BLEU score. Experimental results are presented for two Chinese-English tasks: the small IWSLT task and the NIST large data track task. For both tasks, we achieve significant reductions of the graph error rate already with compact word graphs.
Word Graphs for Statistical Machine Translation
d229365834
This paper presents our work in the WMT 2020 Word- and Sentence-Level Post-editing Effort Quality Estimation (QE) Shared Task. Our system follows the standard Predictor-Estimator architecture, with a pre-trained Transformer as the Predictor, and specific classifiers and regressors as Estimators. We integrate Bottleneck Adapter Layers in the Predictor to improve transfer learning efficiency and prevent over-fitting. At the same time, we jointly train the word- and sentence-level tasks in a unified model with multitask learning. Pseudo-PE assisted QE (PEAQE) is proposed, resulting in significant improvements in performance. Our submissions achieve competitive results in the word- and sentence-level sub-tasks for both the En-De/Zh language pairs.
HW-TSC's Participation at WMT 2020 Quality Estimation Shared Task
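For readers unfamiliar with Bottleneck Adapter Layers, here is a minimal Houlsby-style sketch of one such layer in PyTorch; the dimensions are illustrative, not the submission's actual configuration:

```python
import torch
import torch.nn as nn

class BottleneckAdapter(nn.Module):
    def __init__(self, hidden_dim=768, bottleneck_dim=64):
        super().__init__()
        self.down = nn.Linear(hidden_dim, bottleneck_dim)   # project down
        self.up = nn.Linear(bottleneck_dim, hidden_dim)     # project back up
        self.act = nn.GELU()

    def forward(self, hidden_states):
        # The residual connection keeps the pre-trained representation
        # intact; only the small adapter weights are updated in fine-tuning.
        return hidden_states + self.up(self.act(self.down(hidden_states)))

x = torch.randn(2, 10, 768)                 # (batch, seq_len, hidden)
print(BottleneckAdapter()(x).shape)         # torch.Size([2, 10, 768])
```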
d251274287
Automatic analysis of modern Chinese has greatly improved the accuracy of text mining in related fields, but studies of ancient Chinese are still relatively rare. Ancient text division and lexical annotation are important parts of classical literature comprehension, and previous studies have tried to construct auxiliary dictionaries and other fused knowledge to improve performance. In this paper, we propose a framework for ancient Chinese word segmentation and part-of-speech tagging that makes a twofold effort: on the one hand, we try to capture wordhood semantics; on the other hand, we re-predict the uncertain samples of the baseline model by introducing external knowledge. The performance of our architecture outperforms pre-trained BERT with CRF and existing tools such as Jiayan.
The Uncertainty-based Retrieval Framework for Ancient Chinese CWS and POS
d1107502
This paper discusses research on distinguishing word meanings in the context of information retrieval systems. We conducted experiments with three sources of evidence for making these distinctions: morphology, part-of-speech, and phrases. We have focused on the distinction between homonymy and polysemy (unrelated vs. related meanings). Our results support the need to distinguish homonymy and polysemy. We found: 1) grouping morphological variants makes a significant improvement in retrieval performance, 2) that more than half of all words in a dictionary that differ in part-of-speech are related in meaning, and 3) that it is crucial to assign credit to the component words of a phrase. These experiments provide a better understanding of word-based methods, and suggest where natural language processing can provide further improvements in retrieval performance.
Homonymy and Polysemy in Information Retrieval
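A toy illustration of the first finding (grouping morphological variants before matching); the suffix-stripping rules below are ad hoc stand-ins, not the paper's stemmer:

```python
SUFFIXES = ("ations", "ation", "ings", "ing", "ies", "es", "s", "ed")

def crude_stem(word):
    # Strip the first matching suffix, keeping at least a 3-char stem.
    for suf in SUFFIXES:
        if word.endswith(suf) and len(word) - len(suf) >= 3:
            return word[:-len(suf)]
    return word

def match_score(query, document):
    # Count document words whose stem also occurs in the query, so that
    # e.g. "computing" and "computation" conflate to the same stem.
    q = {crude_stem(w) for w in query.lower().split()}
    return sum(1 for w in document.lower().split() if crude_stem(w) in q)

print(match_score("computing research", "new computation research centers"))  # 2
```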
d20257980
d16800076
In this paper, we describe the system architecture used in the Semantic Textual Similarity (STS) task 6 pilot challenge. The goal of this challenge is to accurately identify five levels of semantic similarity between two sentences: equivalent, mostly equivalent, roughly equivalent, not equivalent but sharing the same topic, and no equivalence. We participated with two systems. The first system (rule-based) combines both semantic and syntactic features to arrive at the overall similarity. The proposed rules enable the system to adequately handle the domain knowledge gaps that are inherent when working with knowledge resources. As one of its main goals, the system suggests a set of domain-free rules to help the human annotator in scoring the semantic equivalence of two sentences. The second system is our baseline, in which we use the cosine similarity between the words in each sentence pair.
Sbdlrhmn: A Rule-based Human Interpretation System for Semantic Textual Similarity Task
d38595205
Concordances extracted from aligned bi-texts have become an extremely important tool for the bilingual lexicographer. This paper will show in a concrete way how bi-concordances are actually used in a bilingual dictionary project.
BILINGUAL CONCORDANCERS: A NEW TOOL FOR BILINGUAL LEXICOGRAPHERS
d8555426
Answer validation is a component of a question answering system which selects a reliable answer from the answer candidates extracted by other methods. In this paper, we propose an approach to answer validation based on the strengths of lexical association between the keywords extracted from a question sentence and each answer candidate. The proposed answer validation process is decomposed into two steps: the first is to extract appropriate keywords from a question sentence using word features and the strength of lexical association, while the second is to estimate the strength of the association between the keywords and an answer candidate based on the hit counts of search engines. In an experimental evaluation, we show that a good proportion (79%) of a multiple-choice quiz, "Who Wants to Be a Millionaire", can be solved by the proposed method.
Answer Validation by Keyword Association
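A sketch of the second step, scoring candidates by lexical association estimated from search-engine hit counts; web_hits is a hypothetical stand-in for real search-engine queries, and the counts below are fabricated for illustration:

```python
import math

def web_hits(query):
    # Placeholder for a real search-engine hit-count lookup.
    fake_index = {'"Einstein" "relativity"': 90000, '"Einstein"': 500000,
                  '"relativity"': 200000, '"Einstein" "Newton"': 30000,
                  '"Newton"': 600000}
    return fake_index.get(query, 1)

def association(keyword, candidate, total=10**9):
    # PMI-style association from joint and marginal hit counts.
    joint = web_hits(f'"{keyword}" "{candidate}"')
    return math.log((joint * total) /
                    (web_hits(f'"{keyword}"') * web_hits(f'"{candidate}"')))

def validate(keywords, candidates):
    return max(candidates,
               key=lambda c: sum(association(k, c) for k in keywords))

print(validate(["Einstein"], ["relativity", "Newton"]))   # relativity
```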
d236486306
This paper describes our submission (winning solution for Task A) to the Shared Task on Hateful Meme Detection at WOAH 2021. We build our system on top of a state-of-the-art system for binary hateful meme classification that already uses image tags such as race, gender, and web entities. We add further metadata such as emotions and experiment with data augmentation techniques, as hateful instances are underrepresented in the data set.
VL-BERT+: Detecting Protected Groups in Hateful Multimodal Memes
d21700132
This work presents an overview of the research presented at the LREC workshops over the years 1998-2016, with the aim of shedding light on the community represented by workshop participants in terms of country of origin, type of affiliation, and gender. There has also been an effort towards identifying the major topics dealt with, as well as the terminological variations observed in this time span. Data was retrieved from the portal of the European Language Resources Association (ELRA), which organizes the conference, and the resulting corpus, made up of workshop titles and of the related presentations, was then processed using a term extraction tool developed at ILC-CNR.
The LREC Workshops Map
d59800522
This paper describes a prototype Japanese-to-Chinese automatic language translation system. ALT-J/C (Automatic Language Translator -Japanese-to-Chinese) is a semantic transfer based system, which is based on ALT-J/E (a Japanese-to-English system), but written to cope with Unicode. It is also designed to cope with constructions specific to Chinese. This system has the potential to become a framework for multilingual translation systems.
ALT-J/C A Prototype Japanese-to-Chinese Automatic Language Translation System
d14466257
This paper proposes a novel method to extract named entities, including unfamiliar words which do not occur, or occur only a few times, in a training corpus, using a large unannotated corpus. The proposed method consists of two steps. The first step is to assign the most similar familiar word to each unfamiliar word based on their context vectors calculated from a large unannotated corpus. After that, traditional machine learning approaches are employed as the second step. Experiments on extracting Japanese named entities from the IREX corpus and the NHK corpus show the effectiveness of the proposed method.
Robust Extraction of Named Entity Including Unfamiliar Word
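A minimal sketch of the first step: mapping an unfamiliar word to its most similar familiar word via context vectors built from an unannotated corpus (the window size and raw co-occurrence counts are toy choices, not the paper's exact features):

```python
import numpy as np

def context_vectors(sentences, vocab, window=2):
    # Count co-occurrences of each vocab word with its neighbours.
    index = {w: i for i, w in enumerate(vocab)}
    vecs = {w: np.zeros(len(vocab)) for w in vocab}
    for sent in sentences:
        for i, w in enumerate(sent):
            if w not in vecs:
                continue
            for j in range(max(0, i - window), min(len(sent), i + window + 1)):
                if j != i and sent[j] in index:
                    vecs[w][index[sent[j]]] += 1
    return vecs

def most_similar_familiar(word, familiar_words, vecs):
    def cos(a, b):
        denom = np.linalg.norm(a) * np.linalg.norm(b) or 1.0
        return a @ b / denom
    return max(familiar_words, key=lambda f: cos(vecs[word], vecs[f]))
```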
d221878973
d58825626
Recent work in generative syntactic theory has shifted the conception of a natural language grammar from a homogeneous set of phrase structure (PS) rules to a heterogeneous set of well-formedness constraints on representations (see, for example, Chomsky (1981), Stowell (1981), Chomsky (1986a) and Pollard & Sag (1987)). In these theories it is assumed that the grammar contains principles that are independent of the language being parsed, together with principles that are parameterized to reflect the varying behavior of different languages. However, there is more to a theory of human sentence processing than just a theory of linguistic competence. A theory of performance consists of both linguistic knowledge and a parsing algorithm. This paper will investigate ways of exploiting principle-based syntactic theories directly in a parsing algorithm in order to determine whether or not a principle-based parsing algorithm can be compatible with psycholinguistic evidence. Principle-based parsing is an interesting research topic not only from a psycholinguistic point of view but also from a practical point of view. When PS rules are used, a separate grammar must be written for each language parsed. Each of these grammars contains a great deal of redundant information. For example, there may be two rules, in different grammars, that are identical except for the order of the constituents on the right hand side, indicating a difference in word order. This redundancy can be avoided by employing a universal phrase structure component (not necessarily in the form of rules) along with parameters and associated values. A principles-and-parameters approach provides a single compact grammar for all languages that would otherwise be represented by many different (and redundant) PS grammars. Any model of human parsing must dictate: a) how structures are projected from the lexicon; b) how structures are attached to one another; and c) what constraints affect the resultant structures. This paper will concentrate on the first two components with respect to principle-based parsing algorithms: node projection and structure attachment. Two basic control structures exist for any parsing algorithm: data-driven control and hypothesis-driven control. Even if a parser is predominantly hypothesis-driven, the predictions that it makes must at some point be compared with the data that are presented to it. Some data-driven component is therefore necessary for any parsing algorithm. Thus, a reasonable hypothesis to test is that the human parsing algorithm is entirely data-driven. This is exactly the approach that is taken by a number of principle-based parsing algorithms (see, for example, Abney (1986), Kashket (1987), Gibson & Clark (1987) and Pritchett (1987)). These parsing algorithms each include a node projection algorithm that projects an input word to a maximal category, but does not cause the projection of any further nodes. Although this simple strategy is attractive because of its simplicity, it turns out that it cannot account for certain phenomena observed in the processing of Dutch (Frazier (1987): see Section 2.1). A completely data-driven node projection algorithm also has difficulty accounting for the processing ease of adjective-noun constructions in English (see Section 2.2). As a result of this evidence, a purely data-driven node projection [...]
Parsing with Principles: Predicting a Phrasal Node Before Its Head Appears
d36441079
The Universal Dependencies (UD) project aims to develop a consistent annotation framework for treebanks across many languages. In this paper we present the UD scheme for Afrikaans and describe the conversion of the AfriBooms treebank to this new format. We compare our conversion to UD with the conversion of related syntactic structures in typologically similar languages.
Universal Dependencies for Afrikaans
d3030143
In this paper we discuss a proposed user knowledge modeling architecture for the ICICLE system, a language tutoring application for deaf learners of written English. The model will represent the language proficiency of the user and is designed to be referenced during both writing analysis and feedback production. We motivate our model design by citing relevant research on second language and cognitive skill acquisition, and briefly discuss preliminary empirical evidence supporting the design. We conclude by showing how our design can provide a rich and robust information base to a language assessment / correction application by modeling user proficiency at a high level of granularity and specificity.
Modeling User Language Proficiency in a Writing Tutor for Deaf Learners of English
d8053070
We present a comparative study of Finnish and Swedish free-text nursing narratives from intensive care. Although the two languages are linguistically very dissimilar, our hypothesis is that there are similarities that are important and interesting from a language technology point of view. This may have implications when building tools to support producing and using health care documentation. We perform a comparative qualitative analysis based on structure and content, as well as a comparative quantitative analysis on Finnish and Swedish Intensive Care Unit (ICU) nursing narratives. Our findings are that ICU nursing narratives in Finland and Sweden have many properties in common, but that many of these are challenging when it comes to developing language technology tools.
Characteristics and Analysis of Finnish and Swedish Clinical Intensive Care Nursing Narratives
d1171116
In this paper we define a lexical metrology in graphs of verbal synonymy to compute the flexsemic score of speakers from their verbal productions in action denomination tasks. This flexsemic score is used to automatically categorize young children versus young adults. We show that this score is effective in French and in Mandarin.
Towards an automatic measurement of verbal lexicon acquisition: the case for a young children-vs-adults categorization in French and Mandarin
d17121460
In a new approach to large-scale extraction of facts from unstructured text, distributional similarities become an integral part of both the iterative acquisition of high-coverage contextual extraction patterns, and the validation and ranking of candidate facts. The evaluation measures the quality and coverage of facts extracted from one hundred million Web documents, starting from ten seed facts and using no additional knowledge, lexicons or complex tools.
Names and Similarities on the Web: Fact Extraction in the Fast Lane
d29755592
Starting from the English affective lexicon ANEW (Bradley and Lang, 1999a), we have created the first Greek affective lexicon. It contains human ratings for the three continuous affective dimensions of valence, arousal and dominance for 1034 words. The Greek affective lexicon is compared with affective lexica in English, Spanish and Portuguese. The lexicon is automatically expanded by selecting a small number of manually annotated words to bootstrap the process of estimating the affective ratings of unknown words. We experimented with the parameters of the semantic-affective model in order to investigate their impact on its performance, which reaches 85% binary classification accuracy (positive vs. negative ratings). We share the Greek affective lexicon, which consists of 1034 words, and the automatically expanded Greek affective lexicon, which contains 407K words.
Affective Lexicon Creation for the Greek Language
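A sketch of the bootstrapping idea: estimating the rating of an unseen word as a similarity-weighted average of seed-word ratings. The paper's semantic-affective model is more elaborate; the similarity function here is a labeled placeholder:

```python
def expand_rating(word, seeds, similarity):
    """seeds: {seed_word: valence in [-1, 1]}; similarity(w1, w2) -> [0, 1]."""
    weights = {s: similarity(word, s) for s in seeds}
    total = sum(weights.values())
    if total == 0:
        return 0.0                     # no evidence: default to neutral
    return sum(weights[s] * v for s, v in seeds.items()) / total

seeds = {"love": 0.9, "joy": 0.8, "hate": -0.9, "pain": -0.7}
# Placeholder similarity; a real system would use corpus-based semantics.
toy_sim = lambda a, b: 1.0 if a[0] == b[0] else 0.1
print(expand_rating("happy", seeds, toy_sim))
```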
d32820052
The main goal of this paper is to describe to what extent the three main open word classes in Pite Saami (nouns, verbs and adjectives) can be automatically assigned to inflectional classes in language technology, specifically for a Finite State Transducer. For each of these word classes, the relevant structural features necessary for determining inflectional class membership are described. In this, a clear difference between the behavior of nouns and verbs, on the one hand, and that of adjectives, on the other hand, is ascertained. While the morphophonology seen in the paradigmatic behavior of all three word classes is complex and features a number of types of stem alternations, nouns and verbs are predictable, while adjectives are not. With this in mind, a basic algorithm for extracting inflectional class assignment for nouns and verbs is presented for use in a LEXC framework. In contrast, adjectives must be assigned to inflectional classes manually. The main TWOLC rules used to trigger morphophonological alternations are also outlined. The Pite Saami lexicographic database that forms the backbone of the LEXC stem files is managed using FileMaker Pro database software, and the workflow used to extract and update LEXC files from that database is described, focussing on the differences between nouns and verbs, on the one hand, and adjectives, on the other. In this, light is shed on how nominal and verbal inflectional patterns are highly complex yet reliably systematic, while adjective morphophonology is complex and unpredictable.
Extracting inflectional class assignment in Pite Saami: Nouns, verbs and those pesky adjectives
d8216769
There are many accurate methods for language identification of long text samples, but the identification of very short strings still presents a challenge. This paper studies a language identification task in which the test samples have only 5-21 characters. We compare two distinct methods that are well suited for this task: a naive Bayes classifier based on character n-gram models, and the ranking method by Cavnar and Trenkle (1994). For the n-gram models, we test several standard smoothing techniques, including the current state-of-the-art, modified Kneser-Ney interpolation. Experiments are conducted with 281 languages using the Universal Declaration of Human Rights. Advanced language model smoothing techniques improve the identification accuracy, and the respective classifiers outperform the ranking method. The higher accuracy is obtained at the cost of larger models and slower classification speed. However, there are several methods to reduce the size of an n-gram model, and our experiments with model pruning show that it provides an easy way to balance size against identification accuracy. We also compare the results to the language identifier in the Google AJAX Language API, using a subset of 50 languages.
Language Identification of Short Text Segments with N-gram Models
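For reference, a compact sketch of the Cavnar and Trenkle (1994) ranking method used as the comparison system: frequency-ranked n-gram profiles compared with an "out-of-place" distance (profile size and n-gram range are illustrative defaults):

```python
from collections import Counter

def profile(text, max_n=3, size=300):
    # Build a rank profile of the most frequent character n-grams.
    grams = Counter()
    for n in range(1, max_n + 1):
        grams.update(text[i:i + n] for i in range(len(text) - n + 1))
    return {g: r for r, (g, _) in enumerate(grams.most_common(size))}

def out_of_place(doc_profile, lang_profile):
    # Sum of rank differences; missing n-grams get the maximum penalty.
    penalty = len(lang_profile)
    return sum(abs(r - lang_profile.get(g, penalty))
               for g, r in doc_profile.items())

def identify(text, lang_profiles):
    doc = profile(text)
    return min(lang_profiles, key=lambda l: out_of_place(doc, lang_profiles[l]))
```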
d236486127
The de-facto standard decoding method for semantic parsing in recent years has been to autoregressively decode the abstract syntax tree of the target program using a top-down depth-first traversal. In this work, we propose an alternative approach: a Semi-autoregressive Bottom-up Parser (SMBOP) that constructs at decoding step t the top-K sub-trees of height ≤ t. Our parser enjoys several benefits compared to top-down autoregressive parsing. From an efficiency perspective, bottom-up parsing allows decoding all sub-trees of a certain height in parallel, leading to logarithmic rather than linear runtime complexity. From a modeling perspective, a bottom-up parser learns representations for meaningful semantic sub-programs at each step, rather than for semantically vacuous partial trees. We apply SMBOP to SPIDER, a challenging zero-shot semantic parsing benchmark, and show that SMBOP leads to a 2.2x speed-up in decoding time and a ∼5x speed-up in training time, compared to a semantic parser that uses autoregressive decoding. SMBOP obtains 71.1 denotation accuracy on SPIDER, establishing a new state-of-the-art, and 69.5 exact match, comparable to the 69.6 exact match of the autoregressive RAT-SQL+GRAPPA.
SMBOP: Semi-autoregressive Bottom-up Semantic Parsing
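A toy rendering of the bottom-up beam idea (not the SMBOP model): at each step, keep the top-K trees built so far, extending the beam by combining pairs of trees under a scoring function; score_combine stands in for the learned scorer over sub-tree representations:

```python
import itertools

def bottom_up_beam_decode(leaves, operations, score_combine, k=4, max_height=3):
    beam = [(0.0, leaf) for leaf in leaves]            # height-0 trees
    for _ in range(max_height):
        candidates = list(beam)                        # keep smaller trees too
        for (s1, t1), (s2, t2) in itertools.permutations(beam, 2):
            for op in operations:
                tree = (op, t1, t2)                    # combine two sub-trees
                candidates.append((s1 + s2 + score_combine(op, t1, t2), tree))
        # Prune to the top-K sub-trees before the next height step.
        beam = sorted(candidates, key=lambda c: c[0], reverse=True)[:k]
    return beam[0][1]                                  # best tree on the beam
```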
d252763276
Discourse analysis is a hot topic in the field of natural language processing. The purpose of discourse functional pragmatics research is to analyze the function and role of discourse units, which helps in deeply understanding the theme and content of a discourse. At present, discourse analysis mainly focuses on formal grammar, while the function and semantics of the discourse as a whole semantic unit have not attracted enough attention. Existing functional pragmatics research is mainly oriented to the event extraction task, and there is no general functional pragmatics research. In view of the importance and status of functional pragmatics research, this paper proposes a Functional Pragmatics Recognition method based on News Schemata (FPRNS). FPRNS not only obtains the interaction information of paragraphs, but also incorporates the information of news schemata and the location information of paragraphs, so as to effectively improve the recognition of discourse functional pragmatics. The experimental results on the Chinese macro discourse treebank show that the proposed method is superior to all baselines.
Discourse Functional Pragmatics Recognition Based on News Schemata
d16414840
Tree-to-string translation is syntax-aware and efficient but sensitive to parsing errors. Forest-to-string translation approaches mitigate the risk of propagating parser errors into translation errors by considering a forest of alternative trees, as generated by a source language parser. We propose an alternative approach to generating forests that is based on combining sub-trees within the first-best parse through binarization. Provably, our binarization forest can cover any non-constituent phrase in a sentence but maintains the desirable property that for each span there is at most one nonterminal, so that the grammar constant for decoding is relatively small. For the purpose of reducing search errors, we apply the synchronous binarization technique to forest-to-string decoding. Combining the two techniques, we show that using a fast shift-reduce parser we can achieve significant quality gains in the NIST 2008 English-to-Chinese track (1.3 BLEU points over a phrase-based system, 0.8 BLEU points over a hierarchical phrase-based system). Consistent and significant gains are also shown in WMT 2010 in the English to German, French, Spanish and Czech tracks.
Binarized Forest to String Translation
d219307667
d232021803
d8925228
This paper describes, compares, and evaluates three different approaches for learning a semantic classification of library titles: 1) syntactically condensed titles, 2) complete titles, and 3) titles without insignificant words are used for learning the classification in connectionist recurrent plausibility networks. In particular, we demonstrate in this paper that automatically derived feature representations and recurrent plausibility networks can scale up to several thousand library titles and reach almost perfect classification accuracy (>98%) compared to a real-world library classification.
Learning a Scanning Understanding for "Real-world" Library Categorization
d221809908
d2137608
This paper investigates the effect of speech act and tone on rhythm. Participants were asked to produce four sets of words in five speech acts. PVI values of duration, pitch, and intensity were used to test the rhythm of vowels. Two main findings emerged. First, speech act did not have any effect on rhythm, which may be because the speech acts were not performed on the controlled words in this study. Second, tone had an effect on rhythm in terms of pitch and intensity for some pairs. However, the comparisons between two pairs, tone1-tone2 and tone2-tone3, did not show any significant difference, which may be explained by the phonetic nature of the tone1-tone2 pair and by Chinese third-tone sandhi for the tone2-tone3 pair. However, this study only used sets of words that had the same tone. Future studies can focus on different combinations of sets of words.
Phonetics of Speech Acts: A Pilot Study
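The PVI values mentioned above are typically computed as the normalised Pairwise Variability Index (nPVI, Grabe and Low 2002); a sketch follows, though the study may use the raw PVI variant instead:

```python
def npvi(values):
    """values: successive vowel measurements (duration, pitch, or intensity),
    assumed positive. Returns 100/(m-1) * sum of |d_k - d_k+1| normalised by
    the pair mean."""
    terms = [abs(a - b) / ((a + b) / 2) for a, b in zip(values, values[1:])]
    return 100 * sum(terms) / len(terms)

print(npvi([120, 80, 150, 90]))   # higher = more variable rhythm
```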
d169615
This paper describes a CoNLL-style chunk representation for the Tübingen Treebank of Written German, which assumes a flat chunk structure so that each word belongs to at most one chunk. For German, such a chunk definition causes problems in cases of complex prenominal modification. We introduce a flat annotation that can handle these structures via a stranded noun chunk.
Chunking German: An Unsolved Problem
d28551941
It may come as a surprise to many people that Spain in general, and Catalonia in particular, are probably the places in the world where machine translation systems are most extensively used in production. The peculiar position of Catalan and Spanish in Catalonia, both languages being official and therefore mandatory for Public Administration publications and websites, the fact that Catalan is used in everyday life, business and media, and the close linguistic relationship between the two languages, which enables excellent translation quality for the Catalan↔Spanish language pair in our MT system, have made it possible for a number of Public Administration bodies and private companies to use it productively for massive translation. This paper presents the reality of MT for Catalan↔Spanish, together with two practical cases where our MT system is currently being used within a productive environment.
Machine Translation for Catalan↔Spanish: The real case for productive MT
d1683131
The paper proposes formulating MT evaluation as a ranking problem, as is often done in the practice of assessment by humans. Under the ranking scenario, the study also investigates the relative utility of several features. The results show greater correlation with human assessment at the sentence level, even when using an n-gram match score as a baseline feature. The feature contributing the most to the rank-order correlation between automatic ranking and human assessment was the dependency structure relation, rather than the BLEU score or the reference language model feature.
Sentence Level Machine Translation Evaluation as a Ranking Problem: one step aside from BLEU
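A minimal sketch of evaluation-as-ranking: order candidate translations by a weighted feature score and compare the result with a human ranking via Spearman correlation (feature values and weights below are illustrative, not the paper's learned model):

```python
from scipy.stats import spearmanr

def rank_by_features(candidates, weights):
    """candidates: [(system_id, {feature: value}), ...]"""
    score = lambda feats: sum(weights[f] * v for f, v in feats.items())
    return sorted(candidates, key=lambda c: score(c[1]), reverse=True)

weights = {"ngram_match": 0.3, "dependency_relation": 0.6, "ref_lm": 0.1}
candidates = [
    ("sysA", {"ngram_match": 0.4, "dependency_relation": 0.7, "ref_lm": 0.5}),
    ("sysB", {"ngram_match": 0.6, "dependency_relation": 0.3, "ref_lm": 0.4}),
    ("sysC", {"ngram_match": 0.2, "dependency_relation": 0.2, "ref_lm": 0.3}),
]
auto_rank = [sid for sid, _ in rank_by_features(candidates, weights)]
human_rank = ["sysA", "sysB", "sysC"]
rho = spearmanr([auto_rank.index(s) for s in human_rank],
                range(len(human_rank))).correlation
print(auto_rank, rho)
```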
d17957898
Multi-word entities, such as organisation names, are frequently written in many different ways. We have previously automatically identified over one million acronym pairs in 22 languages, each consisting of a short form (e.g. EC) and its corresponding long forms (e.g. European Commission, European Union Commission). In order to automatically group such long-form variants as belonging to the same entity, we cluster them using bottom-up hierarchical clustering and pair-wise string similarity metrics. In this paper, we address the issue of how to evaluate the named entity variant clusters automatically, with minimal human annotation effort. We present experiments that make use of Wikipedia redirection tables and show that this method produces good results.
Clustering of Multi-Word Named Entity variants: Multilingual Evaluation
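A sketch of the clustering step: greedy bottom-up agglomeration of long-form variants under a pairwise string-similarity metric. difflib's ratio is used here for brevity; the paper evaluates other metrics and a proper hierarchical procedure:

```python
from difflib import SequenceMatcher

def similarity(a, b):
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def cluster_variants(variants, threshold=0.75):
    clusters = [[v] for v in variants]
    merged = True
    while merged:
        merged = False
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                # Single-linkage: merge if any cross-pair is similar enough.
                if any(similarity(a, b) >= threshold
                       for a in clusters[i] for b in clusters[j]):
                    clusters[i] += clusters.pop(j)
                    merged = True
                    break
            if merged:
                break
    return clusters

print(cluster_variants(["European Commission", "European Union Commission",
                        "European Court of Justice"]))
```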
d235097559
d243865191
Transfer learning (TL) seeks to improve the learning of a data-scarce target domain by using information from source domains. However, the source and target domains usually have different data distributions, which may lead to negative transfer. To alleviate this issue, we propose a Wasserstein Selective Transfer Learning (WSTL) method. Specifically, the proposed method considers a reinforced selector to select helpful data for transfer learning. We further use a Wasserstein-based discriminator to maximize the empirical distance between the selected source data and target data. The TL module is then trained to minimize the estimated Wasserstein distance in an adversarial manner and provides domain invariant features for the reinforced selector. We adopt an evaluation metric based on the performance of the TL module as delayed reward and a Wasserstein-based metric as immediate rewards to guide the reinforced selector learning. Compared with the competing TL approaches, the proposed method selects data samples that are closer to the target domain. It also provides better state features and reward signals that lead to better performance with faster convergence. Extensive experiments on three real-world text mining tasks demonstrate the effectiveness of the proposed method.
Wasserstein Selective Transfer Learning for Cross-domain Text Mining
d14685428
The article presents the Giellatekno & Divvun language technology resources, more specifically the effort to utilise open-source tools to improve the build infrastructure, and the solutions to help adapt to best practices for software development. The article especially discusses how the infrastructure has been remade to cope with an increasing number of languages without incurring extra overhead for the maintainers, and at the same time let the linguists concentrate on the linguistic work. Finally, the article discusses how a uniform infrastructure like the one presented can be used to easily compare languages in terms of morphological or computational complexity, coverage or for cross-lingual applications.
Building an open-source development infrastructure for language technology projects
d460839
In this paper we describe an empirical study of human-human multi-tasking dialogues (MTD), where people perform multiple verbal tasks overlapped in time. We examined how conversants switch from the ongoing task to a real-time task. We found that 1) conversants use discourse markers and prosodic cues to signal task switching, similar to how they signal topic shifts in single-tasking speech; 2) conversants strive to switch tasks at a less disruptive place; and 3) where they cannot, they exert additional effort (even higher pitch) to signal the task switching. Our machine learning experiment also shows that task switching can be reliably recognized using discourse context and normalized pitch. These findings will provide guidelines for building future speech interfaces to support multi-tasking dialogue.
Switching to Real-Time Tasks in Multi-Tasking Dialogue
d14161026
This paper addresses two major problems in the closed task of Chinese word segmentation (CWS): tagging sentences interspersed with non-Chinese words, and long named entity (NE) identification. To resolve the former, we apply K-means clustering to identify non-Chinese characters, and then adopt a two-tagger architecture: one for Chinese text and the other for non-Chinese text. For the latter problem, we apply postprocessing to our CWS output using automatically generated templates. The experimental results show that, when non-Chinese characters are sparse in the training corpus, our two-tagger method significantly improves the segmentation of sentences containing non-Chinese words. Identification of long NEs and long words is also enhanced by template-based postprocessing. In the closed task of SIGHAN 2006 CWS, our system achieved F-scores of 0.957, 0.972, and 0.955 on the CKIP, CTU, and MSR corpora respectively.
On Closed Task of Chinese Word Segmentation: An Improved CRF Model Coupled with Character Clustering and Automatically Generated Template Matching
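A toy version of the first step: splitting characters into two groups with K-means over simple character features, so that each group can be routed to its own tagger. The features (scaled code point, ASCII flag) are illustrative only, not the paper's:

```python
import numpy as np
from sklearn.cluster import KMeans

def split_charsets(chars):
    # Two crude features per character: normalised code point and ASCII flag.
    feats = np.array([[ord(c) / 0xFFFF, float(c.isascii())] for c in chars])
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(feats)
    groups = {0: [], 1: []}
    for c, label in zip(chars, labels):
        groups[label].append(c)
    return groups  # one group routes to the Chinese tagger, the other not

print(split_charsets(list("中文text123分词")))
```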
d218973753
d146448683
In the information retrieval field, managing the equivalent reformulations of a same idea is a key point to improve the performance of existing retrieval systems. One way to reach this goal is to use specialised semantic resources that are suited to the document database on which the queries are processed. In this paper, we show that the semantic links between nouns and verbs called qualia links, defined in the Generative Lexicon framework (Pustejovsky, 1995), enable us to improve the results of retrieval systems. To achieve this goal, we automatically extract from the document database noun-verb pairs that are in a qualia relation, using the acquisition system ASARES (Claveau, 2003a). These pairs are then used to expand the queries of a retrieval system. With the help of the Amaryllis evaluation campaign data, we show that these expansions actually lead to better results, especially for the first documents proposed to the user.
Query expansion using noun-verb semantic links acquired from corpora
d198961613
d3101481
Every text has at least one topic and at least one genre. Evidence for a text's topic and genre comes, in part, from its lexical and syntactic features, features used in both Automatic Topic Classification and Automatic Genre Classification (AGC). Because an ideal AGC system should be stable in the face of changes in topic distribution, we assess five previously published AGC methods with respect to both performance on the same topic-genre distribution on which they were trained and the stability of that performance across changes in topic-genre distribution. Our experiments lead us to conclude that (1) stability in the face of changing topical distributions should be added to the evaluation criteria for new approaches to AGC, and (2) part-of-speech features should be considered individually when developing a high-performing, stable AGC system for a particular, possibly changing corpus.
Stable Classification of Text Genres
d17703143
We present a multilingual evaluation of approaches for spelling normalisation of historical text based on data from five languages: English, German, Hungarian, Icelandic, and Swedish. Three different normalisation methods are evaluated: a simplistic filtering model, a Levenshtein-based approach, and a character-based statistical machine translation approach. The evaluation shows that the machine translation approach often gives the best results, but also that all approaches improve over the baseline and that no single method works best for all languages.
A Multilingual Evaluation of Three Spelling Normalisation Methods for Historical Text
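A minimal sketch of the Levenshtein-based idea from the abstract above: map each historical spelling to the closest word in a modern lexicon by edit distance. The toy lexicon and the tie-breaking are illustrative, not the paper's setup:

```python
# Sketch: normalise a historical spelling to the nearest modern dictionary
# word by Levenshtein distance (a minimal stand-in for the paper's approach).
def levenshtein(a, b):
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def normalise(word, lexicon):
    return min(lexicon, key=lambda w: levenshtein(word, w))

print(normalise("vnder", {"under", "unto", "into"}))  # -> "under"
```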
d9168133
Are word-level affect lexicons useful in detecting emotions at sentence level? Some prior research finds no gain over and above what is obtained with ngram features, arguably the most widely used features in text classification. Here, we experiment with two very different emotion lexicons and show that even in supervised settings, an affect lexicon can provide significant gains. We further show that while ngram features tend to be accurate, they are often unsuitable for use in new domains. On the other hand, affect lexicon features tend to generalize and produce better results than ngrams when applied to a new domain.
Portable Features for Classifying Emotional Text
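To make the lexicon-versus-ngram contrast above concrete, here is a hedged sketch of both feature extractors; the tiny AFFECT lexicon is hypothetical, standing in for a real word-level affect lexicon:

```python
# Sketch: sentence features from a word-level affect lexicon vs. plain ngrams.
# The toy AFFECT lexicon is hypothetical; real ones map words to emotion labels.
from collections import Counter

AFFECT = {"happy": "joy", "glad": "joy", "furious": "anger", "afraid": "fear"}

def lexicon_features(tokens):
    counts = Counter(AFFECT[t] for t in tokens if t in AFFECT)
    return {f"lex={emo}": n for emo, n in counts.items()}

def ngram_features(tokens, n=1):
    return Counter(" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

toks = "i am so glad and happy today".split()
print(lexicon_features(toks))  # {'lex=joy': 2}
```

The lexicon features are few and domain-independent, which is why they tend to transfer to new domains better than sparse ngram counts.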
d14687375
In this paper, we discuss the design of a database of recorded and transcribed read and spontaneous speech of semi-fluent, strongly-accented non-native speakers of English. While many speech applications work best with a recognizer that expects native-like usage, others could benefit from a speech recognition component that is forgiving of the sorts of errors that are not a barrier to communication; in order to train such a recognizer a database of non-native speech is needed. We examine how collecting data from non-native speakers must necessarily differ from collection from native speakers, and describe work we did to develop an appropriate scenario, recording setup, and optimal surroundings during recording.
ELICITING NATURAL SPEECH FROM NON-NATIVE USERS: COLLECTING SPEECH DATA FOR LVCSR
d16732339
In this paper, I introduce the DICI, an electronic dictionary of Italian collocations designed to support the acquisition of the collocational competence in learners of Italian as a second or foreign language. I briefly describe the composition of the reference Italian corpus from which the collocations are extracted, and the methodology of extraction and filtering of candidate collocations. It is an experimental methodology, based on POS filtering, frequency and statistical measures, and tested on a 12-million-word sample from the reference corpus. Furthermore, I explain the main criteria for the composition of the dictionary, in addition to its integration with a Virtual Learning Environment (VLE), aimed at supporting learning activities on collocations. I briefly describe some of the main features of this integration with the VLE, such as the automatic recognition of collocations in written Italian texts, the possibility for students to obtain further linguistic information on selected collocations, and the automatic generation of tests for collocational competence assessment of language learners. While the main goal of the DICI is pedagogical, it is also intended to contribute to research in the field of collocations.
The Dictionary of Italian Collocations: Design and Integration in an Online Learning Environment
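The abstract above mentions filtering candidate collocations with frequency and statistical measures; one common such measure is pointwise mutual information (PMI). A small sketch (the tokens are illustrative, not the DICI pipeline):

```python
# Sketch: score candidate bigram collocations by pointwise mutual information,
# a standard association measure such a pipeline might use.
import math
from collections import Counter

def pmi_scores(bigrams, unigrams, total):
    scores = {}
    for (w1, w2), n12 in bigrams.items():
        p12 = n12 / total
        p1, p2 = unigrams[w1] / total, unigrams[w2] / total
        scores[(w1, w2)] = math.log2(p12 / (p1 * p2))
    return scores

tokens = "forte pioggia forte vento pioggia leggera".split()
uni = Counter(tokens)
bi = Counter(zip(tokens, tokens[1:]))
print(sorted(pmi_scores(bi, uni, len(tokens)).items(), key=lambda x: -x[1]))
```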
d256739255
Document corpora owned by law and regulatory firms pose significant challenges for text classification; being multi-labelled, highly imbalanced, often having a relatively small number of instances and a large word count per instance. Deep learning ensemble methods can improve generalization and performance for multi-label text classification but using pretrained language models as base learners leads to high computational costs. To tackle the imbalance problem and improve generalization we present a fast, pseudo-stratified sub-sampling method that we use to extract diverse data subsets to create base models for deep ensembles based on fine-tuned models from pre-trained transformers with moderate computational cost such as BERT, RoBERTa, XLNet and Albert. A key feature of the sub-sampling method is that it preserves the characteristics of the entire dataset (particularly the labels' frequency distribution) while extracting subsets. This sub-sampling method is also used to extract smaller size custom datasets from the freely available LexGLUE legal text corpora. We discuss approaches used and classification performance results with deep learning ensembles, illustrating the effectiveness of our approach on the above custom datasets.
Experimenting with ensembles of pre-trained language models for classification of custom legal datasets
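As a rough illustration of sub-sampling that preserves the labels' frequency distribution in a multi-label corpus, here is a hedged sketch; the paper's pseudo-stratified procedure is more involved, and the greedy per-label target-count heuristic below is an assumption:

```python
# Sketch: draw a subset that roughly preserves the label-frequency distribution
# of a multi-label dataset (an illustrative stand-in for the paper's method).
import random
from collections import Counter

def stratified_subset(docs, labels, fraction, seed=0):
    rng = random.Random(seed)
    target = {l: max(1, round(n * fraction))
              for l, n in Counter(l for ls in labels for l in ls).items()}
    chosen, got = [], Counter()
    for i in rng.sample(range(len(docs)), len(docs)):   # random visiting order
        if any(got[l] < target[l] for l in labels[i]):  # still need a label?
            chosen.append(i)
            got.update(labels[i])
    return chosen

idx = stratified_subset(["d1", "d2", "d3", "d4"],
                        [["a"], ["a", "b"], ["b"], ["a"]], fraction=0.5)
```

Each such subset can then train one base model, so the ensemble members see diverse but distributionally similar slices of the data.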
d226284000
d21717892
This paper introduces the NIEUW (Novel Incentives and Workflows) project funded by the United States National Science Foundation and part of the Linguistic Data Consortium's strategy to provide an order-of-magnitude improvement in the scale, cost, variety, linguistic diversity and quality of Language Resources available for education, research and technology development. Notwithstanding decades of effort and progress in collecting and distributing Language Resources, it remains the case that demand still far exceeds supply for all of the approximately 7000 languages in the world, even the most well documented languages with global economic and political influence. The absence of Language Resources, regardless of the language, stifles teaching and technology building, inhibiting the creation of language enabled applications and, as a result, commerce and communication. Project oriented approaches which focus intensive funding and effort on problems of limited scope over short durations can only address part of the problem. The HLT community instead requires approaches that do not rely upon highly constrained resources such as project funding and can be sustained across many languages and many years. In this paper, we describe a new initiative to harness the power of alternative incentives to elicit linguistic data and annotation. We also describe changes to the workflows necessary to collect data from workforces attracted by these incentives.
Introducing NIEUW: Novel Incentives and Workflows for Eliciting Linguistic Data
d16636370
Hiero translation models have two limitations compared to phrase-based models: 1) Limited hypothesis space; 2) No lexicalized reordering model. We propose an extension of Hiero called Phrasal-Hiero to address Hiero's second problem. Phrasal-Hiero still has the same hypothesis space as the original Hiero but incorporates a phrase-based distance cost feature and lexicalized reordering features into the chart decoder. The work consists of two parts: 1) for each Hiero translation derivation, find its corresponding discontinuous phrase-based path. 2) Extend the chart decoder to incorporate features from the phrase-based path. We achieve significant improvement over both Hiero and phrase-based baselines for Arabic-English, Chinese-English and German-English translation.
Integrating Phrase-based Reordering Features into a Chart-based Decoder for Machine Translation
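The phrase-based distance cost mentioned above is, in standard phrase-based models, the sum of jump sizes between consecutively translated source spans; a minimal sketch, where the span representation is an assumption:

```python
# Sketch: the standard phrase-based distance (distortion) cost, computed over
# the sequence of source spans (start, end) visited by a derivation's
# phrase-based path.
def distance_cost(spans):
    cost, prev_end = 0, -1                 # -1: position before the first word
    for start, end in spans:
        cost += abs(start - prev_end - 1)  # size of the jump between phrases
        prev_end = end
    return cost

print(distance_cost([(0, 1), (2, 3), (4, 5)]))  # monotone path: 0
print(distance_cost([(2, 3), (0, 1), (4, 5)]))  # reordered path: 8
```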
d215768688
d219304731
d14016312
In this paper, we present the collection and analysis of a spoken dialogue corpus obtained from interactions of older and younger users with a smart-home system. Our aim is to identify the amount and the origin of linguistic differences in the way older and younger users address the system. In addition, we investigate changes in the users' linguistic behaviour after exposure to the system. The results show that the two user groups differ in their speaking style as well as their vocabulary. In contrast to younger users, who adapt their speaking style to the expected limitations of the system, older users tend to use a speaking style that is closer to human-human communication in terms of sentence complexity and politeness. However, older users are far less easy to stereotype than younger users.
Corpus Analysis of Spoken Smart-Home Interactions with Older Users
d1633670
We present an operable definition of focus, which is argued to be of a cognito-pragmatic nature, and explore how it is determined in discourse in a formalized manner. For this purpose, a file card model of the discourse model and knowledge store is introduced, enabling the decomposition and formal representation of the determination process as a programmable algorithm (FDA). Interdisciplinary evidence from social and cognitive psychology is cited, and the prospect of integrating focus via the FDA as a discourse-level construct into speech synthesis systems, in particular concept-to-speech systems, is also briefly discussed.
Focusing on focus: a formalization
d12742176
The hierarchy of salience of the items of the knowledge assumed by the speaker to
Hierarchy of Salience and Discourse Analysis and Production
d16984733
We propose a new subjectivity classification at the segment level that is more appropriate for discourse-based sentiment analysis. Our approach automatically distinguishes between subjective non-evaluative and objective segments, and between implicit and explicit opinions, by using local and global context features.
Towards Context-Based Subjectivity Analysis
d2664852
In this paper, we present an approach for recognizing spatial containment relations that hold between event mentions. Event mentions refer to real-world events that have spatio-temporal properties. While the temporal aspect of event relations has been well studied, the spatial aspect has received relatively little attention. The difficulty in this task is the highly implicit nature of event locations in discourse. We present a supervised method that is designed to capture both explicit and implicit spatial relation information. Our approach outperforms the only known previous method by a 14-point increase in F1-measure.
Recognizing Spatial Containment Relations between Event Mentions
d227230595
We study the ability of large fine-tuned transformer models to solve a binary classification task of dialect identification, with a special interest in comparing the performance of multilingual models to that of monolingual ones. The corpus analyzed contains Romanian and Moldavian samples from the news domain, as well as tweets for assessing performance. We find that the monolingual models are superior to the multilingual ones, and the best results are obtained using an SVM ensemble of 5 different transformer-based models. We provide our experimental results and an analysis of the attention mechanisms of the best-performing individual classifiers to explain their decisions. The code we used was released under an open-source license.
Applying Multilingual and Monolingual Transformer-Based Models for Dialect Identification
d736936
This paper describes the two systems submitted by LIMSI to the WMT'15 Shared Task on Automatic Post-Editing. The first relies on a reformulation of the APE task as a Machine Translation task; the second implements a simple rule-based approach. Neither of the two systems manages to improve the automatic translation. By carefully analyzing the failure of our systems, we show that this poor performance mainly results from inconsistencies in the annotations.
Why Predicting Post-Edition is so Hard? Failure Analysis of LIMSI Submission to the APE Shared Task
d5897247
The paper is the first report on the experimental MT system developed as part of the Japanese-Russian Automatic Translation Project (JaRAP). The system follows the transfer approach to MT. Limited so far to lexico-morphological processing, it is seen as a foundation for more ambitious linguistic research. The system is implemented on an IBM PC under MS-DOS, in Arity Prolog (analysis and transfer) and Turbo Pascal (synthesis).
THE JaRAP EXPERIMENTAL SYSTEM OF JAPANESE-RUSSIAN AUTOMATIC TRANSLATION
d7270459
Processing discourse connectives is important for tasks such as discourse parsing and generation. For these tasks, it is useful to know which connectives can signal the same coherence relations. This paper presents experiments in modelling the substitutability of discourse connectives. It shows that substitutability affects distributional similarity. A novel variance-based function for comparing probability distributions is found to assist in predicting substitutability.
Modelling the substitutability of discourse connectives
d17582328
Recently, research on supervised term weighting has attracted growing attention in the fields of Traditional Text Categorization (TTC) and Sentiment Analysis (SA). Despite their impressive achievements, we show that existing methods more or less suffer from the problem of over-weighting. Overlooked by prior studies, over-weighting is a new concept proposed in this paper. To address this problem, two regularization techniques, singular term cutting and a bias term, are integrated into our framework of supervised term weighting schemes. Using the concepts of over-weighting and regularization, we provide new insights into existing methods and present their regularized versions. Moreover, under the guidance of our framework, we develop a novel supervised term weighting scheme, regularized entropy (re). The proposed framework is evaluated on three datasets widely used in SA. The experimental results indicate that our re scheme achieves the best results in comparison with existing methods, and that regularization techniques can significantly improve the performance of existing supervised weighting methods.
Reducing Over-Weighting in Supervised Term Weighting for Sentiment Analysis
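To illustrate the over-weighting problem and how a regularizer can blunt it, here is a hedged sketch of a generic supervised term weight (a smoothed log odds ratio) with clipping; it is not the paper's re scheme, whose definition differs:

```python
# Sketch: a supervised term weight with a clipping cap as a crude regularizer
# against over-weighting rare, class-skewed terms (illustrative only).
import math

def weight(df_pos, df_neg, n_pos, n_neg, cap=3.0, smooth=0.5):
    ratio = ((df_pos + smooth) / (n_pos + 1)) / ((df_neg + smooth) / (n_neg + 1))
    w = abs(math.log(ratio))
    return min(w, cap)   # clipping plays the role of regularization here

# A term in 90/100 positive and 1/100 negative docs would get ~4.1, capped:
print(weight(90, 1, 100, 100))  # 3.0
```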
d2987758
In this paper we describe the agent-based architecture of A2Q and report in detail the design of its shallow processing model. We present the general model, describing the data flow and the activities that have to be carried out. The notion of a question session is introduced as a means to control the communication among the different agents. We then present a shallow model based mainly on an IR engine and a passage re-ranking step that uses the notion of an expanded query. Finally, we report on a pilot investigation of the method's performance.
A2Q: an agent-based architecture for multilingual Q&A
d1529891
We have released plWordNet 3.0, a very large wordnet for Polish. In addition to what is expected in wordnets (richly interrelated synsets), it contains sentiment and emotion annotations, a large set of multi-word expressions, and a mapping onto WordNet 3.1. Part of the release is enWordNet 1.0, a substantially enlarged copy of WordNet 3.1, with material added to allow for a more complete mapping. The paper discusses the design principles of plWordNet, its content, its statistical portrait, a comparison with similar resources, and a partial list of applications.
plWordNet 3.0 -a Comprehensive Lexical-Semantic Resource
d237366146
Ancient Chinese poetry is a great treasure of human culture. Its short and concise language can express extremely rich meanings and themes, which has attracted countless lovers since ancient times. We took over 800,000 ancient poems as the research dataset and trained a word segmentation model on it based on the Byte-Pair Encoding algorithm, which segments words according to co-occurrence frequencies in the ancient poetry set. This model supports a more concise understanding of the semantics of ancient Chinese poems. Furthermore, we trained a topic model on the post-segmentation ancient poetry corpus based on the Latent Dirichlet Allocation algorithm. By comparing and adjusting the number of topics, we obtain a more accurate topic distribution for each ancient poem. We also calculated the theme transfer matrix within a poem after annotating the topic of Jueju and Lvshi sentence by sentence. Finally, we use a simple method, Control Code, to embed the topic model into the poetry generation model, in order to control the theme of our generated poems and to examine the effectiveness of our topic model.
Topic model and topic-controlled poetry generation of Chinese ancient poem based on BPE
d14448039
An increasingly popular method for finding information online is via Community Question Answering (CQA) portals such as Yahoo! Answers, Naver, and Baidu Knows. Searching the CQA archives, and ranking, filtering, and evaluating the submitted answers, requires intelligent processing of the questions and answers posed by the users. One important task is automatically detecting the question's subjectivity orientation: namely, whether a user is searching for subjective or objective information. Unfortunately, real user questions are often vague, ill-posed, or poorly stated. Furthermore, there has been little labeled training data available for real user questions. To address these problems, we present CoCQA, a co-training system that exploits the association between the questions and contributed answers for question analysis tasks. The co-training approach allows CoCQA to use the effectively unlimited amounts of unlabeled data readily available in CQA archives. In this paper we study the effectiveness of CoCQA for the question subjectivity classification task by experimenting over thousands of real users' questions.
CoCQA: Co-Training Over Questions and Answers with an Application to Predicting Question Subjectivity Orientation
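A minimal sketch of co-training over the two natural views (question text and answer text), in the spirit of CoCQA; the classifiers, TF-IDF features, and per-round growth size are illustrative assumptions:

```python
# Sketch: generic two-view co-training; each view's classifier labels the
# unlabeled items it is most confident about, growing the shared labeled set.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def co_train(q_texts, a_texts, y_init, n_labeled, rounds=5, per_round=1):
    Xq = TfidfVectorizer().fit_transform(q_texts)   # question view
    Xa = TfidfVectorizer().fit_transform(a_texts)   # answer view
    lab = list(range(n_labeled))
    unlab = list(range(n_labeled, len(q_texts)))
    y = list(y_init)
    for _ in range(rounds):
        clf_q = LogisticRegression().fit(Xq[lab], y)
        clf_a = LogisticRegression().fit(Xa[lab], y)
        for clf, X in ((clf_q, Xq), (clf_a, Xa)):
            if not unlab:
                break
            proba = clf.predict_proba(X[unlab])
            # move the most confidently predicted items into the labeled set
            for b in sorted(np.argsort(proba.max(axis=1))[-per_round:],
                            reverse=True):
                lab.append(unlab.pop(b))
                y.append(clf.classes_[proba[b].argmax()])
    return clf_q, clf_a

clf_q, clf_a = co_train(
    ["is this subjective", "capital of france", "best pizza ever?",
     "when was rome founded"],
    ["i think so", "paris", "i love it", "753 bc"],
    y_init=[1, 0], n_labeled=2, rounds=2)
```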
d59860008
This paper presents a probabilistic extension of Discontinuous Phrase Structure Grammar (DPSG), a formalism designed to describe discontinuous constituency phenomena adequately and perspicuously by means of trees with crossing branches. We outline an implementation of an agenda-based chart parsing algorithm that is capable of computing the Most Probable Parse for a given input sentence for probabilistic versions of both DPSG and Context-Free Grammar. Experiments were conducted with both types of grammars extracted from the NEGRA corpus. In spite of the much greater complexity of DPSG parsing in terms of the number of (partial) analyses that can be constructed for an input sentence, accuracy results from both experiments are comparable. We also briefly hint at future lines of research aimed at more efficient ways of probabilistic parsing with discontinuous constituents.
COMPUTING THE MOST PROBABLE PARSE FOR A DISCONTINUOUS PHRASE STRUCTURE GRAMMAR
d16939446
This paper describes work carried out in the European project TrendMiner which partly deals with the extraction and representation of real-time information from dynamic data streams. The focus of this paper lies on the construction of an integrated ontology, TMO, the TrendMiner Ontology, that has been assembled from several independent multilingual taxonomies and ontologies which are brought together by an interface specification, expressed in OWL. Within TrendMiner, TMO serves as a common language that helps to interlink data delivered from both symbolic and statistical components of the TrendMiner system. Very often, the extracted data is supplied as quintuples, RDF triples that are extended by two further temporal arguments expressing the temporal extent in which an atemporal statement is true. In this paper, we will also sneak a peek at the temporal entailment rules and queries that are built into the semantic repository hosting the data and which can be used to derive useful new information.
TMO-The Federated Ontology of the TRENDMINER Project
d17666789
In this paper, we present our contribution to SemEval-2016 Task 7: Determining Sentiment Intensity of English and Arabic Phrases, where we use web search engines for English and Arabic unsupervised sentiment intensity prediction. Our work is based, first, on a group of classic sentiment lexicons (e.g. Sentiment140 Lexicon, SentiWordNet) and, second, on web search engines' ability to find the co-occurrence of sentences with predefined negative and positive words. The use of web search engines (e.g. the Google Search API) enhances the results on phrases built from opposite-polarity terms.
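A hedged sketch of the co-occurrence idea above, in the style of Turney's SO-PMI; hits() stands in for a real search-engine count (e.g. the number of results for a quoted query), and the seed words are illustrative:

```python
# Sketch: unsupervised polarity intensity from co-occurrence counts with
# positive and negative seed words; hits() is a hypothetical count function.
import math

def so_pmi(phrase, hits, pos=("excellent",), neg=("poor",), eps=0.5):
    score = 0.0
    for p in pos:
        score += math.log2((hits(f"{phrase} {p}") + eps) / (hits(p) + eps))
    for n in neg:
        score -= math.log2((hits(f"{phrase} {n}") + eps) / (hits(n) + eps))
    return score  # > 0 leans positive, < 0 leans negative

fake_counts = {"great value excellent": 120, "great value poor": 8,
               "excellent": 1000, "poor": 900}
print(so_pmi("great value", lambda q: fake_counts.get(q, 0)))  # positive
```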
d201680599
We describe the neural machine translation (NMT) system developed at the National Research Council of Canada (NRC) for the Kazakh-English news translation task of the Fourth Conference on Machine Translation (WMT19). Our submission is a multi-source NMT system taking both the original Kazakh sentence and its Russian translation as input for translating into English.
Multi-Source Transformer for Kazakh-Russian-English Neural Machine Translation
d14315944
In this work we study the effectiveness of speaker adaptation for dialogue act recognition in multiparty meetings. First, we analyze idiosyncrasy in verbal dialogue acts by qualitatively studying the differences and conflicts among speakers and by quantitatively comparing speaker-specific models. Based on these observations, we propose a new approach for dialogue act recognition based on reweighted domain adaptation, which effectively balances the influence of speaker-specific and other speakers' data. Our experiments on a real-world meeting dataset show that with only 200 speaker-specific annotated dialogue acts, performance on dialogue act recognition is significantly improved compared to several baseline algorithms. To our knowledge, this work is the first to tackle this promising research direction of speaker adaptation for dialogue act recognition.
Dialogue Act Recognition using Reweighted Speaker Adaptation
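One simple way to balance the influence of speaker-specific and other speakers' data is instance reweighting; the single up-weighting ratio below is an illustrative stand-in for the paper's reweighting scheme:

```python
# Sketch: upweight the target speaker's few examples against the pooled
# other-speaker data via sample weights (illustrative, not the paper's method).
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_adapted(X_self, y_self, X_other, y_other, lam=5.0):
    X = np.vstack([X_self, X_other])
    y = np.concatenate([y_self, y_other])
    w = np.concatenate([np.full(len(y_self), lam),   # upweight target speaker
                        np.ones(len(y_other))])
    return LogisticRegression(max_iter=1000).fit(X, y, sample_weight=w)

rng = np.random.default_rng(0)
clf = fit_adapted(rng.normal(size=(20, 4)), rng.integers(0, 2, 20),
                  rng.normal(size=(200, 4)), rng.integers(0, 2, 200))
```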
d69600386
An evolutionary algorithm for simultaneously inducing and weighting phonological constraints (Winnow-MaxEnt-Subtree Breeder) is described, analyzed, and illustrated. Implementing weights as sub-population sizes, reproduction with selection executes a new variant of Winnow (Littlestone, 1988), which is shown to converge. A flexible constraint schema, based on the same prosodic and autosegmental trees used in representations, is described, together with algorithms for mutation and recombination (mating). The algorithm is applied to explaining abrupt learning curves, and predicts an empirical connection between abruptness and language particularity.
Constraint breeding during on-line incremental learning
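For reference, a minimal sketch of the classic Winnow update that the breeder reimplements via sub-population sizes; the binary-feature setup and threshold follow Littlestone's standard formulation, while the toy data is illustrative:

```python
# Sketch: mistake-driven Winnow with multiplicative promotion/demotion of the
# weights on active (value-1) features.
def winnow_train(examples, n_features, alpha=2.0):
    w, theta = [1.0] * n_features, n_features / 2
    for x, label in examples:                     # x: 0/1 feature vector
        pred = sum(wi * xi for wi, xi in zip(w, x)) >= theta
        if pred != label:                         # update only on mistakes
            factor = alpha if label else 1 / alpha
            w = [wi * factor if xi else wi for wi, xi in zip(w, x)]
    return w

w = winnow_train([([1, 0, 1], True), ([0, 1, 0], False)] * 10, 3)
```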