Columns: _id (string, length 4 to 10), text (string, length 0 to 18.4k), title (string, length 0 to 8.56k)
d51872928
This paper studies how the argumentation strategies of participants in deliberative discussions can be supported computationally. Our ultimate goal is to predict the best next deliberative move of each participant. In this paper, we present a model for deliberative discussions and we illustrate its operationalization. Previous models have been built manually based on a small set of discussions, resulting in a level of abstraction that is not suitable for move recommendation. In contrast, we derive our model statistically from several types of metadata that can be used for move description. Applied to six million discussions from Wikipedia talk pages, our approach results in a model with 13 categories along three dimensions: discourse acts, argumentative relations, and frames. On this basis, we automatically generate a corpus with about 200,000 turns, labeled for the 13 categories. We then operationalize the model with three supervised classifiers and provide evidence that the proposed categories can be predicted.
Modeling Deliberative Argumentation Strategies on Wikipedia
d51878103
In this work, we discuss the importance of external knowledge for performing Named Entity Recognition (NER). We present a novel modular framework that divides the knowledge into four categories according to the depth of knowledge they convey. Each category consists of a set of features automatically generated from different information sources, such as a knowledge base, a list of names, or document-specific semantic annotations. Further, we show the effects on performance when incrementally adding deeper knowledge and discuss effectiveness/efficiency trade-offs.
A Study of the Importance of External Knowledge in the Named Entity Recognition Task
d14332764
We introduce our incremental coreference resolution system for the BioNLP 2011 Shared Task on Protein/Gene interaction. The benefits of an incremental architecture over a mention-pair model are: a reduction in the number of candidate pairs, a means to overcome the problem of underspecified items in pair-wise classification, and the natural integration of global constraints such as transitivity. A filtering system takes into account specific features of different anaphora types. We do not apply machine learning; instead, the system classifies with an empirically derived salience measure based on the dependency labels of the true mentions. The OntoGene pipeline is used for preprocessing.
An Incremental Model for the Coreference Resolution Task of BioNLP 2011
d196184409
Word embeddings are now pervasive across NLP subfields as the de facto method of forming text representations. In this work, we show that existing embedding models are inadequate at constructing representations that capture salient aspects of mathematical meaning for numbers, which is important for language understanding. Numbers are ubiquitous and frequently appear in text. Inspired by cognitive studies on how humans perceive numbers, we develop an analysis framework to test how well word embeddings capture two essential properties of numbers: magnitude (e.g. 3<4) and numeration (e.g. 3=three). Our experiments reveal that most models capture an approximate notion of magnitude, but are inadequate at capturing numeration. We hope that our observations provide a starting point for the development of methods which better capture numeracy in NLP systems.
Exploring Numeracy in Word Embeddings
d233189542
The slow speed of BERT has motivated much research on accelerating its inference, and the early exiting idea has been proposed to make trade-offs between model quality and efficiency. This paper aims to address two weaknesses of previous work: (1) existing fine-tuning strategies for early exiting models fail to take full advantage of BERT; (2) methods to make exiting decisions are limited to classification tasks. We propose a more advanced fine-tuning strategy and a learning-to-exit module that extends early exiting to tasks other than classification. Experiments demonstrate improved early exiting for BERT, with better trade-offs obtained by the proposed fine-tuning strategy, successful application to regression tasks, and the possibility to combine it with other acceleration methods. Source code can be found at https://github.com/castorini/berxit.
BERxiT: Early Exiting for BERT with Better Fine-Tuning and Extension to Regression
d1467434
This paper presents a corpus-based method for automatic evaluation of geometric constraints on projective prepositions. The method is used to find an appropriate model of geometric constraints for a two-dimensional domain. Two simple models are evaluated against the uses of projective prepositions in a corpus of natural language dialogues to find the best parameters of these models. Both models cover more than 96% of the data correctly. An additional treatment of negative uses of projective prepositions (e.g. A is not above B) improves both models, bringing them close to full coverage.
A Corpus-based Analysis of Geometric Constraints on Projective Prepositions
d232021902
d1182973
This paper proposes a method for reordering words in a Japanese sentence, executed concurrently with dependency parsing, so that the sentence becomes more readable. Our contributions are summarized as follows: (1) we extend a probabilistic model used in previous work which concurrently performs word reordering and dependency parsing; (2) we conducted an evaluation experiment using our semi-automatically constructed evaluation data, whose sentences are more likely to have been spontaneously written by natives than the automatically constructed evaluation data of previous work.
Japanese Word Reordering Executed Concurrently with Dependency Parsing and Its Evaluation
d53604363
WikiSQL is a newly released dataset for studying the natural language sequence to SQL translation problem. The SQL queries in WikiSQL are simple: each involves one relation and does not have any join operation. Despite its simplicity, none of the publicly reported structured query generation models can achieve an accuracy beyond 62%, which is still far from enough for practical use. In this paper, we ask two questions, "Why is the accuracy still low for such simple queries?" and "What does it take to achieve 100% accuracy on WikiSQL?" To limit the scope of our study, we focus on the WHERE clause in SQL. The answers will help us gain insights about the directions we should explore in order to further improve the translation accuracy. We then investigate alternative solutions to realize the potential ceiling performance on WikiSQL. Our proposed solution can reach up to 88.6% condition accuracy on the WikiSQL dataset.
What It Takes to Achieve 100% Condition Accuracy on WikiSQL
d21195745
The project of making a dictionary and a corpus of interlinearized texts of the Mwan language started in 2004. Previously there was no dictionary of this language, and only a few texts had been published. The writing system used in these publications was unsystematic, as it did not allow an accurate representation of the tonal contour of words. At present the Mwan dictionary has 2247 entries, and it is also used for automatic interlinearization of Mwan texts. The number of glossed texts is currently 48 (38,000 words). These texts are ready to be converted into an online corpus (with the help of the NoSketchEngine software) and published on the Internet, so they will be available to the linguistic community.
The Mwan language : dictionary and corpus of texts
d233365154
d252624410
Numeral expressions in Japanese are characterized by the flexibility of quantifier positions and the variety of numeral suffixes. However, little work has been done to build annotated corpora focusing on these features and datasets for testing the understanding of Japanese numeral expressions. In this study, we build a corpus that annotates each numeral expression in an existing phrase structure-based Japanese treebank with its usage and numeral suffix types. We also construct an inference test set for numerical expressions based on this annotated corpus. In this test set, we particularly pay attention to inferences where the correct label differs between logical entailment and implicature and those contexts such as negations and conditionals where the entailment labels can be reversed. The baseline experiment with Japanese BERT models shows that our inference test set poses challenges for inference involving various types of numeral expressions.
Annotating Japanese Numeral Expressions for a Logical and Pragmatic Inference Dataset
d2416407
This paper presents BIAS (Bahasa Indonesia Analyzer System), an analysis system for the Indonesian language suitable for a multilingual machine translation system. BIAS is developed with a motivation to contribute to an on-going cooperative research project in machine translation between Indonesia and other Asian countries. In addition, it may serve to foster NLP research in Indonesia. It starts with an overview of various methodologies for the representation of linguistic knowledge and plausible strategies of automatic reasoning for the Indonesian language. We examine these methodologies from the perspective of their relative advantages and their suitability for an interlingual machine-translation environment. BIAS is a multi-level analyzer which is developed not only to extract the syntactic and semantic structure of sentences but also to provide a unifying method for knowledge reasoning. Each phase of the analyzer is discussed with emphasis on Indonesian morphology and case-grammatical constructions.
An Analysis of Indonesian Language for Interlingual Machine-Translation System
d8955494
Standard Language at Ford Motor Company: A Case Study in Controlled Language Development and Deployment
d44457147
Due to the revolution of digital music, recording equipment has become increasingly accessible, and people can create multi-track recordings in a home studio with cheaper gear. However, multi-track recordings need to be mixed to combine them into one or more channels. The problem is that mixing requires background knowledge in sound engineering and psychoacoustics, and it is difficult for a non-specialist to produce a good mixdown. In this paper, we propose an automatic multi-track mixing system that uses supervised learning to mix multi-track recordings into a coherent and well-balanced piece, producing a basic mixdown that helps non-specialists achieve good results. Since mixing parameters are hard to obtain, we first estimate them from the relation between the raw tracks and the finished mixdown, and then use these parameters to build our mixing model. Because the mixing parameters of the individual tracks are mutually dependent, we adopt Kernel Dependency Estimation (KDE) [1] for parameter learning. Experiments show that KDE produces a more satisfactory estimation than treating each parameter independently. Keywords: kernel dependency estimation, music information retrieval, music production, mixing.
Automatic Multi-track Mixing by Kernel Dependency Estimation
d218974206
d243864631
d218947434
d9937246
Economic analysis indicates a relationship between consumer sentiment and stock price movements. In this study we harness features from Twitter messages to capture public mood related to four tech companies for predicting the daily up and down price movements of these companies' NASDAQ stocks. We propose a novel model combining features, namely positive and negative sentiment, consumer confidence in the product with respect to a 'bullish' or 'bearish' lexicon, and the three previous stock market movement days. The features are employed in a Decision Tree classifier using cross-validation to yield accuracies of 82.93%, 80.49%, 75.61% and 75.00% in predicting the daily up and down changes of Apple (AAPL), Google (GOOG), Microsoft (MSFT) and Amazon (AMZN) stocks respectively in a 41 market day sample.
An Experiment in Integrating Sentiment Features for Tech Stock Prediction in Twitter
d7058940
This paper describes the Error-Annotated German Learner Corpus (EAGLE), a corpus of beginning learner German with grammatical error annotation. The corpus contains online workbook and hand-written essay data from learners in introductory German courses at The Ohio State University. We introduce an error typology developed for beginning learners of German that focuses on linguistic properties of lexical items present in the learner data and that has three main error categories for syntactic errors: selection, agreement, and word order. The corpus uses an error annotation format that extends the multi-layer standoff format proposed by Lüdeling et al. (2005) to include incremental target hypotheses for each error. In this format, each annotated error includes information about the location of tokens affected by the error, the error type, and the proposed target correction. The multi-layer standoff format allows us to annotate ambiguous errors with more than one possible target correction and to annotate the multiple, overlapping errors common in beginning learner productions.
EAGLE: an Error-Annotated Corpus of Beginning Learner German
d10946638
One of the major issues in any workflow management framework is component interoperability. In this paper, we are concerned with the Apache UIMA framework. We address the problem by considering separately the development of new components and the integration of existing tools. For the former, we propose an API that generically handles type system (TS) objects by name using reflection, in order to make the components TS-independent. For the latter, we distinguish the case of aggregating heterogeneous TS-dependent UIMA components from the case of integrating non-UIMA-native third-party tools. We propose a mapper component to aggregate TS-dependent UIMA components, a component to wrap command-line third-party tools, and a set of components to connect various markup languages with the UIMA data structure. Finally, we present two situations where these solutions were effectively used: training a POS tagger from a treebank, and embedding an external POS tagger in a workflow. Our approach aims at providing quick development solutions.
Tackling interoperability issues within UIMA workflows
d15782316
As part of a project to construct an interactive program which will encourage children to play with language by building jokes, we have developed a large lexical database, closely based on WordNet. As well as the standard WordNet information about part of speech, synonymy, hyponymy, etc, we have added phonetic representations and symbolic links allowing attachment of pictures. All information is represented in a relational database, allowing powerful searches using SQL via a Java API. The lexicon has a facility to label subsets of the lexicon with symbolic names, and we are working to incorporate some educationally relevant word lists as sublexicons. This should also allow us to improve the familiarity ratings which the lexicon assigns to words.
Building a Lexical Database for an Interactive Joke-Generator
d1692610
This paper describes a prototype for automatically scoring College Board Advanced Placement (AP) Biology essays. The scoring technique used in this study was based on a previous method used to score sentence-length responses (Burstein et al., 1996). One hundred training essays were used to build an example-based lexicon and concept grammars. The prototype accesses information from the lexicon and concept grammars to score essays by assigning a classification of Excellent or Poor based on the number of points assigned during scoring. Final computer-based essay scores are based on the system's recognition of conceptual information in the essays. Conceptual analysis of essays is essential to provide a classification based on essay content. In addition, computer-generated information about essay content can be used to produce diagnostic feedback. The set of essays used in this study had been scored by human raters. The results reported in the paper show 94% agreement on exact or adjacent scores between human rater scores and computer-based scores for 105 test essays. The methods underlying this application could be used in a number of applications involving rapid semantic analysis of textual materials, especially with regard to scientific or other technical text.
An Automatic Scoring System For Advanced Placement Biology Essays
d15479326
This paper describes how a 45-hour Computers in Translation course is actually taught to 3rd-year translation students at the University of Alacant; the course described started in year 1995-1996 and has undergone substantial redesign until its present form. It is hoped that this description may be of use to instructors who are forced to teach a similar subject in such a small slot of time and need some design guidelines.
A 45-hour Computers in Translation course
d14430523
We investigate the role of increasing friendship in dialogue, and propose a first step towards a computational model of the role of long-term relationships in language use between humans and embodied conversational agents. Data came from a study of friends and strangers, who either could or could not see one another, and who were asked to give directions to one another three subsequent times. Analysis focused on differences in the use of dialogue acts and non-verbal behaviors, as well as co-occurrences of dialogue acts, eye gaze and head nods, and found a pattern of verbal and nonverbal behavior that differentiates the dialogue of friends from that of strangers, and differentiates early acquaintances from those who have worked together before. Based on these results, we present a model of deepening rapport which would enable an ECA to begin to model patterns of human relationships.
Coordination in Conversation and Rapport
d218974243
d110443
In the field of machine translation, automatic metrics have proven quite valuable in system development for tracking progress and measuring the impact of incremental changes. However, human judgment still plays a large role in the context of evaluating MT systems. For example, the GALE project uses human-targeted translation edit rate (HTER), wherein the MT output is scored against a post-edited version of itself (as opposed to being scored against an existing human reference). This poses a problem for MT researchers, since HTER is not an easy metric to calculate, and would require hiring and training human annotators to perform the editing task. In this work, we explore soliciting those edits from untrained human annotators, via the online service Amazon Mechanical Turk. We show that the collected data allows us to predict HTER-ranking of documents at a significantly higher level than the ranking obtained using automatic metrics.
Predicting Human-Targeted Translation Edit Rate via Untrained Human Annotators
d1858741
Our goal is to predict the first language (L1) of English essays' authors with the help of the TOEFL11 corpus, where L1, prompts (topics) and proficiency levels are provided. We thus approach this task as a classification task employing machine learning methods. Among the key concepts of machine learning, we focus on feature engineering. We design features across all the L1 languages without making use of knowledge of prompt and proficiency level. During system development, we experimented with various techniques for feature filtering and combination, optimized with respect to the notions of mutual information and information gain. We trained four different SVM models and combined them through majority voting, achieving an accuracy of 72.5%.
d16181278
The representative method of using morphological evidence for Chinese unknown word (UW) extraction is the Chinese word segmentation (CWS) model, and the method of using distributional evidence for UW extraction is the accessor variety (AV) criterion. However, neither of these methods has been verified on a large-scale corpus. In this paper, we propose extensions to remedy the drawbacks of these two methods so they can handle a large-scale corpus: (1) for CWS, we propose a generalized definition of word to improve recall; and (2) for AV, we propose a restricted version to decrease noise. We carry out experiments on a Chinese Web corpus with approximately 200 billion Chinese characters. Experimental results show that our methods outperform the baselines, and the combination of the two evidences can further improve performance. Moreover, our methods can also efficiently segment the corpus on the fly, which is especially valuable for processing large-scale corpora.
Extract Chinese Unknown Words from a Large-scale Corpus Using Morphological and Distributional Evidences
d18347195
Word boundary detection in variable noise-level environments by a support vector machine (SVM) using Low-band Wavelet Energy (LWE) and Zero Crossing Rate (ZCR) features is proposed in this paper. The Wavelet Energy is derived based on the Wavelet transform; it can reduce the effect of noise in a speech signal. With the inclusion of ZCR, we can robustly and effectively detect word boundaries in noise with only two features. For detector design, a Gaussian-kernel SVM is used. The proposed detection method is applied to detect word boundaries for an isolated word recognition system in variable noisy environments. Experiments with different types of noises and various signal-to-noise ratios are performed. The results show that using the LWE and ZCR parameter-based SVM, good performance is achieved. Comparison with another robust detection method has also verified the performance of the proposed method.
Wavelet Energy-Based Support Vector Machine for Noisy Word Boundary Detection With Speech Recognition Application
d5479016
In syntax-directed translation, the source-language input is first parsed into a parse tree, which is then recursively converted into a string in the target language. We model this conversion by an extended tree-to-string transducer that has multi-level trees on the source side, which gives our system more expressive power and flexibility. We also define a direct probability model and use a linear-time dynamic programming algorithm to search for the best derivation. The model is then extended to the general log-linear framework in order to incorporate other features like n-gram language models. We devise a simple-yet-effective algorithm to generate non-duplicate k-best translations for n-gram rescoring. Preliminary experiments on English-to-Chinese translation show a significant improvement in terms of translation quality compared to a state-of-the-art phrase-based system.
Statistical Syntax-Directed Translation with Extended Domain of Locality
d14409919
This paper proposes a networked data mining method for discovering relations from a large corpus. The key idea is representing named-entity pairs and their contexts as a network structure and detecting communities in the network. Each community then corresponds to a relation: the named-entity pairs in the same community share the same relation. Finally, we label the relations. Our experiment using the corpus of People's Daily reveals not only that relations among named entities can be detected with high precision, but also that appropriate labels can be automatically provided for the relations.
Discovering Relations among Named Entities by Detecting Community Structure
d226283941
d14067
We offer a critical review of the current state of opinion role extraction involving opinion verbs. We argue that neither the currently available lexical resources nor the manually annotated text corpora are sufficient to appropriately study this task. We introduce a new corpus focusing on opinion roles of opinion verbs from the Subjectivity Lexicon and show potential benefits of this corpus. We also demonstrate that state-of-the-art classifiers perform rather poorly on this new dataset compared to the standard dataset for the task, showing that significant research remains to be done.
Opinion Holder and Target Extraction for Verb-based Opinion Predicates -The Problem is Not Solved
d18957176
EUFID: A Friendly and Flexible Front-End for Data Management Systems
d45270485
In this article, we address the question of how temporality functions in French Sign Language (LSF), studying in particular some structures expressing duration. We first present existing descriptions of the aspecto-temporal system of LSF and the difficulties we encounter in modeling this work. The goal of this article is to propose a formal grammar that accounts for the functioning of LSF and that lends itself to computational modeling. Our approach consists of studying an LSF corpus to establish function-to-form links, in order to obtain grammar rules that can be used for generation in a synthesis project with a signing avatar.
21 ème Traitement Automatique des Langues Naturelles
d10052144
The FLaReNet Strategic Agenda highlights the most pressing needs for the sector of Language Resources and Technologies and presents a set of recommendations for its development and progress in Europe, as issued from a three-year consultation of the FLaReNet European project. The FLaReNet recommendations are organised around nine dimensions: a) documentation b) interoperability c) availability, sharing and distribution d) coverage, quality and adequacy e) sustainability f) recognition g) development h) infrastructure and i) international cooperation. As such, they cover a broad range of topics and activities, spanning over production and use of language resources, licensing, maintenance and preservation issues, infrastructures for language resources, resource identification and sharing, evaluation and validation, interoperability and policy issues. The intended recipients belong to a large set of players and stakeholders in Language Resources and Technology, ranging from individuals to research and education institutions, to policy-makers, funding agencies, SMEs and large companies, service and media providers. The main goal of these recommendations is to serve as an instrument to support stakeholders in planning for and addressing the urgencies of the Language Resources and Technologies of the future.
The FLaReNet Strategic Language Resource Agenda
d17940556
We present a data-driven approach to learn user-adaptive referring expression generation (REG) policies for spoken dialogue systems. Referring expressions can be difficult to understand in technical domains where users may not know the technical 'jargon' names of the domain entities. In such cases, dialogue systems must be able to model the user's (lexical) domain knowledge and use appropriate referring expressions. We present a reinforcement learning (RL) framework in which the system learns REG policies which can adapt to unknown users online. Furthermore, unlike supervised learning methods which require a large corpus of expert adaptive behaviour to train on, we show that effective adaptive policies can be learned from a small dialogue corpus of non-adaptive human-machine interaction, by using a RL framework and a statistical user simulation. We show that in comparison to adaptive hand-coded baseline policies, the learned policy performs significantly better, with an 18.6% average increase in adaptation accuracy. The best learned policy also takes less dialogue time (average 1.07 min less) than the best hand-coded policy. This is because the learned policies can adapt online to changing evidence about the user's domain expertise.
Learning to Adapt to Unknown Users: Referring Expression Generation in Spoken Dialogue Systems
d15281226
In this demonstration, we will showcase BBN's Speech-to-Speech (S2S) translation system that employs novel interaction strategies to resolve errors through user-friendly dialog with the speaker. The system performs a series of analyses on input utterances to detect out-of-vocabulary (OOV) named entities and terms, sense ambiguities, homophones, idioms and ill-formed inputs. This analysis is used to identify potential errors and select an appropriate resolution strategy. Our evaluation shows a 34% (absolute) improvement in cross-lingual transfer of erroneous concepts in our English to Iraqi-Arabic S2S system.
Interactive Error Resolution Strategies for Speech-to-Speech Translation Systems
d17281411
Verb Particle Constructions (VPCs) are flexible in nature and hence quite complex and challenging to handle. As a consequence, VPCs generate a lot of interest for NLP community. Despite their prevalence in English they are not handled very well, and hence often result in poor quality of translation. In this paper we investigate VPCs for English to Hindi translation. An English VPC can have different meanings in Hindi based on what its neighboring entities are. The paper focuses on finding the correct Hindi verb for an English VPC. We also discuss some rules for VPC identification, and approaches for resolving the context of a VPC for English to Hindi machine translation.
Context Resolution of Verb Particle Constructions for English to Hindi Translation
d58248297
From a purely theoretical point of view, it makes sense to approach recognizing textual entailment (RTE) with the help of logic. After all, entailment matters are all about logic. In practice, only few RTE systems follow the bumpy road from words to logic. This is probably because it requires a combination of robust, deep semantic analysis and logical inference, and why develop something with this complexity if you perhaps can get away with something simpler? In this article, with the help of an RTE system based on Combinatory Categorial Grammar, Discourse Representation Theory, and first-order theorem proving, we make an empirical assessment of the logic-based approach. High precision paired with low recall is a key characteristic of this system. The bottleneck in achieving high recall is the lack of a systematic way to produce relevant background knowledge. There is a place for logic in RTE, but it is (still) overshadowed by the knowledge acquisition problem.
Is there a place for logic in recognizing textual entailment?
d227230691
d20499557
Language resources and tools to create and process these resources are necessary components in human language technology and natural language applications. In this paper, we describe a survey of existing language resources for Swedish, and the need for Swedish language resources to be used in research and real-world applications in language technology as well as in linguistic research. The survey is based on a questionnaire sent to industry and academia, institutions and organizations, and to experts involved in the development of Swedish language resources in Sweden, the Nordic countries and world-wide.
Language Resources and Tools for Swedish: A Survey
d1183321
The implications of a specific pseudometric on the collection of languages over a finite alphabet are explored. In distinction from an approach in (Calude et al., 2009) that relates to collections of infinite or bi-infinite sequences, the present work is based on an adaptation of the "Besicovitch" pseudometric introduced by Besicovitch (1932) and elaborated in (Cattaneo et al., 1997) in the context of cellular automata. Using this pseudometric to form a metric quotient space, we study its properties and draw conclusions about the location of certain well-understood families of languages in the language space. We find that topologies, both on the space of formal languages itself and upon quotient spaces derived from pseudometrics on the language space, may offer insights into the relationships, and in particular the distance, between languages over a common alphabet.
Topology of Language Classes
d226262318
Our code is available at https://github.com/WangsyGit/PathQG. (Figure captions: (a) machine-generated questions (Qs) for an input text together with human-generated ones (GTQs); phrases underlined are the answers to the ground-truth questions. (b) Knowledge graph constructed from the input text; two colored ellipsoids are query paths related to the two ground-truth questions in sub-figure (a); nodes in green are covered by ground-truth questions.)
PathQG: Neural Question Generation from Facts
d1034973
This paper describes a system aimed at automatically scoring two task types of high and medium-high linguistic entropy from a spoken English test with a total of six widely differing task types. We describe the speech recognizer used for this system and its acoustic model and language model adaptation; the speech features computed based on the recognition output; and finally the scoring models based on multiple regression and classification trees. For both tasks, agreement measures between machine and human scores (correlation, kappa) are close to or reach inter-human agreements.
Towards Automatic Scoring of a Test of Spoken Language with Heterogeneous Task Types
d15416031
Previous work on opinion mining and sentiment analysis mainly concerns product, movie, or literature reviews; few have applied this technique to analyze the publicity of a person. We present a novel document modeling method that utilizes embeddings of emotion keywords to perform reader's emotion classification, and calculates a publicity score that serves as a quantifiable measure for the publicity of a person of interest. Experiments are conducted on two Chinese corpora that in total consist of over forty thousand users' emotional responses after reading news articles. Results demonstrate that the proposed method can outperform state-of-the-art reader-emotion classification methods, and provide a substantial ground for publicity score estimation for candidates of political elections. We believe it is a promising direction for mining the publicity of a person from online social and news media that can be useful for propaganda and other purposes.
How Do I Look? Publicity Mining From Distributed Keyword Representation of Socially Infused News Articles
d28992980
When response metrics for evaluating the utility of machine translation (MT) output on a given task do not yield a single ranking of MT engines, how are MT users to decide which engine best supports their task? When the cost of different types of response errors varies, how are MT users to factor that information into their rankings? What impact do different costs have on response-based rankings? Starting with data from an extraction experiment detailed in Voss & Tate (2006), this paper describes three response-rate metrics developed to quantify different aspects of MT users' performance identifying who/when/where-items in MT output, and then presents a loss function analysis over these rates to derive a single customizable metric, applying a range of values to correct responses and costs to different error types. For the given experimental dataset, loss function analyses provided a clearer characterization of the engines' relative strengths than did comparing the response rates to each other. For one MT engine, varying the costs had no impact: the engine consistently ranked best. By contrast, cost variations did impact the ranking of the other two engines: a rank reversal occurred on who-item extractions when incorrect responses were penalized more than non-responses. Future work with loss analysis, developing operational cost ratios of error rates to correct response rates, will require user studies and expert document-screening personnel to establish baseline values for effective MT engine support on wh-item extraction.
Combining Evaluation Metrics Via Loss Functions
d7529491
Non-sentential utterances (e.g., short answers as in "Who came to the party?" - "Peter.") are pervasive in dialogue. As with other forms of ellipsis, the elided material is typically present in the context (e.g., the question that a short answer answers). We present a machine learning approach to the novel task of identifying fragments and their antecedents in multiparty dialogue. We compare the performance of several learning algorithms, using a mixture of structural and lexical features, and show that the task of identifying antecedents given a fragment can be learnt successfully (f (0.5) = .76); we discuss why the task of identifying fragments is harder (f (0.5) = .41) and finally report on a combined task (f (0.5) = .38).
Towards Finding and Fixing Fragments: Using ML to Identify Non-Sentential Utterances and their Antecedents in Multi-Party Dialogue
d252624637
This paper presents the Latvian Language Learner Corpus (LaVA) developed at the Institute of Mathematics and Computer Science, University of Latvia. LaVA corpus contains 1015 essays (190k tokens and 790k characters excluding whitespaces) from foreigners studying at Latvian higher education institutions and who are learning Latvian as a foreign language in the first or second semester, reaching the A1 (possibly A2) Latvian language proficiency level. The corpus has morphological and error annotations. Error analysis and the statistics of the LaVA corpus are also provided in the paper. The corpus is publicly available at: http://www.korpuss.lv/id/LaVA.
LaVA -Latvian Language Learner corpus
d218974283
This paper describes the training of a general-purpose German sentiment classification model. Sentiment classification is an important aspect of general text analytics. Furthermore, it plays a vital role in dialogue systems and voice interfaces that depend on the ability of the system to pick up and understand emotional signals from user utterances. The presented study outlines how we have collected a new German sentiment corpus and then combined this corpus with existing resources to train a broad-coverage German sentiment model. The resulting data set contains 5.4 million labelled samples. We have used the data to train both a simple convolutional and a transformer-based classification model and compared the results achieved on various training configurations. The model and the data set will be published along with this paper.
Training a Broad-Coverage German Sentiment Classification Model for Dialog Systems
d42654123
The Dictionaries division at Oxford University Press (OUP) is aiming to model, integrate, and publish lexical content for 100 languages focussing on digitally under-represented languages. While there are multiple ontologies designed for linguistic resources, none had adequate features for meeting our requirements, chief of which was the capability to losslessly capture diverse features of many different languages in a dictionary format, while supplying a framework for inferring relations like translation, derivation, etc., between the data. Building on valuable features of existing models, and working with OUP monolingual and bilingual dictionary datasets, we have designed and implemented a new linguistic ontology. The ontology has been reviewed by a number of computational linguists, and we are working to move more dictionary data into it. We have also developed APIs to surface the linked data to dictionary websites.
Towards a Linguistic Ontology with an Emphasis on Reasoning and Knowledge Reuse
d10442166
Automatic Word Spacing Using Hidden Markov Model for Refining Korean Text Corpora. This paper proposes a word spacing model using a hidden Markov model (HMM) for refining Korean raw text corpora. Previous statistical approaches for automatic word spacing have used models that make use of inaccurate probabilities because they do not consider the previous spacing state. We consider the word spacing problem as a classification problem such as Part-of-Speech (POS) tagging and have experimented with various models considering extended context. Experimental results show that the performance of the model becomes better as more context is considered. In the case where the same number of parameters is used as with other methods, our model is shown to be more effective by producing better results.
d14813549
First story detection (FSD) involves identifying first stories about events from a continuous stream of documents. A major problem in this task is the high degree of lexical variation in documents which makes it very difficult to detect stories that talk about the same event but expressed using different words. We suggest using paraphrases to alleviate this problem, making this the first work to use paraphrases for FSD. We show a novel way of integrating paraphrases with locality sensitive hashing (LSH) in order to obtain an efficient FSD system that can scale to very large datasets. Our system achieves state-of-the-art results on the first story detection task, beating both the best supervised and unsupervised systems. To test our approach on large data, we construct a corpus of events for Twitter, consisting of 50 million documents, and show that paraphrasing is also beneficial in this domain.
Using paraphrases for improving first story detection in news and Twitter
d226239317
d256461142
Sentence embeddings in the form of fixed-size vectors that capture the information in the sentence as well as the context are critical components of Natural Language Processing systems. With transformer model based sentence encoders outperforming the other sentence embedding methods in the general domain, we explore the transformer based architectures to generate dense sentence embeddings in the biomedical domain. In this work, we present BioSimCSE, where we train sentence embeddings with domain specific transformer based models with biomedical texts. We assess our model's performance with zero-shot and fine-tuned settings on Semantic Textual Similarity (STS) and Recognizing Question Entailment (RQE) tasks. Our BioSimCSE model using BioLinkBERT achieves state of the art (SOTA) performance on both tasks.
BioSimCSE: BioMedical Sentence Embeddings using Contrastive learning
d256461174
Creative texts can sometimes be difficult to understand as they balance on the edge of comprehensibility. However, good language skills and common sense can allow advanced language users to both interpret creative texts and reject some linguistic input as nonsense. The goal of this work is to evaluate whether current language models can make the distinction between creative language use and nonsense. To test this, we have computed the mean rank and pseudo-log-likelihood score (PLL) of metaphorical and nonsensical sentences. We have also fine-tuned RoBERTa for binary classification between the two categories. There was a significant difference in the mean ranks and PLL scores of the categories, and the classifier reached around 75-88% accuracy. The results raise interesting questions on what could have led to such satisfactory performance.
On the Cusp of Comprehensibility: Can Language Models Distinguish Between Metaphors and Nonsense?
d259370591
We introduce LAVIS, an open-source deep learning library for LAnguage-VISion research and applications. LAVIS aims to serve as a one-stop comprehensive library that makes recent advancements in the language-vision field accessible for researchers and practitioners, as well as fertilizing future research and development. It features a unified interface to easily access state-of-the-art image-language, video-language models and common datasets. LAVIS supports training, evaluation and benchmarking on a rich variety of tasks, including multimodal classification, retrieval, captioning, visual question answering, dialogue and pre-training. In the meantime, the library is also highly extensible and configurable, facilitating future development and customization. In this paper, we describe design principles, key components and functionalities of the library, and also present benchmarking results across common language-vision tasks.
LAVIS: A One-stop Library for Language-Vision Intelligence
d20566599
In recent years, the development of intelligent tutoring dialogue systems has become more prevalent, in an attempt to close the performance gap between human and computer tutors. Tutoring applications differ in many ways, however, from the types of applications for which spoken dialogue systems are typically developed. This talk will illustrate some of the opportunities and challenges in this area, focusing on issues such as affective reasoning, discourse and dialogue analysis, and performance evaluation.
Invited Talk Discourse and Dialogue Processing in Spoken Intelligent Tutoring Systems
d18517863
Assamese is a morphologically rich, agglutinative and relatively free word order Indic language. Although spoken by nearly 30 million people, very little computational linguistic work has been done for this language. In this paper, we present our work on part of speech (POS) tagging for Assamese using the well-known Hidden Markov Model. Since no well-defined suitable tagset was available, we develop a tagset of 172 tags in consultation with experts in linguistics. For successful tagging, we examine relevant linguistic issues in Assamese. For unknown words, we perform simple morphological analysis to determine probable tags. Using a manually tagged corpus of about 10000 words for training, we obtain a tagging accuracy of nearly 87% for test inputs.
Part of Speech Tagger for Assamese Text
d1829055
The purpose of our work is to explore the possibility of using sentence diagrams produced by schoolchildren as training data for automatic syntactic analysis. We have implemented a sentence diagram editor that schoolchildren can use to practice morphology and syntax. We collect their diagrams, combine them into a single diagram for each sentence and transform them into a form suitable for training a particular syntactic parser. In this study, the object language is Czech, where sentence diagrams are part of the elementary school curriculum, and the target format is the annotation scheme of the Prague Dependency Treebank. We mainly focus on the evaluation of individual diagrams and on their combination into a merged better version.
Sentence diagrams: their evaluation and combination
d1548819
d6367774
We introduce the zipfR package, a powerful and user-friendly open-source tool for LNRE modeling of word frequency distributions in the R statistical environment. We give some background on LNRE models, discuss related software and the motivation for the toolkit, describe the implementation, and conclude with a complete sample session showing a typical LNRE analysis.
zipfR: Word Frequency Distributions in R
d9632975
Entailment recognition approaches are useful for application domains such as information extraction, question answering or summarisation, for which evidence from multiple sentences needs to be combined. We report on a new 3-way judgement Recognizing Textual Entailment (RTE) resource that originates in the Social Media domain, and explain our semi-automatic creation method for the special purpose of information verification, which draws on manually established rumourous claims reported during crisis events. From about 500 English tweets related to 70 unique claims we compile and evaluate 5.4k RTE pairs, while we continue to automate the workflow to generate similar-sized datasets in other languages.
Monolingual Social Media Datasets for Detecting Contradiction and Entailment
d21729459
Emotion recognition has become a popular topic of interest, especially in the field of human computer interaction. Previous works involve unimodal analysis of emotion, while recent efforts focus on multimodal emotion recognition from vision and speech. In this paper, we propose a new method of learning about the hidden representations between just speech and text data using convolutional attention networks. Compared to the shallow model which employs simple concatenation of feature vectors, the proposed attention model performs much better in classifying emotion from speech and text data contained in the CMU-MOSEI dataset.
Convolutional Attention Networks for Multimodal Emotion Recognition from Speech and Text Data
d16954494
In this paper, we study the problem of disfluency detection using the encoder-decoder framework. We treat disfluency detection as a sequence-to-sequence problem and propose a neural attention-based model which can efficiently model the long-range dependencies between words and make the resulting sentence more likely to be grammatically correct. Our model first encodes the source sentence with a bidirectional Long Short-Term Memory (BI-LSTM) and then uses the neural attention as a pointer to select an ordered subsequence of the input as the output. Experiments show that our model achieves the state-of-the-art f-score of 86.7% on the commonly used English Switchboard test set. We also evaluate the performance of our model on the in-house annotated Chinese data and achieve a significantly higher f-score compared to the baseline of CRF-based approach.
A Neural Attention Model for Disfluency Detection
d9331674
In this paper, we question the homogeneity of a large parallel corpus by measuring the similarity between various sub-parts. We compare results obtained using a general measure of lexical similarity based on χ2 and by counting the number of discourse connectives. We argue that discourse connectives provide a more sensitive measure, revealing differences that are not visible with the general measure. We also provide evidence for the existence of specific characteristics defining translated texts as opposed to nontranslated ones, due to a universal tendency for explicitation.
How Comparable are Parallel Corpora? Measuring the Distribution of General Vocabulary and Connectives
d6453189
We present a shallow approach to the sentence ordering problem. The employed features are based on discourse entities, shallow syntactic analysis, and temporal precedence relations retrieved from VerbOcean. We show that these relatively simple features perform well in a machine learning algorithm on datasets containing sequences of events, and that the resulting models achieve optimal performance with small amounts of training data. The model does not yet perform well on datasets describing the consequences of events, such as the destructions after an earthquake.
Domain-Independent Shallow Sentence Ordering
d248780478
Neural networks are widely used in various NLP tasks for their remarkable performance. However, the complexity makes them difficult to interpret, i.e., they are not guaranteed right for the right reason. Besides the complexity, we reveal that model pathology, the inconsistency between word saliency and model confidence, further hurts interpretability. We show that the pathological inconsistency is caused by the representation collapse issue, which means that the representations of sentences with tokens of different saliency reduced are somehow collapsed, and thus the important words cannot be distinguished from unimportant words in terms of model confidence changing. In this paper, to mitigate the pathology and obtain more interpretable models, we propose the Pathological Contrastive Training (PCT) framework, which adopts contrastive learning and saliency-based sample augmentation to calibrate sentence representations. Combined with qualitative analysis, we also conduct extensive quantitative experiments and measure the interpretability with eight reasonable metrics. Experiments show that our method can mitigate the model pathology and generate more interpretable models while keeping the model performance. An ablation study also shows the effectiveness.
Mitigating the Inconsistency Between Word Saliency and Model Confidence with Pathological Contrastive Training
d235258294
d250390521
Previous studies have shown that the Abstract Meaning Representation (AMR) can improve Neural Machine Translation (NMT). However, there has been little work investigating incorporating AMR graphs into Transformer models. In this work, we propose a novel encoder-decoder architecture which augments the Transformer model with a Heterogeneous Graph Transformer (Yao et al., 2020) which encodes source sentence AMR graphs. Experimental results demonstrate the proposed model outperforms the Transformer model and previous non-Transformer based models on two different language pairs in both the high resource setting and low resource setting. Our source code, training corpus and released models are available at https://github.com/jlab-nlp/amr-nmt.
Improving Neural Machine Translation with the Abstract Meaning Representation by Combining Graph and Sequence Transformers
d18205010
We present POLYGLOTIE, a web-based tool for developing extractors that perform Information Extraction (IE) over multilingual data. Our tool has two core features: First, it allows users to develop extractors against a unified abstraction that is shared across a large set of natural languages. This means that an extractor needs only be created once for one language, but will then run on multilingual data without any additional effort or language-specific knowledge on part of the user. Second, it embeds this abstraction as a set of views within a declarative IE system, allowing users to quickly create extractors using a mature IE query language. We present POLYGLOTIE as a hands-on demo in which users can experiment with creating extractors, execute them on multilingual text and inspect extraction results. Using the UI, we discuss the challenges and potential of using unified, crosslingual semantic abstractions as basis for downstream applications. We demonstrate multilingual IE for 9 languages from 4 different language groups: English, German, French, Spanish, Japanese, Chinese, Arabic, Russian and Hindi.
Multilingual Information Extraction with POLYGLOTIE
d2365382
The Internet has become a very popular platform for communication around the world. However, because most modern computer keyboards are Latin-based, Asian language speakers (such as Chinese) cannot input characters (Hanzi) directly with these keyboards. As a result, methods for representing Chinese characters using Latin alphabets were introduced. The most popular method among these is the Pinyin input system. Pinyin is also called "Romanised" Chinese in that it phonetically resembles a Chinese character. Due to the highly ambiguous mapping from Pinyin to Chinese characters, word misuses can occur using a standard computer keyboard, and more commonly so in internet chat-rooms or instant messengers where the language used is less formal. In this paper we aim to develop a system that can automatically identify such anomalies, whether they are simple typos or intentional substitutions. After identifying them, the system should suggest the correct word to be used.
Professor or screaming beast? Detecting Words Misuse in Chinese
d81603
Many methods of text summarization combining sentence selection and sentence compression have recently been proposed. Although the dependency between words has been used in most of these methods, the dependency between sentences, i.e., rhetorical structures, has not been exploited in such joint methods. We used both dependency between words and dependency between sentences by constructing a nested tree, in which nodes in the document tree representing dependency between sentences were replaced by a sentence tree representing dependency between words. We formulated a summarization task as a combinatorial optimization problem, in which the nested tree was trimmed without losing important content in the source document. The results from an empirical evaluation revealed that our method based on the trimming of the nested tree significantly improved the summarization of texts.
Single Document Summarization based on Nested Tree Structure
d252818996
Multi-modal neural machine translation (MNMT) aims to improve textual level machine translation performance in the presence of text-related images. Most of the previous works on MNMT focus on multi-modal fusion methods with full visual features. However, text and its corresponding image may not match exactly, and visual noise is generally inevitable. The irrelevant image regions may mislead or distract the textual attention and cause model performance degradation. This paper proposes a noise-robust multi-modal interactive fusion approach with a cross-modal relation-aware mask mechanism for MNMT. A text-image relation-aware attention module is constructed through the cross-modal interaction mask mechanism, and visual features are extracted based on the text-image interaction mask knowledge. Then a noise-robust multi-modal adaptive fusion approach is presented by fusing the relevant visual and textual features for machine translation. We validate our method on the Multi30K dataset. The experimental results show the superiority of our proposed model, and achieve state-of-the-art scores in all En-De, En-Fr and En-Cs translation tasks. Our code is available at https://github.com/nlp-mmt/Noise-robust-Text2image-Mask.
Noise-robust Cross-modal Interactive Learning with Text2image Mask for Multi-modal Neural Machine Translation
d7328821
Term translation is of great importance for statistical machine translation (SMT), especially document-informed SMT. In this paper, we investigate three issues of term translation in the context of documentinformed SMT and propose three corresponding models: (a) a term translation disambiguation model which selects desirable translations for terms in the source language with domain information, (b) a term translation consistency model that encourages consistent translations for terms with a high strength of translation consistency throughout a document, and (c) a term bracketing model that rewards translation hypotheses where bracketable source terms are translated as a whole unit. We integrate the three models into hierarchical phrase-based SMT and evaluate their effectiveness on NIST Chinese-English translation tasks with large-scale training data. Experiment results show that all three models can achieve significant improvements over the baseline. Additionally, we can obtain a further improvement when combining the three models.
Modeling Term Translation for Document-informed Machine Translation
d4313559
The lack of positive results on supervised domain adaptation for WSD have cast some doubts on the utility of handtagging general corpora and thus developing generic supervised WSD systems. In this paper we show for the first time that our WSD system trained on a general source corpus (BNC) and the target corpus, obtains up to 22% error reduction when compared to a system trained on the target corpus alone. In addition, we show that as little as 40% of the target corpus (when supplemented with the source corpus) is sufficient to obtain the same results as training on the full target data. The key for success is the use of unlabeled data with SVD, a combination of kernels and SVM.
Supervised Domain Adaption for WSD
d15420134
This paper considers the problem of document-level multi-way sentiment detection, proposing a hierarchical classifier algorithm that accounts for the inter-class similarity of tagged sentiment-bearing texts. This type of classifier also provides a natural mechanism for reducing the feature space of the problem. Our results show that this approach improves on state-of-the-art predictive performance for movie reviews with three-star and four-star ratings, while simultaneously reducing training times and memory requirements.
A Hierarchical Classifier Applied to Multi-way Sentiment Detection
d1253703
This paper presents an empirical study on four techniques of language model adaptation, including a maximum a posteriori (MAP) method and three discriminative training models, in the application of Japanese Kana-Kanji conversion. We compare the performance of these methods from various angles by adapting the baseline model to four adaptation domains. In particular, we attempt to interpret the results given in terms of the character error rate (CER) by correlating them with the characteristics of the adaptation domain measured using the information-theoretic notion of cross entropy. We show that such a metric correlates well with the CER performance of the adaptation methods, and also show that the discriminative methods are not only superior to a MAP-based method in terms of achieving larger CER reduction, but are also more robust against the similarity of background and adaptation domains.
An Empirical Study on Language Model Adaptation Using a Metric of Domain Similarity
d323236
Developing a system that can automatically respond to a user's utterance has recently become a topic of research in natural language processing. However, most works on the topic take into account only a single preceding utterance to generate a response. Recent works demonstrate that the application of statistical machine translation (SMT) techniques towards monolingual dialogue setting, in which a response is treated as a translation of a stimulus, has a great potential, and we exploit the approach to tackle the context-dependent response generation task. We attempt to extract relevant and significant information from the wider contextual scope of the conversation, and incorporate it into the SMT techniques. We also discuss the advantages and limitations of this approach through our experimental results.
Context-Dependent Automatic Response Generation Using Statistical Machine Translation Techniques
d17915994
d201680843
d12311035
In this paper, we adopt an n-best rescoring scheme using pitch-accent patterns to improve automatic speech recognition (ASR) performance. The pitch-accent model is decoupled from the main ASR system, thus allowing us to develop it independently. N-best hypotheses from recognizers are rescored by additional scores that measure the correlation of the pitch-accent patterns between the acoustic signal and lexical cues. To test the robustness of our algorithm, we use two different data sets and recognition setups: the first is English radio news data that has pitch-accent labels, but whose recognizer is trained on a small amount of data and has a high error rate; the second is English broadcast news data using a state-of-the-art SRI recognizer. Our experimental results demonstrate that our approach is able to reduce word error rate by about 3% relative. This gain is consistent across the two different tests, suggesting promising directions for incorporating prosodic information to improve speech recognition.
N-Best Rescoring Based on Pitch-accent Patterns
d27334701
In this paper, we present a community answer ranking system based on Grice's Maxims. In particular, the system ranks answers by relevancy scores assigned by three main components: named entity recognition, similarity scoring, and sentiment analysis.
TrentoTeam at SemEval-2017 Task 3: An application of Grice Maxims in Ranking Community Question Answers
d208332328
d3097795
This paper presents a clustering approach that simultaneously identifies product features and groups them into aspect categories from online reviews. Unlike prior approaches that first extract features and then group them into categories, the proposed approach combines feature and aspect discovery instead of chaining them. In addition, prior work on feature extraction tends to require seed terms and focus on identifying explicit features, while the proposed approach extracts both explicit and implicit features, and does not require seed terms. We evaluate this approach on reviews from three domains. The results show that it outperforms several state-of-the-art methods on both tasks across all three domains.
Clustering for Simultaneous Extraction of Aspects and Features from Reviews
d233029471
This paper proposes two BERT-based models for accurately rescoring (reranking) N-best speech recognition hypothesis lists. Reranking the N-best hypothesis lists decoded from the acoustic model has been proven to improve the performance of two-stage automatic speech recognition (ASR) systems. With the rise of pre-trained contextualized language models, which have achieved state-of-the-art performance in many NLP applications, there is nevertheless a dearth of work investigating their effectiveness in ASR. In this paper, we develop simple yet effective methods for improving ASR by reranking the N-best hypothesis lists leveraging BERT (bidirectional encoder representations from Transformers). Specifically, we treat reranking N-best hypotheses as a downstream task by simply fine-tuning the pre-trained BERT. We propose two BERT-based reranking language models: (1) uniBERT, which elicits an ideal unigram from a given N-best list, taking advantage of BERT to assist an LSTM language model (LSTMLM); and (2) classBERT, which treats N-best list reranking as a multi-class classification problem. These models attempt to harness the power of BERT to rerank the N-best hypothesis lists generated in the initial ASR pass. Experiments on the benchmark AMI dataset show that the proposed reranking methods outperform the baseline LSTMLM, a strong and widely used competitor, with a 3.14% improvement in word error rate (WER).
Innovative Pretrained-based Reranking Language Models for N-best Speech Recognition Lists
d6077224
Revising an ATN Parser
d11026805
In this paper, we demonstrate how the state-of-the-art machine learning and text mining techniques can be used to build effective social media-based substance use detection systems. Since a substance use ground truth is difficult to obtain on a large scale, to maximize system performance, we explore different unsupervised feature learning methods to take advantage of a large amount of unsupervised social media data. We also demonstrate the benefit of using multi-view unsupervised feature learning to combine heterogeneous user information such as Facebook "likes" and "status updates" to enhance system performance. Based on our evaluation, our best models achieved 86% AUC for predicting tobacco use, 81% for alcohol use and 84% for illicit drug use, all of which significantly outperformed existing methods. Our investigation has also uncovered interesting relations between a user's social media behavior (e.g., word usage) and substance use.
Multi-View Unsupervised User Feature Embedding for Social Media-based Substance Use Prediction
d246702333
The detection of hyperbole is an important stepping stone to understanding the intentions of a hyperbolic utterance. We propose a model that combines pre-trained language models with privileged information for the task of hyperbole detection. We also introduce a suite of behavioural tests to probe the capabilities of hyperbole detection models across a range of hyperbole types. Our experiments show that our model improves upon baseline models on an existing hyperbole detection dataset. Probing experiments combined with analysis using local linear approximations (LIME) show that our model excels at detecting one particular type of hyperbole. Further, our experiments uncover annotation artifacts introduced through the process of literal paraphrasing of hyperbole. These annotation artifacts are likely to be a roadblock to further improvements in hyperbole detection.
Harnessing Privileged Information for Hyperbole Detection
d17987473
We describe the systems submitted to the shared task on pronoun prediction organized within the Second DiscoMT Workshop. The systems are trained on linguistically motivated features extracted from both sides of an English-French parallel corpus and their parses. We have used a parser that integrates morphological disambiguation and which handles the REPLACE_XX placeholders explicitly. In particular, we compare the relevance of three groups of features: a) syntactic (from the English parse), b) morphological (from the French morphological analysis) and c) contextual (from the French sentence) for French pronoun prediction. A discussion on the role of these sets of features for each pronoun class is included.
Predicting Pronoun Translation Using Syntactic, Morphological and Contextual Features from Parallel Data
d3618812
Treebanks are not large enough to reliably model precise lexical phenomena. This deficiency causes attachment errors in parsers trained on such data. We propose in this paper to compute lexical affinities, on large corpora, for specific lexico-syntactic configurations that are hard to disambiguate, and to introduce this new information into a parser. Experiments on the French Treebank showed a relative error-rate reduction of 7.1% in Labeled Accuracy Score, yielding the best parsing results on this treebank.
Semi-supervised Dependency Parsing using Lexical Affinities
d259370793
Despite the recent advances in dialogue state tracking (DST), the joint goal accuracy (JGA) of the existing methods on MultiWOZ 2.1 still remains merely 60%. In our preliminary error analysis, we find that beam search produces a pool of candidates that is likely to include the correct dialogue state. Motivated by this observation, we introduce a novel framework, called BREAK (Beam search and RE-rAnKing), that achieves outstanding performance on DST. Our proposed method performs DST in two stages: (i) generating k-best dialogue state candidates with beam search and (ii) re-ranking the candidates to select the correct dialogue state. This simple yet powerful framework shows state-of-the-art performance on all versions of MultiWOZ and M2M datasets. Most notably, we push the joint goal accuracy to 80-90% on MultiWOZ 2.1-2.4, which is an improvement of 23.6%, 26.3%, 21.7%, and 10.8% over the previous best-performing models, respectively. The data and code will be available at https://github.com/tony-won/DST-BREAK.
BREAK: Breaking the Dialogue State Tracking Barrier with Beam Search and Re-ranking
d1177283
This paper investigates the causes of the comparatively low success rates in finding the antecedents of plural pronouns as compared to finding antecedents of singular pronouns. We are trying to show experimentally that considering morphological agreement as a strong constraint in pronoun resolution results in the erroneous interpretation of almost a quarter of the plural pronouns. The work is based on analysing sample texts from the British National Corpus and online technical manuals.
A corpus based investigation of morphological disagreement in anaphoric relations
d16964865
CDB is a relational database designed for the particular needs of representing lexical collocations. The relational model is defined such that competence-based descriptions of collocations (the competence base) and actually occurring collocation examples extracted from text corpora (the example base) complement each other. In the paper, the relational model is described and examples of the representation of German PP-verb collocations are given. A number of example queries are presented, and additional facilities built on top of the database are discussed.
CDB -A Database of Lexical Collocations
d196189905
Conversational machine reading comprehension (CMRC) extends traditional single-turn machine reading comprehension (MRC) by multi-turn interactions, which require machines to consider the history of the conversation. Most models simply combine previous questions for conversation understanding and only employ recurrent neural networks (RNN) for reasoning. To comprehend context profoundly and efficiently from different perspectives, we propose a novel neural network model, Multi-perspective Convolutional Cube (MC²). We regard each conversation as a cube. 1D and 2D convolutions are integrated with RNN in our model. To avoid models previewing the next turn of conversation, we also extend causal convolution partially to 2D. Experiments on the Conversational Question Answering (CoQA) dataset show that our model achieves state-of-the-art results.
MC²: Multi-perspective Convolutional Cube for Conversational Machine Reading Comprehension
d209315202
d187965753
d253802640
Asking questions during a lecture is a central part of the traditional classroom setting which benefits both students and instructors in many ways. However, no previous work has studied the task of automatically generating student questions based on explicit lecture context. We study the feasibility of automatically generating student questions given the lecture transcript windows where the questions were asked. First, we create a data set of student questions and their corresponding lecture transcript windows. Using this data set, we investigate variants of T5, a sequence-to-sequence generative language model, for a preliminary exploration of this task. Specifically, we compare the effects of training with continuous prefix tuning and pre-training with search engine queries. Question generation evaluation results on two MOOCs show that pre-training on search engine queries tends to make the generation model more precise, whereas continuous prefix tuning offers mixed results.
Generation of Student Questions for Inquiry-based Learning
d12878133
In this paper we investigate the relevance of aspectual type for the problem of temporal information processing, i.e. the problems of the recent TempEval challenges.
Aspectual Type and Temporal Relation Classification