_id (stringlengths 4–10) · text (stringlengths 0–18.4k) · title (stringlengths 0–8.56k)
d14734025
The modelling of natural language tasks using data-driven methods is often hindered by the problem of insufficient naturally occurring examples of certain linguistic constructs. The task we address in this paper, quality estimation (QE) of machine translation, suffers from a lack of negative examples at training time, i.e., examples of low-quality translation. We propose various ways to artificially generate examples of translations containing errors and evaluate the influence of these examples on the performance of QE models at both the sentence and word levels.
The role of artificially generated negative data for quality estimation of machine translation
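The abstract above does not specify how the synthetic negative examples are produced, so the following is only a minimal sketch of one plausible generation scheme; the particular error operations (token swap, deletion, duplication) and the function name are illustrative assumptions, not the paper's actual method.

```python
import random

def corrupt_translation(tokens, n_errors=2, seed=None):
    """Inject artificial errors into a token list to simulate a
    low-quality translation. The operations here (swap, delete,
    duplicate) are illustrative stand-ins for whatever error types
    the paper actually uses."""
    rng = random.Random(seed)
    tokens = list(tokens)
    for _ in range(n_errors):
        if len(tokens) < 2:
            break
        op = rng.choice(["swap", "delete", "duplicate"])
        i = rng.randrange(len(tokens) - 1)
        if op == "swap":
            tokens[i], tokens[i + 1] = tokens[i + 1], tokens[i]
        elif op == "delete":
            del tokens[i]
        else:  # duplicate a token in place
            tokens.insert(i, tokens[i])
    return tokens

good = "the cat sat on the mat".split()
bad = corrupt_translation(good, n_errors=2, seed=0)
```

Pairs of (good, bad) sentences generated this way could then serve as positive and negative QE training examples.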
d1774526
SRI and U.C. Berkeley have begun a cooperative effort to develop a new architecture for real-time implementation of spoken language systems (SLS). Our goal is to develop fast speech recognition algorithms, and supporting hardware capable of recognizing continuous speech from a bigram- or trigram-based 20,000-word vocabulary or a 1,000- to 5,000-word SLS.
Hardware for Hidden Markov-Model-Based, Large-Vocabulary Real-Time Speech Recognition
d14858666
This paper describes UIC-CSC, the entry we submitted for the Content Selection Challenge 2013. Our model consists of heuristic rules based on co-occurrences of predicates in the training data.
d8294974
We present a novel transition-based, greedy dependency parser which implements a flexible mix of bottom-up and top-down strategies. The new strategy allows the parser to postpone difficult decisions until the relevant information becomes available. The novel parser has a ∼12% error reduction in unlabeled attachment score over an arc-eager parser, with a slow-down factor of 2.8.
A Transition-Based Dependency Parser Using a Dynamic Parsing Strategy
d1371374
In order to capture rich language phenomena, neural machine translation models have to use a large vocabulary size, which requires high computing time and large memory usage. In this paper, we alleviate this issue by introducing a sentence-level or batch-level vocabulary, which is only a very small subset of the full output vocabulary. For each sentence or batch, we only predict the target words in its sentence-level or batch-level vocabulary. Thus, we reduce both the computing time and the memory usage. Our method simply takes into account the translation options of each word or phrase in the source sentence, and picks a very small target vocabulary for each sentence based on a word-to-word translation model or a bilingual phrase library learned from a traditional machine translation model. Experimental results on the large-scale English-to-French task show that our method achieves better translation performance by 1 BLEU point over the large vocabulary neural machine translation system of Jean et al. (2015).
Vocabulary Manipulation for Neural Machine Translation
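The per-sentence vocabulary selection described above can be sketched as follows. This is a simplified illustration under stated assumptions: `trans_table` stands in for the word-to-word translation model or phrase library, and `common_words` for a shortlist of frequent target words; none of these names come from the paper.

```python
def sentence_vocab(source_tokens, trans_table, common_words, k=10):
    """Build a small per-sentence target vocabulary: the top-k candidate
    translations of each source word, plus a shortlist of frequent
    target words. `trans_table` maps a source word to candidate target
    words sorted by translation probability."""
    vocab = set(common_words)
    for w in source_tokens:
        vocab.update(trans_table.get(w, [])[:k])
    return vocab

# toy French-to-English translation table (illustrative)
table = {"chat": ["cat", "chat"], "noir": ["black", "dark"]}
vocab = sentence_vocab(["chat", "noir"], table, common_words=["the", "a"], k=1)
# the output softmax would then be computed only over `vocab`
# instead of the full target vocabulary
```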
d252819225
We present a biomedical knowledge enhanced pre-trained language model for medicinal product vertical search. Following ELECTRA's replaced token detection (RTD) pre-training, we leverage a biomedical entity masking (EM) strategy to learn better contextual word representations. Furthermore, we propose a novel pre-training task, product attribute prediction (PAP), to inject product knowledge into the pre-trained language model efficiently by leveraging medicinal product databases directly. By sharing the parameters of PAP's transformer encoder with those of RTD's main transformer, these two pre-training tasks are jointly learned. Experiments demonstrate the effectiveness of the PAP task for pre-trained language models in the medicinal product vertical search scenario, which includes query-title relevance, query intent classification, and named entity recognition in queries.
A Domain Knowledge Enhanced Pre-Trained Language Model for Vertical Search: Case Study on Medicinal Products
d12080584
Coedition of a natural language text and its representation in some interlingual form seems the best and simplest way to share text revision across languages. For various reasons, UNL graphs are the best candidates in this context. We are developing a prototype where, in the simplest sharing scenario, naive users interact directly with the text in their language (L0), and indirectly with the associated graph. The modified graph is then sent to the UNL-L0 deconverter and the result shown. If it is satisfactory, the errors were probably due to the graph, not to the deconverter, and the graph is sent to deconverters in other languages. Versions in some other languages known by the user may be displayed, so that improvement sharing is visible and encouraging. As new versions are added with appropriate tags and attributes in the original multilingual document, nothing is ever lost, and cooperative work on a document becomes feasible. On the internal side, liaisons are established between elements of the text and the graph by using broadly available resources such as an L0-English or, better, an L0-UNL dictionary, a morphosyntactic parser of L0, and a canonical graph-to-tree transformation. Establishing a "best" correspondence between the "UNL-tree+L0" and the "MS-L0 structure", a lattice, may be done using the dictionary and trying to align the tree and the selected trajectory with as few crossing liaisons as possible. A central goal of this research is to merge approaches from pivot MT, interactive MT, and multilingual text authoring.
Coedition to share text revision across languages and improve MT a posteriori
d219302654
d9024948
We present a pairwise learning-to-rank approach to machine translation evaluation that learns to differentiate better from worse translations in the context of a given reference. We integrate several layers of linguistic information encapsulated in tree-based structures, making use of both the reference and the system output simultaneously, thus bringing our ranking closer to how humans evaluate translations. Most importantly, instead of deciding upfront which types of features are important, we use the learning framework of preference re-ranking kernels to learn the features automatically. The evaluation results show that learning in the proposed framework yields better correlation with humans than computing the direct similarity over the same type of structures. Also, we show that our structural kernel learning (SKL) approach can serve as a general framework for MT evaluation, in which syntactic and semantic information can be naturally incorporated.
Learning to Differentiate Better from Worse Translations
d227231101
d51878592
In this paper, we provide a preliminary comparison of statistical machine translation (SMT) and neural machine translation (NMT) for English→Irish in the fixed domain of public administration. We discuss the challenges for SMT and NMT of a less-resourced language such as Irish, and show that while an out-of-the-box NMT system may not fare quite as well as our tailor-made domain-specific SMT system, the future may still be promising for EN→GA NMT.
SMT versus NMT: Preliminary comparisons for Irish
d33081390
This is a report on a study of the productivity of using machine translation (hereafter MT) as part of a larger commercial translation project. A project group at the Faculty of History and Philosophy at the University of Bergen carried out the so-called Project PHILENTRA in the period 1988-90, commissioned by Phillips Petroleum Company Norway. The machine part of the assignment was carried out using Weidner's MacroCAT system for English-to-Norwegian. This program was developed at UiB in 1986-87 (Project ENTRA) on the basis of a corresponding English-German version (cf. Brekke and Skarsten 1987). The ENTRA system was developed on a DEC Microvax II in cooperation with Digital Equipment Corporation, Norway, and was run on that machine during the project period. Scope: The report is oriented towards the goal of gaining empirical knowledge about the role existing MT software can play within a more comprehensive translation project with real framework factors (such as cost-effectiveness, time pressure, quality control, etc.). The study is a concrete confrontation between an operational, practically oriented MT system and the linguistic, economic and social realities it must be able to handle. This report should be seen as feedback from a front line that matters if we believe that what we as language specialists do is to have relevance for the society we live in.
What follows is thus aimed at practical exploration rather than theoretical explanation of the factors that come into play when MT is used as a translation tool. Project design: The overall architecture of Project PHILENTRA is shown in the flowchart in Appendix 1 (with minor adjustments, cf. below). The texts came from the client as a printed original and as an ASCII file on diskette. Each sub-document (chapter) went in paper copy to a translator for term-ekser-
Operasjonell maskinomsetjing: Kvar møter vi veggen? [Operational machine translation: Where do we hit the wall?]
d5446954
Automatic metaphor detection usually relies on various features, incorporating e.g. selectional preference violations or concreteness ratings to detect metaphors in text. These features rely on background corpora, hand-coded rules or additional, manually created resources, all specific to the language the system is being used on. We present a novel approach to metaphor detection using a neural network in combination with word embeddings, a method that has already proven to yield promising results for other natural language processing tasks. We show that foregoing manual feature engineering by solely relying on word embeddings trained on large corpora produces comparable results to other systems, while removing the need for additional resources.
d219308578
This paper describes a natural language parsing algorithm for unrestricted text which uses a probability-based scoring function to select the "best" parse of a sentence. The parser, Pearl, is a time-asynchronous bottom-up chart parser with Earley-type top-down prediction which pursues the highest-scoring theory in the chart, where the score of a theory represents the extent to which the context of the sentence predicts that interpretation. This parser differs from previous attempts at stochastic parsers in that it uses a richer form of conditional probabilities based on context to predict likelihood. Pearl also provides a framework for incorporating the results of previous work in part-of-speech assignment, unknown word models, and other probabilistic models of linguistic features into one parsing tool, interleaving these techniques instead of using the traditional pipeline architecture. In preliminary tests, Pearl has been successful at resolving part-of-speech and word (in speech processing) ambiguity, determining categories for unknown words, and selecting correct parses first using a very loosely fitting covering grammar.
Pearl: A Probabilistic Chart Parser
d2131413
We propose a generic method to perform lexical disambiguation in lexicalized grammatical formalisms. It relies on dependency constraints between words. The soundness of the method is due to invariant properties of the parsing in a given grammar that can be computed statically from the grammar.
Dependency Constraints for Lexical Disambiguation
d2276859
With the overwhelming amount of biological knowledge stored in free text, natural language processing (NLP) has received much attention recently to make the task of managing information recorded in free text more feasible. One requirement for most NLP systems is the ability to accurately recognize biological entity terms in free text and the ability to map these terms to corresponding records in databases. Such a task is called biological named entity tagging. In this paper, we present a system that automatically constructs a protein entity dictionary, which contains gene or protein names associated with UniProt identifiers, using online resources. The system can run periodically to always keep up to date with these online resources. Using online resources that were available on Dec. 25, 2004, we obtained 4,046,733 terms for 1,640,082 entities. The dictionary can be accessed from the following website:
Dynamically Generating a Protein Entity Dictionary Using Online Resources
d53092701
We describe our systems and results in the type-level low-resource setting of the CoNLL-SIGMORPHON 2018 Shared Task on Universal Morphological Reinflection. We test non-neural transduction models, as well as more recent neural methods. We also investigate the effect of leveraging unannotated corpora to improve the performance of selected methods. Our best system obtains the highest accuracy on 34 out of 103 languages.
Combining Neural and Non-Neural Methods for Low-Resource Morphological Reinflection
d7392978
Lacking standardized extrinsic evaluation methods for vector representations of words, the NLP community has relied heavily on word similarity tasks as a proxy for intrinsic evaluation of word vectors. Word similarity evaluation, which correlates the distance between vectors and human judgments of "semantic similarity" is attractive, because it is computationally inexpensive and fast. In this paper we present several problems associated with the evaluation of word vectors on word similarity datasets, and summarize existing solutions. Our study suggests that the use of word similarity tasks for evaluation of word vectors is not sustainable and calls for further research on evaluation methods.
Problems With Evaluation of Word Embeddings Using Word Similarity Tasks
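The word-similarity evaluation protocol criticized above can be made concrete in a few lines: compute cosine similarity between word vectors for each pair, then correlate the model scores with human judgments. The toy vectors and human ratings below are made-up numbers for illustration, and the hand-rolled Spearman ignores tie correction for simplicity.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def spearman(xs, ys):
    """Spearman rank correlation (no tie correction, for illustration)."""
    def ranks(vals):
        order = sorted(range(len(vals)), key=lambda i: vals[i])
        r = [0.0] * len(vals)
        for rank, i in enumerate(order):
            r[i] = float(rank)
        return r
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# toy word vectors and human similarity judgments (made-up numbers)
vecs = {"cat": [1.0, 0.1], "dog": [0.9, 0.2], "car": [0.1, 1.0]}
pairs = [("cat", "dog"), ("cat", "car"), ("dog", "car")]
human = [9.0, 2.0, 2.5]
model = [cosine(vecs[a], vecs[b]) for a, b in pairs]
rho = spearman(model, human)
```

The paper's point is that this single correlation number hides many methodological problems, not that the computation itself is hard.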
d249204518
This project aimed at extending the test sets of the MuST-C speech translation (ST) corpus with new reference translations. The new references were collected from professional post-editors working on the output of different ST systems for three language directions: English-German/Italian/Spanish. In this paper, we describe how the data were collected and how they are distributed. As an evidence of their usefulness, we also summarize the findings of the first comparative evaluation of cascade and direct ST approaches, which was carried out relying on the collected data. The project was partially funded by the European Association for Machine Translation (EAMT) through its 2020 Sponsorship of Activities programme.
Extending the MuST-C Corpus for a Comparative Evaluation of Speech Translation Technology
d15370134
In recent years, social media has become a customer touch-point for the business functions of marketing, sales and customer service. We aim to show that intention analysis might be useful to these business functions and that it can be performed effectively on short texts (at the granularity level of a single sentence). We demonstrate a scheme of categorization of intentions that is amenable to automation using simple machine learning techniques that are language-independent. We discuss the grounding that this scheme of categorization has in speech act theory. In the demonstration we go over a number of usage scenarios in an attempt to show that the use of automatic intention detection tools would benefit the business functions of sales, marketing and service. We also show that social media can be used not just to convey pleasure or displeasure (that is, to express sentiment) but also to discuss personal needs and to report problems (to express intentions). We evaluate methods for automatically discovering intentions in text, and establish that it is possible to perform intention analysis on social media with an accuracy of 66.97% ± 0.10%.
Intention Analysis for Sales, Marketing and Customer Service
d220379534
d232428217
We describe Verse by Verse, our experiment in augmenting the creative process of writing poetry with an AI. We have created a group of AI poets, styled after various American classic poets, that can offer generated lines of verse as suggestions while a user is composing a poem. In this paper, we describe the underlying system that offers these suggestions. This includes a generative model, which is tasked with generating a large corpus of lines of verse offline that are then stored in an index, and a dual-encoder model that is tasked with recommending the next possible set of verses from our index given the previous line of verse.
Augmenting Poetry Composition with Verse by Verse
d172133084
Building large datasets annotated with semantic information, such as FrameNet, is an expensive process. Consequently, such resources are unavailable for many languages and specific domains. This problem can be alleviated by using unsupervised approaches to induce the frames evoked by a collection of documents. That is the objective of the second task of SemEval 2019, which comprises three subtasks: clustering of verbs that evoke the same frame and clustering of arguments into both frame-specific slots and semantic roles. We approach all the subtasks by applying a graph clustering algorithm on contextualized embedding representations of the verbs and arguments. Using such representations is appropriate in the context of this task, since they provide cues for word-sense disambiguation. Thus, they can be used to identify different frames evoked by the same words. Using this approach we were able to outperform all of the baselines reported for the task on the test set in terms of Purity F1, as well as in terms of BCubed F1 in most cases.
L2F/INESC-ID at SemEval-2019 Task 2: Unsupervised Lexical Semantic Frame Induction using Contextualized Word Representations
d248780304
Simultaneous translation is a task that requires starting translation before the speaker has finished speaking, so we face a trade-off between latency and accuracy. In this work, we focus on prefix-to-prefix translation and propose a method to extract alignment between bilingual prefix pairs. We use the alignment to segment a streaming input and fine-tune a translation model. The proposed method demonstrated higher BLEU than those of baselines in low latency ranges in our experiments on the IWSLT simultaneous translation benchmark.
Simultaneous Neural Machine Translation with Prefix Alignment
d370938
Microblogging services have brought users to a new era of knowledge dissemination and information seeking. However, the large volume and multi-aspect of messages hinder the ability of users to conveniently locate the specific messages that they are interested in. While many researchers wish to employ traditional text classification approaches to effectively understand messages on microblogging services, the limited length of the messages prevents these approaches from being employed to their full potential. To tackle this problem, we propose a novel semi-supervised learning scheme to seamlessly integrate the external web resources to compensate for the limited message length. Our approach first trains a classifier based on the available labeled data as well as some auxiliary cues mined from the web, and probabilistically predicts the categories for all unlabeled data. It then trains a new classifier using the labels for all messages and the auxiliary cues, and iterates the process to convergence. Our approach not only greatly reduces the time-consuming and labor-intensive labeling process, but also deeply exploits the hidden information from unlabeled data and related text resources. We conducted extensive experiments on two real-world microblogging datasets. The results demonstrate the effectiveness of the proposed approaches which produce promising performance as compared to state-of-the-art methods.
A Semi-Supervised Bayesian Network Model for Microblog Topic Classification
d535032
Existing opinion analysis techniques rely on clues within the sentence, focusing on the sentiment analysis task itself. However, the sentiment analysis task is not isolated from other NLP tasks (co-reference resolution, entity linking, etc.); rather, these tasks can benefit each other. In this paper, we define dependencies between sentiment analysis and other tasks, and express these dependencies as first-order logic rules, regardless of the representations of the different tasks. The conceptual framework proposed in this paper, which uses such dependency rules as constraints, aims at exploiting information outside the sentence and outside the document to improve sentiment analysis. Further, the framework allows exceptions to the rules.
How can NLP Tasks Mutually Benefit Sentiment Analysis? A Holistic Approach to Sentiment Analysis
d16608728
The focus of this paper is on two types of shortening observed in recent Japanese loanwords and on the trochaic shortening in Fijian. I will argue that the Japanese shortenings take place when a metrically unstable binary foot (extrametrical foot) becomes a stray mora so as to be accommodated into one of the two proposed surface ternary foot structures whereas Fijian Trochaic Shortening occurs when a metrically unstable mora (extrametrical mora) is obligatorily accommodated into the other proposed surface ternary foot structure. I will conclude that the distinct shortening phenomena observed in the two languages can be accounted for simply on the ground of parametrical differences.
Vowel Shortening and Surface Ternary Feet -Ternary Rhythm Through Strictly Binary Footing
d30640
We study the multi-tweet summarization task, which aims to find representative tweets from a given set of tweets. Multi-tweet summarization allows people to quickly grasp the essential meaning of a large number of tweets. It can also be used as a pre-processing component for information extraction tasks on tweets. The challenge of this task lies in computing a tweet's salience score with little information in a single tweet. We propose a graph-based multi-tweet summarization system that incorporates social network features, which make up for the information shortage in a tweet. Another distinguishing feature of our system is that tweet readability and user diversity are considered. We evaluate our system on a manually annotated dataset, and show that our system outperforms the state-of-the-art baseline. We further evaluate our method in a real scenario of summarization of Twitter search results and demonstrate its effectiveness.
Graph-based Multi-tweet Summarization Using Social Signals
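A toy version of graph-based salience scoring with a social signal might look like the sketch below. The similarity measure (Jaccard word overlap) and the single `social_weight` multiplier are illustrative assumptions standing in for the paper's similarity graph and richer social network features.

```python
def summarize_tweets(tweets, social_weight, n=1):
    """Toy graph-based salience: a tweet's score is its total word
    overlap (Jaccard) with the other tweets, i.e. its centrality in a
    similarity graph, scaled by a social signal such as a normalized
    retweet or follower count. Illustrative stand-in for the paper's
    full model (which also handles readability and user diversity)."""
    toks = [set(t.lower().split()) for t in tweets]

    def sim(a, b):
        return len(a & b) / len(a | b) if a | b else 0.0

    scores = []
    for i, ti in enumerate(toks):
        centrality = sum(sim(ti, tj) for j, tj in enumerate(toks) if j != i)
        scores.append(centrality * social_weight[i])
    ranked = sorted(range(len(tweets)), key=lambda i: scores[i], reverse=True)
    return [tweets[i] for i in ranked[:n]]

tweets = ["big storm hits the city", "storm hits city hard", "my lunch was great"]
top = summarize_tweets(tweets, social_weight=[1.0, 1.0, 1.0], n=1)
```

Off-topic tweets get low centrality and drop out of the summary; boosting a tweet's `social_weight` promotes it.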
d182952765
Asynchronous stochastic gradient descent (SGD) converges poorly for Transformer models, so synchronous SGD has become the norm for Transformer training. This is unfortunate because asynchronous SGD is faster at raw training speed since it avoids waiting for synchronization. Moreover, the Transformer model is the basis for state-of-the-art models for several tasks, including machine translation, so training speed matters. To understand why asynchronous SGD under-performs, we blur the lines between asynchronous and synchronous methods. We find that summing several asynchronous updates, rather than applying them immediately, restores convergence behavior. With this method, the Transformer attains the same BLEU score 1.36 times as fast.
Making Asynchronous Stochastic Gradient Descent Work for Transformers
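The core idea of summing several asynchronous updates before applying them can be sketched in a few lines. This toy version treats gradients as plain lists arriving from workers and ignores staleness modeling; the function name and the scalar setup are illustrative, not the paper's implementation.

```python
def sgd_with_summed_updates(params, grads_stream, lr=0.1, accumulate=4):
    """Instead of applying each (possibly stale) asynchronous gradient
    immediately, buffer `accumulate` of them and apply their sum as one
    update, mimicking a synchronous step while workers never wait."""
    buffer = [0.0] * len(params)
    count = 0
    for grad in grads_stream:
        for i, g in enumerate(grad):
            buffer[i] += g
        count += 1
        if count == accumulate:
            params = [p - lr * b for p, b in zip(params, buffer)]
            buffer = [0.0] * len(params)
            count = 0
    return params

# four workers each contribute the gradient [1.0]; with accumulate=4
# the parameter moves once, by lr * (sum of the four gradients)
p = sgd_with_summed_updates([0.0], [[1.0]] * 4, lr=0.1, accumulate=4)
```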
d237702957
Argumentation and debating are fundamental capabilities of human intelligence. They are essential for a wide range of everyday activities that involve reasoning, decision making or persuasion. Computational Argumentation is defined as "the application of computational methods for analyzing and synthesizing argumentation and human debate". Over the last few years, this field has been rapidly evolving, as evident by the growing research community, and the increasing number of publications in top NLP and AI conferences. The tutorial focuses on Debating Technologies, a sub-field of computational argumentation defined as "computational technologies developed directly to enhance, support, and engage with human debating". A recent milestone in this field is Project Debater, which was revealed in 2019 as the first AI system that can debate human experts on complex topics (https://www.research.ibm.com/artificial-intelligence/project-debater/). Project Debater is the third in the series of IBM Research AI's grand challenges, following Deep Blue and Watson. It has been developed for over six years by a large team of researchers and engineers, and its live demonstration in February 2019 received massive media attention. This research effort has resulted in more than 50 scientific papers to date, and many datasets freely available for research purposes. In this tutorial, we aim to answer the question: "what does it take to build a system that can debate humans?" Our main focus is on the scientific problems such a system must tackle. Some of these intriguing problems include argument retrieval for a given debate topic, argument quality assessment and stance classification, identifying relevant principled arguments to be used in conjunction with corpus-mined arguments, organizing the arguments into a compelling narrative, recognizing the arguments made by the human opponent, and making a rebuttal.
For each of these problems we will present relevant scientific work from various research groups as well as our own. Many of the underlying capabilities of Project Debater have been made freely available for academic research, and the tutorial will include a detailed explanation of how to use and leverage these tools. A complementary goal of the tutorial is to provide a holistic view of a debating system. Such a view is largely missing in the academic literature, where each paper typically addresses a specific problem in isolation. We present a complete pipeline of a debating system, and discuss the information flow and the interaction between the various components. We will also share our experience and lessons learned from developing such a complex, large-scale NLP system. Finally, the tutorial will discuss practical applications and future challenges of debating technologies.
Advances in Debating Technologies: Building AI That Can Debate Humans
d36483408
One of the key characteristics of any summary is that it must be concise. To achieve this, the content of the summary (1) must be focused on the key events, and (2) should leave out any information that the audience can infer on their own. We have recently begun a project on
CONVEYING IMPLICIT CONTENT IN NARRATIVE SUMMARIES
d236917195
d17872035
The paper motivates a strategy for identification and annotation of derivational relations in the Bulgarian wordnet that aims at coping with the complex morphology of the language in an elegant way. Our method involves transfer of the Princeton WordNet (morpho)semantic relations into the Bulgarian wordnet, at the level of the synset, and further detection of derivational relations between literals in Bulgarian. Derivational relations have been annotated to reflect the complexity of Bulgarian morphology. Introduced literal relations improve the consistency and employability of the wordnet.
Coping with Derivation in the Bulgarian Wordnet
d236459821
A current open question in natural language processing is to what extent language models, which are trained with access only to the form of language, are able to capture the meaning of language. In many cases, meaning constrains form in consistent ways. This raises the possibility that some kinds of information about form might reflect meaning more transparently than others. The goal of this study is to investigate under what conditions we can expect meaning and form to covary sufficiently, such that a language model with access only to form might nonetheless succeed in emulating meaning. Focusing on propositional logic, we generate training corpora using a variety of motivated constraints, and measure a distributional language model's ability to differentiate logical symbols (¬, ∧, ∨). Our findings are largely negative: none of our simulated training corpora result in models which definitively differentiate meaningfully different symbols (e.g., ∧ vs. ∨), suggesting a limitation to the types of semantic signals that current models are able to exploit.
AND does not mean OR: Using Formal Languages to Study Language Models' Representations
d18070886
The relationship between how people describe objects and when they choose to point is complex and likely to be influenced by factors related to both perceptual and discourse context. In this paper, we explore these interactions using machine learning on a dialogue corpus, to identify multimodal referential strategies that can be used in automatic multimodal generation. We show that the decision to use a pointing gesture depends on features of the accompanying description (especially whether it contains spatial information), and on visual properties, especially distance or separation of a referent from its previous referent.
Learning when to point: A data-driven approach
d16257389
We present a joint model for predicate argument alignment. We leverage multiple sources of semantic information, including temporal ordering constraints between events. These are combined in a max-margin framework to find a globally consistent view of entities and events across multiple documents, which leads to improvements over a very strong local baseline.
Predicate Argument Alignment using a Global Coherence Model
d39833001
We describe SimpleNLG-ES, an adaptation of the SimpleNLG realization library for the Spanish language. Our implementation is based on the bilingual English-French SimpleNLG-EnFr adaptation. The library has been tested using a battery of examples that ensure that the most common syntax, morphology and orthography rules for Spanish are met. The library is currently being used in three different projects for the development of data-to-text systems in the meteorological, statistical data information, and business intelligence application domains.
Adapting SimpleNLG to Spanish
d10499493
We describe the implementation of a Word Sense Disambiguation (WSD) tool in a Dutch Text-to-Pictograph translation system, which converts textual messages into sequences of pictographic images. The system is used in an online platform for Augmentative and Alternative Communication (AAC). In the original translation process, the appropriate sense of a word was not disambiguated before converting it into a pictograph. This often resulted in incorrect translations. The implementation of a WSD tool provides a better semantic understanding of the input messages.
Improving Text-to-Pictograph Translation Through Word Sense Disambiguation
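The abstract does not spell out which WSD algorithm the tool implements; a classic dictionary-based baseline for this kind of pipeline is simplified Lesk, sketched below with a toy sense inventory (the sense ids and glosses are illustrative, not taken from the paper or its lexical resources):

```python
def lesk_disambiguate(context_words, senses):
    """Pick the sense whose gloss overlaps most with the context.

    `senses` maps a sense id to its gloss (a string); a trivial
    stand-in for a real lexical resource such as WordNet or Cornetto.
    """
    context = set(w.lower() for w in context_words)
    best_sense, best_overlap = None, -1
    for sense_id, gloss in senses.items():
        overlap = len(context & set(gloss.lower().split()))
        if overlap > best_overlap:
            best_sense, best_overlap = sense_id, overlap
    return best_sense

# Example: "bank" in a financial context.
senses = {
    "bank.n.01": "a financial institution that accepts deposits",
    "bank.n.02": "sloping land beside a body of water",
}
print(lesk_disambiguate(["deposits", "money", "account"], senses))  # bank.n.01
```

A real pictograph pipeline would run such a step before the word-to-pictograph lookup, so that the selected sense (not the surface form) determines the image.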
d17416305
This paper presents an analysis of indirect passives in Japanese in terms of event structure and qualia structure proposed in the framework of the generative lexicon (Pustejovsky 1995). On the assumption that the event structure of the indirect passive construction is based on the causative structure, the present analysis accounts for the adversative interpretation of indirect passive sentences, the selection restriction on verbs, and the obligatory presence of the adjunct phrase.
On the Event Structure of Indirect Passive in Japanese
d7031199
We present an empirical study on the use of semantic information for Concept Segmentation and Labeling (CSL), which is an important step for semantic parsing. We represent the alternative analyses output by a state-of-the-art CSL parser with tree structures, which we rerank with a classifier trained on two types of semantic tree kernels: one processing structures built with words, concepts and Brown clusters, and another one using semantic similarity among the words composing the structure. The results on a corpus from the restaurant domain show that our semantic kernels exploiting similarity measures outperform state-of-the-art rerankers.
Semantic Kernels for Semantic Parsing
d11497470
A database which provides information about bacteria and their habitats in a comprehensive and normalized way is crucial for applied microbiology studies. Having this information spread through textual resources such as scientific articles and web pages leads to a need for automatically detecting bacteria and habitat entities in text, semantically tagging them using ontologies, and finally extracting the events among them. These are the challenges set forth by the Bacteria Biotopes Task of the BioNLP Shared Task 2016. This paper describes a system for habitat and bacteria entity normalization through the OntoBiotope ontology and the NCBI taxonomy, respectively. The system, which obtained promising results on the shared task data set, utilizes basic information retrieval techniques.
Ontology-based Categorization of Bacteria and Habitat Entities using Information Retrieval Techniques
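The abstract says the system uses "basic information retrieval techniques" to normalize habitat mentions against ontology labels, without giving details; one plausible minimal instance is ranking labels by token overlap (Dice coefficient), as in this hedged sketch (the labels below are invented, not real OntoBiotope entries):

```python
def normalize_entity(mention, ontology_labels):
    """Return the ontology label with the highest token-overlap
    (Dice coefficient) against the entity mention."""
    m = set(mention.lower().split())
    def dice(label):
        l = set(label.lower().split())
        return 2 * len(m & l) / (len(m) + len(l)) if m or l else 0.0
    return max(ontology_labels, key=dice)

labels = ["soil", "agricultural soil", "human gut", "marine water"]
print(normalize_entity("soil of agricultural fields", labels))  # agricultural soil
```

A production normalizer would add stemming, synonym expansion from the ontology, and tf-idf weighting, but the ranking skeleton is the same.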
d2276802
Implicative verbs (e.g. manage) entail their complement clauses, while non-implicative verbs (e.g. want) do not. For example, while managing to solve the problem entails solving the problem, no such inference follows from wanting to solve the problem. Differentiating between implicative and non-implicative verbs is therefore an essential component of natural language understanding, relevant to applications such as textual entailment and summarization. We present a simple method for predicting implicativeness which exploits known constraints on the tense of implicative verbs and their complements. We show that this yields an effective, data-driven way of capturing this nuanced property in verbs.
Tense Manages to Predict Implicative Behavior in Verbs
d219301015
d2723592
This article describes a strategy based on a naive-bayes classifier for detecting the polarity of English tweets. The experiments have shown that the best performance is achieved by using a binary classifier between just two sharp polarity categories: positive and negative. In addition, in order to detect tweets with and without polarity, the system makes use of a very basic rule that searches for polarity words within the analysed tweets/texts. When the classifier is provided with a polarity lexicon and multiwords it achieves 63% F-score.
Citius: A Naive-Bayes Strategy for Sentiment Analysis on English Tweets
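The abstract names the model family (a naive Bayes classifier over two polarity classes) but not its implementation; a minimal multinomial naive Bayes sketch with add-one smoothing, trained on invented toy tweets, might look like:

```python
import math
from collections import Counter

class NaiveBayesPolarity:
    """Binary (positive/negative) multinomial naive Bayes
    with add-one (Laplace) smoothing over unigrams."""

    def fit(self, texts, labels):
        self.counts = {l: Counter() for l in set(labels)}
        self.docs = Counter(labels)
        for text, label in zip(texts, labels):
            self.counts[label].update(text.lower().split())
        self.vocab = set(w for c in self.counts.values() for w in c)
        return self

    def predict(self, text):
        def log_score(label):
            c = self.counts[label]
            total = sum(c.values())
            score = math.log(self.docs[label] / sum(self.docs.values()))
            for w in text.lower().split():
                score += math.log((c[w] + 1) / (total + len(self.vocab)))
            return score
        return max(self.docs, key=log_score)

# Toy training data (invented, not the paper's corpus):
clf = NaiveBayesPolarity().fit(
    ["great movie loved it", "awful boring film",
     "loved the acting", "boring and awful"],
    ["pos", "neg", "pos", "neg"])
print(clf.predict("loved this great film"))  # pos
```

The paper's "basic rule" for neutral detection would sit in front of this classifier, checking a polarity lexicon before the Bayes decision is invoked.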
d6882058
We address the problem of interactively learning perceptually grounded word meanings in a multimodal dialogue system. We design a semantic and visual processing system to support this and illustrate how they can be integrated. We then focus on comparing the performance (Precision, Recall, F1, AUC) of three state-of-the-art attribute classifiers for the purpose of interactive language grounding (MLKNN, DAP, and SVMs), on the aPascal-aYahoo datasets. In prior work, results were presented for object classification using these methods for attribute labelling, whereas we focus on their performance for attribute labelling itself. We find that while these methods can perform well for some of the attributes (e.g. head, ears, furry) none of these models has good performance over the whole attribute set, and none supports incremental learning. This leads us to suggest directions for future work.
Comparing attribute classifiers for interactive language grounding
d248780134
Text-to-SQL parsers map natural language questions to programs that are executable over tables to generate answers, and are typically evaluated on large-scale datasets like SPIDER (Yu et al., 2018). We argue that existing benchmarks fail to capture a certain out-of-domain generalization problem that is of significant practical importance: matching domain-specific phrases to composite operations over columns. To study this problem, we propose a synthetic dataset and a re-purposed train/test split of the SQUALL dataset (Shi et al., 2020) as new benchmarks to quantify domain generalization over column operations. Our results indicate that existing state-of-the-art parsers struggle in these benchmarks. We propose to address this problem by incorporating prior domain knowledge by preprocessing table schemas, and design a method that consists of two components: schema expansion and schema pruning. This method can be easily applied to multiple existing base parsers, and we show that it significantly outperforms baseline parsers on this domain generalization problem, boosting the underlying parsers' overall performance by up to 13.8% relative accuracy gain (5.1% absolute) on the new SQUALL data split. * This work was performed during a research internship at Microsoft Semantic Machines.
Bridging the Generalization Gap in Text-to-SQL Parsing with Schema Expansion
d17368365
WHAT IS DISCOURSE GENERATION? The task of discourse generation is to produce multisentential text in natural language which (when heard or read) produces effects (informing, motivating, etc.) and impressions (conciseness, correctness, ease of reading, etc.) which are appropriate to a need or goal held by the creator of the text. Because even little children can produce multisentential text, the task of discourse generation appears deceptively easy. It is actually extremely complex, in part because it usually involves many different kinds of knowledge. The skilled writer must know the subject matter, the beliefs of the reader and his own reasons for writing. He must also know the syntax, semantics, inferential patterns, text structures and words of the language. It would be complex enough if these were all independent bodies of knowledge, independently employed. Unfortunately, they are all interdependent in intricate ways. The use of each must be coordinated with all of the others. For Artificial Intelligence, discourse generation is an unsolved problem. There have been only token efforts to date, and no one has addressed the whole problem. Still, those efforts reveal the nature of the task, what makes it difficult and how the complexities can be controlled. In comparing two AI discourse generators here we can do no more than suggest opportunities and attractive options for future exploration. Hopefully we can convey the benefits of hindsight without too much detailed description of the individual systems. We describe them only in terms of a few of the techniques which they employ, partly because these techniques seem more valuable than the system designs in which they happen to have been used. THE TWO SYSTEMS The systems which we study here are PROTEUS, by Anthony Davey at Edinburgh [Davey 79], and KDS by Mann and Moore at ISI [Mann and Moore 80]. As we will see, each is severely limited and idiosyncratic in scope and technique.
Comparison of their individual skills reveals some technical opportunities. Why do we study these systems rather than others? Both of them represent recent developments, in Davey's case, recently published. Neither of them has the appearance of following a hand-drawn map or some other humanly-produced sequential presentation. Thus their performance represents capabilities of the programs more than capabilities of the programmer. Also, they are relatively unfamiliar to the AI audience. Perhaps most importantly, they have written some of the best machine-produced discourse of the existing art. First we identify particular techniques in each system which contribute strongly to the quality of the resulting text. Then we compare the two systems, discussing their common failings and the possibilities for creating a system having the best of both. DAVEY'S PROTEUS PROTEUS creates commentary on games of tic-tac-toe (noughts and crosses). Despite the apparent simplicity of this task, the possibilities of producing text are rich and diverse. (See the example in the Appendix.)
TWO DISCOURSE GENERATORS
d212628339
d17383842
Recent Results from the ARM Continuous Speech Recognition Project
d70033909
Book Reviews Knowledge Representation: Logical, Philosophical, and Computational Foundations
d52140674
We propose a new approach to stylometric analysis combining lexical and textual information, but without annotation or other pre-processing. In particular, our study makes use of Chinese tone motifs and word length motifs automatically extracted from unannotated texts. The proposed approach is linked-data based in nature, as tone and word-length information is extracted from a lexicon and mapped to the text. Support vector machines and random forests were used to establish the classification models for author differentiation. Based on a comparative study of the classification results of different models, we conclude that the combination of word-final tone motifs, segment-final motifs and word length motifs provides the best outcome and hence is the best model.
Stylometric Studies based on Tone and Word Length Motifs
d219301332
Semantic Structures
d8018043
The paper describes experiments on sentiment classification of microblog messages using an architecture allowing general machine learning classifiers to be combined either sequentially to form a multi-step classifier, or in parallel, creating an ensemble classifier. The system achieved very competitive results in the shared task on sentiment analysis in Twitter, in particular on non-Twitter social media data, that is, input it was not specifically tailored to.
NTNUSentEval at SemEval-2016 Task 4: Combining General Classifiers for Fast Twitter Sentiment Analysis
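The abstract distinguishes two combination modes, sequential (multi-step) and parallel (ensemble), without naming the underlying classifiers. Both modes can be sketched generically; the stand-in classifiers below are hypothetical, purely to make the sketch runnable:

```python
from collections import Counter

def ensemble_vote(classifiers, x):
    """Parallel combination: majority vote over classifier outputs."""
    return Counter(clf(x) for clf in classifiers).most_common(1)[0][0]

def cascade(stages, x, default="neutral"):
    """Sequential combination: each stage returns a label, or None
    to defer the decision to the next stage."""
    for stage in stages:
        label = stage(x)
        if label is not None:
            return label
    return default

# Toy stand-in classifiers (hypothetical):
has_pos = lambda x: "positive" if "good" in x else None
has_neg = lambda x: "negative" if "bad" in x else None
print(cascade([has_pos, has_neg], "a bad day"))  # negative
```

In a real system each stage would be a trained classifier (e.g. deciding subjective vs. objective first, then polarity), and the ensemble would vote over several such pipelines.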
d1494871
We introduce three new techniques for statistical language models: extension modeling, nonmonotonic contexts, and the divergence heuristic. Together these techniques result in language models that have few states, even fewer parameters, and low message entropies.
New Techniques for Context Modeling
d7476008
Efficient wide-coverage parsing is integral to large-scale NLP applications. Unfortunately, parsers for linguistically motivated formalisms, e.g. HPSG and TAG, are often too inefficient for these applications. This paper describes two modifications to the standard CKY chart parsing algorithm used in the Clark and Curran (2006) Combinatory Categorial Grammar (CCG) parser. The first modification extends the tight integration of the supertagger and parser, so that individual supertags can be added to the chart, which is then repaired rather than rebuilt. The second modification adds constraints to the chart that restrict which constituents can combine. Parsing speed is improved by 30-35% without a significant accuracy penalty, and with a small increase in coverage, when both of these modifications are used.
Efficient Combinatory Categorial Grammar Parsing
d51876475
Recent studies in the field of text-based personality recognition experiment with different languages, feature extraction techniques, and machine learning algorithms to create better and more accurate models; however, little focus is placed on exploring the language use of a group of individuals defined by nationality. Individuals of the same nationality share certain practices and communicate certain ideas that can become embedded into their natural language. Many nationals are also not limited to speaking just one language, such as how Filipinos speak Filipino and English, the two national languages of the Philippines. The addition of several regional/indigenous languages, along with the commonness of codeswitching, allow for a Filipino to have a rich vocabulary. This presents an opportunity to create a text-based personality model based on how Filipinos speak, regardless of the language they use. To do so, data was collected from 250 Filipino Twitter users. Different combinations of data processing techniques were experimented upon to create personality models for each of the Big Five. The results for both regression and classification show that Conscientiousness is consistently the easiest trait to model, followed by Extraversion. Classification models for Agreeableness and Neuroticism had subpar performances, but performed better than those of Openness. An analysis on personality trait score representation showed that classifying extreme outliers generally produce better results for all traits except for Neuroticism and Openness.
Modeling Personality Traits of Filipino Twitter Users
d18198045
We describe a domain-specific method of adapting conditional random fields (CRFs) to morphosyntactic tagging of highly-inflectional languages. The solution involves extending CRFs with additional, position-wise restrictions on the output domain, which are used to impose consistency between the modeled label sequences and morphosyntactic analysis results both at the level of decoding and, more importantly, in parameters estimation process. We decompose the problem of morphosyntactic disambiguation into two consecutive stages of the context-sensitive morphosyntactic guessing and the disambiguation proper. The division helps in designing well-adjusted, CRF-based methods for both tasks, which in combination constitute Concraft, a highly accurate tagging system for the Polish language available under the 2-clause BSD license. Evaluation on the National Corpus of Polish shows that our solution significantly outperforms other state-of-the-art taggers for Polish -Pantera, WMBT and WCRFT -especially in terms of the accuracy measured with respect to unknown words.
Harnessing the CRF complexity with domain-specific constraints. The case of morphosyntactic tagging of a highly inflected language
d6436356
We demonstrate a robotic agent in a 3D virtual environment that understands human navigational instructions. Such an agent needs to select actions based on not only instructions but also situations. It is also expected to immediately react to the instructions. Our agent incrementally understands spoken instructions and immediately controls a mobile robot based on the incremental understanding results and situation information such as the locations of obstacles and moving history. It can be used as an experimental system for collecting human-robot interactions in dynamically changing situations.
A Robotic Agent in a Virtual Environment that Performs Situated Incremental Understanding of Navigational Utterances
d11066688
In this paper we present a proof-of-concept implementation of Neural Theorem Provers (NTPs), end-to-end differentiable counterparts of discrete theorem provers that perform first-order inference on vector representations of symbols using function-free, possibly parameterized, rules. As such, NTPs follow a long tradition of neural-symbolic approaches to automated knowledge base inference, but differ in that they are differentiable with respect to representations of symbols in a knowledge base and can thus learn representations of predicates, constants, as well as rules of predefined structure. Furthermore, they still allow us to incorporate domain knowledge provided as rules. The NTP presented here is realized via a differentiable version of the backward chaining algorithm. It operates on substitution representations and is able to learn complex logical dependencies from training facts of small knowledge bases.
Learning Knowledge Base Inference with Neural Theorem Provers
d7872207
Natural Language Processing (NLP) continues to grow in popularity in a range of research and commercial applications. However, installing, maintaining, and running NLP tools can be time consuming, and many commercial and research end users have only intermittent need for large processing capacity. This paper describes ILLINOISCLOUDNLP, an on-demand framework built around NLPCURATOR and Amazon Web Services' Elastic Compute Cloud (EC2). This framework provides a simple interface to end users via which they can deploy one or more NLPCURATOR instances on EC2, upload plain text documents, specify a set of Text Analytics tools (NLP annotations) to apply, and process and store or download the processed data. It also allows end users to use a model trained on their own data: ILLINOISCLOUDNLP takes care of training, hosting, and applying it to new data just as it does with existing models within NLPCURATOR. As a representative use case, we describe our use of ILLINOISCLOUDNLP to process 3.05 million documents used in the 2012 and 2013 Text Analysis Conference Knowledge Base Population tasks at a relatively deep level of processing, in approximately 20 hours, at an approximate cost of US$500; this is about 20 times faster than doing so on a single server and requires no human supervision and no NLP or Machine Learning expertise.
ILLINOISCLOUDNLP: Text Analytics Services in the Cloud
d236486112
d219309898
d13434512
In this article, we propose a web based listening test system that can be used with a large range of listeners. Our main goals were to make the configuration of the tests as simple and flexible as possible, to simplify the recruiting of the testees and, of course, to keep track of the results using a relational database. This first version of our system can perform the most widely used listening tests in the speech processing community (AB-BA, ABX and MOS tests). It can also easily evolve and propose other tests implemented by the tester by means of a module interface. This scenario is explored in this article which proposes an implementation of a module for Comparison Mean Opinion Score (CMOS) tests and conduct of such an experiment. This test allowed us to extract from the BREF120 corpus a couple of voices of distinct supra-segmental characteristics. This system is offered to the speech synthesis and speech conversion community under free license.
WEB-based listening test system for speech synthesis and speech conversion evaluation
d18431463
Stochastic gradient descent (SGD) uses approximate gradients estimated from subsets of the training data and updates the parameters in an online fashion. This learning framework is attractive because it often requires much less training time in practice than batch training algorithms. However, L1-regularization, which is becoming popular in natural language processing because of its ability to produce compact models, cannot be efficiently applied in SGD training, due to the large dimensions of feature vectors and the fluctuations of approximate gradients. We present a simple method to solve these problems by penalizing the weights according to cumulative values for L1 penalty. We evaluate the effectiveness of our method in three applications: text chunking, named entity recognition, and part-of-speech tagging. Experimental results demonstrate that our method can produce compact and accurate models much more quickly than a state-of-the-art quasi-Newton method for L1-regularized loglinear models.
Stochastic Gradient Descent Training for L1-regularized Log-linear Models with Cumulative Penalty
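The cumulative-penalty update the abstract describes can be stated compactly: track the total L1 penalty u that every weight could have received so far and the penalty q[i] each weight has actually received, then after each gradient step pull the weight toward zero by the outstanding amount, clipping so it cannot cross zero. A stdlib-only sketch of that update (the toy quadratic objective in the demo is ours, not the paper's):

```python
def sgd_l1_cumulative(grad_fn, dim, data, lr=0.1, C=0.1, epochs=1):
    """SGD with a cumulative L1 penalty and clipping at zero.

    grad_fn(w, x) returns the loss gradient on example x;
    u is the total L1 penalty every weight could have received so far,
    q[i] is the (signed) penalty weight i has actually received.
    """
    w = [0.0] * dim
    q = [0.0] * dim
    u = 0.0
    for _ in range(epochs):
        for x in data:
            g = grad_fn(w, x)
            u += lr * C
            for i in range(dim):
                w[i] -= lr * g[i]            # plain gradient step
                z = w[i]
                if w[i] > 0:                 # pull toward zero, clip at zero
                    w[i] = max(0.0, w[i] - (u + q[i]))
                elif w[i] < 0:
                    w[i] = min(0.0, w[i] + (u - q[i]))
                q[i] += w[i] - z             # record penalty actually applied
    return w

# Toy objective (ours, not the paper's): (w0 - 1)^2, with an inert w1.
w = sgd_l1_cumulative(lambda w, x: [2 * (w[0] - 1), 0.0], 2, range(50))
print(w)  # w[1] stays exactly 0; w[0] approaches 1 but is shrunk by L1
```

Because unreceived penalty accumulates in u - q[i], a weight that is clipped at zero owes nothing later, which is what keeps the models compact despite noisy stochastic gradients.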
d30980424
This paper describes the design and functioning of the English generation phase in JETS, a limited transfer, Japanese-English machine translation system that is loosely based on the linguistic framework of relational grammar. To facilitate the development of relational-grammar-based generators, we have built an NL-and-application-independent generator shell and relational grammar rulewriting language. The implemented generator, GENIE, maps abstract canonical structures, representing the basic predicate-argument structures of sentences, into well-formed English sentences via a two-stage plan-and-execute design. This modularity permits the independent development of a very general, deterministic execution grammar that is driven by a set of planning rules sensitive to lexical, syntactic and stylistic constraints. Processing in GENIE is category-driven, i.e., grammatical
Relational-Grammar-Based Generation in the JETS Japanese-English Machine Translation System
d14167126
Most traditional approaches to anaphora resolution rely heavily on linguistic and domain knowledge. One of the disadvantages of developing a knowledge-based system, however, is that it is a very labour-intensive
Multilingual robust anaphora resolution
d51996322
The current study seeks to implement a deep learning classification algorithm using argument-structure level representation of metaphoric constructions, for the identification of source domain mappings in metaphoric utterances. It thus builds on previous work in computational metaphor interpretation (Mohler et al. 2014; Shutova 2010; Bollegala & Shutova 2013; Hong 2016; Su et al. 2017) while implementing a theoretical framework based on work at the interface of metaphor and construction grammar (Sullivan 2006, 2007, 2013). The results indicate that it is possible to achieve an accuracy of approximately 80.4% using the proposed method, combining construction grammatical features with a simple deep learning NN. I attribute this increase in accuracy to the use of constructional cues, extracted from the raw text of metaphoric instances.
Computationally Constructed Concepts: A Machine Learning Approach to Metaphor Interpretation Using Usage-Based Construction Grammatical Cues
d13894161
This paper introduces three types of Statistical Machine Translation (SMT) output errors that would require grammatical knowledge for prevention. The first type is due to words that are negative in meaning but not in form. Problems arise when the negative forms are obligatory in target languages. The second type of errors is derived from the rigidity of pattern phrases or correlatives which do not allow for intervening elements. The third type is caused by ellipses in input sentences which must be reinstated for output sentences when so required by rules of omission in target languages or the difference in Head-Complement order between source and target languages.
SMT Errors Requiring Grammatical Knowledge for Prevention
d2442915
This paper describes our attempt to build a consensus round the morphological analysis of a set of forms for Portuguese, to be used as a basis for the creation of a "golden list" in the first Morpholympics for Portuguese, Morfolimpíadas, an evaluation contest on Portuguese morphological analysis. This golden standard was used to rank participating morphological systems according to precision and coverage. The discussion in the paper is centered on the general choices made and the problems encountered. The paper ends with a short description of the (publicly available) resource.
On the problems of creating a golden standard of inflected forms in Portuguese
d5951198
In most statistical machine translation systems, the phrase/rule extraction algorithm uses alignments in the 1-best form, which might contain spurious alignment points. The usage of weighted alignment matrices that encode all possible alignments has been shown to generate better phrase tables for phrase-based systems. We propose two algorithms to generate the well known MSD reordering model using weighted alignment matrices. Experiments on the IWSLT 2010 evaluation datasets for two language pairs with different alignment algorithms show that our methods produce more accurate reordering models, as can be shown by an increase over the regular MSD models of 0.4 BLEU points in the BTEC French to English test set, and of 1.5 BLEU points in the DIALOG Chinese to English test set.
Reordering Modeling using Weighted Alignment Matrices
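The weighted-matrix algorithms themselves require the full paper, but the MSD model they feed is standard: each phrase pair is classified as monotone, swap, or discontinuous relative to the previously translated phrase. A hedged sketch of that classification over 1-best spans (the representation the paper improves on), using inclusive (start, end) spans:

```python
def msd_orientation(prev, cur):
    """Classify the orientation of phrase pair `cur` w.r.t. `prev`.

    Each phrase pair is ((src_start, src_end), (tgt_start, tgt_end)),
    spans inclusive, with `cur` immediately following `prev` on the
    target side. Returns 'M' (monotone), 'S' (swap) or 'D'
    (discontinuous), as in standard phrase-based reordering models.
    """
    (ps, pe), (pt_s, pt_e) = prev
    (cs, ce), (ct_s, ct_e) = cur
    if cs == pe + 1 and ct_s == pt_e + 1:
        return "M"   # source continues right after the previous phrase
    if ce == ps - 1 and ct_s == pt_e + 1:
        return "S"   # source immediately precedes the previous phrase
    return "D"       # anything else: a discontinuous jump

print(msd_orientation(((0, 1), (0, 1)), ((2, 3), (2, 4))))  # M
print(msd_orientation(((2, 3), (0, 1)), ((0, 1), (2, 3))))  # S
```

The paper's contribution is, in effect, replacing the hard span comparisons above with expectations over all alignments encoded in a weighted matrix.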
d3166162
This paper proposes the application of finite-state approximation techniques on a unification-based grammar of word formation for a language like German. A refinement of an RTN-based approximation algorithm is proposed, which extends the state space of the automaton by selectively adding distinctions based on the parsing history at the point of entering a context-free rule. The selection of history items exploits the specific linguistic nature of word formation. As experiments show, this algorithm avoids an explosion of the size of the automaton in the approximation construction.
Compounding and derivational morphology in a finite-state setting
d13588360
The romanization of non-Latin scripts is a complex computational task that is highly language dependent. This presentation will focus on three of the most challenging non-Latin scripts: Chinese, Japanese, and Arabic (CJA). Much progress has been made in personal name machine-transliteration methodologies, as documented in the various NEWS reports over the last several years. Such techniques as phrase-based SMT, RNN-based LM and CRF have emerged, leading to gradual improvements in accuracy scores. But methodology is only one aspect of the problem. Equally important is the high level of ambiguity of the CJA scripts, which poses special challenges to named entity extraction and machine transliteration. These difficulties are exacerbated by the lack of comprehensive proper noun dictionaries, the multiplicity of ambiguous transcription schemes, and orthographic variation.
SOME LINGUISTIC ISSUES IN THE MACHINE TRANSLITERATION OF CHINESE, JAPANESE, AND ARABIC NAMES
d198183907
This paper describes Tübingen-Oslo team's participation in the cross-lingual morphological analysis task in the VarDial 2019 evaluation campaign. We participated in the shared task with a standard neural network model. Our model achieved analysis F1-scores of 31.48 and 23.67 on test languages Karachay-Balkar (Turkic) and Sardinian (Romance) respectively. The scores are comparable to the scores obtained by the other participants in both language families, and the analysis score on the Romance data set was also the best result obtained in the shared task. Besides describing the system used in our shared task participation, we describe another, simpler, model based on linear classifiers, and present further analyses using both models. Our analyses, besides revealing some of the difficult cases, also confirm that the usefulness of a source language in this task is highly correlated with the similarity of source and target languages.
Neural and Linear Pipeline Approaches to Cross-lingual Morphological Analysis
d232021822
Within a larger frame of facilitating human-robot interaction, we present here the creation of a core vocabulary to be learned by a robot. It is extracted from two tokenised and lemmatized scenarios pertaining to two imagined microworlds in which the robot is supposed to play an assistive role. We also evaluate two resources for their utility for expanding this vocabulary so as to better cope with the robot's communication needs. The language under study is Romanian and the resources used are the Romanian wordnet and word embedding vectors extracted from the large representative corpus of contemporary Romanian, CoRoLa. The evaluation is made for two situations: one in which the words are not semantically disambiguated before expanding the lexicon, and another one in which they are disambiguated with senses from the Romanian wordnet. The appropriateness of each resource is discussed.
Evaluating the Wordnet and CoRoLa-based Word Embedding Vectors for Romanian as Resources in the Task of Microworlds Lexicon Expansion
d10977966
The Prague Dependency Treebank has been conceived of as a semi-automatic three-layer annotation system, in which the layers of morphemic and 'analytic' (surface-syntactic) tagging are followed by the layer of tectogrammatical tree structures. Two types of deletions are recognized: (i) those licensed by the grammatical properties of the given sentence, and (ii) those possible only if the preceding context exhibits certain specific properties. Within group (i), either the position itself in the sentence structure is determined, but its lexical setting is 'free' (as e.g. with a deleted subject in Czech as a pro-drop language), or both the position and its 'filler' are determined. Group (ii) reflects the typological differences between English and Czech; the rich morphemics of the latter is more favorable for deletions. Several steps of the tagging procedure are carried out automatically, but most parts of the restoration of deleted nodes still have to be done "manually". If along with the node that is being restored, also nodes depending on it are deleted, then these are restored only if they function as arguments or obligatory adjuncts. The large set of annotated utterances will make it possible to check and amend the present results, also with applications of statistic methods. Theoretical linguistics will be enabled to check its descriptive framework; the degree of automation of the procedure will then be raised, and the treebank will be useful for most different tasks in language processing.
Semantico-Syntactic Tagging of Very Large Corpora: the Case of Restoration of Nodes on the Underlying Level
d233364996
Sarcasm detection and sentiment analysis are important tasks in Natural Language Understanding. Sarcasm is a type of expression where the sentiment polarity is flipped by an interfering factor. In this study, we exploited this relationship to enhance both tasks by proposing a multi-task learning approach using a combination of static and contextualised embeddings. Our proposed system achieved the best result in the sarcasm detection subtask (Abu Farha et al., 2021).
Multi-task Learning Using a Combination of Contextualised and Static Word Embeddings for Arabic Sarcasm Detection and Sentiment Analysis
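The multi-task setup described above can be sketched minimally: one shared encoding feeds two task-specific heads, whose losses would be summed during training. The sizes, weights, and label sets below are illustrative stand-ins, not the authors' model.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 8  # toy embedding size

# Randomly initialised parameters standing in for trained ones.
W_shared = rng.normal(scale=0.1, size=(DIM, DIM))    # shared layer
W_sarcasm = rng.normal(scale=0.1, size=(DIM, 2))     # head 1: sarcastic / not
W_sentiment = rng.normal(scale=0.1, size=(DIM, 3))   # head 2: neg / neu / pos

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def forward(embedding):
    """One shared representation, two task-specific predictions."""
    h = np.tanh(embedding @ W_shared)
    return softmax(h @ W_sarcasm), softmax(h @ W_sentiment)

p_sarcasm, p_sentiment = forward(rng.normal(size=DIM))
```

In training, the two cross-entropy losses would be summed so the shared layer benefits from both supervision signals, which is what lets sarcasm cues inform sentiment and vice versa.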
d248780502
In this paper, we present the details of our approaches that attained the second place in the shared task of the ACL 2022 Cognitive Modeling and Computational Linguistics Workshop. The shared task is focused on multi- and cross-lingual prediction of eye movement features in human reading behavior, which could provide valuable information regarding language processing. To this end, we train 'adapters' inserted into the layers of frozen transformer-based pretrained language models. We find that multilingual models equipped with adapters perform well in predicting eye-tracking features. Our results suggest that utilizing language- and task-specific adapters is beneficial and that translating test sets into similar languages that exist in the training set could help with zero-shot transferability in the prediction of human reading behavior.
Team DMG at CMCL 2022 Shared Task: Transformer Adapters for the Multi- and Cross-Lingual Prediction of Human Reading Behavior
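The adapter idea above can be sketched in a few lines: a small bottleneck module with a residual connection, inserted after a frozen transformer layer, so that only the adapter weights are trained. Sizes and initialisation below are illustrative, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(1)
HIDDEN, BOTTLENECK = 16, 4  # toy sizes; real adapters sit inside each layer

W_down = rng.normal(scale=0.1, size=(HIDDEN, BOTTLENECK))  # trainable
W_up = np.zeros((BOTTLENECK, HIDDEN))  # zero-init: adapter starts as identity

def adapter(h):
    """Down-project, non-linearity, up-project, residual add."""
    z = np.maximum(h @ W_down, 0.0)  # ReLU bottleneck
    return h + z @ W_up

h = rng.normal(size=HIDDEN)
out = adapter(h)
```

Zero-initialising the up-projection is a common trick: the frozen model's behaviour is untouched at the start of training, and the adapter gradually learns a task-specific residual.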
d17078663
Product names with a temporal cue in a product review often refer to several product instances purchased at different times. Previous approaches to product entity recognition and temporal information analysis do not take into account such temporal cues and thus fail to distinguish different product instances. We propose to formulate the resolution of such product names as a classification problem by utilizing time expressions, event features and other temporal cues for a classifier in two stages, detecting the existence of such temporal cues and identifying the purchase time. The empirical results show that term-based features and existing event-based features together enhance the performance of product instance distinction.
Product Name Classification for Product Instance Distinction
d15861242
Social networks have transformed communication dramatically in recent years through the rise of new platforms and the development of a new language of communication. This landscape requires new methods to describe and predict the behaviour of users in networks. This paper presents an analysis of the frequency distribution of hashtag popularity in Twitter conversations. Our objective is to determine whether these frequency distributions follow some well-known distribution that many real-life sets of numerical data satisfy. In particular, we study the similarity of the frequency distribution of hashtag popularity to Zipf's law, an empirical law describing the phenomenon that many types of data in the social sciences can be approximated by a Zipfian distribution. We also analyse Benford's law, a special case of Zipf's law, which describes a common pattern in the frequency distribution of leading digits. In order to compute the frequency distribution of hashtag popularity correctly, we need to correct the many spelling errors that Twitter users introduce. For this purpose we introduce a new filter, based on string distances, to correct hashtag mistakes. Experiments on datasets of Twitter streams generated under controlled conditions show that Benford's law and Zipf's law can be used to model hashtag frequency distribution.
Zipf's and Benford's laws in Twitter hashtags
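Benford's prediction for leading digits, and the comparison against observed hashtag counts, can be sketched as follows (the popularity counts below are hypothetical examples, not the paper's data):

```python
import math
from collections import Counter

def benford_expected(d: int) -> float:
    """Benford's law: P(leading digit = d) = log10(1 + 1/d)."""
    return math.log10(1 + 1 / d)

def leading_digit_distribution(counts):
    """Empirical leading-digit frequencies of popularity counts."""
    digits = [int(str(c)[0]) for c in counts if c > 0]
    freq = Counter(digits)
    return {d: freq.get(d, 0) / len(digits) for d in range(1, 10)}

# Hypothetical hashtag popularity counts (uses per hashtag).
counts = [12, 34, 150, 9, 27, 3, 1, 18, 210, 45, 7, 1, 2, 99, 13]
observed = leading_digit_distribution(counts)
# Benford predicts digit 1 about 30.1% of the time: benford_expected(1) ~ 0.301
```

Comparing `observed` against `benford_expected` over digits 1-9 (e.g. with a chi-squared statistic) is the kind of goodness-of-fit check the analysis above relies on.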
d79360
We propose a distance phrase reordering model (DPR) for statistical machine translation (SMT), where the aim is to capture phrase reorderings using a structure learning framework. On both the reordering classification and a Chinese-to-English translation task, we show improved performance over a baseline SMT system.
Handling phrase reorderings for machine translation
d218974107
d40168724
This paper proposes methods to pre-process questions in postings before a QA system finds answers in Internet discussion groups. Pre-processing includes garbage text removal and question segmentation. Garbage keywords are collected, and different length thresholds are assigned to them for garbage text identification. Interrogative forms and question types are used to segment questions. On the test set, the best performance achieves 92.57% accuracy in garbage text removal and 85.87% accuracy in question segmentation.
Question Pre-Processing in a QA System on Internet Discussion Groups
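The two pre-processing stages described above can be sketched as follows; the garbage keywords, length thresholds, and interrogative list are hypothetical illustrations, not the paper's resources.

```python
# Stage 1: garbage text removal. A line counts as garbage when it contains
# a garbage keyword AND is shorter than that keyword's length threshold.
GARBAGE_THRESHOLDS = {"thanks": 30, "regards": 40, "please help": 25}

def remove_garbage(line: str) -> bool:
    low = line.lower()
    return any(kw in low and len(line) < limit
               for kw, limit in GARBAGE_THRESHOLDS.items())

# Stage 2: question segmentation, keyed on interrogative forms.
INTERROGATIVES = ("how", "what", "why", "where", "who", "can", "does", "is")

def segment_questions(text: str):
    """Split a posting into sentences and keep those that look like questions."""
    sentences = [s.strip()
                 for s in text.replace("?", "?|").split("|") if s.strip()]
    return [s for s in sentences
            if s.endswith("?") or s.lower().startswith(INTERROGATIVES)]

post = "How do I reset my router? It keeps dropping. Thanks a lot!"
questions = segment_questions(post)
```

On the example posting, stage 2 keeps only the actual question, and stage 1 would flag the short "Thanks a lot!" closing line as garbage.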
d30919873
The problem of (semi-)automatic treebank conversion arises when converting between different schemas, such as from a language-specific schema to Universal Dependencies, or when converting from one Universal Dependencies version to the next. We propose a formalism based on top-down tree transducers to convert dependency trees. Building on a well-defined mechanism yields a robust transformation system with clear semantics for rules, one which, in contrast to previously proposed solutions, guarantees that every transformation step results in a well-formed tree. The rules depend only on the local context of the node to convert and rely on the dependency labels as well as the PoS tags. To demonstrate the efficiency of our approach, we created a rule set based on only 45 manually transformed sentences from the Hamburg Dependency Treebank. These rules can already transform annotations with both coverage and precision of more than 90%.
Dependency Schema Transformation with Tree Transducers
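A minimal sketch of local, rule-based dependency relabelling in the spirit of the transducer approach above: each rule fires on a node's own dependency label and PoS tag, so the tree shape is preserved by construction. The rule set and labels are hypothetical, not the paper's.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    form: str
    pos: str
    deprel: str
    children: list = field(default_factory=list)

# Each rule matches (current deprel, current PoS) -- the node's local
# context -- and rewrites only the label, never the tree structure.
RULES = {("SUBJ", "NOUN"): "nsubj",
         ("OBJA", "NOUN"): "obj",
         ("DET", "DET"): "det"}

def convert(node: Node) -> Node:
    """Top-down pass: relabel a node, then recurse into its children."""
    node.deprel = RULES.get((node.deprel, node.pos), node.deprel)
    for child in node.children:
        convert(child)
    return node

tree = Node("sieht", "VERB", "root",
            [Node("Hund", "NOUN", "SUBJ", [Node("der", "DET", "DET")]),
             Node("Katze", "NOUN", "OBJA")])
convert(tree)
```

Because every rule only rewrites a label in place, any input tree stays a well-formed tree after each step, which is the guarantee the formalism above provides (real transducer rules can additionally reattach subtrees under controlled conditions).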
d38933531
This paper describes a fully-automated realtime broadcast news video and audio processing system. The system combines speech recognition, machine translation, and crosslingual information retrieval components to enable real-time alerting from live English and Arabic news sources.
Multilingual Video and Audio News Alerting
d241583698
In decision making in the economic field, an especially important requirement is to rapidly understand news to absorb ever-changing economic situations. Given that most economic news is written in English, the ability to read such information without waiting for a translation is particularly valuable in economics in contrast to other fields. In consideration of this issue, this research investigated the extent to which non-native English speakers are able to read economic news to make decisions accordingly -an issue that has been rarely addressed in previous studies. Using an existing standard dataset as training data, we created a classifier that automatically evaluates the readability of text with high accuracy for English learners. Our assessment of the readability of an economic news corpus revealed that most news texts can be read by intermediate English learners. We also found that in some cases, readability varies considerably depending on the knowledge of certain words specific to the economic field.
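The kind of surface cues a readability classifier starts from can be sketched simply; this is an illustrative feature extractor, not the paper's model, which is trained on a standard readability dataset.

```python
def readability_features(text: str) -> dict:
    """Simple surface cues: longer sentences and longer words
    generally indicate harder text for language learners."""
    normalized = text.replace("!", ".").replace("?", ".")
    sentences = [s.strip() for s in normalized.split(".") if s.strip()]
    words = [w.strip(".,!?") for w in text.split()]
    return {
        "avg_sentence_len": sum(len(s.split()) for s in sentences) / len(sentences),
        "avg_word_len": sum(len(w) for w in words) / len(words),
    }

easy = readability_features("The bank is open. Rates are low.")
hard = readability_features(
    "Quantitative easing exacerbated macroeconomic uncertainty across emerging markets.")
```

A real system would feed features like these, plus vocabulary-level signals (e.g. knowledge of field-specific economic terms, as the abstract notes), into a trained classifier.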
d252624730
This paper presents first steps towards a sign language avatar for communicating railway travel announcements in Dutch Sign Language. Taking an interdisciplinary approach, it demonstrates effective ways to employ co-design and focus group methods in the context of developing sign language technology. It also presents several concrete findings and results obtained through co-design and focus group sessions, which have not only led to improvements of our own prototype but may also inform the development of signing avatars for other languages and in other application domains.
First Steps Towards a Signing Avatar for Railway Travel Announcements in the Netherlands
d1839539
The goal of this paper is to propose a classification of the syntactic alternations admitted by the most frequent Italian verbs. The data-driven two-step procedure exploited and the structure of the identified classes of alternations are presented in depth and discussed. Although this classification has been developed with a practical application in mind, namely the semi-automatic building of a VerbNet-like lexicon for Italian verbs, partly following the methodology proposed in the context of the VerbNet project, its availability may have a positive impact on several related research topics and Natural Language Processing tasks.
Bootstrapping an Italian VerbNet: data-driven analysis of verb alternations
d1211840
Syntax-based MT systems have proven effective: the models are compelling and show good room for improvement. However, decoding involves a slow search. We present a new lazy-search method that obtains significant speedups over a strong baseline, with no loss in Bleu.
Faster MT Decoding through Pervasive Laziness
d244009860
This paper deliberates on the process of building the first constituency-to-dependency conversion tool for Turkish. The starting point of this work is a previous study in which 10,000 phrase structure trees were manually transformed into Turkish from the original Penn Treebank corpus. Within the scope of this project, these Turkish phrase structure trees were automatically converted into UD-style dependency structures, using both a rule-based algorithm and a machine learning algorithm tailored to the requirements of the Turkish language. The results of both algorithms were compared, and the machine learning approach proved to be more accurate than the rule-based algorithm. The output was revised by a team of linguists, and the refined versions were taken as gold-standard annotations for the evaluation of the algorithms. In addition to contributing a large dataset of 10,000 Turkish dependency trees to the UD Project, this project also fills the important gap of a Turkish conversion tool, enabling the quick compilation of dependency corpora that can be used for training better dependency parsers.
From Constituency to UD-Style Dependency: Building the First Conversion Tool of Turkish
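The rule-based side of such a conversion typically rests on head-percolation tables: find the lexical head of each constituent, then attach every non-head child's head word to it. The toy head table and tree below are hypothetical illustrations, not the project's actual rules.

```python
# Head rules: for each constituent label, the child labels to search
# for a head, in priority order (toy table, not the paper's).
HEAD_RULES = {"S": ["VP", "NP"], "VP": ["V"], "NP": ["N"]}

def head_word(tree):
    """Return the lexical head of a constituency (sub)tree.
    Trees are (label, children) pairs; pre-terminals are (POS, word)."""
    label, children = tree
    if isinstance(children, str):        # pre-terminal
        return children
    for wanted in HEAD_RULES.get(label, []):
        for child in children:
            if child[0] == wanted:
                return head_word(child)
    return head_word(children[0])        # fallback: leftmost child

def to_dependencies(tree, arcs=None):
    """Attach each non-head child's head word to the constituent's head."""
    if arcs is None:
        arcs = []
    label, children = tree
    if isinstance(children, str):
        return arcs
    h = head_word(tree)
    for child in children:
        ch = head_word(child)
        if ch != h:
            arcs.append((h, ch))         # (head, dependent)
        to_dependencies(child, arcs)
    return arcs

tree = ("S", [("NP", [("N", "Ali")]),
              ("VP", [("V", "geldi")])])
arcs = to_dependencies(tree)
```

For the toy sentence "Ali geldi" ('Ali came'), the verb heads the sentence and the subject attaches to it; a full converter would additionally assign UD relation labels to each arc.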
d7472011
A grammatical method of combining two kinds of speech repair cues is presented. One cue, prosodic disjuncture, is detected by a decision tree-based ensemble classifier that uses acoustic cues to identify where normal prosody seems to be interrupted (Lickley, 1996). The other cue, syntactic parallelism, codifies the expectation that repairs continue a syntactic category that was left unfinished in the reparandum (Levelt, 1983). The two cues are combined in a Treebank PCFG whose states are split using a few simple tree transformations. Parsing performance on the Switchboard and Fisher corpora suggests that these two cues help to locate speech repairs in a synergistic way.
PCFGs with Syntactic and Prosodic Indicators of Speech Repairs
d199379325
Natural language inference is the task of identifying the relation between two sentences as entailment, contradiction, or neutrality. MedNLI is a biomedical flavour of NLI for the clinical domain. This paper explores the use of Bidirectional Encoder Representations from Transformers (BERT) for solving MedNLI. The proposed model, BERT pre-trained on PMC and PubMed and fine-tuned on MIMIC-III v1.4, achieves state-of-the-art results on MedNLI (83.45%) and an accuracy of 78.5% in the MEDIQA challenge. The authors present an analysis of the attention patterns that emerged as a result of training BERT on MedNLI using a visualization tool, bertviz.
Saama Research at MEDIQA 2019: Pre-trained BioBERT with Attention Visualisation for Medical Natural Language Inference
d250390514
Word embeddings are growing to be a crucial resource in the field of NLP for any language. This work introduces a novel technique for static subword embeddings transfer for Indic languages from a relatively higher resource language to a genealogically related low resource language. We primarily work with Hindi-Marathi, simulating a low-resource scenario for Marathi, and confirm observed trends on Nepali. We demonstrate the consistent benefits of unsupervised morphemic segmentation on both source and target sides over the treatment performed by fastText. Our best-performing approach uses an EM-style approach to learning bilingual subword embeddings; we also show, for the first time, that a trivial "copy-and-paste" embeddings transfer based on even perfect bilingual lexicons is inadequate in capturing language-specific relationships. We find that our approach substantially outperforms the fastText baselines for both Marathi and Nepali on the Word Similarity task as well as WordNet-Based Synonymy Tests; on the former task, its performance for Marathi is close to that of pretrained fastText embeddings that use three orders of magnitude more Marathi data.
Subword-based Cross-lingual Transfer of Embeddings from Hindi to Marathi and Nepali
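The "copy-and-paste" baseline mentioned above can be sketched in a couple of lines: each target-language word simply inherits the vector of its source-language translation from a bilingual lexicon. The lexicon entries and vectors below are toy illustrations.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy pretrained source-side (Hindi) vectors, stand-ins for real embeddings.
hindi_vecs = {"paanii": rng.normal(size=4), "ghar": rng.normal(size=4)}

# Hypothetical Marathi -> Hindi bilingual lexicon.
lexicon = {"paanii_mr": "paanii", "ghar_mr": "ghar"}

# Copy-and-paste transfer: each target word reuses the source vector as-is.
marathi_vecs = {mr: hindi_vecs[hi].copy() for mr, hi in lexicon.items()}
```

Because every transferred vector is identical to its source counterpart, the resulting space can encode no Marathi-specific relationships at all, which is exactly the inadequacy the abstract reports even under a perfect lexicon.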
d7629244
A simple conceptual model is employed to investigate events, and break the task of coreference resolution into two steps: semantic class detection and similaritybased matching. With this perspective an algorithm is implemented to cluster event mentions in a large-scale corpus. Results on test data from AQUAINT TimeML, which we annotated manually with coreference links, reveal how semantic conventions vs. information available in the context of event mentions affect decisions in coreference analysis.
Conceptual and Practical Steps in Event Coreference Analysis of Large-scale Data
d2045887
We present the Uppsala Persian Dependency Treebank (UPDT) with a syntactic annotation scheme based on Stanford Typed Dependencies. The treebank consists of 6,000 sentences and 151,671 tokens with an average sentence length of 25 words. The data is from different genres, including newspaper articles and fiction, as well as technical descriptions and texts about culture and art, taken from the open source Uppsala Persian Corpus (UPC). The syntactic annotation scheme is extended for Persian to include all syntactic relations that could not be covered by the primary scheme developed for English. In addition, we present open source tools for automatic analysis of Persian containing a text normalizer, a sentence segmenter and tokenizer, a part-of-speech tagger, and a parser. The treebank and the parser have been developed simultaneously in a bootstrapping procedure. The result of a parsing experiment shows an overall labeled attachment score of 82.05% and an unlabeled attachment score of 85.29%. The treebank is freely available as an open source resource.
A Persian Treebank with Stanford Typed Dependencies
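The attachment scores quoted above follow the standard definitions, sketched here on toy data: UAS counts tokens whose predicted head is correct, and LAS additionally requires the dependency label to match.

```python
def attachment_scores(gold, pred):
    """gold/pred: one (head_index, deprel) pair per token."""
    uas = sum(g[0] == p[0] for g, p in zip(gold, pred)) / len(gold)
    las = sum(g == p for g, p in zip(gold, pred)) / len(gold)
    return uas, las

# Toy 4-token sentence: token 3 gets the wrong label, token 4 the wrong head.
gold = [(2, "nsubj"), (0, "root"), (2, "obj"), (3, "nmod")]
pred = [(2, "nsubj"), (0, "root"), (2, "iobj"), (2, "nmod")]
uas, las = attachment_scores(gold, pred)
# uas = 3/4 (one wrong head), las = 2/4 (wrong head or wrong label)
```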
d227231392
We present edition 1.2 of the PARSEME shared task on identification of verbal multiword expressions (VMWEs). Lessons learned from previous editions indicate that VMWEs have low ambiguity, and that the major challenge lies in identifying test instances never seen in the training data. Therefore, this edition focuses on unseen VMWEs. We have split annotated corpora so that the test corpora contain around 300 unseen VMWEs, and we provide non-annotated raw corpora to be used by complementary discovery methods. We released annotated and raw corpora in 14 languages, and this semi-supervised challenge attracted 7 teams who submitted 9 system results. This paper describes the effort of corpus creation, the task design, and the results obtained by the participating systems, especially their performance on unseen expressions.
Edition 1.2 of the PARSEME Shared Task on Semi-supervised Identification of Verbal Multiword Expressions
d248779988
Zero-shot relation extraction (ZSRE) aims to predict target relations that cannot be observed during training. While most previous studies have focused on fully supervised relation extraction and achieved considerably high performance, less effort has been made towards ZSRE. This study proposes a new model incorporating discriminative embedding learning for both sentences and semantic relations. In addition, a self-adaptive comparator network is used to judge whether the relationship between a sentence and a relation is consistent. Experimental results on two benchmark datasets showed that the proposed method significantly outperforms the state-of-the-art methods.
Improving Discriminative Learning for Zero-Shot Relation Extraction
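The core zero-shot mechanism above can be sketched simply: embed the sentence and each candidate relation *description* in a shared space and pick the nearest relation, so relations unseen in training can still be predicted. The character-trigram "encoder" and relation inventory below are toy stand-ins, not the paper's discriminative embedding model or comparator network.

```python
import zlib
import numpy as np

def embed(text: str, dim: int = 64) -> np.ndarray:
    """Toy encoder: L2-normalised bag of hashed character trigrams."""
    v = np.zeros(dim)
    t = text.lower()
    for i in range(len(t) - 2):
        v[zlib.crc32(t[i:i + 3].encode()) % dim] += 1.0
    n = np.linalg.norm(v)
    return v / n if n else v

# Candidate relations described in natural language; none need to have
# been seen at training time.
relations = {
    "place of birth": "person was born in location",
    "employer": "person works for organization",
}

def predict(sentence: str) -> str:
    """Pick the relation whose description is closest in the shared space."""
    s = embed(sentence)
    return max(relations, key=lambda r: float(s @ embed(relations[r])))

pred = predict("Ada Lovelace was born in London")
```

A trained model would replace the trigram encoder with learned sentence and relation embeddings, and the dot product with a learned comparator, but the matching structure is the same.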
d7391475
We propose a new method for word sense disambiguation for verbs. In our method, sense-dependent selectional preference of verbs is obtained through a probabilistic model on the lexical network. The mean-field approximation is employed to compute the state of the lexical network. The outcome of the computation is used as features for discriminative classifiers. The method is evaluated on the dataset of the Japanese word sense disambiguation task.
Potts Model on the Case Fillers for Word Sense Disambiguation
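A toy mean-field update on a Potts-style model: each node in a small "lexical network" keeps a distribution over K states (senses), iteratively updated from its neighbours until it settles. The couplings, bias, and sizes below are illustrative, not the paper's network.

```python
import numpy as np

K, BETA = 3, 1.0
W = np.array([[0.0, 1.0, 0.5],
              [1.0, 0.0, 1.0],
              [0.5, 1.0, 0.0]])  # symmetric couplings between 3 nodes

def mean_field(W, b, n_iter=100, beta=BETA):
    """Iterate q[i,s] proportional to exp(beta*(sum_j W[i,j]*q[j,s] + b[i,s]))."""
    n, k = b.shape
    q = np.full((n, k), 1.0 / k)  # uniform start
    for _ in range(n_iter):
        logits = beta * (W @ q + b)
        q = np.exp(logits - logits.max(axis=1, keepdims=True))
        q /= q.sum(axis=1, keepdims=True)
    return q

b = np.zeros((3, K))
b[0, 0] = 2.0           # external evidence: node 0 prefers sense 0
q = mean_field(W, b)    # agreement propagates through positive couplings
```

With ferromagnetic (positive) couplings, the evidence at node 0 propagates so that all nodes end up preferring sense 0; the converged marginals `q` are the kind of network state that could then serve as classifier features.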
d6235360
A long-standing challenge in coreference resolution has been the incorporation of entity-level information -features defined over clusters of mentions instead of mention pairs. We present a neural network based coreference system that produces high-dimensional vector representations for pairs of coreference clusters. Using these representations, our system learns when combining clusters is desirable. We train the system with a learning-to-search algorithm that teaches it which local decisions (cluster merges) will lead to a high-scoring final coreference partition. The system substantially outperforms the current state-of-the-art on the English and Chinese portions of the CoNLL 2012 Shared Task dataset despite using few hand-engineered features.
Improving Coreference Resolution by Learning Entity-Level Distributed Representations
d19007990
What is the information captured by neural network models of language? We address this question in the case of character-level recurrent neural language models. These models do not have explicit word representations; do they acquire implicit ones? We assess the lexical capacity of a network using the lexical decision task common in psycholinguistics: the system is required to decide whether or not a string of characters forms a word. We explore how accuracy on this task is affected by the architecture of the network, focusing on cell type (LSTM vs. SRN), depth and width. We also compare these architectural properties to a simple count of the parameters of the network. The overall number of parameters in the network turns out to be the most important predictor of accuracy; in particular, there is little evidence that deeper networks are beneficial for this task.
Comparing Character-level Neural Language Models Using a Lexical Decision Task
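The lexical decision setup can be sketched with a tiny character-bigram model standing in for the recurrent networks studied above: score a string by its average log-probability under the model and threshold it. The lexicon, smoothing constants, and threshold are illustrative.

```python
import math
from collections import defaultdict

# Toy training lexicon standing in for a real corpus.
lexicon = ["cat", "car", "care", "call", "can", "cane", "dog", "dot"]

# Character-bigram counts with start (^) and end ($) markers.
counts = defaultdict(lambda: defaultdict(int))
for w in lexicon:
    padded = "^" + w + "$"
    for a, b in zip(padded, padded[1:]):
        counts[a][b] += 1

def avg_logprob(s: str) -> float:
    """Average smoothed log-probability per character transition
    (add-0.1 smoothing over 26 letters plus the end marker)."""
    padded = "^" + s + "$"
    total = 0.0
    for a, b in zip(padded, padded[1:]):
        context = sum(counts[a].values())
        total += math.log((counts[a][b] + 0.1) / (context + 0.1 * 27))
    return total / (len(s) + 1)

def lexical_decision(s: str, threshold: float = -2.5) -> bool:
    """Accept as a word if the string is probable enough under the model."""
    return avg_logprob(s) > threshold
```

The networks in the paper play the role of `avg_logprob` here: a string of characters is accepted as a word when the model finds it sufficiently probable, and accuracy on such decisions is what the architectural comparison measures.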
d60520470
Various issues in the implementation of generalized LR parsing with probability are discussed. A method for preventing the generation of infinite numbers of states is described, and the space requirements of the parsing tables are assessed for a substantial natural-language grammar. Because of a high degree of ambiguity in the grammar, there are many multiple entries and the tables are rather large. A new method for grammar adaptation is introduced which may help to reduce this problem. A probabilistic version of the Tomita parse forest is also described.
Adaptive Probabilistic Generalized LR Parsing