d260493816
Events in text documents are interrelated in complex ways. In this paper, we study two types of relation: Event Coreference and Event Sequencing. We show that the popular tree-like decoding structure for automated Event Coreference is not suitable for Event Sequencing. To this end, we propose a graph-based decoding algorithm that is applicable to both tasks. The new decoding algorithm supports flexible feature sets for both tasks. Empirically, our event coreference system has achieved state-of-the-art performance on the TAC-KBP 2015 event coreference task and our event sequencing system beats a strong temporal-based, oracle-informed baseline. We discuss the challenges of studying these event relations.
Graph-Based Decoding for Event Coreference and Sequencing Resolution
d8109232
This paper presents a generic dialogue state tracker that maintains beliefs over user goals based on a few simple domain-independent rules, using basic probability operations. The rules apply to observed system actions and partially observable user acts, without using any knowledge obtained from external resources (i.e. without requiring training data). The core insight is to maximise the amount of information directly gainable from an error-prone dialogue itself, so as to better lower-bound one's expectations on the performance of more advanced statistical techniques for the task. The proposed method is evaluated in the Dialog State Tracking Challenge, where it achieves comparable performance in hypothesis accuracy to machine learning based systems. Consequently, with respect to different scenarios for the belief tracking problem, the potential superiority and weakness of machine learning approaches in general are investigated.
A Simple and Generic Belief Tracking Mechanism for the Dialog State Tracking Challenge: On the believability of observed information
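The kind of rule-based belief update this abstract describes can be sketched in a few lines. The following is a hypothetical minimal version using one common update rule (an observed value gains mass in proportion to its SLU confidence, competitors are scaled down); it is not the paper's exact rule set:

```python
def update_belief(belief, slu_hypotheses):
    """One rule-based belief update for a single slot.

    belief: dict mapping slot value -> probability mass (may sum to
            less than 1; the remainder is 'unknown' mass).
    slu_hypotheses: dict mapping slot value -> SLU confidence score
            for the current turn.
    """
    new = dict(belief)
    for value, conf in slu_hypotheses.items():
        prior = new.get(value, 0.0)
        # Basic probability operation: the value holds if it was
        # already believed, or if the new (error-prone) observation
        # is correct: b' = b + c * (1 - b).
        new[value] = prior + conf * (1.0 - prior)
        # Scale down competing values so total mass stays <= 1.
        for other in new:
            if other != value:
                new[other] *= (1.0 - conf)
    return new
```

Repeated observations of the same value then monotonically strengthen the belief, e.g. two observations of "indian" at confidence 0.6 and 0.5 yield a belief of 0.8.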
d219306274
d14760845
We present the project of classification of Prague Discourse Treebank documents (Czech journalistic texts) for their genres. Our main interest lies in opening the possibility to observe how text coherence is realized in different types (in the genre sense) of language data and, in the future, in exploring the ways of using genres as a feature for multi-sentence-level language technologies. In the paper, we first describe the motivation and the concept of the genre annotation, and briefly introduce the Prague Discourse Treebank. Then, we elaborate on the process of manual annotation of genres in the treebank, from the annotators' manual work to post-annotation checks and to the inter-annotator agreement measurements. The annotated genres are subsequently analyzed together with discourse relations (already annotated in the treebank) -we present distributions of the annotated genres and results of studying distinctions of distributions of discourse relations across the individual genres.
Genres in the Prague Discourse Treebank
d13550283
We describe preliminary work in the creation of the first specialized vocabulary to be integrated into the Open Multilingual Wordnet (OMW). The NCIt Derived WordNet (ncitWN) is based on the National Cancer Institute Thesaurus (NCIt), a controlled biomedical terminology that includes formal class restrictions and English definitions developed by groups of clinicians and terminologists. The ncitWN is created by converting the NCIt to the WordNet Lexical Markup Framework and adding semantic types. We report the development of a prototype ncitWN and first steps towards integrating it into the OMW.
Toward Constructing the National Cancer Institute Thesaurus Derived WordNet (ncitWN)
d5667397
The paper discusses some classes of contextual grammars--mainly those with "maximal use of selectors"--giving some arguments that these grammars can be considered a good model for natural language syntax. A contextual grammar produces a language starting from a finite set of words and iteratively adding contexts to the currently generated words, according to a selection procedure: each context has associated with it a selector, a set of words; the context is adjoined to any occurrence of such a selector in the word to be derived. In grammars with maximal use of selectors, a context is adjoined only to selectors for which no superword is a selector. Maximality can be defined either locally or globally (with respect to all selectors in the grammar). The obtained families of languages are incomparable with that of Chomsky context-free languages (and with other families of languages that contain linear languages and that are not "too large"; see Section 5) and have a series of properties supporting the assertion that these grammars are a possible adequate model for the syntax of natural languages. They are able to straightforwardly describe all the usual restrictions appearing in natural (and artificial) languages, which lead to the non-context-freeness of these languages: reduplication, crossed dependencies, and multiple agreements; however, there are center-embedded constructions that cannot be covered by these grammars. While these assertions concern only the weak generative capacity of contextual grammars, some ideas are also proposed for associating a structure to the generated words, in the form of a tree, or of a dependence relation (as considered in descriptive linguistics and also similar to that in link grammars). A contextual grammar is thus built from a finite set of words, contexts, and selectors (sets of words); a context can be adjoined to any associated word-selector.
In this way, starting from a finite set of words, we can generate a language. This operation of iterated selective insertion of words is related to the basic combinatorics on words, as well as to the basic operations in rewriting systems of any type. Indeed, contextual grammars, in the many variants considered in the literature, were investigated mainly from a mathematical point of view; see Păun (1982, 1985, 1994), Păun, Rozenberg and Salomaa (1994), and their references. A complete source of information is the monograph Păun (1997). A few applications of contextual grammars were developed in connection with action theory (Păun 1979), with the study of theatrical works (Păun 1976), and with computer program evolution (Bălănescu and Gheorghe 1987), but up to now no attempt has been made to check the relevance of contextual grammars in the very field where they were motivated: linguistics, the study of natural languages. A sort of a posteriori explanation is given: the variants of contextual grammars investigated so far are not powerful enough, hence they are not interesting enough; what they can do, a regular or a context-free grammar can do as well. However, a recently introduced class of contextual grammars seems to be quite appealing from this point of view: the grammars with a maximal use of selectors (Martín-Vide et al. 1995). In these grammars, a context is adjoined to a word-selector if this selector is the largest at that place (no other word containing it as a proper subword can be a selector).
Speaking strictly from a formal language theory point of view, the behavior of these grammars is not spectacular: the family of generated languages is incomparable with the family of context-free languages, incomparable with many other families of contextual languages, and (strictly) included in the family of context-sensitive languages, properties rather common in the area of contextual grammars. This type of grammar has a surprising property, however, important from a linguistic point of view: all of the three basic features of natural (and artificial) languages that lead to their non-context-freeness (reduplication, crossed dependencies, and multiple agreements) can be covered by such grammars (and no other class of contextual grammars can do the same). Technically, the above-mentioned non-context-free features lead to formal languages of the forms {xcx | x ∈ {a, b}*} (duplicated words of arbitrary length), {a^n b^m c^n d^m | n, m ≥ 1} (two crossed dependencies), and {a^n b^n c^n | n ≥ 1} ([at least] three correlated positions). All of them are non-context-free languages and all of them can be generated in a surprisingly simple way by contextual grammars with selectors used in the maximal mode. Examples of natural language constructions based on reduplication were found, for instance, by Culy (1985) and Radzinski (1990), whereas crossed dependencies were demonstrated for Swiss German by Shieber (1985); see also Partee, ter Meulen and Wall (1990) or a number of contributions to Savitch et al. (1987). Multiple agreements were identified early on in programming languages (see, for example, Floyd [1962]), and certain constructions having such characteristics can also be found in natural languages. We shall give some arguments in Section 4. Some remarks are in order here. Although we mainly deal with the syntax of natural languages, we sometimes also mention artificial languages, mainly programming languages.
Without entering into details outside the scope of our paper, we adopt the standpoint that natural and artificial languages have many common features (Manaster Ramer 1993). (A word of warning: when we invoke statements concerning various topics, some of which have been debated for a long time, we do not necessarily argue for these statements, and we do not consider the adequacy of contextual grammars as either proved or disproved by them. We simply mention a connection between a linguistic fact and a feature of our grammars.) For instance, we consider these languages infinite and organized on successive levels of grammaticality, whose number is unlimited in principle, although practically only a finite number of such levels can be approached. In Marcus (1981-83), where an effective analysis of contextual ambiguity in English, French, Romanian, and Hungarian is proposed, practical difficulties imposed a limitation to two levels of grammaticality for English (one level excluding compound words, the other level allowing the building of compound words) and Hungarian, but six levels for the analysis of French verbs. The reason for this situation is the "open" character of natural languages, making it impossible to formulate a necessary and sufficient condition for a sentence to be well-formed. As is pointed out by Hockett (1970), the set of well-formed strings in a natural language should be both finite and infinite, a requirement that is impossible to fulfill in the framework of classical set theory; for a related discussion, see Savitch (1991, 1993). This can also be related to Chomsky's claim that a basic problem in linguistics is to find a grammar able to generate all and only the well-formed strings in a natural language. Chomsky's claim presupposes that natural languages have the status of formal languages, but not everyone agrees with this notion.
Even for programming languages, many authors reject the idea that well-formed strings constitute a formal language; see, for instance, the various articles in the collective volume Olivetti (1970), as well as Marcus (1979). Returning to constructions specific to natural languages, we have found the surprising fact that the language {a^n c b^m c b^m c a^n | n, m ≥ 1} cannot be generated by contextual grammars with a maximal global use of selectors. Observe the center-embedded structure of this language and the fact that it is an "easy" linear language. As Manaster Ramer (1994, 4) points out, "the Chomsky hierarchy is in fact highly misleading..., suggesting as it does, for example, that center-embedded structures (including mirror-images) are simpler (since they are context-free) than cross-serial structures (including reduplications). Yet we know that natural languages abound in reduplications but abhor mirror-images (Rounds, Manaster Ramer, and Friedman 1987), and it also appears that, other things being equal, cross-serial structures are easier to process than center-embedded ones." This point brings to mind Chomsky's arguments (1964) that center-embedded constructions can be handled by the grammar (the description of competence), but not by the performance system. Here competence itself is not able to cover the center-embedded construction. However, we have to mention the fact that other similar constructions can be covered by contextual grammars (with or without maximal use of selectors). This is the case with {w c mi(w) | w ∈ {a, b}*}, where mi(w) is the mirror image of w. Also the language {w mi(w) | w ∈ {a, b}*} can be generated when the maximal use of selectors is considered, but not without involving this feature. The difference between these last two languages suggests another point supporting the adequacy of contextual grammars: from the Chomsky hierarchy point of view, there is no difference between these languages; rather, their grammars are similar.
This is not the case in the contextual grammar framework, and this also corresponds to our intuition: having a marker (the central c here) is helpful; it is significantly easier to process a language when certain positions of its sentences are specified. (Further illustrations of this point can be found in Section 4.) We conclude that contextual grammars with a maximal use of selectors seem adequate from these points of view for modeling natural languages.
Contextual Grammars as Generative Models of Natural Languages
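The maximal-use-of-selectors mechanism the abstract describes can be illustrated with a small sketch. The grammar below for {a^n b^n c^n | n ≥ 1} (one of the non-context-free languages the text mentions) is a standard textbook-style construction, not code from the paper: axiom "abc", selectors are runs of b's used maximally, and the context ("a", "bc") wraps around the selector occurrence.

```python
import re

def step(word, selector_re, u, v):
    """One derivation step of an internal contextual grammar with
    maximal use of selectors: wrap the context (u, v) around a
    maximal occurrence of the selector."""
    results = set()
    # re.finditer with a greedy pattern like b+ yields exactly the
    # maximal runs, i.e. occurrences that cannot be extended.
    for m in re.finditer(selector_re, word):
        results.add(word[:m.start()] + u + m.group(0) + v + word[m.end():])
    return results

def generate(axioms, selector_re, u, v, steps):
    """All words derivable from the axioms in at most `steps` steps."""
    lang = set(axioms)
    frontier = set(axioms)
    for _ in range(steps):
        nxt = set()
        for w in frontier:
            nxt |= step(w, selector_re, u, v)
        frontier = nxt - lang
        lang |= nxt
    return lang

# Grammar for {a^n b^n c^n : n >= 1}: axiom "abc",
# selectors = b+ (used maximally, i.e. the whole b-run),
# context (u, v) = ("a", "bc").
L = generate({"abc"}, r"b+", "a", "bc", 3)
```

Without maximality, the context could be adjoined around a proper sub-run of b's and produce words outside the language; restricting to maximal selector occurrences keeps every derived word of the form a^n b^n c^n.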
d14288879
The paper describes the development of software for automatic grammatical analysis of unrestricted, unedited English text at the Unit for Computer Research on the English Language (UCREL) at the University of Lancaster. The work is currently funded by IBM and carried out in collaboration with colleagues at IBM UK (Winchester) and IBM Yorktown Heights. The paper will focus on the lexicon component of the word tagging system, the UCREL grammar, the databanks of parsed sentences, and the tools that have been written to support development of these components. This work has applications to speech technology, spelling correction, and other areas of natural language processing. Ultimately, our goal is to provide a language model using transition statistics to disambiguate alternative parses for a speech recognition device.
Lexicon and grammar in probabilistic tagging of written English
d5236680
The current paper proposes a novel approach to Spanish
Hybrid Approach to Zero Subject Resolution for multilingual MT -Spanish-to-Korean Cases
d1106545
Arbitrary n-ary relations (n ≥ 1) can in principle be realized through binary relations obtained by a reification process that introduces new individuals to which the additional arguments are linked via accessor properties. Modern ontologies which employ standards such as RDF and OWL have mostly obeyed this restriction, but have struggled with it nevertheless. Additional arguments for representing, e.g., valid time, grading, uncertainty, negation, trust, sentiment, or additional verb roles (for ditransitive verbs and adjuncts) are often better modeled in relation and information extraction systems as direct arguments of the relation instance, instead of being hidden in deep structures. In order to address non-binary relations directly, ontologies must be extended by Cartesian types, ultimately leading to an extension of the standard entailment rules for RDFS and OWL. In order to support ontology construction, ontology editors such as Protégé have to be adapted as well.
marriedTo(peter, lisa)
d62414073
Several computational linguistics techniques are applied to analyze a large corpus of Spanish sonnets from the 16th and 17th centuries. The analysis is focused on metrical and semantic aspects. First, we are developing a hybrid scansion system in order to extract and analyze rhythmical or metrical patterns. The possible metrical patterns of each verse are extracted with language-based rules. Then statistical rules are used to resolve ambiguities. Second, we are applying distributional semantic models in order to, on one hand, extract semantic regularities from sonnets, and on the other hand to group together sonnets and poets according to these semantic regularities. Besides these techniques, in this position paper we will show the objectives of the project and partial results.
A computational linguistic approach to Spanish Golden Age Sonnets: metrical and semantic aspects
d219309193
d27595165
We define infinitary count-invariance for categorial logic, extending count-invariance for multiplicatives (van Benthem, 1991) and additives and bracket modalities (Valentín et al., 2013) to include exponentials. This provides an effective tool for pruning proof search in categorial parsing/theorem-proving.
Count-Invariance Including Exponentials
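The multiplicative count-invariance this work extends (van Benthem, 1991) can be sketched as a simple balance check: for a sequent to be derivable, each atom must occur with the same net count on both sides, where implications subtract the count of the argument type. The sketch below covers only the basic multiplicative check, not the paper's infinitary extension to exponentials:

```python
# Formulas as nested tuples: an atom is a string like "np";
# ("/", B, A) encodes B/A, ("\\", A, B) encodes A\B,
# ("*", A, B) encodes the product A*B.
def count(f, atom):
    """Net count of `atom` in formula `f` (van Benthem's #-count)."""
    if isinstance(f, str):
        return 1 if f == atom else 0
    op, x, y = f
    if op == "*":
        return count(x, atom) + count(y, atom)
    if op == "/":    # B/A contributes #B - #A
        return count(x, atom) - count(y, atom)
    if op == "\\":   # A\B contributes #B - #A
        return count(y, atom) - count(x, atom)

def balanced(antecedent, succedent, atoms):
    """Necessary condition for derivability: counts balance per atom."""
    return all(sum(count(f, a) for f in antecedent) == count(succedent, a)
               for a in atoms)

# "john likes mary": np, (np\s)/np, np |- s passes the check.
ant = ["np", ("/", ("\\", "np", "s"), "np"), "np"]
```

In a parser or theorem prover, any goal sequent failing this check can be pruned immediately, which is exactly the use the abstract describes.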
d236999906
d250150518
We present a morpho-syntactically-annotated corpus of Western Sierra Puebla Nahuatl that conforms to the annotation guidelines of the Universal Dependencies project. We describe the sources of the texts that make up the corpus, the annotation process, and important annotation decisions made throughout the development of the corpus. As the first indigenous language of Mexico to be added to the Universal Dependencies project, this corpus offers a good opportunity to test and more clearly define annotation guidelines for the Mesoamerican linguistic area, spontaneous and elicited spoken data, and code-switching.
Universal Dependencies for Western Sierra Puebla Nahuatl
d20885063
In this paper, we describe a working system for interactive Japanese syntactic analysis. A human user can intervene during parsing to help the system to produce a correct parse tree. Human interactions are limited to the very simple task of indicating the modifiee (governor) of a phrase, and thus a non-expert native speaker can use the system. The user is free to give any information in any order, or even to provide no information. The system is being used as the source language analyzer of a Japanese-to-English machine translation system currently under development.
An Interactive Japanese Parser for Machine Translation
d881090
Creation titles, i.e. titles of literary and/or artistic works, comprise over 7% of named entities in Chinese documents. They are the fourth-largest class of named entities in Chinese, after personal names, location names, and organization names. However, they have rarely been mentioned or studied before. Chinese title recognition is challenging for the following reasons. There are few internal features and nearly no restrictions in the naming style of titles. Their lengths and structures are varied. Worst of all, they are generally composed of common words, so that they look like common fragments of sentences. In this paper, we integrate punctuation rules, a lexicon, and naïve Bayesian models to recognize creation titles in Chinese documents. This pioneer study achieves a precision of 0.510 and a recall of 0.685. The promising results can be integrated into Chinese segmentation, used to retrieve relevant information for specific titles, and so on.
Integrating Punctuation Rules and Naïve Bayesian Model for Chinese Creation Title Recognition
d11358655
This paper presents a LTAG-based analysis of gapping and VP ellipsis, which proposes that resolution of the elided material is part of a general disambiguation procedure, which is also responsible for resolution of underspecified representations of scope.
Semantic Interpretation of Unrealized Syntactic Material in LTAG
d2186770
WHITE PAPER ON SPOKEN LANGUAGE SYSTEMS
d1675316
Edinburgh University participated in the WMT 2009 shared task using the Moses phrase-based statistical machine translation decoder, building systems for all language pairs. The system configuration was identical for all language pairs (with a few additional components for the German-English language pairs). This paper describes the configuration of the systems, plus novel contributions to Moses including truecasing, more efficient decoding methods, and a framework to specify reordering constraints.
Edinburgh's Submission to all Tracks of the WMT2009 Shared Task with Reordering and Speed Improvements to Moses
d7348738
This paper presents a discriminative pruning method for n-gram language models in Chinese word segmentation. To reduce the size of the language model used in a Chinese word segmentation system, the importance of each bigram is computed in terms of a discriminative pruning criterion related to the performance loss caused by pruning that bigram. We then propose a step-by-step growing algorithm to build a language model of the desired size. Experimental results show that the discriminative pruning method leads to a much smaller model compared with the model pruned using the state-of-the-art method. At the same Chinese word segmentation F-measure, the number of bigrams in the model can be reduced by up to 90%. The correlation between language model perplexity and word segmentation performance is also discussed.
Discriminative Pruning of Language Models for Chinese Word Segmentation
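The step-by-step growing idea can be sketched in a few lines: rather than pruning a full model down, start from an empty bigram set and add bigrams in decreasing order of importance until the size budget is reached. The importance scores below are hypothetical stand-ins for the discriminative criterion (estimated segmentation performance loss) the paper actually computes:

```python
def grow_model(bigram_importance, budget):
    """Step-by-step growing: keep the `budget` most important bigrams.

    bigram_importance: dict mapping a bigram (tuple of two words) to
    its importance score, i.e. the estimated segmentation performance
    loss if that bigram were pruned from the model.
    """
    ranked = sorted(bigram_importance.items(), key=lambda kv: -kv[1])
    return {bigram for bigram, _ in ranked[:budget]}

# Hypothetical scores for segmentation-relevant bigrams.
imp = {
    ("研究", "生命"): 0.9,   # high loss if pruned: keep
    ("的", "研究"): 0.5,
    ("研究生", "命"): 0.1,   # low loss if pruned: drop first
}
```

Growing to a budget thus keeps exactly the bigrams whose removal would hurt segmentation most, which is the intuition behind choosing a discriminative criterion over perplexity-based pruning.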
d8926220
In this paper, we aim to investigate the coordination of interlocutors' behavior in different emotional segments. Conversational coordination between interlocutors is the tendency of speakers to predict and adjust to each other over the course of an ongoing conversation. In order to find such coordination, we investigated 1) lexical similarities between the speakers in each emotional segment, 2) correlation between the interlocutors using psycholinguistic features, such as linguistic styles, psychological processes, and personal concerns, among others, and 3) the relation of interlocutors' turn-taking behaviors, such as competitiveness. To study the degree of coordination in different emotional segments, we conducted our experiments using real dyadic conversations collected from call centers, in which the agent's emotional states include empathy and the customer's emotional states include anger and frustration. Our findings suggest that the most coordination occurs between the interlocutors inside anger segments, whereas little coordination was observed when the agent was empathic, even though an increase in the amount of non-competitive overlaps was observed. We found no significant difference between anger and frustration segments in terms of turn-taking behaviors. However, the length of pause significantly decreases in the segment preceding anger, whereas it increases in the segment preceding frustration.
How Interlocutors Coordinate with each other within Emotional Segments?
d15295970
The paper demonstrates how the generic parser of a minimally supervised information extraction framework can be adapted to a given task and domain for relation extraction (RE). For the experiments, a generic deep-linguistic parser was employed that works with a largely hand-crafted head-driven phrase structure grammar (HPSG) for English. The output of this parser is a list of n best parses selected and ranked by a MaxEnt parse-ranking component, which had been trained on a more or less generic HPSG treebank. It will be shown how the estimated confidence of RE rules learned from the n best parses can be exploited for parse reranking. The acquired reranking model improves the performance of RE in both training and test phases with the new first parses. The obtained significant boost in recall does not come from an overall gain in parsing performance but from an application-driven selection of parses that are best suited for the RE task. Since the readings best suited for successful rule extraction and instance extraction are often not the readings favored by a regular parser evaluation, generic parsing accuracy actually decreases. The novel method for task-specific parse reranking does not require any annotated data beyond the semantic seed, which is needed anyway for the RE task.
Minimally Supervised Domain-Adaptive Parse Reranking for Relation Extraction
d243128
This paper gives an overview of the MultiTal project, which aims to create a research infrastructure that ensures long-term distribution of NLP tools descriptions. The goal is to make NLP tools more accessible and usable to end-users of different disciplines. The infrastructure is built on a meta-data scheme modelling and standardising multilingual NLP tools documentation. The model is conceptualised using an OWL ontology. The formal representation of the ontology allows us to automatically generate organised and structured documentation in different languages for each represented tool.
The MultiTal NLP tool infrastructure
d2280364
This work introduces a new approach to the aspect-based sentiment analysis task. Its main purpose is to automatically assign the correct polarity to the aspect term in a phrase. It is a probabilistic automaton where each state consists of the nouns, adjectives, verbs and adverbs found in an annotated corpus. Each of them records the number of occurrences in the annotated corpus for the four required polarities (i.e. positive, negative, neutral and conflict). The transitions between states have also been taken into account. These values were used to assign the predicted polarity when a pattern was found in a sentence; if a pattern cannot be applied, the probabilities of the polarities between states were computed in order to predict the right polarity. The system achieved recall of around 66% and 57% for the restaurant and laptop domains respectively.
UMCC_DLSI_Prob: A Probabilistic Automata for Aspect Based Sentiment Analysis
d10762592
Despite progress in the development of computational means, human input is still critical in the production of consistent and usable aligned corpora and term banks. This is especially true for specialized corpora and term banks whose end-users are often professionals with very stringent requirements for accuracy, consistency and coverage. In the compilation of a high-quality Chinese-English legal glossary for the ELDoS project, we have identified a number of issues that make the role of human input critical for term alignment and extraction. They include the identification of low-frequency terms, paraphrastic expressions, discontinuous units, maintaining consistent term granularity, etc. Although manual intervention can more satisfactorily address these issues, steps must also be taken to address intra- and inter-annotator inconsistency.
Some Considerations on Guidelines for Bilingual Alignment and Terminology Extraction
d32474324
This year, project efforts are focused on reapplying and revising existing evaluation techniques for the purpose of evaluating English and Japanese information extraction systems in the joint ventures and microelectronics domains. This year's effort will culminate in the Fifth Message Understanding Conference (MUC-5) in August, 1993. Over 20 organizations (including Tipster-sponsored organizations) are planning to participate. Most of the non-Tipster organizations will be working only on the English joint ventures task or the English microelectronics task; however, two will be working on joint ventures in both languages, and one will be working on microelectronics in Japanese only.

RECENT RESULTS

MUC-4: The MUC-4 evaluation was conducted in FY92, the conference was held in June, 1992, and a proceedings was published in September. A single-value metric based on recall and precision was developed, and statistical significance tests were conducted. A blind test of seventeen systems was conducted using an improved version of the Latin American terrorism information extraction task originally defined for MUC-3. Higher levels of performance by nearly all veteran systems were achieved for MUC-4, but the top scores are still only moderate. Progress in controlling the tendency to generate spurious data was obvious, but the problem still exists, along with the problem of insufficient domain coverage and general world knowledge. The push to extend the systems has brought into focus the adverse effect that errors made in early stages of processing at the sentence and phrasal level have on suprasentential processing done in subsequent stages.

TIPSTER INTERIM EVALUATIONS: The scoring software used for MUC-4 was rewritten for the object-oriented Tipster template design. Accommodations were made for scoring Japanese. Alternative scoring procedures and new metrics were introduced. The Tipster English and Japanese systems were evaluated in September, 1992 on joint ventures, and they were evaluated in February, 1993 on both joint ventures and microelectronics. The results of these evaluations are being used to make decisions concerning the evaluation methodology to be used for the final Tipster evaluation (which will be the MUC-5 evaluation).

MUC-5: The call for participation in MUC-5 was issued in October, 1992, and participants began development in March, 1993 for the evaluation in July and the conference in August.

PLANS FOR THE YEAR
• Improve the evaluation methodology to be used for MUC-5 based on the experiences of the Tipster interim evaluations.
• Coordinate the MUC-5 evaluation and conduct the conference.
• Foster interest in resource-sharing among evaluation participants to support future R&D on information extraction and NLP in general.
INFORMATION EXTRACTION SYSTEM EVALUATION
d16931101
The paper presents an integral framework for multilingual lexical databases (henceforth MLLD) based on Compreno technology. It differs from the existing approaches to MLLD in the following aspects: 1) it is based on a universal semantic hierarchy (SH) of thesaurus type filled with language-specific lexicon; 2) the position in the SH generally determines semantic and syntactic model of a word; 3) this model proposes a suite of elaborate tools to determine universal and language-specific semantic and syntactic properties and deals efficiently with problems of cross-lingual lexical, semantic and syntactic asymmetry. Currently, it includes English, Russian, German, French and Chinese and proves to be a compatible MLLD for typologically different languages that can be used as a comprehensive lexical-semantic database for various NLP applications.
The Compreno Semantic Model as an Integral Framework for a Multilingual Lexical Database
d263609475
In the context of data visualization, as in other grounded settings, referents are created by the task the agents engage in and are salient because they belong to the shared physical setting. Our focus is on resolving references to visualizations on large displays; crucially, reference resolution is directly involved in the process of creating new entities, namely new visualizations. First, we developed a reference resolution model for a conversational assistant. We trained the assistant on controlled dialogues for data visualizations involving a single user. Second, we ported the conversational assistant, including its reference resolution model, to a different domain, supporting two users collaborating on a data exploration task. We explore how the new setting affects reference detection and resolution; we compare the performance in the controlled vs. unconstrained setting, and discuss the general lessons that we draw from this adaptation.
Reference Resolution and New Entities in Exploratory Data Visualization: From Controlled to Unconstrained Interactions with a Conversational Assistant
d264038837
Multi-document summarization is a difficult task in natural language processing, whose objective is to summarize the information contained in several documents. However, the source documents alone are often insufficient for producing a high-quality summary. We propose a model guided by an information retrieval system combined with a non-parametric memory for summary generation. This model retrieves relevant candidates from a database, then generates the summary taking the candidates into account via a copy mechanism, together with the source documents. The non-parametric memory is implemented with approximate nearest-neighbor search so that large databases can be queried. Our method is evaluated on the MultiXScience dataset, which is composed of scientific articles. Finally, we discuss our results and possible directions for future work.
Multi-Document Automatic Summarization Guided by a Database of Similar Summaries
d259370751
Dataset distillation aims to create a small dataset of informative synthetic samples to rapidly train neural networks that retain the performance of the original dataset. In this paper, we focus on constructing distilled few-shot datasets for natural language processing (NLP) tasks to fine-tune pre-trained transformers. Specifically, we propose to introduce attention labels, which can efficiently distill the knowledge from the original dataset and transfer it to the transformer models via attention probabilities. We evaluated our dataset distillation methods in four various NLP tasks and demonstrated that it is possible to create distilled few-shot datasets with the attention labels, yielding impressive performances for fine-tuning BERT. Specifically, in AGNews, a four-class news classification task, our distilled few-shot dataset achieved up to 93.2% accuracy, which is 98.5% performance of the original dataset even with only one sample per class and only one gradient step.
Dataset Distillation with Attention Labels for Fine-tuning BERT
d256460906
Nowadays in the finance world, there is a global trend for responsible investing, linked with a growing need for developing automated methods for analysing Environmental, Social and Governance (ESG) related elements in financial texts. In this work we propose a solution to the FinSim4-ESG task, which consists of classifying sentences from financial reports as sustainable or unsustainable. We propose a novel knowledge-based latent heterogeneous representation that relies on knowledge from taxonomies, knowledge graphs and multiple contemporary document representations. We hypothesize that an approach based on a combination of knowledge and document representations can introduce significant improvement over conventional document representation approaches. We perform ensembling, both at the classifier level and at the representation level (late-fusion and early-fusion). The proposed approaches achieve a competitive accuracy of 89% and are 5.85% behind the best score in the shared task.
Knowledge informed sustainability detection from short financial texts
d58094
In recent work we have presented a formal framework for linguistic annotation based on labeled acyclic digraphs. These 'annotation graphs' offer a simple yet powerful method for representing complex annotation structures incorporating hierarchy and overlap. Here, we motivate and illustrate our approach using discourse-level annotations of text and speech data drawn from the CALLHOME, COCONUT, MUC-7, DAMSL and TRAINS annotation schemes. With the help of domain specialists, we have constructed a hybrid multi-level annotation for a fragment of the Boston University Radio Speech Corpus which includes the following levels: segment, word, breath, ToBI, Tilt, Treebank, coreference and named entity. We show how annotation graphs can represent hybrid multi-level structures which derive from a diverse set of file formats. We also show how the approach facilitates substantive comparison of multiple annotations of a single signal based on different theoretical models. The discussion shows how annotation graphs open the door to wide-ranging integration of tools, formats and corpora.
Annotation Graphs as a Framework for Multidimensional Linguistic Data Analysis
d18933807
The Neural Bag-of-Words (NBOW) model performs classification with an average of the input word vectors and achieves an impressive performance. While the NBOW model learns word vectors targeted for the classification task, it does not explicitly model which words are important for a given task. In this paper we propose an improved NBOW model with the ability to learn task-specific word importance weights. The word importance weights are learned by introducing a new weighted sum composition of the word vectors. With experiments on standard topic and sentiment classification tasks, we show that (a) our proposed model learns meaningful word importance for a given task and (b) our model gives the best accuracies among the BOW approaches. We also show that the learned word importance weights are comparable to tf-idf based word weights when used as features in a BOW SVM classifier.
Learning Word Importance with the Neural Bag-of-Words Model
d11895984
The recent advances in information retrieval (IR) in this collection of ten original papers reflect the wide range of research topics developed at the Center for Intelligent Information Retrieval (CIIR) at the University of Massachusetts, Amherst. W. Bruce Croft, the Director of CIIR and the editor of this volume, presents in the preface both the history and the impressive research track of the center. The preface also lists the topics covered in the collection, reflecting the fact that the majority of the papers deal with research in traditional IR or with architecture and implementation issues. Only one of the papers tackles new areas, namely the topic detection and tracking problem. Surprisingly, none of the papers address one of the most exciting new IR tasks: the open-domain textual question-answering task. As one might expect, the papers in this volume are of varying quality. However, both the IR researcher and the computational linguist will find at least two of the papers outstanding. Warren Greiff's paper "The use of exploratory data analysis in information retrieval research" reports on a new line of research that uncovers statistical regularities in novel ways. By analyzing data using the notion of weight of evidence, Greiff obtains a new formulation of the inverse document frequency (IDF) that generates results of the same quality as those obtained using traditional IDF measures. Moreover, this data-driven approach is extended to other IR measures, such as term frequency and document length. This approach, comprising four incremental models, generates a ranking formula that is shown to perform similarly to the INQUERY system. Greiff's data-driven model has special promise as it allows natural extensions based on additional sources of evidence such as thesaurus terms or phrases. 
This technique is an ideal research vehicle for traditional and modern IR. The second paper that presents outstanding new research is "Topic detection and tracking: Event clustering as a basis for first story detection" by Ron Papka and James Allan. This paper captivates the reader by presenting an overview of the topic detection and tracking (TDT) problem, whose purpose is to organize broadcast news stories by the real-world events that they discuss. The paper describes the research problems considered in all three phases of the TDT and focuses on the solutions developed at CIIR for one of the problems, namely the detection problem--that is, identifying when new topics have appeared in the news stream. The approach discussed is both practical and technically interesting as it presents both new algorithms and modifications of existing IR techniques for the TDT problem.
Book Reviews Advances in Information Retrieval: Recent Research from the Center for Intelligent Information Retrieval
d2254454
Interactive spoken dialog provides many new challenges for spoken language systems. One of the most critical is the prevalence of speech repairs. This paper presents an algorithm that detects and corrects speech repairs based on finding the repair pattern. The repair pattern is built by finding word matches and word replacements, and identifying fragments and editing terms. Rather than using a set of prebuilt templates, we build the pattern on the fly. In a fair test, our method, when combined with a statistical model to filter possible repairs, was successful at detecting and correcting 80% of the repairs, without using prosodic information or a parser.
Detecting and Correcting Speech Repairs
d261768583
Automatic Evaluation (AE) and Response Selection (RS) models assign quality scores to various candidate responses and rank them in conversational setups. Prior response ranking research compares various models' performance on synthetically generated test sets. In this work, we investigate the performance of model-based reference-free AE and RS models on our constructed response ranking datasets that mirror real-case scenarios of ranking candidates during inference time. The metrics' unsatisfying performance can be interpreted as their low generalizability over more pragmatic conversational domains such as human-chatbot dialogs. To alleviate this issue we propose a novel RS model called MERCY that simulates human behavior in selecting the best candidate by taking into account distinct candidates concurrently and learns to rank them. In addition, MERCY leverages natural language feedback as another component to help the ranking task by explaining why each candidate response is relevant/irrelevant to the dialog context. This feedback is generated by prompting large language models in a few-shot setup. Our experiments show that MERCY outperforms baselines on the response ranking task in our curated realistic datasets.
MERCY: Multiple Response Ranking Concurrently in Realistic Open-Domain Conversational Systems
d262738072
Minimalist grammars have been criticized for their inability to analyze successive cyclic movement and multiple wh-movement in a manner that is faithful to the Minimalist literature. Persistent features have been proposed in the literature as a potential remedy (Stabler, 2011; Laszakovits, 2018). We show that not all persistent features are alike. The persistent features involved in multiple wh-movement do not increase subregular complexity, making this phenomenon appear very natural from the perspective of MGs. The persistent features in successive-cyclic movement, on the other hand, change the subregular nature of movement, favoring an alternative treatment along the lines of Kobele (2006).
Multiple Wh-Movement is not Special: The Subregular Complexity of Persistent Features in Minimalist Grammars
d659322
In this paper we describe a modular system architecture for distributed parse annotation using interactive correction. This involves interactively adding constraints to an existing parse until the returned parse is correct. Using a mixed initiative approach, human annotators interact live with distributed ccg parser servers through an annotation gui. The examples presented to each annotator are selected by an active learning framework to maximise the value of the annotated corpus for machine learners. We report on an initial implementation based on a distributed workflow architecture.
A Distributed Architecture for Interactive Parse Annotation
d7921428
Intelligent selection of training data has proven a successful technique to simultaneously increase training efficiency and translation performance for phrase-based machine translation (PBMT). With the recent increase in popularity of neural machine translation (NMT), we explore in this paper to what extent and how NMT can also benefit from data selection. While state-of-the-art data selection (Axelrod et al., 2011) consistently performs well for PBMT, we show that gains are substantially lower for NMT. Next, we introduce dynamic data selection for NMT, a method in which we vary the selected subset of training data between different training epochs. Our experiments show that the best results are achieved when applying a technique we call gradual fine-tuning, with improvements up to +2.6 BLEU over the original data selection approach and up to +3.1 BLEU over a general baseline.
Dynamic Data Selection for Neural Machine Translation
d5939791
We explore the application of memory-based learning to morphological analysis and part-of-speech tagging of written Arabic, based on data from the Arabic Treebank. Morphological analysis - the construction of all possible analyses of isolated unvoweled wordforms - is performed as a letter-by-letter operation prediction task, where the operation encodes segmentation, part-of-speech, character changes, and vocalization. Part-of-speech tagging is carried out by a bi-modular tagger that has a subtagger for known words and one for unknown words. We report on the performance of the morphological analyzer and part-of-speech tagger. We observe that the tagger, which has an accuracy of 91.9% on new data, can be used to select the appropriate morphological analysis of words in context at a precision of 64.0 and a recall of 89.7.
Memory-based morphological analysis generation and part-of-speech tagging of Arabic
d21712878
In this paper, we present evaluation corpora covering four genres for four language pairs that we harvested from the web in an automated fashion. We use these multi-genre benchmarks to evaluate the impact of genre differences on machine translation (MT). We observe that BLEU score differences between genres can be large and that, for all genres and all language pairs, translation quality improves when using four genre-optimized systems rather than a single genre-agnostic system. Finally, we train and use genre classifiers to route test documents to the most appropriate genre systems. The results of these experiments show that our multi-genre benchmarks can serve to advance research on text genre adaptation for MT.
Evaluation of Machine Translation Performance Across Multiple Genres and Languages
d21720656
This paper presents an expressive French audiobooks corpus containing eighty-seven hours of good audio quality speech, recorded by a single amateur speaker reading audiobooks of different literary genres. This corpus departs from existing corpora collected from audiobooks since they usually provide a few hours of mono-genre and multi-speaker speech. The motivation for setting up such a corpus is to explore expressiveness from different perspectives, such as discourse styles, prosody, and pronunciation, and using different levels of analysis (syllable, prosodic and lexical words, prosodic and syntactic phrases, utterance or paragraph). This will allow developing models to better control expressiveness in speech synthesis, and to adapt pronunciation and prosody to specific discourse settings (changes in discourse perspectives, indirect vs. direct styles, etc.). To this end, the corpus has been annotated automatically and provides information such as phone labels, phone boundaries, syllables, words and morpho-syntactic tagging. Moreover, a significant part of the corpus has also been annotated manually to encode direct/indirect speech information and emotional content. The corpus is already usable for studies on prosody and TTS purposes and is available to the community.
SynPaFlex-Corpus: An Expressive French Audiobooks Corpus Dedicated to Expressive Speech Synthesis
d1594281
We propose a number of refinements to the canonical approach to interactive translation prediction. By adopting more permissive matching criteria, placing emphasis on matching the last word of the user prefix, and handling predictions for partially typed words, we observe gains in both word prediction accuracy (+5.4%) and letter prediction accuracy (+9.3%).
Refinements to Interactive Translation Prediction Based on Search Graphs
d53046735
In a world of proliferating data, the ability to rapidly summarize text is growing in importance. Automatic summarization of text can be thought of as a sequence to sequence problem. Another area of natural language processing that solves a sequence to sequence problem is machine translation, which is rapidly evolving due to the development of attention-based encoder-decoder networks. This work applies these modern techniques to abstractive summarization. We perform analysis on various attention mechanisms for summarization with the goal of developing an approach and architecture aimed at improving the state of the art. In particular, we modify and optimize a translation model with self-attention for generating abstractive sentence summaries. The effectiveness of this base model along with attention variants is compared and analyzed in the context of standardized evaluation sets and test metrics. However, we show that these metrics are limited in their ability to effectively score abstractive summaries, and propose a new approach based on the intuition that an abstractive model requires an abstractive evaluation.
Abstractive Summarization Using Attentive Neural Techniques
d14131077
Most existing approaches for zero pronoun resolution rely heavily on annotated data, which is often released by shared task organizers. Therefore, the lack of annotated data becomes a major obstacle in the progress of the zero pronoun resolution task. Also, it is expensive to spend manpower on labeling the data for better performance. To alleviate the problem above, in this paper, we propose a simple but novel approach to automatically generate large-scale pseudo training data for zero pronoun resolution. Furthermore, we successfully transfer the cloze-style reading comprehension neural network model into the zero pronoun resolution task and propose a two-step training mechanism to overcome the gap between the pseudo training data and the real one. Experimental results show that the proposed approach significantly outperforms the state-of-the-art systems with an absolute improvement of 3.1% F-score on OntoNotes 5.0 data.
Generating and Exploiting Large-scale Pseudo Training Data for Zero Pronoun Resolution
d128380
The paper describes the results of an empirical study of integrating bigram collocations and similarities between them and unigrams into topic models. First of all, we propose a novel algorithm PLSA-SIM that is a modification of the original algorithm PLSA. It incorporates bigrams and maintains relationships between unigrams and bigrams based on their component structure. Then we analyze a variety of word association measures in order to integrate top-ranked bigrams into topic models. All experiments were conducted on four text collections of different domains and languages. The experiments distinguish a subgroup of tested measures that produce top-ranked bigrams, which demonstrate significant improvement of topic models quality for all collections, when integrated into PLSA-SIM algorithm.
Topic Models: Accounting Component Structure of Bigrams
d262755580
We present a large parallel corpus of texts published by the United Nations Organization, which we exploit for the creation of phrase-based statistical machine translation (SMT) systems for new language pairs. We present a setup where phrase tables for these language pairs are used for translation between languages for which parallel corpora of sufficient size are so far not available. We give some preliminary results for this novel application of SMT and discuss further refinements.
Parallel Corpora and Phrase-Based Statistical Machine Translation for New Language Pairs via Multiple Intermediaries
d13748720
Interacting with relational databases through natural language helps users of any background easily query and analyze a vast amount of data. This requires a system that understands users' questions and converts them to SQL queries automatically. In this paper we present a novel approach, TYPESQL, which views this problem as a slot filling task. Additionally, TYPESQL utilizes type information to better understand rare entities and numbers in natural language questions. We test this idea on the WikiSQL dataset and outperform the prior state-of-the-art by 5.5% in much less time. We also show that accessing the content of databases can significantly improve the performance when users' queries are not well-formed. TYPESQL gets 82.6% accuracy, a 17.5% absolute improvement compared to the previous content-sensitive model.
TypeSQL: Knowledge-based Type-Aware Neural Text-to-SQL Generation
d243979264
The amount of information available online can be overwhelming for users to digest, especially when dealing with other users' comments when making a decision about buying a product or service. In this context, opinion summarization systems are of great value, extracting important information from the texts and presenting it to the user in a more understandable manner. It is also known that the usage of semantic representations can benefit the quality of the generated summaries. This paper aims at developing opinion summarization methods based on Abstract Meaning Representation of texts in the Brazilian Portuguese language. Four different methods have been investigated, alongside some literature approaches. The results show that a Machine Learning-based method produced summaries of higher quality, outperforming other literature techniques on manually constructed semantic graphs. We also show that using parsed graphs over manually annotated ones harmed the output. Finally, an analysis of how important different types of information are for the summarization process suggests that using Sentiment Analysis features did not improve summary quality.
Semantic-Based Opinion Summarization
d118680003
Grammatical Error Correction (GEC) has been recently modeled using the sequence-to-sequence framework. However, unlike sequence transduction problems such as machine translation, GEC suffers from the lack of plentiful parallel data. We describe two approaches for generating large parallel datasets for GEC using publicly available Wikipedia data. The first method extracts source-target pairs from Wikipedia edit histories with minimal filtration heuristics, while the second method introduces noise into Wikipedia sentences via round-trip translation through bridge languages. Both strategies yield similar sized parallel corpora containing around 4B tokens. We employ an iterative decoding strategy that is tailored to the loosely supervised nature of our constructed corpora. We demonstrate that neural GEC models trained using either type of corpora give similar performance. Fine-tuning these models on the Lang-8 corpus and ensembling allows us to surpass the state of the art on both the CoNLL-2014 benchmark and the JFLEG task. We provide systematic analysis that compares the two approaches to data generation and highlights the effectiveness of ensembling.
Corpora Generation for Grammatical Error Correction
d59848390
TREC: Experiment and Evaluation in Information Retrieval
d5420466
Over 50 million scholarly articles have been published: they constitute a unique repository of knowledge. In particular, one may infer from them relations between scientific concepts, such as synonyms and hyponyms. Artificial neural networks have been recently explored for relation extraction. In this work, we continue this line of work and present a system based on a convolutional neural network to extract relations. Our model ranked first in the SemEval-2017 task 10 (ScienceIE) for relation extraction in scientific articles (subtask C).
MIT at SemEval-2017 Task 10: Relation Extraction with Convolutional Neural Networks
d1451
In this paper we describe a biography summarization system using sentence classification and ideas from information retrieval. Although the individual techniques are not new, assembling and applying them to generate multi-document biographies is new. Our system was evaluated in DUC2004. It is among the top performers in task 5 - short summaries focused by person questions.
Multi-document Biography Summarization
d201698399
We present a novel approach to answering sequential questions based on structured objects such as knowledge bases or tables without using a logical form as an intermediate representation. We encode tables as graphs using a graph neural network model based on the Transformer architecture. The answers are then selected from the encoded graph using a pointer network. This model is appropriate for processing conversations around structured data, where the attention mechanism that selects the answers to a question can also be used to resolve conversational references. We demonstrate the validity of this approach with competitive results on the Sequential Question Answering (SQA) task (Iyyer et al., 2017).
Answering Conversational Questions on Structured Data without Logical Forms
d10202222
There have been several efforts to extend distributional semantics beyond individual words, to measure the similarity of word pairs, phrases, and sentences (briefly, tuples; ordered sets of words, contiguous or noncontiguous). One way to extend beyond words is to compare two tuples using a function that combines pairwise similarities between the component words in the tuples. A strength of this approach is that it works with both relational similarity (analogy) and compositional similarity (paraphrase). However, past work required hand-coding the combination function for different tasks. The main contribution of this paper is that combination functions are generated by supervised learning. We achieve state-of-the-art results in measuring relational similarity between word pairs (SAT analogies and SemEval 2012 Task 2) and measuring compositional similarity between noun-modifier phrases and unigrams (multiple-choice paraphrase questions).
Distributional Semantics Beyond Words: Supervised Learning of Analogy and Paraphrase
d254017551
Neural machine translation (NMT) is often criticized for failures that happen without awareness. The lack of competency awareness makes NMT untrustworthy. This is in sharp contrast to human translators who give feedback or conduct further investigations whenever they are in doubt about predictions. To fill this gap, we propose a novel competency-aware NMT by extending conventional NMT with a self-estimator, offering abilities to translate a source sentence and estimate its competency. The self-estimator encodes the information of the decoding procedure and then examines whether it can reconstruct the original semantics of the source sentence. Experimental results on four translation tasks demonstrate that the proposed method not only carries out translation tasks intact but also delivers outstanding performance on quality estimation. Without depending on any reference or annotated data typically required by state-of-the-art metric and quality estimation methods, our model yields an even higher correlation with human quality judgments than a variety of aforementioned methods, such as BLEURT, COMET, and BERTScore. Quantitative and qualitative analyses show better robustness of competency awareness in our model.
Competency-Aware Neural Machine Translation: Can Machine Translation Know its Own Translation Quality?
d9007028
In order to effectively access the rapidly increasing range of media content available in the home, new kinds of more natural interfaces are needed. In this paper, we explore the application of multimodal interface technologies to searching and browsing a database of movies. The resulting system allows users to access movies using speech, pen, remote control, and dynamic combinations of these modalities. An experimental evaluation, with more than 40 users, is presented contrasting two variants of the system: one combining speech with traditional remote control input and a second where the user has a tablet display supporting speech and pen input.
A Multimodal Interface for Access to Content in the Home
d355852
A challenge for search systems is to detect not only when an item is relevant to the user's information need, but also when it contains something new which the user has not seen before. In the TREC novelty track, the task was to highlight sentences containing relevant and new information in a short, topical document stream. This is analogous to highlighting key parts of a document for another person to read, and this kind of output can be useful as input to a summarization system. Search topics involved both news events and reported opinions on hot-button subjects. When people performed this task, they tended to select small blocks of consecutive sentences, whereas current systems identified many relevant and novel passages. We also found that opinions are much harder to track than events.
Novelty Detection: The TREC Experience
d648623
Metaphor is highly frequent in language, which makes its computational processing indispensable for real-world NLP applications addressing semantic tasks. Previous approaches to metaphor modeling rely on task-specific hand-coded knowledge and operate on a limited domain or a subset of phenomena. We present the first integrated open-domain statistical model of metaphor processing in unrestricted text. Our method first identifies metaphorical expressions in running text and then paraphrases them with their literal paraphrases. Such a text-to-text model of metaphor interpretation is compatible with other NLP applications that can benefit from metaphor resolution. Our approach is minimally supervised, relies on the state-of-the-art parsing and lexical acquisition technologies (distributional clustering and selectional preference induction), and operates with a high accuracy.
Statistical Metaphor Processing
d10586665
Recognition of social signals, from human facial expressions or prosody of speech, is a popular research topic in human-robot interaction studies. There is also a long line of research in the spoken dialogue community that investigates user satisfaction in relation to dialogue characteristics. However, very little research relates a combination of multimodal social signals and language features detected during spoken face-to-face human-robot interaction to the resulting user perception of a robot. In this paper we show how different emotional facial expressions of human users, in combination with prosodic characteristics of human speech and features of human-robot dialogue, correlate with users' impressions of the robot after a conversation. We find that happiness in the user's recognised facial expression strongly correlates with likeability of a robot, while dialogue-related features (such as number of human turns or number of sentences per robot utterance) correlate with perceiving a robot as intelligent. In addition, we show that facial expression, emotional features, and prosody are better predictors of human ratings related to perceived robot likeability and anthropomorphism, while linguistic and non-linguistic features more often predict perceived robot intelligence and interpretability. As such, these characteristics may in future be used as an online reward signal for in-situ Reinforcement Learning-based adaptive human-robot dialogue systems.
Sympathy Begins with a Smile, Intelligence Begins with a Word: Use of Multimodal Features in Spoken Human-Robot Interaction
d253098734
Video corpus moment retrieval (VCMR) is the task to retrieve the most relevant video moment from a large video corpus using a natural language query. For narrative videos, e.g., dramas or movies, the holistic understanding of temporal dynamics and multimodal reasoning is crucial. Previous works have shown promising results; however, they relied on the expensive query annotations for VCMR, i.e., the corresponding moment intervals. To overcome this problem, we propose a self-supervised learning framework: Modal-specific Pseudo Query Generation Network (MPGN). First, MPGN selects candidate temporal moments via subtitle-based moment sampling. Then, it generates pseudo queries exploiting both visual and textual information from the selected temporal moments. Through the multimodal information in the pseudo queries, we show that MPGN successfully learns to localize the video corpus moment without any explicit annotation. We validate the effectiveness of MPGN on the TVR dataset, showing competitive results compared with both supervised models and unsupervised setting models.
Modal-specific Pseudo Query Generation for Video Corpus Moment Retrieval
d2882092
Non-linear models have recently received a lot of attention as people are starting to discover the power of statistical and embedding features. However, tree-based models are seldom studied in the context of structured learning despite their recent success on various classification and ranking tasks. In this paper, we propose S-MART, a tree-based structured learning framework based on multiple additive regression trees. S-MART is especially suitable for handling tasks with dense features, and can be used to learn many different structures under various loss functions. We apply S-MART to the task of tweet entity linking, a core component of tweet information extraction, which aims to identify and link name mentions to entities in a knowledge base. A novel inference algorithm is proposed to handle the special structure of the task. The experimental results show that S-MART significantly outperforms state-of-the-art tweet entity linking systems.
S-MART: Novel Tree-based Structured Learning Algorithms Applied to Tweet Entity Linking
d2554437
We describe a novel method that extracts paraphrases from a bitext, for both the source and target languages. In order to reduce the search space, we decompose the phrase table into sub-phrase-tables and construct separate clusters for source and target phrases. We convert the clusters into graphs, add smoothing/syntactic-information-carrier vertices, and compute the similarity between phrases with a random walk-based measure, the commute time. The resulting phrase-paraphrase probabilities are built upon the conversion of the commute times into artificial co-occurrence counts with a novel technique. The co-occurrence count distribution belongs to the power-law family.
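The commute-time measure named in the abstract can be illustrated with a tiny random-walk computation. This is a hedged pure-Python sketch, not the paper's implementation: the example graph and the fixed-point solver for expected hitting times are illustrative assumptions.

```python
# Commute time between two vertices = expected round-trip length of a random
# walk u -> v -> u.  Hitting times solve h(v) = 1 + mean over neighbours h(u),
# with h(target) = 0; we find them by simple fixed-point iteration.

def hitting_times(adj, target, iters=500):
    """Expected number of steps for a random walk to first reach `target`."""
    h = {v: 0.0 for v in adj}
    for _ in range(iters):
        for v in adj:
            if v != target:
                h[v] = 1.0 + sum(h[u] for u in adj[v]) / len(adj[v])
    return h

def commute_time(adj, u, v):
    """Expected round-trip length u -> v -> u."""
    return hitting_times(adj, v)[u] + hitting_times(adj, u)[v]

# Toy path graph a - b - c with unit-weight edges.
graph = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}
print(commute_time(graph, "a", "b"))  # 4.0 (adjacent vertices)
print(commute_time(graph, "a", "c"))  # 8.0 (endpoints of the path)
```

Smaller commute times mark phrases whose graph neighbourhoods overlap heavily, which is the intuition behind using them as a paraphrase-similarity signal.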
Power-Law Distributions for Paraphrases Extracted from Bilingual Corpora
d574574
This paper reports on a writing style analysis of hyperpartisan (i.e., extremely one-sided) news in connection to fake news. It presents a large corpus of 1,627 articles that were manually fact-checked by professional journalists from BuzzFeed. The articles originated from 9 well-known political publishers, 3 each from the mainstream, the hyperpartisan left wing, and the hyperpartisan right wing. In sum, the corpus contains 299 fake news articles, 97% of which originated from hyperpartisan publishers. We propose and demonstrate a new way of assessing style similarity between text categories via Unmasking, a meta-learning approach originally devised for authorship verification, revealing that the styles of left-wing and right-wing news have a lot more in common than either has with the mainstream. Furthermore, we show that hyperpartisan news can be discriminated well by its style from the mainstream (F1 = 0.78), as can satire from both (F1 = 0.81). Unsurprisingly, style-based fake news detection does not come up to scratch (F1 = 0.46). Nevertheless, the former results are important for implementing pre-screening for fake news detectors.
A Stylometric Inquiry into Hyperpartisan and Fake News
d196174735
With social media becoming increasingly popular, and with lots of news and real-time events reported on it, developing automated question answering systems is critical to the effectiveness of many applications that rely on real-time knowledge. While previous datasets have concentrated on question answering (QA) for formal text like news and Wikipedia, we present the first large-scale dataset for QA over social media data. To ensure that the tweets we collected are useful, we only gather tweets used by journalists to write news articles. We then ask human annotators to write questions and answers upon these tweets. Unlike other QA datasets like SQuAD, in which the answers are extractive, we allow the answers to be abstractive. We show that two recently proposed neural models that perform well on formal texts are limited in their performance when applied to our dataset. In addition, even the fine-tuned BERT model still lags behind human performance by a large margin. Our results thus point to the need for improved QA systems targeting social media text.
TWEETQA: A Social Media Focused Question Answering Dataset
d259833814
Prompting is a widely adopted technique for fine-tuning large language models. Recent research by Scao and Rush (2021) has demonstrated its effectiveness in improving few-shot learning performance compared to vanilla fine-tuning, and also showed that prompting and vanilla fine-tuning achieve similar performance in the high data regime (roughly >2000 samples). This paper investigates the impact of imbalanced data distribution on prompting. Through rigorous experimentation on diverse datasets and models, our findings reveal that even in high data regimes, prompting consistently outperforms vanilla fine-tuning, exhibiting an average performance improvement of 2-5%.
Prompting language models improves performance in imbalanced setting
d261349536
As a basic task in the field of natural language processing, text classification plays a crucial role in case classification in the field of telecom Internet fraud, and has great significance and far-reaching impact for intelligent case analysis. The purpose of this task is to classify the given case description text, which contains the overall description of the case after desensitization. We first used the ERNIE pre-trained model, fine-tuned on the case content, to obtain the category of each case, and then used pseudo-labels and model fusion to improve the F1 value. Finally, we won second place in the CCL23-Eval Task 6 telecom Internet fraud case classification evaluation. The evaluation metric for this task is the F1 value; our system achieved 0.8628, a strong detection result.
System Report for CCL23-Eval Task 6: Classification of Telecom Internet Fraud Cases Based on Deep Learning
d501662
This paper presents an abstract data model for linguistic annotations and its implementation using XML, RDF and related standards, and outlines the work of a newly formed committee of the International Standards Organization (ISO), ISO/TC 37/SC 4 Language Resource Management, which will use this work as its starting point. The primary motive for presenting the latter is to solicit the participation of members of the research community to contribute to the work of the committee.
Standards for Language Resources
d57916
Language models for speech recognition typically use a probability model of the form
Prefix Probabilities from Stochastic Tree Adjoining Grammars
d256808330
Multilingual machine translation (MMT) benefits from cross-lingual transfer but is a challenging multitask optimization problem. This is partly because there is no clear framework to systematically learn language-specific parameters. Self-supervised learning (SSL) approaches that leverage large quantities of monolingual data (where parallel data is unavailable) have shown promise by improving translation performance as complementary tasks to the MMT task. However, jointly optimizing SSL and MMT tasks is even more challenging. In this work, we first investigate how to utilize intra-distillation to learn more language-specific parameters and then show the importance of these language-specific parameters. Next, we propose a novel but simple SSL task, concurrent denoising, that co-trains with the MMT task by concurrently denoising monolingual data on both the encoder and decoder. Finally, we apply intra-distillation to this co-training approach. Combining these two approaches significantly improves MMT performance, outperforming three state-of-the-art SSL methods by a large margin, e.g., 11.3% and 3.7% improvement on an 8-language and a 15-language benchmark compared with MASS, respectively.
Language-Aware Multilingual Machine Translation with Self-Supervised Learning
d219907063
Automatic spoken language identification systems suffer from a significant performance drop when the acoustic characteristics of the test signal differ strongly from those of the training data. In this paper, we study the unsupervised domain adaptation of a system trained on conversational telephone speech to radio transmission channels. We present a regularization method for a neural network which consists of adding to the cost function a term that measures the discrepancy between the two domains. Based on experiments on the OpenSAD15 corpus, we select the Maximum Mean Discrepancy loss to perform this measurement. This approach is then applied to a state-of-the-art language identification system based on x-vectors.
On the RATS corpus, for seven of the eight studied radio channels, the approach achieves, without using any labelled data from the target domain, a better performance than a system trained in a supervised way with labelled data from this domain. Keywords: unsupervised domain adaptation, language identification, regularization, maximum mean discrepancy, robustness.
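The discrepancy term described in this abstract can be illustrated with a minimal Maximum Mean Discrepancy estimate. This is a hedged sketch: the RBF kernel, its bandwidth, the biased (V-statistic) estimator, and the toy one-dimensional "features" are illustrative choices, not the paper's exact setup.

```python
import math

def rbf(x, y, gamma=1.0):
    """RBF kernel between two scalar features."""
    return math.exp(-gamma * (x - y) ** 2)

def mmd2(xs, ys, gamma=1.0):
    """Biased estimate of squared MMD between two samples of features."""
    kxx = sum(rbf(a, b, gamma) for a in xs for b in xs) / len(xs) ** 2
    kyy = sum(rbf(a, b, gamma) for a in ys for b in ys) / len(ys) ** 2
    kxy = sum(rbf(a, b, gamma) for a in xs for b in ys) / (len(xs) * len(ys))
    return kxx + kyy - 2.0 * kxy

same = mmd2([0.0, 0.1, 0.2], [0.05, 0.15, 0.25])  # telephone-like vs telephone-like
far  = mmd2([0.0, 0.1, 0.2], [3.0, 3.1, 3.2])     # telephone-like vs radio-like
print(same < far)  # True: mismatched domains incur a larger penalty
```

Added to a network's training loss, such a term pushes source- and target-domain feature distributions together, which is the regularization idea the abstract describes.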
Unsupervised Domain Adaptation for Language Identification by Regularization of a Neural Network
d115523027
Most research on semantic role labeling (SRL) has been focused on training and evaluating on the same corpus in order to develop the technology. This strategy, while appropriate for initiating research, can lead to over-training to the particular corpus. The work presented in this paper focuses on analyzing the robustness of an SRL system when trained on one genre of data and used to label a different genre. Our state-of-the-art semantic role labeling system, while performing well on WSJ test data, shows significant performance degradation when applied to data from the Brown corpus. We present a series of experiments designed to investigate the source of this lack of portability. These experiments are based on comparisons of performance using PropBanked WSJ data and PropBanked Brown corpus data. Our results indicate that while syntactic parses and argument identification port relatively well to a new genre, argument classification does not. Our analysis of the reasons for this is presented and generally points to the more lexical/semantic features dominating the classification task and general structural features dominating the argument identification task.
Towards Robust Semantic Role Labeling
d246430379
Script diversity presents a challenge to Multilingual Language Models (MLLMs) by reducing lexical overlap among closely related languages. Therefore, transliterating closely related languages that use different writing scripts to a common script may improve the downstream task performance of MLLMs. We empirically measure the effect of transliteration on MLLMs in this context. We specifically focus on the Indic languages, which have the highest script diversity in the world, and we evaluate our models on the IndicGLUE benchmark. We perform the Mann-Whitney U test to rigorously verify whether the effect of transliteration is significant or not. We find that transliteration benefits the low-resource languages without negatively affecting the comparatively high-resource languages. We also measure the cross-lingual representation similarity of the models using centered kernel alignment on parallel sentences from the FLORES-101 dataset. We find that for parallel sentences across different languages, the transliteration-based model learns sentence representations that are more similar.
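The centered kernel alignment (CKA) measure named in this abstract can be sketched, for the linear-kernel case, as follows. This is a hedged pure-Python sketch over toy two-dimensional "sentence representations"; real usage would compare model activations on parallel sentences.

```python
def center(X):
    """Subtract each column's mean (rows = examples, columns = features)."""
    n, d = len(X), len(X[0])
    mu = [sum(row[j] for row in X) / n for j in range(d)]
    return [[row[j] - mu[j] for j in range(d)] for row in X]

def cross_fro2(X, Y):
    """Squared Frobenius norm of X^T Y."""
    return sum(
        sum(X[i][a] * Y[i][b] for i in range(len(X))) ** 2
        for a in range(len(X[0])) for b in range(len(Y[0]))
    )

def linear_cka(X, Y):
    """Linear CKA: ||X^T Y||_F^2 / (||X^T X||_F * ||Y^T Y||_F) on centered data."""
    X, Y = center(X), center(Y)
    return cross_fro2(X, Y) / (cross_fro2(X, X) * cross_fro2(Y, Y)) ** 0.5

A = [[1.0, 0.0], [2.0, 1.0], [3.0, 0.0], [4.0, 1.0]]
print(round(linear_cka(A, A), 6))                        # 1.0: identical representations
print(round(linear_cka(A, [[-r for r in row] for row in A]), 6))  # 1.0: invariant to sign flips
```

Higher CKA between two models' representations of the same parallel sentences indicates more similar learned geometry, which is how the abstract uses it.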
Does Transliteration Help Multilingual Language Modeling?
d17550250
This study aims to show that frequency of occurrence over time for technical terms and keyphrases differs from general language terms in the sense that technical terms and keyphrases show a strong tendency to be recent coinage, and that this difference can be exploited for the automatic identification and extraction of technical terms and keyphrases. To this end, we propose two features extracted from temporally labelled datasets designed to capture surface level n-gram neology. Our analysis shows that these features, calculated over consecutive bigrams, are highly indicative of technical terms and keyphrases, which suggests that both technical terms and keyphrases are strongly biased to be surface level neologisms. Finally, we evaluate the proposed features on a gold-standard dataset for technical term extraction and show that the proposed features are comparable or superior to a number of features commonly used for technical term extraction.
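One plausible shape for a temporal neology feature of the kind this abstract describes is the share of a term's occurrences that fall after a cutoff year. This cutoff-based ratio is an illustrative assumption; the paper's actual features are defined over temporally labelled n-gram counts.

```python
def recency_ratio(counts_by_year, cutoff_year):
    """Fraction of a term's occurrences observed from cutoff_year onwards."""
    recent = sum(c for year, c in counts_by_year.items() if year >= cutoff_year)
    return recent / sum(counts_by_year.values())

neologism = {2005: 0, 2010: 2, 2015: 40, 2020: 120}     # recent coinage
common    = {2005: 90, 2010: 100, 2015: 95, 2020: 110}  # stable general-language term

print(recency_ratio(neologism, 2015) > recency_ratio(common, 2015))  # True
```

A term whose mass is concentrated in recent time slices scores high, matching the observation that technical terms and keyphrases tend to be recent coinages.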
Technical Term Extraction Using Measures of Neology
d5663378
A machine-translated sentence is seldom completely correct. Confidence measures are designed to detect incorrect words, phrases or sentences, or to provide an estimation of the probability of correctness. In this article we describe several word- and sentence-level confidence measures relying on different features: mutual information between words, n-gram and backward n-gram language models, and linguistic features. We also try different combinations of these measures. Their accuracy is evaluated on a classification task. We achieve a 17% error rate (0.84 f-measure) at the word level and a 31% error rate (0.71 f-measure) at the sentence level.
Word- and Sentence-level Confidence Measures for Machine Translation
d189898554
Machine reading comprehension with unanswerable questions is a challenging task. In this work, we propose a data augmentation technique by automatically generating relevant unanswerable questions according to an answerable question paired with its corresponding paragraph that contains the answer. We introduce a pair-to-sequence model for unanswerable question generation, which effectively captures the interactions between the question and the paragraph. We also present a way to construct training data for our question generation models by leveraging the existing reading comprehension dataset. Experimental results show that the pair-to-sequence model performs consistently better compared with the sequence-to-sequence baseline. We further use the automatically generated unanswerable questions as a means of data augmentation on the SQuAD 2.0 dataset, yielding 1.9 absolute F1 improvement with BERT-base model and 1.7 absolute F1 improvement with BERT-large model.
Learning to Ask Unanswerable Questions for Machine Reading Comprehension
d250391009
This work is about finding the similarity between a pair of news articles. Seven different objective similarity metrics are provided in the dataset for each pair, and the news articles are in multiple languages. On top of the pre-trained embedding model, we calculated cosine similarity for baseline results, and a feed-forward neural network was then trained on top of it to improve the results. We also built separate pipelines for each similarity metric for feature extraction. We saw significant improvement over the baseline results using feature extraction and the feed-forward neural network.
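The cosine-similarity baseline described in this abstract amounts to the following. A minimal sketch, assuming the toy vectors stand in for pre-trained embeddings of the articles; the actual embedding model is not specified here.

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

article_a = [0.2, 0.7, 0.1]
article_b = [0.25, 0.65, 0.05]  # near-duplicate story
article_c = [-0.6, 0.1, 0.8]    # unrelated story

print(cosine(article_a, article_b) > cosine(article_a, article_c))  # True
```

The described system then feeds such similarity scores (and per-metric pipeline features) into a feed-forward network to predict the seven similarity metrics.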
SemEval-2022 Task 8: Multi-lingual News Article Similarity
d256358455
Evidence data for automated fact-checking (AFC) can be in multiple modalities such as text, tables, images, audio, or video. While there is increasing interest in using images for AFC, previous works mostly focus on detecting manipulated or fake images. We propose a novel task, chart-based fact-checking, and introduce ChartBERT as the first model for AFC against chart evidence. ChartBERT leverages textual, structural and visual information of charts to determine the veracity of textual claims. For evaluation, we create ChartFC, a new dataset of 15,886 charts. We systematically evaluate 75 different vision-language (VL) baselines and show that ChartBERT outperforms VL models, achieving 63.8% accuracy. Our results suggest that the task is complex yet feasible, with many challenges ahead.
Reading and Reasoning over Chart Images for Evidence-based Automated Fact-Checking
d264038768
Active Learning (AL) is widely used in machine learning to reduce annotation effort. Although most AL work predates transformers, the recent success of these architectures has led the community to revisit AL in the context of pre-trained language models. Moreover, the fine-tuning mechanism, in which only a few annotated examples are used to train the model on a new task, is perfectly in line with the objective of AL. We propose to study the impact of AL in the context of transformers for the multi-label classification task. However, most AL strategies, when applied to these models, lead to excessive computation times, which prevents their use during real-time human-machine interaction. To mitigate this problem, we use uncertainty-based AL strategies. The article compares six uncertainty-based AL strategies in the context of transformers and shows that while two strategies consistently improve performance, the others do not outperform random sampling. The study also shows that the well-performing strategies tend to select more diverse sets of instances for annotation.
Impact of Active Multi-Label Learning Applied to Transformers
d926149
We present a transition-based parser that jointly produces syntactic and semantic dependencies. It learns a representation of the entire algorithm state, using stack long short-term memories. Our greedy inference algorithm has linear time, including feature extraction. On the CoNLL 2008-9 English shared tasks, we obtain the best published parsing performance among models that jointly learn syntax and semantics.
Greedy, Joint Syntactic-Semantic Parsing with Stack LSTMs
d203610420
Document-level context has received lots of attention for compensating neural machine translation (NMT) of isolated sentences. However, recent advances in document-level NMT focus on sophisticated integration of the context, explaining its improvement with only a few selected examples or targeted test sets. We extensively quantify the causes of improvements by a document-level model in general test sets, clarifying the limit of the usefulness of document-level context in NMT. We show that most of the improvements are not interpretable as utilizing the context. We also show that a minimal encoding is sufficient for the context modeling and very long context is not helpful for NMT.
When and Why is Document-level Context Useful in Neural Machine Translation?
d11690925
We propose a novel semantic tagging task, sem-tagging, tailored for the purpose of multilingual semantic parsing, and present the first tagger using deep residual networks (ResNets). Our tagger uses both word and character representations, and includes a novel residual bypass architecture. We evaluate the tagset both intrinsically on the new task of semantic tagging, as well as on Part-of-Speech (POS) tagging. Our system, consisting of a ResNet and an auxiliary loss function predicting our semantic tags, significantly outperforms prior results on English Universal Dependencies POS tagging (95.71% accuracy on UD v1.2 and 95.67% accuracy on UD v1.3).
Semantic Tagging with Deep Residual Networks
d1131917
We describe a submission to the WMT12 Quality Estimation task, including extensive Machine Learning experimentation. Data were augmented with features from linguistic analysis and statistical features from the SMT search graph. Several feature selection algorithms were employed. The quality estimation problem was addressed both as a regression task and as a discretised classification task, but the latter did not generalise well on the unseen test set. The most successful regression methods had an RMSE of 0.86 and were trained with a feature set given by Correlation-based Feature Selection. We observed indications that RMSE is not always sufficient for measuring performance.
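The RMSE figure quoted above is the usual root-mean-squared error between predicted and gold quality scores; a minimal sketch (the toy score values are illustrative):

```python
import math

def rmse(pred, gold):
    """Root-mean-squared error between predicted and gold quality scores."""
    return math.sqrt(sum((p - g) ** 2 for p, g in zip(pred, gold)) / len(pred))

print(rmse([3.2, 4.1, 2.9], [3.0, 4.0, 3.5]))  # small value for close predictions
```

Because RMSE averages squared deviations, it can hide systematic errors (e.g., a constant predictor), which is one reason the authors note it is not always sufficient.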
Quality Estimation for Machine Translation output using linguistic analysis and decoding features
d263609405
The goal of DSTC11 track 5 is to build task-oriented dialogue systems that can effectively utilize external knowledge sources such as FAQs and reviews. This year's challenge differs from previous ones as it includes subjective knowledge snippets and requires multiple snippets for a single turn. We propose a pipeline system for the challenge focusing on entity tracking, knowledge selection and response generation. Specifically, we devise a novel heuristic to ensemble the outputs from the rule-based method and the neural model for entity tracking and knowledge selection. We also leverage metadata information in the knowledge source to handle fine-grained user queries. Our approach achieved first place in the objective evaluation and third place in the human evaluation of DSTC11 track 5.
Leveraging Ensemble Techniques and Metadata for Subjective Knowledge-grounded Conversational Systems
d1642440
Linguists and psychologists have long been studying cross-linguistic transfer, the influence of native language properties on linguistic performance in a foreign language. In this work we provide empirical evidence for this process in the form of a strong correlation between language similarities derived from structural features in English as Second Language (ESL) texts and equivalent similarities obtained from the typological features of the native languages. We leverage this finding to recover native language typological similarity structure directly from ESL text, and perform prediction of typological features in an unsupervised fashion with respect to the target languages. Our method achieves 72.2% accuracy on the typology prediction task, a result that is highly competitive with equivalent methods that rely on typological resources.
Reconstructing Native Language Typology from Foreign Language Usage
d14613502
We present a French to English translation system for Wikipedia biography articles. We use training data from out-of-domain corpora and adapt the system for biographies. We propose two forms of domain adaptation. The first biases the system towards words likely in biographies and encourages repetition of words across the document. Since biographies in Wikipedia follow a regular structure, our second model exploits this structure as a sequence of topic segments, where each segment discusses a narrower subtopic of the biography domain. In this structured model, the system is encouraged to use words likely in the current segment's topic rather than in biographies as a whole. We implement both systems using cache-based translation techniques. We show that a system trained on Europarl and news can be adapted for biographies with a 0.5 BLEU score improvement using our models. Further, the structure-aware model outperforms the system which treats the entire document as a single segment.
Structured and Unstructured Cache Models for SMT Domain Adaptation
d13052370
This paper aims to compare different regularization strategies to address a common phenomenon, severe overfitting, in embedding-based neural networks for NLP. We chose two widely studied neural models and tasks as our testbed. We tried several frequently applied or newly proposed regularization strategies, including penalizing weights (embeddings excluded), penalizing embeddings, re-embedding words, and dropout. We also emphasized incremental hyperparameter tuning and combining different regularizations. The results provide a picture of how to tune hyperparameters for neural NLP models.
A Comparative Study on Regularization Strategies for Embedding-based Neural Networks
d264038838
The MALIN Project: Inclusive School Textbooks (MAnuels scoLaires INclusifs)
d264038844
In recent years, the evaluation of machine translation, whether human or automatic, has run into difficulties. Faced with the major advances in neural machine translation, evaluation has proved unreliable. Many new approaches have been proposed to improve evaluation protocols. The goal of this work is to provide an overview of the current state of Machine Translation (MT) evaluation. We first present human evaluation approaches, then automatic evaluation methods, distinguishing between families of approaches (surface-level and learned metrics) and paying particular attention to document-level evaluation, which takes context into account. Finally, we focus on the meta-evaluation of these methods.
Machine Translation Evaluation from the Character to the Document Level: A State of the Art
d3502581
This work exploits translation data as a source of semantically relevant learning signal for models of word representation. In particular, we exploit equivalence through translation as a form of distributed context and jointly learn how to embed and align with a deep generative model. Our EMBEDALIGN model embeds words in their complete observed context and learns by marginalisation of latent lexical alignments. In addition, it embeds words as posterior probability densities, rather than point estimates, which allows us to compare words in context using a measure of overlap between distributions (e.g. KL divergence). We investigate our model's performance on a range of lexical semantics tasks, achieving competitive results on several standard benchmarks including natural language inference, paraphrasing, and text similarity.
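Comparing words as posterior densities via KL divergence, as this abstract describes, has a closed form when the densities are Gaussian. This sketch assumes diagonal covariances and invents toy parameters purely for illustration; it is not taken from the EMBEDALIGN model itself.

```python
import math

def kl_diag_gauss(mu1, var1, mu2, var2):
    """KL( N(mu1, diag(var1)) || N(mu2, diag(var2)) ), per-dimension closed form."""
    return 0.5 * sum(
        math.log(v2 / v1) + (v1 + (m1 - m2) ** 2) / v2 - 1.0
        for m1, v1, m2, v2 in zip(mu1, var1, mu2, var2)
    )

# Hypothetical posteriors for "bank" in two different contexts.
bank_river = ([0.9, -0.2], [0.1, 0.1])
bank_money = ([-0.7, 0.8], [0.2, 0.2])
same_ctx   = ([0.85, -0.1], [0.1, 0.1])

close = kl_diag_gauss(*bank_river, *same_ctx)
far   = kl_diag_gauss(*bank_river, *bank_money)
print(close < far)  # True: overlapping densities diverge less
```

Representing a word as a density rather than a point lets the overlap between two usages serve directly as a graded similarity-in-context signal.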
Deep Generative Model for Joint Alignment and Word Representation
d18981641
We present a discrete optimization model based on a linear programming formulation as an alternative to the cascade of classifiers implemented in many language processing systems. Since NLP tasks are correlated with one another, sequential processing does not guarantee optimal solutions. We apply our model in an NLG application and show that it performs better than a pipeline-based system.
Beyond the Pipeline: Discrete Optimization in NLP
d263875914
We show that the problems of parsing and surface realization for grammar formalisms with "context-free" derivations, coupled with Montague semantics (under a certain restriction) can be reduced in a uniform way to Datalog query evaluation. As well as giving a polynomialtime algorithm for computing all derivation trees (in the form of a shared forest) from an input string or input logical form, this reduction has the following complexity-theoretic consequences for all such formalisms: (i) the decision problem of recognizing grammaticality (surface realizability) of an input string (logical form) is in LOGCFL; and (ii) the search problem of finding one logical form (surface string) from an input string (logical form) is in functional LOGCFL. Moreover, the generalized supplementary magic-sets rewriting of the Datalog program resulting from the reduction yields efficient Earley-style algorithms for both parsing and generation.
Parsing and Generation as Datalog Queries
d263609556
Neural data-to-text systems lack the control and factual accuracy required to generate useful and insightful summaries of multidimensional data. We propose a solution in the form of data views, where each view describes an entity and its attributes along specific dimensions. A sequence of views can then be used as a high-level schema for document planning, with the neural model handling the complexities of micro-planning and surface realization. We show that our view-based system retains factual accuracy while offering high-level control of output that can be tailored based on user preference or other norms within the domain.
Enhancing factualness and controllability of Data-to-Text Generation via data Views and constraints
d9298377
Generating captions for images is a task that has recently received considerable attention. In this work we focus on caption generation for abstract scenes, or object layouts where the only information provided is a set of objects and their locations. We propose OBJ2TEXT, a sequence-to-sequence model that encodes a set of objects and their locations as an input sequence using an LSTM network, and decodes this representation using an LSTM language model. We show that our model, despite encoding object layouts as a sequence, can represent spatial relationships between objects, and generate descriptions that are globally coherent and semantically relevant. We test our approach in a task of object-layout captioning by using only object annotations as inputs. We additionally show that our model, combined with a state-of-the-art object detector, improves an image captioning model from 0.863 to 0.950 (CIDEr score) in the test benchmark of the standard MS-COCO Captioning task.
OBJ2TEXT: Generating Visually Descriptive Language from Object Layouts
d17002591
Natural language generation (NLG) is an important component of question answering (QA) systems and has a significant impact on system quality. Most traditional QA systems based on templates or rules tend to generate rigid and stylised responses without the natural variation of human language. Furthermore, such methods require a considerable amount of work to create the templates or rules. To address this problem, we propose a Context-Aware LSTM (CA-LSTM) model for NLG. The model is completely driven by data, without manually designed templates or rules. In addition, the context information, including the question to be answered, the semantic values to be addressed in the response, and the dialogue act type during interaction, is incorporated into the neural network model, which enables the model to produce varied and informative responses. Quantitative evaluation and human evaluation show that CA-LSTM obtains state-of-the-art performance.
Context-aware Natural Language Generation for Spoken Dialogue Systems
d221739041
In practical machine learning settings, the data on which a model must make predictions often come from a different distribution than the data it was trained on. Here, we investigate the problem of unsupervised multi-source domain adaptation, where a model is trained on labelled data from multiple source domains and must make predictions on a domain for which no labelled data has been seen. Prior work with CNNs and RNNs has demonstrated the benefit of mixture of experts, where the predictions of multiple domain expert classifiers are combined; as well as domain adversarial training, to induce a domain agnostic representation space. Inspired by this, we investigate how such methods can be effectively applied to large pretrained transformer models. We find that domain adversarial training has an effect on the learned representations of these models while having little effect on their performance, suggesting that large transformer-based models are already relatively robust across domains. Additionally, we show that mixture of experts leads to significant performance improvements by comparing several variants of mixing functions, including one novel mixture based on attention. Finally, we demonstrate that the predictions of large pretrained transformer-based domain experts are highly homogeneous, making it challenging to learn effective functions for mixing their predictions.
Transformer Based Multi-Source Domain Adaptation
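The abstract above compares mixing functions for combining domain-expert predictions, including an attention-based mixture. A minimal NumPy sketch of that general idea, where attention weights over experts come from dot products between an example representation and per-domain embeddings (the shapes and the dot-product scoring rule are illustrative assumptions, not the authors' exact model):

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over a 1-D score vector.
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def mix_experts(example_repr, domain_embs, expert_probs):
    """Combine per-domain expert predictions with attention weights.

    example_repr : (d,)   representation of the input example
    domain_embs  : (k, d) one embedding per source domain
    expert_probs : (k, c) class distribution from each domain expert
    returns      : (c,)   mixed class distribution
    """
    attn = softmax(domain_embs @ example_repr)  # (k,) weights over experts
    return attn @ expert_probs                  # weighted average of predictions

x = np.ones(4)
domains = np.eye(3, 4)                          # toy per-domain embeddings
probs = np.array([[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]])
print(mix_experts(x, domains, probs))           # a valid class distribution
```

Because the attention weights are a softmax and each expert emits a probability vector, the mixture is itself a valid distribution, which makes this mixing function easy to drop into an existing classifier head.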
d11406047
In named entity recognition, we often don't have a large in-domain training corpus or a knowledge base with adequate coverage to train a model directly. In this paper, we propose a method where, given training data in a related domain with similar (but not identical) named entity (NE) types and a small amount of in-domain training data, we use transfer learning to learn a domain-specific NE model. That is, the novelty in the task setup is that we assume not just domain mismatch, but also label mismatch.
Named Entity Recognition for Novel Types by Transfer Learning
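The setup above assumes label mismatch between source and target NE types. One simple way to make that setting concrete is to split the target label space into types shared with the source (whose parameters can transfer) and novel types (which need fresh parameters trained on the small in-domain corpus). The type names and the split rule below are made up for illustration; the paper's actual transfer scheme may differ:

```python
# Hypothetical illustration of the label-mismatch setting in transfer NER.

SOURCE_TYPES = {"PER", "LOC", "ORG"}
TARGET_TYPES = {"PER", "LOC", "DISEASE", "DRUG"}

def split_label_space(source, target):
    """Return (shared, novel): shared types can reuse source-model output
    weights directly; novel types need freshly initialized parameters
    trained on the small amount of in-domain data."""
    shared = sorted(source & target)
    novel = sorted(target - source)
    return shared, novel

shared, novel = split_label_space(SOURCE_TYPES, TARGET_TYPES)
print(shared)  # ['LOC', 'PER']
print(novel)   # ['DISEASE', 'DRUG']
```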
d15065468
Discourse relations bind smaller linguistic units into coherent texts. However, automatically identifying discourse relations is difficult, because it requires understanding the semantics of the linked arguments. A more subtle challenge is that it is not enough to represent the meaning of each argument of a discourse relation, because the relation may depend on links between lower-level components, such as entity mentions. Our solution computes distributional meaning representations by composition up the syntactic parse tree. A key difference from previous work on compositional distributional semantics is that we also compute representations for entity mentions, using a novel downward compositional pass. Discourse relations are predicted from the distributional representations of the arguments, and also of their coreferent entity mentions. The resulting system obtains substantial improvements over the previous state-of-the-art in predicting implicit discourse relations in the Penn Discourse Treebank.
One Vector is Not Enough: Entity-Augmented Distributional Semantics for Discourse Relations
d5315979
The paper proposes the task of universal semantic tagging: tagging word tokens with language-neutral, semantically informative tags. We argue that the task, with its independent nature, contributes to better semantic analysis for wide-coverage multilingual text. We present the initial version of the semantic tagset and show that (a) the tags provide semantically fine-grained information, and (b) they are suitable for cross-lingual semantic parsing. An application of the semantic tagging in the Parallel Meaning Bank supports both of these points, as the tags contribute to formal lexical semantics and their cross-lingual projection. As a part of the application, we annotate a small corpus with the semantic tags and present a new baseline result for universal semantic tagging.
Towards Universal Semantic Tagging
d13752894
We report the findings of the second Complex Word Identification (CWI) shared task, organized as part of the BEA workshop co-located with NAACL-HLT 2018. The second CWI shared task featured multilingual and multi-genre datasets divided into four tracks: English monolingual, German monolingual, Spanish monolingual, and a multilingual track with a French test set, and two tasks: binary classification and probabilistic classification. A total of 12 teams submitted their results in different task/track combinations and 11 of them wrote system description papers that are referred to in this report and appear in the BEA workshop proceedings.
A Report on the Complex Word Identification Shared Task 2018