_id | text | title |
|---|---|---|
d11125179 | Search engines are increasingly relying on large knowledge bases of facts to provide direct answers to users' queries. However, the construction of these knowledge bases is largely manual and does not scale to the long and heavy tail of facts. Open information extraction tries to address this challenge, but typically assumes that facts are expressed with verb phrases, and therefore has had difficulty extracting facts for noun-based relations. We describe ReNoun, an open information extraction system that complements previous efforts by focusing on nominal attributes and on the long tail. ReNoun's approach is based on leveraging a large ontology of noun attributes mined from a text corpus and from user queries. ReNoun creates a seed set of training data by using specialized patterns and requiring that the facts mention an attribute in the ontology. ReNoun then generalizes from this seed set to produce a much larger set of extractions that are then scored. We describe experiments that show that we extract facts with high precision and for attributes that cannot be extracted with verb-based techniques. | ReNoun: Fact Extraction for Nominal Attributes |
d16915849 | Term transliteration addresses the problem of converting terms in one language into their phonetic equivalents in the other language via spoken form. It is especially concerned with proper nouns, such as personal names, place names and organization names. Pronunciation variation refers to pronunciation ambiguity frequently encountered in spoken language, which has a serious impact on term transliteration. More than one transliteration variant can be generated from an out-of-vocabulary term due to different kinds of pronunciation variation. It is important to take this issue into account when dealing with term transliteration. Several models, which take pronunciation variation into consideration, are proposed for term transliteration in this paper. They describe transliteration from various viewpoints and utilize the relationships trained from extracted transliterated-term pairs. An experiment in applying the proposed models to term transliteration was conducted and evaluated. The experimental results show promise. These proposed models are not only applicable to term transliteration, but are also helpful in the indexing and retrieval of spoken documents. | Incorporating Pronunciation Variation into Different Strategies of Term Transliteration |
d7061704 | This abstract describes a natural language system which deals usefully with ungrammatical input and describes some actual and potential applications of it in computer-aided second language learning. However, this is not the only area in which the principles of the system might be used, and the aim in building it was simply to demonstrate the workability of the general mechanism, and provide a framework for assessing developments of it. | LIMITED DOMAIN SYSTEMS FOR LANGUAGE TEACHING |
d16742116 | This paper investigates the relation between prior knowledge and latent topic classification. There are many cases where the topic classification done by Latent Dirichlet Allocation results in a different classification than humans expect. To improve this problem, several studies using a Dirichlet forest prior instead of a Dirichlet distribution have been conducted in order to provide constraints on words so that they are classified into the same or not the same topics. However, in many cases, the prior knowledge is constructed from a subjective view of humans, but is not constructed based on the properties of the target documents. In this study, we construct prior knowledge based on the words extracted from the target documents and provide it as constraints for topic classification. We discuss the result of topic classification with these constraints. | Topic Extraction based on Prior Knowledge obtained from Target Documents |
d16948139 | This article describes a corpus of news texts in Brazilian Portuguese. News articles were collected from four big newswire outlets, segmented into paragraphs, and marked up by a group of four annotators, who had to classify each paragraph according to two dimensions: target entity (that is, the person who is the main subject of the news contained in the paragraph), and the paragraph's polarity with respect to the target entity. The corpus comprises 131 news articles, segmented into 1,447 paragraphs, with 65,675 words in total. Along with the corpus, we have also built a gold standard, where paragraphs are classified according to the opinion of the majority of annotators. This gold standard and annotated corpus are available to the community under a Creative Commons licence. | An Annotated Corpus for Sentiment Analysis in Political News |
d218974002 | ||
d218974125 | ||
d257985702 | Nous décrivons l'analyse d'un corpus de conversations humain-humain et humain-robot. Vingt et un participants ont été scannés en imagerie par résonance magnétique fonctionnelle (IRMf) pendant qu'ils discutaient soit avec un humain, soit avec un robot. En s'inspirant de ce qui est communément utilisé pour étudier les conversations, huit variables linguistiques adaptées aux spécificités du corpus ont permis de mettre en évidence les compétences linguistiques limitées du système de magicien d'Oz utilisé pour contrôler le robot. Nous avons également adapté une variable d'alignement lexical qui nous permet d'étudier l'alignement conversationnel, plus important dans les interactions avec le robot qu'avec l'humain. Enfin, nos résultats de neuro-imagerie suggèrent une réduction du contrôle cognitif associée à l'augmentation de l'alignement lexical du participant sur l'interlocuteur. ABSTRACT. Here we describe the analysis of a unique corpus of conversations with a human or a robot. Twenty-one participants were scanned with functional Magnetic Resonance Imaging (fMRI) while talking either with a human or with a robot. Inspired by what is commonly studied in natural conversations, eight linguistic variables adapted to the specifics of the corpus highlight the limited linguistic skills of the Wizard of Oz system used to control the robot. We also calculate a lexical alignment variable which allows us to study the phenomenon of conversational alignment, increased with the robot compared to the human. Finally, our neuroimaging results suggest a reduction in cognitive control associated with an increase in the participant's lexical alignment with the interlocutor. MOTS-CLÉS : conversation, humain, robot, neurosciences, alignement lexical. | Comparaison linguistique et neuro-physiologique de conversations humain humain et humain robot |
d229368774 | We describe an approach to generating explanations about why robot actions fail, focusing on the considerations of robots that are run by cognitive robotic architectures. We define a set of Failure Types and Explanation Templates, motivating them by the needs and constraints of cognitive architectures that use action scripts and interpretable belief states, and describe content realization and surface realization in this context. We then describe an evaluation that can be extended to further study the effects of varying the explanation templates. | Generating Explanations of Action Failures in a Cognitive Robotic Architecture |
d14558364 | ||
d5477691 | Lockheed Martin's Advanced Technology Laboratories has been designing, developing, testing, and evaluating spoken language understanding systems in several unique operational environments over the past five years. Through these experiences we have encountered numerous challenges in making each system become an integral part of a user's operations. In this paper, we discuss these challenges and report how we overcame them with respect to a number of domains. | The Pragmatics of Taking a Spoken Language System Out of the Laboratory |
d6146900 | Brute-force word sense disambiguation (WSD) algorithms based on semantic relatedness are really time consuming. We study how to perform WSD faster and better on the span of a text. Several stochastic algorithms can be used to perform Global WSD. We focus here on an Ant Colony Algorithm and compare it to two other methods (Genetic and Simulated Annealing Algorithms) in order to evaluate them on the Semeval 2007 Task 7. A comparison of the algorithms shows that the Ant Colony Algorithm is faster than the two others, and yields better results. Furthermore, the Ant Colony Algorithm coupled with a majority vote strategy reaches the level of the first sense baseline and among other systems evaluated on the same task rivals the lower performing supervised algorithms.TITLE AND ABSTRACT IN FRENCHAlgorithme à colonie de fourmis pour la désambiguïsation lexicale non supervisée de textes : comparaison et évaluationLes algorithmes exhaustifs de désambiguïsation lexicale ont une complexité exponentielle et le contexte qu'il est calculatoirement possible d'utiliser s'en trouve réduit. Il ne s'agit donc pas d'une solution viable. Nous étudions comment réaliser de la désambiguïsation lexicale plus rapidement et plus efficacement à l'échelle du texte. Nous nous intéressons ainsi à l'adaptation d'un algorithme à colonies de fourmis et nous le confrontons à d'autres méthodes issues de l'état de l'art, un algorithme génétique et un recuit simulé en les évaluant sur la tâche 7 de Semeval 2007. Une comparaison des algorithmes montre que l'algorithme à colonies de fourmis est plus rapide que les deux autres et obtiens de meilleurs résultats. De plus, cet algorithme, couplé avec un vote majoritaire atteint le niveau de la référence premier sens et rivalise avec les moins bons algorithmes supervisés sur cette tâche. | Ant Colony Algorithm for the Unsupervised Word Sense Disambiguation of Texts: Comparison and Evaluation |
d2380891 | Learning of new words is assisted by contextual information. This context can come in several forms, including observations in nonlinguistic semantic domains, as well as the linguistic context in which the new word was presented. We outline a general architecture for word learning, in which structural alignment coordinates this contextual information in order to restrict the possible interpretations of unknown words. We identify spatial relations as an applicable semantic domain, and describe a system-in-progress for implementing the general architecture using video sequences as our non-linguistic input. For example, when the complete system is presented with "The bird dove to the rock," with a video sequence of a bird flying from a tree to a rock, and with the meanings for all the words except the preposition "to," the system will register the unknown "to" with the corresponding aspect of the bird's trajectory. | An Architecture for Word Learning using Bidirectional Multimodal Structural Alignment |
d16483732 | We propose an unsupervised system that learns continuous degrees of lexicality for noun-noun compounds, beating a strong baseline on several tasks. We demonstrate that the distributional representations of compounds and their parts can be used to learn a fine-grained representation of semantic contribution. Finally, we argue such a representation captures compositionality better than the current status quo, which treats compositionality as a binary classification problem. | An Unsupervised Ranking Model for Noun-Noun Compositionality |
d41495811 | In the last few years the European Parliament has witnessed a significant increase in translation demand. Although Translation Memory (TM) tools, terminology databases and bilingual concordancers have provided significant leverage in terms of quality and productivity the European Parliament is in need for advanced language technology to keep facing successfully the challenge of multilingualism. This paper describes an ongoing large-scale machine translation postediting evaluation campaign the purpose of which is to estimate the business benefits from the use of machine translation for the European Parliament. This paper focuses mainly on the design, the methodology and the tools used by the evaluators but it also presents some preliminary results for the following language pairs: Polish-English, Danish-English, Lithuanian-English, English-German and English-French. | |
d18061791 | PHRED (PHRasal English Diction) is a natural language generator designed for use in a variety of domains. It was constructed to share a knowledge base with PHRAN (PHRasal ANalyzer) as part of a real-time user-friendly interface. The knowledge base consists of pattern-concept pairs, i.e., associations between linguistic structures and conceptual templates. Using this knowledge base, PHRED produces appropriate and grammatical natural language output from a conceptual representation. PHRED and PHRAN are currently used as central components of the user interface to the UNIX Consultant System (UC). This system answers questions and solves problems related to the UNIX operating system. UC passes the conceptual form of its responses, usually either questions or answers to questions, to the PHRED generator, which expresses them in the user's language. Currently the consultant can answer questions and produce its responses in either English or Spanish. There are a number of practical advantages to PHRED as the generation component of a natural language system. Having a knowledge base shared between analyzer and generator eliminates the redundancy of having separate grammars and lexicons for input and output. It avoids possibly awkward inconsistencies caused by such a separation, and allows for interchangeable interfaces, such as the English and Spanish versions of the UC interface. The phrasal approach to language processing realized in PHRED has proven helpful in generation as in analysis. PHRED commands the use of idioms, grammatical constructions, and canned phrases without a specialized mechanism or data structure. This is accomplished without restricting the ability of the generator to utilize more general linguistic knowledge. As the generation component of a natural language interface, PHRED affords extensibility, simplicity, and processing speed. Its design incorporates a cognitive motivation as well. It diverges from the traditional computational approach by focusing on the use of specialized phrasal knowledge. This phrasal approach minimizes the autonomy of the individual word, the bane of some earlier approaches to language processing. The two-stage process used by PHRED to select appropriate linguistic structures also fits well with cognitive theories of language and memory. | PHRED: A GENERATOR FOR NATURAL LANGUAGE INTERFACES |
d985658 | In this paper, we present an approach to exploit phrase tables generated by statistical machine translation in order to map French discourse connectives to discourse relations. Using this approach, we created ConcoLeDisCo, a lexicon of French discourse connectives and their PDTB relations. When evaluated against LEX-CONN, ConcoLeDisCo achieves a recall of 0.81 and an Average Precision of 0.68 for the CONCESSION and CONDITION relations. | Automatic Mapping of French Discourse Connectives to PDTB Discourse Relations |
d15369413 | In this paper, we describe a fast algorithm for aligning sentences with their translations in a bilingual corpus. Existing efficient algorithms ignore word identities and only consider sentence length (Brown et al., 1991b; Gale and Church, 1991). Our algorithm constructs a simple statistical word-to-word translation model on the fly during alignment. We find the alignment that maximizes the probability of generating the corpus with this translation model. We have achieved an error rate of approximately 0.4% on Canadian Hansard data, which is a significant improvement over previous results. The algorithm is language independent. | ALIGNING SENTENCES IN BILINGUAL CORPORA USING LEXICAL INFORMATION |
d17489944 | We explore the annotation of information structure in German and compare the quality of expert annotation with crowdsourced annotation, taking into account the cost of reaching crowd consensus. Concretely, we discuss a crowd-sourcing effort annotating focus in a task-based corpus of German containing reading comprehension questions and answers. Against the backdrop of a gold standard reference resulting from adjudicated expert annotation, we evaluate a crowd-sourcing experiment using majority voting to determine a baseline performance. To refine the crowd-sourcing setup, we introduce the Consensus Cost as a measure of agreement within the crowd. We investigate the usefulness of Consensus Cost as a measure of crowd annotation quality both intrinsically, in relation to the expert gold standard, and extrinsically, by integrating focus annotation information into a system performing Short Answer Assessment taking into account the Consensus Cost. We find that low Consensus Cost in crowd-sourcing indicates high quality, though high cost does not necessarily indicate low accuracy but increased variability. Overall, taking Consensus Cost into account improves both intrinsic and extrinsic evaluation measures. | Focus Annotation of Task-based Data: Establishing the Quality of Crowd Annotation |
d226262234 | Given the growing ubiquity of emojis in language, there is a need for methods and resources that shed light on their meaning and communicative role. One conspicuous aspect of emojis is their use to convey affect in ways that may otherwise be non-trivial to achieve. In this paper, we seek to explore the connection between emojis and emotions by means of a new dataset consisting of human-solicited association ratings. We additionally conduct experiments to assess to what extent such associations can be inferred from existing data in an unsupervised manner. Our experiments show that this succeeds when high-quality word-level information is available. | EmoTag1200: Understanding the Association between Emojis and Emotions |
d6152635 | This paper presents the procedure of the syntactic annotation of the National Corpus of Polish. Syntactic annotation consists here of shallow parsing and manual post-editing of the results by annotators. The description concentrates on the delimitation of syntactic words and groups, as well as on problems encountered during the annotation process. | The Design of Syntactic Annotation Levels in the National Corpus of Polish |
d2629140 | This paper is motivated by the observation that not all adjectives in Chinese have a canonical antonym. For example, most Chinese speakers choose to translate the English word dishonest into a word string bu chengshi 'not honest' instead of any antonym candidates of chengshi suggested in antonym dictionaries. Our discourse evidence from corpus data suggests that bu chengshi is evolving into a word in discourse at a faster pace than some other 'bu + adjective' strings, and this may result from the lexical gap for a canonical antonym of chengshi and the communicative need for such a word. As a consequence, it is proposed that if the lexicalization process of bu chengshi continues in the future, the string may need to be considered a single word in a segmentation system (i.e., buchengshi 'dishonest'). For a segmentation system to distinguish between words and phrases, discourse factors should be taken into consideration. | Lexical Gaps and Lexicalization: Implications for Word Segmentation Systems for Chinese NLP |
d250390736 | We describe our system for playing a minimal improvisational game in a group. In Chain Reaction, players collectively build a chain of word pairs or solid compounds. The game emphasizes memory and rapid improvisation, while absurdity and humor increases during play. Our approach is unique in that we have grounded our work in the principles of oral culture according to Walter Ong, an early scholar of orature. We show how a simple computer model can be designed to embody many aspects of oral poetics, suggesting design directions for other work in oral improvisation and poetics. The opportunities for our system's further development include creating culturally specific automated players; situating play in different temporal, physical, and social contexts; and constructing a more elaborate improvisor. | A Minimal Computational Improviser Based on Oral Thought |
d11703983 | The past several years have witnessed the rapid progress of end-to-end Neural Machine Translation (NMT). However, there exists discrepancy between training and inference in NMT when decoding, which may lead to serious problems since the model might be in a part of the state space it has never seen during training. To address the issue, Scheduled Sampling has been proposed. However, there are certain limitations in Scheduled Sampling and we propose two dynamic oracle-based methods to improve it. We manage to mitigate the discrepancy by changing the training process towards a less guided scheme and meanwhile aggregating the oracle's demonstrations. Experimental results show that the proposed approaches improve translation quality over standard NMT system. | Dynamic Oracle for Neural Machine Translation in Decoding Phase |
d14166989 | A well-recognized limitation of research on supervised sentence compression is the dearth of available training data. We propose a new and bountiful resource for such training data, which we obtain by mining the revision history of Wikipedia for sentence compressions and expansions. Using only a fraction of the available Wikipedia data, we have collected a training corpus of over 380,000 sentence pairs, two orders of magnitude larger than the standardly used Ziff-Davis corpus. Using this newfound data, we propose a novel lexicalized noisy channel model for sentence compression, achieving improved results in grammaticality and compression rate criteria with a slight decrease in importance. | Mining Wikipedia Revision Histories for Improving Sentence Compression |
d1227006 | ||
d12399949 | Cross document event coreference (CDEC) is an important task that aims at aggregating event-related information across multiple documents. We revisit the evaluation for CDEC, and discover that past works have adopted different, often inconsistent, evaluation settings, which either overlook certain mistakes in coreference decisions, or make assumptions that simplify the coreference task considerably. We suggest a new evaluation methodology which overcomes these limitations, and allows for an accurate assessment of CDEC systems. Our new evaluation setting better reflects the corpus-wide information aggregation ability of CDEC systems by separating event-coreference decisions made across documents from those made within a document. In addition, we suggest a better baseline for the task and semi-automatically identify several inconsistent annotations in the evaluation dataset. | Revisiting the Evaluation for Cross Document Event Coreference |
d221090697 | Knowledge graphs (KGs) of real-world facts about entities and their relationships are useful resources for a variety of natural language processing tasks. However, because knowledge graphs are typically incomplete, it is useful to perform knowledge graph completion or link prediction, i.e. predict whether a relationship not in the knowledge graph is likely to be true. This paper serves as a comprehensive survey of embedding models of entities and relationships for knowledge graph completion, summarizing up-to-date experimental results on standard benchmark datasets and pointing out potential future research directions. | A survey of embedding models of entities and relationships for knowledge graph completion |
d6910290 | The problem of re-ranking initial retrieval results exploring the intrinsic structure of documents is widely researched in information retrieval (IR) and has attracted a considerable amount of time and study. However, one of the drawbacks is that those algorithms treat queries and documents separately. Furthermore, most of the approaches are predominantly built upon graph-based methods, which may ignore some hidden information among the retrieval set. This paper proposes a novel document re-ranking method based on Latent Dirichlet Allocation (LDA) which exploits the implicit structure of the documents with respect to original queries. Rather than relying on graph-based techniques to identify the internal structure, the approach tries to find the latent structure of "topics" or "concepts" in the initial retrieval set. Then we compute the distance between queries and initial retrieval results based on the latent semantic information deduced. Empirical results demonstrate that the method can comfortably achieve significant improvement over various baseline systems. | Latent Document Re-Ranking |
d16088358 | We describe GPKEX, a keyphrase extraction method based on genetic programming. We represent keyphrase scoring measures as syntax trees and evolve them to produce rankings for keyphrase candidates extracted from text. We apply and evaluate GPKEX on Croatian newspaper articles. We show that GPKEX can evolve simple and interpretable keyphrase scoring measures that perform comparably to more complex machine learning methods previously developed for Croatian. | GPKEX: Genetically Programmed Keyphrase Extraction from Croatian Texts |
d243865338 | Communication between human and mobile agents is getting increasingly important as such agents are widely deployed in our daily lives. Vision-and-Dialogue Navigation is one of the tasks that evaluate the agent's ability to interact with humans for assistance and navigate based on natural language responses. In this paper, we explore the Navigation from Dialogue History (NDH) task, which is based on the Cooperative Vision-and-Dialogue Navigation (CVDN) dataset, and present a state-of-the-art model which is built upon Vision-Language transformers. However, despite achieving competitive performance, we find that the agent in the NDH task is not evaluated appropriately by the primary metric, Goal Progress. By analyzing the performance mismatch between Goal Progress and other metrics (e.g., normalized Dynamic Time Warping) from our state-of-the-art model, we show that NDH's sub-path based task setup (i.e., navigating a partial trajectory based on its correspondent subset of the full dialogue) does not provide the agent with enough supervision signal towards the goal region. Therefore, we propose a new task setup called NDH-FULL which takes the full dialogue and the whole navigation path as one instance. We present a strong baseline model and show initial results on this new task. We further describe several approaches that we try, in order to improve the model performance (based on curriculum learning, pre-training, and data-augmentation), suggesting potential useful training methods on this new NDH-FULL task. | NDH-FULL: Learning and Evaluating Navigational Agents on Full-Length Dialogue |
d174800868 | We propose a new automatic evaluation metric for machine translation. Our proposed metric is obtained by adjusting the Earth Mover's Distance (EMD) to the evaluation task. The EMD measure is used to obtain the distance between two probability distributions consisting of some signatures having a feature and a weight. We use word embeddings, sentence-level tf·idf, and cosine similarity between two word embeddings, respectively, as the features, weight, and the distance between two features. Results show that our proposed metric can evaluate machine translation based on word meaning. Moreover, for distance, cosine similarity and word position information are used to address word-order differences. We designate this metric as Word Embedding-based automatic MT evaluation using Word Position Information (WE WPI). A meta-evaluation using the WMT16 metrics shared task set indicates that our WE WPI achieves the highest correlation with human judgment among several representative metrics. | Word Embedding-Based Automatic MT Evaluation Metric using Word Position Information |
d7772654 | The notion of logophoricity has long played a crucial role in understanding the co-referential relations between certain anaphoric expressions cross-linguistically, especially for long-distance anaphors violating a locality constraint and syntactic prominence conditions within the framework of pure syntactic accounts. However, Pan (2001) has shown that the long-distance binding of Chinese ziji should not be treated with the logophoric accounts in some aspects. This paper revisits Pan's (2001) puzzle, which arises from the ability of ziji to serve as a logophor, in order to call attention to what the alternative to this view might be, and proposes a solution to it through the notion of empathy, in Kuno and Kaburaki's (1977) sense of the term, so that long-distance anaphors, which are not fully covered in terms of logophoricity, can be reconciled with other East Asian languages, such as Japanese zibun and Korean caki, in terms of a unified treatment. | Pan's (2001) puzzle revisited |
d220060872 | Abstract Meaning Representation (AMR) (Banarescu et al., 2013) is a framework for semantic dependencies that encodes its rooted and directed acyclic graphs in a format called PENMAN notation. The format is simple enough that users of AMR data often write small scripts or libraries for parsing it into an internal graph representation, but there is enough complexity that these users could benefit from a more sophisticated and well-tested solution. The open-source Python library Penman provides a robust parser, functions for graph inspection and manipulation, and functions for formatting graphs into PENMAN notation. Many functions are also available in a command-line tool, thus extending its utility to non-Python setups. | Penman: An Open-Source Library and Tool for AMR Graphs |
d248779916 | This paper introduces the method used in the VPAI_Lab team's experiments on BioNLP 2022 shared task 1, Medical Video Classification (MedVidCL). Given an input video, the MedVidCL task aims to correctly classify it into one of the three following categories: Medical Instructional, Medical Non-instructional, and Non-medical. Inspired by its dataset construction process, we divide the classification process into two stages. The first stage is to classify videos into medical videos and non-medical videos. In the second stage, for those samples classified as medical videos, we further classify them into instructional videos and non-instructional videos. In addition, we also propose a cross-modal fusion method for the video classification, fusing the text features (question and subtitles) from pre-trained language models and visual features from image frames. Specifically, we use textual information to concatenate and query the visual information for obtaining a better feature representation. Extensive experiments show that the proposed method significantly outperforms the official baseline method by 15.4% in the F1 score, which shows its effectiveness. Finally, the official results show that our method ranks Top-1 on the official unseen test set. All the experimental codes are open-sourced at https://github.com/Lireanstar/MedVidCL. | VPAI_Lab at MedVidQA 2022: A Two-Stage Cross-modal Fusion Method for Medical Instructional Video Classification |
d7520399 | In this paper we present the principles of lexico-semantic annotation of Składnica Treebank using Polish WordNet lexical units. We describe different means of annotation, depending on the structure of a sentence in Składnica on the one hand and the availability of adequate lexical unit in PLWN on the other. Apart from "standard" annotation involving lexical units with the same lemma as the token under annotation, multi-word units, different verb lemmas including reflexive marker się as well as synonyms and hypernyms have also been involved. Some tokens have obtained tags explaining why they require no annotation. Additionally, we discuss the assessment of the annotation of whole sentences. | Lexico-Semantic Annotation of Składnica Treebank by means of PLWN Lexical Units |
d11529229 | This paper deals with the uses of the annotations of third person singular neuter pronouns in the DAD parallel and comparable corpora of Danish and Italian texts and spoken data. The annotations contain information about the functions of these pronouns and their uses as abstract anaphora. Abstract anaphora have constructions such as verbal phrases, clauses and discourse segments as antecedents and refer to abstract objects comprising events, situations and propositions. The analysis of the annotated data shows the language specific characteristics of abstract anaphora in the two languages compared with the uses of abstract anaphora in English. Finally, the paper presents machine learning experiments run on the annotated data in order to identify the functions of third person singular neuter personal pronouns and neuter demonstrative pronouns. The results of these experiments vary from corpus to corpus. However, they are all comparable with the results obtained in similar tasks in other languages. This is very promising because the experiments have been run on both written and spoken data using a classification of the pronominal functions which is much more fine-grained than the classifications used in other studies. | The DAD parallel corpora and their uses |
d227231211 | ||
d14869426 | One main complexity of the copula constructions concerns a mismatch between morphology and syntactic constituency: the copula seems to form a morphological unit with the immediately preceding element, whereas in terms of syntax the copula appears to take this as its syntactic complement. In capturing such mismatches, we show that the copula is treated as an independent verb at the level of tectogrammatical structure (or syntax tree), whereas as a bound morpheme at the level of phenogrammatical structure (or domain tree), in terms of Dowty 1992 (or Reape 1994). This paper, adopting the notion of DOMAIN in HPSG, shows that copula constructions are a subtype of compacting-constructions. These constructions compact the domain value of the copula and that of its preceding element together into one domain unit, eventually making it inert to syntactic phenomena such as scrambling, deletion and pro-form substitution. This construction-based approach provides a clean analysis for the formation of the copula construction and related phenomena. | Mismatches in Korean Copula Constructions and Linearization Effects |
d2896894 | In this paper, we demonstrate how to extend TimeML, a rich specification language for event and temporal expressions in text, with the implicit typical durations of events, temporal information in text that has hitherto been largely unexploited. Event duration information can be very important in applications in which the time course of events is to be extracted from text. For example, whether two events overlap or are in sequence often depends very much on their durations. | Extending TimeML with Typical Durations of Events |
d53223828 | We extend the classic Referring Expressions Generation task by considering zero pronouns in "pro-drop" languages such as Chinese, modelling their use by means of the Bayesian Rational Speech Acts model (Frank and Goodman, 2012). By assuming that highly salient referents are most likely to be referred to by zero pronouns (i.e., pro-drop is more likely for salient referents than for less salient ones), the model offers an attractive explanation of a phenomenon not previously addressed probabilistically. | Modelling Pro-drop with the Rational Speech Acts Model |
d8265456 | In this paper, we propose a novel way to include unsupervised feature selection methods in probabilistic taxonomy learning models. We leverage on the computation of logistic regression to exploit unsupervised feature selection of singular value decomposition (SVD). Experiments show that this way of using SVD for feature selection positively affects performances. | SVD Feature Selection for Probabilistic Taxonomy Learning |
d13232194 | As a paratactic language, sentence-level argument extraction in Chinese suffers much from the frequent occurrence of ellipsis with regard to inter-sentence arguments. To resolve this problem, this paper proposes a novel global argument inference model to explore specific relationships, such as Coreference, Sequence and Parallel, among relevant event mentions to recover those inter-sentence arguments in the sentence, discourse and document layers which represent the cohesion of an event or a topic. Evaluation on the ACE 2005 Chinese corpus justifies the effectiveness of our global argument inference model over a state-of-the-art baseline. | Argument Inference from Relevant Event Mentions in Chinese Argument Extraction |
d7494276 | This paper presents novel methods for modeling numerical common sense: the ability to infer whether a given number (e.g., three billion) is large, small, or normal for a given context (e.g., number of people facing a water shortage). We first discuss the necessity of numerical common sense in solving textual entailment problems. We explore two approaches for acquiring numerical common sense. Both approaches start with extracting numerical expressions and their context from the Web. One approach estimates the distribution of numbers co-occurring within a context and examines whether a given value is large, small, or normal, based on the distribution. Another approach utilizes textual patterns with which speakers explicitly express their judgment about the value of a numerical expression. Experimental results demonstrate the effectiveness of both approaches. | Is a 204 cm Man Tall or Small? Acquisition of Numerical Common Sense from the Web |
d252546561 | Highly imbalanced textual datasets continue to pose a challenge for supervised learning models. However, viewing such imbalanced text data as an anomaly detection (AD) problem has advantages for certain tasks such as detecting hate speech, or inappropriate and/or offensive language in large social media feeds. There the unwanted content tends to be both rare and nonuniform with respect to its thematic character, and better fits the definition of an anomaly than a class. Several recent approaches to textual AD use transformer models, achieving good results but with trade-offs in pre-training and inflexibility with respect to new domains. In this paper we compare two linear models within the NMF family, which also have a recent history in textual AD. We introduce a new approach based on an alternative regularization of the NMF objective. Our results surpass other linear AD models and are on par with deep models, performing comparably well even in very small outlier concentrations. | A Lightweight yet Robust Approach to Textual Anomaly Detection |
d6645015 | Many applications of computational linguistics are greatly influenced by the quality of corpora available and as automatically generated corpora continue to play an increasingly common role, it is essential that we not overlook the importance of well-constructed and homogeneous corpora. This paper describes an automatic approach to improving the homogeneity of corpora using an unsupervised method of statistical outlier detection to find documents and segments that do not belong in a corpus. We consider collections of corpora that are homogeneous with respect to topic (i.e. about the same subject), or genre (written for the same audience or from the same source) and use a combination of stylistic and lexical features of the texts to automatically identify pieces of text in these collections that break the homogeneity. These pieces of text that are significantly different from the rest of the corpus are likely to be errors that are out of place and should be removed from the corpus before it is used for other tasks. We evaluate our techniques by running extensive experiments over large artificially constructed corpora that each contain single pieces of text from a different topic, author, or genre than the rest of the collection and measure the accuracy of identifying these pieces of text without the use of training data. We show that when these pieces of text are reasonably large (1,000 words) we can reliably identify them in a corpus. | An Unsupervised Approach for the Detection of Outliers in Corpora |
d8090830 | We employ statistical methods to analyze, generate, and translate rhythmic poetry. We first apply unsupervised learning to reveal word-stress patterns in a corpus of raw poetry. We then use these word-stress patterns, in addition to rhyme and discourse models, to generate English love poetry. Finally, we translate Italian poetry into English, choosing target realizations that conform to desired rhythmic patterns. | Automatic Analysis of Rhythmic Poetry with Applications to Generation and Translation |
d10338976 | In this paper we will illustrate the approach to multilingual automatic abstract production adopted by the EU-sponsored project MLIS-MUSI. Although a small-scale research project, MUSI has tried to tackle the challenges set by multilingual summarization by adopting an original approach based on the definition of a shared ontology and representation language, and on the reuse of existing linguistic resources. MUSI combines a linguistic-based module for relevant sentence extraction and a concept-based component to generate multilingual summaries. | Multilingual Summarization by Integrating Linguistic Resources in the MLIS-MUSI Project |
d52281610 | The use of connectionist approaches in conversational agents has been progressing rapidly due to the availability of large corpora. However current generative dialogue models often lack coherence and are content poor. This work proposes an architecture to incorporate unstructured knowledge sources to enhance the next utterance prediction in chit-chat type of generative dialogue models. We focus on Sequence-to-Sequence (Seq2Seq) conversational agents trained with the Reddit News dataset, and consider incorporating external knowledge from Wikipedia summaries as well as from the NELL knowledge base. Our experiments show faster training time and improved perplexity when leveraging external knowledge. | Extending Neural Generative Conversational Model using External Knowledge Sources |
d18005090 | This paper reports on an initial and necessary step toward the construction of a Pan-Chinese lexical resource. We investigated the regional variation of lexical items in two specific domains, finance and sports; and explored how much of such variation is covered in existing Chinese synonym dictionaries, in particular the Tongyici Cilin. The domain-specific lexical items were obtained from subsections of a synchronous Chinese corpus, LIVAC. Results showed that 20-40% of the words from various subcorpora are unique to the individual communities, and as much as 70% of such unique items are not yet covered in the Tongyici Cilin. The results suggested great potential for building a Pan-Chinese lexical resource for Chinese language processing. Our next step would be to explore automatic means for extracting related lexical items from the corpus, and to incorporate them into existing semantic classifications. | Regional Variation of Domain-Specific Lexical Items: Toward a Pan- Chinese Lexical Resource |
d252847506 | Deep learning based methods have shown tremendous success in several Natural Language Processing (NLP) tasks. The recent trends in the usage of Deep Learning based models for natural language tasks have definitely produced incredible performance for several application areas. However, one major problem that most of these models face is the lack of transparency, i.e., the actual decision process of the underlying model is not explainable. In this paper, first we solve a very fundamental problem of Natural Language Understanding (NLU), i.e., intent detection, using a Bidirectional Long Short-Term Memory (BiLSTM). In order to determine the defining features that lead to a specific intent class, we use the Layerwise Relevance Propagation (LRP) algorithm to find the defining feature(s). In the process, we conclude that the saliency method of LRP (epsilon Layerwise Relevance Propagation) is a prominent process for highlighting the important features of the input responsible for the classification of intent, which also provides significant insights into the inner workings, such as the reasons for misclassification by the black box model. | Towards Explainable Dialogue Systems: Explaining Intent Classification using Saliency Techniques |
d20049194 | Text similarity measures are used in multiple tasks such as plagiarism detection, information ranking and recognition of paraphrases and textual entailment. While recent advances in deep learning highlighted further the relevance of sequential models in natural language generation, existing similarity measures do not fully exploit the sequential nature of language. Examples of such similarity measures include ngrams and skip-grams overlap which rely on distinct slices of the input texts. In this paper we present a novel text similarity measure inspired from a common representation in DNA sequence alignment algorithms. The new measure, called TextFlow, represents input text pairs as continuous curves and uses both the actual position of the words and sequence matching to compute the similarity value. Our experiments on eight different datasets show very encouraging results in paraphrase detection, textual entailment recognition and ranking relevance. | TextFlow: A Text Similarity Measure based on Continuous Sequences |
d440067 | Current state-of-the-art Statistical Machine Translation systems are based on log-linear models that combine a set of feature functions to score translation hypotheses during decoding. The models are parametrized by a vector of weights usually optimized on a set of sentences and their reference translations, called development data. In this paper, we explore a (common and industry relevant) scenario where a system trained and tuned on general domain data needs to be adapted to a specific domain for which no or only very limited in-domain bilingual data is available. It turns out that such systems can be adapted successfully by re-tuning model parameters using surprisingly small amounts of parallel in-domain data, by cross-tuning or no tuning at all. We show in detail how and why this is effective, compare the approaches and effort involved. We also study the effect of system hyperparameters (such as maximum phrase length and development data size) and their optimal values in this scenario. TITLE AND ABSTRACT IN CZECH: Jednoduchá a efektivní optimalizace parametrů pro doménovou adaptaci statistického strojového překladu. Současné systémy statistického strojového překladu jsou založeny na logaritmicko-lineárních modelech, které pro hodnocení překladových hypotéz ve fázi dekódování kombinují sadu příznakových funkcí. Tyto modely jsou parametrizovány vektorem vah, které se optimalizují na tzv. vývojových datech, tj. množině vět a jejich referenčních překladů. V tomto článku se zabýváme (častou a pro průmyslové nasazení relevantní) situací, kdy je třeba překladový systém natrénovaný na datech z obecné domény adaptovat na nějakou specifickou doménu, pro kterou jsou k dispozici paralelní data jen ve velice omezeném (či žádném) množství. Ukazujeme, že takové systémy mohou být vhodně adaptovány pomocí optimalizace parametrů za použití jen překvapivě malého množství paralelních doménově-specifických dat nebo tzv. křížovou optimalizací. Možností je také nepoužití optimalizace vůbec. Jednotlivé přístupy analyzujeme a porovnáváme jejich celkovou náročnost. Dále se zabýváme analýzou systémových hyperparametrů (např. maximální délkou frází a velikostí vývojových dat) a jejich optimalizací. | Simple and Effective Parameter Tuning for Domain Adaptation of Statistical Machine Translation |
d258463987 | Visual Question Answering (VQA) has arisen in recent public interest thanks to its applicability in many different fields. However, it requires understanding the combination of pictures and questions, which is highly challenging in both vision and language processing. Many previous works have achieved remarkable results to address this problem in many different languages. However, in the Vietnamese language, the VQA problem has not made significant progress due to the lack of data and fundamental systems. Therefore, we propose a model specifically designed and optimized for the Vietnamese Visual Question Answering problem. Our model leverages the strength of pre-trained models as well as presents Bi-directional Cross-attention architecture to learn visual and textual features more effectively. Through experimental results and ablation studies, the proposed approach obtains promising results against the existing models for Vietnamese on the ViVQA dataset. | Bi-directional Cross-Attention Network on Vietnamese Visual Question Answering |
d236477366 | In this work, we propose a new task called Image-guided Story Ending Generation (IgSEG). Given a multi-sentence story plot and an ending-related image, IgSEG aims to generate a story ending that conforms to the contextual logic and the relevant visual concepts. In contrast to the story ending generation task, which generates open-ended endings, the major challenges of IgSEG are to comprehend the given context and image sufficiently, and mine the appropriate semantics from the image to make the generated story ending informative, reasonable, and coherent. To address the challenges, we propose a Multi-layer Graph convolution and Cascade-LSTM (MGCL) based model which mainly comprises of two collaborative modules: i) a multi-layer graph convolutional network to learn the dependency relations of sentences and the logical clue of the context; ii) a multiple context-image attention module to generate the story endings by gradually incorporating textual and visual semantic concepts. Our MGCL is thus capable of building logically consistent and semantically rich story endings. To evaluate the proposed model, we modify the existing VIST dataset to obtain the VIST-Ending dataset. Empirically, our MGCL outperforms all the strong baselines on both automatic and human evaluation. | IgSEG: Image-guided Story Ending Generation |
d17002113 | We describe the CMU submission for the 2014 shared task on language identification in code-switched data. We participated in all four language pairs: Spanish-English, Mandarin-English, Nepali-English, and Modern Standard Arabic-Arabic dialects. After describing our CRF-based baseline system, we discuss three extensions for learning from unlabeled data: semi-supervised learning, word embeddings, and word lists. | The CMU Submission for the Shared Task on Language Identification in Code-Switched Data |
d3133955 | In timeline extraction the goal is to order all the events in which a target entity is involved in a timeline. Due to the lack of explicitly annotated data, previous work is primarily rule-based and uses pre-trained temporal linking systems. In this work, we propose a distantly supervised approach by heuristically aligning timelines with documents. The noisy training data created allows us to learn models that anchor events to temporal expressions and entities; during testing, the predictions of these models are combined to produce the timeline. Furthermore, we show how to improve performance using joint inference. In experiments in the SemEval-2015 TimeLine task we show that our distantly supervised approach matches the state-of-the-art performance while joint inference further improves on it by 3.2 F-score points. | Timeline extraction using distant supervision and joint inference |
d21728175 | Automatically scoring metaphor novelty is an unexplored topic in natural language processing, and research in this area could benefit a wide range of NLP tasks. However, no publicly available metaphor novelty datasets currently exist, making it difficult to perform research on this topic. We introduce a large corpus of metaphor novelty scores for syntactically related word pairs, and release it freely to the research community. We describe the corpus here, and include an analysis of its score distribution and the types of word pairs included in the corpus. We also provide a brief overview of standard metaphor detection corpora, to provide the reader with greater context regarding how this corpus compares to other datasets used for different types of computational metaphor processing. Finally, we establish a performance benchmark to which future researchers can compare, and show that it is possible to learn to score metaphor novelty on our dataset at a rate significantly better than chance or naïve strategies. | A Corpus of Metaphor Novelty Scores for Syntactically-Related Word Pairs |
d690413 | In this paper we describe the MELB-MKB system, as entered in the SemEval-2007 lexical substitution task. The core of our system was the "Relatives in Context" unsupervised approach, which ranked the candidate substitutes by web-lookup of the word sequences built combining the target context and each substitute. Our system ranked third in the final evaluation, performing close to the top-ranked system. | MELB-MKB: Lexical Substitution System based on Relatives in Context |
d12495478 | Providing sets of semantically related words in the lexical entries of an electronic dictionary should help language learners quickly understand the meaning of the target words. Relational information might also improve memorisation, by allowing the generation of structured vocabulary study lists. However, an open issue is which semantic relations are cognitively most salient, and should therefore be used for dictionary construction. In this paper, we present a concept description elicitation experiment conducted with German and Italian speakers. The analysis of the experimental data suggests that there is a small set of concept-class-dependent relation types that are stable across languages and robust enough to allow discrimination across broad concept domains. Our further research will focus on harvesting instantiations of these classes from corpora. | Cognitively Salient Relations for Multilingual Lexicography |
d18827772 | This paper is trying to show that the concept of the prototypical transitive sentence is very useful in the study of transitivity, but is as such not so unproblematic as has often been assumed. This is achieved by presenting examples from different languages that show the weaknesses of the "traditional" prototype. The prototype proposed here remains, however, a tentative one, because there are so many properties that cannot be described typologically because of their language-specific nature. | PROBLEMS IN DEFINING A PROTOTYPICAL TRANSITIVE SENTENCE TYPOLOGICALLY |
d16932735 | The Natural Language Toolkit (NLTK) is widely used for teaching natural language processing to students majoring in linguistics or computer science. This paper describes the design of NLTK, and reports on how it has been used effectively in classes that involve different mixes of linguistics and computer science students. We focus on three key issues: getting started with a course, delivering interactive demonstrations in the classroom, and organizing assignments and projects. In each case, we report on practical experience and make recommendations on how to use NLTK to maximum effect. | Multidisciplinary Instruction with the Natural Language Toolkit |
d252624581 | From data to meaning in representation of emotions | |
d251465617 | We present the Brooklyn Multi-Interaction Corpus (B-MIC), a collection of dyadic conversations designed to identify speaker traits and conversation contexts that cause variations in entrainment behavior. B-MIC pairs each participant with multiple partners for an object placement game and open-ended discussions, as well as with a Wizard of Oz for a baseline of their speech. In addition to fully transcribed recordings, it includes demographic information and four completed psychological questionnaires for each subject and turn annotations for perceived emotion and acoustic outliers. This enables the study of speakers' entrainment behavior in different contexts and the sources of variation in this behavior. In this paper, we introduce B-MIC and describe our collection, annotation, and preprocessing methodologies. We report a preliminary study demonstrating varied entrainment behavior across different conversation types and discuss the rich potential for future work on the corpus. | The Brooklyn Multi-Interaction Corpus for Analyzing Variation in Entrainment Behavior |
d643522 | This paper presents a deterministic dependency parser based on memory-based learning, which parses English text in linear time. When trained and evaluated on the Wall Street Journal section of the Penn Treebank, the parser achieves a maximum attachment score of 87.1%. Unlike most previous systems, the parser produces labeled dependency graphs, using as arc labels a combination of bracket labels and grammatical role labels taken from the Penn Treebank II annotation scheme. The best overall accuracy obtained for identifying both the correct head and the correct arc label is 86.0%, when restricted to grammatical role labels (7 labels), and 84.4% for the maximum set (50 labels). | Deterministic Dependency Parsing of English Text |
d248818945 | With their Discovery of Inference Rules from Text (DIRT) algorithm, Lin and Pantel (2001) made a seminal contribution to the field of rule acquisition from text, by adapting the distributional hypothesis of Harris (1954) to patterns that model binary relations such as X treat Y, where patterns are implemented as syntactic dependency paths. DIRT's relevance is renewed in today's neural era given the recent focus on interpretability in the field of natural language processing. We propose a novel take on the DIRT algorithm, where we implement the distributional hypothesis using the contextualized embeddings provided by BERT, a transformer-network-based language model (Vaswani et al., 2017; Devlin et al., 2018). In particular, we change the similarity measure between pairs of slots (i.e., the set of words matched by a pattern) from the original formula that relies on lexical items to a formula computed using contextualized embeddings. We empirically demonstrate that this new similarity method yields a better implementation of the distributional hypothesis, and this, in turn, yields patterns that outperform the original algorithm in the question answering-based evaluation proposed by Lin and Pantel (2001). | Do Transformer Networks Improve the Discovery of Rules from Text? |
d5395686 | We use the noisy-channel theory of human sentence comprehension to develop an incremental processing cost model that unifies and extends key features of expectation-based and memory-based models. In this model, which we call noisy-context surprisal, the processing cost of a word is the surprisal of the word given a noisy representation of the preceding context. We show that this model accounts for an outstanding puzzle in sentence comprehension, language-dependent structural forgetting effects (Gibson and Thomas, 1999; Vasishth et al., 2010; Frank et al., 2016), which were previously not well modeled by either expectation-based or memory-based approaches. Additionally, we show that this model derives and generalizes locality effects (Gibson, 1998; Demberg and Keller, 2008), a signature prediction of memory-based models. We give corpus-based evidence for a key assumption in this derivation. | Noisy-context surprisal as a human sentence processing cost model |
d4303517 | In this paper we address two key challenges for extractive multi-document summarization: the search problem of finding the best scoring summary and the training problem of learning the best model parameters. We propose an A* search algorithm to find the best extractive summary up to a given length, which is both optimal and efficient to run. Further, we propose a discriminative training algorithm which directly maximises the quality of the best summary, rather than assuming a sentence-level decomposition as in earlier work. Our approach leads to significantly better results than earlier techniques across a number of evaluation metrics. | Multi-document summarization using A* search and discriminative training |
d251436388 | Word embedding models have become commonplace in a wide range of NLP applications. In order to train and use the best possible models, accurate evaluation is needed. For extrinsic evaluation of word embedding models, analogy evaluation sets have been shown to be a good quality estimator. We introduce an Icelandic adaptation of a large analogy dataset, BATS, evaluate it on three different word embedding models and show that our evaluation set is apt at measuring the capabilities of such models. | IceBATS: An Icelandic Adaptation of the Bigger Analogy Test Set |
d39132425 | In this paper, we introduce a coverage-based scoring function that discriminates between parallel and non-parallel sentences. When plugged into Bleualign, a state-of-the-art sentence aligner, our function improves both precision and recall of alignments over the originally proposed BLEU score. Furthermore, since our scoring function uses Moses phrase tables directly we avoid the need to translate the texts to be aligned, which is time-consuming and a potential source of alignment errors. | First Steps Towards Coverage-Based Sentence Alignment |
d2193818 | Violence is a serious problem for cities like Chicago and has been exacerbated by the use of social media by gang-involved youths for taunting rival gangs. We present a corpus of tweets from a young and powerful female gang member and her communicators, which we have annotated with discourse intention, using a deep read to understand how and what triggered conversations to escalate into aggression. We use this corpus to develop a part-of-speech tagger and phrase table for the variant of English that is used, as well as a classifier for identifying tweets that express grieving and aggression. | Automatically Processing Tweets from Gang-Involved Youth: Towards Detecting Loss and Aggression |
d3183295 | The problem of text alignment is to establish the correspondence between subparts of two or more translations or versions of the same document. Most of the methods used in alignment are based on the statistical analysis of word or character frequencies or of string occurrences. In order to achieve more accurate results, other methods have incorporated some structural properties of the documents as further criteria. When addressing the problem of aligning different versions of medieval texts, namely prose and verse versions, we need to consider more efficient methods of content comparison. In this article, we propose an extension to the existing methods of alignment in which we consider further linguistic and structural properties of the texts. As a linguistic criterion of alignment, we propose some heuristics to calculate similarities at the lexical, morphological, syntactic and semantic levels of the texts. As a structural criterion, we extend the similarity measures to take into account different properties of the rhetorical structure of the texts. The process of alignment is therefore an optimization problem that maximizes linguistic and structural similarities between aligned pairs of parallel versions. | SAM: System for Multi-criteria Text Alignment |
d252847513 | We present a novel technique to infer ranked dialog flows from human-to-human conversations that can be used as an initial conversation design or to analyze the complexities of the conversations in a call center. This technique aims to identify, for a given service, the most common sequences of questions and responses from the human agent. Multiple dialog flows for different ranges of top paths can be produced so they can be reviewed in rank order and be refined in successive iterations until additional flows have the desired level of detail. The system ingests historical conversations and efficiently condenses them into a weighted deterministic finite-state automaton, which is then used to export dialog flow designs that can be readily used by conversational agents. A proof-of-concept experiment was conducted with the MultiWoz data set, a sample output is presented and future directions are outlined. | Inferring Ranked Dialog Flows from Human-to-Human Conversations |
d11163854 | Due to the diversity of natural language processing (NLP) tools and resources, combining them into processing pipelines is an important issue, and sharing these pipelines with others remains a problem. We present DKPro Core, a broad-coverage component collection integrating a wide range of third-party NLP tools and making them interoperable. Contrary to other recent endeavors that rely heavily on web services, our collection consists only of portable components distributed via a repository, making it particularly interesting with respect to sharing pipelines with other researchers, embedding NLP pipelines in applications, and the use on high-performance computing clusters. Our collection is augmented by a novel concept for automatically selecting and acquiring resources required by the components at runtime from a repository. Based on these contributions, we demonstrate a way to describe a pipeline such that all required software and resources can be automatically obtained, making it easy to share it with others, e.g. in order to reproduce results or as examples in teaching, documentation, or publications. | A broad-coverage collection of portable NLP components for building shareable analysis pipelines |
d9854845 | We present an interactive translation method to support non-professional users in writing an original document. The method, combining a dictionary lookup function and user-guided stepwise interactive machine translation, allows the user to obtain a clear result with an easy operation. We implemented the method as an English writing support facility that serves as a translation support front-end to an arbitrary application. | An Interactive Translation Support Facility for Non-Professional Users |
d6774555 | People with motor disabilities often face substantial challenges using interfaces designed for manual interaction. Although such obstacles might be partially alleviated by automatic speech recognition, these individuals may also have cooccurring speech-language challenges that result in high recognition error rates. In this paper, we investigate how augmenting speech applications with dialogue interaction can improve system performance among such users. We construct an end-to-end spoken dialogue system for our target users, adult wheelchair users with multiple sclerosis and other progressive neurological conditions in a specialized-care residence, to access information and communication services through speech. We use boosting to discriminatively learn meaningful confidence scores and ask confirmation questions within a partially observable Markov decision process (POMDP) framework. Among our target users, the POMDP dialogue manager significantly increased the number of successfully completed dialogues (out of 20 dialogue tasks) compared to a baseline threshold-based strategy (p = 0.02). The reduction in dialogue completion times was more pronounced among speakers with higher error rates, illustrating the benefits of probabilistic dialogue modeling for our target population. | Probabilistic Dialogue Modeling for Speech-Enabled Assistive Technology |
d6575020 | Statistical classification methods usually rely on a single best model to make accurate predictions. Such a model aims to maximize accuracy by balancing precision and recall. The Model Switching method presented in this paper achieves higher predictive accuracy and 100% recall by using a set of decomposable models instead of a single one. The implemented system, MS1, is tested on a case study, predicting Prepositional Phrase Attachment (PPA). The results show that MS1 is more accurate than other statistical techniques that select single models for classification and competitive with other successful NLP approaches to PPA disambiguation. The Model Switching method may be preferable to other methods because of its generality (i.e., wide range of applicability) and its competitive accuracy in prediction. It may also be used as an analytical tool to investigate the nature of the domain and the characteristics of the data with the help of generated models. | A Statistical Decision Making Method: A Case Study on Prepositional Phrase Attachment |
d724953 | Compound terms play a surprisingly key role in the organization of lexical ontologies. However, their inclusion forces one to address the issues of completeness and consistency that naturally arise from this organizational role. In this paper we show how creative exploration in the space of literal compounds can reveal not only additional compound terms to systematically balance an ontology, but can also discover new and potentially innovative concepts in their own right. | Creative Discovery in Lexical Ontologies |
d7821398 | This paper clarifies the occurrence factors of commuters unable to return home and the returning-home decision-making at the time of the Great East Japan Earthquake by using Twitter data. First, to extract the behavior data from the tweet data, we identify each user's returning-home behavior using support vector machines. Second, we create non-verbal explanatory factors using geotag data and verbal explanatory factors using tweet data. Then, we model users' returning-home decision-making by using a discrete choice model and clarify the factors quantitatively. Finally, by sensitivity analysis, we show the effects of the existence of emergency evacuation facilities and line of communication. | Returning-Home Analysis in Tokyo Metropolitan Area at the time of the Great East Japan Earthquake using Twitter Data |
d43500893 | Coordination is a complex phenomenon that poses many problems for the parsing of English by computer. This paper examines some of these problems and suggests solutions within the framework of ATN parsing. Examples of complex coordination phenomena, extracted from texts translated by ENGSPAN™, the Pan American Health Organization's English-Spanish machine translation system, are presented. Schemata for extending simple networks to accommodate coordinate constructions are presented, and strategies for parsing these constructions are discussed. Focus is centered on the complications involved in parsing constructions with more than two conjuncts. | COORDINATION: SOME PROBLEMS AND SOLUTIONS FOR THE ANALYSIS OF ENGLISH WITH AN ATN |
d248780121 | The contextualized embeddings obtained from neural networks pre-trained as Language Models (LM) or Masked Language Models (MLM) are not well suited for solving the Lexical Semantic Change Detection (LSCD) task because they are more sensitive to changes in word forms than in word meaning, a property previously known as the word form bias or orthographic bias (Laicher et al., 2021). Unlike many other NLP tasks, it is also not obvious how to fine-tune such models for LSCD. In order to conclude whether there are any differences between senses of a particular word in two corpora, a human annotator or a system shall analyze many examples containing this word from both corpora. This makes annotation of LSCD datasets very labour-consuming. The existing LSCD datasets contain up to 100 words that are labeled according to their semantic change, which is hardly enough for fine-tuning. To solve these problems we fine-tune the XLM-R MLM (Conneau et al., 2020) as part of a gloss-based WSD system on a large WSD dataset in English. Then we employ zero-shot cross-lingual transferability of XLM-R to build the contextualized embeddings for examples in Spanish. In order to obtain the graded change score for each word, we calculate the average distance between our improved contextualized embeddings of its old and new occurrences. For the binary change detection subtask, we apply thresholding to the same scores. Our solution has shown the best results among all other participants in all subtasks except for the optional sense gain detection subtask. | GlossReader at LSCDiscovery: Train to Select a Proper Gloss in English - Discover Lexical Semantic Change in Spanish |
d11124006 | Hospital Acquired Infections (HAI) have a major impact on public health and on related healthcare costs. HAI experts are fighting against this issue, but they struggle to access data. Information systems in hospitals are complex, highly heterogeneous, and generally not convenient for performing real-time surveillance. Developing a tool able to parse patient records in order to automatically detect signs of a possible issue would be a tremendous help for these experts and could allow them to react more rapidly and, as a consequence, to reduce the impact of such infections. Recent advances in Computational Intelligence techniques such as Information Extraction, Risk Pattern Detection in documents, and Decision Support Systems now make it possible to develop such systems. | Natural Language Processing to detect Risk Patterns related to Hospital Acquired Infections |
d15826297 | Our research targets reference relations between descriptions in a script and the actors/actresses who actually appear in the drama scenes corresponding to the scene directions, which form part of the script. In this paper, we first analyze the sentences used as scene directions and classify them. We then propose rules to extract subjects and predicates from those sentences. With the extracted subjects and predicates, we build an existence/action map that describes the situation in each scene. The existence/action map describes scenes very accurately with respect to whether each player appears in a given scene or not. Our experiment shows that recall is around 80% and precision is over 90%, which means that our system for inferring reference relations works well for scene directions. We then develop a scene retrieval system in which this map is used to retrieve scenes from a drama video database according to an input query. We also show some experimental results of our retrieval system. | Scene Direction Based Reference In Drama Scenes |
d6124991 | We provide a simple but novel supervised weighting scheme for adjusting term frequency in tf-idf for sentiment analysis and text classification. We compare our method to baseline weighting schemes and find that it outperforms them on multiple benchmarks. The method is robust and works well on both snippets and longer documents. | Credibility Adjusted Term Frequency: A Supervised Term Weighting Scheme for Sentiment Analysis and Text Classification |
d1938367 | In order to make the growing amount of conceptual knowledge available through ontologies and datasets accessible to humans, NLP applications need access to information on how this knowledge can be verbalized in natural language. One way to provide this kind of information is through ontology lexicons, which, apart from the actual verbalizations in a given target language, can provide further rich linguistic information about them. Compiling such lexicons manually is a very time-consuming task and requires expertise both in Semantic Web technologies and lexicon engineering, as well as a very good knowledge of the target language at hand. In this paper we present an alternative approach to generating ontology lexicons by means of crowdsourcing: We use CrowdFlower to generate a small Japanese ontology lexicon for ten exemplary ontology elements from the DBpedia ontology according to a two-stage workflow, the main underlying idea of which is to turn the task of generating lexicon entries into a translation task; the starting point of this translation task is a manually created English lexicon for DBpedia. Comparison of the results to a manually created Japanese lexicon shows that the presented workflow is a viable option if an English seed lexicon is already available. | Crowdsourcing Ontology Lexicons |
d203579506 | We conduct a manual error analysis of the CoNLL-SIGMORPHON 2017 Shared Task on Morphological Reinflection. In this task, systems are given a word in citation form (e.g., hug) and asked to produce the corresponding inflected form (e.g., the simple past hugged). This design lets us analyze errors much like we might analyze children's production errors. We propose an error taxonomy and use it to annotate errors made by the top two systems across twelve languages. Many of the observed errors are related to inflectional patterns sensitive to inherent linguistic properties such as animacy or affect; many others are failures to predict truly unpredictable inflectional behaviors. We also find nearly one quarter of the residual "errors" reflect errors in the gold data. | Weird Inflects but OK: Making Sense of Morphological Generation Errors |
d53628129 | This paper describes our system for the first and third shared tasks of the third Social Media Mining for Health Applications (SMM4H) workshop, which aims to detect the tweets mentioning drug names and adverse drug reactions. In our system we propose a neural approach with hierarchical tweet representation and multi-head self-attention (HTR-MSA) for both tasks. Our system achieved the first place in both the first and third shared tasks of SMM4H with an F-score of 91.83% and 52.20% respectively. | Detecting Tweets Mentioning Drug Name and Adverse Drug Reaction with Hierarchical Tweet Representation and Multi-Head Self-Attention |
d6372072 | We present a policy-based error analysis approach that demonstrates a limitation to the current commonly adopted paradigm for sentence compression. We demonstrate that these limitations arise from the strong assumption of locality of the decision making process in the search for an acceptable derivation in this paradigm. | Evaluating the Syntactic Transformations in Gold Standard Corpora for Statistical Sentence Compression |
d17506596 | Recently I have been intrigued by the reappearance of an old friend, George Kingsley Zipf, in a number of not entirely expected places. The law named for him is ubiquitous, but Zipf did not actually discover the law so much as provide a plausible explanation. Others have proposed modifications to Zipf's Law, and closer examination uncovers systematic deviations from its normative form. We demonstrate how Zipf's analysis can be extended to include some of these phenomena. | Applications and Explanations of Zipf's Law |
d258463956 | Discontinuity is a nearly universal phenomenon observed in natural languages. Several approaches have been proposed so far by different grammar formalisms but they are widely regarded as distinct approaches owing to their theoretical motivations. This paper proposes the correspondence principle which will enable the representation of discontinuity by way of the unification of the representations of linguistic structures in three grammar formalisms: Phrase Structure Grammar (PSG), Dependency Grammar (DG), Categorial Grammar (CG). The goal is not to unify PSG, DG and CG, but rather to sketch out a way of representing discontinuity by uniting constituency relations (as in PSG), head-dependent relations (as in DG) and functor-argument relations (as in CG) for the encoding of discontinuous expressions in natural languages. The implications for natural language syntax and computational linguistics will be offered towards the end of the paper. | The Representation of Discontinuity and the Correspondence Principle |
d29085815 | Skip N-gram Modeling for Near-Synonym Choice (Near-Synonym) (Pointwise Mutual Information, PMI) N (N-gram) (Skip N-gram) | |
d248779873 | Languages can respond to external events in various ways: the creation of new words or named entities, additional senses might develop for already existing words, or the valence of words can change. In this work, we explore the semantic shift of the Dutch words "natie" ("nation"), "volk" ("people") and "vaderland" ("fatherland") over a period that is known for the rise of nationalism in Europe: 1700-1880 (Jensen, 2016). The semantic change is measured by means of Dynamic Bernoulli Word Embeddings (Rudolph and Blei, 2018), which allow for comparison between word embeddings over different time slices. The word embeddings were generated based on Dutch fiction literature divided over different decades. From the analysis of the absolute drifts, it appears that the word "natie" underwent a relatively small drift. However, the drifts of "vaderland" and "volk" show multiple peaks, culminating around the turn of the nineteenth century. To verify whether this semantic change can indeed be attributed to nationalistic movements, a detailed analysis of the nearest neighbours of the target words is provided. From the analysis, it appears that "natie", "volk" and "vaderland" became more nationalistically loaded over time. | "Vaderland", "Volk" and "Natie": Semantic Change Related to Nationalism in Dutch Literature Between 1700 and 1880 Captured with Dynamic Bernoulli Word Embeddings |
d219309700 | In this paper we investigate the automatic generation of paraphrases by using machine translation techniques. Three contributions we make are the construction of a large paraphrase corpus for English and Dutch, a re-ranking heuristic to use machine translation for paraphrase generation and a proper evaluation methodology. A large parallel corpus is constructed by aligning clustered headlines that are scraped from a news aggregator site. To generate sentential paraphrases we use a standard phrase-based machine translation (PBMT) framework modified with a re-ranking component (henceforth PBMT-R). We demonstrate this approach for Dutch and English and evaluate by using human judgements collected from 76 participants. The judgments are compared to two automatic machine translation evaluation metrics. We observe that as the paraphrases deviate more from the source sentence, the performance of the PBMT-R system degrades less than that of the word substitution baseline system. | Creating and using large monolingual parallel corpora for sentential paraphrase generation |
d16766513 | This paper presents a two-step procedure to extract positive meaning from verbal negation. We first generate potential positive interpretations manipulating syntactic dependencies. Then, we score them according to their likelihood. Manual annotations show that positive interpretations are ubiquitous and intuitive to humans. Experimental results show that dependencies are better suited than semantic roles for this task, and automation is possible. | Understanding Negation in Positive Terms Using Syntactic Dependencies |
d226283785 | A problem in automatically generated stories for image sequences is that they use overly generic vocabulary and phrase structure and fail to match the distributional characteristics of human-generated text. We address this problem by introducing explicit representations for objects and their relations by extracting scene graphs from the images. Utilizing an embedding of this scene graph enables our model to more explicitly reason over objects and their relations during story generation, compared to the global features from an object classifier used in previous work. We apply metrics that account for the diversity of words and phrases of generated stories as well as for reference to narratively-salient image features and show that our approach outperforms previous systems. Our experiments also indicate that our models obtain competitive results on reference-based metrics. | Diverse and Relevant Visual Storytelling with Scene Graph Embeddings |
d14868730 | We describe work aimed at building commonsense knowledge by reading word definitions using deep understanding techniques. The end result is a knowledge base allowing complex concepts to be reasoned about using OWL-DL reasoners. We show that we can use this system to automatically create a mid-level ontology for WordNet verbs that has good agreement with human intuition with respect to both the hypernym and causality relations. We present a detailed error analysis that reveals areas of future work needed to enable high-performance learning of conceptual knowledge by reading. | Automatically Deriving Event Ontologies for a CommonSense Knowledge Base |
d8153742 | ||
d51876123 | The huge cost of creating labeled training data is a common problem for supervised learning tasks such as sentiment classification. Recent studies showed that pretraining with unlabeled data via a language model can improve the performance of classification models. In this paper, we take the concept a step further by using a conditional language model, instead of a language model. Specifically, we address a sentiment classification task for a tweet analysis service as a case study and propose a pretraining strategy with unlabeled dialog data (tweet-reply pairs) via an encoder-decoder model. Experimental results show that our strategy can improve the performance of sentiment classifiers and outperform several state-of-the-art strategies including language model pretraining. | Pretraining Sentiment Classifiers with Unlabeled Dialog Data |
d219300751 | ||
d226283730 | Aspect-oriented Fine-grained Opinion Extraction (AFOE) aims at extracting aspect terms and opinion terms from reviews in the form of opinion pairs or, additionally, extracting the sentiment polarity of aspect terms to form opinion triplets. Because it contains several opinion factors, the complete AFOE task is usually divided into multiple subtasks and carried out in a pipeline. However, pipeline approaches easily suffer from error propagation and inconvenience in real-world scenarios. To this end, we propose a novel tagging scheme, Grid Tagging Scheme (GTS), to address the AFOE task in an end-to-end fashion with only one unified grid tagging task. Additionally, we design an effective inference strategy on GTS to exploit mutual indication between different opinion factors for more accurate extractions. To validate the feasibility and compatibility of GTS, we implement three different GTS models respectively based on CNN, BiLSTM, and BERT, and conduct experiments on the aspect-oriented opinion pair extraction and opinion triplet extraction datasets. Extensive experimental results indicate that GTS models outperform strong baselines significantly and achieve state-of-the-art performance. | Grid Tagging Scheme for Aspect-oriented Fine-grained Opinion Extraction |