_id | text | title |
|---|---|---|
d30715383 | Conventional topic modeling schemes, such as Latent Dirichlet Allocation, are known to perform inadequately when applied to tweets, due to the sparsity of short documents. To alleviate these disadvantages, we apply several pooling techniques, aggregating similar tweets into individual documents, and specifically study the aggregation of tweets sharing authors or hashtags. The results show that aggregating similar tweets into individual documents significantly increases topic coherence. | Twitter Topic Modeling by Tweet Aggregation |
d218960562 | ||
d17647520 | Online mental health forums provide users with an anonymous support platform that is facilitated by moderators responsible for finding and addressing critical posts, especially those related to self-harm. Given the seriousness of these posts, it is important that the moderators are able to locate critical posts quickly in order to respond with timely support. We approached the task of automatically triaging forum posts as a multiclass classification problem. Our model uses a supervised classifier with various features, including lexical, psycholinguistic, and topic modeling features. On a dataset of mental health forum posts from ReachOut.com, our approach identified critical cases with an F-score of over 80%, showing the effectiveness of the model. Among 16 participating teams and 60 total runs, our best run achieved a macro-average F1-score of 41% for the critical categories (the best score among all runs was 42%). | Triaging Mental Health Forum Posts |
d5543799 | Given a set of terms from a given domain, how can we structure them into a taxonomy without manual intervention? This is Task 17 of SemEval 2015. Here we present our simple taxonomy structuring techniques which, despite their simplicity, ranked first in this 2015 benchmark. We use large quantities of text (English Wikipedia) and simple heuristics such as term overlap and document and sentence co-occurrence to produce hypernym lists. We describe these techniques and present an initial evaluation of results. | INRIASAC: Simple Hypernym Extraction Methods |
d7272687 | Rare term vector replacement (RTVR) is a novel technique for dimensionality reduction. In this paper, we introduce an updating algorithm for RTVR. It is capable of updating both the projection matrix for the reduction and the reduced corpus matrix directly, without having to recompute the expensive projection operation. We introduce an effective batch updating algorithm, and present performance measurements on a subset of the Reuters newswire corpus that show that a 12.5% to 50% split of the documents into corpus and update vectors leads to a three to four fold speedup over a complete rebuild. Thus, we have enabled optimized updating for rare term vector replacement. | Updating Rare Term Vector Replacement |
d3781940 | In this paper, we present a comment labeling system based on a deep learning strategy. We treat the answer selection task as a sequence labeling problem and propose recurrent convolutional neural networks to recognize good comments. In the recurrent architecture of our system, our approach uses 2-dimensional convolutional neural networks to learn distributed representations for question-comment pairs, and assigns labels to the comment sequence with a recurrent neural network over the CNN. Compared with the conditional random field based method, our approach achieves better performance on Macro-F1 (53.82%), and achieves the highest accuracy (73.18%) and F1-value (79.76%) on predicting the Good class in this answer selection challenge. | ICRC-HIT: A Deep Learning based Comment Sequence Labeling System for Answer Selection Challenge |
d258967833 | Document-level multi-event extraction aims to extract the structural information from a given document automatically. Most recent approaches usually involve two steps: (1) modeling entity interactions; (2) decoding entity interactions into events. However, such approaches ignore a global view of the inter-dependency of multiple events. Moreover, an event is decoded by iteratively merging its related entities as arguments, which might suffer from error propagation and is computationally inefficient. In this paper, we propose an alternative approach for document-level multi-event extraction with event proxy nodes and Hausdorff distance minimization. The event proxy nodes, representing pseudo-events, are able to build connections with other event proxy nodes, essentially capturing global information. The Hausdorff distance makes it possible to compare the similarity between the set of predicted events and the set of ground-truth events. By directly minimizing Hausdorff distance, the model is trained towards the global optimum directly, which improves performance and reduces training time. Experimental results show that our model outperforms the previous state-of-the-art method in F1-score on two datasets with only a fraction of the training time. | Document-Level Multi-Event Extraction with Event Proxy Nodes and Hausdorff Distance Minimization |
d252819367 | The Visual Dialogue (VD) task has recently received increasing attention in AI research. VD aims to generate multi-round, interactive responses based on the dialog history and image content. Existing textual dialogue models cannot fully understand visual information, resulting in a lack of scene features when communicating with humans continuously. Therefore, how to efficiently fuse multi-modal data features remains a challenge. In this work, we propose a knowledge transfer method with visual prompt (VPTG) fusing multi-modal data, which is a flexible module that can utilize a text-only seq2seq model to handle VD tasks. The VPTG conducts text-image co-learning and multi-modal information fusion with visual prompts and visual knowledge distillation. Specifically, we construct visual prompts from visual representations and then induce sequence-to-sequence (seq2seq) models to fuse visual information and textual contexts by visual-text patterns. Moreover, we also realize visual knowledge transfer through distillation between two different models' text representations, so that the seq2seq model can actively learn visual semantic representations. Extensive experiments on the multi-modal dialogue understanding and generation (MDUG) datasets show the proposed VPTG outperforms other single-modal methods, which demonstrates the effectiveness of visual prompts and visual knowledge transfer. | Knowledge Transfer with Visual Prompt in Multi-modal Dialogue Understanding and Generation |
d13971268 | We have developed a complete spoken dialogue framework that includes rule-based and trainable dialogue managers, speech recognition, spoken language understanding and generation modules, and a comprehensive web visualization interface. | Leveraging POMDPs trained with User Simulations and Rule-based Dialogue Management in a Spoken Dialogue System |
d251490928 | Norwegian has been one of many languages lacking sufficient available text to train quality language models. In an attempt to bridge this gap, we introduce the Norwegian Colossal Corpus (NCC), which comprises 49GB of clean Norwegian textual data containing over 7B words. The NCC is composed of different and varied sources, ranging from books and newspapers to government documents and public reports, showcasing the various uses of the Norwegian language in society. The corpus contains mainly Norwegian Bokmål and Norwegian Nynorsk. Each document in the corpus is tagged with metadata that enables the creation of sub-corpora for specific needs. Its structure makes it easy to combine with large web archives that for licensing reasons could not be distributed together with the NCC. By releasing this corpus openly to the public, we hope to foster the creation of both better Norwegian language models and multilingual language models with support for Norwegian. | The Norwegian Colossal Corpus: A Text Corpus for Training Large Norwegian Language Models |
d233210694 | ||
d250390896 | In this paper, we describe our submission to the misogyny classification challenge at SemEval-2022. We propose two models for the two subtasks of the challenge: The first uses joint image and text classification to classify memes as either misogynistic or not. This model uses a majority voting ensemble structure built on traditional classifiers and additional image information such as age, gender and nudity estimations. The second model uses a RoBERTa classifier on the text transcriptions to additionally identify the type of problematic ideas the memes perpetuate. Our submissions perform above all organizer submitted baselines. For binary misogyny classification, our system achieved the fifth place on the leaderboard, with a macro F1-score of 0.665. For multi-label classification identifying the type of misogyny, our model achieved place 19 on the leaderboard, with a weighted F1-score of 0.637. | QiNiAn at SemEval-2022 Task 5: Multi-Modal Misogyny Detection and Classification |
d51950470 | We tackle four different tasks of non-literal language classification: token- and construction-level metaphor detection, classification of idiomatic use of infinitive-verb compounds, and classification of non-literal particle verbs. One of the tasks operates on the token level, while the three other tasks classify constructions such as "hot topic" or "stehen lassen" (to allow sth. to stand vs. to abandon so.). The two metaphor detection tasks are in English, while the two non-literal language detection tasks are in German. We propose a simple context-encoding LSTM model and show that it outperforms the state-of-the-art on two tasks. Additionally, we experiment with different embeddings for the token-level metaphor detection task and find that 1) their performance varies according to the genre, and 2) Mikolov et al. (2013) embeddings perform best on 3 out of 4 genres, despite being one of the simplest tested models. In summary, we present a large-scale analysis of a neural model for non-literal language classification (i) at different granularities, (ii) in different languages, (iii) over different non-literal language phenomena. This work is licensed under a Creative Commons Attribution 4.0 International License. License details: http://creativecommons.org/licenses/by/4.0/ | One Size Fits All? A simple LSTM for Non-literal Token-and Construction-level Classification |
d11040903 | We present a description of the system submitted to the Semantic Textual Similarity (STS) shared task at SemEval 2016. The task is to assess the degree to which two sentences carry the same meaning. We have designed two different methods to automatically compute a similarity score between sentences. The first method combines a variety of semantic similarity measures as features in a machine learning model. In our second approach, we employ training data from the Interpretable Similarity subtask to create a combined word-similarity measure and assess the importance of both aligned and unaligned words. Finally, we combine the two methods into a single hybrid model. Our best-performing run attains a score of 0.7732 on the 2015 STS evaluation data and 0.7488 on the 2016 STS evaluation data. | NaCTeM at SemEval-2016 Task 1: Inferring sentence-level semantic similarity from an ensemble of complementary lexical and sentence-level features |
d33978990 | This user study reports on an ongoing pilot that aims at using machine translation on a large scale, for the translation of technical documentation for a globally acting automotive supplier. The pilot is conducted by a language service provider and a research institution. First results go beyond expectations. | Towards the Integration of MT into a LSP Translation Workflow |
d6910502 | Systems that automatically discover semantic classes have emerged in part to address the limitations of broad-coverage lexical resources such as WordNet and Cyc. The current state of the art discovers many semantic classes but fails to label their concepts. We propose an algorithm for labeling semantic classes and for leveraging them to extract is-a relationships using a top-down approach. | Automatically Labeling Semantic Classes |
d6186855 | This paper discusses the design, recording and preprocessing of a Czech sign language corpus. The corpus is intended for training and testing of sign language recognition (SLR) systems. The UWB-07-SLR-P corpus contains video data of 4 signers recorded from 3 different perspectives. Two of the perspectives capture the whole body and provide 3D motion data; the third one is focused on the signer's face and provides data for facial expression and lip feature extraction. Each signer performed 378 signs with 5 repetitions. The corpus consists of several types of signs: numbers (35 signs), one- and two-handed finger alphabet (64), town names (35) and other signs (244). Each sign is stored in a separate AVI file. In total, the corpus consists of 21853 video files with a total length of 11.1 hours. Additionally, each sign is preprocessed and basic features such as 3D hand and head trajectories are available. The corpus is mainly focused on feature extraction and isolated SLR rather than continuous SLR experiments. | Collection and Preprocessing of Czech Sign Language Corpus for Sign Language Recognition |
d236477876 | ||
d12786686 | ||
d18597965 | Recently, many machine translation systems have been developed. However, for translation of conversation, correct translation rates and translation quality are particularly low. This is because machine translation systems are not able to generate translation results which fit the context of the conversation. We previously proposed a method of Machine Translation Using Inductive Learning with Genetic Algorithms (GA-ILMT). We compare this system's results to two others that use rule-based translation methods, and evaluate the results of experiments done with GA-ILMT, measuring its performance when applied to travel English. As a result of the evaluation experiments, we confirmed that GA-ILMT can generate translation results which are more appropriate to the context of the conversation. | A STUDY OF PERFORMANCE EVALUATION FOR GA-ILMT USING TRAVEL ENGLISH |
d3011968 | Training efficient statistical approaches for natural language understanding generally requires data with segmental semantic annotations. Unfortunately, building such resources is costly. In this paper, we propose an approach that produces annotations in an unsupervised way. The first step is an implementation of latent Dirichlet allocation that produces a set of topics with probabilities for each topic to be associated with a word in a sentence. This knowledge is then used as a bootstrap to infer a segmentation of a word sentence into topics using either integer linear optimisation or stochastic word alignment models (IBM models) to produce the final semantic annotation. The relation between automatically derived topics and task-dependent concepts is evaluated on a spoken dialogue task with an available reference annotation. | Unsupervised Concept Annotation using Latent Dirichlet Allocation and Segmental Methods |
d258588125 | Large language models (LLMs) have been widely studied for their ability to store and utilize positive knowledge. However, negative knowledge, such as "lions don't live in the ocean", is also ubiquitous in the world but rarely mentioned explicitly in the text. What do LLMs know about negative knowledge? This work examines the ability of LLMs to understand negative commonsense knowledge. We design a constrained keywords-to-sentence generation task (CG) and a Boolean question-answering task (QA) to probe LLMs. Our experiments reveal that LLMs frequently fail to generate valid sentences grounded in negative commonsense knowledge, yet they can correctly answer polar yes-or-no questions. We term this phenomenon the belief conflict of LLMs. Our further analysis shows that statistical shortcuts and negation reporting bias from language modeling pre-training cause this conflict. | Say What You Mean! Large Language Models Speak Too Positively about Negative Commonsense Knowledge |
d232021571 | ||
d218974101 | ||
d193799747 | This paper investigates self-trained cross-show speaker diarization applied to collections of French TV archives, based on an i-vector/PLDA framework. The parameters used for i-vector extraction and PLDA scoring are trained in an unsupervised way, using the data of the collection itself. Performances are compared using combinations of target data and external data for training. The experimental results on two distinct target corpora show that using data from the corpora themselves to perform unsupervised iterative training and domain adaptation of PLDA parameters can improve an existing system trained on external annotated data. Such results indicate that performing speaker indexation on small collections of unlabeled audio archives should only rely on the availability of a sufficient external corpus, which can be specifically adapted to every target collection. We also show that a sufficiently large collection can dispense with the use of such an external corpus. (Originally published in French in Actes de la conférence conjointe JEP-TALN-RECITAL 2016, volume 1: JEP.) | First Investigations on Self-Trained Speaker Diarization |
d1389109 | We propose two models for verbalizing numbers, a key component in speech recognition and synthesis systems. The first model uses an end-to-end recurrent neural network. The second model, drawing inspiration from the linguistics literature, uses finite-state transducers constructed with a minimal amount of training data. While both models achieve near-perfect performance, the latter model can be trained using several orders of magnitude less data than the former, making it particularly useful for low-resource languages. | Minimally Supervised Number Normalization |
d51994591 | A number of different discourse connectives can be used to mark the same discourse relation, but it is unclear what factors affect connective choice. One recent account is the Rational Speech Acts theory, which predicts that speakers try to maximize the informativeness of an utterance such that the listener can interpret the intended meaning correctly. Existing prior work uses referential language games to test the rational account of speakers' production of concrete meanings, such as identification of objects within a picture. Building on the same paradigm, we design a novel Discourse Continuation Game to investigate speakers' production of abstract discourse relations. Experimental results reveal that speakers significantly prefer a more informative connective, in line with predictions of the RSA model. | Do speakers produce discourse connectives rationally? |
d7241136 | We propose a first-ever attempt to employ a Long Short-Term Memory based framework to predict humor in dialogues. We analyze data from a popular TV sitcom, whose canned laughter gives an indication of when the audience would react. We model the setup-punchline relation of conversational humor with a Long Short-Term Memory network, with utterance encodings obtained from a Convolutional Neural Network. Our neural network framework is able to improve the F-score by 8% over a Conditional Random Field baseline. We show how the LSTM effectively models the setup-punchline relation, reducing the number of false positives and increasing the recall. We aim to employ our humor prediction model to build effective empathetic machines able to understand jokes. | A Long Short-Term Memory Framework for Predicting Humor in Dialogues |
d360083 | We propose a complete probabilistic discriminative framework for performing sentence-level discourse analysis. Our framework comprises a discourse segmenter, based on a binary classifier, and a discourse parser, which applies an optimal CKY-like parsing algorithm to probabilities inferred from a Dynamic Conditional Random Field. We show on two corpora that our approach outperforms the state-of-the-art, often by a wide margin. | A Novel Discriminative Framework for Sentence-Level Discourse Analysis |
d373250 | We propose a comparison between various supervised machine learning methods to predict and detect humor in dialogues. We retrieve our humorous dialogues from a very popular TV sitcom: "The Big Bang Theory". We build a corpus where punchlines are annotated using the canned laughter embedded in the audio track. Our comparative study involves a linear-chain Conditional Random Field, a Recurrent Neural Network and a Convolutional Neural Network. Using a combination of word-level and audio frame-level features, the CNN outperforms the other methods, obtaining the best F-score of 68.5%, over 66.5% by the CRF and 52.9% by the RNN. Our work is a starting point for developing more effective machine learning and neural network models for the humor prediction task, as well as developing machines capable of understanding humor in general. | Deep Learning of Audio and Language Features for Humor Prediction |
d8048647 | There has been a great amount of work done in the field of bitext alignment, but the problem of aligning words in massively parallel texts with hundreds or thousands of languages is largely unexplored. While the basic task is similar, there are also important differences in purpose, method and evaluation between the problems. In this work, I present a nonparametric Bayesian model that can be used for simultaneous word alignment in massively parallel corpora. This method is evaluated on a corpus containing 1144 translations of the New Testament. | Bayesian Word Alignment for Massively Parallel Texts |
d41540 | We present an approach for answering questions that span multiple sentences and exhibit sophisticated cross-sentence anaphoric phenomena, evaluating on a rich source of such questions: the math portion of the Scholastic Aptitude Test (SAT). By using a tree transducer cascade as its basic architecture, our system (called EUCLID) propagates uncertainty from multiple sources (e.g. coreference resolution or verb interpretation) until it can be confidently resolved. Experiments show the first-ever results (43% recall and 91% precision) on SAT algebra word problems. We also apply EUCLID to the public Dolphin algebra question set, and improve the state-of-the-art F1-score from 73.9% to 77.0%. | Beyond Sentential Semantic Parsing: Tackling the Math SAT with a Cascade of Tree Transducers |
d6479002 | | Machine Translation Using Abductive Inference |
d18068032 | Several natural language annotation schemas have been proposed for different natural language understanding tasks. In this paper we present a hierarchical and recursive tagset for annotating natural language recipes. Our recipe annotation tagset is developed to capture both syntactic and semantic information in the text. First, we propose our hierarchical recursive tagset that captures cooking attributes and relationships among them. Furthermore, we develop different heuristics to automatically annotate natural language recipes using our proposed tagset. These heuristics use surface-level and syntactic information from the text and the association between words. We are able to annotate the recipe text with 91% accuracy in an ideal situation. | Hierarchical Recursive Tagset for Annotating Cooking Recipes |
d174799628 | Individual differences in speakers are reflected in their language use as well as in their interests and opinions. Characterizing these differences can be useful in human-computer interaction, as well as analysis of human-human conversations. In this work, we introduce a neural model for learning a dynamically updated speaker embedding in a conversational context. Initial model training is unsupervised, using context-sensitive language generation as an objective, with the context being the conversation history. Further finetuning can leverage task-dependent supervised training. The learned neural representation of speakers is shown to be useful for content ranking in a socialbot and dialog act prediction in human-human conversations. | A Dynamic Speaker Model for Conversational Interactions |
d237055466 | ||
d246352 | Word graphs are able to represent a large number of different utterance hypotheses in a very compact manner. However, usually they contain a huge amount of redundancy in terms of word hypotheses that cover almost identical intervals in time. We address this problem by introducing hypergraphs for speech processing. Hypergraphs can be classified as an extension to word graphs and charts, their edges possibly having several start and end vertices. By converting ordinary word graphs to hypergraphs one can reduce the number of edges considerably. We define hypergraphs formally, present an algorithm to convert word graphs into hypergraphs and state consistency properties for edges and their combination. Finally, we present some empirical results concerning graph size and parsing efficiency. | Time Mapping with Hypergraphs |
d216086600 | ||
d12245632 | We present a discriminative learning method to improve the consistency of translations in phrase-based Statistical Machine Translation (SMT) systems. Our method is inspired by Translation Memory (TM) systems which are widely used by human translators in industrial settings. We constrain the translation of an input sentence using the most similar 'translation example' retrieved from the TM. Differently from previous research which used simple fuzzy match thresholds, these constraints are imposed using discriminative learning to optimise the translation performance. We observe that using this method can benefit the SMT system by not only producing consistent translations, but also improved translation outputs. We report a 0.9 point improvement in terms of BLEU score on English-Chinese technical documents. | Consistent Translation using Discriminative Learning: A Translation Memory-inspired Approach |
d53361527 | The paper reports on exploring various machine learning techniques and a range of textual and meta-data features to train classifiers for linking related event templates automatically extracted from online news. With the best model using textual features only, we achieved 94.7% (92.9%) F1 score on the GOLD (SILVER) dataset. These figures were further improved to 98.6% (GOLD) and 97% (SILVER) F1 score by adding meta-data features, mainly thanks to the strong discriminatory power of automatically extracted geographical information related to events. This work is licensed under a Creative Commons Attribution 4.0 International Licence. | On Training Classifiers for Linking Event Templates |
d248810900 | Notwithstanding recent advances, syntactic generalization remains a challenge for text decoders. While some studies showed gains from incorporating source-side symbolic syntactic and semantic structure into text generation Transformers, very little work addressed the decoding of such structure. We propose a general approach for tree decoding using a transition-based approach. Examining the challenging test case of incorporating Universal Dependencies syntax into machine translation, we present substantial improvements on test sets that focus on syntactic generalization, while presenting improved or comparable performance on standard MT benchmarks. Further qualitative analysis addresses cases where syntactic generalization in the vanilla Transformer decoder is inadequate and demonstrates the advantages afforded by integrating syntactic information. | Enhancing the Transformer Decoder with Transition-based Syntax |
d15797110 | Deep Random Walk (DeepWalk) can learn a latent space representation for describing the topological structure of a network. However, for relational network classification, DeepWalk can be suboptimal as it lacks a mechanism to optimize the objective of the target task. In this paper, we present Discriminative Deep Random Walk (DDRW), a novel method for relational network classification. By solving a joint optimization problem, DDRW can learn the latent space representations that well capture the topological structure and meanwhile are discriminative for the network classification task. Our experimental results on several real social networks demonstrate that DDRW significantly outperforms DeepWalk on multilabel network classification tasks, while retaining the topological structure in the latent space. DDRW is stable and consistently outperforms the baseline methods by various percentages of labeled data. DDRW is also an online method that is scalable and can be naturally parallelized. | Discriminative Deep Random Walk for Network Classification |
d42933 | Among researchers interested in computational models of discourse, there has been a long-standing debate between proponents of approaches based on domain-independent rhetorical relations, and those who subscribe to approaches based on intentionality. In this paper, we argue that the main theories representing these two approaches, RST (Mann and Thompson 1988) and G&S (Grosz and Sidner 1986), make similar claims about how speakers' intentions determine a structure of their discourse. The similarity occurs because the nucleus-satellite relation among text spans in RST corresponds to the dominance relation among intentions in G&S. Building on this similarity, we sketch a partial mapping between the two theories to show that the main points of the two theories are equivalent. Furthermore, the additional claims found in only RST or only G&S are largely consistent. The issue of what structure is determined by semantic (domain) relations in the discourse and how this structure might be related to the intentional structure is discussed. We suggest the synthesis of the two theories would be useful to researchers in both natural language interpretation and generation. | Toward a Synthesis of Two Accounts of Discourse Structure |
d6004168 | We develop a recursive neural network (RNN) to extract answers to arbitrary natural language questions from supporting sentences, by training on a crowdsourced data set (to be released upon presentation). The RNN defines feature representations at every node of the parse trees of questions and supporting sentences, when applied recursively, starting with token vectors from a neural probabilistic language model. In contrast to prior work, we fix neither the types of the questions nor the forms of the answers; the system classifies tokens to match a substring chosen by the question's author. Our classifier decides to follow each parse tree node of a support sentence or not, by classifying its RNN embedding together with those of its siblings and the root node of the question, until reaching the tokens it selects as the answer. A novel co-training task for the RNN, on subtree recognition, boosts performance, along with a scheme to consistently handle words that are not well-represented in the language model. On our data set, we surpass an open source system epitomizing a classic "pattern bootstrapping" approach to question answering. | Answer Extraction by Recursive Parse Tree Descent |
d3204901 | We consider the use of language models whose size and accuracy are intermediate between different order n-gram models. Two types of models are studied in particular. Aggregate Markov models are class-based bigram models in which the mapping from words to classes is probabilistic. Mixed-order Markov models combine bigram models whose predictions are conditioned on different words. Both types of models are trained by Expectation-Maximization (EM) algorithms for maximum likelihood estimation. We examine smoothing procedures in which these models are interposed between different order n-grams. This is found to significantly reduce the perplexity of unseen word combinations. | Aggregate and mixed-order Markov models for statistical language processing
d219303680 | Situation: Guidelines for an Actual Constraint-Based Approach. Modern linguistic theories often make use of the notion of constraint to represent information. This notion allows a fine-grained representation of information, a clear distinction between linguistic objects and their properties, and better declarativity. Several works try to take advantage of a constraint-based implementation (see for example [Maruyama90], [Carpenter95], [Duchier99]). However, the parsing process cannot be interpreted as an actual constraint satisfaction one. This problem mainly comes from the generative conception of grammars in the linguistic analysis. Indeed, in constraint-based theories, constraints can appear at different levels: lexical entries, grammar rules, universal principles. However, during a parse, one has first to select a local tree and then to verify that this tree satisfies the different constraints. This problem comes from the generative interpretation of the relation between grammars and languages. In this case, the notion of derivation is central and parsing consists in finding a derivation generating an input. We then propose a new formalism called Property Grammars, representing the linguistic information by means of constraints. Since these constraints constitute an actual system, it becomes possible to consider parsing as a satisfaction process. An optimal use of constraints should follow some requirements. In particular, all linguistic information has to be represented by means of constraints. This information constitutes a system of constraints; all the constraints are thus at the same level, and the order of verification of the constraints is not relevant. Encapsulation is another important characteristic, which stipulates that a constraint must represent homogeneous information. The last important point concerns the notion of grammaticality. In the particular problem of parsing, finding an exact solution consists in associating a syntactic structure to a given input. In the case of generative approaches, this amounts to finding a derivation from a distinguished symbol to this input. However, the question when parsing real natural language inputs should not be grammaticality, but the possibility of providing some information about the input. We therefore propose to replace the notion of grammaticality with that of characterization, which is much more general: a characterization is the state of the constraint system for a given input. | PROPERTY GRAMMARS: A SOLUTION FOR PARSING WITH CONSTRAINTS
d8969587 | The easy-first non-directional dependency parser has demonstrated its advantage over transition-based dependency parsers, which parse sentences from left to right. This work investigates the easy-first method on Chinese POS tagging, dependency parsing, and joint tagging and dependency parsing. In particular, we generalize the easy-first dependency parsing algorithm to a general framework and apply this framework to Chinese POS tagging and dependency parsing. We then propose the first joint tagging and dependency parsing algorithm under the easy-first framework. We train the joint model with both a supervised objective and an additional loss that relates to only one of the individual tasks (either tagging or parsing). In this way, we can bias the joint model towards the preferred task. Experimental results show that both the tagger and the parser achieve state-of-the-art accuracy and run fast. Our joint model achieves a tagging accuracy of 94.27, which is the best result reported so far. | Easy-First Chinese POS Tagging and Dependency Parsing
d11825156 | Extraction of entities from ad creatives is an important problem that can benefit many computational advertising tasks. Supervised and semi-supervised solutions rely on labeled data which is expensive, time consuming, and difficult to procure for ad creatives. A small set of manually derived constraints on feature expectations over unlabeled data can be used to partially and probabilistically label large amounts of data. Utilizing recent work in constraint-based semi-supervised learning, this paper injects lightweight supervision specified as these "constraints" into a semi-Markov conditional random field model of entity extraction in ad creatives. Relying solely on the constraints, the model is trained on a set of unlabeled ads using an online learning algorithm. We demonstrate significant accuracy improvements on a manually labeled test set as compared to a baseline dictionary approach. We also achieve accuracy that approaches a fully supervised classifier. | Minimally-Supervised Extraction of Entities from Text Advertisements
d250390563 | Current virtual assistant (VA) platforms are beholden to the limited number of languages they support. Every component, such as the tokenizer and intent classifier, is engineered for specific languages in these intricate platforms. Thus, supporting a new language in such platforms is a resource-intensive operation requiring expensive re-training and re-designing. In this paper, we propose a benchmark for evaluating language-agnostic intent classification, the most critical component of VA platforms. To ensure the benchmarking is challenging and comprehensive, we include 29 public and internal datasets across 10 low-resource languages and evaluate various training and testing settings with consideration of both accuracy and training time. The benchmarking result shows that Watson Assistant, among 7 commercial VA platforms and pre-trained multilingual language models (LMs), demonstrates close-to-best accuracy with the best accuracy-training time trade-off. | Benchmarking Language-agnostic Intent Classification for Virtual Assistant Platforms
d5286012 | In an EBMT system, we will meet the word sense disambiguation problem. The disambiguation methods used at present cannot be used easily in EBMT. We propose a new method for word sense disambiguation in EBMT based on a language model of the target language. Its main idea is that a proper word sense can make the whole sentence fluent. We use a language model to measure this fluency, and use a dynamic programming method to compute the proper word sense sequence in EBMT. It has the virtue of being easy to use and does not need extra linguistic knowledge; moreover, its general performance can be improved by using more target language resources for training. Experiments show that our method works well. | Make Word Sense Disambiguation in EBMT Practical
d7149733 | We describe a simple and efficient Java object model and application programming interface (API) for (possibly multi-modal) annotated natural language corpora. Corpora are represented as elements like Sentences, Turns, Utterances, Words, Gestures and Markables. The API allows linguists to access corpora in terms of these discourse-level elements, i.e. at a conceptual level they are familiar with, with the flexibility offered by a general purpose programming language. It is also a contribution to corpus standardization efforts because it is based on a straightforward and easily extensible data model which can serve as a target for conversion of different corpus formats. | An API for Discourse-level Access to XML-encoded Corpora
d246702343 | Universal adversarial texts (UATs) refer to short pieces of text units that can largely affect the predictions of Natural Language Processing (NLP) models. Recent studies on universal adversarial attacks require the availability of validation/test data which may not always be available in practice. In this paper, we propose two types of Data-Free Adjusted Gradient (DFAG) attacks to show that it is possible to generate effective UATs with manually crafted examples. Based on the proposed DFAG attacks, we explore the vulnerability of commonly used NLP models from two perspectives: network architecture and pre-trained embedding. The empirical results on three text classification datasets show that: 1) CNN-based and LSTM models are more vulnerable to UATs than self-attention models; 2) the vulnerability/robustness difference between CNN/LSTM models and self-attention models could be attributed to whether or not they rely on training data artifacts for predictions; and 3) the pre-trained embeddings could expose vulnerability to both UAT and transferred UAT attacks. | Exploring the Vulnerability of Natural Language Processing Models via Universal Adversarial Texts
d261494545 | We approach the task of assessing the suitability of a source text for translation by transferring the knowledge from established MT evaluation metrics to a model able to predict MT quality a priori from the source text alone. To open the door to experiments in this regard, we depart from reference English-German parallel corpora to build a corpus of 14,253 source text-quality score tuples. The tuples include four state-of-the-art metrics: cushLEPOR, BERTScore, COMET, and TransQuest. With this new resource at hand, we fine-tune XLM-RoBERTa, both in a single-task and a multi-task setting, to predict these evaluation scores from the source text alone. Results for this methodology are promising, with the single-task model able to approximate well-established MT evaluation and quality estimation metrics, without looking at the actual machine translations, achieving low Root Mean Square Error values in the [0.1-0.2] range and Pearson's correlation scores up to 0.688. | Return to the Source: Assessing Machine Translation Suitability
d14711142 | This paper proposes to apply machine learning techniques to the task of combining outputs of multiple LVCSR models. The proposed technique has advantages over that by voting schemes such as ROVER, especially when the majority of participating models are not reliable. In this machine learning framework, as features of machine learning, information such as the model IDs which output the hypothesized word are useful for improving the word recognition rate. Experimental results show that the combination results achieve a relative word error reduction of up to 39 % against the best performing single model and that of up to 23 % against ROVER. We further empirically show that it performs better when LVCSR models to be combined are chosen so as to cover as many correctly recognized words as possible, rather than choosing models in descending order of their word correct rates. | An Empirical Study on Multiple LVCSR Model Combination by Machine Learning |
d252762446 | Seq2seq models have been shown to struggle with compositional generalisation, i.e. generalising to new and potentially more complex structures than seen during training. Taking inspiration from grammar-based models that excel at compositional generalisation, we present a flexible end-to-end differentiable neural model that composes two structural operations: a fertility step, which we introduce in this work, and a reordering step based on previous work. To ensure differentiability, we use the expected value of each step, which we compute using dynamic programming. Our model outperforms seq2seq models by a wide margin on challenging compositional splits of realistic semantic parsing tasks that require generalisation to longer examples. It also compares favourably to other models targeting compositional generalisation. | Compositional Generalisation with Structured Reordering and Fertility Layers
d199379356 | ||
d259095770 | Content Warning: This paper contains examples of misgendering and erasure that could be offensive and potentially triggering. Gender bias in language technologies has been widely studied, but research has mostly been restricted to a binary paradigm of gender. It is essential also to consider non-binary gender identities, as excluding them can cause further harm to an already marginalized group. In this paper, we comprehensively evaluate popular language models for their ability to correctly use English gender-neutral pronouns (e.g., singular they, them) and neo-pronouns (e.g., ze, xe, thon) that are used by individuals whose gender identity is not represented by binary pronouns. We introduce MISGENDERED, a framework for evaluating large language models' ability to correctly use preferred pronouns, consisting of (i) instances declaring an individual's pronoun, followed by a sentence with a missing pronoun, and (ii) an experimental setup for evaluating masked and auto-regressive language models using a unified method. When prompted out-of-the-box, language models perform poorly at correctly predicting neo-pronouns (averaging 7.6% accuracy) and gender-neutral pronouns (averaging 31.0% accuracy). This inability to generalize results from a lack of representation of non-binary pronouns in training data and memorized associations. Few-shot adaptation with explicit examples in the prompt improves the performance but plateaus at only 45.4% for neo-pronouns. We release the full dataset, code, and demo at https://tamannahossainkay.github.io/misgendered/. * Last two authors contributed equally. | MISGENDERED: Limits of Large Language Models in Understanding Pronouns
d14181397 | Retrieving the most significant paragraph in a newspaper article can act as a kind of summarization. It can give the human reader some hints on the contents of the article and help him to decide whether it deserves a full reading or not. It may also act as a filter for a robust natural language understanding system, to extract relevant information from that paragraph in order to enable conceptual information retrieval. Taking a newspaper article and a base corpus, word co-occurrences with higher resolving power are identified. These co-occurrences are used to establish links between the paragraphs of the article. The paragraph which presents the larger number of links to other paragraphs is considered a most significant one. Though designed and tested for the Portuguese language, the statistical nature of our proposal should ensure its portability to other languages. | Statistical methods for retrieving most significant paragraphs in newspaper articles
d7356424 | Sentiment lexicons have been leveraged as a useful source of features for sentiment analysis models, leading to the state-of-the-art accuracies. On the other hand, most existing methods use sentiment lexicons without considering context, typically taking the count, sum of strength, or maximum sentiment scores over the whole input. We propose a context-sensitive lexicon-based method based on a simple weighted-sum model, using a recurrent neural network to learn the sentiment strength, intensification, and negation of lexicon sentiments in composing the sentiment value of sentences. Results show that our model can not only learn such operation details, but also give significant improvements over state-of-the-art recurrent neural network baselines without lexical features, achieving the best results on a Twitter benchmark. | Context-Sensitive Lexicon Features for Neural Sentiment Analysis
d250390663 | In this short paper, we compare existing value systems and approaches in NLP and HCI for collecting narrative data. Building on these parallel discussions, we shed light on the challenges facing some popular NLP dataset types, which we discuss in relation to widely-used narrative-based HCI research methods, and we highlight points where NLP methods can broaden qualitative narrative studies. In particular, we point towards contextuality, positionality, dataset size, and open research design as central points of difference and windows for collaboration when studying narratives. Through the use case of narratives, this work contributes to a larger conversation regarding the possibilities for bridging NLP and HCI through speculative mixed-methods. | Narrative Datasets through the Lenses of NLP and HCI
d49325612 | Semantic parsing is the task of transducing natural language (NL) utterances into formal meaning representations (MRs), commonly represented as tree structures. Annotating NL utterances with their corresponding MRs is expensive and time-consuming, and thus the limited availability of labeled data often becomes the bottleneck of data-driven, supervised models. We introduce STRUCTVAE, a variational auto-encoding model for semi-supervised semantic parsing, which learns both from limited amounts of parallel data, and readily-available unlabeled NL utterances. STRUCTVAE models latent MRs not observed in the unlabeled data as tree-structured latent variables. Experiments on semantic parsing on the ATIS domain and Python code generation show that with extra unlabeled data, STRUCTVAE outperforms strong supervised models. | STRUCTVAE: Tree-structured Latent Variable Models for Semi-supervised Semantic Parsing
d218973875 | ||
d253018728 | Image-to-text tasks, such as open-ended image captioning and controllable image description, have received extensive attention for decades. Here, we further advance this line of work by presenting Visual Spatial Description (VSD), a new perspective for image-to-text toward spatial semantics. Given an image and two objects inside it, VSD aims to produce one description focusing on the spatial perspective between the two objects. Accordingly, we manually annotate a dataset to facilitate the investigation of the newly-introduced task and build several benchmark encoder-decoder models by using VL-BART and VL-T5 as backbones. In addition, we investigate pipeline and joint end-to-end architectures for incorporating visual spatial relationship classification (VSRC) information into our model. Finally, we conduct experiments on our benchmark dataset to evaluate all our models. Results show that our models are impressive, providing accurate and human-like spatial-oriented text descriptions. Meanwhile, VSRC has great potential for VSD, and the joint end-to-end architecture is the better choice for their integration. We make the dataset and codes public for research purposes. | Visual Spatial Description: Controlled Spatial-Oriented Image-to-Text Generation
d8317576 | This paper describes the development of QuestionBank, a corpus of 4000 parse-annotated questions for (i) use in training parsers employed in QA, and (ii) evaluation of question parsing. We present a series of experiments to investigate the effectiveness of QuestionBank as both an exclusive and supplementary training resource for a state-of-the-art parser in parsing both question and non-question test sets. We introduce a new method for recovering empty nodes and their antecedents (capturing long distance dependencies) from parser output in CFG trees using LFG f-structure reentrancies. Our main findings are (i) using QuestionBank training data improves parser performance to 89.75% labelled bracketing f-score, an increase of almost 11% over the baseline; (ii) back-testing experiments on non-question data (Penn-II WSJ Section 23) shows that the retrained parser does not suffer a performance drop on non-question material; (iii) ablation experiments show that the size of training material provided by QuestionBank is sufficient to achieve optimal results; (iv) our method for recovering empty nodes captures long distance dependencies in questions from the ATIS corpus with high precision (96.82%) and low recall (39.38%). In summary, QuestionBank provides a useful new resource in parser-based QA research. | QuestionBank: Creating a Corpus of Parse-Annotated Questions
d257921458 | Human-in-the-loop topic modelling incorporates users' knowledge into the modelling process, enabling them to refine the model iteratively. Recent research has demonstrated the value of user feedback, but there are still issues to consider, such as the difficulty in tracking changes, comparing different models and the lack of evaluation based on real-world examples of use. We developed a novel, interactive human-in-the-loop topic modelling system with a user-friendly interface that enables users to compare and record every step they take, and a novel topic words suggestion feature to help users provide feedback that is faithful to the ground truth. Our system also supports not only what traditional topic models can do, i.e., learning the topics from the whole corpus, but also targeted topic modelling, i.e., learning topics for specific aspects of the corpus. In this article, we provide an overview of the system and present the results of a series of user studies designed to assess the value of the system in progressively more realistic applications of topic modelling. | A User-Centered, Interactive, Human-in-the-Loop Topic Modelling System
d251903830 | Semantically meaningful sentence embeddings are important for numerous tasks in natural language processing. To obtain such embeddings, recent studies explored the idea of utilizing synthetically generated data from pretrained language models (PLMs) as a training corpus. However, PLMs often generate sentences much different from the ones written by humans. We hypothesize that treating all these synthetic examples equally for training deep neural networks can have an adverse effect on learning semantically meaningful embeddings. To analyze this, we first train a classifier that identifies machine-written sentences, and observe that the linguistic features of the sentences identified as written by a machine are significantly different from those of human-written sentences. Based on this, we propose a novel approach that first trains the classifier to measure the importance of each sentence. The distilled information from the classifier is then used to train a reliable sentence embedding model. Through extensive evaluation on four real-world datasets, we demonstrate that our model trained on synthetic data generalizes well and outperforms the existing baselines. | Reweighting Strategy based on Synthetic Data Identification for Sentence Similarity
d226291898 | ||
d258463984 | Paraphrase detection allows identifying the degree of likelihood between source and suspect sentences. It is a critical machine learning problem in computational linguistics, due to the expression variability and ambiguities especially in the Arabic language. Previous neural models have yielded promising results, but are computationally expensive and cannot directly align long-form sentences expressing different meanings. To address this issue, a Siamese neural network is proposed for Arabic paraphrase detection based on deep contextual semantic textual similarity. Although pre-trained word embedding models have advanced NLP, they ignore the contextual information and meaning within the sentence. In this paper, the potential of deep contextualized word representations was first investigated using Arabic Bidirectional Encoder Representations from Transformers (AraBERT) as an embedding layer. Then, a Long Short-Term Memory (LSTM) network modeled high-level semantic knowledge. Finally, cosine distance identified the degree of semantic textual similarity. Using our own generated corpus, experiments showed that the proposed model outperformed state-of-the-art methods in terms of F1 score. | Siamese AraBERT-LSTM Model based Approach for Arabic Paraphrase Detection
d8464601 | This paper describes the design of speech act tags for spoken dialogue corpora and its evaluation. Compared with the tags used for conventional corpus annotation, the proposed speech intention tag is specialized enough to determine system operations. However, describing detailed information increases the number of tag types, which makes tag selection ambiguous. Therefore, we have designed an organization of tags, focusing on layered tagging and context-dependent tagging. Over 35,000 utterance units in the CIAIR corpus have been tagged by hand. To evaluate the reliability of the intention tag, a tagging experiment was conducted. The reliability of tagging is evaluated by comparing the tagging among several annotators using the kappa value. As a result, we confirmed that reliable data could be built. This corpus with speech intention tags could be widely used, from basic research to applications of spoken dialogue. In particular, it would play an important role from the viewpoint of practical use of spoken dialogue corpora. | Layered Speech-Act Annotation for Spoken Dialogue Corpus
d8208398 | Parallel corpora are critical resources for machine translation research and development since parallel corpora contain translation equivalences of various granularities. Manual annotation of word alignment is of significance to provide gold-standard for developing and evaluating both example-based machine translation model and statistical machine translation model. This paper presents the work of word alignment annotation in the NICT Japanese-Chinese parallel corpus, which is constructed at the National Institute of Information and Communications Technology (NICT). We describe the specification of word alignment annotation and the tools specially developed for the manual annotation. The manual annotation on 17,000 sentence pairs has been completed. We examined the manually annotated word alignment data and extracted translation knowledge from the word aligned corpus. | Word Alignment Annotation in a Japanese-Chinese Parallel Corpus |
d52141295 | 'Authenticity' has long been a primary concern of sociolinguistic analyses. Early sociolinguistic work insisted that data collected should be 'spontaneous and naturally occurring', a methodological dictum that was, in large part, borrowed from dialectology's search for 'authentic' Englishes that were thought to be endangered by modernization and, later, urbanization. In many ways, authentic Englishes are imagined to represent both literally and imaginatively 'authentic identities' of the speakers of those languages. The emphasis on 'authentic' Englishes significantly coincides with the development across a number of English-speaking communities of a 'Standard Language Ideology', which promotes myths of 'purity' and 'timelessness' of the standard language. As standardized Englishes are usually adopted as media languages --and frequently named after the media that use them, such as 'BBC English' or 'American Broadcast Standard' --these media languages risk losing features that may signal 'authentic' language or identities. The pursuit of authenticity in media Englishes is amplified in Englishes of pop culture, where authenticity must be manufactured as part of the process of creation. This essay will explore the historical basis for the processes that manufacture authenticity in English varieties as a normal recurring process of standardization in a pluricentric model of world Englishes. | Assessing Authenticity in Media Englishes and the Englishes of Popular Culture
d19018265 | Fluency and continuity properties are essential in synthesizing a high quality singing voice. In order to synthesize a smooth and continuous singing voice, the Hidden Markov Model-based synthesis approach is employed in this study to construct a Mandarin singing voice synthesis system. The system is designed to generate Mandarin songs with arbitrary lyrics and melody in a certain pitch range. In this study, a singing voice database is designed and collected, considering the phonetic coverage of Mandarin singing voices. Synthesis units and a question set are defined carefully and tailored to meet the minimum requirement for Mandarin singing voice synthesis. In addition, pitch-shift pseudo data extension and vibrato creation are applied to obtain more natural synthesized singing voices. The evaluation results show that the system, based on tailored synthesis units and the question set, can improve the quality and intelligibility of the synthesized singing voice. Using pitch-shift pseudo data and vibrato creation can further improve the quality and naturalness of the synthesized singing voices. | HMM-based Mandarin Singing Voice Synthesis Using Tailored Synthesis Units and Question Sets
d21695849 | One of the major challenges in the field of Natural Language Processing (NLP) is the handling of idioms; seemingly ordinary phrases which could be further conjugated or even spread across the sentence to fit the context. Since idioms are a part of natural language, the ability to tackle them brings us closer to creating efficient NLP tools. This paper presents a multilingual parallel idiom dataset for seven Indian languages in addition to English and demonstrates its usefulness for two NLP applications -Machine Translation and Sentiment Analysis. We observe significant improvement for both the subtasks over baseline models trained without employing the idiom dataset. | No more beating about the bush: A Step towards Idiom Handling for Indian Language NLP |
d41049793 | This paper proposes a novel scheme that enhances the modulation spectrum of speech features in noisy speech recognition via non-negative matrix factorization (NMF). In the presented approach, we apply NMF to obtain a set of non-negative basis spectra vectors, derived from clean speech, to represent the components important for speech recognition. The difference compared to the conventional NMF-based scheme that leverages iterative search to update the full-band modulation spectra is twofold: first, we apply the orthogonal projection to update the low sub-band modulation spectra. Second, we process the low half-band of the | Sub-band modulation spectrum factorization in robust speech recognition
d593447 | Existing sentiment classifiers usually work for only one specific language, and different classification models are used in different languages. In this paper we aim to build a universal sentiment classifier with a single classification model in multiple different languages. In order to achieve this goal, we propose to learn multilingual sentiment-aware word embeddings simultaneously based only on the labeled reviews in English and unlabeled parallel data available in a few language pairs. It is not required that the parallel data exist between English and any other language, because the sentiment information can be transferred into any language via pivot languages. We present the evaluation results of our universal sentiment classifier in five languages, and the results are very promising even when the parallel data between English and the target languages are not used. Furthermore, the universal single classifier is compared with a few cross-language sentiment classifiers relying on direct parallel data between the source and target languages, and the results show that the performance of our universal sentiment classifier is very promising compared to that of different cross-language classifiers in multiple target languages. | Towards a Universal Sentiment Classifier in Multiple languages
d1862839 | SRI: Description of the JV-FASTUS System Used for MUC-5 | |
d2926844 | Semantic space models represent the meaning of a word as a vector in high-dimensional space. They offer a framework in which the meaning representation of a word can be computed from its context, but the question remains how they support inferences. While there has been some work on paraphrase-based inferences in semantic space, it is not clear how semantic space models would support inferences involving hyponymy, like horse ran → animal moved. In this paper, we first discuss what a point in semantic space stands for, contrasting semantic space with Gärdenforsian conceptual space. Building on this, we propose an extension of the semantic space representation from a point to a region. We present a model for learning a region representation for word meaning in semantic space, based on the fact that points at close distance tend to represent similar meanings. We show that this model can be used to predict, with high precision, when a hyponymy-based inference rule is applicable. Moving beyond paraphrase-based and hyponymy-based inference rules, we finally discuss in what way semantic space models can support inferences. | Supporting inferences in semantic space: representing words as regions
d3102580 | The development of natural language processing systems is currently driven to a large extent by measures of knowledge-base size and coverage of individual phenomena relative to a corpus. While these measures have led to significant advances for knowledge-lean applications, they do not adequately motivate progress in computational semantics leading to the development of large-scale, general purpose NLP systems. In this article, we argue that depth of semantic representation is essential for covering a broad range of phenomena in the computational treatment of language and propose depth as an important additional dimension for measuring the semantic coverage of NLP systems. We propose an operationalization of this measure and show how to characterize an NLP system along the dimensions of size, corpus coverage, and depth. The proposed framework is illustrated using several prominent NLP systems. We hope the preliminary proposals made in this article will lead to prolonged debates in the field and will continue to be refined. | Measuring Semantic Coverage
d53072814 | This work aims to detect specific attributes of a place (e.g., if it has a romantic atmosphere, or if it offers outdoor seating) from its user reviews via distant supervision: without direct annotation of the review text, we use the crowdsourced attribute labels of the place as labels of the review text. We then use review-level attention to pay more attention to those reviews related to the attributes. The experimental results show that our attention-based model predicts attributes for places from reviews with over 98% accuracy. The attention weights assigned to each review provide explanation of capturing relevant reviews. | Distantly Supervised Attribute Detection from Reviews
d6048999 | The P_k evaluation metric, initially proposed by Beeferman, Berger, and Lafferty (1997), is becoming the standard measure for assessing text segmentation algorithms. However, a theoretical analysis of the metric finds several problems: the metric penalizes false negatives more heavily than false positives, over-penalizes near misses, and is affected by variation in segment size distribution. We propose a simple modification to the P_k metric that remedies these problems. This new metric, called WindowDiff, moves a fixed-size window across the text and penalizes the algorithm whenever the number of boundaries within the window does not match the true number of boundaries for that window of text. | A Critique and Improvement of an Evaluation Metric for Text Segmentation
d174799670 | Advances in the automated detection of offensive Internet postings make this mechanism very attractive to social media companies, who are increasingly under pressure to monitor and action activity on their sites. However, these advances also have important implications as a threat to the fundamental right of free expression. In this article, we analyze which Twitter posts could actually be deemed offenses under German criminal law. German law follows the deductive method of the Roman law tradition based on abstract rules as opposed to the inductive reasoning in Anglo-American common law systems. This allows us to show how legal conclusions can be reached and implemented without relying on existing court decisions. We present a data annotation schema, consisting of a series of binary decisions, for determining whether a specific post would constitute a criminal offense. This schema serves as a step towards an inexpensive creation of a sufficient amount of data for automated classification. We find that the majority of posts deemed morally offensive actually do not constitute a criminal offense and still contribute to public discourse. Furthermore, laymen can provide sufficiently reliable data to an expert reference but are, for instance, more lenient in the interpretation of what constitutes a disparaging statement. | From Legal to Technical Concept: Towards an Automated Classification of German Political Twitter Postings as Criminal Offenses |
d21705089 | Automatic summarization has so far focused on datasets of ten to twenty rather short documents, typically news articles. But automatic systems could in theory analyze hundreds of documents from a wide range of sources and provide an overview to the interested reader. Such a summary would ideally present the most general issues of a given topic and allow for more in-depth information on specific aspects within said topic. In this paper, we present a new approach for creating hierarchical summarization corpora from large, heterogeneous document collections. We first extract relevant content using crowdsourcing and then ask trained annotators to order the relevant information hierarchically. This yields tree structures covering the specific facets discussed in a document collection. Our resulting corpus is freely available and can be used to develop and evaluate hierarchical summarization systems. | Beyond Generic Summarization: A Multi-faceted Hierarchical Summarization Corpus of Large Heterogeneous Data |
d42910362 | Translation and Communication: Translating and the Computer 6 | |
d11614121 | In this paper we describe a method for selecting pairs of parallel documents (documents that are a translation of each other) from a large collection of documents obtained from the web. Our approach is based on a coverage score that reflects the number of distinct bilingual phrase pairs found in each pair of documents, normalized by the total number of unique phrases found in them. Since parallel documents tend to share more bilingual phrase pairs than non-parallel documents, our alignment algorithm selects pairs of documents with the maximum coverage score from all possible pairings involving either one of the two documents. | Shared Task Papers |
d765547 | Statistical MT has made great progress in the last few years, but current translation models are weak on re-ordering and target language fluency. Syntactic approaches seek to remedy these problems. In this paper, we take the framework for acquiring multi-level syntactic translation rules of (Galley et al., 2004) from aligned tree-string pairs, and present two main extensions of their approach: first, instead of merely computing a single derivation that minimally explains a sentence pair, we construct a large number of derivations that include contextually richer rules, and account for multiple interpretations of unaligned words. Second, we propose probability estimates and a training procedure for weighting these rules. We contrast different approaches on real examples, show that our estimates based on multiple derivations favor phrasal re-orderings that are linguistically better motivated, and establish that our larger rules provide a 3.63 BLEU point increase over minimal rules. | Scalable Inference and Training of Context-Rich Syntactic Translation Models |
d2455968 | This article describes the competitive grouping algorithm at the core of our Integrated Segmentation and Alignment (ISA) model. ISA extracts phrase pairs from a bilingual corpus without requiring the precalculated word alignment as many other phrase alignment models do. Experiments conducted within the WPT-05 shared task on statistical machine translation demonstrate the simplicity and effectiveness of this approach. | Competitive Grouping in Integrated Phrase Segmentation and Alignment Model |
d2519809 | The paper presents the Constructive Dialogue Model as a new approach to formulate system goals in intelligent dialogue systems. The departure point is in general communicative principles which constrain cooperative and coherent communication. Dialogue participants are engaged in a cooperative task whereby a model of the joint purpose is constructed. Contributions are planned as reactions to the changing context, and no dialogue grammar is needed. Also speech act classification is abandoned, in favour of contextual reasoning and rationality considerations. | Goal Formulation based on Communicative Principles |
d830228 | Cross-Language Information Retrieval (CLIR) systems use dictionaries for information retrieval. However, out-of-vocabulary (OOV) terms cannot be found in dictionaries. Although many researchers in the past have endeavored to solve the OOV term translation problem, little attention has been paid to hybrid translations such as "α1-antitrypsin deficiency (α1-抗胰蛋白酶缺乏症)". This paper presents a novel OOV term translation mining approach, which proposes a new adaptive rules system for hybrid translations and a new recursive feature selection method for supervised machine learning. We evaluate the proposed method on English-Chinese OOV term translation. Our experiments show that the adaptive rules system and recursive feature selection with Bayesian Net can significantly outperform existing supervised models. | Web based English-Chinese OOV term translation using Adaptive rules and Recursive feature selection
d14430950 | DICTIONARIES, DICTIONARY GRAMMARS AND DICTIONARY ENTRY PARSING | |
d226238689 | ||
d237099294 | ||
d236477909 | Stance detection determines whether the author of a text is in favor of, against or neutral to a specific target and provides valuable insights into important events such as presidential election. However, progress on stance detection has been hampered by the absence of large annotated datasets. In this paper, we present P-STANCE, a large stance detection dataset in the political domain, which contains 21,574 labeled tweets. We provide a detailed description of the newly created dataset and develop deep learning models on it. Our best model achieves a macro-average F1-score of 80.53%, which we improve further by using semi-supervised learning. Moreover, our P-STANCE dataset can facilitate research in the fields of cross-domain stance detection such as cross-target stance detection where a classifier is adapted from a different but related target. We publicly release our dataset and code. | P-Stance: A Large Dataset for Stance Detection in Political Domain
d2363925 | Many languages of the world (some with very large numbers of native speakers) are not yet supported on computers. In this paper we first present a simple method to provide an extra layer of easily customizable language-encoding support for less computerized languages. We then describe an editor called Sanchay Editor, which uses this method and also has many other facilities useful for those using less computerized languages for simple text editing or for Natural Language Processing purposes, especially for annotation. | A Mechanism to Provide Language-Encoding Support and an NLP Friendly Editor |
d233365241 | This paper presents CovRelex, a scientific paper retrieval system targeting entities and relations via relation extraction on COVID-19 scientific papers. This work aims at building a system supporting users efficiently in acquiring knowledge across a huge number of COVID-19 scientific papers published rapidly. Our system can be accessed via https://www.jaist.ac.jp/is/labs/nguyen-lab/systems/covrelex/. | CovRelex: A COVID-19 Retrieval System with Relation Extraction
d1400617 | ||
d9998098 | This paper describes a prototype news analysis system which classifies and indexes news stories in real time. The system extracts stories from a newswire, parses the sentences of each story, and then maps the syntactic structures into a concept base. This process results in an index containing both general categories and specific details. Central to this system is a Government-Binding parser which processes each sentence of a news item. The system is completely modular and can be interfaced with different news feeds or concept bases. | A News Analysis System
d15392152 | This paper introduces GAF, a grounded annotation framework to represent events in a formal context that can represent information from both textual and extra-textual sources. GAF makes a clear distinction between mentions of events in text and their formal representation as instances in a semantic layer. Instances are represented by RDF compliant URIs that are shared across different research disciplines. This allows us to complete textual information with external sources and facilitates reasoning. The semantic layer can integrate any linguistic information and is compatible with previous event representations in NLP. Through a use case on earthquakes in Southeast Asia, we demonstrate GAF flexibility and ability to reason over events with the aid of extra-linguistic resources. | GAF: A Grounded Annotation Framework for Events |
d1183768 | Document-level sentiment analysis can benefit from fine-grained subjectivity, so that sentiment polarity judgments are based on the relevant parts of the document. While fine-grained subjectivity annotations are rarely available, encouraging results have been obtained by modeling subjectivity as a latent variable. However, latent variable models fail to capitalize on our linguistic knowledge about discourse structure. We present a new method for injecting linguistic knowledge into latent variable subjectivity modeling, using discourse connectors. Connector-augmented transition features allow the latent variable model to learn the relevance of discourse connectors for subjectivity transitions, without subjectivity annotations. This yields significantly improved performance on document-level sentiment analysis in English and Spanish. We also describe a simple heuristic for automatically identifying connectors when no predefined list is available. | Discourse Connectors for Latent Subjectivity in Sentiment Analysis
d44075056 | Word vector models learn about semantics through corpora. Convolutional Neural Networks (CNNs) can learn about semantics through images. At the most abstract level, some of the information in these models must be shared, as they model the same real-world phenomena. Here we employ techniques previously used to detect semantic representations in the human brain to detect semantic representations in CNNs. We show the accumulation of semantic information in the layers of the CNN, and discover that, for misclassified images, the correct class can be recovered in intermediate layers of a CNN. | The Emergence of Semantics in Neural Network Representations of Visual Information |
d17626021 | In the present study we explore various methods for improving the transition-based parsing of coordinated structures in French. Features targeting syntactic parallelism in coordinated structures are used as additional features when training the statistical model, but also as an efficient means to find and correct annotation errors in training corpora. In terms of annotation, we compare four different annotations for coordinated structures, demonstrate the importance of globally unambiguous annotation for punctuation, and discuss the decision process of a transition-based parser for coordination, explaining why certain annotations consistently out-perform others. We compare the gains provided by different annotation standards, by targeted features, and by using a wider beam. Our best configuration gives a 37.28% reduction in the coordination error rate, when compared to the baseline SPMRL test corpus for French after manual corrections. | Improving the parsing of French coordination through annotation standards and targeted features