Columns: _id (string, length 4 to 10); text (string, length 0 to 18.4k); title (string, length 0 to 8.56k)
d239016535
We present substructure distribution projection (SUBDP), a technique that projects a distribution over structures in one domain to another by projecting substructure distributions separately. Models for the target domain can then be trained, using the projected distributions as soft silver labels. We evaluate SUBDP on zero-shot cross-lingual dependency parsing, taking dependency arcs as substructures: we project the predicted dependency arc distributions in the source language(s) to target language(s), and train a target language parser on the resulting distributions. Given an English treebank as the only source of human supervision, SUBDP achieves better unlabeled attachment score than all prior work on the Universal Dependencies v2.2 (Nivre et al., 2020) test set across eight diverse target languages, as well as the best labeled attachment score on six languages. In addition, SUBDP improves zero-shot cross-lingual dependency parsing with very few (e.g., 50) supervised bitext pairs, across a broader range of target languages.
Substructure Distribution Projection for Zero-Shot Cross-Lingual Dependency Parsing
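A minimal sketch of the arc-distribution projection idea described in the abstract above, not the authors' implementation: it assumes a soft source-to-target word-alignment matrix is available and projects a source head distribution through it to obtain soft silver labels for the target sentence. Shapes and names are illustrative.

```python
import numpy as np

def project_arc_distribution(src_arc_probs, align):
    """
    src_arc_probs: (S, S), src_arc_probs[h, d] = P(head=h | dependent=d) in the source sentence.
    align: (S, T) soft alignment, align[s, t] ~ P(target word t | source word s), rows sum to 1.
    Returns a (T, T) projected head distribution over the target sentence.
    """
    # Project heads and dependents independently: P_tgt = A^T P_src A
    tgt = align.T @ src_arc_probs @ align
    # Renormalise each dependent's head distribution to obtain soft silver labels.
    return tgt / np.clip(tgt.sum(axis=0, keepdims=True), 1e-12, None)

# Tiny usage example with random placeholder inputs.
S, T = 5, 6
P = np.random.dirichlet(np.ones(S), size=S).T   # columns are head distributions per dependent
A = np.random.dirichlet(np.ones(T), size=S)     # soft word alignment
print(project_arc_distribution(P, A).shape)     # (6, 6)
```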
d236460034
Pre-trained language models (PLMs) have achieved great success in natural language processing. Most PLMs follow the default setting of architecture hyper-parameters (e.g., the hidden dimension is a quarter of the intermediate dimension in feed-forward sub-networks) in BERT (Devlin et al., 2019). Few studies have explored the design of architecture hyper-parameters in BERT, especially for the more efficient PLMs with tiny sizes, which are essential for practical deployment on resource-constrained devices. In this paper, we adopt one-shot Neural Architecture Search (NAS) to automatically search architecture hyper-parameters. Specifically, we carefully design the one-shot learning techniques and the search space to provide an adaptive and efficient way to develop tiny PLMs for various latency constraints. We name our method AutoTinyBERT and evaluate its effectiveness on the GLUE and SQuAD benchmarks. Extensive experiments show that our method outperforms both the SOTA search-based baseline (NAS-BERT) and the SOTA distillation-based methods (such as DistilBERT, TinyBERT, MiniLM and MobileBERT). In addition, based on the obtained architectures, we propose a more efficient development method that is even faster than the development of a single PLM.
AutoTinyBERT: Automatic Hyper-parameter Optimization for Efficient Pre-trained Language Models
d253761982
Topic-sensitive query set expansion is an important area of research that aims to improve search results for information retrieval. It is particularly crucial for queries related to sensitive and emerging topics. In this work, we describe a method for query set expansion about emerging topics using vector space interpolation. We use a transformer model called OPTIMUS, which is suitable for vector space manipulation due to its variational autoencoder nature. One of our proposed methods, Dirichlet interpolation, shows promising results for query expansion. Our methods effectively generate new queries about the sensitive topic by incorporating set-level diversity, which is not captured by traditional sentence-level augmentation methods such as paraphrasing or back-translation.
Vector Space Interpolation for Query Expansion
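A minimal sketch of Dirichlet interpolation over latent query vectors, in the spirit of the abstract above. It assumes the seed queries have already been encoded into a VAE latent space (e.g., by OPTIMUS); encoding and decoding with the actual model are out of scope, and the vectors below are random placeholders.

```python
import numpy as np

def dirichlet_interpolate(latents, n_samples=5, alpha=1.0, seed=0):
    """latents: (k, d) latent codes of k seed queries -> (n_samples, d) new codes."""
    rng = np.random.default_rng(seed)
    k = latents.shape[0]
    # Each new latent is a convex combination of the seed latents,
    # with mixing weights drawn from a symmetric Dirichlet(alpha).
    weights = rng.dirichlet(alpha * np.ones(k), size=n_samples)
    return weights @ latents

seed_latents = np.random.default_rng(1).normal(size=(4, 32))  # 4 seed queries, 32-dim latents
expanded = dirichlet_interpolate(seed_latents, n_samples=3)
print(expanded.shape)  # (3, 32); decode each vector with the VAE to obtain new queries
```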
d1280040
Grammars that expect words from the lexicon may be at odds with the transparent projection of syntactic and semantic scope relations of smaller units. We propose a morphosyntactic framework based on Combinatory Categorial Grammar that provides flexible constituency, flexible category consistency, and lexical projection of morphosyntactic properties and attachment to grammar in order to establish a morphemic grammar-lexicon. These mechanisms provide enough expressive power in the lexicon to formulate semantically transparent specifications without the necessity to confine structure forming to words and phrases. For instance, bound morphemes as lexical items can have phrasal scope or word scope, independent of their attachment characteristics but consistent with their semantics. The controls can be attuned in the lexicon to language-particular properties. The result is a transparent interface of inflectional morphology, syntax, and semantics. We present a computational system and show the application of the framework to English and Turkish. For convenience, we call a grammar that expects words from the lexicon a lexemic grammar and a grammar that expects morphemes a morphemic grammar. A lexemic PSG provides a lexical interface for inflected words (X⁰s) such that a regular grammar subcomponent handles lexical insertion at X⁰. In (4d), the right conjunct çocuk-lar-a is analyzed (as a regular grammar). Assuming a syncategorematic coordination schema, that is, X → X and X, the N⁰ in the left and right conjuncts of this example would not be of the same type. Revising the coordination schema such that only the root features coordinate would not be a solution either. In (4e), the relation of possession that is marked on the right conjunct must be carried over to the left conjunct as well. What is required for these examples is that the syntactic constituent X in the schema be analyzed as X-PLU(-POSS)-DAT, after N⁰ and N⁰ coordination. What we need then is not a lexemic but a morphemic organization in which bracketing of free and bound morphemes is regulated in syntax. The lexicon, of course, must now supply the ingredients of a morphosyntactic calculus. This leads to a theory in which semantic composition parallels morphosyntactic combination by virtue of bound morphemes' being able to pick their domains just like words (above X⁰, if needed). A comparison of English and Turkish in this regard is noteworthy. The English relative pronouns that/whom and the Turkish relative participle -dig-i would have exactly the same semantics when the latter is granted a representational status in the lexicon (see Section 6). Furthermore, rule-based PSGs project a rigid notion of surface constituency. Steedman (2000) argued, however, that syntactic processes such as identical element deletion under coordination call for flexible constituency, such as SO (subject-object) in the SVO & SO gapping pattern of English and SV (subject-verb) constituency in the OSV & SV pattern of Turkish. Nontraditional constituents are also needed in specifying semantically transparent constituency of words, affixes, clitics, and phrases. Constraint-based PSGs such as HPSG appeal to coindexation and feature passing via unification, rather than movement, to deal with such processes. HPSG also makes the commitment that inflectional morphology is internal to the lexicon, handled either by lexical rules (Pollard and Sag 1994) or by lexical inheritance (Miller and Sag 1997).
We look at (5c) to highlight a problem with the stem-and-inflections view. As words enter syntax fully inflected, the sign of the verb ver-dig-i in the relative clause (5c) would be as in (6a), in which the SUBCAT list of the verb stem is, as specified in the lexical entry for ver, unsaturated. The participle adds coindexation in MOD|...|INDEX. The HPSG analysis of this example would be as in Figure 1. Although passing the agreement features of the head separately (Sehitoglu 1996) solves the case problem alluded to in (5c), structure sharing of the NP-dat with the SLASH, INDEX, and CONTENT features of ver-dig-i is needed for semantics (GIVEE), and this conflicts with the head features of the topmost NP-acc in the tree. The relative participle as a lexical entry (e.g., (6b)) would resolve the problem with subcategorization because its SUBCAT list is empty (like the relative pronoun that in English); hence there would be no indirect dependence of the nonlocal SLASH feature and the local SUBCAT feature via semantics (CONTENT). Such morphemic alternatives are not considered in HPSG, however, and would require a significant revision of the theory. Furthermore, HPSG's lexical
The Combinatory Morphemic Lexicon
d231986079
This paper presents the system that we propose for the Reliable Intelligence Identification on Vietnamese Social Network Sites (ReINTEL) task of the Vietnamese Language and Speech Processing 2020 (VLSP 2020) Shared Task. In this task, VLSP 2020 provides a dataset with approximately 6,000 training news/posts annotated with reliable or unreliable labels, and a test set consisting of 2,000 examples without labels. In this paper, we conduct experiments on different transfer learning models, namely bert4news and PhoBERT, fine-tuned to predict whether the news is reliable or not. In our experiments, we achieve an AUC score of 94.52% on the private test set from ReINTEL's organizers.
ReINTEL Challenge 2020: Exploiting Transfer Learning Models for Reliable Intelligence Identification on Vietnamese Social Network Sites
d227905369
d259370692
News reports about emerging issues often include several conflicting story lines. Individual stories can be conceptualized as samples from an underlying mixture of competing narratives.
My side, your side and the evidence: Discovering aligned actor groups and the narratives they weave
d241583770
In this paper, we present a first attempt at enriching German Universal Dependencies (UD) treebanks with enhanced dependencies. Similarly to the converter for English (Schuster and Manning, 2016), we develop a rule-based system for deriving enhanced dependencies from the basic layer, covering three linguistic phenomena: relative clauses, coordination, and raising/control. For quality control, we manually correct or validate a set of 196 sentences, finding that around 90% of added relations are correct. Our data analysis reveals that difficulties arise mainly due to inconsistencies in the basic layer annotations. We show that the English system is in general applicable to German data, but that adapting to the particularities of the German treebanks and language increases precision and recall by up to 10%. Comparing the application of our converter on gold standard dependencies vs. automatic parses, we find that F1 drops by around 10% in the latter setting due to error propagation. Finally, an enhanced UD parser trained on a converted treebank performs poorly when evaluated against our annotations, indicating that more work remains to be done to create gold standard enhanced German treebanks.
A Corpus Study of Creating Rule-Based Enhanced Universal Dependencies for German
d26604137
We present a new method for the joint task of tagging and non-projective dependency parsing. We demonstrate its usefulness with an application to discontinuous phrase-structure parsing where decoding lexicalized spines and syntactic derivations is performed jointly. The main contributions of this paper are (1) a reduction from joint tagging and non-projective dependency parsing to the Generalized Maximum Spanning Arborescence problem, and (2) a novel decoding algorithm for this problem through Lagrangian relaxation. We evaluate this model and obtain state-of-the-art results despite strong independence assumptions.
Efficient Discontinuous Phrase-Structure Parsing via the Generalized Maximum Spanning Arborescence
d248721759
Neural abstractive summarization models are prone to generate summaries which are factually inconsistent with their source documents. Previous work has introduced the task of recognizing such factual inconsistency as a downstream application of natural language inference (NLI). However, state-of-the-art NLI models perform poorly in this context due to their inability to generalize to the target task. In this work, we show that NLI models can be effective for this task when the training data is augmented with high-quality task-oriented examples. We introduce Falsesum, a data generation pipeline leveraging a controllable text generation model to perturb human-annotated summaries, introducing varying types of factual inconsistencies. Unlike previously introduced document-level NLI datasets, our generated dataset contains examples that are diverse and inconsistent yet plausible. We show that models trained on a Falsesum-augmented NLI dataset improve the state-of-the-art performance across four benchmarks for detecting factual inconsistency in summarization.
Falsesum: Generating Document-level NLI Examples for Recognizing Factual Inconsistency in Summarization
d222378698
Narrative generation is an open-ended NLP task in which a model generates a story given a prompt. The task is similar to neural response generation for chatbots; however, innovations in response generation are often not applied to narrative generation, despite the similarity between these tasks. We aim to bridge this gap by applying and evaluating advances in decoding methods for neural response generation to neural narrative generation. In particular, we employ GPT-2 and perform ablations across nucleus sampling thresholds and diverse decoding hyperparameters, specifically maximum mutual information, analyzing results over multiple criteria with automatic and human evaluation. We find that (1) nucleus sampling is generally best with thresholds between 0.7 and 0.9; (2) a maximum mutual information objective can improve the quality of generated stories; and (3) established automatic metrics do not correlate well with human judgments of narrative quality on any qualitative metric.
Decoding Methods for Neural Narrative Generation
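A minimal sketch of nucleus (top-p) sampling, the decoding method ablated in the abstract above. The probability vector here is a toy example; in practice it would be the softmax output of a model such as GPT-2.

```python
import numpy as np

def nucleus_sample(probs, p=0.9, rng=None):
    """Sample a token index from the smallest set of tokens whose cumulative mass >= p."""
    rng = rng or np.random.default_rng()
    order = np.argsort(probs)[::-1]              # most probable tokens first
    cumulative = np.cumsum(probs[order])
    cutoff = np.searchsorted(cumulative, p) + 1  # size of the nucleus
    nucleus = order[:cutoff]
    renormalized = probs[nucleus] / probs[nucleus].sum()
    return rng.choice(nucleus, p=renormalized)

toy_probs = np.array([0.45, 0.25, 0.15, 0.10, 0.05])  # distribution over a 5-token vocabulary
print(nucleus_sample(toy_probs, p=0.9, rng=np.random.default_rng(0)))
```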
d214356
The problem of parsing ambiguous structures concerns (i) their representation and (ii) the specification of mechanisms allowing one to delay and control their evaluation. We first propose to use a particular kind of disjunctions called controlled disjunctions: these formulae allow the representation and the implementation of specific constraints that can occur between ambiguous values. But efficient control of ambiguous structures also has to take into account lexical as well as syntactic information concerning the object. We then propose the use of unary quasi-trees specifying constraints at these different levels. The two devices allow an efficient implementation of the control of ambiguity. Moreover, they are independent of any particular formalism and can be used regardless of the linguistic theory.
Parsing Ambiguous Structures using Controlled Disjunctions and Unary Quasi-Trees
d232135081
One strategy for facilitating reading comprehension is to present information in a question-and-answer format. We demo a system that integrates the tasks of question answering (QA) and question generation (QG) in order to produce Q&A items that convey the content of multi-paragraph documents. We report some experiments for QA and QG that yield improvements on both tasks, and assess how they interact to produce a list of Q&A items for a text. The demo is accessible at qna.sdl.com.
AnswerQuest: A System for Generating Question-Answer Items from Multi-Paragraph Documents
d14661696
We investigated the speech recognition of a person with an articulation disorder resulting from the athetoid type of cerebral palsy. The articulation of the first utterance tends to become unstable due to strain on speech-related muscles, which degrades speech recognition. In this paper, we introduce a robust feature extraction method based on PCA (Principal Component Analysis) and RP (Random Projection) for dysarthric speech recognition. PCA-based feature extraction reduces the influence of the unstable speaking style caused by the athetoid symptoms. Moreover, we investigate the feasibility of random projection for feature transformation in order to gain more performance on the dysarthric speech recognition task. Its effectiveness is confirmed by word recognition experiments.
Robust Feature Extraction to Utterance Fluctuation of Articulation Disorders Based on Random Projection
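A minimal sketch of the feature-transformation pipeline described in the abstract above: PCA followed by random projection applied to acoustic feature vectors. The MFCC-like matrix is simulated, and the dimensions and component counts are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.random_projection import GaussianRandomProjection

# Simulated acoustic features: 500 frames of 39-dimensional MFCC-like vectors.
frames = np.random.default_rng(0).normal(size=(500, 39))

pca = PCA(n_components=30)                                  # suppress unstable directions of variation
rp = GaussianRandomProjection(n_components=20, random_state=0)  # further random feature transformation

features = rp.fit_transform(pca.fit_transform(frames))
print(features.shape)  # (500, 20) transformed features fed to the recognizer
```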
d222272191
Natural language understanding (NLU) and natural language generation (NLG) hold a strong dual relationship, where NLU aims at predicting semantic labels based on natural language utterances and NLG does the opposite. Prior work mainly focused on exploiting this duality during model training in order to obtain models with better performance. However, given the fast-growing scale of models in the current NLP area, it can be difficult to retrain whole NLU and NLG models. To better address the issue, this paper proposes to leverage the duality at the inference stage without the need for retraining. Experiments on three benchmark datasets demonstrate the effectiveness of the proposed method in both NLU and NLG, showing great potential for practical usage.
Dual Inference for Improving Language Understanding and Generation
d251280556
Traditional Chinese Medicine (TCM) is a natural, safe, and effective therapy that has spread and been applied worldwide. The unique TCM diagnosis and treatment system requires a comprehensive analysis of a patient's symptoms hidden in clinical records written in free text. Prior studies have shown that this system can be informatized and made more intelligent with the aid of artificial intelligence (AI) technology, such as natural language processing (NLP). However, existing datasets are of neither sufficient quality nor sufficient quantity to support the further development of data-driven AI technology in TCM. Therefore, in this paper, we focus on the core task of the TCM diagnosis and treatment system, syndrome differentiation (SD), and we introduce the first public large-scale benchmark for SD, called TCM-SD. Our benchmark contains 54,152 real-world clinical records covering 148 syndromes. Furthermore, we collect a large-scale unlabelled textual corpus in the field of TCM and propose a domain-specific pre-trained language model, called ZY-BERT. We conducted experiments using deep neural networks to establish a strong performance baseline, reveal various challenges in SD, and demonstrate the potential of domain-specific pre-trained language models. Our study and analysis reveal opportunities for incorporating computer science and linguistics knowledge to explore the empirical validity of TCM theories.
TCM-SD: A Benchmark for Probing Syndrome Differentiation via Natural Language Processing
d13336402
In this paper we present ONTOSCORE, a system for scoring sets of concepts on the basis of an ontology. We apply our system to the task of scoring alternative speech recognition hypotheses (SRH) in terms of their semantic coherence. We conducted an annotation experiment and showed that human annotators can reliably differentiate between semantically coherent and incoherent speech recognition hypotheses.
Semantic Coherence Scoring Using an Ontology
d208100108
d7526189
Rhetorical moves are a useful framework for analyzing the hidden rhetorical organization of research papers in teaching academic writing. We propose a method for learning to classify the moves of a given set of sentences in an academic paper. In our approach, we learn a set of move-specific common patterns, which are characteristic of moves, to help annotate sentences with moves. The method involves using a statistical method to find common patterns in a corpus of research papers, assigning moves to the patterns, using the patterns to annotate sentences in a corpus, and training a move classifier on the annotated sentences. At run-time, sentences are transformed into feature vectors and their moves are predicted. We present a prototype system, MoveTagger, that applies the method to a corpus of research papers. The proposed method outperforms previous research with significantly higher accuracy.
Automatic Move Analysis of Research Articles for Assisting Writing
d234742170
This paper describes the systems submitted to IWSLT 2021 by the Volctrans team. We participate in the offline speech translation and text-to-text simultaneous translation tracks. For offline speech translation, our best end-to-end model achieves 7.9 BLEU improvements over the benchmark on the MuST-C test set and even approaches the results of a strong cascade solution. For text-to-text simultaneous translation, we explore the best practice to optimize the wait-k model. As a result, our final submitted systems exceed the benchmark by around 7 BLEU under the same latency regime. We release our code and model to facilitate both future research works and industrial applications.
The Volctrans Neural Speech Translation System for IWSLT 2021
d14156544
Spoken monologues feature greater sentence length and structural complexity than do spoken dialogues. To achieve high parsing performance for spoken monologues, it could prove effective to simplify the structure by dividing a sentence into suitable language units. This paper proposes a method for dependency parsing of Japanese monologues based on sentence segmentation. In this method, the dependency parsing is executed in two stages: at the clause level and the sentence level. First, the dependencies within a clause are identified by dividing a sentence into clauses and executing stochastic dependency parsing for each clause. Next, the dependencies over clause boundaries are identified stochastically, and the dependency structure of the entire sentence is thus completed. An experiment using a spoken monologue corpus shows this method to be effective for efficient dependency parsing of Japanese monologue sentences.
Dependency Parsing of Japanese Spoken Monologue Based on Clause Boundaries
d226964921
In this paper we present the first work on the automated scoring of mindreading ability in middle childhood and early adolescence. We create MIND-CA, a new corpus of 11,311 question-answer pairs in English from 1,066 children aged 7 to 14. We perform machine learning experiments and carry out extensive quantitative and qualitative evaluation. We obtain promising results, demonstrating the applicability of state-of-the-art NLP solutions to a new domain and task.
"What is on your mind?" Automated Scoring of Mindreading in Childhood and Early Adolescence
d222142461
Humans use language to accomplish a wide variety of tasks, asking for and giving advice being one of them. In online advice forums, advice is mixed in with non-advice, like emotional support, and is sometimes stated explicitly, sometimes implicitly. Understanding the language of advice would equip systems with a better grasp of language pragmatics; practically, the ability to identify advice would drastically increase the efficiency of advice-seeking online, as well as advice-giving in natural language generation systems. We present a dataset in English from two Reddit advice forums, r/AskParents and r/needadvice, annotated for whether sentences in posts contain advice or not. Our analysis reveals rich linguistic phenomena in advice discourse. We present preliminary models showing that while pre-trained language models are able to capture advice better than rule-based systems, advice identification is challenging, and we identify directions for future research.
Help! Need Advice on Identifying Advice
d202661057
In this paper, we study how word-like units are represented and activated in a recurrent neural model of visually grounded speech. The model used in our experiments is trained to project an image and its spoken description into a common representation space. We show that a recurrent model trained on spoken sentences implicitly segments its input into word-like units and reliably maps them to their correct visual referents. We introduce a methodology originating from linguistics to analyse the representations learned by neural networks, the gating paradigm, and show that the correct representation of a word is only activated if the network has access to the first phoneme of the target word, suggesting that the network does not rely on a global acoustic pattern. Furthermore, we find that not all speech frames (MFCC vectors in our case) play an equal role in the final encoded representation of a given word, but that some frames have a crucial effect on it. Finally, we suggest that word representations could be activated through a process of lexical competition.
Word Recognition, Competition, and Activation in a Model of Visually Grounded Speech
d455602
India is a country with a diverse culture, languages and varied heritage, and it is therefore very rich in languages and their dialects. In such a multilingual society, a dictionary in multiple languages becomes a necessity and one of the major resources to support a language. There are dictionaries for many Indian languages, but very few are available in multiple languages. WordNet is one of the most prominent lexical resources in the field of Natural Language Processing. IndoWordNet is an integrated multilingual WordNet for Indian languages. These WordNet resources are used by researchers to experiment and resolve issues in multilinguality through computation. However, there are few cases where WordNet is used by non-researchers or the general public. This paper focuses on providing an online interface, the IndoWordNet Dictionary, to non-researchers as well as researchers. It is developed to render multilingual WordNet information for 19 Indian languages in a dictionary format. The WordNet information is rendered in multiple views: sense-based, thesaurus-based, word-usage-based, and language-based. English WordNet information is also rendered using this interface. The IndoWordNet Dictionary will help users look up the meanings of a word in multiple Indian languages.
IndoWordNet Dictionary: An Online Multilingual Dictionary using IndoWordNet
d235313804
Language modeling (LM) for automatic speech recognition (ASR) does not usually incorporate utterance-level contextual information. For some domains like voice assistants, however, additional context, such as the time at which an utterance was spoken, provides a rich input signal. We introduce an attention mechanism for training neural speech recognition language models on both text and non-linguistic contextual data. When applied to a large de-identified dataset of utterances collected by a popular voice assistant platform, our method reduces perplexity by 7.0% relative over a standard LM that does not incorporate contextual information. When evaluated on utterances extracted from the long tail of the dataset, our method improves perplexity by 9.0% relative over a standard LM and by over 2.8% relative when compared to a state-of-the-art model for contextual LM.
Attention-based Contextual Language Model Adaptation for Speech Recognition
d235266085
Is bias amplified when neural machine translation (NMT) models are optimized for speed and evaluated on generic test sets using BLEU? We investigate architectures and techniques commonly used to speed up decoding in Transformer-based models, such as greedy search, quantization, average attention networks (AANs) and shallow decoder models, and show their effect on gendered noun translation. We construct a new gender bias test set, SimpleGEN, based on gendered noun phrases in which there is a single, unambiguous, correct answer. While we find minimal overall BLEU degradation as we apply speed optimizations, we observe that gendered noun translation performance degrades at a much faster rate.
Gender Bias Amplification During Speed-Quality Optimization in Neural Machine Translation
d15227523
This paper describes the TALP participation in the WMT13 evaluation campaign. Our participation is based on the combination of several statistical machine translation systems built on standard phrase-based Moses. Variations include techniques such as morphology generation, training sentence filtering, and domain adaptation through unit derivation. The results show a coherent improvement on TER, METEOR, NIST, and BLEU scores when compared to our baseline system.
The TALP-UPC Phrase-based Translation Systems for WMT13: System Combination with Morphology Generation, Domain Adaptation and Corpus Filtering
d51869635
Data protection concerns and a lack of knowledge of how to implement a practical HLT solution that delivers meaningful value have historically prevented public sector clients from using MT, CAT, and TMS. We will show how our team successfully addressed data protection and client objectives to develop a practical, domain-specific PEMT solution for a public sector client who is now transforming how they use HLT. Key takeaways: how to develop a customized PEMT solution for the public sector; how to build and optimize TM corpora for statistical MT training; and how to gauge technological and procedural efficiencies for overall program success and scalability.
PEMT for the Public Sector: Evolution of a Solution
d1801348
In this paper we examine the representativeness of the EventCorefBank (ECB) (Bejan and Harabagiu, 2010) with regard to the language population of large-volume streams of news. The ECB corpus is one of the data sets used for evaluation of the task of event coreference resolution. Our analysis shows that the ECB in most cases covers one seminal event per domain, which considerably simplifies the event, and hence language, diversity that one comes across in the news. We augmented the corpus with a new component, consisting of 502 texts describing different instances of the event types already captured by the 43 topics of the ECB, making it more representative of news articles on the web. The new "ECB+" corpus is available for further research.
Using a sledgehammer to crack a nut? Lexical diversity and event coreference resolution
d215416276
Multilingual sequence labeling is the task of predicting label sequences for multiple languages using a single unified model. Compared with relying on multiple monolingual models, a multilingual model has the benefits of a smaller model size, easier online serving, and generalizability to low-resource languages. However, current multilingual models still significantly underperform individual monolingual models due to model capacity limitations. In this paper, we propose to reduce the gap between monolingual models and the unified multilingual model by distilling the structural knowledge of several monolingual models (teachers) into the unified multilingual model (student). We propose two novel KD methods based on structure-level information: (1) one that approximately minimizes the distance between the student's and the teachers' structure-level probability distributions, and (2) one that aggregates the structure-level knowledge into local distributions and minimizes the distance between two local probability distributions. Our experiments on 4 multilingual tasks with 25 datasets show that our approaches outperform several strong baselines and have stronger zero-shot generalizability than both the baseline model and the teacher models.
Structure-Level Knowledge Distillation For Multilingual Sequence Labeling
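A minimal sketch of the second distillation variant described in the abstract above: aggregate structure-level knowledge into per-token local label distributions and minimize the distance (here KL divergence) between teacher and student. Toy arrays stand in for real model outputs; shapes and names are illustrative.

```python
import numpy as np

def kd_loss(teacher_probs, student_logits):
    """teacher_probs, student_logits: (seq_len, n_labels). Returns mean token-level KL."""
    # Log-softmax of the student logits.
    student_log_probs = student_logits - np.log(np.exp(student_logits).sum(-1, keepdims=True))
    # KL(teacher || student) per token, then averaged over the sequence.
    kl = (teacher_probs * (np.log(teacher_probs + 1e-12) - student_log_probs)).sum(-1)
    return kl.mean()

rng = np.random.default_rng(0)
teacher = rng.dirichlet(np.ones(5), size=7)  # 7 tokens, 5 labels (e.g., from a monolingual teacher)
student = rng.normal(size=(7, 5))            # multilingual student logits for the same tokens
print(kd_loss(teacher, student))
```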
d252377998
This paper presents an analysis of annotation using automatic pre-annotation for a task of mid-level annotation complexity: dependency syntax annotation. It compares the annotation effort made by annotators using a pre-annotated version (produced by a high-accuracy parser) with that of fully manual annotation. The aim of the experiment is to judge the final annotation quality when pre-annotation is used. In addition, it evaluates the effect of automatic linguistically-based (rule-formulated) checks, as well as of another annotation of the same data being available to the annotators, on annotation quality and efficiency. The experiment confirmed that pre-annotation is an efficient tool for faster manual syntactic annotation and that it increases the consistency of the resulting annotation without reducing its quality.
Quality and Efficiency of Manual Annotation: Pre-annotation Bias
d231740732
Most research in the area of automatic essay grading (AEG) is geared towards scoring essays holistically, and little work has been done on scoring individual essay traits. In this paper, we describe a way to score essays using a multi-task learning (MTL) approach, where scoring the essay holistically is the primary task and scoring the essay traits is the auxiliary task. We compare our results with a single-task learning (STL) approach, using both LSTMs and BiLSTMs. To find out which traits work best for different types of essays, we conduct ablation tests for each of the essay traits. We also report the runtime and number of training parameters for each system. We find that the MTL-based BiLSTM system gives the best results for scoring the essay holistically, while also performing well on scoring the essay traits. The MTL systems also give a speed-up of 2.30 to 3.70 times over the STL system when scoring the essay and all the traits.
Many Hands Make Light Work: Using Essay Traits to Automatically Score Essays
d235829052
We find that existing language modeling datasets contain many near-duplicate examples and long repetitive substrings. As a result, over 1% of the unprompted output of language models trained on these datasets is copied verbatim from the training data. We develop two tools that allow us to deduplicate training datasets, for example removing from C4 a single 61-word English sentence that is repeated over 60,000 times. Deduplication allows us to train models that emit memorized text ten times less frequently and require fewer training steps to achieve the same or better accuracy. We can also reduce train-test overlap, which affects over 4% of the validation set of standard datasets, thus allowing for more accurate evaluation. This train-test set overlap not only causes researchers to over-estimate model accuracy, but also biases model selection towards models and hyperparameters that intentionally overfit their training datasets. Training models on deduplicated datasets is also more efficient: processing a dataset with our framework requires a CPU-only linear-time algorithm, and because the deduplicated datasets are up to 19% smaller, even including the deduplication runtime itself, training on them directly reduces the training cost in terms of time, money, and the environment (Bender et al., 2021; Strubell et al., 2019; Patterson et al., 2021). Code for deduplication is released at https://github.com/google-research/deduplicate-text-datasets.
Deduplicating Training Data Makes Language Models Better
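A simplified sketch of near-duplicate detection between documents using n-gram Jaccard similarity, illustrating the deduplication idea in the abstract above. The released tools use suffix arrays and MinHash at much larger scale; this toy version only conveys the principle.

```python
def ngrams(text, n=5):
    tokens = text.split()
    return {tuple(tokens[i:i + n]) for i in range(max(len(tokens) - n + 1, 1))}

def jaccard(a, b):
    return len(a & b) / max(len(a | b), 1)

def deduplicate(docs, n=5, threshold=0.8):
    """Keep a document only if it is not a near-duplicate of an already-kept one."""
    kept, kept_ngrams = [], []
    for doc in docs:
        grams = ngrams(doc, n)
        if all(jaccard(grams, seen) < threshold for seen in kept_ngrams):
            kept.append(doc)
            kept_ngrams.append(grams)
    return kept

corpus = ["the cat sat on the mat today in the sun",
          "the cat sat on the mat today in the rain",
          "a completely different sentence about language models"]
print(len(deduplicate(corpus, n=3, threshold=0.5)))  # 2: the second document is a near-duplicate
```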
d236459946
We present a new human-human dialogue dataset, PhotoChat, the first dataset that casts light on photo-sharing behavior in online messaging. PhotoChat contains 12k dialogues, each of which is paired with a user photo that is shared during the conversation. Based on this dataset, we propose two tasks to facilitate research on image-text modeling: a photo-sharing intent prediction task that predicts whether one intends to share a photo in the next conversation turn, and a photo retrieval task that retrieves the most relevant photo according to the dialogue context. In addition, for both tasks, we provide baselines built on state-of-the-art models and report their benchmark performance. The best image retrieval model achieves 10.4% recall@1 (out of 1000 candidates) and the best photo intent prediction model achieves a 58.1% F1 score, indicating that the dataset presents interesting yet challenging real-world problems. We are releasing PhotoChat to facilitate future research work in the community.
PhotoChat: A Human-Human Dialogue Dataset with Photo Sharing Behavior for Joint Image-Text Modeling
d1835281
The paper aims to come up with a system that examines the degree of semantic equivalence between two sentences. At the core of the paper is the attempt to grade the similarity of two sentences by finding the maximal weighted bipartite match between the tokens of the two sentences. The tokens include single words, or multiwords in the case of Named Entities and adjectivally or numerically modified words. Two token similarity measures are used for the task: WordNet-based similarity, and a statistical word similarity measure which overcomes the shortcomings of WordNet-based similarity. As part of three systems created for the task, we explore a simple bag-of-words tokenization scheme, a more careful tokenization scheme which captures named entities, times, dates, monetary entities, etc., and finally try to capture context around tokens using grammatical dependencies.
sranjans : Semantic Textual Similarity using Maximal Weighted Bipartite Graph Matching
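A minimal sketch of grading sentence similarity via a maximal weighted bipartite match between tokens, as described in the abstract above. A real system would use WordNet-based or statistical token similarities; here a crude character-overlap similarity stands in as a placeholder.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def token_sim(a, b):
    """Toy token similarity: Jaccard overlap of character sets (placeholder measure)."""
    sa, sb = set(a.lower()), set(b.lower())
    return len(sa & sb) / max(len(sa | sb), 1)

def sentence_similarity(sent1, sent2):
    toks1, toks2 = sent1.split(), sent2.split()
    sim = np.array([[token_sim(a, b) for b in toks2] for a in toks1])
    rows, cols = linear_sum_assignment(sim, maximize=True)  # maximal weighted bipartite matching
    return sim[rows, cols].sum() / max(len(toks1), len(toks2))

print(sentence_similarity("a man is playing a guitar", "someone plays the guitar"))
```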
d3416249
The Grammatical Framework (GF) offers perfect translation between controlled subsets of natural languages. E.g., an abstract syntax for a set of sentences in school mathematics is the interlingua between the corresponding sentences in English and Hindi, say. GF "resource grammars" specify how to say something in English or Hindi; these are reused with "application grammars" that specify what can be said (mathematics, tourist phrases, etc.). More recent robust parsing and parse-tree disambiguation allow GF to parse arbitrary English text. We report here an experiment to linearise the resulting tree directly to other languages (e.g. Hindi, German, etc.), i.e., we use a language-independent resource grammar as the interlingua. We focus particularly on the last part of the translation system, the interlingual lexicon and word sense disambiguation (WSD). We improved the quality of the wide-coverage interlingual translation lexicon by using the Princeton and Universal WordNet data. We then integrated an existing WSD tool and replaced the usual GF-style lexicons, which give one target word per source word, by the WordNet-based lexicons. These new lexicons and WSD improve the quality of translation in most cases, as we show by examples. Both WordNets and WSD in general are well known, but this is the first use of these tools with GF.
Developing an interlingual translation lexicon using WordNets and Grammatical Framework
d15769251
This short paper presents some motivations behind the organization of the ACL/EACL01 "Workshop on Sharing Tools and Resources for Research and Education", concentrating on the possible connection of Tools and Resources repositories. Taking some papers printed in this volume and the ACL Natural Language Software Registry as a basis, we outline some of the steps to be done on the side of NLP tool repositories in order to achieve this goal.
Extending NLP Tools Repositories for the Interaction with Language Data Resources Repositories
d227230285
FinCausal-2020 is a shared task focusing on causality detection in factual financial data. The financial facts alone do not explain much about the variability of the data. This paper proposes an efficient method to classify whether a given piece of data contains a financial cause or not. Many models were used to classify the data; among them, an SVM model gave an F-score of 0.9435, and BERT with specific fine-tuning achieved the best results with an F-score of 0.9677.
NITK NLP at FinCausal-2020 Task 1 Using BERT and Linear models
d236478321
Multimodal fusion is a core problem for multimodal sentiment analysis. Previous works usually treat all three modal features equally and only implicitly explore the interactions between different modalities. In this paper, we break with this kind of method in two ways. Firstly, we observe that the textual modality plays the most important role in multimodal sentiment analysis, as can be seen from previous works. Secondly, we observe that, compared to the textual modality, the two non-textual modalities (visual and acoustic) can provide two kinds of semantics: shared and private. The shared semantics from the other two modalities can clearly enhance the textual semantics and make the sentiment analysis model more robust, while the private semantics are complementary to the textual semantics and provide different views that, together with the shared semantics, improve the performance of sentiment analysis. Motivated by these two observations, we propose a text-centered shared-private framework (TCSP) for multimodal fusion, which consists of cross-modal prediction and sentiment regression parts. Experiments on the MOSEI and MOSI datasets demonstrate the effectiveness of our shared-private framework, which outperforms all baselines. Furthermore, our approach provides a new way to utilize unlabeled data for multimodal sentiment analysis.
A Text-Centered Shared-Private Framework via Cross-Modal Prediction for Multimodal Sentiment Analysis
d658944
Building on the use of local contexts, or frames, for human category acquisition, we explore the treatment of contexts as categories. This allows us to examine and evaluate the categorical properties that local unsupervised methods can distinguish and their relationship to corpus POS tags. From there, we use lexical information to combine contexts in a way which preserves the intended category, providing a platform for grammatical category induction.
Categorizing Local Contexts as a Step in Grammatical Category Induction
d237581112
Understanding linguistic modality is widely seen as important for downstream tasks such as Question Answering and Knowledge Graph Population. Entailment Graph learning might also be expected to benefit from attention to modality. We build Entailment Graphs using a news corpus filtered with a modality parser, and show that stripping modal modifiers from predicates in fact increases performance. This suggests that for some tasks, the pragmatics of modal modification of predicates allows them to contribute as evidence of entailment.
Blindness to Modality Helps Entailment Graph Mining
d222291650
Generalization of models to out-of-distribution (OOD) data has captured tremendous attention recently. Specifically, compositional generalization, i.e., whether a model generalizes to new structures built of components observed during training, has sparked substantial interest. In this work, we investigate compositional generalization in semantic parsing, a natural test-bed for compositional generalization, as output programs are constructed from sub-components. We analyze a wide variety of models and propose multiple extensions to the attention module of the semantic parser, aiming to improve compositional generalization. We find that the following factors improve compositional generalization: (a) using contextual representations, such as ELMo and BERT, (b) informing the decoder what input tokens have previously been attended to, (c) training the decoder attention to agree with pre-computed token alignments, and (d) downsampling examples corresponding to frequent program templates. While we substantially reduce the gap between in-distribution and OOD generalization, performance on OOD compositions is still substantially lower.
Improving Compositional Generalization in Semantic Parsing
d222290555
It is fairly common to use code-mixing on social media platforms to express opinions and emotions in multilingual societies. The purpose of this task is to detect the sentiment of code-mixed social media text. Code-mixed text poses a great challenge for traditional NLP systems, which currently use monolingual resources to deal with the problem of multilingual mixing. This task has previously been addressed using lexicon lookup in the respective sentiment dictionaries and using long short-term memory (LSTM) neural networks over monolingual resources. In this paper, we (codalab username: kongjun) present a system that uses a bilingual vector gating mechanism over bilingual resources to complete the task. The model consists of two main parts: the vector gating mechanism, which combines the character and word levels, and the attention mechanism, which extracts the important emotional parts of the text. The results show that the proposed system outperforms the baseline algorithm. We achieved fifth place in Spanglish and 19th place in Hinglish. The code of this paper is available at https://github.com/JunKong5/Semveal2020-task9.
HPCC-YNU at SemEval-2020 Task 9: A Bilingual Vector Gating Mechanism for Sentiment Analysis of Code-Mixed Text
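A minimal sketch of a vector gating mechanism that fuses character-level and word-level representations of a token, in the spirit of the system described above. The weights are random placeholders; in the real model they would be learned end to end.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_fuse(char_vec, word_vec, W, b):
    """Gate g decides, per dimension, how much to take from each representation."""
    g = sigmoid(W @ np.concatenate([char_vec, word_vec]) + b)
    return g * word_vec + (1.0 - g) * char_vec

rng = np.random.default_rng(0)
d = 8
char_vec, word_vec = rng.normal(size=d), rng.normal(size=d)
W, b = rng.normal(size=(d, 2 * d)) * 0.1, np.zeros(d)
print(gated_fuse(char_vec, word_vec, W, b).shape)  # (8,) fused token representation
```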
d227334932
Previous works have shown that contextual information can improve the performance of neural machine translation (NMT). However, most existing document-level NMT methods only consider a small number of previous sentences. How to make use of the whole document as global context is still a challenge. To address this issue, we hypothesize that a document can be represented as a graph that connects relevant contexts regardless of their distances. We employ several types of relations, including adjacency, syntactic dependency, lexical consistency, and coreference, to construct the document graph. Then, we incorporate both source and target graphs into the conventional Transformer architecture with graph convolutional networks. Experiments on various NMT benchmarks, including IWSLT English-French, Chinese-English, WMT English-German and Opensubtitle English-Russian, demonstrate that using document graphs can significantly improve the translation quality. Extensive analysis verifies that the document graph is beneficial for capturing discourse phenomena. [Figure 1: The structure of the graph. Solid blue lines depict adjacency relations; green dashed lines denote dependency relations; red dashed lines represent lexical consistency; the brown line marks a coreference relation. S denotes a sentence node. Only parts of the sentences are shown for convenience.]
Document Graph for Neural Machine Translation
d5108363
In a world in which web users are continuously blasted by ads and often compelled to deal with user-unfriendly interfaces, we sometimes feel like we want to evade from the sensory overload of standard web pages and take refuge in a safe web corner, in which contents and design are in harmony with our current frame of mind. Sentic Corner is an intelligent user interface that dynamically collects audio, video, images and text related to the user's current feelings and activities as an interconnected knowledge base, which is browsable through a multi-faceted classification website.
Taking Refuge in Your Personal Sentic Corner
d252873427
Sign language gloss translation aims to translate sign glosses into spoken language texts, which is challenging due to the scarcity of labeled gloss-text parallel data. Back-translation (BT), which generates pseudo-parallel data by translating in-domain spoken language texts into sign glosses, has been applied to alleviate the data scarcity problem. However, the lack of large-scale, high-quality, in-domain spoken language text data limits the effect of BT. In this paper, to overcome this limitation, we propose a Prompt-based domain text Generation (PGEN) approach to produce large-scale in-domain spoken language text data. Specifically, PGEN randomly concatenates sentences from the original in-domain spoken language text data as prompts to induce a pre-trained language model (i.e., GPT-2) to generate spoken language texts in a similar style. Experimental results on three benchmarks of sign language gloss translation in varied languages demonstrate that BT with spoken language texts generated by PGEN significantly outperforms the compared methods. In addition, as the scale of spoken language texts generated by PGEN increases, the BT technique can achieve further improvements, demonstrating the effectiveness of our approach. We release the code and data to facilitate future research in this field.
Scaling Back-Translation with Domain Text Generation for Sign Language Gloss Translation
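A minimal sketch of the PGEN idea described in the abstract above: randomly concatenate in-domain sentences as a prompt and let a pre-trained language model continue in a similar style. The model name, example sentences, and generation settings are illustrative assumptions, not the authors' exact configuration.

```python
import random
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Toy in-domain spoken language sentences (e.g., weather-forecast style).
in_domain_sentences = [
    "tomorrow it will be mostly sunny in the north.",
    "in the south, showers are expected in the afternoon.",
    "temperatures will reach twenty degrees on the coast.",
]

# Build a prompt from a few randomly chosen in-domain sentences.
prompt = " ".join(random.sample(in_domain_sentences, k=2))
input_ids = tokenizer.encode(prompt, return_tensors="pt")

output = model.generate(
    input_ids,
    max_length=input_ids.shape[1] + 40,
    do_sample=True,
    top_p=0.9,
    pad_token_id=tokenizer.eos_token_id,
)
generated = tokenizer.decode(output[0][input_ids.shape[1]:], skip_special_tokens=True)
print(generated)  # candidate in-domain text to be back-translated into glosses
```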
d243865419
Recent literature has shown that knowledge graph (KG) learning models are highly vulnerable to adversarial attacks. However, there is still a paucity of vulnerability analyses of cross-lingual entity alignment under adversarial attacks. This paper proposes an adversarial attack model with two novel attack techniques to perturb the KG structure and degrade the quality of deep cross-lingual entity alignment. First, an entity density maximization method is employed to hide the attacked entities in dense regions of the two KGs, such that the derived perturbations are unnoticeable. Second, an attack signal amplification method is developed to reduce the gradient vanishing issues in the process of adversarial attacks, further improving the attack effectiveness.
Adversarial Attack against Cross-lingual Knowledge Graph Alignment
d207968164
There are advances on developing methods that do not require parallel corpora, but issues remain with automatic evaluation metrics. Current works (Pang and Gimpel, 2018; Mir et al., 2019) agree on the following three evaluation aspects. (1) Style accuracy of transferred sentences (measured by a pretrained classifier). (2) Semantic similarity between the original and transferred sentences. (3) Naturalness or fluency: researchers use the perplexity of transferred sentences under a language model pretrained on the original corpora. Problem 1: Style Transfer Tasks. If we think about the practical use cases of style transfer (writing assistance, dialogue, author obfuscation or anonymity, adjusting reading difficulty in education, artistic creations such as works involving literature), we would find that the two would-be-collected non-parallel corpora have different vocabularies, and it is hard to differentiate style-related words from content-related words. For example, when transferring Dickens to modern-style literature (Pang and Gimpel, 2018), the former may contain "English farm" and "horses"; the latter may contain "vampire" and "pop music." But these words should stay the same, as they are content-related, not style-related. On the other hand, Dickens' literature may contain "devil-may-care" and "flummox", but these words are style-related and should be changed. Recent works, however, mostly deal with the operational style, where corpus-specific content words are changed. Operational style transfer models work well on Yelp sentiment transfer, which almost all research focuses on, but they do not inspire systems for practical use cases. Problem 2: Metrics. Consider: "Oliver deemed the gathering in York a great success." The expected transfer from Dickens to modern literature style should be similar to "Oliver thought the gathering was successful" (actual style transfer). However, the most likely transfer under most existing models will be "Karl enjoyed the party in LA" (operational style transfer). In evaluating semantic similarity, Mir et al. (2019) masked style keywords determined by a classifier. In this case, all corpus-specific content words (as well as style words) will be masked, and evaluation will fail. However, we can create the list of style keywords with outside knowledge. We can also consider keeping the words as they are without masking. Similar problems exist for the other two metrics. Problem 3: Trade-off and Aggregation. Aggregation of metrics is especially helpful as there are trade-offs (Pang and Gimpel, 2018; Mir et al., 2019), and we need to tune and select models systematically. Use A, B, C to represent the three metrics. For a sentence s, define an aggregate score G_{t1,t2,t3,t4}(s), where the t_i's are parameters to be learned. (Small and large C's are both bad.) The current research strives for a universal metric. We can randomly sample a few hundred pairs of transferred sentences from a range of style transfer outputs (from different models, good ones and bad ones) from a range of style transfer tasks, and ask annotators which of the two transferred sentences (from the same original sentence) is better. We can then train the parameters based on pairwise comparison. To make G more convincing, we may design more complicated functions G = f(A, B, C).
If we do not need a universal evaluator, then we can repeat the above procedure by only sampling pairs of transferred sentences from the dataset of interest, which is more accurate for the particular task.
Towards Actual (Not Operational) Textual Style Transfer Auto-Evaluation
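A hedged sketch of one way to realize the aggregation idea in the abstract above: parameterize G over the three metrics (A, B, C) and fit the parameters from pairwise human comparisons with a logistic (Bradley-Terry-style) loss. The functional form below, including the quadratic penalty on C reflecting "small and large C's are both bad", is an illustrative assumption, not the (unspecified) formula from the abstract.

```python
import numpy as np
from scipy.optimize import minimize

def G(t, m):
    """m = [A, B, C] for one transferred sentence; t = (t1, t2, t3, t4)."""
    A, B, C = m
    return t[0] * A + t[1] * B - t[2] * (C - t[3]) ** 2  # hypothetical form of the aggregate score

def pairwise_loss(t, pairs):
    """pairs: list of (metrics_of_preferred, metrics_of_rejected) from annotator judgments."""
    loss = 0.0
    for win, lose in pairs:
        margin = G(t, win) - G(t, lose)
        loss += np.logaddexp(0.0, -margin)  # logistic loss on the preference
    return loss / len(pairs)

# Toy annotated pairs: each element is ([A, B, C]_preferred, [A, B, C]_rejected).
pairs = [([0.9, 0.8, 50.0], [0.6, 0.9, 300.0]),
         ([0.8, 0.7, 60.0], [0.9, 0.4, 40.0]),
         ([0.7, 0.9, 80.0], [0.7, 0.9, 500.0])]

result = minimize(pairwise_loss, x0=np.array([1.0, 1.0, 0.001, 100.0]), args=(pairs,))
print(result.x)  # learned parameters t1..t4
```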
d2327888
Searching the Annotated Portuguese CHILDES Corpora
d140503
The lexical transfer phase is the most crucial step in MT because most of the difficult problems are caused by lexical differences between two languages. In order to treat lexical issues systematically in transfer-based MT systems, we introduce the concept of bilingual signs, which are defined by pairs of equivalent monolingual signs. The bilingual signs not only relate the local linguistic structures of two languages but also play a central role in connecting the linguistic processes of translation with knowledge-based inferences. We also show that they can be effectively used to formulate appropriate questions for disambiguating "transfer ambiguities", which is crucial in interactive MT systems.
Lexical Transfer based on bilingual signs: Towards interaction during transfer
d21730294
This paper describes an automatic spelling corrector for Amharic, the working language of the Federal Government of Ethiopia. We used a corpus-driven approach with the noisy channel model for spelling correction. It infers linguistic knowledge from a text corpus. The approach can be ported to other written languages with little effort as long as they are typed using a QWERTY keyboard with direct mappings between keystrokes and characters. Since Amharic letters are syllabic, we used a modified version of the System for Ethiopic Representation in ASCII for transliteration, in the same manner as most Amharic keyboard input methods do. The proposed approach is evaluated on Amharic and English test data and scores better than the baseline systems, GNU Aspell and Hunspell. We get better results due to the smoothed language model, the generalized error model, and the ability to take into account the context of misspellings. Besides, instead of using a handcrafted lexicon for spelling error detection, we used a term list derived from frequently occurring terms in a text corpus. Such a term list, in addition to ease of compilation, has an advantage in handling rare terms, proper nouns, and neologisms.
Portable Spelling Corrector for a Less-Resourced Language: Amharic
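A minimal sketch of noisy-channel spelling correction as described in the abstract above: choose the candidate w maximizing P(w)P(x|w). The tiny corpus, the edit-distance-1 candidate generator, and the uniform error model are simplifications of the smoothed language model and generalized error model used in the paper.

```python
from collections import Counter

corpus = "the spelling of the word is checked against words seen in the corpus".split()
counts = Counter(corpus)
total = sum(counts.values())

def p_word(w):
    """Unigram language model with add-one smoothing."""
    return (counts[w] + 1) / (total + len(counts) + 1)

def edits1(word, alphabet="abcdefghijklmnopqrstuvwxyz"):
    """All strings within one edit (delete, transpose, replace, insert) of `word`."""
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [a + b[1:] for a, b in splits if b]
    transposes = [a + b[1] + b[0] + b[2:] for a, b in splits if len(b) > 1]
    replaces = [a + c + b[1:] for a, b in splits if b for c in alphabet]
    inserts = [a + c + b for a, b in splits for c in alphabet]
    return set(deletes + transposes + replaces + inserts)

def correct(word):
    candidates = {w for w in edits1(word) | {word} if w in counts} or {word}
    # Uniform error model: P(x | w) is constant over candidates, so rank by P(w) alone.
    return max(candidates, key=p_word)

print(correct("speling"))  # -> "spelling"
```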
d209064370
We describe two end-to-end autoencoding models for semi-supervised graph-based projective dependency parsing. The first model is a Locally Autoencoding Parser (LAP) that encodes the input using continuous latent variables in a sequential manner; the second model is a Globally Autoencoding Parser (GAP) that encodes the input into dependency trees as latent variables, with exact inference. Both models consist of two parts: an encoder enhanced by deep neural networks (DNNs) that can utilize contextual information to encode the input into latent variables, and a decoder, a generative model able to reconstruct the input. Both LAP and GAP admit a unified structure with different loss functions for labeled and unlabeled data and with shared parameters. We conducted experiments on WSJ and UD dependency parsing data sets, showing that our models can exploit unlabeled data to improve performance given a limited amount of labeled data, and outperform a previously proposed semi-supervised model.
Semi-supervised Autoencoding Projective Dependency Parsing
d222290927
Formality style transfer is the task of converting informal sentences to grammatically correct formal sentences, which can be used to improve the performance of many downstream NLP tasks. In this work, we propose a semi-supervised formality style transfer model that utilizes a language model-based discriminator to maximize the likelihood of the output sentence being formal, which allows us to use maximization of token-level conditional probabilities for training. We further propose to maximize mutual information between source and target styles as our training objective instead of maximizing the regular likelihood, which often leads to repetitive and trivial generated responses. Experiments showed that our model outperformed previous state-of-the-art baselines significantly in terms of both automated metrics and human judgement. We further generalized our model to the unsupervised text style transfer task, and achieved significant improvements on two benchmark sentiment style transfer datasets.
Semi-supervised Formality Style Transfer using Language Model Discriminator and Mutual Information Maximization
d222291675
Humans can learn structural properties about a word from minimal experience, and deploy their learned syntactic representations uniformly in different grammatical contexts. We assess the ability of modern neural language models to reproduce this behavior in English and evaluate the effect of structural supervision on learning outcomes. First, we assess few-shot learning capabilities by developing controlled experiments that probe models' syntactic nominal number and verbal argument structure generalizations for tokens seen as few as two times during training. Second, we assess invariance properties of the learned representations: the ability of a model to transfer syntactic generalizations from a base context (e.g., a simple declarative active-voice sentence) to a transformed context (e.g., an interrogative sentence). We test four models trained on the same dataset: an n-gram baseline, an LSTM, and two LSTM-variants trained with explicit structural supervision (Dyer et al., 2016; Charniak et al., 2016). We find that in most cases, the neural models are able to induce the proper syntactic generalizations after minimal exposure, often from just two examples during training, and that the two structurally supervised models generalize more accurately than the LSTM model. All neural models are able to leverage information learned in base contexts to drive expectations in transformed contexts, indicating that they have learned some invariance properties of syntax.
Structural Supervision Improves Few-Shot Learning and Syntactic Generalization in Neural Language Models
d195068920
We present the first sentence simplification model that learns explicit edit operations (ADD, DELETE, and KEEP) via a neural programmer-interpreter approach. Most current neural sentence simplification systems are variants of sequence-to-sequence models adopted from machine translation. These methods learn to simplify sentences as a byproduct of the fact that they are trained on complex-simple sentence pairs. By contrast, our neural programmer-interpreter is directly trained to predict explicit edit operations on targeted parts of the input sentence, resembling the way that humans might perform simplification and revision. Our model outperforms previous state-of-the-art neural sentence simplification models (without external knowledge) by large margins on three benchmark text simplification corpora in terms of SARI (+0.95 WikiLarge, +1.89 WikiSmall, +1.41 Newsela), and is judged by humans to produce overall better and simpler output sentences.
EditNTS: A Neural Programmer-Interpreter Model for Sentence Simplification through Explicit Editing
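The following sketch illustrates one ingredient the abstract relies on, deriving explicit KEEP/DELETE/ADD supervision from a complex-simple sentence pair via a longest-common-subsequence alignment; it is a simplified stand-in for the oracle edit sequences such a programmer-interpreter model is trained to predict, and the example sentences are invented.

```python
from difflib import SequenceMatcher

def edit_labels(complex_tokens, simple_tokens):
    """Derive an explicit edit program (KEEP / DELETE / ADD) that rewrites the
    complex sentence into the simple one, using an LCS-style alignment."""
    ops = []
    sm = SequenceMatcher(a=complex_tokens, b=simple_tokens, autojunk=False)
    for tag, i1, i2, j1, j2 in sm.get_opcodes():
        if tag == "equal":
            ops += [("KEEP", tok) for tok in complex_tokens[i1:i2]]
        else:  # 'replace', 'delete' or 'insert'
            ops += [("DELETE", tok) for tok in complex_tokens[i1:i2]]
            ops += [("ADD", tok) for tok in simple_tokens[j1:j2]]
    return ops

src = "the cat , which was very hungry , ate quickly".split()
tgt = "the hungry cat ate quickly".split()
for op, tok in edit_labels(src, tgt):
    print(op, tok)
```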
d236428851
One of the difficulties in training dialogue systems is the lack of training data. We explore the possibility of creating dialogue data through the interaction between a dialogue system and a user simulator. Our goal is to develop a modelling framework that can incorporate new dialogue scenarios through self-play between the two agents. In this framework, we first pre-train the two agents on a collection of source domain dialogues, which equips the agents to converse with each other via natural language. With further fine-tuning on a small amount of target domain data, the agents continue to interact with the aim of improving their behaviors using reinforcement learning with structured reward functions. In experiments on the MultiWOZ dataset, two practical transfer learning problems are investigated: 1) domain adaptation and 2) single-to-multiple domain transfer. We demonstrate that the proposed framework is highly effective in bootstrapping the performance of the two agents in transfer learning. We also show that our method leads to improvements in dialogue system performance on complete datasets.
Transferable Dialogue Systems and User Simulators
d233296711
Image captioning has conventionally relied on reference-based automatic evaluations, where machine captions are compared against captions written by humans. This is in contrast to the reference-free manner in which humans assess caption quality. In this paper, we report the surprising empirical finding that CLIP (Radford et al., 2021), a cross-modal model pretrained on 400M image+caption pairs from the web, can be used for robust automatic evaluation of image captioning without the need for references. Experiments spanning several corpora demonstrate that our new reference-free metric, CLIPScore, achieves the highest correlation with human judgements, outperforming existing reference-based metrics like CIDEr and SPICE. Information gain experiments demonstrate that CLIPScore, with its tight focus on image-text compatibility, is complementary to existing reference-based metrics that emphasize text-text similarities. Thus, we also present a reference-augmented version, RefCLIPScore, which achieves even higher correlation. Beyond literal description tasks, several case studies reveal domains where CLIPScore performs well (clip-art images, alt-text rating), but also where it is relatively weaker in comparison to reference-based metrics, e.g., news captions that require richer contextual knowledge.
CLIPScore: A Reference-free Evaluation Metric for Image Captioning
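A small numpy sketch of the scores described in the abstract above, assuming image and caption embeddings have already been computed with the same CLIP model; the rescaling weight of 2.5 and the harmonic-mean combination follow our reading of the paper and should be checked against the original definitions.

```python
import numpy as np

def clip_score(image_emb, caption_emb, w=2.5):
    """Reference-free CLIPScore: w * max(cos(image, caption), 0)."""
    v = image_emb / np.linalg.norm(image_emb)
    c = caption_emb / np.linalg.norm(caption_emb)
    return w * max(float(v @ c), 0.0)

def ref_clip_score(image_emb, caption_emb, reference_embs, w=2.5):
    """Reference-augmented variant: harmonic mean of CLIPScore and the maximum
    cosine similarity between the candidate caption and any reference caption."""
    c = caption_emb / np.linalg.norm(caption_emb)
    ref_sim = max(max(float(c @ (r / np.linalg.norm(r))), 0.0) for r in reference_embs)
    cs = clip_score(image_emb, caption_emb, w)
    return 2 * cs * ref_sim / (cs + ref_sim) if (cs + ref_sim) > 0 else 0.0
```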
d234469796
The distributed and continuous representations used by neural networks are at odds with representations employed in linguistics, which are typically symbolic. Vector quantization has been proposed as a way to induce discrete neural representations that are closer in nature to their linguistic counterparts. However, it is not clear which metrics are the best-suited to analyze such discrete representations. We compare the merits of four commonly used metrics in the context of weakly supervised models of spoken language. We compare the results they show when applied to two different models, while systematically studying the effect of the placement and size of the discretization layer. We find that different evaluation regimes can give inconsistent results. While we can attribute them to the properties of the different metrics in most cases, one point of concern remains: the use of minimal pairs of phoneme triples as stimuli disadvantages larger discrete unit inventories, unlike metrics applied to complete utterances. Furthermore, while in general vector quantization induces representations that correlate with units posited in linguistics, the strength of this correlation is only moderate.
Discrete representations in neural models of spoken language
d225039995
Social media platforms such as Twitter are hotspots of user-generated information. During the ongoing COVID-19 pandemic, there has been an abundance of data on social media which can be classified as informative or uninformative content. In this paper, we present our work on detecting informative COVID-19 English tweets using the RoBERTa model, as part of the W-NUT 2020 workshop. We show the efficacy of our model on a public dataset with an F1-score of 0.89 on the validation set and 0.87 on the leaderboard.
DSC-IIT ISM at WNUT-2020 Task 2: Detection of COVID-19 informative tweets using RoBERTa
d16108951
Most recent studies on coreference resolution advocate accurate yet relatively complex models, relying on, for example, entity-mention or graph-based representations. As convincingly demonstrated at the recent CoNLL 2012 shared task, such algorithms considerably outperform popular basic approaches, in particular mention-pair models. This study advocates a novel approach that keeps the simplicity of a mention-pair framework while showing state-of-the-art results. Apart from being very efficient and straightforward to implement, our model facilitates experimental work on the pairwise classifier, in particular on feature engineering. The proposed model achieves a performance of up to 61.82% (MELA F, v4 scorer) on the CoNLL test data, on par with complex state-of-the-art systems.
A State-of-the-Art Mention-Pair Model for Coreference Resolution
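A generic illustration of the mention-pair recipe the abstract keeps simple: a pairwise classifier over hand-crafted features plus closest-first antecedent linking. The toy features, training pairs, and mentions below are invented placeholders, not the paper's engineered feature set.

```python
from sklearn.linear_model import LogisticRegression

def pair_features(antecedent, mention):
    """Toy pairwise features (head match, distance, pronoun flag); a real system
    uses a much richer engineered feature set."""
    return [
        1.0 if antecedent["head"].lower() == mention["head"].lower() else 0.0,
        float(mention["index"] - antecedent["index"]),
        1.0 if antecedent["is_pronoun"] or mention["is_pronoun"] else 0.0,
    ]

def resolve(mentions, clf, threshold=0.5):
    """Closest-first linking: each mention links to the nearest preceding mention
    the classifier scores above the threshold."""
    antecedent_of = {}
    for j in range(len(mentions)):
        for i in range(j - 1, -1, -1):
            prob = clf.predict_proba([pair_features(mentions[i], mentions[j])])[0][1]
            if prob > threshold:
                antecedent_of[j] = i
                break
    return antecedent_of

# Fabricated training pairs (feature vector, coreferent?) just to obtain a fitted model.
X = [[1.0, 1.0, 0.0], [0.0, 2.0, 1.0], [0.0, 5.0, 0.0], [1.0, 3.0, 1.0]]
y = [1, 1, 0, 1]
clf = LogisticRegression().fit(X, y)

mentions = [
    {"head": "Obama", "index": 0, "is_pronoun": False},
    {"head": "Obama", "index": 3, "is_pronoun": False},
    {"head": "he", "index": 5, "is_pronoun": True},
]
print(resolve(mentions, clf))  # maps each mention index to its predicted antecedent
```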
d15292331
This paper presents a simple, data-driven cluster-and-label approach using optimized count-based methods for word-level language identification in a large domain-specific multilingual diachronic corpus of periodicals published at least yearly between 1864 and 2014 in Switzerland. Our system requires no annotated data or training, only minimal human effort in evaluating and labeling 50 clusters for a corpus of almost 40 million tokens. Despite being unsupervised, our approach achieves an accuracy comparable to the corpus annotations, which result from an existing code-switching algorithm and the combined use of two supervised systems based on character and byte n-gram models (Volk and Clematide, 2014).
Leveraging Data-Driven Methods in Word-Level Language Identification for a Multilingual Alpine Heritage Corpus
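A toy sketch of the generic cluster-then-label idea the abstract describes, with character n-gram count vectors and k-means standing in for the paper's optimized count-based method; the word list is invented and far smaller than the 40-million-token setting, and the manual labeling step is only indicated in comments.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.cluster import KMeans

# Invented word types; the real pipeline clusters the vocabulary of the whole corpus
# into 50 clusters that a human then inspects and labels with a language code.
words = ["bergführer", "wanderung", "gletscher", "hütte",
         "montagne", "sentier", "vallée", "refuge",
         "mountain", "hiking", "summit", "trail"]

# Character n-gram counts as a cheap, training-free word representation.
vectorizer = CountVectorizer(analyzer="char_wb", ngram_range=(1, 3))
X = vectorizer.fit_transform(words)

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

# A human labels each cluster once (e.g. "de", "fr", "en"); every word type in a
# cluster then inherits that label, which is the only supervision required.
for cluster_id in range(3):
    members = [w for w, c in zip(words, kmeans.labels_) if c == cluster_id]
    print(cluster_id, members)
```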
d220713853
Problems involving code-mixed language are often plagued by a lack of resources and an absence of materials to perform sophisticated transfer learning with. In this paper we describe our submission to the Sentimix Hindi-English task involving sentiment classification of code-mixed texts, and with an F1 score of 67.1%, we demonstrate that simple convolution and attention may well produce reasonable results.
HCMS at SemEval-2020 Task 9: A Neural Approach to Sentiment Analysis for Code-Mixed Texts
d220793196
The coronavirus disease has claimed the lives of over 350,000 people and infected more than 6 million people worldwide. Several search engines have surfaced to provide researchers with additional tools to find and retrieve information from the rapidly growing corpora on COVID-19. These engines lack extraction and visualization tools necessary to retrieve and interpret complex relations inherent to scientific literature. Moreover, because these engines mainly rely upon semantic information, their ability to capture complex global relationships across documents is limited, which reduces the quality of similarity-based article recommendations for users. In this work, we present the COVID-19 Knowledge Graph (CKG), a heterogeneous graph for extracting and visualizing complex relationships between COVID-19 scientific articles. The CKG combines semantic information with document topological information for the application of similar document retrieval. The CKG is constructed using the latent schema of the data, and then enriched with biomedical entity information extracted from the unstructured text of articles using scalable AWS technologies to form relations in the graph. Finally, we propose a document similarity engine that leverages low-dimensional graph embeddings from the CKG with semantic embeddings for similar article retrieval. Analysis demonstrates the quality of relationships in the CKG and shows that it can be used to uncover meaningful information in COVID-19 scientific articles. The CKG helps power www.cord19.aws and is publicly available.
COVID-19 Knowledge Graph: Accelerating Information Retrieval and Discovery for Scientific Literature ACM Reference Format
d9781535
To speak fluently is a complex skill. In order to help the learner acquire it, we propose an electronic version of an age-old method: pattern drills (PD). While highly regarded in the fifties, pattern drills have become unpopular since then. Despite certain shortcomings, we believe in the virtues of this approach, at least with regard to the memorization of basic structures and the acquisition of fluency, the skill to produce language at a 'normal' rate. Of course, the method has to be improved, and we show here how this can be achieved. Unlike tapes or books, computers are open media, allowing for dynamic changes and taking users' performances and preferences into account. Our drill tutor, a small web application still in its prototype phase, allows for this. It is a free, electronic version of pattern drills, i.e. an exercise generator, open and adaptable to the users' ever-changing needs.
A Generic Cognitively Motivated Web-Environment to Help People to Become Quickly Fluent in a New Language
d365927
We propose a computational model of text reuse tailored for ancient literary texts, available to us often only in small and noisy samples. The model takes into account source alternation patterns, so as to be able to align even sentences with low surface similarity. We demonstrate its ability to characterize text reuse in the Greek New Testament.
A Computational Model of Text Reuse in Ancient Literary Texts
d9934746
This paper presents a discriminative parser that does not use a generative model in any way, yet whose accuracy still surpasses a generative baseline. The parser performs feature selection incrementally during training, as opposed to a priori, which enables it to work well with minimal linguistic cleverness. The main challenge in building this parser was fitting the training data into memory. We introduce gradient sampling, which increased training speed 100-fold. Our implementation is freely available at
Computational Challenges in Parsing by Classification
d8686087
We extend existing methods for automatic sentence boundary detection by leveraging multiple recognizer hypotheses in order to provide robustness to speech recognition errors. For each hypothesized word sequence, an HMM is used to estimate the posterior probability of a sentence boundary at each word boundary. The hypotheses are combined using confusion networks to determine the overall most likely events. Experiments show improved detection of sentences for conversational telephone speech, though results are mixed for broadcast news.
Improving Automatic Sentence Boundary Detection with Confusion Networks
d6364632
While there is a wide consensus in the NLP community over the modeling of temporal relations between events, mainly based on Allen's temporal logic, the question of how to annotate other types of event relations, in particular causal ones, is still open. In this work, we present annotation guidelines to capture causality between event pairs, partly inspired by TimeML. We then implement a rule-based algorithm to automatically identify explicit causal relations in the TempEval-3 corpus. Based on this annotation, we report some statistics on the behavior of causal cues in text and perform a preliminary investigation of the interaction between causal and temporal relations.
Annotating causality in the TempEval-3 corpus
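A simplified surface-level sketch of explicit causal-cue matching in the spirit of the rule-based step mentioned above; the cue list is a small invented sample and the function operates on raw strings, whereas the paper's rules apply to annotated event pairs in the TempEval-3 corpus.

```python
import re

# A few explicit causal cues with the direction of the causal link relative to the cue;
# this list is illustrative, not the paper's rule inventory.
CAUSAL_CUES = [
    (r"\bbecause of\b", "CAUSE-FOLLOWS-CUE"),
    (r"\bas a result of\b", "CAUSE-FOLLOWS-CUE"),
    (r"\bdue to\b", "CAUSE-FOLLOWS-CUE"),
    (r"\bbecause\b", "CAUSE-FOLLOWS-CUE"),
    (r"\btherefore\b", "EFFECT-FOLLOWS-CUE"),
]

def find_causal_cues(sentence):
    """Return (cue, direction, span) triples for explicit causal markers; longer cues
    are listed first so that 'because of' is not also reported as 'because'."""
    hits, taken = [], []
    for pattern, direction in CAUSAL_CUES:
        for m in re.finditer(pattern, sentence, flags=re.IGNORECASE):
            if any(not (m.end() <= s or m.start() >= e) for s, e in taken):
                continue  # skip matches overlapping an already recorded cue
            hits.append((m.group(0), direction, m.span()))
            taken.append(m.span())
    return hits

print(find_causal_cues("The match was cancelled because of heavy rain."))
```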
d2601332
Lexicon-Grammar tables, whose development was initiated by Gross (1975), constitute a very rich syntactic lexicon for French. This paper presents the work done to formalize the existing classification of distributional verbs using logical formulas, in order to support the maintenance of the lexicon. We describe the different types of defining features and their encoding in the table of classes, which gathers all the features of all verb classes. The formal definition of the defining features of each class has made it possible to represent the classification as a decision tree, helping to find the class associated with any new entry and thereby ensuring the future extension of the lexicon.
Maintaining the Lexicon-Grammar: Defining Formulas and Classification Tree
d258890935
Thanks to the great progress seen in the machine translation (MT) field in recent years, the use and perception of MT by translators need to be revisited. The main objective of this paper is to determine the perception, productivity and post-editing effort (in terms of time and number of edits) of six translators when using Statistical Machine Translation (SMT) and Neural Machine Translation (NMT) systems. This presentation focuses on how translators perceive these two systems, in order to know which one they prefer, what types of errors and problems each system presents, and how translators solve these issues. These tests are performed with the Dynamic Quality Framework (DQF) tools (quick comparison and productivity tasks) using the Google Neural Machine Translation and Microsoft Translator (SMT) APIs on two different English-into-Spanish texts, an instruction manual and a marketing webpage. Results showed that translators considerably prefer NMT over SMT. Moreover, NMT output is more adequate and fluent than SMT output.
Determining translators' perception, productivity and post-editing effort when using SMT and NMT systems
d235742760
We consider a joint information extraction (IE) model, solving named entity recognition, coreference resolution and relation extraction jointly over the whole document. In particular, we study how to inject information from a knowledge base (KB) into such an IE model, based on unsupervised entity linking. The KB entity representations used are learned from either (i) hyperlinked text documents (Wikipedia), or (ii) a knowledge graph (Wikidata), and appear complementary in raising IE performance. Representations of corresponding entity linking (EL) candidates are added to text span representations of the input document, and we experiment with (i) taking a weighted average of the EL candidate representations based on their prior (in Wikipedia), and (ii) using an attention scheme over the EL candidate list. Results demonstrate an increase of up to 5% F1-score for the evaluated IE tasks on two datasets. Despite the strong performance of the prior-based model, our quantitative and qualitative analysis reveals the advantage of using the attention-based approach.
Injecting Knowledge Base Information into End-to-End Joint Entity and Relation Extraction and Coreference Resolution
d232021489
Antidote RX is the sixth edition of Antidote, a writing assistance software package developed and marketed by Druide informatique. Antidote RX includes an advanced grammar checker, ten reference dictionaries and ten language guides. It runs on the Windows, Mac OS X and Linux operating systems.
Presentation of the Antidote RX Software
d256231270
Missing information is a common issue in dialogue summarization, where some information in the reference summaries is not covered in the generated summaries. To address this issue, we propose to utilize natural language inference (NLI) models to improve coverage while avoiding the introduction of factual inconsistencies. Specifically, we use NLI to compute fine-grained training signals to encourage the model to generate content in the reference summaries that has not been covered, as well as to distinguish between factually consistent and inconsistent generated sentences. Experiments on the DIALOGSUM and SAMSUM datasets confirm the effectiveness of the proposed approach in balancing coverage and faithfulness, validated with automatic metrics and human evaluations. Additionally, we compute the correlation between commonly used automatic metrics and human judgments in terms of three different dimensions regarding coverage and factual consistency, to provide insight into the most suitable metric for evaluating dialogue summaries. We release our source code for research purposes: https://github.com/amazon-science/AWS-SWING.
SWING: Balancing Coverage and Faithfulness for Dialogue Summarization
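A minimal sketch of the measurement step implied by the abstract above: checking which reference-summary sentences are not entailed by a generated summary. The entailment_prob function is a hypothetical placeholder (faked here with word overlap) standing in for a real NLI model; the paper turns such coverage checks into fine-grained training signals.

```python
def entailment_prob(premise, hypothesis):
    """Placeholder for an NLI model returning P(entailment); faked here with simple
    word overlap purely so the example runs."""
    p, h = set(premise.lower().split()), set(hypothesis.lower().split())
    return len(p & h) / max(len(h), 1)

def uncovered_reference_sentences(generated_summary, reference_sentences, threshold=0.5):
    """A reference sentence counts as covered if the generated summary entails it;
    the remaining sentences are the missing content the summarizer should be
    encouraged to produce."""
    return [s for s in reference_sentences
            if entailment_prob(generated_summary, s) < threshold]

generated = "Charlee is studying Portuguese theater at university."
references = [
    "Charlee is attending Portuguese theater as a subject at university.",
    "He and other students are preparing a play translated into Portuguese.",
]
print(uncovered_reference_sentences(generated, references))
```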
d202778394
We present a novel approach for generating poetry automatically for the morphologically rich Finnish language by using a genetic algorithm. The approach improves the state of the art of the previous Finnish poem generators by introducing a higher degree of freedom in terms of structural creativity. Our approach is evaluated and described within the paradigm of computational creativity, where the fitness functions of the genetic algorithm are assimilated with the notion of aesthetics. The output is considered to be a poem 81.5% of the time by human evaluators.
Generating Modern Poetry Automatically in Finnish
d1451196
The relation between gender and language has been studied by many authors; however, there is still some uncertainty regarding the influence of gender on language use in professional environments. Often, the studied data sets are too small, or the texts of individual authors are too short, to successfully capture differences in language use with respect to gender. This study draws on a larger corpus of speech transcripts from the Lithuanian Parliament to explore gender-related language differences in political debates via stylometric analysis. The experimental setup consists of stylistic features that indicate lexical style and do not require external linguistic tools, namely the most frequent words, in combination with unsupervised machine learning algorithms. Results show that gender differences in language use remain in the professional environment, not only in the usage of function words and preferred linguistic constructions, but also in the topics presented.
Stylometric Analysis of Parliamentary Speeches: Gender Dimension
d52141494
One of the big challenges connected to large-vocabulary Arabic speech recognition is the limited vocabulary, which causes a high rate of out-of-vocabulary words. The characteristics of the Arabic language are another challenge. These challenges negatively affect the performance of the resulting systems. In this work, we address these challenges by proposing a new unsupervised graph-based method. We obtained a 4.6% relative reduction in word error rate. Compared with other methods in the literature, our method gives better results and represents a major step towards solving this problem. In addition, it can easily be adapted to other languages.
Unsupervised Method for Improving Arabic Speech Recognition Systems
d232021927
The paper presents the project Semantic Network with a Wide Range of Semantic Relations and its main achievements. The ultimate objective of the project is to expand Princeton WordNet with conceptual frames that define the syntagmatic relations of verb synsets and the semantic classes of nouns felicitous to combine with particular verbs. At this stage of the work: a) over 5,000 WordNet verb synsets have been supplied with manually evaluated FrameNet semantic frames, b) 253 semantic types have been manually mapped to the appropriate WordNet concepts providing detailed ontological representation of the semantic classes of nouns.
Towards Expanding WordNet with Conceptual Frames
d248524712
We present a comprehensive work on automated veracity assessment, from dataset creation to the development of novel methods based on Natural Language Inference (NLI), focusing on misinformation related to the COVID-19 pandemic. We first describe the construction of the novel PANACEA dataset, consisting of heterogeneous claims on COVID-19 and their respective information sources. The dataset construction includes work on retrieval techniques and similarity measurements to ensure a unique set of claims. We then propose novel techniques for automated veracity assessment based on Natural Language Inference, including graph convolutional networks and attention-based approaches. We have carried out experiments on evidence retrieval and veracity assessment on the dataset using the proposed techniques, found them competitive with state-of-the-art methods, and provide a detailed discussion.
Natural Language Inference with Self-Attention for Veracity Assessment of Pandemic Claims
d248524999
Hierarchical text classification aims to leverage the label hierarchy in multi-label text classification. Existing methods encode the label hierarchy in a global view, where it is treated as a static hierarchical structure containing all labels. Since the global hierarchy is static and independent of text samples, it is hard for these methods to exploit hierarchical information. In contrast to the global hierarchy, the local hierarchy is a structured label hierarchy corresponding to each text sample; it is dynamic and relevant to the text sample, but has been ignored in previous methods. To exploit both global and local hierarchies, we propose Hierarchy-guided BERT with Global and Local hierarchies (HBGL), which utilizes the large-scale parameters and prior language knowledge of BERT to model both global and local hierarchies. Moreover, HBGL avoids the intentional fusion of semantic and hierarchical modules by directly modeling semantic and hierarchical information with BERT. Compared with the state-of-the-art method HGCLR, our method achieves significant improvement on three benchmark datasets. Our code is available at http://github.com/kongds/HBGL.
Exploiting Global and Local Hierarchies for Hierarchical Text Classification
d237940293
Pre-trained LMs have shown impressive performance on downstream NLP tasks, but we have yet to establish a clear understanding of their sophistication when it comes to processing, retaining, and applying information presented in their input. In this paper we tackle a component of this question by examining the robustness of models' ability to deploy relevant context information in the face of distracting content. We present models with cloze tasks requiring the use of critical context information, and introduce distracting content to test how robustly the models retain and use that critical information for prediction. We also systematically manipulate the nature of these distractors, to shed light on the dynamics of models' use of contextual cues. We find that although models appear in simple contexts to make predictions based on understanding and applying relevant facts from prior context, the presence of distracting but irrelevant content has a clear impact, confusing model predictions. In particular, models appear particularly susceptible to factors of semantic similarity and word position. The findings are consistent with the conclusion that LM predictions are driven in large part by superficial contextual cues, rather than by robust representations of context meaning.
Sorting through the noise: Testing robustness of information processing in pre-trained language models
d225076095
Leveraging large amounts of unlabeled data using Transformer-like architectures, like BERT, has gained popularity in recent times owing to their effectiveness in learning general representations that can then be further fine-tuned for downstream tasks to much success. However, training these models can be costly both from an economic and environmental standpoint. In this work, we investigate how to effectively use unlabeled data: by exploring the task-specific semi-supervised approach, Cross-View Training (CVT) and comparing it with task-agnostic BERT in multiple settings that include domain and task relevant English data. CVT uses a much lighter model architecture and we show that it achieves similar performance to BERT on a set of sequence tagging tasks, with lesser financial and environmental impact.
To BERT or Not to BERT: Comparing Task-specific and Task-agnostic Semi-Supervised Approaches for Sequence Tagging
d201125260
Pronoun resolution is a major area of natural language understanding. However, large-scale training sets are still scarce, since manually labelling data is costly. In this work, we introduce WIKICREM (Wikipedia CoREferences Masked), a large-scale, yet accurate dataset of pronoun disambiguation instances. We use a language-model-based approach for pronoun resolution in combination with our WIKICREM dataset. We compare a series of models on a collection of diverse and challenging coreference resolution problems, where we match or outperform previous state-of-the-art approaches on 6 out of 7 datasets, such as GAP, DPR, WNLI, PDP, WINOBIAS, and WINOGENDER. We release our model to be used off-the-shelf for pronoun disambiguation. The code can be found at https://github.com/vid-koci/bert-commonsense. The dataset and the models can be obtained from https://ora.ox.ac.uk/objects/uuid:c83e94bb-7584-41a1-aef9-85b0e764d9e3
WikiCREM: A Large Unsupervised Corpus for Coreference Resolution
d5957852
In this paper, the role of the lexicon within typical NLP-based application tasks is analysed. A large-scale semantic lexicon is studied within the framework of an NLP application. The coverage of the lexicon with respect to the target domain and a (semi-)automatic tuning approach have been evaluated. The impact of a corpus-driven inductive architecture aiming to compensate for gaps in lexical information is thus measured and discussed.
Tuning lexicons to new operational scenarios
d222178270
Multilingual transformer models like mBERT and XLM-RoBERTa have obtained great improvements for many NLP tasks on a variety of languages. However, recent works also showed that results from high-resource languages could not be easily transferred to realistic, low-resource scenarios. In this work, we study trends in performance for different amounts of available resources for the three African languages Hausa, isiXhosa and Yorùbá on both NER and topic classification. We show that in combination with transfer learning or distant supervision, these models can achieve with as little as 10 or 100 labeled sentences the same performance as baselines with much more supervised training data. However, we also find settings where this does not hold. Our discussions and additional experiments on assumptions such as time and hardware restrictions highlight challenges and opportunities in low-resource learning.
Transfer Learning and Distant Supervision for Multilingual Transformer Models: A Study on African Languages
d237940211
Argumentative structure prediction aims to establish links between textual units and label the relationship between them, forming a structured representation for a given input text. The former task, linking, has been identified by earlier works as particularly challenging, as it requires finding the most appropriate structure out of a very large search space of possible link combinations. In this paper, we improve a state-of-the-art linking model by using multi-task and multi-corpora training strategies. Our auxiliary tasks help the model to learn the role of each sentence in the argumentative structure. Combining multi-corpora training with a selective sampling strategy increases the training data size while ensuring that the model still learns the desired target distribution well. Experiments on essays written by English-as-a-foreign-language learners show that both strategies significantly improve the model's performance; for instance, we observe a 15.8% increase in the F1-macro for individual link predictions.
Multi-Task and Multi-Corpora Training Strategies to Enhance Argumentative Sentence Linking Performance
d207926231
Machine reading comprehension, the task of evaluating a machine's ability to comprehend a passage of text, has seen a surge in popularity in recent years. There are many datasets that are targeted at reading comprehension, and many systems that perform as well as humans on some of these datasets. Despite all of this interest, there is no work that systematically defines what reading comprehension is. In this work, we justify a question answering approach to reading comprehension and describe the various kinds of questions one might use to more fully test a system's comprehension of a passage, moving beyond questions that only probe local predicate-argument structures. The main pitfall of this approach is that questions can easily have surface cues or other biases that allow a model to shortcut the intended reasoning process. We discuss ways proposed in current literature to mitigate these shortcuts, and we conclude with recommendations for future dataset collection efforts.
On Making Reading Comprehension More Comprehensive
d235352777
Dialogue policy learning, a subtask that determines the content of system response generation and thereby the degree of task completion, is essential for task-oriented dialogue systems. However, the unbalanced distribution of system actions in dialogue datasets often causes difficulty in learning to generate desired actions and responses. In this paper, we propose a retrieve-and-memorize framework to enhance the learning of system actions. Specifically, we first design a neural context-aware retrieval module to retrieve multiple candidate system actions from the training set given a dialogue context. Then, we propose a memory-augmented multi-decoder network to generate the system actions conditioned on the candidate actions, which allows the network to adaptively select key information in the candidate actions and ignore noise. We conduct experiments on the large-scale multi-domain task-oriented dialogue datasets MultiWOZ 2.0 and MultiWOZ 2.1. Experimental results show that our method achieves competitive performance among several state-of-the-art models on the context-to-response generation task.
Retrieve & Memorize: Dialog Policy Learning with Multi-Action Memory
d233481119
This contribution describes a two-course module that seeks to provide humanities majors with a basic understanding of language technology and its applications using Python. The learning materials consist of interactive Jupyter Notebooks and accompanying YouTube videos, which are openly available with a Creative Commons licence.
Applied Language Technology: NLP for the Humanities
d133468229
Recent research has demonstrated that goal-oriented dialogue agents trained on large datasets can achieve striking performance when interacting with human users. In real-world applications, however, it is important to ensure that the agent performs smoothly when interacting not only with regular users but also with malicious ones who attack the system through interactions in order to achieve goals to their own advantage. In this paper, we develop algorithms to evaluate the robustness of a dialogue agent through carefully designed attacks using adversarial agents. These attacks are performed in both black-box and white-box settings. Furthermore, we demonstrate that adversarial training using our attacks can significantly improve the robustness of a goal-oriented dialogue system. In a case study of the negotiation agent developed by Lewis et al. (2017), our attacks reduced the average advantage of rewards between the attacker and the trained RL-based agent from 2.68 to −5.76 on a scale from −10 to 10 for randomized goals. Moreover, with the proposed adversarial training, we are able to improve the robustness of negotiation agents by 1.5 points on average against all our attacks.
Evaluating and Enhancing the Robustness of Dialogue Systems: A Case Study on a Negotiation Agent
d252763314
For keyword extraction from professional technical text, relevance and specificity are crucial. In order to achieve keyword extraction with both relevance and specificity, we take semantic information, sequence relations and syntactic structure into account. We extract textual semantic information using the pre-trained language model BERT; we construct a semantic association graph using sequence relations and syntactic structure, in order to capture long-distance semantic dependencies between words; and we calculate keyword weights based on a random walk algorithm and lexical knowledge, in order to take into account the relevance and specificity of keywords. Experimental results on professional text datasets show that the keywords extracted by our model have better relevance and specificity.
Keyword Extraction on Professional Technical Text
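A generic sketch of the random-walk weighting step mentioned in the abstract, using a plain co-occurrence graph and PageRank via networkx; the paper's semantic association graph additionally incorporates BERT-based semantics, sequence relations and syntactic structure, none of which is modeled here, and the example text is invented.

```python
import networkx as nx

def keyword_ranking(tokens, window=3, top_k=5):
    """Build an undirected co-occurrence graph over tokens and rank nodes with
    PageRank (the stationary distribution of a random walk); return the top-k words."""
    graph = nx.Graph()
    for i in range(len(tokens)):
        for j in range(i + 1, min(i + window, len(tokens))):
            u, v = tokens[i], tokens[j]
            if u == v:
                continue
            previous = graph.get_edge_data(u, v, default={"weight": 0})["weight"]
            graph.add_edge(u, v, weight=previous + 1)
    scores = nx.pagerank(graph, weight="weight")
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

text = ("keyword extraction ranks candidate words by building a graph of word "
        "relations and running a random walk over the graph").split()
print(keyword_ranking(text))
```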
d15359997
In this paper we present a view of natural language generation in which the control structure of the generator is clearly separated from the content decisions made during generation, allowing us to explore and compare different control strategies in a systematic way. Our approach factors control into two components, a 'generation tree' which maps out the relationships between different decisions, and an algorithm for traversing such a tree which determines which choices are actually made. We illustrate the approach with examples of stylistic control and automatic text revision using both generative and empirical techniques. We argue that this approach provides a useful basis for the theoretical study of control in generation, and a framework for implementing generators with a range of control strategies. We also suggest that this approach can be developed into tool for analysing and adapting control aspects of other advanced wide-coverage generation systems.
Modelling control in generation *
d31533866
This paper describes our approach for SemEval-2017 Task 4, Sentiment Analysis in Twitter (SAT). Its five subtasks are divided into two categories: (1) sentiment classification, i.e., predicting topic-based tweet sentiment polarity, and (2) sentiment quantification, i.e., estimating the sentiment distributions of a set of given tweets. We build a convolutional sentence classification system for the SAT task. Official results show that the results of our system are competitive.
EICA at SemEval-2017 Task 4: A Convolutional Neural Network for Topic-based Sentiment Classification
d237416535
Personas are useful for dialogue response prediction. However, the personas used in current studies are pre-defined and hard to obtain before a conversation. To tackle this issue, we study a new task, named Speaker Persona Detection (SPD), which aims to detect speaker personas based on plain conversational text. In this task, a best-matched persona is searched out from candidates given the conversational text. This is a many-to-many semantic matching task because both contexts and personas in SPD are composed of multiple sentences. The long-term dependency and the dynamic redundancy among these sentences increase the difficulty of this task. We build a dataset for SPD, dubbed Persona Match on Persona-Chat (PMPC). Furthermore, we evaluate several baseline models and propose utterance-to-profile (U2P) matching networks for this task. The U2P models operate at a fine granularity, treating both contexts and personas as sets of multiple sequences. Each sequence pair is scored, and an interpretable overall score is obtained for a context-persona pair through aggregation. Evaluation results show that the U2P models outperform their baseline counterparts significantly.
Detecting Speaker Personas from Conversational Texts
d247547046
A limitation of current neural dialog models is that they tend to suffer from a lack of specificity and informativeness in generated responses, primarily due to dependence on training data that covers a limited variety of scenarios and conveys limited knowledge. One way to alleviate this issue is to extract relevant knowledge from external sources at decoding time and incorporate it into the dialog response. In this paper, we propose a post-hoc knowledge-injection technique where we first retrieve a diverse set of relevant knowledge snippets conditioned on both the dialog history and an initial response from an existing dialog model. We construct multiple candidate responses, individually injecting each retrieved snippet into the initial response using a gradient-based decoding method, and then select the final response with an unsupervised ranking step. Our experiments in goal-oriented and knowledge-grounded dialog settings demonstrate that human annotators judge the outputs from the proposed method to be more engaging and informative compared to responses from prior dialog systems. We further show that knowledge-augmentation promotes success in achieving conversational goals in both experimental settings.
Achieving Conversational Goals with Unsupervised Post-hoc Knowledge Injection
d250390791
Earlier NLP studies on framing have focused heavily on shallow classification of issue framing, while framing effects arising from pragmatic cues remain neglected. We put forward this latter type of framing as pragmatic framing. To bridge this gap, we take presupposition-triggering adverbs such as 'again' as a study case, and investigate how different German newspapers use them to covertly evoke different attitudinal subtexts. Our study demonstrates the crucial role of presuppositions in framing, and emphasizes the necessity of more attention to pragmatic framing in future research.
"Again, Dozens of Refugees Drowned": A Computational Study of Political Framing Evoked by Presuppositions
d4612975
Alignments between natural language and Knowledge Base (KB) triples are an essential prerequisite for training machine learning approaches employed in a variety of Natural Language Processing problems. These include Relation Extraction, KB Population, Question Answering and Natural Language Generation from KB triples. Available datasets that provide those alignments are plagued by significant shortcomings: they are of limited size, they exhibit a restricted predicate coverage, and/or they are of unreported quality. To alleviate these shortcomings, we present T-REx, a dataset of large scale alignments between Wikipedia abstracts and Wikidata triples. T-REx consists of 11 million triples aligned with 3.09 million Wikipedia abstracts (6.2 million sentences). T-REx is two orders of magnitude larger than the largest available alignments dataset and covers 2.5 times more predicates. Additionally, we stress the quality of this language resource thanks to an extensive crowdsourcing evaluation. T-REx is publicly available at https://w3id.org/t-rex.
T-REx: A Large Scale Alignment of Natural Language with Knowledge Base Triples
d3861770
We present a preliminary attempt to apply the TARSQI Toolkit to the medical domain, specifically electronic health records, for use in answering temporally motivated questions.
Applying the TARSQI Toolkit to augment text mining of EHRs
d245124443
Auto-regressive neural sequence models have been shown to be effective across text generation tasks. However, their left-to-right decoding order prevents generation from being parallelized. The Insertion Transformer (Stern et al., 2019) is an attractive alternative that allows outputting multiple tokens in a single generation step. Nevertheless, due to the incompatibility between absolute positional encoding and insertion-based generation schemes, it needs to refresh the encoding of every token in the generated partial hypothesis at each step, which could be costly. We design a novel reusable positional encoding scheme for Insertion Transformers called Fractional Positional Encoding (FPE), which allows reusing representations calculated in previous steps. Empirical studies on various text generation tasks demonstrate the effectiveness of FPE, which leads to floating-point operation reduction and latency improvements on batched decoding.
Towards More Efficient Insertion Transformer with Fractional Positional Encoding
d10576751
Effective knowledge resources are critical for developing successful clinical decision support systems that alleviate the cognitive load on physicians in patient care. In this paper, we describe two new methods for building a knowledge resource of disease to medication associations. These methods use fundamentally different content and are based on advanced natural language processing and machine learning techniques. One method uses distributional semantics on large medical text, and the other uses data mining on a large number of patient records. The methods are evaluated using 25,379 unique disease-medication pairs extracted from 100 de-identified longitudinal patient records of a large multi-provider hospital system. We measured recall (R), precision (P), and F scores for positive and negative association prediction, along with coverage and accuracy. While individual methods performed well, a combined stacked classifier achieved the best performance, indicating the limitations and unique value of each resource and method. In predicting positive associations, the stacked combination significantly outperformed the baseline (a distant semi-supervised method on large medical text), achieving F scores of 0.75 versus 0.55 on the pairs seen in the patient records, and F scores of 0.69 and 0.35 on unique pairs.
Scoring Disease-Medication Associations using Advanced NLP, Ma- chine Learning, and Multiple Content Sources
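A compact sketch of the stacking idea the abstract evaluates: scores from a text-based method and a patient-record method become features for a meta-classifier. All scores and labels below are fabricated, and a real setup would use cross-validated base scores to avoid leakage.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# One row per disease-medication pair: a score from the distributional-semantics
# method, a score from the patient-record mining method, and a (fabricated) label.
text_scores   = np.array([0.9, 0.2, 0.7, 0.1, 0.8, 0.3])
record_scores = np.array([0.8, 0.4, 0.3, 0.2, 0.9, 0.1])
is_associated = np.array([1, 0, 1, 0, 1, 0])

X = np.column_stack([text_scores, record_scores])
stacker = LogisticRegression().fit(X, is_associated)

# The meta-classifier learns how much to trust each source; a new pair is scored by
# feeding both base scores through it.
new_pair = np.array([[0.6, 0.7]])
print(stacker.predict_proba(new_pair)[0][1])  # probability of a positive association
```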