| context | questions | answers |
|---|---|---|
Although Neural Machine Translation (NMT) has dominated recent research on translation tasks BIBREF0, BIBREF1, BIBREF2, NMT heavily relies on large-scale parallel data, resulting in poor performance on low-resource or zero-resource language pairs BIBREF3. Translation between these low-resource languages (e.g., Arabic$\rightarrow$Spanish) is usually accomplished by pivoting through a rich-resource language (such as English), i.e., the Arabic (source) sentence is first translated to English (pivot), which is then translated to Spanish (target) BIBREF4, BIBREF5. However, the pivot-based method requires doubled decoding time and suffers from the propagation of translation errors.

One common alternative to avoid pivoting in NMT is transfer learning BIBREF6, BIBREF7, BIBREF8, BIBREF9, which leverages a high-resource pivot$\rightarrow$target model (parent) to initialize a low-resource source$\rightarrow$target model (child) that is further optimized with a small amount of available parallel data. Although this approach has achieved success on some low-resource language pairs, it still performs very poorly in extremely low-resource or zero-resource translation scenarios. Specifically, BIBREF8 reports that without any child model training data, the performance of the parent model on the child test set is miserable.

In this work, we argue that the language space mismatch problem, also known as the domain shift problem BIBREF10, causes this zero-shot translation failure in transfer learning. Transfer learning has no explicit training process to guarantee that the source and pivot languages share the same feature distributions, so the child model inherited from the parent model fails in such a situation. For instance, as illustrated in the left of Figure FIGREF1, the points of a sentence pair with the same semantics do not overlap in the source space, so the shared decoder generates different translations, denoted by different points in the target space. Actually, transfer learning for NMT can be viewed as a multi-domain problem where each source language forms a new domain. Minimizing the discrepancy between the feature distributions of different source languages, i.e., different domains, will ensure a smooth transition between the parent and child models, as shown in the right of Figure FIGREF1. One way to achieve this goal is the fine-tuning technique, which forces the model to forget the specific knowledge from the parent data and learn new features from the child data. However, the domain shift problem still exists, and the demand for parallel child data for fine-tuning heavily hinders transfer learning for NMT in the zero-resource setting.

In this paper, we explore transfer learning in a common zero-shot scenario where there is abundant source$\leftrightarrow$pivot and pivot$\leftrightarrow$target parallel data but no source$\leftrightarrow$target parallel data. In this scenario, we propose a simple but effective transfer approach, the key idea of which is to relieve the burden of the domain shift problem by means of cross-lingual pre-training. To this end, we first investigate the performance of two existing cross-lingual pre-training methods proposed by BIBREF11 in the zero-shot translation scenario. In addition, a novel pre-training method called BRidge Language Modeling (BRLM) is designed to make full use of the source$\leftrightarrow$pivot bilingual data to obtain a universal encoder for different languages.
Once the universal encoder is constructed, we only need to train the pivot$\rightarrow$target model and then test this model in the source$\rightarrow$target direction directly. The main contributions of this paper are as follows:

- We propose a new transfer learning approach for NMT which uses cross-lingual language model pre-training to enable high performance on zero-shot translation.
- We propose a novel pre-training method called BRLM, which effectively reduces the distance between different source language spaces.
- Our proposed approach significantly improves zero-shot translation performance, consistently surpassing pivoting and multilingual approaches. Meanwhile, the performance on the supervised translation directions remains at the same level or even improves when using our method.

In recent years, zero-shot translation in NMT has attracted widespread attention in academic research. Existing methods are mainly divided into four categories: pivot-based methods, transfer learning, multilingual NMT, and unsupervised NMT.

Pivot-based methods are a common strategy to obtain a source$\rightarrow$target model by introducing a pivot language. This approach is further divided into pivoting and pivot-synthetic. While the former first translates a source language into the pivot language, which is later translated into the target language BIBREF4, BIBREF5, BIBREF12, the latter trains a source$\rightarrow$target model with pseudo data generated from source-pivot or pivot-target parallel data BIBREF13, BIBREF14. Although pivot-based methods can achieve reasonable performance, they are computationally expensive and parameter-heavy, with the number of required models growing quadratically in the number of languages, and they suffer from the error propagation problem BIBREF15.

Transfer learning was first introduced for NMT by BIBREF6, which leverages a high-resource parent model to initialize a low-resource child model. On this basis, BIBREF7 and BIBREF8 use shared vocabularies for the source/target language to improve transfer learning, while BIBREF16 relieve the vocabulary mismatch mainly by using cross-lingual word embeddings. Although these methods are successful in low-resource scenarios, they have limited effect in zero-shot translation.

Multilingual NMT (MNMT) enables training a single model that supports translation from multiple source languages into multiple target languages, even for unseen language pairs BIBREF17, BIBREF18, BIBREF19, BIBREF20, BIBREF21. Aside from simpler deployment, MNMT benefits from transfer learning where low-resource language pairs are trained together with high-resource ones. However, BIBREF22 point out that MNMT easily fails at zero-shot translation and is sensitive to hyper-parameter settings. Also, MNMT usually performs worse than the pivot-based method in the zero-shot translation setting BIBREF23.

Unsupervised NMT (UNMT) considers a harder setting, in which only large-scale monolingual corpora are available for training. Recently, many methods have been proposed to improve the performance of UNMT, including the use of denoising auto-encoders, statistical machine translation (SMT) and unsupervised pre-training BIBREF24, BIBREF25, BIBREF26, BIBREF11. Although UNMT performs well between similar languages (e.g., English-German translation), its performance between distant languages is still far from expectations.

Our proposed method belongs to transfer learning, but it differs from traditional transfer methods, which train a parent model as the starting point.
Before training a parent model, our approach fully leverages cross-lingual pre-training methods to make all source languages share the same feature space and thus enables a smooth transition for zero-shot translation.

In this section, we present our cross-lingual pre-training based transfer approach. This method is designed for a common zero-shot scenario where there is abundant source$\leftrightarrow$pivot and pivot$\leftrightarrow$target bilingual data but no source$\leftrightarrow$target parallel data, and the whole training process can be summarized step by step as follows:

- Pre-train a universal encoder with source/pivot monolingual or source$\leftrightarrow$pivot bilingual data.
- Train a pivot$\rightarrow$target parent model built on the pre-trained universal encoder with the available parallel data. During the training process, we freeze several layers of the pre-trained universal encoder to avoid the degeneracy issue BIBREF27.
- Directly translate source sentences into target sentences with the parent model, which benefits from the availability of the universal encoder.

The key difficulty of this method is to ensure that the intermediate representations of the universal encoder are language invariant. In the rest of this section, we first present two existing methods yet to be explored in zero-shot translation, and then propose a straightforward but effective cross-lingual pre-training method. In the end, we present the whole training and inference protocol for transfer.

Two existing cross-lingual pre-training methods, Masked Language Modeling (MLM) and Translation Language Modeling (TLM), have shown their effectiveness on the XNLI cross-lingual classification task BIBREF11, BIBREF28, but these methods have not been well studied on cross-lingual generation tasks in the zero-shot condition. We attempt to take advantage of the cross-lingual ability of the two methods for zero-shot translation.

Specifically, MLM adopts the Cloze objective of BERT BIBREF29 and predicts masked words that are randomly selected and replaced with the [MASK] token on a monolingual corpus. In practice, MLM takes monolingual corpora of different languages as input to find features shared across different languages. With this method, word pieces shared across all languages are mapped into a shared space, which brings the sentence representations of different languages closer BIBREF30.

Since the MLM objective is unsupervised and only requires monolingual data, TLM is designed to leverage parallel data when it is available. Actually, TLM is a simple extension of MLM, with the difference that TLM concatenates the sentence pair into a single sequence and then randomly masks words in both the source and target sentences. In this way, the model can attend either to surrounding words or to the translated sentence, implicitly encouraging the model to align the source and target language representations. Note that although each sentence pair is formed into one sequence, the positions of the target sentence are reset to count from zero.

Aside from MLM and TLM, we propose BRidge Language Modeling (BRLM) to further obtain word-level representation alignment between different languages. This method is inspired by the assumption that if the feature spaces of different languages are aligned very well, the masked words in a corrupted sentence can also be guessed from the context of the correspondingly aligned words on the other side.
To achieve this goal, BRLM is designed to strengthen the ability to infer words across languages based on alignment information, instead of inferring words within a monolingual sentence as in MLM or within the pseudo-sentence formed by concatenating the sentence pair as in TLM.

As illustrated in Figure FIGREF9, BRLM applies the shared encoder to the sentences on both sides separately. In particular, we design two network structures for BRLM, Hard Alignment (BRLM-HA) and Soft Alignment (BRLM-SA), which differ in the way the alignment information is generated. These two structures actually extend MLM into a bilingual scenario, with the difference that BRLM leverages an external aligner tool or an additional attention layer to explicitly introduce alignment information during model training.

Hard Alignment (BRLM-HA). We first use an external aligner tool on source$\leftrightarrow$pivot parallel data to extract the alignment information of each sentence pair. During model training, given a source$\leftrightarrow$pivot sentence pair, BRLM-HA randomly masks some words in the source sentence and leverages the alignment information to obtain the aligned words in the pivot sentence for the masked words. Based on the processed input, BRLM-HA adopts the Transformer BIBREF1 encoder to obtain the hidden states for the source and pivot sentences respectively. The training objective of BRLM-HA is then to predict the masked words using not only the surrounding words in the source sentence but also the encoder outputs of the aligned words. Note that this training process is also carried out in the symmetric direction, in which we mask some words in the pivot sentence and obtain the aligned words in the source sentence.

Soft Alignment (BRLM-SA). Instead of using an external aligner tool, BRLM-SA introduces an additional attention layer to learn the alignment information together with model training. In this way, BRLM-SA avoids the effect of erroneous external alignment information and enables many-to-one soft alignment during model training. Similar to BRLM-HA, the training objective of BRLM-SA is to predict the masked words using not only the surrounding words in the source sentence but also the outputs of the attention layer. In our implementation, the attention layer is a multi-head attention layer as adopted in the Transformer, where the queries come from the masked source sentence, and the keys and values come from the pivot sentence (a schematic sketch of this prediction step is given at the end of this section).

In principle, MLM and TLM can learn some implicit alignment information during model training. However, the alignment process in MLM is inefficient since the shared word pieces account for only a small proportion of the whole corpus, making it difficult to expand this shared information to align the whole corpus. TLM also puts little effort into alignment between the source and target sentences, since it concatenates the sentence pair into one sequence, making explicit alignment between the source and target infeasible. BRLM fully utilizes the alignment information to obtain better word-level representation alignment between different languages, which better relieves the burden of the domain shift problem.

We consider the typical zero-shot translation scenario in which a high-resource pivot language has parallel data with both the source and target languages, while the source and target languages have no parallel data between themselves.
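As referenced above, the following is a minimal sketch of the BRLM-SA prediction step (assuming PyTorch; the module and variable names are illustrative and not taken from the paper's implementation). The states of the masked source sentence act as queries over the pivot-side states, and the fused representation is used to predict the masked tokens.

```python
import torch
import torch.nn as nn

class BRLMSoftAlignmentHead(nn.Module):
    """Predict masked source tokens from (i) the shared encoder's source-side
    states and (ii) an attention summary over the pivot-side states."""

    def __init__(self, d_model: int, n_heads: int, vocab_size: int):
        super().__init__()
        # Queries come from the masked source sentence; keys/values from the pivot.
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.proj = nn.Linear(2 * d_model, vocab_size)

    def forward(self, src_states, piv_states, masked_positions):
        # src_states: (batch, src_len, d_model), shared encoder over the masked source
        # piv_states: (batch, piv_len, d_model), same encoder over the pivot sentence
        attn_out, _ = self.cross_attn(src_states, piv_states, piv_states)
        # Fuse the local source context with the soft-aligned pivot context.
        fused = torch.cat([src_states, attn_out], dim=-1)
        logits = self.proj(fused)  # (batch, src_len, vocab_size)
        # Only the masked positions contribute to the MLM-style loss.
        index = masked_positions.unsqueeze(-1).expand(-1, -1, logits.size(-1))
        return logits.gather(1, index)
```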
Our proposed cross-lingual pre-training based transfer approach for source$\rightarrow$target zero-shot translation is mainly divided into two phases: the pre-training phase and the transfer phase.

In the pre-training phase, we first pre-train MLM on monolingual corpora of both the source and pivot languages, and continue to pre-train TLM or the proposed BRLM on the available parallel data between the source and pivot languages, in order to build a cross-lingual encoder shared by the source and pivot languages.

In the transfer phase, we train a pivot$\rightarrow$target NMT model initialized by the cross-lingually pre-trained encoder, and finally transfer the trained NMT model to source$\rightarrow$target translation thanks to the shared encoder. Note that while training the pivot$\rightarrow$target NMT model, we freeze several layers of the cross-lingually pre-trained encoder to avoid the degeneracy issue.

For the more complicated scenario in which either the source side or the target side has multiple languages, the encoder and the decoder are also shared across the languages on each side for efficient deployment of translation between multiple languages.

We evaluate our cross-lingual pre-training based transfer approach against several strong baselines on two public datasets, Europarl BIBREF31 and MultiUN BIBREF32, which contain multi-parallel evaluation data to assess zero-shot performance. In all experiments, we use BLEU as the automatic metric for translation evaluation.

The statistics of the Europarl and MultiUN corpora are summarized in Table TABREF18. For the Europarl corpus, we evaluate on French-English-Spanish (Fr-En-Es), German-English-French (De-En-Fr) and Romanian-English-German (Ro-En-De), where English acts as the pivot language, its left side is the source language, and its right side is the target language. We remove the multi-parallel sentences between different training corpora to ensure zero-shot settings. We use devtest2006 as the validation set and test2006 as the test set for Fr$\rightarrow$Es and De$\rightarrow$Fr. For the distant language pair Ro$\rightarrow$De, we extract 1,000 overlapping sentences from newstest2016 as the test set and 2,000 overlapping sentences split from the training set as the validation set, since there are no official validation and test sets. For the vocabulary, we use 60K sub-word tokens based on Byte Pair Encoding (BPE) BIBREF33.

For the MultiUN corpus, we use four languages: English (En) is set as the pivot language, which has parallel data with the other three languages, which in turn have no parallel data between each other. The three languages are Arabic (Ar), Spanish (Es), and Russian (Ru), and mutual translation between them constitutes six zero-shot translation directions for evaluation. We use 80K BPE splits as the vocabulary. Note that all sentences are tokenized by the tokenize.perl script, and we lowercase all data to avoid a large vocabulary for the MultiUN corpus.

We use traditional transfer learning, the pivot-based method and multilingual NMT as our baselines. For a fair comparison, the Transformer-big model with 1024 embedding/hidden units, 4096 feed-forward filter size, 6 layers and 8 heads per layer is adopted for all translation models in our experiments. We set the batch size to 2400 and limit sentence length to 100 BPE tokens. We set $\text{attn}\_\text{drop}=0$ (a dropout rate on each attention head), which is favorable to zero-shot translation and has no effect on the supervised translation directions BIBREF22.
For model initialization, we use Facebook's cross-lingual pre-trained models released with XLM to initialize the encoder part, and the remaining parameters are initialized with Xavier uniform initialization. We employ the Adam optimizer with $\text{lr}=0.0001$, $t_{\text{warm}\_\text{up}}=4000$ and $\text{dropout}=0.1$. At decoding time, we generate greedily with length penalty $\alpha=1.0$.

Regarding MLM, TLM and BRLM, as mentioned in the pre-training phase of the transfer protocol, we first pre-train MLM on monolingual data of both the source and pivot languages, then leverage the parameters of MLM to initialize TLM and the proposed BRLM, which are further optimized with source-pivot bilingual data. In our experiments, we use MLM+TLM and MLM+BRLM to denote this training process. For the masking strategy during training, following BIBREF29, $15\%$ of BPE tokens are selected to be masked. Among the selected tokens, $80\%$ of them are replaced with the [MASK] token, $10\%$ are replaced with a random BPE token, and $10\%$ are left unchanged. The prediction accuracy of masked words is used as a stopping criterion in the pre-training stage. Besides, we use the fast_align tool BIBREF34 to extract word alignments for BRLM-HA.

Tables TABREF19 and TABREF26 report zero-shot results on the Europarl and MultiUN evaluation sets, respectively. We compare our approaches with the related approaches of pivoting, multilingual NMT (MNMT) BIBREF19, and cross-lingual transfer without pre-training BIBREF16. The results show that our approaches consistently outperform the other approaches across languages and datasets, and especially surpass pivoting, which is a strong baseline in the zero-shot scenario that multilingual NMT systems often fail to beat BIBREF19, BIBREF20, BIBREF23. Pivoting translates source to pivot and then to target in two steps, resulting in an inefficient translation process. Our approaches use one encoder-decoder model to translate between any zero-shot directions, which is more efficient than pivoting. Regarding the comparison between transfer approaches, our cross-lingual pre-training based transfer outperforms the transfer method that does not use pre-training by a large margin.

Regarding the comparison between the baselines in Table TABREF19, we find that pivoting is the strongest baseline, with a significant advantage over the other two baselines. Cross-lingual transfer for languages without shared vocabularies BIBREF16 shows the worst performance because it does not use the source$\leftrightarrow$pivot parallel data, which is utilized as a beneficial supervised signal by the other two baselines.

Our best approach, MLM+BRLM-SA, achieves significantly better performance than all baselines in the zero-shot directions, improving by 0.9-4.8 BLEU points over the strong pivoting baseline. Meanwhile, in the supervised pivot$\rightarrow$target direction, our approaches perform even better than the original supervised Transformer thanks to the shared encoder trained on both large-scale monolingual data and parallel data between multiple languages.

MLM alone, which does not use source$\leftrightarrow$pivot parallel data, performs much better than cross-lingual transfer and achieves comparable results to pivoting. When MLM is combined with TLM or the proposed BRLM, the performance is further improved.
MLM+BRLM-SA performs the best, and is better than MLM+BRLM-HA, indicating that soft alignment is more helpful than hard alignment for cross-lingual pre-training.

As with the experimental results on Europarl, MLM+BRLM-SA performs the best among all proposed cross-lingual pre-training based transfer approaches, as shown in Table TABREF26. When comparing systems consisting of one encoder-decoder model for all zero-shot translation, our approaches perform significantly better than MNMT BIBREF19.

Although it is challenging for one model to translate all zero-shot directions between the multiple distant language pairs of MultiUN, MLM+BRLM-SA still achieves better performance on Es$\rightarrow$Ar and Es$\rightarrow$Ru than the strong pivoting$_{\rm m}$, which uses MNMT to translate source to pivot and then to target in two separate steps, with each step receiving the supervised signal of parallel corpora. Our approaches surpass pivoting$_{\rm m}$ in all zero-shot directions by adding back translation BIBREF33 to generate pseudo-parallel sentences for all zero-shot directions based on our pre-trained models such as MLM+BRLM-SA, and further training our universal encoder-decoder model with these pseudo data. BIBREF22 introduce back translation into MNMT, while we adopt it in our transfer approaches. Finally, our best MLM+BRLM-SA with back translation outperforms pivoting$_{\rm m}$ by 2.4 BLEU points on average, and outperforms MNMT BIBREF22 by 4.6 BLEU points on average. Again, in the supervised translation directions, MLM+BRLM-SA with back translation also achieves better performance than the original supervised Transformer.

We first evaluate the representational invariance across languages for all cross-lingual pre-training methods. Following BIBREF23, we adopt a max-pooling operation to collect the sentence representation of each encoder layer for all source-pivot sentence pairs in the Europarl validation sets. Then we calculate the cosine similarity for each sentence pair and average all cosine scores. As shown in Figure FIGREF27, we observe that MLM+BRLM-SA has the most stable and similar cross-lingual representations of sentence pairs on all layers, and it also achieves the best performance in zero-shot translation. This demonstrates that better cross-lingual representations benefit the transfer learning process. Besides, MLM+BRLM-HA is not as strong as MLM+BRLM-SA and is even worse than MLM+TLM on Fr-En, since MLM+BRLM-HA may suffer from incorrect alignment knowledge from the external aligner tool. We also find an interesting phenomenon: as the number of layers increases, the cosine similarity decreases.

We further sample an English-Russian sentence pair from the MultiUN validation sets and visualize the cosine similarity between hidden states of the top encoder layer to further investigate the differences among all cross-lingual pre-training methods. As shown in Figure FIGREF38, the hidden states generated by MLM+BRLM-SA have higher similarity for aligned word pairs. This indicates that MLM+BRLM-SA obtains better word-level representation alignment between the source and pivot languages, which better relieves the burden of the domain shift problem.

Freezing parameters is a common strategy to avoid catastrophic forgetting in transfer learning BIBREF27.
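For instance, a minimal sketch of this strategy (assuming a PyTorch-style encoder that exposes its layers as a list; names are illustrative) for freezing the first few pre-trained encoder layers before training the pivot$\rightarrow$target parent model:

```python
def freeze_encoder_layers(encoder, n_frozen=4):
    """Freeze the first n_frozen layers of the pre-trained encoder so they are
    not updated while training the pivot->target parent model; the remaining
    layers and the decoder stay trainable."""
    for layer in encoder.layers[:n_frozen]:
        for param in layer.parameters():
            param.requires_grad = False
```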
Table TABREF43 shows the performance of transfer learning when freezing different numbers of layers, on the MultiUN test set, in which En$\rightarrow$Ru denotes the parent model, Ar$\rightarrow$Ru and Es$\rightarrow$Ru are the two child models, and all models are based on MLM+BRLM-SA. We find that updating all parameters during training causes a notable drop in the zero-shot directions due to catastrophic forgetting. On the contrary, freezing all the parameters leads to a decline in the supervised direction because the language features extracted during pre-training are not sufficient for the MT task. Freezing the first four layers of the Transformer shows the best performance and keeps the balance between pre-training and fine-tuning.

In this paper, we propose a cross-lingual pre-training based transfer approach for the challenging zero-shot translation task, in which the source and target languages have no parallel data, while they both have parallel data with a high-resource pivot language. With the aim of building language-invariant representations between the source and pivot languages for a smooth transfer from the parent model of the pivot$\rightarrow$target direction to the child model of the source$\rightarrow$target direction, we introduce one monolingual pre-training method and two bilingual pre-training methods to construct a universal encoder for the source and pivot languages. Experiments on public datasets show that our approaches significantly outperform several strong baseline systems and manifest language invariance in both sentence-level and word-level neural representations.

We would like to thank the anonymous reviewers for their helpful comments. This work was supported by the National Key R&D Program of China (Grant No. 2016YFE0132100) and the National Natural Science Foundation of China (Grant Nos. 61525205, 61673289). This work was also partially supported by Alibaba Group through the Alibaba Innovative Research Program and the Priority Academic Program Development (PAPD) of Jiangsu Higher Education Institutions.
|
[
"which multilingual approaches do they compare with?",
"what are the pivot-based baselines?",
"which datasets did they experiment with?",
"what language pairs are explored?"
] |
[
[
"",
""
],
[
"",
""
],
[
"",
""
],
[
"De-En, En-Fr, Fr-En, En-Es, Ro-En, En-De, Ar-En, En-Ru",
""
]
] |
Named entity recognition is an important task of natural language processing, featured in many popular text processing toolkits. This area of natural language processing has been actively studied in recent decades, and the advent of deep learning reinvigorated the research on more effective and accurate models. However, most existing approaches require large annotated corpora. To the best of our knowledge, no such work has been done for the Armenian language, and in this work we address several problems, including the creation of a corpus for training machine learning models, the development of a gold-standard test corpus, and the evaluation of the effectiveness of established approaches for the Armenian language.

Considering the cost of creating a manually annotated named entity corpus, we focused on alternative approaches. The lack of named entity corpora is a common problem for many languages, attracting the attention of many researchers around the globe. Projection-based transfer schemes have been shown to be very effective (e.g. BIBREF0, BIBREF1, BIBREF2), using a resource-rich language's corpora to generate annotated data for the low-resource language. In this approach, the annotations of the high-resource language are projected onto the corresponding tokens of the parallel low-resource language's texts. This strategy can be applied to language pairs that have parallel corpora. However, this approach would not work for Armenian, as we did not have access to a sufficiently large parallel corpus with a resource-rich language.

Another popular approach is using Wikipedia. Klesti Hoxha and Artur Baxhaku employ gazetteers extracted from Wikipedia to generate an annotated corpus for Albanian BIBREF3, and Weber and Pötzl propose a rule-based system for German that leverages the information from Wikipedia BIBREF4. However, the latter relies on external tools such as part-of-speech taggers, making it nonviable for the Armenian language.

Nothman et al. generated a silver-standard corpus for 9 languages by extracting Wikipedia article texts with outgoing links and turning those links into named entity annotations based on the target article's type BIBREF5. Sysoev and Andrianov used a similar approach for the Russian language BIBREF6, BIBREF7. Based on its success for a wide range of languages, our choice fell on this model to tackle automated data generation and annotation for the Armenian language.

Aside from the lack of training data, we also address the absence of a benchmark dataset of Armenian texts for named entity recognition. We propose a gold-standard corpus with manual annotation of the CoNLL named entity categories: person, location, and organization BIBREF8, BIBREF9, hoping it will be used to evaluate future named entity recognition models.

Furthermore, popular entity recognition models were applied to the mentioned data in order to obtain baseline results for future research in the area. Along with the datasets, we developed GloVe BIBREF10 word embeddings to train and evaluate the deep learning models in our experiments.

The contributions of this work are (i) the silver-standard training corpus, (ii) the gold-standard test corpus, (iii) GloVe word embeddings, and (iv) baseline results for 3 different models on the proposed benchmark dataset. All aforementioned resources are available on GitHub.

We used Sysoev and Andrianov's modification of the Nothman et al. approach to automatically generate data for training a named entity recognizer.
This approach uses links between Wikipedia articles to generate sequences of named-entity annotated tokens. The main steps of the dataset extraction system are described in Figure FIGREF3.

First, each Wikipedia article is assigned a named entity class (e.g. the article Քիմ Քաշքաշյան (Kim Kashkashian) is classified as PER (person), Ազգերի լիգա (League of Nations) as ORG (organization), Սիրիա (Syria) as LOC, etc.). One of the core differences between our approach and Nothman's system is that we do not rely on manual classification of articles and do not use inter-language links to project article classifications across languages. Instead, our classification algorithm uses only the subclass of labels of an article's first instance of value in its Wikidata entry, which are, incidentally, language-independent and thus can be used for any language.

Then, outgoing links in articles are assigned the type of the article they lead to. Sentences are included in the training corpus only if they contain at least one named entity and all contained capitalized words have an outgoing link to an article of known type. Since in Wikipedia articles only the first mention of each entity is linked, this approach becomes very restrictive, and in order to include more sentences, additional links are inferred. This is accomplished by compiling a list of common aliases for articles corresponding to named entities, and then finding text fragments matching those aliases to assign a named entity label. An article's aliases include its title, titles of disambiguation pages containing the article, and texts of links leading to the article (e.g. Լենինգրադ (Leningrad), Պետրոգրադ (Petrograd), Պետերբուրգ (Peterburg) are aliases for Սանկտ Պետերբուրգ (Saint Petersburg)). The list of aliases is compiled for all PER, ORG, LOC articles.

After that, link boundaries are adjusted by removing the labels for expressions in parentheses and the text after a comma, and in some cases breaking the linked text into separate named entities if it contains a comma. For example, [LOC Աբովյան (քաղաք)] (Abovyan (town)) is reworked into [LOC Աբովյան] (քաղաք).

Instead of manually classifying Wikipedia articles as was done in Nothman et al., we developed a rule-based classifier that uses an article's Wikidata instance of and subclass of attributes to find the corresponding named entity type.

The classification could be done using solely instance of labels, but these labels are unnecessarily specific for the task, and building a mapping on them would require more time-consuming and meticulous work. Therefore, we classified articles based on their first instance of attribute's subclass of values (a schematic sketch of this rule is given below). Table TABREF4 displays the mapping between these values and named entity types. Using higher-level subclass of values was not an option, as they were often too general, making it impossible to derive the correct named entity category.

Using the algorithm described above, we generated 7455 annotated sentences with 163247 tokens based on the 20 February 2018 dump of Armenian Wikipedia.

The generated data is still significantly smaller than the manually annotated corpora from CoNLL 2002 and 2003. For comparison, the train set of the English CoNLL 2003 corpus contains 203621 tokens and the German one 206931, while the Spanish and Dutch corpora from CoNLL 2002 contain 273037 and 218737 lines respectively.
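Returning to the article-classification rule referenced above, the following is a minimal sketch of that step (plain Python; the mapping entries and the shape of the Wikidata record are illustrative assumptions, and the full mapping is the one given in Table TABREF4).

```python
# Illustrative subset of a "subclass of" -> named entity type mapping;
# the actual mapping used in this work is listed in Table TABREF4.
SUBCLASS_TO_TYPE = {
    "human": "PER",
    "organization": "ORG",
    "geographic region": "LOC",
    "city": "LOC",
}

def classify_article(wikidata_entry):
    """Assign PER/ORG/LOC (or None) to an article using only its Wikidata data.

    wikidata_entry is assumed to expose the article's "instance of" values,
    each carrying the "subclass of" labels of that value.
    """
    instance_of = wikidata_entry.get("instance_of", [])
    if not instance_of:
        return None
    # Only the first "instance of" value is used, as described above.
    for subclass_label in instance_of[0].get("subclass_of", []):
        if subclass_label in SUBCLASS_TO_TYPE:
            return SUBCLASS_TO_TYPE[subclass_label]
    return None
```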
The smaller size of our generated data can be attributed to the strict selection of candidate sentences as well as simply to the relatively small size of Armenian Wikipedia.

The accuracy of annotation in the generated corpus heavily relies on the quality of links in Wikipedia articles. During generation, we assumed that the first mention of every named entity has an outgoing link to its article; however, this was not always the case in the actual source data, and as a result the train set contained sentences in which not all named entities are labeled. Annotation inaccuracies also stemmed from wrongly assigned link boundaries (for example, in the Wikipedia article Արթուր Ուելսլի Վելինգթոն (Arthur Wellesley) there is a link to the Napoleon article with the text "է Նապոլեոնը" ("Napoleon is"), when it should be "Նապոլեոնը" ("Napoleon")). Another kind of common annotation error occurred when a named entity appeared inside a link not targeting a LOC, ORG, or PER article (e.g. "ԱՄՆ նախագահական ընտրություններում" ("USA presidential elections") is linked to the article ԱՄՆ նախագահական ընտրություններ 2016 (United States presidential election, 2016), and as a result [LOC ԱՄՆ] (USA) is lost).

In order to evaluate the models trained on the generated data, we manually annotated a named entity dataset comprising 53453 tokens and 2566 sentences selected from over 250 news texts from ilur.am. This dataset is comparable in size to the test sets of other languages (Table TABREF10). Included sentences are from political, sports, local and world news (Figures FIGREF8, FIGREF9), covering the period between August 2012 and July 2018. The dataset provides annotations for 3 popular named entity classes: people (PER), organizations (ORG), and locations (LOC), and is released in CoNLL03 format with the IOB tagging scheme. Tokens and sentences were segmented according to the UD standards for the Armenian language BIBREF11.

During annotation, we generally relied on the categories and guidelines assembled by BBN Technologies for the TREC 2002 question answering track. Only named entities corresponding to BBN's person name category were tagged as PER. Those include proper names of people, including fictional people, first and last names, family names, and unique nicknames. Similarly, organization name categories, including company names, government agencies, educational and academic institutions, sports clubs, musical ensembles and other groups, hospitals, museums, and newspaper names, were marked as ORG. However, unlike BBN, we did not mark adjectival forms of organization names as named entities. BBN's gpe name, facility name, and location name categories were combined and annotated as LOC.

We ignored entities of other categories (e.g. works of art, law, or events), including cases where an ORG, LOC or PER entity was inside an entity of an extraneous type (e.g. ՀՀ (RA) in ՀՀ Քրեական Օրենսգիրք (RA Criminal Code) was not annotated as LOC).

Quotation marks around a named entity were not annotated unless those quotations were a part of that entity's full official name (e.g. «Նաիրիտ գործարան» ՓԲԸ ("Nairit Plant" CJSC)).

Depending on context, metonyms such as Կրեմլ (Kremlin) and Բաղրամյան 26 (Baghramyan 26) were annotated as ORG when referring to the respective government agencies. Likewise, country or city names were also tagged as ORG when referring to sports teams representing them.

Apart from the datasets, we also developed word embeddings for the Armenian language, which we used in our experiments to train and evaluate named entity recognition algorithms.
Considering their ability to capture semantic regularities, we used GloVe to train word embeddings. We assembled a dataset of Armenian texts containing 79 million tokens from the articles of Armenian Wikipedia, the Armenian Soviet Encyclopedia, a subcorpus of the Eastern Armenian National Corpus BIBREF12, and over a dozen Armenian news websites and blogs. The included texts covered topics such as economics, politics, weather forecasts, IT, law, and society, coming from non-fiction as well as fiction genres.

Similar to the original embeddings published for the English language, we release 50-, 100-, 200- and 300-dimensional word vectors for Armenian with a vocabulary size of 400000. Before training, all the words in the dataset were lowercased. For the final models we used the following training hyperparameters: a window size of 15 and 20 training epochs.

In this section we describe a number of experiments aimed at comparing the performance of popular named entity recognition algorithms on our data. We trained and evaluated Stanford NER, spaCy 2.0, and a recurrent model similar to BIBREF13, BIBREF14 that uses bidirectional LSTM cells for character-based feature extraction and CRF, described in Guillaume Genthial's Sequence Tagging with Tensorflow blog post BIBREF15.

Stanford NER is a conditional random fields (CRF) classifier based on lexical and contextual features such as the current word, character-level n-grams of up to length 6 at its beginning and end, previous and next words, word shape and sequence features BIBREF16.

spaCy 2.0 uses a CNN-based transition system for named entity recognition. For each token, a Bloom embedding is calculated based on its lowercase form, prefix, suffix and shape; then, using residual CNNs, a contextual representation of that token is extracted that potentially draws information from up to 4 tokens on each side BIBREF17. Each update of the transition system's configuration is a classification task that uses the contextual representation of the top token on the stack, the preceding and succeeding tokens, the first two tokens of the buffer, and their leftmost, second leftmost, rightmost, and second rightmost children. The valid transition with the highest score is applied to the system. This approach reportedly performs within 1% of the current state-of-the-art for English. In our experiments, we tried out 50-, 100-, 200- and 300-dimensional pre-trained GloVe embeddings. Due to time constraints, we did not tune the rest of the hyperparameters and used their default values.

The main model that we focused on was the recurrent model with a CRF top layer, and the above-mentioned methods served mostly as baselines. The distinctive feature of this approach is the way contextual word embeddings are formed. For each token separately, to capture its word shape features, a character-based representation is extracted using a bidirectional LSTM BIBREF18. This representation is concatenated with a distributional word vector such as GloVe, forming an intermediate word embedding. Using another bidirectional LSTM cell on these intermediate word embeddings, the contextual representation of tokens is obtained (Figure FIGREF17). Finally, a CRF layer labels the sequence of these contextual representations. In our experiments, we used Guillaume Genthial's implementation of the algorithm.
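A highly simplified sketch of this architecture (assuming PyTorch; the CRF transition layer is omitted and would be applied on top of the emission scores, e.g. via a third-party package such as torchcrf; all names and default sizes are illustrative):

```python
import torch
import torch.nn as nn

class CharWordTagger(nn.Module):
    """Char-biLSTM + word-biLSTM token encoder; the emission scores would be
    fed to a CRF layer (omitted here, e.g. from the torchcrf package)."""

    def __init__(self, char_vocab_size, pretrained_vectors, char_dim=50,
                 char_hidden=100, word_hidden=300, n_tags=7):
        super().__init__()
        self.char_emb = nn.Embedding(char_vocab_size, char_dim)
        self.char_lstm = nn.LSTM(char_dim, char_hidden // 2,
                                 bidirectional=True, batch_first=True)
        # GloVe vectors; kept trainable (updating them during training helped).
        self.word_emb = nn.Embedding.from_pretrained(pretrained_vectors, freeze=False)
        self.word_lstm = nn.LSTM(pretrained_vectors.size(1) + char_hidden,
                                 word_hidden // 2,
                                 bidirectional=True, batch_first=True)
        self.emissions = nn.Linear(word_hidden, n_tags)  # 7 IOB tags

    def forward(self, char_ids, word_ids):
        # char_ids: (batch, seq_len, max_word_len); word_ids: (batch, seq_len)
        b, s, w = char_ids.shape
        char_vecs = self.char_emb(char_ids.view(b * s, w))
        _, (h, _) = self.char_lstm(char_vecs)           # final states, both directions
        char_repr = torch.cat([h[0], h[1]], dim=-1).view(b, s, -1)
        # Intermediate word embedding: char-based representation + GloVe vector.
        word_repr = torch.cat([self.word_emb(word_ids), char_repr], dim=-1)
        context, _ = self.word_lstm(word_repr)          # contextual token representation
        return self.emissions(context)                  # (batch, seq_len, n_tags)
```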
We set the size of the character-based biLSTM to 100 and the size of the second biLSTM network to 300.

Experiments were carried out using the IOB tagging scheme, with a total of 7 class tags: O, B-PER, I-PER, B-LOC, I-LOC, B-ORG, I-ORG.

We randomly selected 80% of the generated annotated sentences for training and used the other 20% as a development set. The models with the best F1 score on the development set were tested on the manually annotated gold dataset.

Table TABREF19 shows the average scores of the evaluated models. The highest F1 score was achieved by the recurrent model using a batch size of 8 and the Adam optimizer with an initial learning rate of 0.001. Updating word embeddings during training also noticeably improved the performance. GloVe word vector models of four different sizes (50, 100, 200, and 300) were tested, with vectors of size 50 producing the best results (Table TABREF20).

For the spaCy 2.0 named entity recognizer, the same word embedding models were tested. However, in this case the performance of 200-dimensional embeddings was highest (Table TABREF21). Unsurprisingly, both deep learning models outperformed the feature-based Stanford recognizer in recall; the latter however demonstrated noticeably higher precision.

It is clear that the development set of automatically generated examples was not an ideal indicator of the models' performance on the gold-standard test set. Higher development set scores often led to lower test scores, as seen in the evaluation results for spaCy 2.0 and Char-biLSTM+biLSTM+CRF (Tables TABREF21 and TABREF20). Analysis of errors on the development set revealed that many were caused by the incompleteness of annotations, when named entity recognizers correctly predicted entities that were absent from the annotations (e.g. [ԽՍՀՄ-ի LOC] (USSR's), [Դինամոն ORG] (the_Dinamo), [Պիրենեյան թերակղզու LOC] (Iberian Peninsula's), etc.). Similarly, the recognizers often correctly ignored non-entities that are incorrectly labeled in the data (e.g. [օսմանների PER], [կոնսերվատորիան ORG], etc.).

Generally, the tested models demonstrated relatively high precision in recognizing tokens that start named entities, but failed to do so with descriptor words for organizations and, to a certain degree, locations. The confusion matrix for one of the trained recurrent models illustrates that difference (Table TABREF22). This can be partly attributed to the quality of the generated data: descriptor words are sometimes superfluously labeled (e.g. [Հավայան կղզիների տեղաբնիկները LOC] (the indigenous people of Hawaii)), which is likely caused by the inconsistent style of linking in Armenian Wikipedia (in the article ԱՄՆ մշակույթ (Culture of the United States), the linked text fragment "Հավայան կղզիների տեղաբնիկները" ("the indigenous people of Hawaii") leads to the article Հավայան կղզիներ (Hawaii)).

We release two named-entity annotated datasets for the Armenian language: a silver-standard corpus for training NER models, and a gold-standard corpus for testing. It is worth underlining the importance of the latter corpus, as we aim for it to serve as a benchmark for future named entity recognition systems designed for the Armenian language. Along with the corpora, we publish GloVe word vector models trained on a collection of Armenian texts.

Additionally, to establish the applicability of Wikipedia-based approaches for the Armenian language, we provide evaluation results for 3 different named entity recognition systems trained and tested on our datasets.
The results reinforce the ability of deep learning approaches to achieve relatively high recall values for this specific task, as well as the benefit of using character-derived embeddings alongside conventional word embeddings.

There are several avenues of future work. Since Nothman et al. 2013, more efficient methods of exploiting Wikipedia have been proposed, namely WiNER BIBREF19, which could help increase both the quantity and quality of the training corpus. Another potential area of work is the further enrichment of the benchmark test set with additional annotation of other classes such as MISC or more fine-grained types (e.g. CITY, COUNTRY, REGION, etc. instead of LOC).
|
[
"what ner models were evaluated?",
"what is the source of the news sentences?",
"did they use a crowdsourcing platform for manual annotations?"
] |
[
[
"",
""
],
[
"",
""
],
[
"",
""
]
] |
“I'm supposed to trust the opinion of a MS minion? The people that produced Windows ME, Vista and 8? They don't even understand people, yet they think they can predict the behavior of new, self-guiding AI?” –anonymous

“I think an AI would make it easier for Patients to confide their information because by nature, a robot cannot judge them. Win-win? :D” –anonymous

Dogmatism describes the tendency to lay down opinions as incontrovertibly true, without respect for conflicting evidence or the opinions of others BIBREF0. Which user is more dogmatic in the examples above? This question is simple for humans. Phrases like “they think” and “they don't even understand” suggest an intractability of opinion, while “I think” and “win-win?” suggest the opposite. Can we train computers to draw similar distinctions? Work in psychology has called out many aspects of dogmatism that can be modeled computationally via natural language, such as over-confidence and strong emotions BIBREF1.

We present a statistical model of dogmatism that addresses two complementary goals. First, we validate psychological theories by examining the predictive power of the feature sets that guide the model's predictions. For example, do linguistic signals of certainty help to predict that a post is dogmatic, as theory would suggest? Second, we apply our model to answer four questions:

R1: What kinds of topics (e.g., guns, LGBT) attract the highest levels of dogmatism?

R2: How do dogmatic beliefs cluster?

R3: How does dogmatism influence a conversation on social media?

R4: How do other user behaviors (e.g., frequency and breadth of posts) relate to dogmatism?

We train a predictive model to classify dogmatic posts from Reddit, one of the most popular discussion communities on the web. Posts on Reddit capture discussion and debate across a diverse set of domains and topics – users talk about everything from climate change and abortion, to world news and relationship advice, to the future of artificial intelligence. As a prerequisite to training our model, we have created a corpus of 5,000 Reddit posts annotated with levels of dogmatism, which we are releasing to share with other researchers.

Using the model, we operationalize key domain-independent aspects of psychological theories of dogmatism drawn from the literature. We find these features have predictive power that largely supports the underlying theory. For example, posts that use less confident language tend to be less dogmatic. We also discover evidence for new attributes of dogmatism. For example, dogmatic posts tend not to verbalize cognition, through terms such as “I think,” “possibly,” or “might be.”

Our model is trained on only 5,000 annotated posts, but once trained, we use it to analyze millions of other Reddit posts to answer our research questions. We find that a diverse set of topics are colored by dogmatic language (e.g., people are dogmatic about religion, but also about LGBT issues). Further, we find some evidence for dogmatism as a deeper personality trait – people who are strongly dogmatic about one topic are more likely to express dogmatic views about others as well. Finally, in conversation, we discover that one user's dogmatism tends to bring out dogmatism in their conversational partner, forming a vicious cycle.

Posts on Reddit capture debate and discussion across a diverse set of topics, making them a natural starting point for untangling domain-independent linguistic features of dogmatism.

Data collection.
Subreddits are sub-communities on Reddit oriented around specific interests or topics, such as technology or politics. Sampling from Reddit as a whole would bias the model towards the most commonly discussed content. But by sampling posts from individual subreddits, we can control the kinds of posts we use to train our model. To collect a diverse training dataset, we randomly sampled 1000 posts each from the subreddits politics, business, science, and AskReddit, and 1000 additional posts from the Reddit frontpage. All posts in our sample appeared between January 2007 and March 2015, and to control for length effects, contain between 300 and 400 characters. This results in a total training dataset of 5000 posts.

Dogmatism annotations. Building a useful computational model requires labeled training data. We labeled the Reddit dataset using crowdworkers on Amazon Mechanical Turk (AMT), creating the first public corpus annotated with levels of dogmatism. We asked crowdworkers to rate levels of dogmatism on a 5-point Likert scale, as supported by similar annotation tasks in prior work BIBREF2. Concretely, we gave crowdworkers the following task: Given a comment, imagine you hold a well-informed, different opinion from the commenter in question. We'd like you to tell us how likely that commenter would be to engage you in a constructive conversation about your disagreement, where you each are able to explore the other's beliefs. The options are:

(5): It's unlikely you'll be able to engage in any substantive conversation. When you respectfully express your disagreement, they are likely to ignore you or insult you or otherwise lower the level of discourse.

(4): They are deeply rooted in their opinion, but you are able to exchange your views without the conversation degenerating too much.

(3): It's not likely you'll be able to change their mind, but you're easily able to talk and understand each other's point of view.

(2): They may have a clear opinion about the subject, but would likely be open to discussing alternative viewpoints.

(1): They are not set in their opinion, and it's possible you might change their mind. If the comment does not convey an opinion of any kind, you may also select this option.

To ensure quality work, we restricted the task to Masters workers and provided examples corresponding to each point on the scale. Including examples in a task has been shown to significantly increase the agreement and quality of crowdwork BIBREF3. For instance, here is an example of a highly dogmatic (5) comment: I won't be happy until I see the executive suite of BofA, Wells, and all the others, frog-marched into waiting squad cars. It's ALREADY BEEN ESTABLISHED that...

And a minimally dogmatic (1) comment: I agree. I would like to compile a playlist for us trance yogi's, even if you just would like to experiment with it. Is there any preference on which platform to use?

Each comment has been annotated by three independent workers on AMT, which is enough to produce reliable results in most labeling tasks BIBREF4. To compute an aggregate measure of dogmatism for each comment, we summed the scores of all three workers. We show the resulting distribution of annotations in Figure 1.

Inter-annotator agreement. To evaluate the reliability of the annotations we compute Krippendorff's $\alpha$, a measure of agreement designed for variable levels of measurement such as a Likert scale BIBREF5.
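A minimal sketch of this agreement computation (assuming the third-party krippendorff Python package; the example ratings are made up for illustration):

```python
import numpy as np
import krippendorff  # third-party package: pip install krippendorff

# One row per AMT worker, one column per comment; np.nan marks a missing rating.
ratings = np.array([
    [5, 2, 4, 1, np.nan],
    [4, 1, 4, 2, 3],
    [5, 2, 3, 1, 3],
])

# Likert-style scores call for the ordinal level of measurement.
alpha = krippendorff.alpha(reliability_data=ratings,
                           level_of_measurement="ordinal")
print(f"Krippendorff's alpha: {alpha:.2f}")
```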
An $\alpha$ of 0 indicates agreement indistinguishable from chance, while an $\alpha$ of 1 indicates perfect agreement. Across all annotations we find $\alpha=0.44$. While workers agree much more than chance, clearly dogmatism is also subjective. In fact, when we examine only the middle two quartiles of the dogmatism annotations, we find agreement is no better than chance. Alternatively, when we measure agreement only among the top and bottom quartiles of annotations, we find agreement of $\alpha=0.69$. This suggests comments with scores that are only slightly dogmatic are unreliable and often subject to human disagreement. For this reason, we use only the top and bottom quartiles of comments when training our model.

We now consider strategies for identifying dogmatism based on prior work in psychology. We start with the Linguistic Inquiry and Word Count (LIWC), a lexicon popular in the social sciences BIBREF6. LIWC provides human-validated lists of words that correspond to high-level psychological categories such as certainty or perception. In other studies, LIWC has uncovered linguistic signals relating to politeness BIBREF2, deception BIBREF7, or authority in texts BIBREF8. Here, we examine how dogmatism relates to 17 of LIWC's categories (Table 1).

To compute the relationships between LIWC categories and dogmatism, we first count the relevant category terms that appear in each annotated Reddit comment, normalized by its word count. We then calculate odds ratios on the aggregate counts of each LIWC category over the top and bottom quartiles of dogmatic comments. As we have discussed, using the top and bottom quartiles of comments provides a more reliable signal of dogmatism. We check for significant differences in categories between dogmatic and non-dogmatic comments using the Mann-Whitney U test and apply Holm's method for correction. All odds we report in this section are significant after correction.

Dogmatic statements tend to express a high degree of certainty BIBREF1. Here we consider LIWC categories that express certainty both positively (certainty) and negatively (tentativeness). For example, the word “always” is certain, while “possibly” is tentative. Conforming to existing theory, certainty is more associated with dogmatic comments (1.52 odds), while tentativeness is more associated with the absence of dogmatism (0.88 odds).

Terms used to verbalize cognition can act as a hedge that often characterizes non-dogmatic language. LIWC's insight category captures this effect through words such as “think,” “know,” or “believe.” These words add nuance to a statement BIBREF9, signaling it is the product of someone's mind (“I think you should give this paper a good review”) and not meant to be interpreted as an objective truth. Along these lines, we find the use of terms in the insight category is associated with non-dogmatic comments (0.83 odds).

Sensory language, with its focus on description and detail, often signals a lack of any kind of opinion, dogmatic or otherwise. LIWC's perception category captures this idea through words associated with hearing, feeling, or seeing. For example, these words might occur when recounting a personal experience (“I saw his incoming fist”), which even if emotionally charged or negative, is less likely to be dogmatic. We find perception is associated with non-dogmatic comments at 0.77 odds.

Drawing comparisons or qualifying something as relative to something else conveys a nuance that is absent from traditionally dogmatic language.
The LIWC categories comparison and relativity capture these effects through comparison words such as “than” or “as” and qualifying words such as “during” or “when.” For example, the statement “I hate politicians” is more dogmatic than “I hate politicians when they can't get anything done.” Relativity is associated with non-dogmatic comments at 0.80 odds, but comparison does not reach significance.

Pronouns can be surprisingly revealing indicators of language: for example, signaling one's gender or hierarchical status in a conversation BIBREF10. We find first person singular pronouns are a useful negative signal for dogmatism (0.46 odds), while second person singular pronouns (2.18 odds) and third person plural pronouns (1.63 odds) are useful positive signals. Looking across the corpus, we see I often used with a hedge (“I think” or “I know”), while you and they tend to characterize the beliefs of others, often in a strongly opinionated way (“you are a moron” or “they are keeping us down”). Other pronoun types do not show significant relationships.

Like pronouns, verb tense can reveal subtle signals in language use, such as the tendency of medical inpatients to focus on the past BIBREF11. On social media, comments written in the present tense are more likely to be oriented towards a user's current interaction (“this is all so stupid”), creating opportunities to signal dogmatism. Alternatively, comments in the past tense are more likely to refer to outside experiences (“it was an awful party”), speaking less to a user's stance towards an ongoing discussion. We find present tense is a positive signal for dogmatism (1.11 odds) and past tense is a negative signal (0.69 odds).

Dogmatic language can be either positively or negatively charged in sentiment: for example, consider the positive statement “Trump is the SAVIOR of this country!!!” or the negative statement “Are you REALLY that stupid?? Education is the only way out of this horrible mess. It's hard to imagine how anyone could be so deluded.” In diverse communities, where people hold many different kinds of opinions, dogmatic opinions will often tend to come into conflict with one another BIBREF12, producing a greater likelihood of negative sentiment. Perhaps for this reason, negative emotion (2.09 odds) and swearing (3.80 odds) are useful positive signals of dogmatism, while positive emotion shows no significant relationship.

Finally, we find that interrogative language (1.12 odds) and negation (1.35 odds) are two additional positive signals of dogmatism. While interrogative words like “how” or “what” have many benign uses, they disproportionately appear in our data in the form of rhetorical or emotionally charged questions, such as “how can anyone be that dumb?”

Many of these linguistic signals are correlated with each other, suggesting that dogmatism is the cumulative effect of many component relationships. For example, consider the relatively non-dogmatic statement: “I think the reviewers are wrong in this instance.” Removing signals of insight, we have: “the reviewers are wrong in this instance,” which is slightly more dogmatic. Then removing relativity, we have: “the reviewers are wrong.” And finally, adding certainty, we have a dogmatic statement: “the reviewers are always wrong.”

We now show how we can use the linguistic feature sets we have described to build a classifier that predicts dogmatism in comments.
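As a rough illustration, a minimal sketch of the bag-of-words variant of such a classifier (scikit-learn; this is not the authors' code, and the regularization constant here is only indicative, since the exact setup is described below):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def train_bow_classifier(train_texts, train_labels, test_texts, test_labels):
    """TF-IDF unigram features + L2-regularized logistic regression.
    Note: scikit-learn's C is the inverse of the regularization strength, so
    the paper's "L2 penalty of 1.5" does not map directly onto C=1.5."""
    vectorizer = TfidfVectorizer(ngram_range=(1, 1))
    X_train = vectorizer.fit_transform(train_texts)
    X_test = vectorizer.transform(test_texts)

    model = LogisticRegression(penalty="l2", C=1.0, max_iter=1000)
    model.fit(X_train, train_labels)  # labels: 1 = dogmatic, 0 = non-dogmatic

    scores = model.predict_proba(X_test)[:, 1]
    return model, roc_auc_score(test_labels, scores)
```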
We now show how we can use the linguistic feature sets we have described to build a classifier that predicts dogmatism in comments. A predictive model further validates our feature sets, and also allows us to analyze dogmatism in millions of other Reddit comments in a scalable way, with multiple uses in ongoing, downstream analyses.

Prediction task. Our goal is (1) to understand how well we can use the strategies in Section 3 to predict dogmatism, and (2) to test the domain-independence of these strategies. First, we test the performance of our model under cross-validation within the Reddit comment dataset. We then evaluate the Reddit-based model on a held-out corpus of New York Times comments annotated using the technique in Section 2. We did not refer to this second dataset during feature construction.

For classification, we consider two classes of comments: dogmatic and non-dogmatic. As in the prior analysis, we draw these comments from the top and bottom quartiles of the dogmatism distribution. This means the classes are balanced, with 2,500 total comments in the Reddit training data and 500 total comments in the New York Times testing data.

We compare the predictions of logistic regression models based on unigram bag-of-words features (BOW), sentiment signals (SENT), the linguistic features from our earlier analyses (LING), and combinations of these features. BOW and SENT provide baselines for the task. We compute BOW features using term frequency-inverse document frequency (TF-IDF) and category-based features by normalizing counts for each category by the number of words in each document. The BOW classifiers are trained with regularization (L2 penalties of 1.5).

Classification results. We present classification results in Table 2 . BOW shows an AUC of 0.853 within Reddit and 0.776 on the held-out New York Times comments. The linguistic features boost classification results within Reddit (0.881) and on the held-out New York Times comments (0.791). While linguistic signals by themselves provide strong predictive power (0.801 AUC within domain), sentiment signals are much less predictive.

These results suggest that linguistic features inspired by prior efforts in psychology are useful for predicting dogmatism in practice and generalize across new domains.
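A minimal sketch of the BOW+LING setup described above is given below; it is an approximation rather than the code used in this work. `ling_features` is a hypothetical function returning the normalized category counts per comment, and the paper's "L2 penalty of 1.5" does not map one-to-one onto sklearn's inverse-strength parameter C, so C is left at a placeholder value.

```python
from scipy.sparse import hstack, csr_matrix
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def train_and_evaluate(train_texts, train_labels, test_texts, test_labels, ling_features):
    """ling_features(texts): hypothetical helper returning a 2-D array of
    normalized category counts (the LING features) for a list of comments."""
    vec = TfidfVectorizer()                                   # unigram TF-IDF bag of words
    X_train = hstack([vec.fit_transform(train_texts), csr_matrix(ling_features(train_texts))])
    X_test = hstack([vec.transform(test_texts), csr_matrix(ling_features(test_texts))])
    # L2-regularized logistic regression; sklearn's C is the inverse of the penalty strength
    clf = LogisticRegression(penalty="l2", C=1.0, max_iter=1000)
    clf.fit(X_train, train_labels)
    scores = clf.predict_proba(X_test)[:, 1]                  # probability of the dogmatic class
    return roc_auc_score(test_labels, scores)

# Cross-domain check: train on Reddit, evaluate on the held-out New York Times comments, e.g.
# auc_nyt = train_and_evaluate(reddit_texts, reddit_labels, nyt_texts, nyt_labels, ling_features)
```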
We now apply our dogmatism classifier to a larger dataset of posts, examining how dogmatic language shapes the Reddit community. Concretely, we apply the BOW+LING model trained on the full Reddit dataset to millions of new unannotated posts, labeling these posts with a probability of dogmatism according to the classifier (0=non-dogmatic, 1=dogmatic). We then use these dogmatism annotations to address four research questions.

A natural starting point for analyzing dogmatism on Reddit is to examine how it characterizes the site's sub-communities. For example, we might expect to see that subreddits oriented around topics such as abortion or climate change are more dogmatic, and subreddits about cooking are less so.

To answer this question, we randomly sample 1.6 million posts from the entire Reddit community between 2007 and 2015. We then annotate each of these posts with dogmatism using our classifier, and compute the average dogmatism level for each subreddit in the sample with at least 100 posts.

We present the results of this analysis in Table 3 . The subreddits with the highest levels of dogmatism tend to be oriented around politics and religion (DebateAChristian or ukpolitics), while those with the lowest levels tend to focus on hobbies (photography or homebrewing). The subreddit with the highest average dogmatism level, cringepics, is a place to make fun of socially awkward messages, often from would-be romantic partners. Dogmatism here tends to take the form of “how could someone be that stupid” and is directed at the subject of the post, as opposed to other members of the community.

Similarly, SubredditDrama is a community where people come to talk about fights on the internet or social media. These fights are often then extended in discussion, for example: “If the best you can come up with is that something you did was legal, it's probably time to own up to being an ass.” The presence of this subreddit in our analysis provides a further sanity check that our model is capturing a robust signal of dogmatism.

Dogmatism is widely considered to be a domain-specific attitude (for example, oriented towards religion or politics) as opposed to a deeper personality trait BIBREF1 . Here we use Reddit as a lens to examine this idea more closely. Are users who are dogmatic about one topic likely to be dogmatic about others? Do clusters of dogmatism exist around particular topics? To find out, we examine the relationships between subreddits over which individual users are dogmatic. For example, if many users often post dogmatic comments on both the politics and Christianity subreddits, but less often on worldnews, that would suggest politics and Christianity are linked by a shared boost in the likelihood that individual users are dogmatic in both.

We sample 1000 Reddit users who posted at least once a year between 2007 and 2015 to construct a corpus of 10 million posts that constitute their entire post history. We then annotate these posts using the classifier and compute the average dogmatism score per subreddit per user. For example, one user might have an average dogmatism level of 0.55 for the politics subreddit and 0.45 for the economics subreddit. Most users do not post in all subreddits, so we track only subreddits for which a user had posted at least 10 times. Any subreddits with an average dogmatism score higher than 0.50 we consider to be a user's dogmatic subreddits. We then count all pairs of these dogmatic subreddits. For example, 45 users have politics and technology among their dogmatic subreddits, so we consider politics and technology as linked 45 times. We compute the mutual information BIBREF13 between these links, which gives us a measure of the subreddits that are most related through dogmatism.

We present the results of this analysis in Table 4 , choosing clusters that represent a diverse set of topics. For example, Libertarianism is linked through dogmatism to other political communities like Anarcho_Capitalism, ronpaul, or ukpolitics, as well as other topical subreddits like guns or economy. Similarly, people who are dogmatic in the business subreddit also tend to be dogmatic in subreddits for Bitcoin, socialism, and technology. Notably, when we apply the same mutual information analysis to links defined by subreddits posted in by the same user, we see dramatically different results. For example, the subreddits most linked to science through user posts are UpliftingNews, photoshopbattles, firstworldanarchist, and millionairemakers.
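The linking step described above can be sketched as follows. The paper reports mutual information between links; the sketch below ranks subreddit pairs by pointwise mutual information over co-occurrence counts, which is one reasonable reading of that analysis rather than the authors' exact computation.

```python
import math
from collections import Counter
from itertools import combinations

def ranked_dogmatism_links(user_dogmatic_subreddits):
    """user_dogmatic_subreddits: one set per user, holding the subreddits where that
    user's average classifier score exceeds 0.50 (and where they posted >= 10 times)."""
    pair_counts, single_counts = Counter(), Counter()
    for subs in user_dogmatic_subreddits:
        single_counts.update(subs)
        pair_counts.update(combinations(sorted(subs), 2))   # e.g. (politics, technology)
    n_pairs = sum(pair_counts.values())
    n_singles = sum(single_counts.values())
    pmi = {}
    for (a, b), n_ab in pair_counts.items():
        p_ab = n_ab / n_pairs
        p_a = single_counts[a] / n_singles
        p_b = single_counts[b] / n_singles
        pmi[(a, b)] = math.log(p_ab / (p_a * p_b))
    # Highest-scoring pairs are the subreddits most related through dogmatism
    return sorted(pmi.items(), key=lambda kv: kv[1], reverse=True)
```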
Finally, we see less obvious connections between subreddits that suggest some people may be dogmatic by nature. For example, users who are dogmatic on politics are also disproportionately dogmatic on unrelated subreddits such as science ( $p<0.001$ ), technology ( $p<0.001$ ), IAmA ( $p<0.001$ ), and AskReddit ( $p<0.05$ ), with p-values computed under a binomial test.

We have shown dogmatism is captured by many linguistic features, but can we discover other high-level user behaviors that are similarly predictive?

To find out, we compute metrics of user behavior using the data sample of 1000 users and 10 million posts described in Section 5.2. Specifically, we calculate (1) activity: a user's total number of posts, (2) breadth: the number of subreddits a user has posted in, (3) focus: the proportion of a user's posts that appear in the subreddit where they are most active, and (4) engagement: the average number of posts a user contributes to each discussion they engage in. We then fit these behavioral features to a linear regression model where we predict each user's average dogmatism level. Positive coefficients in this model are positively predictive of dogmatism, while negative coefficients are negatively predictive. We find this model is significantly predictive of dogmatism ( $R^2=0.1$ , $p<0.001$ ), with all features reaching statistical significance ( $p<0.001$ ). Activity and focus are positively associated with dogmatism, while breadth and engagement are negatively associated (Table 5 ). Together, these results suggest dogmatic users tend to post frequently and in specific communities, but are not as inclined to continue engaging with a discussion once it has begun.

How does interacting with a dogmatic comment impact a conversation? Are users able to shrug it off? Or do otherwise non-dogmatic users become more dogmatic themselves?

To answer this question, we sample 600,000 conversation triples from Reddit. These conversations consist of two people (A and B) talking, with the structure: A1 $\rightarrow $ B $\rightarrow $ A2. This allows us to measure the impact of B's dogmatism on A's response, while also controlling for the dogmatism level initially set by A. Concretely, we model the impact of dogmatism on these conversations through a linear regression. This model takes two features, the dogmatism levels of A1 and B, and predicts the dogmatism of A2's response. If B's dogmatism has no effect on A's response, the coefficient that corresponds to B will not be significant in the model. Alternatively, if B's dogmatism does have some effect, it will be captured by the model's coefficient.

We find the coefficient of the B feature in the model is positively associated with dogmatism ( $p<0.001$ ). In other words, engagement with a dogmatic comment tends to make a user more dogmatic themselves. This effect holds when we run the same model on data subsets consisting only of dogmatic or non-dogmatic users, and also when we conservatively remove all words used by B from A's response (i.e., controlling for quoting effects).
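The two regressions described above could be set up as follows; the paper does not specify its implementation, so this ordinary-least-squares sketch with illustrative column names is an assumption.

```python
import pandas as pd
import statsmodels.api as sm

def behavior_regression(users: pd.DataFrame):
    """users has columns: activity, breadth, focus, engagement, avg_dogmatism."""
    X = sm.add_constant(users[["activity", "breadth", "focus", "engagement"]])
    return sm.OLS(users["avg_dogmatism"], X).fit()            # inspect .params and .pvalues

def conversation_regression(triples: pd.DataFrame):
    """triples has columns: dog_a1, dog_b, dog_a2 for A1 -> B -> A2 exchanges."""
    X = sm.add_constant(triples[["dog_a1", "dog_b"]])
    model = sm.OLS(triples["dog_a2"], X).fit()
    # The dog_b coefficient captures B's effect on A's reply, controlling for A1
    return model.params["dog_b"], model.pvalues["dog_b"]
```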
In contrast to the computational models we have presented, dogmatism is usually measured in psychology through survey scales, in which study participants answer questions designed to reveal underlying personality attributes BIBREF1 . Over time, these surveys have been updated BIBREF14 and improved to meet standards of psychometric validity BIBREF15 . These surveys are often used to study the relationship between dogmatism and other psychological phenomena. For example, dogmatic people show an increased tendency for confrontation BIBREF16 or moral conviction and religiosity BIBREF17 , and a lower likelihood of cognitive flexibility BIBREF18 , even among stereotypically non-dogmatic groups like atheists BIBREF19 . From a behavioral standpoint, dogmatic people solve problems differently, spending less time framing a problem and expressing more certainty in their solution BIBREF20 . Here we similarly examine how user behaviors on Reddit relate to a language model of dogmatism.

Ertel sought to capture dogmatism linguistically, through a small lexicon of words that correspond with high-level concepts like certainty and compromise. McKenny then used this dictionary to relate dogmatism to argument quality in student essays. Our work expands on this approach, applying supervised models based on a broader set of linguistic categories to identify dogmatism in text.

Other researchers have studied topics similar to dogmatism, such as signals of cognitive style in right-wing political thought BIBREF21 , the language used by trolls on social media BIBREF22 , or what makes for impartial language on Twitter BIBREF23 . A similar flavor of work has examined linguistic models that capture politeness BIBREF2 , deception BIBREF24 , and authority BIBREF8 . We took inspiration from these models when constructing the feature sets in our work.

Finally, while we examine what makes an opinion dogmatic, other work has pushed further into the structure of arguments, for example classifying their justifications BIBREF25 , or what makes an argument likely to win BIBREF26 . Our model may allow future researchers to probe these questions more deeply.

We have constructed the first corpus of social media posts annotated with dogmatism scores, allowing us to explore linguistic features of dogmatism and build a predictive model that analyzes new content. We apply this model to Reddit, where we discover behavioral predictors of dogmatism and topical patterns in the comments of dogmatic users.

Could we use this computational model to help users shed their dogmatic beliefs? Looking forward, our work makes possible new avenues for encouraging pro-social behavior in online communities.
|
[
"what are the topics pulled from Reddit?",
"What predictive model do they build?"
] |
[
[
"",
"training data has posts from politics, business, science and other popular topics; the trained model is applied to millions of unannotated posts on all of Reddit"
],
[
"",
""
]
] |
There has been significant progress on Named Entity Recognition (NER) in recent years using models based on machine learning algorithms BIBREF0 , BIBREF1 , BIBREF2 . As with other Natural Language Processing (NLP) tasks, building NER systems typically requires a massive amount of labeled training data which are annotated by experts. In real applications, we often need to consider new types of entities in new domains where we do not have existing annotated data. For such new types of entities, however, it is very hard to find experts to annotate the data within short time limits, and hiring experts is costly and non-scalable, both in terms of time and money.

In order to quickly obtain new training data, we can use crowdsourcing as an alternative way to obtain labeled data at lower cost and in a short time. But in exchange, crowd annotations from non-experts may be of lower quality than those from experts. Building a powerful NER system on such low-quality annotated data is one of the biggest challenges. Although we can obtain high quality annotations for each input sentence by majority voting, it can be a waste of human labor to achieve such a goal, especially for some ambiguous sentences which may require a number of annotations to reach an agreement. Thus most existing work directly builds models on crowd annotations, trying to model the differences among annotators, for example, assuming that some annotators are more trustworthy than others BIBREF3 , BIBREF4 .

Here we focus mainly on Chinese NER, which is more difficult than NER for languages such as English due to the lack of morphological cues such as capitalization and, in particular, the uncertainty in word segmentation. Chinese NE taggers trained on the news domain often perform poorly in other domains. Although we can alleviate the problem by using character-level tagging to resolve the problem of poor word segmentation performance BIBREF5 , a large gap still exists when the target domain changes, especially for social media text. Thus, in order to get a good tagger for new domains and for new entity types, we require large amounts of labeled data. Therefore, crowdsourcing is a reasonable solution for these situations.

In this paper, we propose an approach to training a Chinese NER system on crowd-annotated data. Our goal is to extract additional annotator-independent features by adversarial training, alleviating the annotation noise from non-experts. The idea of adversarial training in neural networks has been used successfully in several NLP tasks, such as cross-lingual POS tagging BIBREF6 and cross-domain POS tagging BIBREF7 . These works use it to reduce the negative influence of input divergences among different domains or languages, while we use adversarial training to reduce the negative influence of different crowd annotators. To the best of our knowledge, we are the first to apply adversarial training to crowd annotation learning.

In the learning framework, we perform adversarial training between the basic NER model and an additional worker discriminator. We have a common Bi-LSTM for representing annotator-generic information and a private Bi-LSTM for representing annotator-specific information. We build another label Bi-LSTM over the crowd-annotated NE label sequence, which reflects the understanding of the crowd annotators, who learn entity definitions by reading the annotation guidebook. The common and private Bi-LSTMs are used for NER, while the common and label Bi-LSTMs are used as inputs for the worker discriminator.
The parameters of the common Bi-LSTM are learned by adversarial training, maximizing the worker discriminator loss and meanwhile minimizing the NER loss. Thus the resulting features of the common Bi-LSTM are worker invariant and NER sensitive.For evaluation, we create two Chinese NER datasets in two domains: dialog and e-commerce. We require the crowd annotators to label the types of entities, including person, song, brand, product, and so on. Identifying these entities is useful for chatbot and e-commerce platforms BIBREF8 . Then we conduct experiments on the newly created datasets to verify the effectiveness of the proposed adversarial neural network model. The results show that our system outperforms very strong baseline systems. In summary, we make the following contributions:Our work is related to three lines of research: Sequence labeling, Adversarial training, and Crowdsourcing.Sequence labeling. NER is widely treated as a sequence labeling problem, by assigning a unique label over each sentential word BIBREF9 . Early studies on sequence labeling often use the models of HMM, MEMM, and CRF BIBREF10 based on manually-crafted discrete features, which can suffer the feature sparsity problem and require heavy feature engineering. Recently, neural network models have been successfully applied to sequence labeling BIBREF1 , BIBREF11 , BIBREF2 . Among these work, the model which uses Bi-LSTM for feature extraction and CRF for decoding has achieved state-of-the-art performances BIBREF11 , BIBREF2 , which is exploited as the baseline model in our work.Adversarial Training. Adversarial Networks have achieved great success in computer vision such as image generation BIBREF12 , BIBREF13 . In the NLP community, the method is mainly exploited under the settings of domain adaption BIBREF14 , BIBREF7 , cross-lingual BIBREF15 , BIBREF6 and multi-task learning BIBREF16 , BIBREF17 . All these settings involve the feature divergences between the training and test examples, and aim to learn invariant features across the divergences by an additional adversarial discriminator, such as domain discriminator. Our work is similar to these work but is applies on crowdsourcing learning, aiming to find invariant features among different crowdsourcing workers.Crowdsourcing. Most NLP tasks require a massive amount of labeled training data which are annotated by experts. However, hiring experts is costly and non-scalable, both in terms of time and money. Instead, crowdsourcing is another solution to obtain labeled data at a lower cost but with relative lower quality than those from experts. BIBREF18 snow2008cheap collected labeled results for several NLP tasks from Amazon Mechanical Turk and demonstrated that non-experts annotations were quite useful for training new systems. In recent years, a series of work have focused on how to use crowdsourcing data efficiently in tasks such as classification BIBREF19 , BIBREF20 , and compare quality of crowd and expert labels BIBREF21 .In sequence labeling tasks, BIBREF22 dredze2009sequence viewed this task as a multi-label problem while BIBREF3 rodrigues2014sequence took workers identities into account by assuming that each sentential word was tagged correctly by one of the crowdsourcing workers and proposed a CRF-based model with multiple annotators. BIBREF4 nguyen2017aggregating introduced a crowd representation in which the crowd vectors were added into the LSTM-CRF model at train time, but ignored them at test time. 
In this paper, we apply adversarial training on crowd annotations on Chinese NER in new domains, and achieve better performances than previous studies on crowdsourcing learning.We use a neural CRF model as the baseline system BIBREF9 , treating NER as a sequence labeling problem over Chinese characters, which has achieved state-of-the-art performances BIBREF5 . To this end, we explore the BIEO schema to convert NER into sequence labeling, following BIBREF2 lample-EtAl:2016:N16-1, where sentential character is assigned with one unique tag. Concretely, we tag the non-entity character by label “O”, the beginning character of an entity by “B-XX”, the ending character of an entity by “E-XX” and the other character of an entity by “I-XX”, where “XX” denotes the entity type.We build high-level neural features from the input character sequence by a bi-directional LSTM BIBREF2 . The resulting features are combined and then are fed into an output CRF layer for decoding. In summary, the baseline model has three main components. First, we make vector representations for sentential characters $\mathbf {x}_1\mathbf {x}_2\cdots \mathbf {x}_n$ , transforming the discrete inputs into low-dimensional neural inputs. Second, feature extraction is performed to obtain high-level features $\mathbf {h}_1^{\text{ner}}\mathbf {h}_2^{\text{ner}}\cdots \mathbf {h}_n^{\text{ner}}$ , by using a bi-directional LSTM (Bi-LSTM) structure together with a linear transformation over $\mathbf {x}_1\mathbf {x}_2\cdots \mathbf {x}_n$ . Third, we apply a CRF tagging module over $\mathbf {h}_1^{\text{ner}}\mathbf {h}_2^{\text{ner}}\cdots \mathbf {h}_n^{\text{ner}}$ , obtaining the final output NE labels. The overall framework of the baseline model is shown by the right part of Figure 1 .To represent Chinese characters, we simply exploit a neural embedding layer to map discrete characters into the low-dimensional vector representations. The goal is achieved by a looking-up table $\mathbf {E}^W$ , which is a model parameter and will be fine-tuned during training. The looking-up table can be initialized either by random or by using a pretrained embeddings from large scale raw corpus. For a given Chinese character sequence $c_1c_2\cdots c_n$ , we obtain the vector representation of each sentential character by: $ \mathbf {x}_t = \text{look-up}(c_t, \mathbf {E}^W), \text{~~~} t \in [1, n]$ .Based on the vector sequence $\mathbf {x}_1\mathbf {x}_2\cdots \mathbf {x}_n$ , we extract higher-level features $\mathbf {h}_1^{\text{ner}}\mathbf {h}_2^{\text{ner}}\cdots \mathbf {h}_n^{\text{ner}}$ by using a bidirectional LSTM module and a simple feed-forward neural layer, which are then used for CRF tagging at the next step.LSTM is a type of recurrent neural network (RNN), which is designed for solving the exploding and diminishing gradients of basic RNNs BIBREF23 . It has been widely used in a number of NLP tasks, including POS-tagging BIBREF11 , BIBREF24 , parsing BIBREF25 and machine translation BIBREF26 , because of its strong capabilities of modeling natural language sentences.By traversing $\mathbf {x}_1\mathbf {x}_2\cdots \mathbf {x}_n$ by order and reversely, we obtain the output features $\mathbf {h}_1^{\text{private}}\mathbf {h}_2^{\text{private}}\cdots \mathbf {h}_n^{\text{private}}$ of the bi-LSTM, where $\mathbf {h}_t^{\text{private}} = \overrightarrow{\mathbf {h}}_t \oplus \overleftarrow{\mathbf {h}}_t $ . 
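A minimal PyTorch sketch of this feature extractor, the character looking-up table followed by a bidirectional LSTM producing the private features, is given below. It is illustrative rather than the authors' implementation; the dimensions are defaults consistent with the sizes reported later in the experimental setup.

```python
import torch
import torch.nn as nn

class CharBiLSTM(nn.Module):
    def __init__(self, vocab_size, emb_dim=100, hidden_dim=200, pretrained=None):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)       # looking-up table E^W
        if pretrained is not None:
            self.embed.weight.data.copy_(pretrained)         # init from pretrained char embeddings
        self.bilstm = nn.LSTM(emb_dim, hidden_dim // 2,
                              batch_first=True, bidirectional=True)

    def forward(self, char_ids):                             # char_ids: (batch, seq_len)
        x = self.embed(char_ids)                             # x_1 ... x_n
        h, _ = self.bilstm(x)                                 # forward and backward states concatenated
        return h                                              # h^private: (batch, seq_len, hidden_dim)
```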
Here we refer this Bi-LSTM as private in order to differentiate it with the common Bi-LSTM over the same character inputs which will be introduced in the next section.Further we make an integration of the output vectors of bi-directional LSTM by a linear feed-forward neural layer, resulting in the features $\mathbf {h}_1^{\text{ner}}\mathbf {h}_2^{\text{ner}}\cdots \mathbf {h}_n^{\text{ner}}$ by equation: $$\mathbf {h}_t^{\text{ner}} = \mathbf {W} \mathbf {h}_t^{\text{private}} + \mathbf {b},$$ (Eq. 6) where $\mathbf {W}$ and $\mathbf {b}$ are both model parameters.Finally we feed the resulting features $\mathbf {h}_t^{\text{ner}}, t\in [1, n]$ into a CRF layer directly for NER decoding. CRF tagging is one globally normalized model, aiming to find the best output sequence considering the dependencies between successive labels. In the sequence labeling setting for NER, the output label of one position has a strong dependency on the label of the previous position. For example, the label before “I-XX” must be either “B-XX” or “I-XX”, where “XX” should be exactly the same.CRF involves two parts for prediction. First we should compute the scores for each label based $\mathbf {h}_t^{\text{ner}}$ , resulting in $\mathbf {o}_t^{\text{ner}}$ , whose dimension is the number of output labels. The other part is a transition matrix $\mathbf {T}$ which defines the scores of two successive labels. $\mathbf {T}$ is also a model parameter. Based on $\mathbf {o}_t^{\text{ner}}$ and $\mathbf {T}$ , we use the Viterbi algorithm to find the best-scoring label sequence.We can formalize the CRF tagging process as follows: $$\begin{split}
& \mathbf {o}_t^{\text{ner}} = \mathbf {W}^{\text{ner}} \mathbf {h}_t^{\text{ner}}, \text{~~~~} t \in [1,n] \\
& \text{score}(\mathbf {X}, \mathbf {y}) = \sum _{t = 1}^{n}(\mathbf {o}_{t,y_t} + T_{y_{t-1},y_t}) \\
& \mathbf {y}^{\text{ner}} = \mathop {arg~max}_{\mathbf {y}}\big (\text{score}(\mathbf {X}, \mathbf {y})\big ), \\
\end{split}$$ (Eq. 8) where $\text{score}(\cdot )$ is the scoring function for a given output label sequence $\mathbf {y} = y_1y_2 \cdots y_n$ based on input $\mathbf {X}$ , $\mathbf {y}^{\text{ner}}$ is the resulting label sequence, $\mathbf {W}^{\text{ner}}$ is a model parameter.To train model parameters, we exploit a negative log-likelihood objective as the loss function. We apply softmax over all candidate output label sequences, thus the probability of the crowd-annotated label sequence is computed by: $$p(\mathbf {\bar{y}}|\mathbf {X}) = \frac{\exp \big (\text{score}(\mathbf {X}, \mathbf {\bar{y}})\big )}{\sum _{\mathbf {y} \in \mathbf {Y}_{\mathbf {X}}} \exp \big (\text{score}(\mathbf {X}, \mathbf {y})\big )},$$ (Eq. 10) where $\mathbf {\bar{y}}$ is the crowd-annotated label sequences and $\mathbf {Y}_{\mathbf {X}}$ is all candidate label sequence of input $\mathbf {X}$ .Based on the above formula, the loss function of our baseline model is: $$\text{loss}(\Theta , \mathbf {X}, \mathbf {\bar{y}}) = -\log p(\mathbf {\bar{y}}|\mathbf {X}),$$ (Eq. 11) where $\Theta $ is the set of all model parameters. We use standard back-propagation method to minimize the loss function of the baseline CRF model.Adversarial learning has been an effective mechanism to resolve the problem of the input features between the training and test examples having large divergences BIBREF27 , BIBREF13 . It has been successfully applied on domain adaption BIBREF7 , cross-lingual learning BIBREF15 and multi-task learning BIBREF17 . All settings involve feature shifting between the training and testing.In this paper, our setting is different. We are using the annotations from non-experts, which are noise and can influence the final performances if they are not properly processed. Directly learning based on the resulting corpus may adapt the neural feature extraction into the biased annotations. In this work, we assume that individual workers have their own guidelines in mind after short training. For example, a perfect worker can annotate highly consistently with an expert, while common crowdsourcing workers may be confused and have different understandings on certain contexts. Based on the assumption, we make an adaption for the original adversarial neural network to our setting.Our adaption is very simple. Briefly speaking, the original adversarial learning adds an additional discriminator to classify the type of source inputs, for example, the domain category in the domain adaption setting, while we add a discriminator to classify the annotation workers. Solely the features from the input sentence is not enough for worker classification. The annotation result of the worker is also required. Thus the inputs of our discriminator are different. Here we exploit both the source sentences and the crowd-annotated NE labels as basic inputs for the worker discrimination.In the following, we describe the proposed adversarial learning module, including both the submodels and the training method. 
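The CRF scoring and loss of Eqs. (8)-(11) can be sketched for a single sentence as follows; this simplified version uses plain tensors and is not the optimized CRF layer used in practice.

```python
import torch

def crf_score(emissions, transitions, tags):
    """emissions: (n, num_labels) label scores o_t; transitions: (num_labels, num_labels)
    matrix T; tags: (n,) label indices. Returns score(X, y)."""
    score = emissions[0, tags[0]]
    for t in range(1, len(tags)):
        score = score + emissions[t, tags[t]] + transitions[tags[t - 1], tags[t]]
    return score

def crf_log_partition(emissions, transitions):
    """Log of the normalizer over all candidate label sequences (forward algorithm)."""
    alpha = emissions[0]                                      # (num_labels,)
    for t in range(1, emissions.size(0)):
        alpha = torch.logsumexp(alpha.unsqueeze(1) + transitions, dim=0) + emissions[t]
    return torch.logsumexp(alpha, dim=0)

def crf_nll(emissions, transitions, tags):
    # loss = -log p(y-bar | X)
    return crf_log_partition(emissions, transitions) - crf_score(emissions, transitions, tags)
```

Viterbi decoding follows the same recursion with the logsumexp replaced by a max (plus back-pointers) to recover the best-scoring label sequence.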
As shown by the left part of Figure 1 , the submodel consists of four parts: (1) a common Bi-LSTM over input characters; (2) an additional Bi-LSTM to encode crowd-annotated NE label sequence; (3) a convolutional neural network (CNN) to extract features for worker discriminator; (4) output and prediction.To build the adversarial part, first we create a new bi-directional LSTM, named by the common Bi-LSTM: $$\mathbf {h}_1^{\text{\tiny common}} \mathbf {h}_2^{\text{\tiny common}} \cdots \mathbf {h}_n^{\text{\tiny common}} = \text{Bi-LSTM}(\mathbf {x}_1\mathbf {x}_2\cdots \mathbf {x}_n).$$ (Eq. 13) As shown in Figure 1 , this Bi-LSTM is constructed over the same input character representations of the private Bi-LSTM, in order to extract worker independent features.The resulting features of the common Bi-LSTM are used for both NER and the worker discriminator, different with the features of private Bi-LSTM which are used for NER only. As shown in Figure 1 , we concatenate the outputs of the common and private Bi-LSTMs together, and then feed the results into the feed-forward combination layer of the NER part. Thus Formula 6 can be rewritten as: $$\mathbf {h}_t^{\text{ner}} = \mathbf {W} (\mathbf {h}_t^{\text{common}} \oplus \mathbf {h}_t^{\text{private}}) + \mathbf {b},$$ (Eq. 14) where $\mathbf {W}$ is wider than the original combination because the newly-added $\mathbf {h}_t^{\text{common}}$ .Noticeably, although the resulting common features are used for the worker discriminator, they actually have no capability to distinguish the workers. Because this part is exploited to maximize the loss of the worker discriminator, it will be interpreted in the later training subsection. These features are invariant among different workers, thus they can have less noises for NER. This is the goal of adversarial learning, and we hope the NER being able to find useful features from these worker independent features.In order to incorporate the annotated NE labels to predict the exact worker, we build another bi-directional LSTM (named by label Bi-LSTM) based on the crowd-annotated NE label sequence. This Bi-LSTM is used for worker discriminator only. During the decoding of the testing phase, we will never have this Bi-LSTM, because the worker discriminator is no longer required.Assuming the crowd-annotated NE label sequence annotated by one worker is $\mathbf {\bar{y}} = \bar{y}_1\bar{y}_2 \cdots \bar{y}_n$ , we exploit a looking-up table $\mathbf {E}^{L}$ to obtain the corresponding sequence of their vector representations $\mathbf {x^{\prime }}_1\mathbf {x^{\prime }}_2\cdots \mathbf {x^{\prime }}_n$ , similar to the method that maps characters into their neural representations. Concretely, for one NE label $\bar{y}_t$ ( $t \in [1, n]$ ), we obtain its neural vector by: $\mathbf {x^{\prime }}_t = \text{look-up}(\bar{y}_t, \mathbf {E}^L)$ .Next step we apply bi-directional LSTM over the sequence $\mathbf {x^{\prime }}_1\mathbf {x^{\prime }}_2\cdots \mathbf {x^{\prime }}_n$ , which can be formalized as: $$\mathbf {h}_1^{\text{label}} \mathbf {h}_2^{\text{label}} \cdots \mathbf {h}_n^{\text{label}} = \text{Bi-LSTM}(\mathbf {x^{\prime }}_1\mathbf {x^{\prime }}_2\cdots \mathbf {x^{\prime }}_n).$$ (Eq. 
16) The resulting feature sequence is concatenated with the outputs of the common Bi-LSTM, and further be used for worker classification.Following, we add a convolutional neural network (CNN) module based on the concatenated outputs of the common Bi-LSTM and the label Bi-LSTM, to produce the final features for worker discriminator. A convolutional operator with window size 5 is used, and then max pooling strategy is applied over the convolution sequence to obtain the final fixed-dimensional feature vector. The whole process can be described by the following equations: $$\begin{split}
&\mathbf {h}_t^{\text{worker}} = \mathbf {h}_t^{\text{common}} \oplus \mathbf {h}_t^{\text{label}} \\
&\mathbf {\tilde{h}}_t^{\text{worker}} = \tanh (\mathbf {W}^{\text{cnn}}[\mathbf {h}_{t-2}^{\text{worker}}, \mathbf {h}_{t-1}^{\text{worker}}, \cdots , \mathbf {h}_{t+2}^{\text{worker}}]) \\
&\mathbf {h}^{\text{worker}} = \text{max-pooling}(\mathbf {\tilde{h}}_1^{\text{worker}}\mathbf {\tilde{h}}_2^{\text{worker}} \cdots \mathbf {\tilde{h}}_n^{\text{worker}}) \\
\end{split}$$ (Eq. 18) where $t \in [1,n]$ and $\mathbf {W}^{\text{cnn}}$ is one model parameter. We exploit zero vector to paddle the out-of-index vectors.After obtaining the final feature vector for the worker discriminator, we use it to compute the output vector, which scores all the annotation workers. The score function is defined by: $$\mathbf {o}^{\text{worker}} = \mathbf {W}^{\text{worker}} \mathbf {h}^{\text{worker}},$$ (Eq. 20) where $\mathbf {W}^{\text{worker}}$ is one model parameter and the output dimension equals the number of total non-expert annotators. The prediction is to find the worker which is responsible for this annotation.The training objective with adversarial neural network is different from the baseline model, as it includes the extra worker discriminator. Thus the new objective includes two parts, one being the negative log-likelihood from NER which is the same as the baseline, and the other being the negative the log-likelihood from the worker discriminator.In order to obtain the negative log-likelihood of the worker discriminator, we use softmax to compute the probability of the actual worker $\bar{z}$ as well, which is defined by: $$p(\bar{z}|\mathbf {X}, \mathbf {\bar{y}}) = \frac{\exp (\mathbf {o}^{\text{worker}}_{\bar{z}})}{\sum _{z} \exp (\mathbf {o}^{\text{worker}}_z)},$$ (Eq. 22) where $z$ should enumerate all workers.Based on the above definition of probability, our new objective is defined as follows: $$\begin{split}
\text{R}(\Theta , \Theta ^{\prime }, \mathbf {X}, \mathbf {\bar{y}}, \bar{z}) &= \text{loss}(\Theta , \mathbf {X}, \mathbf {\bar{y}}) - \text{loss}(\Theta , \Theta ^{\prime }, \mathbf {X}) \\
\text{~~~~~~} &= -\log p(\mathbf {\bar{y}}|\mathbf {X}) + \log p(\bar{z}|\mathbf {X}, \mathbf {\bar{y}}),
\end{split}$$ (Eq. 23) where $\Theta $ is the set of all model parameters related to NER, and $\Theta ^{\prime }$ is the set of the remaining parameters which are only related to the worker discriminator, $\mathbf {X}$ , $\mathbf {\bar{y}}$ and $\bar{z}$ are the input sentence, the crowd-annotated NE labels and the corresponding annotator for this annotation, respectively. It is worth noting that the parameters of the common Bi-LSTM are included in the set of $\Theta $ by definition.In particular, our goal is not to simply minimize the new objective. Actually, we aim for a saddle point, finding the parameters $\Theta $ and $\Theta ^{\prime }$ satisfying the following conditions: $$\begin{split}
\hat{\Theta } &= \mathop {arg~min}_{\Theta }\text{R}(\Theta , \Theta ^{\prime }, \mathbf {X}, \mathbf {\bar{y}}, \bar{z}) \\
\hat{\Theta }^{\prime } &= \mathop {arg~max}_{\Theta ^{\prime }}\text{R}(\hat{\Theta }, \Theta ^{\prime }, \mathbf {X}, \mathbf {\bar{y}}, \bar{z}) \\
\end{split}$$ (Eq. 24) where the first equation aims to find one $\Theta $ that minimizes our new objective $\text{R}(\cdot )$ , and the second equation aims to find one $\Theta ^{\prime }$ maximizing the same objective.Intuitively, the first equation of Formula 24 tries to minimize the NER loss, but at the same time maximize the worker discriminator loss by the shared parameters of the common Bi-LSTM. Thus the resulting features of common Bi-LSTM actually attempt to hurt the worker discriminator, which makes these features worker independent since they are unable to distinguish different workers. The second equation tries to minimize the worker discriminator loss by its own parameter $\Theta ^{\prime }$ .We use the standard back-propagation method to train the model parameters, the same as the baseline model. In order to incorporate the term of the argmax part of Formula 24 , we follow the previous work of adversarial training BIBREF13 , BIBREF15 , BIBREF17 , by introducing a gradient reverse layer between the common Bi-LSTM and the CNN module, whose forward does nothing but the backward simply negates the gradients.With the purpose of obtaining evaluation datasets from crowd annotators, we collect the sentences from two domains: Dialog and E-commerce domain. We hire undergraduate students to annotate the sentences. They are required to identify the predefined types of entities in the sentences. Together with the guideline document, the annotators are educated some tips in fifteen minutes and also provided with 20 exemplifying sentences.Labeled Data: DL-PS. In Dialog domain (DL), we collect raw sentences from a chatbot application. And then we randomly select 20K sentences as our pool and hire 43 students to annotate the sentences. We ask the annotators to label two types of entities: Person-Name and Song-Name. The annotators label the sentences independently. In particular, each sentence is assigned to three annotators for this data. Although the setting can be wasteful of labor, we can use the resulting dataset to test several well-known baselines such as majority voting.After annotation, we remove some illegal sentences reported by the annotators. Finally, we have 16,948 sentences annotated by the students. Table 1 shows the information of annotated data. The average Kappa value among the annotators is 0.6033, indicating that the crowd annotators have moderate agreement on identifying entities on this data.In order to evaluate the system performances, we create a set of corpus with gold annotations. Concretely, we randomly select 1,000 sentences from the final dataset and let two experts generate the gold annotations. Among them, we use 300 sentences as the development set and the remaining 700 as the test set. The rest sentences with only student annotations are used as the training set.Labeled data: EC-MT and EC-UQ. In E-commerce domain (EC), we collect raw sentences from two types of texts: one is titles of merchandise entries (EC-MT) and another is user queries (EC-UQ). The annotators label five types of entities: Brand, Product, Model, Material, and Specification. These five types of entities are very important for E-commerce platform, for example building knowledge graph of merchandises. Five students participate the annotations for this domain since the number of sentences is small. 
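The gradient reversal layer mentioned in the training procedure above amounts to only a few lines; the PyTorch sketch below is illustrative rather than the authors' code. The saddle-point objective of Eqs. (23)-(24) is then typically realized by summing the NER and worker-discriminator negative log-likelihoods and letting the reversal layer flip the discriminator's gradients before they reach the common Bi-LSTM parameters.

```python
import torch

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lambd=1.0):
        ctx.lambd = lambd
        return x.view_as(x)                    # identity in the forward direction

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None  # negate gradients on the way back

def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)

# Usage sketch: the worker discriminator consumes grad_reverse(h_common), so minimizing
#   total_loss = ner_nll + worker_nll
# trains the discriminator normally while pushing the common Bi-LSTM towards
# worker-independent features.
```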
We use the similar strategy as DL-PS to annotate the sentences, except that only two annotators are assigned for each sentence, because we aim to test the system performances under very small duplicated annotations.Finally, we obtain 2,337 sentences for EC-MT and 2,300 for EC-UQ. Table 1 shows the information of annotated results. Similarly, we produce the development and test datasets for system evaluation, by randomly selecting 400 sentences and letting two experts to generate the groundtruth annotations. Among them, we use 100 sentences as the development set and the remaining 300 as the test set. The rest sentences with only crowdsourcing annotations are used as the training set.Unlabeled data. The vector representations of characters are basic inputs of our baseline and proposed models, which are obtained by the looking-up table $\mathbf {E}^W$ . As introduced before, we can use pretrained embeddings from large-scale raw corpus to initialize the table. In order to pretrain the character embeddings, we use one large-scale unlabeled data from the user-generated content in Internet. Totally, we obtain a number of 5M sentences. Finally, we use the tool word2vec to pretrain the character embeddings based on the unlabeled dataset in our experiments.For evaluation, we use the entity-level metrics of Precision (P), Recall (R), and their F1 value in our experiments, treating one tagged entity as correct only when it matches the gold entity exactly.There are several hyper-parameters in the baseline LSTM-CRF and our final models. We set them empirically by the development performances. Concretely, we set the dimension size of the character embeddings by 100, the dimension size of the NE label embeddings by 50, and the dimension sizes of all the other hidden features by 200.We exploit online training with a mini-batch size 128 to learn model parameters. The max-epoch iteration is set by 200, and the best-epoch model is chosen according to the development performances. We use RMSprop BIBREF28 with a learning rate $10^{-3}$ to update model parameters, and use $l_2$ -regularization by a parameter $10^{-5}$ . We adopt the dropout technique to avoid overfitting by a drop value of $0.2$ .The proposed approach (henceforward referred to as “ALCrowd”) is compared with the following systems:CRF: We use the Crfsuite tool to train a model on the crowdsourcing labeled data. As for the feature settings, we use the supervised version of BIBREF0 zhao2008unsupervised.CRF-VT: We use the same settings of the CRF system, except that the training data is the voted version, whose groundtruths are produced by majority voting at the character level for each annotated sentence.CRF-MA: The CRF model proposed by BIBREF3 rodrigues2014sequence, which uses a prior distributation to model multiple crowdsourcing annotators. We use the source code provided by the authors.LSTM-CRF: Our baseline system trained on the crowdsourcing labeled data.LSTM-CRF-VT: Our baseline system trained on the voted corpus, which is the same as CRF-VT.LSTM-Crowd: The LSTM-CRF model with crowd annotation learning proposed by BIBREF4 nguyen2017aggregating. We use the source code provided by the authors.The first three systems are based on the CRF model using traditional handcrafted features, and the last three systems are based on the neural LSTM-CRF model. Among them, CRF-MA, LSTM-Crowd and our system with adversarial learning (ALCrowd) are based on crowd annotation learning that directly trains the model on the crowd-annotations. 
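For reference, the hyper-parameter settings listed above can be collected into a single configuration sketch; the optimizer call shown is one way to realize them in PyTorch and is not taken from the authors' code.

```python
import torch

CONFIG = {
    "char_emb_dim": 100,     # character embedding size
    "label_emb_dim": 50,     # NE-label embedding size (label Bi-LSTM input)
    "hidden_dim": 200,       # all other hidden feature sizes
    "batch_size": 128,
    "max_epochs": 200,
    "learning_rate": 1e-3,   # RMSprop
    "l2_weight": 1e-5,       # l2 regularization
    "dropout": 0.2,
}

def make_optimizer(model):
    return torch.optim.RMSprop(model.parameters(),
                               lr=CONFIG["learning_rate"],
                               weight_decay=CONFIG["l2_weight"])
```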
Five systems, including CRF, CRF-MA, LSTM-CRF, LSTM-Crowd, and ALCrowd, are trained on the original version of labeled data, while CRF-VT and LSTM-CRF-VT are trained on the voted version. Since CRF-VT, CRF-MA and LSTM-CRF-VT all require ground-truth answers for each training sentence, which are difficult to be produced with only two annotations, we do not apply the three models on the two EC datasets.In this section, we show the model performances of our proposed crowdsourcing learning system (ALCrowd), and meanwhile compare it with the other systems mentioned above. Table 2 shows the experimental results on the DL-PS datasets and Table 3 shows the experiment results on the EC-MT and EC-UQ datasets, respectively.The results of CRF and LSTM-CRF mean that the crowd annotation is an alternative solution with low cost for labeling data that could be used for training a NER system even there are some inconsistencies. Compared with CRF, LSTM-CRF achieves much better performances on all the three data, showing +6.12 F1 improvement on DL-PS, +4.51 on EC-MT, and +9.19 on EC-UQ. This indicates that LSTM-CRF is a very strong baseline system, demonstrating the effectiveness of neural network.Interestingly, when compared with CRF and LSTM-CRF, CRF-VT and LSTM-CRF-VT trained on the voted version perform worse in the DL-PS dataset. This trend is also mentioned in BIBREF4 nguyen2017aggregating. This fact shows that the majority voting method might be unsuitable for our task. There are two possible reasons accounting for the observation. On the one hand, simple character-level voting based on three annotations for each sentence may be still not enough. In the DL-PS dataset, even with only two predefined entity types, one character can have nine NE labels. Thus the majority-voting may be incapable of handling some cases. While the cost by adding more annotations for each sentence would be greatly increased. On the other hand, the lost information produced by majority-voting may be important, at least the ambiguous annotations denote that the input sentence is difficult for NER. The normal CRF and LSTM-CRF models without discard any annotations can differentiate these difficult contexts through learning.Three crowd-annotation learning systems provide better performances than their counterpart systems, (CRF-MA VS CRF) and (LSTM-Crowd/ALCrowd VS LSTM-CRF). Compared with the strong baseline LSTM-CRF, ALCrowd shows its advantage with +1.08 F1 improvements on DL-PS, +1.24 on EC-MT, and +2.38 on EC-UQ, respectively. This indicates that adding the crowd-annotation learning is quite useful for building NER systems. In addition, ALCrowd also outperforms LSTM-Crowd on all the datasets consistently, demonstrating the high effectiveness of ALCrowd in extracting worker independent features. Among all the systems, ALCrowd performs the best, and significantly better than all the other models (the p-value is below $10^{-5}$ by using t-test). The results indicate that with the help of adversarial training, our system can learn a better feature representation from crowd annotation.Impact of Character Embeddings. First, we investigate the effect of the pretrained character embeddings in our proposed crowdsourcing learning model. The comparison results are shown in Figure 2 , where Random refers to the random initialized character embeddings, and Pretrained refers to the embeddings pretrained on the unlabeled data. 
According to the results, we find that our model with the pretrained embeddings significantly outperforms that using the random embeddings, demonstrating that the pretrained embeddings successfully provide useful information.Case Studies. Second, we present several case studies in order to study the differences between our baseline and the worker adversarial models. We conduct a closed test on the training set, the results of which can be regarded as modifications of the training corpus, since there exist inconsistent annotations for each training sentence among the different workers. Figure 3 shows the two examples from the DL-PS dataset, which compares the outputs of the baseline and our final models, as well as the majority-voting strategy.In the first case, none of the annotations get the correct NER result, but our proposed model can capture it. The result of LSTM-CRF is the same as majority-voting. In the second example, the output of majority-voting is the worst, which can account for the reason why the same model trained on the voted corpus performs so badly, as shown in Table 2 . The model of LSTM-CRF fails to recognize the named entity “Xiexie” because of not trusting the second annotation, treating it as one noise annotation. Our proposed model is able to recognize it, because of its ability of extracting worker independent features.In this paper, we presented an approach to performing crowd annotation learning based on the idea of adversarial training for Chinese Named Entity Recognition (NER). In our approach, we use a common and private Bi-LSTMs for representing annotator-generic and -specific information, and learn a label Bi-LSTM from the crowd-annotated NE label sequences. Finally, the proposed approach adopts a LSTM-CRF model to perform tagging. In our experiments, we create two data sets for Chinese NER tasks in the dialog and e-commerce domains. The experimental results show that the proposed approach outperforms strong baseline systems.This work is supported by the National Natural Science Foundation of China (Grant No. 61572338, 61525205, and 61602160). This work is also partially supported by the joint research project of Alibaba and Soochow University. Wenliang is also partially supported by Collaborative Innovation Center of Novel Software Technology and Industrialization.
|
[
"What accuracy does the proposed system achieve?",
"What crowdsourcing platform is used?"
] |
[
[
"F1 scores of 85.99 on the DL-PS data, 75.15 on the EC-MT data and 71.53 on the EC-UQ data ",
"F1 of 85.99 on the DL-PS dataset (dialog domain); 75.15 on EC-MT and 71.53 on EC-UQ (e-commerce domain)"
],
[
"",
"They did not use any platform, instead they hired undergraduate students to do the annotation."
]
] |
Deep Learning approaches have achieved impressive results on various NLP tasks BIBREF0 , BIBREF1 , BIBREF2 and have become the de facto approach for any NLP task. However, these deep learning techniques have found to be less effective for low-resource languages when the available training data is very less BIBREF3 . Recently, several approaches like Multi-task learning BIBREF4 , multilingual learning BIBREF5 , semi-supervised learning BIBREF2 , BIBREF6 and transfer learning BIBREF7 , BIBREF3 have been explored by the deep learning community to overcome data sparsity in low-resource languages. Transfer learning trains a model for a parent task and fine-tunes the learned parent model weights (features) for a related child task BIBREF7 , BIBREF8 . This effectively reduces the requirement on training data for the child task as the model would have learned relevant features from the parent task data thereby, improving the performance on the child task.Transfer learning has also been explored in the multilingual Neural Machine Translation BIBREF3 , BIBREF9 , BIBREF10 . The goal is to improve the NMT performance on the source to target language pair (child task) using an assisting source language (assisting to target translation is the parent task). Here, the parent model is trained on the assisting and target language parallel corpus and the trained weights are used to initialize the child model. The child model can now be fine-tuned on the source-target language pairs, if parallel corpus is available. The divergence between the source and the assisting language can adversely impact the benefits obtained from transfer learning. Multiple studies have shown that transfer learning works best when the languages are related BIBREF3 , BIBREF10 , BIBREF9 . Several studies have tried to address lexical divergence between the source and the target languages BIBREF10 , BIBREF11 , BIBREF12 . However, the effect of word order divergence and its mitigation has not been explored. In a practical setting, it is not uncommon to have source and assisting languages with different word order. For instance, it is possible to find parallel corpora between English and some Indian languages, but very little parallel corpora between Indian languages. Hence, it is natural to use English as an assisting language for inter-Indian language translation.To see how word order divergence can be detrimental, let us consider the case of the standard RNN (Bi-LSTM) encoder-attention-decoder architecture BIBREF13 . The encoder generates contextual representations (annotation vectors) for each source word, which are used by the attention network to match the source words to the current decoder state. The contextual representation is word-order dependent. Hence, if the assisting and the source languages do not have similar word order the generated contextual representations will not be consistent. The attention network (and hence the decoder) sees different contextual representations for similar words in parallel sentences across different languages. This makes it difficult to transfer knowledge learned from the assisting language to the source language.We illustrate this by visualizing the contextual representations generated by the encoder of an English to Hindi NMT system for two versions of the English input: (a) original word order (SVO) (b) word order of the source language (SOV, for Bengali). Figure FIGREF1 shows that the encoder representations obtained are very different. 
The attention network and the decoder now have to work with very different representations. Note that the plot below does not take into account further lexical and other divergences between source and assisting languages, since we demonstrated word order divergence with the same language on the source side.To address this word order divergence, we propose to pre-order the assisting language sentences to match the word order of the source language. We consider an extremely resource constrained scenario, where we do not have any parallel corpus for the child task. We are limited to a bilingual dictionary for transfer information from the assisting to the source language. From our experiments, we show that there is a significant increase in the translation accuracy for the unseen source-target language pair. BIBREF3 explored transfer learning for NMT on low-resource languages. They studied the influence of language divergence between languages chosen for training the parent and child model, and showed that choosing similar languages for training the parent and child model leads to better improvements from transfer learning. A limitation of BIBREF3 approach is that they ignore the lexical similarity between languages and also the source language embeddings are randomly initialized. BIBREF10 , BIBREF11 , BIBREF12 take advantage of lexical similarity between languages in their work. BIBREF10 proposed to use Byte-Pair Encoding (BPE) to represent the sentences in both the parent and the child language to overcome the above limitation. They show using BPE benefits transfer learning especially when the involved languages are closely-related agglutinative languages. Similarly, BIBREF11 utilize lexical similarity between the source and assisting languages by training a character-level NMT system. BIBREF12 address lexical divergence by using bilingual embeddings and mixture of universal token embeddings. One of the languages' vocabulary, usually English vocabulary is considered as universal tokens and every word in the other languages is represented as a mixture of universal tokens. They show results on extremely low-resource languages.To the best of our knowledge, no work has addressed word order divergence in transfer learning for multilingual NMT. However, some work exists for other NLP tasks that could potentially address word order. For Named Entity Recognition (NER), BIBREF14 use a self-attention layer after the Bi-LSTM layer to address word-order divergence for Named Entity Recognition (NER) task. The approach does not show any significant improvements over multiple languages. A possible reason is that the divergence has to be addressed before/during construction of the contextual embeddings in the Bi-LSTM layer, and the subsequent self-attention layer does not address word-order divergence. BIBREF15 use adversarial training for cross-lingual question-question similarity ranking in community question answering. The adversarial training tries to force the encoder representations of similar sentences from different input languages to have similar representations.Pre-ordering the source language sentences to match the target language word order has been useful in addressing word-order divergence for Phrase-Based SMT BIBREF16 , BIBREF17 , BIBREF18 , BIBREF19 . Recently, BIBREF20 proposed a way to measure and reduce the divergence between the source and target languages based on morphological and syntactic properties, also termed as anisomorphism. 
They demonstrated that by reducing the anisomorphism between the source and target languages, consistent improvements in NMT performance were obtained. The NMT system used additional features like word forms, POS tags and dependency relations in addition to parallel corpora. On the other hand, BIBREF21 observed a drop in performance due to pre-ordering for NMT. Unlike BIBREF20 , the NMT system was trained on pre-ordered sentences and no additional features were provided to the system. Note that all these works address source-target divergence, not divergence between source languages in multilingual NMT.Consider the task of translating from an extremely low-resource language (source) to a target language. The parallel corpus between the two languages if available may be too small to train a NMT model. Similar to existing works BIBREF3 , BIBREF10 , BIBREF12 , we use transfer learning to overcome data sparsity and train a NMT model between the source and the target languages. Specifically, the NMT model (parent model) is trained on the assisting language and target language pairs. We choose English as the assisting language in all our experiments. In our resource-scarce scenario, we have no parallel corpus for the child task. Hence, at test time, the source language sentence is translated using the parent model after performing a word-by-word translation into the assisting language.Since the source language and the assisting language (English) have different word order, we hypothesize that it leads to inconsistencies in the contextual representations generated by the encoder for the two languages. In this paper, we propose to pre-order English sentences (assisting language sentences) to match the word-order of the source language and train the parent model on this pre-ordered corpus. In our experiments, we look at scenarios where the assisting language has SVO word order and the source language has SOV word order.For instance, consider the English sentence Anurag will meet Thakur. One of the pre-ordering rule swaps the position of the noun phrase followed by a transitive verb with the transitive verb. The original and the resulting re-ordered parse tree will be as shown in the Table TABREF5 . Applying this reordering rule to the above sentence Anurag will meet Thakur will yield the reordered sentence Anurag Thakur will meet. Additionally, the Table TABREF5 shows the parse trees for the above sentence with and without pre-ordering.Pre-ordering should also be beneficial for other word order divergence scenarios (e.g., SOV to SVO), but we leave verification of these additional scenarios for future work.In this section, we describe the languages experimented with, datasets used, the network hyper-parameters used in our experiments.We experimented with English INLINEFORM0 Hindi translation as the parent task. English is the assisting source language. Bengali, Gujarati, Marathi, Malayalam and Tamil are the primary source languages, and translation from these to Hindi constitute the child tasks. Hindi, Bengali, Gujarati and Marathi are Indo-Aryan languages, while Malayalam and Tamil are Dravidian languages. All these languages have a canonical SOV word order.For training English-Hindi NMT systems, we use the IITB English-Hindi parallel corpus BIBREF22 ( INLINEFORM0 sentences from the training set) and the ILCI English-Hindi parallel corpus ( INLINEFORM1 sentences). 
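The reordering rule illustrated above (Anurag will meet Thakur becoming Anurag Thakur will meet) can be sketched as a small transformation over constituency parse trees. The toy function below only swaps a verb with its object noun phrase inside a verb phrase; real systems such as CFILT-preorder apply a much richer rule set that also handles auxiliaries and other constructions.

```python
from nltk import Tree

def reorder_vp(tree):
    """Recursively move an object NP in front of the verb it follows (SVO -> SOV)."""
    if isinstance(tree, str):                 # leaf (a word)
        return tree
    children = [reorder_vp(c) for c in tree]
    if tree.label() == "VP":
        reordered, i = [], 0
        while i < len(children):
            node = children[i]
            is_verb = not isinstance(node, str) and node.label().startswith("VB")
            has_np_object = (i + 1 < len(children)
                             and not isinstance(children[i + 1], str)
                             and children[i + 1].label() == "NP")
            if is_verb and has_np_object:
                reordered.extend([children[i + 1], node])   # object before verb
                i += 2
            else:
                reordered.append(node)
                i += 1
        children = reordered
    return Tree(tree.label(), children)

# Toy example (simplified parse, no auxiliary):
# reorder_vp(Tree.fromstring("(S (NP Anurag) (VP (VB meets) (NP Thakur)))")).leaves()
# -> ['Anurag', 'Thakur', 'meets']
```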
The ILCI (Indian Language Corpora Initiative) multilingual parallel corpus BIBREF23 spans multiple Indian languages from the health and tourism domains. We use the 520-sentence dev-set of the IITB parallel corpus for validation. For each child task, we use INLINEFORM2 sentences from ILCI corpus as the test set.We use OpenNMT-Torch BIBREF24 to train the NMT system. We use the standard sequence-to-sequence architecture with attention BIBREF13 . We use an encoder which contains two layers of bidirectional LSTMs with 500 neurons each. The decoder contains two LSTM layers with 500 neurons each. Input feeding approach BIBREF1 is used where the previous attention hidden state is fed as input to the decoder LSTM. We use a mini-batch of size 50 and use a dropout layer. We begin with an initial learning rate of INLINEFORM0 and decay the learning rate by a factor of INLINEFORM1 when the perplexity on validation set increases. The training is stopped when the learning rate falls below INLINEFORM2 or number of epochs=22. The English input is initialized with pre-trained embeddings trained using fastText BIBREF25 .English vocabulary consists of INLINEFORM0 tokens appearing at least 2 times in the English training corpus. For constructing the Hindi vocabulary we considered only those tokens appearing at least 5 times in the training split resulting in a vocabulary size of INLINEFORM1 tokens. For representing English and other source languages into a common space, we translate each word in the source language into English using a bilingual dictionary (Google Translate word translation in our case). In an end-to-end solution, it would have been ideal to use bilingual embeddings or obtain word-by-word translations via bilingual embeddings BIBREF14 . But, the quality of publicly available bilingual embeddings for English-Indian languages is very low for obtaining good-quality, bilingual representations BIBREF26 , BIBREF27 . We also found that these embeddings were not useful for transfer learning.We use the CFILT-preorder system for reordering English sentences to match the Indian language word order. It contains two re-ordering systems: (1) generic rules that apply to all Indian languages BIBREF17 , and (2) hindi-tuned rules which improve the generic rules by incorporating improvements found through an error analysis of English-Hindi reordering BIBREF28 . These Hindi-tuned rules have been found to improve reordering for many English to Indian language pairs BIBREF29 .In this section, we describe the results from our experiments on NMT task. We report the results on X-Hindi pair, where X is one of Bengali, Gujarati, Marathi, Tamil, and Malayalam. The results are presented in the Table TABREF6 . We report BLEU scores and LeBLEU scores BIBREF30 . We observe that both the pre-ordering configurations significantly improve the BLEU scores over the baseline scores. We observe larger gains when generic pre-ordering rules are used compared to the Hindi-tuned pre-ordering rules.These results support our hypothesis that word-order divergence can limit the benefits of multilingual translation. Reducing the word order divergence can improve translation in extremely low-resource scenarios.An analysis of the outputs revealed that pre-ordering significantly reducing the number of UNK tokens (placeholder for unknown words) in the test output (Table TABREF14 ). 
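The vocabulary cut-offs and the test-time word-by-word glossing described above can be sketched as follows. The toy corpus and the hand-written dictionary are assumptions made purely for illustration; in the experiments the word translations come from Google Translate rather than a fixed mapping.

```python
from collections import Counter

def build_vocab(tokens, min_count):
    """Keep only tokens occurring at least min_count times (2 for English, 5 for Hindi)."""
    counts = Counter(tokens)
    return {tok for tok, c in counts.items() if c >= min_count}

# Toy data standing in for the IITB/ILCI training corpora (assumption).
english_tokens = "the boy reads the book the girl reads a book".split()
print(sorted(build_vocab(english_tokens, min_count=2)))  # -> ['book', 'reads', 'the']

# Word-by-word glossing of a source-language sentence into English at test time.
# The toy dictionary is an assumption; the experiments use Google Translate word
# translations rather than a hand-written mapping.
bilingual_dict = {"ladka": "boy", "kitab": "book", "padhta": "reads", "hai": "is"}

def gloss_to_english(sentence, dictionary):
    """Translate word by word; unknown words are passed through unchanged."""
    return [dictionary.get(w, w) for w in sentence.split()]

print(gloss_to_english("ladka kitab padhta hai", bilingual_dict))
# -> ['boy', 'book', 'reads', 'is']
```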
We hypothesize that, due to the word order divergence between English and the Indian languages, the encoder representations are inconsistent, leading the decoder to generate unknown words. The pre-ordered models, in contrast, produce better contextual representations, resulting in fewer unknown tokens and better translations, which is also reflected in the BLEU scores.

In this paper, we show that handling word-order divergence between the source and assisting languages is crucial for the success of multilingual NMT in an extremely low-resource setting. Pre-ordering the assisting language to match the word order of the source language significantly improves translation quality in this setting. While the current work focused on Indian languages, we would like to validate the hypothesis on a more diverse set of languages.
Simplified language is a variety of standard language characterized by reduced lexical and syntactic complexity, the addition of explanations for difficult concepts, and clearly structured layout. Among the target groups of simplified language commonly mentioned are persons with cognitive impairment or learning disabilities, prelingually deaf persons, functionally illiterate persons, and foreign language learners BIBREF0.Two natural language processing tasks deal with the concept of simplified language: automatic readability assessment and automatic text simplification. Readability assessment refers to the process of determining the level of difficulty of a text, e.g., along readability measures, school grades, or levels of the Common European Framework of Reference for Languages (CEFR) BIBREF1. Readability measures, in their traditional form, take into account only surface features. For example, the Flesch Reading Ease Score BIBREF2 measures the length of words (in syllables) and sentences (in words). While readability has been shown to correlate with such features to some extent BIBREF3, a consensus has emerged according to which they are not sufficient to account for all of the complexity inherent in a text. As [p. 2618]kauchak-et-al-2014 state, “the usability of readability formulas is limited and there is little evidence that the output of these tools directly results in improved understanding by readers”. Recently, more sophisticated models employing (deeper) linguistic features such as lexical, semantic, morphological, morphosyntactic, syntactic, pragmatic, discourse, psycholinguistic, and language model features have been proposed BIBREF4, BIBREF5, BIBREF6, BIBREF7, BIBREF8.Automatic text simplification was initiated in the late 1990s BIBREF9, BIBREF10 and since then has been approached by means of rule-based and statistical methods. As part of a rule-based approach, the operations carried out typically include replacing complex lexical and syntactic units by simpler ones. A statistical approach generally conceptualizes the simplification task as one of converting a standard-language into a simplified-language text using machine translation. nisioi-et-al-2017 introduced neural machine translation to automatic text simplification. Research on automatic text simplification is comparatively widespread for languages such as English, Swedish, Spanish, and Brazilian Portuguese. To the authors' knowledge, no productive system exists for German. suter-2015, suter-et-al-2016 presented a prototype of a rule-based system for German.Machine learning approaches to both readability assessment and text simplification rely on data systematically prepared in the form of corpora. Specifically, for automatic text simplification via machine translation, pairs of standard-language/simplified-language texts aligned at the sentence level (i.e., parallel corpora) are needed.The paper at hand introduces a corpus developed for use in automatic readability assessment and automatic text simplification of German. The focus of this publication is on representing information that is valuable for these tasks but that hitherto has largely been ignored in machine learning approaches centering around simplified language, specifically, text structure (e.g., paragraphs, lines), typography (e.g., font type, font style), and image (content, position, and dimensions) information. The importance of considering such information has repeatedly been asserted theoretically BIBREF11, BIBREF12, BIBREF0. 
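As a concrete example of such a surface-based measure, the sketch below computes the Flesch Reading Ease Score from average sentence length in words and average word length in syllables. The vowel-group syllable counter is a rough approximation added here for illustration; it is not part of the original formula.

```python
import re

def count_syllables(word):
    """Very rough English syllable count: number of vowel groups (approximation)."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(sentences):
    """Flesch Reading Ease: higher scores indicate easier text."""
    words = [w for s in sentences for w in re.findall(r"[A-Za-z]+", s)]
    n_sentences, n_words = len(sentences), len(words)
    n_syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (n_words / n_sentences)
            - 84.6 * (n_syllables / n_words))

print(round(flesch_reading_ease(["The dog runs fast.",
                                 "Readability formulas consider surface features."]), 1))
```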
The remainder of this paper is structured as follows: Section SECREF2 presents previous corpora used for automatic readability assessment and text simplification. Section SECREF3 describes our corpus, introducing its novel aspects and presenting the primary data (Section SECREF7), the metadata (Section SECREF10), the secondary data (Section SECREF28), the profile (Section SECREF35), and the results of machine learning experiments carried out on the corpus (Section SECREF37).A number of corpora for use in automatic readability assessment and automatic text simplification exist. The most well-known example is the Parallel Wikipedia Simplification Corpus (PWKP) compiled from parallel articles of the English Wikipedia and Simple English Wikipedia BIBREF13 and consisting of around 108,000 sentence pairs. The corpus profile is shown in Table TABREF2. While the corpus represents the largest dataset involving simplified language to date, its application has been criticized for various reasons BIBREF15, BIBREF14, BIBREF16; among these, the fact that Simple English Wikipedia articles are not necessarily direct translations of articles from the English Wikipedia stands out. hwang-et-al-2015 provided an updated version of the corpus that includes a total of 280,000 full and partial matches between the two Wikipedia versions. Another frequently used data collection for English is the Newsela Corpus BIBREF14 consisting of 1,130 news articles, each simplified into four school grade levels by professional editors. Table TABREF3 shows the profile of the Newsela Corpus. The table obviates that the difference in vocabulary size between the English and the simplified English side of the PWKP Corpus amounts to only 18%, while the corresponding number for the English side and the level representing the highest amount of simplification in the Newsela Corpus (Simple-4) is 50.8%. Vocabulary size as an indicator of lexical richness is generally taken to correlate positively with complexity BIBREF17.gasperin-et-al-2010 compiled the PorSimples Corpus consisting of Brazilian Portuguese texts (2,116 sentences), each with a natural and a strong simplification, resulting in around 4,500 aligned sentences. drndarevic-saggion-2012, bott-et-al-2012, bott-saggion-2012 produced the Simplext Corpus consisting of 200 Spanish/simplified Spanish document pairs, amounting to a total of 1,149 (Spanish)/1,808 (simplified Spanish) sentences (approximately 1,000 aligned sentences).klaper-ebling-volk-2013 created the first parallel corpus for German/simplified German, consisting of 256 parallel texts downloaded from the web (approximately 70,000 tokens).Section SECREF2 demonstrated that the only corpus containing simplified German available is that of klaper-ebling-volk-2013. Since its creation, a number of legal and political developments have spurred the availability of data in simplified German. Among these developments is the introduction of a set of regulations for accessible information technology (Barrierefreie-Informationstechnik-Verordnung, BITV 2.0) in Germany and the ratification of the United Nations Convention on the Rights of Persons with Disabilities (CRPD) in Switzerland. 
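The vocabulary-size (type) reductions reported above for PWKP and Newsela can be reproduced with a simple type count over the two sides of a parallel corpus; the toy sentences below are placeholders for full corpus sides.

```python
def vocab_reduction(standard_tokens, simplified_tokens):
    """Relative reduction in vocabulary size (types) from standard to simplified text."""
    return 1.0 - len(set(simplified_tokens)) / len(set(standard_tokens))

standard = "the committee convened to deliberate on the controversial proposal".split()
simple = "the group met to talk about the plan".split()
print(f"{vocab_reduction(standard, simple):.1%}")  # -> 12.5%
```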
The paper at hand introduces a corpus that represents an enhancement of the corpus of klaper-ebling-volk-2013 in the following ways:The corpus contains more parallel data.The corpus additionally contains monolingual-only data (simplified German).The corpus newly contains information on text structure, typography, and images.The simplified German side of the parallel data together with the monolingual-only data can be used for automatic readability assessment. The parallel data in the corpus is useful both for deriving rules for a rule-based text simplification system in a data-driven manner and for training a data-driven machine translation system. A data augmentation technique such as back-translation BIBREF18 can be applied to the monolingual-only data to arrive at additional (synthetic) parallel data.The corpus contains PDFs and webpages collected from web sources in Germany, Austria, and Switzerland at the end of 2018/beginning of 2019. The web sources mostly consist of websites of governments, specialised institutions, translation agencies, and non-profit organisations (92 different domains). The documents cover a range of topics, such as politics (e.g., instructions for voting), health (e.g., what to do in case of pregnancy), and culture (e.g., introduction to art museums).For the webpages, a static dump of all documents was created. Following this, the documents were manually checked to verify the language. The main content was subsequently extracted, i.e., HTML markup and boilerplate removed using the Beautiful Soup library for Python. Information on text structure (e.g., paragraphs, lines) and typography (e.g., boldface, italics) was retained. Similarly, image information (content, position, and dimensions of an image) was preserved.For PDFs, the PDFlib Text and Image Extraction Toolkit (TET) was used to extract the plain text and record information on text structure, typography, and images. The toolkit produces output in an XML format (TETML).Metadata was collected automatically from the HTML (webpages) and TETML (PDFs) files, complemented manually, and recorded in the Open Language Archives Community (OLAC) Standard. OLAC is based on a reduced version of the Dublin Core Metadata Element Set (DCMES). Of the 15 elements of this “Simple Dublin Core” set, the following 12 were actively used along with controlled vocabularies of OLAC and Dublin Core:title: title of the document, with the language specified as the value of an xml:lang attribute and alternatives to the original title (e.g., translations) stored as dcterms:alternative (cf. 
Figure FIGREF11 for an example)contributor: all person entities linked to the creation of a document, with an olac:code attribute with values from the OLAC role vocabulary used to further specify the role of the contributor, e.g., author, editor, publisher, or translatordate: date mentioned in the metadata of the HTML or PDF source or, for news and blog articles, date mentioned in the body of the text, in W3C date and time formatdescription: value of the description in the metadata of an HTML document or list of sections of a PDF document, using the Dublin Core qualifier TableOfContentsformat: distinction between the Internet Media Types (MIME types) text/html (for webpages) and application/pdf (for PDFs)identifier: URL of the document or International Standard Book Number (ISBN) for books or brochureslanguage: language of the document as value of the attribute olac:code (i.e., de, as conforming to ISO 639), with the CEFR level as optional element contentpublisher: organization or person that made the document availablerelation: used to establish a link between documents in German and simplified German for the parallel part of the corpus, using the Dublin Core qualifiers hasVersion (for the German text) and isVersionOf (for the simplified German text)rights: any piece of information about the rights of a document, as far as available in the sourcesource: source document, i.e., HTML for web documents and TETML for PDFstype: nature or genre of the content of the document, which, in accordance with the DCMI Type Vocabulary, is Text in all cases and additionally StillImage in cases where a document also contains images. Additionally, the linguistic type is specified according to the OLAC Linguistic Data Type Vocabulary, as either primary_text (applies to most documents) or lexicon in cases where a document represents an entry of a simplified language vocabularyThe elements coverage (to denote the spatial or temporal scope of the content of a resource), creator (to denote the author of a text, see contributor above), and subject (to denote the topic of the document content) were not used.Figure FIGREF11 shows an example of OLAC metadata. The source document described with this metadata record is a PDF structured into chapters, with text corresponding to the CEFR level A2 and images. Metadata in OLAC can be converted into the metadata standard of CLARIN (a European research infrastructure for language resources and technology), the Component MetaData Infrastructure (CMDI). The CMDI standard was chosen since it is the supported metadata version of CLARIN, which is specifically popular in German-speaking countries.Information on the language level of a simplified German text (typically A1, A2, or B1) is particularly valuable, as it allows for conducting automatic readability assessment and graded automatic text simplification experiments on the data. 52 websites and 233 PDFs (amounting to approximately 26,000 sentences) have an explicit language level label.Annotations were added in the Text Corpus Format by WebLicht (TCF) developed as part of CLARIN. TCF supports standoff annotation, which allows for representation of annotations with conflicting hierarchies. TCF does not assign a separate file for each annotation layer; instead, the source text and all annotation layers are stored jointly in a single file. 
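A minimal sketch of the web half of this extraction pipeline — removing boilerplate with Beautiful Soup while keeping paragraph boundaries and simple typography — is given below. The tag choices and the returned record layout are assumptions for illustration, not the exact extraction code used for the corpus.

```python
from bs4 import BeautifulSoup

def extract_structured_text(html):
    """Return paragraphs with their text, line count, and bold/italic tokens."""
    soup = BeautifulSoup(html, "html.parser")
    for tag in soup(["script", "style", "nav", "footer"]):  # crude boilerplate removal
        tag.decompose()
    paragraphs = []
    for p in soup.find_all("p"):
        paragraphs.append({
            "text": p.get_text(" ", strip=True),
            "n_lines": len(p.find_all("br")) + 1,            # <br> treated as line breaks
            "bold": [b.get_text(strip=True) for b in p.find_all(["b", "strong"])],
            "italic": [i.get_text(strip=True) for i in p.find_all(["i", "em"])],
        })
    return paragraphs

html = "<p>Leichte Sprache ist <b>wichtig</b>.<br>Sie hilft vielen Menschen.</p>"
print(extract_structured_text(html))
```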
A token layer acts as the key element to which all other annotation layers are linked.The following types of annotations were added: text structure, fonts, images, tokens, parts of speech, morphological units, lemmas, sentences, and dependency parses. TCF does not readily accommodate the incorporation of all of these types of information. We therefore extended the format in the following ways:Information on the font type and font style (e.g., italics, bold print) of a token and its position on the physical page (for PDFs only) was specified as attributes to the token elements of the tokens layer (cf. Figure FIGREF34 for an example)Information on physical page segmentation (for PDFs only), paragraph segmentation, and line segmentation was added as part of a textspan element in the textstructure layerA separate images layer was introduced to hold image elements that take as attributes the x and y coordinates of the images, their dimensions (width and height), and the number of the page on which they occurA separate fonts layer was introduced to preserve detailed information on the font configurations referenced in the tokens layerLinguistic annotation was added automatically using the ParZu dependency parser for German BIBREF19 (for tokens and dependency parses), the NLTK toolkit BIBREF20 (for sentences), the TreeTagger BIBREF21 (for part-of-speech tags and lemmas), and Zmorge BIBREF22 (for morphological units). Figure FIGREF34 shows a sample corpus annotation. Together, the metadata shown in Figure FIGREF11 and the annotations presented in Figure FIGREF34 constitute a complete TCF file.The resulting corpus contains 6,217 documents (5,461 monolingual documents plus 378 documents for each side of the parallel data). Table TABREF36 shows the corpus profile. The monolingual-only documents on average contain fewer sentences than the simplified German side of the parallel data (average document length in sentences 31.64 vs. 55.75). The average sentence length is almost equal (approx. 11 tokens). Hence, the monolingual-only texts are shorter than the simplified German texts in the parallel data. Compared to their German counterparts, the simplified German texts in the parallel data have clearly undergone a process of lexical simplification: The vocabulary is smaller by 51% (33,384 vs. 16,352 types), which is comparable to the rate of reduction reported in Section SECREF2 for the Newsela Corpus (50.8%).battisti-2019 applied unsupervised machine learning techniques to the simplified German texts of the corpus presented in this paper with the aim of investigating evidence of multiple complexity levels. While the detailed results are beyond the scope of this paper, the author found features based on the structural information that is a unique property of this corpus (e.g., number of images, number of paragraphs, number of lines, number of words of a specific font type, and adherence to a one-sentence-per-line rule) to be predictive of the level of difficulty of a simplified German text. To our knowledge, this is the first study to deliver empirical proof of the relevance of such features.We have introduced a corpus compiled for use in automatic readability assessment and automatic text simplification of German. While such tasks have been addressed for other languages, research on German is still scarce. The features exploited as part of machine learning approaches to readability assessment so far typically include surface and/or (deeper) linguistic features. 
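The extension of the TCF tokens layer with font and position attributes can be sketched with the standard-library ElementTree. The element and attribute names below only loosely mirror the description above and should be read as an approximation of the format, not its exact schema.

```python
import xml.etree.ElementTree as ET

def build_tokens_layer(tokens):
    """Build a TCF-style tokens layer; each token carries font/position attributes."""
    layer = ET.Element("tokens")
    for i, tok in enumerate(tokens):
        el = ET.SubElement(layer, "token", {
            "ID": f"t_{i}",
            "font": tok.get("font", "default"),
            "style": tok.get("style", "regular"),                    # e.g. bold, italic
            "x": str(tok.get("x", 0)), "y": str(tok.get("y", 0)),    # page position (PDFs)
        })
        el.text = tok["text"]
    return layer

tokens = [
    {"text": "Leichte", "font": "Arial", "style": "bold", "x": 72, "y": 540},
    {"text": "Sprache", "font": "Arial", "style": "regular", "x": 130, "y": 540},
]
print(ET.tostring(build_tokens_layer(tokens), encoding="unicode"))
```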
The corpus presented in this paper additionally contains information on text structure, typography, and images. These features have been shown to be indicative of simple vs. complex texts both theoretically and, using the corpus described in this paper, empirically.

Information on text structure, typography, and images can also be leveraged as part of a neural machine translation approach to text simplification. A set of parallel documents used in machine translation additionally requires sentence alignments, which are still missing from our corpus. Hence, as a next step, we will include such information using the Customized Alignment for Text Simplification (CATS) tool BIBREF23.
Knowledge Base Question Answering (KBQA) systems answer questions by obtaining information from KB tuples BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 . For an input question, these systems typically generate a KB query, which can be executed to retrieve the answers from a KB. Figure 1 illustrates the process used to parse two sample questions in a KBQA system: (a) a single-relation question, which can be answered with a single $<$ head-entity, relation, tail-entity $>$ KB tuple BIBREF6 , BIBREF7 , BIBREF2 ; and (b) a more complex case, where some constraints need to be handled for multiple entities in the question. The KBQA system in the figure performs two key tasks: (1) entity linking, which links $n$ -grams in questions to KB entities, and (2) relation detection, which identifies the KB relation(s) a question refers to.The main focus of this work is to improve the relation detection subtask and further explore how it can contribute to the KBQA system. Although general relation detection methods are well studied in the NLP community, such studies usually do not take the end task of KBQA into consideration. As a result, there is a significant gap between general relation detection studies and KB-specific relation detection. First, in most general relation detection tasks, the number of target relations is limited, normally smaller than 100. In contrast, in KBQA even a small KB, like Freebase2M BIBREF2 , contains more than 6,000 relation types. Second, relation detection for KBQA often becomes a zero-shot learning task, since some test instances may have unseen relations in the training data. For example, the SimpleQuestions BIBREF2 data set has 14% of the golden test relations not observed in golden training tuples. Third, as shown in Figure 1 (b), for some KBQA tasks like WebQuestions BIBREF0 , we need to predict a chain of relations instead of a single relation. This increases the number of target relation types and the sizes of candidate relation pools, further increasing the difficulty of KB relation detection. Owing to these reasons, KB relation detection is significantly more challenging compared to general relation detection tasks.This paper improves KB relation detection to cope with the problems mentioned above. First, in order to deal with the unseen relations, we propose to break the relation names into word sequences for question-relation matching. Second, noticing that original relation names can sometimes help to match longer question contexts, we propose to build both relation-level and word-level relation representations. Third, we use deep bidirectional LSTMs (BiLSTMs) to learn different levels of question representations in order to match the different levels of relation information. Finally, we propose a residual learning method for sequence matching, which makes the model training easier and results in more abstract (deeper) question representations, thus improves hierarchical matching.In order to assess how the proposed improved relation detection could benefit the KBQA end task, we also propose a simple KBQA implementation composed of two-step relation detection. Given an input question and a set of candidate entities retrieved by an entity linker based on the question, our proposed relation detection model plays a key role in the KBQA process: (1) Re-ranking the entity candidates according to whether they connect to high confident relations detected from the raw question text by the relation detection model. 
This step is important to deal with the ambiguities normally present in entity linking results. (2) Finding the core relation (chains) for each topic entity selection from a much smaller candidate entity set after re-ranking. The above steps are followed by an optional constraint detection step, when the question cannot be answered by single relations (e.g., multiple entities in the question). Finally the highest scored query from the above steps is used to query the KB for answers.Our main contributions include: (i) An improved relation detection model by hierarchical matching between questions and relations with residual learning; (ii) We demonstrate that the improved relation detector enables our simple KBQA system to achieve state-of-the-art results on both single-relation and multi-relation KBQA tasks.Previous research BIBREF4 , BIBREF20 formulates KB relation detection as a sequence matching problem. However, while the questions are natural word sequences, how to represent relations as sequences remains a challenging problem. Here we give an overview of two types of relation sequence representations commonly used in previous work.(1) Relation Name as a Single Token (relation-level). In this case, each relation name is treated as a unique token. The problem with this approach is that it suffers from the low relation coverage due to limited amount of training data, thus cannot generalize well to large number of open-domain relations. For example, in Figure 1 , when treating relation names as single tokens, it will be difficult to match the questions to relation names “episodes_written” and “starring_roles” if these names do not appear in training data – their relation embeddings $\mathbf {h}^r$ s will be random vectors thus are not comparable to question embeddings $\mathbf {h}^q$ s.(2) Relation as Word Sequence (word-level). In this case, the relation is treated as a sequence of words from the tokenized relation name. It has better generalization, but suffers from the lack of global information from the original relation names. For example in Figure 1 (b), when doing only word-level matching, it is difficult to rank the target relation “starring_roles” higher compared to the incorrect relation “plays_produced”. This is because the incorrect relation contains word “plays”, which is more similar to the question (containing word “play”) in the embedding space. On the other hand, if the target relation co-occurs with questions related to “tv appearance” in training, by treating the whole relation as a token (i.e. relation id), we could better learn the correspondence between this token and phrases like “tv show” and “play on”.The two types of relation representation contain different levels of abstraction. As shown in Table 1 , the word-level focuses more on local information (words and short phrases), and the relation-level focus more on global information (long phrases and skip-grams) but suffer from data sparsity. Since both these levels of granularity have their own pros and cons, we propose a hierarchical matching approach for KB relation detection: for a candidate relation, our approach matches the input question to both word-level and relation-level representations to get the final ranking score. Section "Improved KB Relation Detection" gives the details of our proposed approach.This section describes our hierarchical sequence matching with residual learning approach for relation detection. 
In order to match the question to different aspects of a relation (with different abstraction levels), we deal with three problems as follows on learning question/relation representations.We provide our model with both types of relation representation: word-level and relation-level. Therefore, the input relation becomes $\mathbf {r}=\lbrace r^{word}_1,\cdots ,r^{word}_{M_1}\rbrace \cup \lbrace r^{rel}_1,\cdots ,r^{rel}_{M_2}\rbrace $ , where the first $M_1$ tokens are words (e.g. {episode, written}), and the last $M_2$ tokens are relation names, e.g., {episode_written} or {starring_roles, series} (when the target is a chain like in Figure 1 (b)). We transform each token above to its word embedding then use two BiLSTMs (with shared parameters) to get their hidden representations $[\mathbf {B}^{word}_{1:M_1}:\mathbf {B}^{rel}_{1:M_2}]$ (each row vector $\mathbf {\beta }_i$ is the concatenation between forward/backward representations at $i$ ). We initialize the relation sequence LSTMs with the final state representations of the word sequence, as a back-off for unseen relations. We apply one max-pooling on these two sets of vectors and get the final relation representation $\mathbf {h}^r$ .From Table 1 , we can see that different parts of a relation could match different contexts of question texts. Usually relation names could match longer phrases in the question and relation words could match short phrases. Yet different words might match phrases of different lengths.As a result, we hope the question representations could also comprise vectors that summarize various lengths of phrase information (different levels of abstraction), in order to match relation representations of different granularity. We deal with this problem by applying deep BiLSTMs on questions. The first-layer of BiLSTM works on the word embeddings of question words $\mathbf {q}=\lbrace q_1,\cdots ,q_N\rbrace $ and gets hidden representations $\mathbf {\Gamma }^{(1)}_{1:N}=[\mathbf {\gamma }^{(1)}_1;\cdots ;\mathbf {\gamma }^{(1)}_N]$ . The second-layer BiLSTM works on $\mathbf {\Gamma }^{(1)}_{1:N}$ to get the second set of hidden representations $\mathbf {\Gamma }^{(2)}_{1:N}$ . Since the second BiLSTM starts with the hidden vectors from the first layer, intuitively it could learn more general and abstract information compared to the first layer.Note that the first(second)-layer of question representations does not necessarily correspond to the word(relation)-level relation representations, instead either layer of question representations could potentially match to either level of relation representations. This raises the difficulty of matching between different levels of relation/question representations; the following section gives our proposal to deal with such problem.Now we have question contexts of different lengths encoded in $\mathbf {\Gamma }^{(1)}_{1:N}$ and $\mathbf {\Gamma }^{(2)}_{1:N}$ . Unlike the standard usage of deep BiLSTMs that employs the representations in the final layer for prediction, here we expect that two layers of question representations can be complementary to each other and both should be compared to the relation representation space (Hierarchical Matching). This is important for our task since each relation token can correspond to phrases of different lengths, mainly because of syntactic variations. 
For example in Table 1 , the relation word written could be matched to either the same single word in the question or a much longer phrase be the writer of.We could perform the above hierarchical matching by computing the similarity between each layer of $\mathbf {\Gamma }$ and $\mathbf {h}^r$ separately and doing the (weighted) sum between the two scores. However this does not give significant improvement (see Table 2 ). Our analysis in Section "Relation Detection Results" shows that this naive method suffers from the training difficulty, evidenced by that the converged training loss of this model is much higher than that of a single-layer baseline model. This is mainly because (1) Deep BiLSTMs do not guarantee that the two-levels of question hidden representations are comparable, the training usually falls to local optima where one layer has good matching scores and the other always has weight close to 0. (2) The training of deeper architectures itself is more difficult.To overcome the above difficulties, we adopt the idea from Residual Networks BIBREF23 for hierarchical matching by adding shortcut connections between two BiLSTM layers. We proposed two ways of such Hierarchical Residual Matching: (1) Connecting each $\mathbf {\gamma }^{(1)}_i$ and $\mathbf {\gamma }^{(2)}_i$ , resulting in a $\mathbf {\gamma }^{^{\prime }}_i=\mathbf {\gamma }^{(1)}_i + \mathbf {\gamma }^{(2)}_i$ for each position $i$ . Then the final question representation $\mathbf {h}^q$ becomes a max-pooling over all $\mathbf {\gamma }^{^{\prime }}_i$ s, 1 $\le $ i $\le $ $N$ . (2) Applying max-pooling on $\mathbf {\Gamma }^{(1)}_{1:N}$ and $\mathbf {\gamma }^{(2)}_i$0 to get $\mathbf {\gamma }^{(2)}_i$1 and $\mathbf {\gamma }^{(2)}_i$2 , respectively, then setting $\mathbf {\gamma }^{(2)}_i$3 . Finally we compute the matching score of $\mathbf {\gamma }^{(2)}_i$4 given $\mathbf {\gamma }^{(2)}_i$5 as $\mathbf {\gamma }^{(2)}_i$6 .Intuitively, the proposed method should benefit from hierarchical training since the second layer is fitting the residues from the first layer of matching, so the two layers of representations are more likely to be complementary to each other. This also ensures the vector spaces of two layers are comparable and makes the second-layer training easier.During training we adopt a ranking loss to maximizing the margin between the gold relation $\mathbf {r}^+$ and other relations $\mathbf {r}^-$ in the candidate pool $R$ . $$l_{\mathrm {rel}} = \max \lbrace 0, \gamma - s_{\mathrm {rel}}(\mathbf {r}^+; \mathbf {q}) + s_{\mathrm {rel}}(\mathbf {r}^-; \mathbf {q})\rbrace \nonumber $$ (Eq. 12) where $\gamma $ is a constant parameter. Fig 2 summarizes the above Hierarchical Residual BiLSTM (HR-BiLSTM) model.Another way of hierarchical matching consists in relying on attention mechanism, e.g. BIBREF24 , to find the correspondence between different levels of representations. This performs below the HR-BiLSTM (see Table 2 ).This section describes our KBQA pipeline system. We make minimal efforts beyond the training of the relation detection model, making the whole system easy to build.Following previous work BIBREF4 , BIBREF5 , our KBQA system takes an existing entity linker to produce the top- $K$ linked entities, $EL_K(q)$ , for a question $q$ (“initial entity linking”). 
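A compact PyTorch sketch of the HR-BiLSTM matching model is given below: a shared BiLSTM encodes the relation as word-level plus relation-level tokens (with the relation-name pass initialized from the word pass's final state), a two-layer BiLSTM with a shortcut connection encodes the question, both are max-pooled, and cosine similarity with a margin ranking loss scores candidate relations. Dimensions, the toy vocabulary, and all variable names are assumptions; the sketch follows shortcut option (1) above, summing the two layers' hidden states before pooling.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HRBiLSTM(nn.Module):
    """Minimal sketch of Hierarchical Residual BiLSTM matching (assumed sizes)."""
    def __init__(self, vocab_size, emb_dim=300, hidden=200):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        # shared BiLSTM for relation words and relation names
        self.rel_lstm = nn.LSTM(emb_dim, hidden, bidirectional=True, batch_first=True)
        # two-layer question BiLSTM with a residual connection between layers
        self.q_lstm1 = nn.LSTM(emb_dim, hidden, bidirectional=True, batch_first=True)
        self.q_lstm2 = nn.LSTM(2 * hidden, hidden, bidirectional=True, batch_first=True)

    def encode_relation(self, rel_word_ids, rel_name_ids):
        words, state = self.rel_lstm(self.emb(rel_word_ids))
        # relation-name pass initialized with the word pass's final state (back-off)
        names, _ = self.rel_lstm(self.emb(rel_name_ids), state)
        return torch.cat([words, names], dim=1).max(dim=1).values       # max-pool

    def encode_question(self, q_ids):
        g1, _ = self.q_lstm1(self.emb(q_ids))
        g2, _ = self.q_lstm2(g1)
        return (g1 + g2).max(dim=1).values                               # residual + max-pool

    def score(self, q_ids, rel_word_ids, rel_name_ids):
        hq = self.encode_question(q_ids)
        hr = self.encode_relation(rel_word_ids, rel_name_ids)
        return F.cosine_similarity(hq, hr, dim=-1)

# Toy usage with random ids; the ranking loss follows Eq. 12 with margin gamma.
model = HRBiLSTM(vocab_size=1000)
q = torch.randint(0, 1000, (1, 8))
pos_w, pos_n = torch.randint(0, 1000, (1, 3)), torch.randint(0, 1000, (1, 1))
neg_w, neg_n = torch.randint(0, 1000, (1, 3)), torch.randint(0, 1000, (1, 1))
gamma = 0.5
loss = torch.clamp(gamma - model.score(q, pos_w, pos_n)
                   + model.score(q, neg_w, neg_n), min=0).mean()
loss.backward()
```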
Then we generate the KB queries for $q$ following the four steps illustrated in Algorithm "KBQA Enhanced by Relation Detection" .[htbp] InputInput OutputOutput Top query tuple $(\hat{e},\hat{r}, \lbrace (c, r_c)\rbrace )$ Entity Re-Ranking (first-step relation detection): Use the raw question text as input for a relation detector to score all relations in the KB that are associated to the entities in $EL_K(q)$ ; use the relation scores to re-rank $EL_K(q)$ and generate a shorter list $EL^{\prime }_{K^{\prime }}(q)$ containing the top- $K^{\prime }$ entity candidates (Section "Entity Re-Ranking" ) Relation Detection: Detect relation(s) using the reformatted question text in which the topic entity is replaced by a special token $<$ e $>$ (Section "Relation Detection" ) Query Generation: Combine the scores from step 1 and 2, and select the top pair $(\hat{e},\hat{r})$ (Section "Query Generation" ) Constraint Detection (optional): Compute similarity between $q$ and any neighbor entity $c$ of the entities along $EL_K(q)$0 (connecting by a relation $EL_K(q)$1 ) , add the high scoring $EL_K(q)$2 and $EL_K(q)$3 to the query (Section "Constraint Detection" ). KBQA with two-step relation detection Compared to previous approaches, the main difference is that we have an additional entity re-ranking step after the initial entity linking. We have this step because we have observed that entity linking sometimes becomes a bottleneck in KBQA systems. For example, on SimpleQuestions the best reported linker could only get 72.7% top-1 accuracy on identifying topic entities. This is usually due to the ambiguities of entity names, e.g. in Fig 1 (a), there are TV writer and baseball player “Mike Kelley”, which is impossible to distinguish with only entity name matching.Having observed that different entity candidates usually connect to different relations, here we propose to help entity disambiguation in the initial entity linking with relations detected in questions.Sections "Entity Re-Ranking" and "Relation Detection" elaborate how our relation detection help to re-rank entities in the initial entity linking, and then those re-ranked entities enable more accurate relation detection. The KBQA end task, as a result, benefits from this process.In this step, we use the raw question text as input for a relation detector to score all relations in the KB with connections to at least one of the entity candidates in $EL_K(q)$ . We call this step relation detection on entity set since it does not work on a single topic entity as the usual settings. We use the HR-BiLSTM as described in Sec. "Improved KB Relation Detection" . For each question $q$ , after generating a score $s_{rel}(r;q)$ for each relation using HR-BiLSTM, we use the top $l$ best scoring relations ( $R^{l}_q$ ) to re-rank the original entity candidates. Concretely, for each entity $e$ and its associated relations $R_e$ , given the original entity linker score $s_{linker}$ , and the score of the most confident relation $r\in R_q^{l} \cap R_e$ , we sum these two scores to re-rank the entities: $$s_{\mathrm {rerank}}(e;q) =& \alpha \cdot s_{\mathrm {linker}}(e;q) \nonumber \\
+ & (1-\alpha ) \cdot \max _{r \in R_q^{l} \cap R_e} s_{\mathrm {rel}}(r;q).\nonumber $$ (Eq. 15) Finally, we select top $K^{\prime }$ $<$ $K$ entities according to score $s_{rerank}$ to form the re-ranked list $EL_{K^{\prime }}^{^{\prime }}(q)$ .We use the same example in Fig 1 (a) to illustrate the idea. Given the input question in the example, a relation detector is very likely to assign high scores to relations such as “episodes_written”, “author_of” and “profession”. Then, according to the connections of entity candidates in KB, we find that the TV writer “Mike Kelley” will be scored higher than the baseball player “Mike Kelley”, because the former has the relations “episodes_written” and “profession”. This method can be viewed as exploiting entity-relation collocation for entity linking.In this step, for each candidate entity $e \in EL_K^{\prime }(q)$ , we use the question text as the input to a relation detector to score all the relations $r \in R_e$ that are associated to the entity $e$ in the KB. Because we have a single topic entity input in this step, we do the following question reformatting: we replace the the candidate $e$ 's entity mention in $q$ with a token “ $<$ e $>$ ”. This helps the model better distinguish the relative position of each word compared to the entity. We use the HR-BiLSTM model to predict the score of each relation $r \in R_e$ : $s_{rel} (r;e,q)$ .Finally, the system outputs the $<$ entity, relation (or core-chain) $>$ pair $(\hat{e}, \hat{r})$ according to: $$s(\hat{e}, \hat{r}; q) =& \max _{e \in EL_{K^{\prime }}^{^{\prime }}(q), r \in R_e} \left( \beta \cdot s_{\mathrm {rerank}}(e;q) \right. \nonumber \\
&\left.+ (1-\beta ) \cdot s_{\mathrm {rel}} (r;e,q) \right),
\nonumber $$ (Eq. 19) where $\beta $ is a hyperparameter to be tuned.Similar to BIBREF4 , we adopt an additional constraint detection step based on text matching. Our method can be viewed as entity-linking on a KB sub-graph. It contains two steps: (1) Sub-graph generation: given the top scored query generated by the previous 3 steps, for each node $v$ (answer node or the CVT node like in Figure 1 (b)), we collect all the nodes $c$ connecting to $v$ (with relation $r_c$ ) with any relation, and generate a sub-graph associated to the original query. (2) Entity-linking on sub-graph nodes: we compute a matching score between each $n$ -gram in the input question (without overlapping the topic entity) and entity name of $c$ (except for the node in the original query) by taking into account the maximum overlapping sequence of characters between them (see Appendix A for details and B for special rules dealing with date/answer type constraints). If the matching score is larger than a threshold $\theta $ (tuned on training set), we will add the constraint entity $c$ (and $r_c$ ) to the query by attaching it to the corresponding node $v$ on the core-chain.We use the SimpleQuestions BIBREF2 and WebQSP BIBREF25 datasets. Each question in these datasets is labeled with the gold semantic parse. Hence we can directly evaluate relation detection performance independently as well as evaluate on the KBQA end task.SimpleQuestions (SQ): It is a single-relation KBQA task. The KB we use consists of a Freebase subset with 2M entities (FB2M) BIBREF2 , in order to compare with previous research. yin2016simple also evaluated their relation extractor on this data set and released their proposed question-relation pairs, so we run our relation detection model on their data set. For the KBQA evaluation, we also start with their entity linking results. Therefore, our results can be compared with their reported results on both tasks.WebQSP (WQ): A multi-relation KBQA task. We use the entire Freebase KB for evaluation purposes. Following yih-EtAl:2016:P16-2, we use S-MART BIBREF26 entity-linking outputs. In order to evaluate the relation detection models, we create a new relation detection task from the WebQSP data set. For each question and its labeled semantic parse: (1) we first select the topic entity from the parse; and then (2) select all the relations and relation chains (length $\le $ 2) connected to the topic entity, and set the core-chain labeled in the parse as the positive label and all the others as the negative examples.We tune the following hyper-parameters on development sets: (1) the size of hidden states for LSTMs ({50, 100, 200, 400}); (2) learning rate ({0.1, 0.5, 1.0, 2.0}); (3) whether the shortcut connections are between hidden states or between max-pooling results (see Section "Hierarchical Matching between Relation and Question" ); and (4) the number of training epochs.For both the relation detection experiments and the second-step relation detection in KBQA, we have entity replacement first (see Section "Relation Detection" and Figure 1 ). All word vectors are initialized with 300- $d$ pretrained word embeddings BIBREF27 . The embeddings of relation names are randomly initialized, since existing pre-trained relation embeddings (e.g. TransE) usually support limited sets of relation names. We leave the usage of pre-trained relation embeddings to future work. Table 2 shows the results on two relation detection tasks. 
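Before turning to the results, the entity re-ranking and query-generation steps of the pipeline can be summarized in a few lines: both are weighted interpolations of the linker score and the relation detector's scores, as in the two formulas above. The score dictionaries, the alpha/beta values, and the entity and relation names below are made-up placeholders.

```python
def rerank_entities(linker_scores, rel_scores, entity_relations, top_l_rels,
                    alpha=0.6, k_prime=3):
    """Re-rank linked entities using the best matching detected relation."""
    reranked = {}
    for e, s_link in linker_scores.items():
        shared = [r for r in entity_relations.get(e, []) if r in top_l_rels]
        best_rel = max((rel_scores[r] for r in shared), default=0.0)
        reranked[e] = alpha * s_link + (1 - alpha) * best_rel
    return sorted(reranked.items(), key=lambda x: x[1], reverse=True)[:k_prime]

def best_query(reranked, rel_scores_per_entity, beta=0.5):
    """Pick the top <entity, relation> pair by interpolating both scores."""
    return max(((e, r, beta * s_e + (1 - beta) * s_r)
                for e, s_e in reranked
                for r, s_r in rel_scores_per_entity.get(e, {}).items()),
               key=lambda x: x[2])

# Toy example loosely following Figure 1(a); all numbers are made up.
linker = {"mike_kelley_tv_writer": 0.45, "mike_kelley_baseball": 0.55}
rels = {"episodes_written": 0.9, "profession": 0.7, "batting_average": 0.4}
e2r = {"mike_kelley_tv_writer": ["episodes_written", "profession"],
       "mike_kelley_baseball": ["batting_average"]}
top = rerank_entities(linker, rels, e2r, top_l_rels={"episodes_written", "profession"})
print(top)  # the TV writer now outranks the baseball player
per_entity = {e: {r: rels[r] for r in e2r[e]} for e, _ in top}
print(best_query(top, per_entity))
```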
The AMPCNN result is from BIBREF20 , which yielded state-of-the-art scores by outperforming several attention-based methods. We re-implemented the BiCNN model from BIBREF4 , where both questions and relations are represented with the word hash trick on character tri-grams. The baseline BiLSTM with relation word sequence appears to be the best baseline on WebQSP and is close to the previous best result of AMPCNN on SimpleQuestions. Our proposed HR-BiLSTM outperformed the best baselines on both tasks by margins of 2-3% (p $<$ 0.001 and 0.01 compared to the best baseline BiLSTM w/ words on SQ and WQ respectively).Note that using only relation names instead of words results in a weaker baseline BiLSTM model. The model yields a significant performance drop on SimpleQuestions (91.2% to 88.9%). However, the drop is much smaller on WebQSP, and it suggests that unseen relations have a much bigger impact on SimpleQuestions.The bottom of Table 2 shows ablation results of the proposed HR-BiLSTM. First, hierarchical matching between questions and both relation names and relation words yields improvement on both datasets, especially for SimpleQuestions (93.3% vs. 91.2/88.8%). Second, residual learning helps hierarchical matching compared to weighted-sum and attention-based baselines (see Section "Hierarchical Matching between Relation and Question" ). For the attention-based baseline, we tried the model from BIBREF24 and its one-way variations, where the one-way model gives better results. Note that residual learning significantly helps on WebQSP (80.65% to 82.53%), while it does not help as much on SimpleQuestions. On SimpleQuestions, even removing the deep layers only causes a small drop in performance. WebQSP benefits more from residual and deeper architecture, possibly because in this dataset it is more important to handle larger scope of context matching.Finally, on WebQSP, replacing BiLSTM with CNN in our hierarchical matching framework results in a large performance drop. Yet on SimpleQuestions the gap is much smaller. We believe this is because the LSTM relation encoder can better learn the composition of chains of relations in WebQSP, as it is better at dealing with longer dependencies.Next, we present empirical evidences, which show why our HR-BiLSTM model achieves the best scores. We use WebQSP for the analysis purposes. First, we have the hypothesis that training of the weighted-sum model usually falls to local optima, since deep BiLSTMs do not guarantee that the two-levels of question hidden representations are comparable. This is evidenced by that during training one layer usually gets a weight close to 0 thus is ignored. For example, one run gives us weights of -75.39/0.14 for the two layers (we take exponential for the final weighted sum). It also gives much lower training accuracy (91.94%) compared to HR-BiLSTM (95.67%), suffering from training difficulty.Second, compared to our deep BiLSTM with shortcut connections, we have the hypothesis that for KB relation detection, training deep BiLSTMs is more difficult without shortcut connections. Our experiments suggest that deeper BiLSTM does not always result in lower training accuracy. In the experiments a two-layer BiLSTM converges to 94.99%, even lower than the 95.25% achieved by a single-layer BiLSTM. 
Under our setting the two-layer model captures the single-layer model as a special case (so it could potentially better fit the training data), this result suggests that the deep BiLSTM without shortcut connections might suffers more from training difficulty.Finally, we hypothesize that HR-BiLSTM is more than combination of two BiLSTMs with residual connections, because it encourages the hierarchical architecture to learn different levels of abstraction. To verify this, we replace the deep BiLSTM question encoder with two single-layer BiLSTMs (both on words) with shortcut connections between their hidden states. This decreases test accuracy to 76.11%. It gives similar training accuracy compared to HR-BiLSTM, indicating a more serious over-fitting problem. This proves that the residual and deep structures both contribute to the good performance of HR-BiLSTM.Table 3 compares our system with two published baselines (1) STAGG BIBREF4 , the state-of-the-art on WebQSP and (2) AMPCNN BIBREF20 , the state-of-the-art on SimpleQuestions. Since these two baselines are specially designed/tuned for one particular dataset, they do not generalize well when applied to the other dataset. In order to highlight the effect of different relation detection models on the KBQA end-task, we also implemented another baseline that uses our KBQA system but replaces HR-BiLSTM with our implementation of AMPCNN (for SimpleQuestions) or the char-3-gram BiCNN (for WebQSP) relation detectors (second block in Table 3 ).Compared to the baseline relation detector (3rd row of results), our method, which includes an improved relation detector (HR-BiLSTM), improves the KBQA end task by 2-3% (4th row). Note that in contrast to previous KBQA systems, our system does not use joint-inference or feature-based re-ranking step, nevertheless it still achieves better or comparable results to the state-of-the-art.The third block of the table details two ablation tests for the proposed components in our KBQA systems: (1) Removing the entity re-ranking step significantly decreases the scores. Since the re-ranking step relies on the relation detection models, this shows that our HR-BiLSTM model contributes to the good performance in multiple ways. Appendix C gives the detailed performance of the re-ranking step. (2) In contrast to the conclusion in BIBREF4 , constraint detection is crucial for our system. This is probably because our joint performance on topic entity and core-chain detection is more accurate (77.5% top-1 accuracy), leaving a huge potential (77.5% vs. 58.0%) for the constraint detection module to improve.Finally, like STAGG, which uses multiple relation detectors (see yih2015semantic for the three models used), we also try to use the top-3 relation detectors from Section "Relation Detection Results" . As shown on the last row of Table 3 , this gives a significant performance boost, resulting in a new state-of-the-art result on SimpleQuestions and a result comparable to the state-of-the-art on WebQSP.KB relation detection is a key step in KBQA and is significantly different from general relation extraction tasks. We propose a novel KB relation detection model, HR-BiLSTM, that performs hierarchical matching between questions and KB relations. Our model outperforms the previous methods on KB relation detection tasks and allows our KBQA system to achieve state-of-the-arts. For future work, we will investigate the integration of our HR-BiLSTM into end-to-end systems. 
For example, our model could be integrated into the decoder of BIBREF31 to provide better sequence prediction. We will also investigate emerging datasets such as GraphQuestions BIBREF32 and ComplexQuestions BIBREF30 to handle more characteristics of general QA.
The application of deep learning methods to NLP is made possible by representing words as vectors in a low-dimensional continuous space. Traditionally, these word embeddings were static: each word had a single vector, regardless of context BIBREF0, BIBREF1. This posed several problems, most notably that all senses of a polysemous word had to share the same representation. More recent work, namely deep neural language models such as ELMo BIBREF2 and BERT BIBREF3, have successfully created contextualized word representations, word vectors that are sensitive to the context in which they appear. Replacing static embeddings with contextualized representations has yielded significant improvements on a diverse array of NLP tasks, ranging from question-answering to coreference resolution.The success of contextualized word representations suggests that despite being trained with only a language modelling task, they learn highly transferable and task-agnostic properties of language. In fact, linear probing models trained on frozen contextualized representations can predict linguistic properties of words (e.g., part-of-speech tags) almost as well as state-of-the-art models BIBREF4, BIBREF5. Still, these representations remain poorly understood. For one, just how contextual are these contextualized word representations? Are there infinitely many context-specific representations that BERT and ELMo can assign to each word, or are words essentially assigned one of a finite number of word-sense representations?We answer this question by studying the geometry of the representation space for each layer of ELMo, BERT, and GPT-2. Our analysis yields some surprising findings:In all layers of all three models, the contextualized word representations of all words are not isotropic: they are not uniformly distributed with respect to direction. Instead, they are anisotropic, occupying a narrow cone in the vector space. The anisotropy in GPT-2's last layer is so extreme that two random words will on average have almost perfect cosine similarity! Given that isotropy has both theoretical and empirical benefits for static embeddings BIBREF6, the extent of anisotropy in contextualized representations is surprising.Occurrences of the same word in different contexts have non-identical vector representations. Where vector similarity is defined as cosine similarity, these representations are more dissimilar to each other in upper layers. This suggests that, much like how upper layers of LSTMs produce more task-specific representations BIBREF4, upper layers of contextualizing models produce more context-specific representations.Context-specificity manifests very differently in ELMo, BERT, and GPT-2. In ELMo, representations of words in the same sentence grow more similar to each other as context-specificity increases in upper layers; in BERT, they become more dissimilar to each other in upper layers but are still more similar than randomly sampled words are on average; in GPT-2, however, words in the same sentence are no more similar to each other than two randomly chosen words.After adjusting for the effect of anisotropy, on average, less than 5% of the variance in a word's contextualized representations can be explained by their first principal component. This holds across all layers of all models. 
This suggests that contextualized representations do not correspond to a finite number of word-sense representations, and even in the best possible scenario, static embeddings would be a poor replacement for contextualized ones. Still, static embeddings created by taking the first principal component of a word's contextualized representations outperform GloVe and FastText embeddings on many word vector benchmarks.These insights help justify why the use of contextualized representations has led to such significant improvements on many NLP tasks.Skip-gram with negative sampling (SGNS) BIBREF0 and GloVe BIBREF1 are among the best known models for generating static word embeddings. Though they learn embeddings iteratively in practice, it has been proven that in theory, they both implicitly factorize a word-context matrix containing a co-occurrence statistic BIBREF7, BIBREF8. Because they create a single representation for each word, a notable problem with static word embeddings is that all senses of a polysemous word must share a single vector.Given the limitations of static word embeddings, recent work has tried to create context-sensitive word representations. ELMo BIBREF2, BERT BIBREF3, and GPT-2 BIBREF9 are deep neural language models that are fine-tuned to create models for a wide range of downstream NLP tasks. Their internal representations of words are called contextualized word representations because they are a function of the entire input sentence. The success of this approach suggests that these representations capture highly transferable and task-agnostic properties of language BIBREF4.ELMo creates contextualized representations of each token by concatenating the internal states of a 2-layer biLSTM trained on a bidirectional language modelling task BIBREF2. In contrast, BERT and GPT-2 are bi-directional and uni-directional transformer-based language models respectively. Each transformer layer of 12-layer BERT (base, cased) and 12-layer GPT-2 creates a contextualized representation of each token by attending to different parts of the input sentence BIBREF3, BIBREF9. BERT – and subsequent iterations on BERT BIBREF10, BIBREF11 – have achieved state-of-the-art performance on various downstream NLP tasks, ranging from question-answering to sentiment analysis.Prior analysis of contextualized word representations has largely been restricted to probing tasks BIBREF12, BIBREF5. This involves training linear models to predict syntactic (e.g., part-of-speech tag) and semantic (e.g., word relation) properties of words. Probing models are based on the premise that if a simple linear model can be trained to accurately predict a linguistic property, then the representations implicitly encode this information to begin with. While these analyses have found that contextualized representations encode semantic and syntactic information, they cannot answer how contextual these representations are, and to what extent they can be replaced with static word embeddings, if at all. Our work in this paper is thus markedly different from most dissections of contextualized representations. It is more similar to BIBREF13, which studied the geometry of static word embedding spaces.The contextualizing models we study in this paper are ELMo, BERT, and GPT-2. We choose the base cased version of BERT because it is most comparable to GPT-2 with respect to number of layers and dimensionality. The models we work with are all pre-trained on their respective language modelling tasks. 
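Per-layer contextualized representations can be obtained from a pre-trained model with the HuggingFace transformers library, as sketched below for BERT; treating the embedding output as layer 0 follows the setup described next, while the choice of sentence and the simple token lookup (no subword merging) are assumptions made for illustration.

```python
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-cased")
model = BertModel.from_pretrained("bert-base-cased", output_hidden_states=True)
model.eval()

sentence = "A dog is trying to get bacon off his back ."
inputs = tokenizer(sentence, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# hidden_states[0] is the (uncontextualized) embedding layer, treated as layer 0;
# hidden_states[1..12] are the transformer layers.
hidden_states = outputs.hidden_states
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
dog_idx = tokens.index("dog")
layer_reprs = [h[0, dog_idx] for h in hidden_states]   # one 768-d vector per layer
print(len(layer_reprs), layer_reprs[0].shape)          # 13 torch.Size([768])
```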
Although ELMo, BERT, and GPT-2 have 2, 12, and 12 hidden layers respectively, we also include the input layer of each contextualizing model as its 0th layer. This is because the 0th layer is not contextualized, making it a useful baseline against which to compare the contextualization done by subsequent layers. To analyze contextualized word representations, we need input sentences to feed into our pre-trained models. Our input data come from the SemEval Semantic Textual Similarity tasks from years 2012–2016 BIBREF14, BIBREF15, BIBREF16, BIBREF17. We use these datasets because they contain sentences in which the same words appear in different contexts. For example, the word `dog' appears in “A panda dog is running on the road.” and “A dog is trying to get bacon off his back.” If a model generated the same representation for `dog' in both these sentences, we could infer that there was no contextualization; conversely, if the two representations were different, we could infer that they were contextualized to some extent. Using these datasets, we map words to the list of sentences they appear in and their index within these sentences. We do not consider words that appear in fewer than 5 unique contexts in our analysis. We measure how contextual a word representation is using three different metrics: self-similarity, intra-sentence similarity, and maximum explainable variance. Let $w$ be a word that appears in sentences $\lbrace s_1, ..., s_n \rbrace $ at indices $\lbrace i_1, ..., i_n \rbrace $ respectively, such that $w = s_1[i_1] = ... = s_n[i_n]$. Let $f_{\ell }(s,i)$ be a function that maps $s[i]$ to its representation in layer $\ell $ of model $f$. The self-similarity of $w$ in layer $\ell $ is $\textit {SelfSim}_\ell (w) = \frac{1}{n^2 - n} \sum _{j} \sum _{k \ne j} \cos \left( f_{\ell }(s_j, i_j), f_{\ell }(s_k, i_k) \right)$, where $\cos $ denotes the cosine similarity. In other words, the self-similarity of a word $w$ in layer $\ell $ is the average cosine similarity between its contextualized representations across its $n$ unique contexts. If layer $\ell $ does not contextualize the representations at all, then $\textit {SelfSim}_\ell (w) = 1$ (i.e., the representations are identical across all contexts). The more contextualized the representations are for $w$, the lower we would expect its self-similarity to be. Let $s$ be a sentence that is a sequence $\left< w_1, ..., w_n \right>$ of $n$ words. Let $f_{\ell }(s,i)$ be a function that maps $s[i]$ to its representation in layer $\ell $ of model $f$. The intra-sentence similarity of $s$ in layer $\ell $ is $\textit {IntraSim}_\ell (s) = \frac{1}{n} \sum _{i} \cos \left( \vec{s}_\ell , f_{\ell }(s, i) \right)$, where $\vec{s}_\ell = \frac{1}{n} \sum _{i} f_{\ell }(s, i)$ is the sentence vector. Put more simply, the intra-sentence similarity of a sentence is the average cosine similarity between its word representations and the sentence vector, which is just the mean of those word vectors. This measure captures how context-specificity manifests in the vector space. For example, if both $\textit {IntraSim}_\ell (s)$ and $\textit {SelfSim}_\ell (w)$ are low $\forall \ w \in s$, then the model contextualizes words in that layer by giving each one a context-specific representation that is still distinct from all other word representations in the sentence. If $\textit {IntraSim}_\ell (s)$ is high but $\textit {SelfSim}_\ell (w)$ is low, this suggests a less nuanced contextualization, where words in a sentence are contextualized simply by making their representations converge in vector space. Let $w$ be a word that appears in sentences $\lbrace s_1, ..., s_n \rbrace $ at indices $\lbrace i_1, ..., i_n \rbrace $ respectively, such that $w = s_1[i_1] = ... = s_n[i_n]$.
Let $f_{\ell }(s,i)$ be a function that maps $s[i]$ to its representation in layer $\ell $ of model $f$. Where $[ f_{\ell }(s_1, i_1) ... f_{\ell }(s_n, i_n) ]$ is the occurrence matrix of $w$ and $\sigma _1 ... \sigma _m$ are the first $m$ singular values of this matrix, the maximum explainable variance is $\textit {MEV}_\ell (w) = \frac{\sigma _1^2}{\sum _{i} \sigma _i^2}$. $\textit {MEV}_\ell (w)$ is the proportion of variance in $w$'s contextualized representations for a given layer that can be explained by their first principal component. It gives us an upper bound on how well a static embedding could replace a word's contextualized representations. The closer $\textit {MEV}_\ell (w)$ is to 0, the poorer a replacement a static embedding would be; if $\textit {MEV}_\ell (w) = 1$, then a static embedding would be a perfect replacement for the contextualized representations. It is important to consider isotropy (or the lack thereof) when discussing contextuality. For example, if word vectors were perfectly isotropic (i.e., directionally uniform), then $\textit {SelfSim}_\ell (w) = 0.95$ would suggest that $w$'s representations were poorly contextualized. However, consider the scenario where word vectors are so anisotropic that any two words have on average a cosine similarity of 0.99. Then $\textit {SelfSim}_\ell (w) = 0.95$ would actually suggest the opposite – that $w$'s representations were well contextualized. This is because representations of $w$ in different contexts would on average be more dissimilar to each other than two randomly chosen words. To adjust for the effect of anisotropy, we use three anisotropic baselines, one for each of our contextuality measures. For self-similarity and intra-sentence similarity, the baseline is the average cosine similarity between the representations of uniformly randomly sampled words from different contexts. The more anisotropic the word representations are in a given layer, the closer this baseline is to 1. For maximum explainable variance (MEV), the baseline is the proportion of variance in uniformly randomly sampled word representations that is explained by their first principal component. The more anisotropic the representations in a given layer, the closer this baseline is to 1: even for a random assortment of words, the principal component would be able to explain a large proportion of the variance. Since contextuality measures are calculated for each layer of a contextualizing model, we calculate separate baselines for each layer as well. We then subtract from each measure its respective baseline to get the anisotropy-adjusted contextuality measure. For example, the anisotropy-adjusted self-similarity is $\textit {SelfSim}^{*}_\ell (w) = \textit {SelfSim}_\ell (w) - \textit {Baseline}(f_{\ell })$, where $\textit {Baseline}(f_{\ell }) = \mathbb {E}_{x, y \sim \mathcal {O}} \left[ \cos \left( f_{\ell }(x), f_{\ell }(y) \right) \right]$, $\mathcal {O}$ is the set of all word occurrences, and $f_{\ell }(\cdot )$ maps a word occurrence to its representation in layer $\ell $ of model $f$. Unless otherwise stated, references to contextuality measures in the rest of the paper refer to the anisotropy-adjusted measures, where both the raw measure and baseline are estimated with 1K uniformly randomly sampled word representations. If word representations from a particular layer were isotropic (i.e., directionally uniform), then the average cosine similarity between uniformly randomly sampled words would be 0 BIBREF18. The closer this average is to 1, the more anisotropic the representations. The geometric interpretation of anisotropy is that the word representations all occupy a narrow cone in the vector space rather than being uniform in all directions; the greater the anisotropy, the narrower this cone BIBREF13.
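To make the measures above concrete, here is a minimal numpy sketch of self-similarity, intra-sentence similarity, maximum explainable variance, and the anisotropy baseline. The function names, the 768-dimensional toy vectors, and the random stand-ins for contextualized representations are illustrative assumptions, not the authors' released code.

```python
import numpy as np

def cos(u, v):
    # Cosine similarity between two vectors.
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def self_similarity(occurrences):
    # occurrences: (n, d) matrix; row j is f_l(s_j, i_j), the layer-l vector of the
    # same word in its j-th context. Average pairwise cosine over distinct contexts.
    n = occurrences.shape[0]
    sims = [cos(occurrences[j], occurrences[k])
            for j in range(n) for k in range(n) if j != k]
    return float(np.mean(sims))

def intra_sentence_similarity(sentence_vectors):
    # sentence_vectors: (n, d) matrix of one sentence's layer-l word vectors.
    # Average cosine between each word vector and the sentence mean vector.
    s_vec = sentence_vectors.mean(axis=0)
    return float(np.mean([cos(s_vec, w) for w in sentence_vectors]))

def max_explainable_variance(occurrences):
    # Share of variance captured by the first singular direction of the
    # occurrence matrix: sigma_1^2 / sum_i sigma_i^2.
    sigma = np.linalg.svd(occurrences, compute_uv=False)
    return float(sigma[0] ** 2 / np.sum(sigma ** 2))

def anisotropy_baseline(random_occurrences):
    # Average pairwise cosine between layer-l vectors of uniformly sampled word
    # occurrences; subtracted from SelfSim / IntraSim to adjust for anisotropy.
    x = random_occurrences / np.linalg.norm(random_occurrences, axis=1, keepdims=True)
    gram = x @ x.T
    n = gram.shape[0]
    return float((gram.sum() - np.trace(gram)) / (n * (n - 1)))

# Toy usage with random stand-ins for contextualized vectors (d = 768 is illustrative).
rng = np.random.default_rng(0)
word_occurrences = rng.normal(size=(8, 768))    # one word in 8 contexts
random_sample = rng.normal(size=(1000, 768))    # 1K sampled occurrences for the baseline
adjusted_selfsim = self_similarity(word_occurrences) - anisotropy_baseline(random_sample)
print(adjusted_selfsim, max_explainable_variance(word_occurrences))
```

In practice, the random stand-ins would be replaced by per-occurrence vectors extracted from a given layer of ELMo, BERT, or GPT-2.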
As seen in Figure FIGREF20, this implies that in almost all layers of BERT, ELMo, and GPT-2, the representations of all words occupy a narrow cone in the vector space. The only exception is ELMo's input layer, which produces static character-level embeddings without using contextual or even positional information BIBREF2. It should be noted that not all static embeddings are necessarily isotropic, however; BIBREF13 found that skipgram embeddings, which are also static, are not isotropic. As seen in Figure FIGREF20, for GPT-2, the average cosine similarity between uniformly randomly sampled words is roughly 0.6 in layers 2 through 8 but increases exponentially from layers 8 through 12. In fact, word representations in GPT-2's last layer are so anisotropic that any two words have on average an almost perfect cosine similarity! This pattern holds for BERT and ELMo as well, though there are exceptions: for example, the anisotropy in BERT's penultimate layer is much higher than in its final layer. Isotropy has both theoretical and empirical benefits for static word embeddings. In theory, it allows for stronger “self-normalization” during training BIBREF18, and in practice, subtracting the mean vector from static embeddings leads to improvements on several downstream NLP tasks BIBREF6. Thus the extreme degree of anisotropy seen in contextualized word representations – particularly in higher layers – is surprising. As seen in Figure FIGREF20, for all three models, the contextualized hidden layer representations are almost all more anisotropic than the input layer representations, which do not incorporate context. This suggests that high anisotropy is inherent to, or at least a by-product of, the process of contextualization. Recall from Definition 1 that the self-similarity of a word, in a given layer of a given model, is the average cosine similarity between its representations in different contexts, adjusted for anisotropy. If the self-similarity is 1, then the representations are not context-specific at all; if the self-similarity is 0, then the representations are maximally context-specific. In Figure FIGREF24, we plot the average self-similarity of uniformly randomly sampled words in each layer of BERT, ELMo, and GPT-2. For example, the self-similarity is 1.0 in ELMo's input layer because representations in that layer are static character-level embeddings. In all three models, the higher the layer, the lower the self-similarity is on average. In other words, the higher the layer, the more context-specific the contextualized representations. This finding makes intuitive sense. In image classification models, lower layers recognize more generic features such as edges while upper layers recognize more class-specific features BIBREF19. Similarly, upper layers of LSTMs trained on NLP tasks learn more task-specific representations BIBREF4. Therefore, it follows that upper layers of neural language models learn more context-specific representations, so as to predict the next word for a given context more accurately. Of all three models, representations in GPT-2 are the most context-specific, with those in GPT-2's last layer being almost maximally context-specific. Across all layers, stopwords have among the lowest self-similarity of all words, implying that their contextualized representations are among the most context-specific. For example, the words with the lowest average self-similarity across ELMo's layers are `and', `of', `'s', `the', and `to'.
This is relatively surprising, given that these words are not polysemous. This finding suggests that the variety of contexts a word appears in, rather than its inherent polysemy, is what drives variation in its contextualized representations. This answers one of the questions we posed in the introduction: ELMo, BERT, and GPT-2 are not simply assigning one of a finite number of word-sense representations to each word; otherwise, there would not be so much variation in the representations of words with so few word senses. As noted earlier, contextualized representations are more context-specific in upper layers of ELMo, BERT, and GPT-2. However, how does this increased context-specificity manifest in the vector space? Do word representations in the same sentence converge to a single point, or do they remain distinct from one another while still being distinct from their representations in other contexts? To answer this question, we can measure a sentence's intra-sentence similarity. Recall from Definition 2 that the intra-sentence similarity of a sentence, in a given layer of a given model, is the average cosine similarity between each of its word representations and their mean, adjusted for anisotropy. In Figure FIGREF25, we plot the average intra-sentence similarity of 500 uniformly randomly sampled sentences. In ELMo, as word representations in a sentence become more context-specific in upper layers, the intra-sentence similarity also rises. This suggests that, in practice, ELMo ends up extending the intuition behind Firth's BIBREF20 distributional hypothesis to the sentence level: that because words in the same sentence share the same context, their contextualized representations should also be similar. In BERT, as word representations in a sentence become more context-specific in upper layers, they drift away from one another, although there are exceptions (see layer 12 in Figure FIGREF25). However, in all layers, the average similarity between words in the same sentence is still greater than the average similarity between randomly chosen words (i.e., the anisotropy baseline). This suggests a more nuanced contextualization than in ELMo, with BERT recognizing that although the surrounding sentence informs a word's meaning, two words in the same sentence do not necessarily have a similar meaning because they share the same context. In GPT-2, on average, the unadjusted intra-sentence similarity is roughly the same as the anisotropic baseline, so as seen in Figure FIGREF25, the anisotropy-adjusted intra-sentence similarity is close to 0 in most layers. In fact, the intra-sentence similarity is highest in the input layer, which does not contextualize words at all. This is in contrast to ELMo and BERT, where the average intra-sentence similarity is above 0.20 for all but one layer. As noted earlier when discussing BERT, this behavior still makes intuitive sense: two words in the same sentence do not necessarily have a similar meaning simply because they share the same context. The success of GPT-2 suggests that unlike anisotropy, which accompanies context-specificity in all three models, a high intra-sentence similarity is not inherent to contextualization. Words in the same sentence can have highly contextualized representations without those representations being any more similar to each other than two random word representations.
It is unclear, however, whether these differences in intra-sentence similarity can be traced back to differences in model architecture; we leave this question as future work. Recall from Definition 3 that the maximum explainable variance (MEV) of a word, for a given layer of a given model, is the proportion of variance in its contextualized representations that can be explained by their first principal component. This gives us an upper bound on how well a static embedding could replace a word's contextualized representations. Because contextualized representations are anisotropic (see section SECREF21), much of the variation across all words can be explained by a single vector. We adjust for anisotropy by calculating the proportion of variance explained by the first principal component of uniformly randomly sampled word representations and subtracting this proportion from the raw MEV. In Figure FIGREF29, we plot the average anisotropy-adjusted MEV across uniformly randomly sampled words. In no layer of ELMo, BERT, or GPT-2 can more than 5% of the variance in a word's contextualized representations be explained by a static embedding, on average. Though not visible in Figure FIGREF29, the raw MEV of many words is actually below the anisotropy baseline: i.e., a greater proportion of the variance across all words can be explained by a single vector than can the variance across all representations of a single word. Note that the 5% threshold represents the best-case scenario, and there is no theoretical guarantee that a word vector obtained using GloVe, for example, would be similar to the static embedding that maximizes MEV. This suggests that contextualizing models are not simply assigning one of a finite number of word-sense representations to each word – otherwise, the proportion of variance explained would be much higher. Even the average raw MEV is below 5% for all layers of ELMo and BERT; only for GPT-2 is the raw MEV non-negligible, being around 30% on average for layers 2 to 11 due to extremely high anisotropy. As noted earlier, we can create static embeddings for each word by taking the first principal component (PC) of its contextualized representations in a given layer. In Table TABREF34, we report the performance of these PC static embeddings on several benchmark tasks. These tasks cover semantic similarity, analogy solving, and concept categorization: SimLex999 BIBREF21, MEN BIBREF22, WS353 BIBREF23, RW BIBREF24, SemEval-2012 BIBREF25, Google analogy solving BIBREF0, MSR analogy solving BIBREF26, BLESS BIBREF27, and AP BIBREF28. We leave out layers 3–10 in Table TABREF34 because their performance is between those of layers 2 and 11. The best-performing PC static embeddings belong to the first layer of BERT, although those from the other layers of BERT and ELMo also outperform GloVe and FastText on most benchmarks. For all three contextualizing models, PC static embeddings created from lower layers are more effective than those created from upper layers. Those created using GPT-2 also perform markedly worse than their counterparts from ELMo and BERT. Given that upper layers are much more context-specific than lower layers, and given that GPT-2's representations are more context-specific than ELMo and BERT's (see Figure FIGREF24), this suggests that the PCs of highly context-specific representations are less effective on traditional benchmarks.
Those derived from less context-specific representations, such as those from layer 1 of BERT, are much more effective. Our findings offer some new directions for future work. For one, as noted earlier in the paper, BIBREF6 found that making static embeddings more isotropic – by subtracting their mean from each embedding – leads to surprisingly large improvements in performance on downstream tasks. Given that isotropy has benefits for static embeddings, it may also have benefits for contextualized word representations, although the latter have already yielded significant improvements despite being highly anisotropic. Therefore, adding an anisotropy penalty to the language modelling objective – to encourage the contextualized representations to be more isotropic – may yield even better results. Another direction for future work is generating static word representations from contextualized ones. While the latter offer superior performance, there are often challenges to deploying large models such as BERT in production, both with respect to memory and run-time. In contrast, static representations are much easier to deploy. Our work in section 4.3 suggests that not only is it possible to extract static representations from contextualizing models, but that these extracted vectors often perform much better on a diverse array of tasks compared to traditional static embeddings such as GloVe and FastText. This may be a means of extracting some use from contextualizing models without incurring the full cost of using them in production. In this paper, we investigated how contextual contextualized word representations truly are. For one, we found that upper layers of ELMo, BERT, and GPT-2 produce more context-specific representations than lower layers. This increased context-specificity is always accompanied by increased anisotropy. However, context-specificity also manifests differently across the three models; the anisotropy-adjusted similarity between words in the same sentence is highest in ELMo but almost non-existent in GPT-2. We ultimately found that after adjusting for anisotropy, on average, less than 5% of the variance in a word's contextualized representations could be explained by a static embedding. This means that even in the best-case scenario, in all layers of all models, static word embeddings would be a poor replacement for contextualized ones. These insights help explain some of the remarkable success that contextualized representations have had on a diverse array of NLP tasks. We thank the anonymous reviewers for their insightful comments. We thank the Natural Sciences and Engineering Research Council of Canada (NSERC) for their financial support.
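As a companion to the discussion of PC static embeddings, the sketch below shows one plausible way to derive a static vector per word: stack a word's layer-specific contextualized representations and keep the first principal direction. It assumes the per-occurrence vectors have already been collected (e.g., from the STS sentences); the dictionary name, the uncentered SVD convention, and the sign-flip heuristic are illustrative choices rather than details specified in the text.

```python
import numpy as np

def pc_static_embedding(occurrences):
    # occurrences: (n, d) matrix of one word's layer-l contextualized vectors.
    # The first right singular vector is the first principal direction (uncentered,
    # matching the MEV definition above); its sign is arbitrary, so flip it to
    # agree with the mean occurrence vector for consistency across words.
    _, _, vt = np.linalg.svd(occurrences, full_matrices=False)
    pc = vt[0]
    if pc @ occurrences.mean(axis=0) < 0:
        pc = -pc
    return pc

# occurrences_by_word is a hypothetical dict: word -> list of layer-l vectors
# gathered by running a contextualizing model over the input sentences.
occurrences_by_word = {
    "dog": [np.random.default_rng(1).normal(size=768) for _ in range(6)],
    "run": [np.random.default_rng(2).normal(size=768) for _ in range(6)],
}
static_vectors = {w: pc_static_embedding(np.stack(vs))
                  for w, vs in occurrences_by_word.items()}
print({w: v.shape for w, v in static_vectors.items()})
```

A vocabulary of such vectors could then be evaluated on the word-vector benchmarks listed above in place of GloVe or FastText.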
|
[
"What experiments are proposed to test that upper layers produce context-specific embeddings?",
"How do they calculate a static embedding for each word?"
] |
[
[
"They measure self-similarity, intra-sentence similarity and maximum explainable variance of the embeddings in the upper layers.",
"They plot the average cosine similarity between uniformly random words increases exponentially from layers 8 through 12. \nThey plot the average self-similarity of uniformly randomly sampled words in each layer of BERT, ELMo, and GPT-2 and shown that the higher layer produces more context-specific embeddings.\nThey plot that word representations in a sentence become more context-specific in upper layers, they drift away from one another."
],
[
"They use the first principal component of a word's contextualized representation in a given layer as its static embedding.",
""
]
] |
"During the first two decades of the 21st century, the sharing and processing of vast amounts of dat(...TRUNCATED)
| ["What is the performance of BERT on the task?","What are the other algorithms tested?","Does BERT r(...TRUNCATED)
| [["F1 scores are:\nHUBES-PHI: Detection(0.965), Classification relaxed (0.95), Classification strict(...TRUNCATED)
|
"Accurate grapheme-to-phoneme conversion (g2p) is important for any application that depends on the (...TRUNCATED)
| ["how is model compactness measured?","what was the baseline?","what evaluation metrics were used?",(...TRUNCATED)
|
[
[
"Using file size on disk",
""
],
[
"",
""
],
[
"",
""
],
[
"",
""
]
] |