text (string, lengths 82–736) | label (int64: 0 or 1) |
|---|---|
in this paper we tackle the challenging task of abstractive document summarization , which is still less investigated to date---in this paper , we review the difficulties of neural abstractive document summarization | 1 |
this partly supports the findings of wallace that verbal irony can not be recognised through lexical clues alone---this supports the findings of wallace that lexical features alone are not effective at identifying irony | 1 |
we believe that extracting dimensions of interpersonal relationships complements previous efforts that extract relationships---in this paper , we target dimensions of interpersonal relationships that characterize the nature of relationships | 1 |
using an ensemble method , the key information extracted from word pairs with dependency relations in the translated text is effectively integrated into the parser for the target language---as the expected , a dependency parser for the target language can effectively make use of them by only considering the most related information extracted from the translated text | 1 |
we used the srilm toolkit to train a 4-gram language model on the english side of the training corpus---we train a 4-gram language model on the xinhua portion of the english gigaword corpus using the srilm toolkits with modified kneser-ney smoothing | 1 |
to compare with the dataset used by he et al , we provide the collapsed statistics as well---we replicate the test conditions used by he et al as closely as possible in this comparison | 1 |
coreference resolution is a set partitioning problem in which each resulting partition refers to an entity---coreference resolution is the problem of identifying which mentions ( i.e. , noun phrases ) refer to which real-world entities | 1 |
the topic assignment for each word is irrelevant to all other words---topic assignment of each word is not independent , but rather affected by the topic | 1 |
jeon et al demonstrates that similar answers are a good indicator of similar questions---jeon et al also discussed methods for grouping similar questions based on using the similarity between answers in the archive | 1 |
we use online learning to train model parameters , updating the parameters using the adagrad algorithm---we used 300-dimensional pre-trained glove word embeddings | 0 |
we initialize our word vectors with 300-dimensional word2vec word embeddings---on all datasets and models , we use 300-dimensional word vectors pre-trained on google news | 1 |
we show that the use of source language resources , and in particular the extension to non-symmetric textual entailment relationships , is useful for substantially increasing the amount of texts that are properly translated---from wordnet , we show that using monolingual resources and textual entailment relationships allows substantially increasing the quality of translations | 1 |
all features were log-linearly combined and their weights were optimized by performing minimum error rate training---the model weights were trained using the minimum error rate training algorithm | 1 |
we showed improvements in translation quality incorporating these models within a phrase-based smt sytem---in a phrase-based smt system , we show significant improvements in translation quality | 1 |
in both settings , we show that including context significantly improves results against a context-free version of the model---as expected , this analysis suggests that including context in the model helps more | 1 |
the model is novel in its choice of tasks and the cross-task interaction features---that is novel in terms of the choice of tasks and the features used to capture cross-task interactions | 1 |
twitter is a communication platform which combines sms , instant messages and social networks---twitter is a microblogging social network launched in 2006 with 310 million active users per month and where 340 million tweets are daily generated 1 | 1 |
for the mix one , we also train word embeddings of dimension 50 using glove---we represent terms using pre-trained glove wikipedia 6b word embeddings | 1 |
named entity recognition ( ner ) is the task of finding rigid designators as they appear in free text and classifying them into coarse categories such as person or location ( cite-p-24-4-6 )---named entity recognition ( ner ) is a challenging learning problem | 1 |
in particular , we derive features using discourse relations between argument components and windows of their surrounding sentences---we propose to use features extracted from discourse relations between sentences for argumentative relation mining | 1 |
deep convolutional neural networks s are recently extensively used in many computer vision and nlp tasks---convolutional networks have proven to be very efficient in solving various computer vision tasks | 1 |
we propose to explicitly model the consistency of sentiment between the source and target side with a lexicon-based approach---we propose a lexicon-based approach that examines the consistency of bilingual subjectivity , sentiment | 1 |
sentiment analysis is the computational analysis of people ’ s feelings or beliefs expressed in texts such as emotions , opinions , attitudes , appraisals , etc . ( cite-p-11-3-3 )---sentiment analysis is a growing research field , especially on web social networks | 1 |
modern smt systems learn translation models based on large amounts of parallel data---we use pre-trained glove vector for initialization of word embeddings | 0 |
this part of the coreference resolution system is frequently called clustering strategy---this strategy has been successful and commonly used in coreference resolution | 1 |
fortunately , nivre et al propose a constrained decoding procedure for the arc-eager parsing system---djuric et al propose an approach that learns low-dimensional , distributed representations of user comments in order to detect expressions of hate speech | 0 |
relation classification is a crucial ingredient in numerous information extraction systems seeking to mine structured facts from text---relation classification is the task of assigning sentences with two marked entities to a predefined set of relations | 1 |
we chose a supervised machine learning approach in order to achieve maximum precision---we present a machine learning approach to correcting these errors , based largely on character-level | 1 |
firstly , at word-level alignment , luong et al extend the skip-gram model to learn efficient bilingual word embeddings---the skip-gram model proposed by mikolov et al has been adapted to the bilingual setting in luong et al , where the model learns to predict word contexts cross-lingually | 1 |
we use the glove vectors of 300 dimension to represent the input words---the model parameters in word embedding are pretrained using glove | 1 |
to get the the sub-fields of the community , we use latent dirichlet allocation to find topics and label them by hand---in order to model topics of news article bodies , we apply standard latent dirichlet allocation | 1 |
we use the cbow model for the bilingual word embedding learning---we use the word2vec tool with the skip-gram learning scheme | 1 |
the anaphor is a pronoun and the referent is in operating memory ( not in focus )---if the anaphor is a definite noun phrase and the referent is in focus ( i.e . in the cache ) , anaphora resolution will be hindered | 1 |
the bleu metric has been widely accepted as an effective means to automatically evaluate the quality of machine translation outputs---neural networks have recently gained much attention as a way of inducing word vectors | 0 |
coreference resolution is the task of identifying all mentions which refer to the same entity in a document---since coreference resolution is a pervasive discourse phenomenon causing performance impediments in current ie systems , we considered a corpus of aligned english and romanian texts to identify coreferring expressions | 1 |
after this we parse articles using the stanford parser---for parsing , we use the stanford parser | 1 |
items were identified using the unified medical language system vocabularies for dictionary look-up---vocabulary lists were drawn from the french component of the unified medical language system and the vi-dal drug database | 1 |
we experiment on two datasets , the msr paraphrasing corpus and a dataset that we automatically created from the mtc corpus---our experiments were conducted on two datasets : the publicly available microsoft research paraphrasing corpus ( cite-p-15-3-2 ) and a dataset that we constructed from the mtc corpus | 1 |
word2vec defines an efficient way to work with continuous bag-of-word and skip-gram architectures computing vector representations from very large data sets---word2vec has been proposed for building word representations in vector space , which consists of two models , including continuous bag of word and skipgram | 1 |
we identify named entities , parse sentences , and convert constituency trees into dependency structures using the stanford tools---to start , we use stanford corenlp toolkit to extract dependency trees and resolve co-referent entities from the corpus | 1 |
the dataset and parser can be found at http : //www---dataset and parser can be found at http : / / www | 1 |
furthermore , we train a 5-gram language model using the sri language toolkit---we build an open-vocabulary language model with kneser-ney smoothing using the srilm toolkit | 1 |
unsupervised word embeddings trained from large amounts of unlabeled data have been shown to improve many nlp tasks---as an illustration , consider the task of matching a concept with a project | 0 |
decoding algorithm is a crucial part in statistical machine translation---algorithm is a crucial part in statistical machine translation | 1 |
srilm toolkit was used to create up to 5-gram language models using the mentioned resources---the sri language modeling toolkit was used to build 4-gram word-and character-based language models | 1 |
for all experiments , we used a 4-gram language model with modified kneser-ney smoothing which was trained with the srilm toolkit---for the tree-based system , we applied a 4-gram language model with kneserney smoothing using srilm toolkit trained on the whole monolingual corpus | 1 |
in particular , we used the english and spanish sides of the europarl parallel corpus---in our experiments , we use the english-french part of the europarl corpus | 1 |
for both languages , we used the srilm toolkit to train a 5-gram language model using all monolingual data provided---we trained a 4-gram language model with kneser-ney smoothing and unigram caching using the sri-lm toolkit | 1 |
in our experiments , we choose to use the published glove pre-trained word embeddings---we use pre-trained vectors from glove for word-level embeddings | 1 |
finally , we combine all the above features using a support vector regression model which is implemented in scikit-learn---for training the model , we use the linear kernel svm implemented in the scikit-learn toolkit | 1 |
dredze et al showed that many of the parsing errors in domain adaptation tasks may come from inconsistencies between the annotations of training resources---dredze et al found that problems in domain adaptation are compounded by differences in the annotation schemes between the treebanks | 1 |
for the classifiers we use the scikit-learn machine learning toolkit---for data preparation and processing we use scikit-learn | 1 |
coreference resolution is the next step on the way towards discourse understanding---we use the stanford sentiment treebank in our experiments | 0 |
semantic role labeling ( srl ) consists of finding the arguments of a predicate and labeling them with semantic roles ( cite-p-9-1-5 , cite-p-9-3-0 )---semantic role labeling ( srl ) is the task of labeling the predicate-argument structures of sentences with semantic frames and their roles ( cite-p-18-1-2 , cite-p-18-1-19 ) | 1 |
the penn discourse treebank is the largest available discourse-annotated corpus in english---the penn discourse treebank corpus is the best-known resource for obtaining english connectives | 1 |
however , due to the limited availability of user-specific opinionated data , it is impractical to estimate independent models for each user---at the level of individual users ; however it is impractical to estimate independent sentiment classification models for each user with limited data | 1 |
ji and grishman employed a rulebased approach to propagate consistent triggers and arguments across topic-related documents---ji and grishman employ an approach to propagate consistent event arguments across sentences and documents | 1 |
we evaluated the translation quality of the system using the bleu metric---in this paper , we propose an approach for learning the semantic meaning of manipulation action | 0 |
we built a hierarchical phrase-based mt system based on weighted scfg---the combination of even an efficient parser with such intricate grammars may greatly increase the computational complexity of the system | 0 |
coreference resolution is the next step on the way towards discourse understanding---coreference resolution is the task of grouping all the mentions of entities 1 in a document into equivalence classes so that all the mentions in a given class refer to the same discourse entity | 1 |
we call ‘but’ and ‘therefore’ explicit discourse connectives ( dcs )---however , there are explicit ‘ causal ’ and ‘ continuous ’ relations | 1 |
word sense disambiguation ( wsd ) is a key task in computational lexical semantics , inasmuch as it addresses the lexical ambiguity of text by making explicit the meaning of words occurring in a given context ( cite-p-18-3-10 )---word sense disambiguation ( wsd ) is the task of assigning sense tags to ambiguous lexical items ( lis ) in a text | 1 |
we use the moses software package 5 to train a pbmt model---uos uses dependency-parsed features from the corpus , which are then clustered into senses using the maxmax algorithm | 0 |
coreference resolution is a key task in natural language processing ( cite-p-13-1-8 ) aiming to detect the referential expressions ( mentions ) in a text that point to the same entity---although coreference resolution is a subproblem of natural language understanding , coreference resolution evaluation metrics have predominately been discussed in terms of abstract entities and hypothetical system errors | 1 |
the baseline system is a pbsmt engine built using moses with the default configuration---the promt smt system is based on the moses open-source toolkit | 1 |
relation extraction is the task of detecting and characterizing semantic relations between entities from free text---relation extraction is a challenging task in natural language processing | 1 |
this work uses either grapheme or phoneme based models to transliterate words lists---clark and curran describe a log-linear glm for ccg parsing , trained on the penn treebank | 0 |
moreover , for regularization , we place dropout after each lstm layer as suggested in---for all methods , we applied dropout to the input of the lstm layers | 1 |
we describe an approximation to the bleu score that will satisfy these conditions---we compute the interannotator agreement in terms of the bleu score | 1 |
to measure translation accuracy , we use the automatic evaluation measures of bleu and ribes measured over all sentences in the test corpus---to verify sentence generation quantitatively , we evaluated the sentences automatically using bleu score | 1 |
named entity recognition ( ner ) is the task of finding rigid designators as they appear in free text and classifying them into coarse categories such as person or location ( cite-p-24-4-6 )---named entity recognition ( ner ) is the task of identifying and classifying phrases that denote certain types of named entities ( nes ) , such as persons , organizations and locations in news articles , and genes , proteins and chemicals in biomedical literature | 1 |
a typical discussion thread in a web forum consists of a number of individual posts or messages posted by different participating users---a typical discussion thread in an online forum spans multiple pages involving participation from multiple users | 1 |
the tweets were tokenized and part-ofspeech tagged with the cmu ark twitter nlp tool and stanford corenlp---in this paper , we investigate the automatic generation of tables-of-contents , a type of indicative summary | 0 |
arabic is a highly inflectional language with 85 % of words derived from trilateral roots ( alfedaghi and al-anzi 1989 )---first , arabic is a morphologically rich language ( cite-p-19-3-7 ) | 1 |
we use conditional random fields , a popular approach to solve sequence labeling problems---as discussed in the introduction , we use conditional random fields , since they are particularly suitable for sequence labelling | 1 |
a knsmoothed 5-gram language model is trained on the target side of the parallel data with srilm---gram language models are trained over the target-side of the training data , using srilm with modified kneser-ney discounting | 1 |
we employ factorial conditional random field ( fcrf ) to solve both cws and iwr jointly---we employ a factorial conditional random field to perform both tasks of cws and iwr jointly | 1 |
coreference resolution is the task of determining which mentions in a text refer to the same entity---coreference resolution is the problem of identifying which mentions ( i.e. , noun phrases ) refer to which real-world entities | 1 |
to avoid this problem , tromble et al propose linear bleu , an approximation to the bleu score to efficiently perform mbr decoding on the lattices provided by the component systems---to avoid this problem , tromble et al propose linear bleu , an approximation to the bleu score to efficiently perform mbr decoding when the search space is represented with lattices | 1 |
word2vec is an appropriate tool for this problem---multiword expressions are combinations of words which are lexically , syntactically , semantically or statistically idiosyncratic | 0 |
we introduced a framework for the focused reading of biomedical literature , which is necessary to handle the data overload that plagues even machine reading approaches---in this work , we introduce a focused reading approach to guide the machine reading of biomedical literature towards what literature should be read to answer a biomedical query | 1 |
throughout this paper we will use split bilexical grammars , or sbgs , a notationally simpler variant of split head-automaton grammars , or shags---in this work we use split head-automata grammars , a context-free grammatical formalism whose derivations are projective dependency trees | 1 |
knowledge graphs such as freebase , yago and wordnet are among the most widely used resources in nlp applications---knowledge bases like freebase , dbpedia , and nell are extremely useful resources for many nlp tasks | 1 |
it is a global log-linear regression model that makes use of a global factorization model and local context window methods to represent words in a global vector space model---global vectors for word representation is a global log-bilinear regression model which captures both global and local word co-occurrence statistics | 1 |
the present paper is a contribution towards this goal : it presents the results of a large-scale evaluation of window-based dsms on a wide variety of semantic tasks---we trained a standard 5-gram language model with modified kneser-ney smoothing using the kenlm toolkit on 4 billion running words | 0 |
chinese word segmentation ( cws ) is a preliminary and important task for chinese natural language processing ( nlp )---chinese word segmentation ( cws ) is a critical and a necessary initial procedure with respect to the majority of high-level chinese language processing tasks such as syntax parsing , information extraction and machine translation , since chinese scripts are written in continuous characters without explicit word boundaries | 1 |
although gru does not suffer from the vanishing gradient problem , it can still suffer from exploding gradient---however , training simple rnns is difficult because of the vanishing and exploding gradient problems | 1 |
in our experiment , word embeddings were 200-dimensional as used in , trained on gigaword with word2vec---transition-based methods have become a popular approach in multilingual dependency parsing because of their speed and performance | 0 |
we perform minimum-error-rate training to tune the feature weights of the translation model to maximize the bleu score on development set---we implement the weight tuning component according to the minimum error rate training method | 1 |
for our implementation we use 300-dimensional part-of-speech-specific word embeddings v i generated using the gensim word2vec package---we initialize our model with 300-dimensional word2vec toolkit vectors generated by a continuous skip-gram model trained on around 100 billion words from the google news corpus | 1 |
abeillé and abeillé and schabes identified the linguistic and computational attractiveness of lexicalized grammars for modeling non-compositional constructions in french well before dop---semantic parsing is the problem of translating human language into computer language , and therefore is at the heart of natural language understanding | 0 |
using espac medlineplus , we trained an initial phrase-based moses system---sarcasm is a sophisticated form of communication in which speakers convey their message in an indirect way | 0 |
in particular , li et al and socher et al proposed a simple kbc model for ckb---li et al proposed an on-the-fly ckb completion model to improve the coverage of ckbs | 1 |
one such classifier is trained for each of our three overlapping feature subspaces---we learn a classifier for each of the three feature subspaces | 1 |
we use word2vec as the vector representation of the words in tweets---the word vectors used in all approaches are taken from the word2vec google news model | 1 |
for word embeddings , we trained a skip-gram model over wikipedia , using word2vec---for the cluster- based method , we use word2vec 2 which provides the word vectors trained on the google news corpus | 1 |
further , we apply a 4-gram language model trained with the srilm toolkit on the target side of the training corpus---we use srilm toolkit to build a 5-gram language model with modified kneser-ney smoothing | 1 |
in this section , we provide some pertinent background information about nell that influenced the design of conceptresolver 1---in this section , we provide some pertinent background information about nell that influenced the design of conceptresolver | 1 |
we use 300-dimensional word embeddings from glove to initialize the model---for all models , we use the 300-dimensional glove word embeddings | 1 |
we use the mallet implementation of a maximum entropy classifier to construct our models---bastings et al used neural monkey to develop a new convolutional architecture for encoding the input sentences using dependency trees | 0 |
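The header above describes each row as a single `text` string (two sentences joined by `---`) plus an integer `label` (1 when the pair expresses the same claim, 0 otherwise). A minimal sketch of parsing one row, assuming that format; `parse_row` is a hypothetical helper, not part of any published loader:

```python
def parse_row(text: str, label: int) -> dict:
    """Split a raw 'text' field on the first '---' into its two sentences.

    Assumes the dataset's convention of 'sentence1---sentence2' and a
    binary label; this is an illustrative sketch, not an official loader.
    """
    first, _, second = text.partition("---")
    return {
        "sentence1": first.strip(),
        "sentence2": second.strip(),
        "label": label,
    }


row = parse_row(
    "after this we parse articles using the stanford parser"
    "---for parsing , we use the stanford parser",
    1,
)
print(row["sentence1"])  # -> "after this we parse articles using the stanford parser"
```

Using `str.partition` rather than `str.split` keeps any later `---` occurrences inside the second sentence intact.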