text: string (length 82 to 736)
label: int64 (0 or 1)
to counter neural generation ’ s tendency for shorter hypotheses , we also introduce a bounded length reward mechanism which allows a modified version of our beam search algorithm to remain optimal---for shorter hypotheses , we introduced a bounded length reward mechanism which allows a modified version of our beam search algorithm to remain optimal
1
we specify a non-stochastic version of the formalism , noting that probabilities may be attached to the rewrite rules exactly as in stochastic cfg---we will specify a nonstochastic version , noting that probabilities or other weights may be attached to the rewrite rules exactly as in stochastic cfg
1
marcu and echihabi demonstrated that word pairs extracted from the respective text spans are a good signal of the discourse relation between arguments---marcu and echihabi presented an unsupervised method to recognize discourse relations held between arbitrary spans of text
1
the seminal paper by started a sequence of studies for english---the seminal paper by hindle and rooth started a sequence of studies for english
1
the language model was constructed using the srilm toolkit with interpolated kneser-ney discounting---we used srilm to build a 4-gram language model with kneser-ney discounting
1
as shown in table 3 , our approach resolves non-pronominal anaphors with the recall of 51.3 ( 39.7 ) and the precision of 90.4 ( 87.6 ) for muc-6 ( muc-7 )---as shown in table 3 , our approach resolves non-pronominal anaphors with the recall of 51 . 3 ( 39 . 7 ) and the precision of 90 . 4 ( 87 . 6 )
1
we used the srilm toolkit to train a 4-gram language model on the english side of the training corpus---we focus on training classifiers with weakly and strongly labeled data , as well as semi-supervised learning
0
we used l2-regularized logistic regression classifier as implemented in liblinear---we use the logistic regression implementation of liblinear wrapped by the scikit-learn library
1
one of the very few available discourse annotated corpora is the penn discourse treebank in english---the penn discourse tree bank is the largest resource to date that provides a discourse annotated corpus in english
1
klein and manning presented an unlexicalized pcfg parser that eliminated all the lexicalized parameters---ilp approaches to nlp reveals that they are of a special kind , namely zero-one ilp with unweighted constraints
0
e-commerce sites may have tens of millions of such browse pages in many different languages---websites are automatically generating millions of easily searchable browse pages
1
we used the srilm toolkit to train a 4-gram language model on the english side of the training corpus---we trained a 5-gram language model on the english side of each training corpus using the sri language modeling toolkit
1
in the srilm toolkit , n-gram counts are accessed through a special class type---as done in the srilm toolkit , a back-off m-gram lm is stored using a reverse trie data structure
1
these nlp tools have the potential to make a marked difference for gun violence researchers---taking syntactic role of each word with its narrow semantic meaning into account , can be highly relevant
0
we introduce a novel training algorithm for unsupervised grammar induction , called zoomed learning---we introduce the zoomed learning ( zl ) technique for unsupervised parser training
1
our dnn word alignment model extends classic hmm word alignment model---the alignment aspect of our model is similar to the hmm model for word alignment
1
thus , we propose a new approach based on the expectation-maximization algorithm---in this work , we use the expectation-maximization algorithm
1
by using a non-admissible heuristics , the speed improves by orders of magnitude , at the expense of parsing quality---non-admissible , the parsing speed improves even further , at the risk of returning suboptimal solutions
1
borrowing is the pervasive linguistic phenomenon of transferring and adapting linguistic constructions ( lexical , phonological , morphological , and syntactic ) from a “ donor ” language into a “ recipient ” language ( cite-p-10-3-16 )---borrowing is a major type of word formation in japanese , and numerous foreign words ( proper names or neologisms etc . ) are continuously being imported from other languages ( cite-p-26-3-22 )
1
petrov and mcdonald , 2012 , which includes the top ranked system , this indicates that self-training is already an established technique to improve the accuracy of constituency parsing on out-of-domain data , cf---petrov and mcdonald , 2012 , which includes the top ranked system , this indicates that self-training is already an established technique to improve the accuracy of constituency parsing on english out-of-domain data , cf
1
in this paper , we have proposed a new method for approximate string search , including spelling error correction , which is both accurate and efficient---in this paper , we work on candidate generation at the character level , which can be applied to spelling error correction
1
to write rules for a rule-based analyzer , and to produce an analyzer using machine-learning techniques , it is crucial to construct a dependency-analyzed corpus---for a dependency-analyzed corpus , it is necessary to provide a function to build a selective sampling framework to construct a dependency-analyzed corpus
1
semantic similarity is a central concept that extends across numerous fields such as artificial intelligence , natural language processing , cognitive science and psychology---semantic similarity is a well established research area of natural language processing , concerned with measuring the extent to which two linguistic items are similar ( cite-p-13-1-1 )
1
the decoding weights are optimized with minimum error rate training to maximize bleu scores---feature weights are tuned using minimum error rate training on the 455 provided references
1
framenet is an expert-built lexical-semantic resource incorporating the theory of frame-semantics---we used the sri language modeling toolkit to train a fivegram model with modified kneser-ney smoothing
0
in this paper we will consider sentence-level approximations of the popular bleu score---self-training has been applied to parsing and word sense disambiguation
0
we use the cnn model with pretrained word embedding for the convolutional layer---following , we use the word analogical reasoning task to evaluate the quality of word embeddings
1
we propose an extension to the shift-reduce process to address this problem , which gives significant improvements to the parsing accuracies---we propose a simple yet effective extension to the shift-reduce process , which eliminates size
1
we used the scikit-learn implementation of svrs and the skll toolkit---coreference resolution is the task of grouping all the mentions of entities 1 in a document into equivalence classes so that all the mentions in a given class refer to the same discourse entity
0
neelakantan et al propose a multi-sense skip-gram that learns different representations for each sense of a word---neelakantan et al , 2015 ) presents an extension to skip-gram model for learning non-parametric multiple embeddings per word
1
all reported results are averages over three independent mert runs , and we evaluated statistical significance with multeval---on iwslt , all results are averages over three independent mert runs , and we evaluate statistical significance with multeval
1
the statistical significance test is performed using the re-sampling approach---significance tests are conducted using bootstrap sampling
1
the system includes three cascaded components : the tagging semantic role phrase , the identification of semantic role phrase , phrase and frame semantic dependency parsing---which is composed of three cascaded components : the tagging of sr phrase , the identification of semantic-role-phrase and semantic dependency parsing
1
we further used a 5-gram language model trained using the srilm toolkit with modified kneser-ney smoothing---ten of these concepts were identical to ones used in , which allowed us to compare our results to recent work in case of english
0
discourse parsing is a challenging task and plays a critical role in discourse analysis---discourse parsing is the process of assigning a discourse structure to the input provided in the form of natural language
1
for the tree-based system , we applied a 4-gram language model with kneserney smoothing using srilm toolkit trained on the whole monolingual corpus---we trained a 4-gram language model with kneser-ney smoothing and unigram caching using the sri-lm toolkit
1
our approach makes full use of subword information to enhance chinese word embeddings---to incorporate the subword information for chinese word embeddings
1
we used the implementation of random forest in scikitlearn as the classifier---coreference resolution is the task of grouping all the mentions of entities 1 in a document into equivalence classes so that all the mentions in a given class refer to the same discourse entity
0
in this paper , we present a method for temporal relation extraction from clinical narratives in french and in english---in this paper , we focus on the extraction of temporal relations between medical events
1
including structural and inter-utterance dependency information further improved performance---structure and inter-utterance dependency provides some increase in performance
1
the contribution of this paper is that it presents an unsupervised machine learning technique for web qa that starts with only a user question---this paper presents an unsupervised svm classifier for answer selection , which is independent of language and question
1
the various models developed are evaluated using bleu and nist---we show that ccg-gtrc can actually be simulated by a ccg-std , proving the equivalence
0
relation extraction is a core task in information extraction and natural language understanding---previous work has shown that unlabeled text can be used to induce unsupervised word clusters that can improve performance of many supervised nlp tasks
0
the challenge of this task lies precisely in the fact that one classifier trained on twitter data should be able to generalize reasonably well on different types of text---novelty of this task lies in the fact that a model built using only twitter data is used to classify instances from other short text
1
we leverage latent dirichlet allocation for topic discovery and modeling in the reference source---instead , we apply lda topic modeling which requires only an adequate amount of raw text in the target language
1
we evaluate the translation quality using the case-sensitive bleu-4 metric---for this task , we use the widely-used bleu metric
1
our word embeddings is initialized with 100-dimensional glove word embeddings---word embeddings are initialized from glove 100-dimensional pre-trained embeddings
1
cassswe operates on part-of-speech annotated texts and is coupled with a preprocessing mechanism , which distinguishes thousands of phrasal verbs , idioms , and multi-word expressions---experimental results show that the combined criterion consistently leads to smaller models than the models pruned using either of the criteria separately
0
word sense disambiguation ( wsd ) is a key enabling-technology that automatically chooses the intended sense of a word in context---automatic evaluation results are shown in table 1 , using bleu-4
0
we have attempted to include all important local methods for nlp in our experiments ( see §3 )---for feature building , we use word2vec pre-trained word embeddings
0
we use a standard maximum entropy classifier implemented as part of mallet---in this work , we propose a coverage mechanism to nmt ( nmt-c overage )
0
sentiment analysis is a research area in the field of natural language processing---sentiment analysis is a recent attempt to deal with evaluative aspects of text
1
in our analyses , we show empirically that these learned attention weights correlate strongly with traditional headedness definitions---we used kenlm with srilm to train a 5-gram language model based on all available target language training data
0
the language model is a trigram model with modified kneser-ney discounting and interpolation---sentiment classification is the task of identifying the sentiment polarity of a given text , which is traditionally categorized as either positive or negative
0
the nonembeddings weights are initialized using xavier initialization---all weights are initialized by the xavier method
1
this raises the question as to which is a more accurate characterisation of what people do---based on this observation , we propose using context gates in nmt to dynamically control the contributions from the source and target contexts
0
furthermore , we evaluate our mtreelstm model with snli , a larger nli dataset---in the first setting , we use snli dataset to train the nli system
1
latent dirichlet allocation is one of the widely adopted generative models for topic modeling---an effective strategy to cluster words into topics , is latent dirichlet allocation
1
alternatively , specialized tools can be developed that directly use the knowledge about spelling variation---spelling variants can then be used to mitigate the problems caused by spelling variation that were described above
1
classical first-order logic ( hereafter called elementary logic ) is often used as logical representation language---elementary logic ( i . e . first-order logic ) can be used as a logical representation language
1
lexical chains are used to link semanticallyrelated words and phrases---we tuned the model weights against the wmt08 test set using z-mert , an implementation of minimum error-rate training included with joshua
0
all other parameters are initialized with glorot normal initialization---bunescu and mooney proposed a shortest path dependency kernel
0
typical language features are label en-coders and word2vec vectors---the most commonly used word embeddings were word2vec and glove
1
coreference resolution is the task of partitioning a set of entity mentions in a text , where each partition corresponds to some entity in an underlying discourse model---coreference resolution is the task of determining which mentions in a text refer to the same entity
1
emotion classification aims to predict the emotion categories of a given text---emotion classification aims to predict the emotion categories to which the given text belongs
1
the hierarchical phrase-based model has been widely adopted in statistical machine translation---the hierarchical phrase-based model is capable of capturing rich translation knowledge with the synchronous context-free grammar
1
woodsend and lapata , 2011 ) use simple wikipedia edit histories and an aligned wikipediasimple wikipedia corpus to induce a model based on quasi-synchronous grammar and integer linear programming---later , xue et al combined the language model and translation model to a translation-based language model and observed better performance in question retrieval
0
semantic parsing is the task of translating natural language utterances into a machine-interpretable meaning representation---in our trained model , the supported _ by feature also has a high positive weight for “
0
valitutti et al present an interactive system which generates humorous puns obtained through variation of familiar expressions with word substitution---collobert et al , 2011 ) used word embeddings for pos tagging , named entity recognition and semantic role labeling
0
sentiment classification is a task to predict a sentiment label , such as positive/negative , for a given text and has been applied to many domains such as movie/product reviews , customer surveys , news comments , and social media---sentiment classification is a very domain-specific problem ; training a classifier using the data from one domain may fail when testing against data from another
1
however , the simple rnn suffers from the vanishing gradient problem---however , training simple rnns is difficult because of the vanishing and exploding gradient problems
1
word sense disambiguation ( wsd ) is the nlp task that consists in selecting the correct sense of a polysemous word in a given context---word sense disambiguation ( wsd ) is a problem long recognised in computational linguistics ( yngve 1955 ) and there has been a recent resurgence of interest , including a special issue of this journal devoted to the topic ( cite-p-27-8-11 )
1
the data consists of sections of the wall street journal part of the penn treebank , with information on predicate-argument structures extracted from the propbank corpus---le and mikolov extended the word embedding learning model by incorporating paragraph information
0
in addition , we compare against the morfessor categories-map system---we employ the morfessor categories-map algorithm for segmentation
1
zeng et al , 2014 , exploited a convolutional deep neural network to extract lexical and sentence level features---for sentences , we tokenize each sentence by stanford corenlp and use the 300-d word embeddings from glove to initialize the models
0
the resulting statistical parser achieves performance ( 89.1 % f-measure ) on the penn treebank which is only 0.6 % below the best current parser for this task , despite using a smaller vocabulary size and less prior linguistic knowledge---on the standard penn treebank datasets , the parser ’ s performance ( 89 . 1 % f-measure ) is only 0 . 6 % below the best current parsers for this task , despite using a smaller vocabulary and less prior linguistic knowledge
1
we perform pre-training using the skip-gram nn architecture available in the word2vec 13 tool---we use skip-gram representation for the training of word2vec tool
1
this paper proposes a method for dependency parsing of monologue sentences based on sentence segmentation---in order to prevent overfitting , we used early stopping based on the performance on the development set
0
relation extraction is a fundamental task that enables a wide range of semantic applications from question answering ( cite-p-13-3-12 ) to fact checking ( cite-p-13-3-10 )---relation extraction is a well-studied problem ( cite-p-12-1-6 , cite-p-12-3-7 , cite-p-12-1-5 , cite-p-12-1-7 )
1
we use 300-dimensional word embeddings from glove to initialize the model---we use glove vectors with 100 dimensions trained on wikipedia and gigaword as word embeddings
1
tang et al proposed a user-product neural network to incorporate both user and product information for sentiment classification---tang et al proposed a novel method dubbed user product neural network which capture user-and product-level information for sentiment classification
1
relation extraction is the problem of populating a target relation ( representing an entity-level relationship or attribute ) with facts extracted from natural-language text---relation extraction is the task of automatically detecting occurrences of expressed relations between entities in a text and structuring the detected information in a tabularized form
1
we used svm-light-tk , which enables the use of the partial tree kernel---we used a support vector machine with an implementation of the original tree kernel
1
what we have just described is a method for approximating the joint distribution of all variables with a model containing only the most important systematic interactions among variables---we propose an approach to represent uncommon words ’ embeddings by a sparse linear combination of common ones
0
for the task of event trigger prediction , we train a multi-class logistic regression classifier using liblinear---we build all the classifiers using the l2-regularized linear logistic regression from the liblinear package
1
sentiment classification is the task of identifying the sentiment polarity of a given text---sentiment classification is the task of identifying the sentiment polarity of a given text , which is traditionally categorized as either positive or negative
1
case-sensitive bleu scores 4 for the europarl devtest set are shown in table 1---the results evaluated by bleu score is shown in table 2
1
for the features , we directly adopt those described in lin et al , knott---for the features , we directly adopt those described in lin et al , knott
1
here we apply this technique to parser adaptation---we evaluate global translation quality with bleu and meteor
0
we also use mini-batch adagrad for optimization and apply dropout---we apply online training , where model parameters are optimized by using adagrad
1
in this paper , we model our problem in the framework of posterior regularization---semantic similarity is a well established research area of natural language processing , concerned with measuring the extent to which two linguistic items are similar ( cite-p-13-1-1 )
0
we also develop a semantic parser for this corpus---preliminary results indicate that construction and semantic interpretation of cluster trees based on lexical frequency is a useful approach to discovering thematic interrelationships among the suras that constitute the qur ’ an
0
we use the word2vec framework in the gensim implementation to generate the embedding spaces---for the cluster- based method , we use word2vec 2 which provides the word vectors trained on the google news corpus
1
the proposed framework makes a latest attempt to formalize word segmentation as a direct structured learning procedure in terms of the recent distributed representation framework---such as the fixed sized context window , this paper makes a latest attempt to re-formalize cws as a direct segmentation learning task
1
we use the feature set that we described in pilán et al and for modeling linguistic complexity in l2 swedish texts---we use the feature set presented in pilán et al designed for modeling linguistic complexity in input texts for l2 swedish learners
1
we use the word2vec skip-gram model to train our word embeddings---we use the word2vec tool to pre-train the word embeddings
1
keller and lapata showed that web frequencies correlate reliably with standard corpus frequencies---keller and lapata show that bigram statistics for english language is correlated between corpus and web counts
1
semantic role labeling ( srl ) is a task of automatically identifying semantic relations between predicate and its related arguments in the sentence---automatic text generation is the process of automatically converting data into coherent text - practical applications range from weather reports ( cite-p-12-1-5 ) to neonatal intensive care reports ( cite-p-12-3-8 )
0
our phrase-based system is similar to the alignment template system described by och and ney---our phrase-based smt system is similar to the alignment template system described in och and ney
1
transliteration is the process of converting terms written in one language into their approximate spelling or phonetic equivalents in another language---transliteration is a process of rewriting a word from a source language to a target language in a different writing system using the word ’ s phonological equivalent
1