text: string (lengths 82 to 736)
label: int64 (values 0 or 1)
karmina is a poem with two lines that consists of a hook ( sampiran ) on the first line and a message on the second line---karmina is a poem that consists of two lines with around 8-12 syllables on each line
1
furthermore , we train a 5-gram language model using the sri language toolkit---relation extraction ( re ) is a task of identifying typed relations between known entity mentions in a sentence
0
the spelling error model proposed by brill and moore allows generic string edit operations up to a certain length---additionally , we report the mean reciprocal rank scores for some experimental runs
0
experimental results on the wat'15 english-to-japanese translation dataset demonstrate that our proposed model achieves the best ribes score and outperforms the sequential attentional nmt model---in a similar vein , hashtags can also serve as noisy labels
0
on the resulting counts we apply the log-likelihood ratio---in this case , we use the log-likelihood measure as described in
1
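The pair above refers to the log-likelihood ratio for association scoring. A minimal sketch with NLTK's collocation tools (the toy token list is placeholder data):

    # Score candidate bigrams by Dunning's log-likelihood ratio.
    from nltk.collocations import BigramAssocMeasures, BigramCollocationFinder

    tokens = "the quick brown fox jumps over the lazy dog the quick fox".split()
    finder = BigramCollocationFinder.from_words(tokens)
    for bigram, score in finder.score_ngrams(BigramAssocMeasures.likelihood_ratio)[:5]:
        print(bigram, round(score, 2))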
part-of-speech ( pos ) tagging is a well studied problem in these fields---part-of-speech ( pos ) tagging is a critical task for natural language processing ( nlp ) applications , providing lexical syntactic information
1
a configuration is a pair consisting of a representation of the state of the stack , and the current position in the input string---a configuration of m is a 4-tuple ( q , γ , η , w ) where q ∈ q is the current state , γ is the derivation tree of g under consideration , η is a node in γ or ⊥ ( where ⊥ can be thought of as the parent of the root of γ ) , and w ∈ a * is the output string produced up to that point in the computation
1
part-of-speech ( pos ) tagging is a well studied problem in these fields---um utilizes a hierarchical lda-style model ( cite-p-17-1-2 ) to represent content specificity as a hierarchy of topics
0
inspired by this idea , we introduce in this paper a deep learning approach for discourse parsing---in this paper , we propose a recursive model for discourse parsing
1
we also use early stopping based on the performance achieved on the development sets---in addition , we use early stopping based on the performance achieved on the development sets
1
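Both sentences describe the same early-stopping criterion: halt training once the development-set score stops improving. A generic sketch; train_one_epoch and evaluate_dev are hypothetical stand-ins for the caller's training and dev-evaluation steps:

    # Stop when the dev score has not improved for `patience` consecutive epochs.
    def train_with_early_stopping(model, train_one_epoch, evaluate_dev,
                                  patience=3, max_epochs=100):
        best_score, stale = float("-inf"), 0
        for epoch in range(max_epochs):
            train_one_epoch(model)        # hypothetical training step
            score = evaluate_dev(model)   # hypothetical dev-set evaluation
            if score > best_score:
                best_score, stale = score, 0
            else:
                stale += 1
                if stale >= patience:
                    break                 # dev performance has plateaued
        return best_score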
rule extraction follows the algorithm described in---as noted in joachims , support vector machines are well suited for text categorisation
0
word sense disambiguation ( wsd ) is a key enabling technology that automatically chooses the intended sense of a word in context---word sense disambiguation ( wsd ) is the task of identifying the correct sense of an ambiguous word in a given context
1
kočiský et al simultaneously learn alignments and word representations from bilingual data---retrieved text is then presented to the users with proper names and specialized domain terms translated and hyperlinked
0
the core of the algorithm is a beam-search based decoder operating on the packed forest in a bottom-up manner---several user simulation models have been proposed for dialogue management policy learning
0
anderson et al construct semantic models using visual data and show a high correlation to brain activation patterns from fmri---anderson et al show that semantic models built from visual data correlate highly with fmri-based brain activation patterns
1
support vector machines have been shown to outperform other existing methods in text categorization---svms have proven to be an effective means for text categorization as they are capable of dealing robustly with high-dimensional , sparse feature spaces
1
goldberg and zhu place this task in a semi-supervised setting , and use unlabelled reviews with graph-based method---from the corpus certainly help to filter out false information which would otherwise be difficult to filter
0
we used 200 dimensional glove word representations , which were pre-trained on 6 billion tweets---we used the 200-dimensional word vectors for twitter produced by glove
1
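Both sentences describe using 200-dimensional pre-trained GloVe Twitter vectors. A minimal loading sketch for GloVe's plain-text format (one word followed by its components per line); the file name is an assumption:

    import numpy as np

    def load_glove(path="glove.twitter.27B.200d.txt"):  # assumed file name
        vectors = {}
        with open(path, encoding="utf-8") as f:
            for line in f:
                parts = line.rstrip().split(" ")
                vectors[parts[0]] = np.asarray(parts[1:], dtype=np.float32)
        return vectors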
in order to estimate the parameters of our model , we develop a blocked sampler based on that of johnson et al to sample parse trees for sentences in the raw training corpus according to their posterior probabilities---to sample from our proposal distribution , we use a blocked gibbs sampler based on the one proposed by goodman and used by johnson et al that samples entire parse trees
1
the parse trees for sentences in the test set were obtained using the stanford parser---we use srilm to train a 5-gram language model on the target side of our training corpus with modified kneser-ney discounting
0
here we have used a hybrid approach , where a machine learning ( ml ) technique and linguistic rules are used to identify the discourse relations---we have followed a hybrid approach , where we first use a machine learning ( ml ) technique to identify the discourse relations
1
then we use the standard minimum error-rate training to tune the feature weights to maximize the system's bleu score---the data consists of sections of the wall street journal part of the penn treebank , with information on predicate-argument structures extracted from the propbank corpus
0
the bptt approach is not effective at learning long term dependencies because of the exploding gradients problem---however , common approaches have shown to be inefficient in learning long-term dependencies due to a vanishing gradient
1
we extract the 4096-dimensional pre-softmax layer from a forward pass through a convolutional neural network , which has been pretrained on the imagenet classification task using caffe---using the deep learning framework caffe , we extracted image embeddings from a deep convolutional neural network that was trained on the imagenet classification task
1
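The pair describes extracting a 4096-dimensional pre-softmax CNN activation with Caffe. A rough modern equivalent (not the authors' Caffe pipeline) using torchvision's ImageNet-pretrained VGG16, whose fc7 layer is also 4096-dimensional; the image path is a placeholder:

    import torch
    from PIL import Image
    from torchvision import models, transforms

    vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).eval()
    vgg.classifier = vgg.classifier[:-1]  # drop the final layer -> 4096-d output

    preprocess = transforms.Compose([
        transforms.Resize(256), transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),
    ])
    image = preprocess(Image.open("example.jpg")).unsqueeze(0)  # placeholder path
    with torch.no_grad():
        embedding = vgg(image)  # shape (1, 4096)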
we build a model of all unigrams and bigrams in the gigaword corpus using the c-mphr method , srilm , irstlm , and randlm 3 toolkits---in our experiments , we used the srilm toolkit to build 5-gram language model using the ldc arabic gigaword corpus
0
another source that has been widely used for this task is wordnet---another group of features are derived using wordnet
1
in this paper , we have focused on a sequential model such as a linear-chain crf---in this paper , we exploit non-local features as an estimate of long-distance dependencies
1
word sense disambiguation ( wsd ) is a particular problem of computational linguistics which consists in determining the correct sense for a given ambiguous word---word sense disambiguation ( wsd ) is the task of determining the meaning of an ambiguous word in its context
1
leveraging on aspectual type for temporal relation extraction is a promising approach that has already been explored by costa and branco on tempeval data---and the results of experiments support our intuitions
0
openccg uses a hybrid symbolic-statistical chart realizer which takes logical forms as input and produces sentences by using ccg combinators to combine signs---openccg uses a hybrid symbolic-statistical chart realizer which takes logical forms as input and produces sentences by using ccg combinators to combine signs
1
the conll data set was taken from the wall street journal portion of the penn treebank and converted into a dependency format---the weights for these features are optimized using mert
0
the resulting model is an instance of a conditional random field---our model is a first order linear chain conditional random field
1
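Both sentences describe a first-order linear-chain conditional random field. A minimal sketch with the sklearn-crfsuite package (one possible implementation; the two-sentence corpus and features are placeholders):

    import sklearn_crfsuite

    # Each sentence is a list of per-token feature dicts; labels align token-wise.
    X_train = [[{"word": "john"}, {"word": "runs"}],
               [{"word": "mary"}, {"word": "sleeps"}]]
    y_train = [["NOUN", "VERB"], ["NOUN", "VERB"]]

    crf = sklearn_crfsuite.CRF(algorithm="lbfgs", max_iterations=50)
    crf.fit(X_train, y_train)
    print(crf.predict([[{"word": "john"}, {"word": "sleeps"}]]))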
in the table , the second column represents the accuracy of the classification in each data set---garera , et al defines context vectors on the dependency tree rather than using adjacency
0
context representation plays a more important role than context type for learning word embeddings---word embeddings can be directly used for solving intrinsic tasks like word similarity and word analogy
1
the resulting statistical parser achieves performance ( 89.1 % f-measure ) on the penn treebank which is only 0.6 % below the best current parser for this task , despite using a smaller vocabulary size and less prior linguistic knowledge---szarvas extended their methodology to use n-gram features and a semi-supervised selection of the keyword features
0
rüd et al create features based on search engine results that they use in an ner system applied to queries---rüd et al consider using search engines for distant supervision of ner of search queries
1
we have provided just such a framework for improving parsing performance---in this work , we provide just such a framework for training
1
the translation systems were evaluated by bleu score---evaluation was performed using the bleu metric
1
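Both sentences describe BLEU-based evaluation. A minimal corpus-level sketch with sacrebleu (a standard reimplementation, not necessarily the scorer these papers used); the strings are placeholder data:

    import sacrebleu

    hypotheses = ["the cat sat on the mat"]
    references = [["the cat is on the mat"]]  # one inner list per reference set
    print(sacrebleu.corpus_bleu(hypotheses, references).score)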
we propose multi-way , multilingual neural machine translation---we describe the attention-based neural machine translation
1
sentence similarity is the process of computing a similarity score between two sentences---sentence similarity can be defined by the degree of semantic equivalence of two given sentences , where sentences are typically 10-20 words long
1
bunescu et al used the category information from wikipedia to disambiguate names---bunescu and pasca disambiguated the names using the category information in wikipedia
1
in this paper we estimate approximate posterior inference using collapsed gibbs sampling ( cite-p-13-1-8 )---in this paper , we propose a text classification algorithm based on latent dirichlet allocation ( lda ) ( cite-p-13-1-1 )
1
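Both sentences involve latent dirichlet allocation. A minimal topic-model sketch with gensim, which fits LDA by online variational Bayes rather than the collapsed Gibbs sampling named above; the four toy documents are placeholder data:

    from gensim import corpora, models

    texts = [["cat", "dog", "pet"], ["stock", "market", "trade"],
             ["dog", "pet", "food"], ["market", "price", "stock"]]
    dictionary = corpora.Dictionary(texts)
    bow = [dictionary.doc2bow(t) for t in texts]
    lda = models.LdaModel(bow, num_topics=2, id2word=dictionary, passes=10)
    print(lda.print_topics())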
implementations of left-corner parsers such as that of henderson adopt an arc-standard strategy , essentially always choosing analysis above , and thus do not introduce this kind of local ambiguity---we used an svm classifier that implements linearsvc from the scikit-learn library
0
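The second sentence above names LinearSVC from scikit-learn. A minimal text-categorisation sketch pairing it with tf-idf features; the two-document corpus is placeholder data:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import LinearSVC

    docs = ["stock markets fell sharply", "the team won the final match"]
    labels = ["finance", "sports"]
    clf = make_pipeline(TfidfVectorizer(), LinearSVC())
    clf.fit(docs, labels)
    print(clf.predict(["shares rallied on wall street"]))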
word alignment is a well-studied problem in natural language computing---word alignment is the task of identifying word correspondences between parallel sentence pairs
1
our system belongs to this family , since we believe that the syntactic processing of complex phenomena is a necessary step in order to perform feature-based opinion mining---we have developed a system that belongs to this family , as we believe that syntactic processing of complex phenomena is a crucial step to perform aspect-based opinion mining
1
we use the skipgram model with negative sampling implemented in the open-source word2vec toolkit to learn word representations---to train the link embeddings , we use the speedy , skip-gram neural language model of mikolov et al via their toolkit word2vec
1
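Both sentences describe training skip-gram embeddings with word2vec. A minimal sketch with gensim (assuming its 4.x API; sg=1 selects skip-gram, negative=5 enables negative sampling); the corpus is placeholder data:

    from gensim.models import Word2Vec

    sentences = [["the", "cat", "sat"], ["the", "dog", "ran"]]
    model = Word2Vec(sentences, vector_size=100, sg=1, negative=5,
                     window=5, min_count=1)
    print(model.wv["cat"][:5])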
we use pre-trained embeddings from glove---we use theano and pretrained glove word embeddings
1
the development set is used to optimize feature weights using the minimum-error-rate algorithm---we set the feature weights by optimizing the bleu score directly using minimum error rate training on the development set
1
using an ensemble method , the key information extracted from word pairs with dependency relations in the translated text is effectively integrated into the parser for the target language---word pairs with dependency relations in the translated treebank are chosen to generate some additional features to enhance the parser for the target language
1
we further used a 5-gram language model trained using the srilm toolkit with modified kneser-ney smoothing---we used trigram language models with interpolated kneser-ney discounting trained using the sri language modeling toolkit
1
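Both sentences describe SRILM language-model training with Kneser-Ney smoothing. A sketch that shells out to SRILM's ngram-count tool (assumed to be installed and on PATH); the file names are placeholders:

    import subprocess

    subprocess.run([
        "ngram-count",
        "-order", "5",         # 5-gram model
        "-kndiscount",         # modified Kneser-Ney discounting
        "-interpolate",
        "-text", "train.txt",  # tokenized training corpus
        "-lm", "model.arpa",   # output language model in ARPA format
    ], check=True)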
table 1 shows the evaluation of all the systems in terms of bleu score with the best score highlighted---i have defined a comprehensive set of social events and built a preliminary system that extracts social events from news articles
0
uchiyama et al also propose a statistical token classification method for jcvs---in this paper , we first propose a phrase structure annotation scheme for learner english
0
in the case of mr. jones , for example , the program could identify him by providing his full name and address ; in the case of a tree , some longer description may be necessary---the parsing model used for intra-sentential parsing is a dynamic conditional random field shown in figure 7
0
our experiments showed that the performance of our system is heavily dependent on the choice of the training set , as we managed to significantly improve the performance of our system with respect to the original submission---mikolov et al used distributed representations of words to learn a linear mapping between vector spaces of languages and showed that this mapping can serve as a good dictionary between the languages
0
we also apply an attention mechanism proposed by to lstm units---we applied the additive attention model on top of the multi-layer lstms
1
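Both sentences describe additive (Bahdanau-style) attention over LSTM states. A numpy sketch of the score computation under randomly initialized parameters; shapes and names are illustrative only:

    import numpy as np

    rng = np.random.default_rng(0)
    H = rng.normal(size=(5, 8))   # 5 encoder states of dimension 8
    q = rng.normal(size=(8,))     # decoder query vector
    W1, W2 = rng.normal(size=(8, 8)), rng.normal(size=(8, 8))
    v = rng.normal(size=(8,))

    scores = np.tanh(H @ W1 + q @ W2) @ v            # additive scoring function
    weights = np.exp(scores) / np.exp(scores).sum()  # softmax over positions
    context = weights @ H                            # attention-weighted context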
moreover , we find that jointly learning 'natural' subtasks , in a multi-task learning setup , improves performance---we show that a multi-task learning setup where natural subtasks of the full am problem are added as auxiliary tasks improves performance
1
in this paper , we propose to recast the task of coreference resolution as an optimization problem , namely an integer linear programming ( ilp ) problem---in this paper , we propose an integer linear programming ( ilp ) formulation for coreference resolution which models anaphoricity and coreference
1
parsing is the process of mapping sentences to their syntactic representations---parsing is the process of building an internal representation of the sentence , while disambiguating in local conditions of uncertainty
1
we evaluated the translation quality of the system using the bleu metric---the popular ibm models for statistical machine translation are described in
0
we use mteval from the moses toolkit and tercom to evaluate our systems on the bleu and ter measures---we trained the statistical phrase-based systems using the moses toolkit with mert tuning
1
we use the stanford part of speech tagger to annotate each word with its pos tag---we tag the source language with the stanford pos tagger
1
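Both sentences describe tagging with the Stanford POS tagger. A sketch using stanza, the Stanford NLP group's current Python library (the papers themselves used the older Java tagger):

    import stanza

    stanza.download("en")  # one-time model download
    nlp = stanza.Pipeline(lang="en", processors="tokenize,pos")
    for sentence in nlp("The cat sat on the mat.").sentences:
        for word in sentence.words:
            print(word.text, word.upos)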
model parameters λ i are estimated using numerical optimization methods so as to maximize the log-likelihood of the training data---model parameters that maximize the log-likelihood of the training data are computed using a numerical optimization method
1
here we investigate a label propagation algorithm ( lp ) ( cite-p-16-3-4 ) for relation extraction task---we investigate a graph based semi-supervised learning algorithm , a label propagation ( lp ) algorithm , for relation extraction
1
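Both sentences describe graph-based label propagation for semi-supervised learning. A generic sketch with scikit-learn's LabelPropagation (not the paper's task-specific graph); -1 marks the unlabelled point:

    import numpy as np
    from sklearn.semi_supervised import LabelPropagation

    X = np.array([[0.0], [0.1], [0.9], [1.0], [0.45]])
    y = np.array([0, 0, 1, 1, -1])  # -1 = unlabelled
    lp = LabelPropagation(kernel="rbf").fit(X, y)
    print(lp.transduction_)  # labels inferred for every point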
in this paper , we present a reinforcement learning approach for mapping natural language instructions to sequences of executable actions---in this paper , we present a reinforcement learning framework for inducing mappings from text to actions
1
we experimentally demonstrated that it speeded up svm and llm classifiers for a japanese dependency parsing task by a factor of 10---experimental results for a japanese dependency parsing task show that our method speeded up the svm and llm classifiers
1
table 4 shows end-to-end translation bleu score results---table 4 shows translation results in terms of bleu , ribes , and ter
1
this approach has been shown to work well for both subtasks---this approach can be efficiently used also for sts
1
all language models were trained using the srilm toolkit---the language model is trained and applied with the srilm toolkit
1
when parsers are trained on ptb , we use the stanford pos tagger---we use the stanford nlp pos tagger to generate the tagged text
1
we preprocessed the training corpora with scripts included in the moses toolkit---we preprocessed the corpus with tokenization and true-casing tools from the moses toolkit
1
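Both sentences describe preprocessing with the Moses scripts. A minimal sketch using sacremoses, a Python port of those tools (the Moses truecaser additionally requires training a model on a corpus, omitted here):

    from sacremoses import MosesDetokenizer, MosesTokenizer

    tokenizer = MosesTokenizer(lang="en")
    tokens = tokenizer.tokenize("Hello World, how are you?")
    print(tokens)  # Moses-style tokenization
    print(MosesDetokenizer(lang="en").detokenize(tokens))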
the system also incorporates a dialogue manager based on the trindikit dialogue management toolkit , which implements the information-state based approach to dialogue management---the systems are built using the information state update approach for dialogue management and generic components for deep natural language understanding and generation
1
as inputs we use a random sample of sentences from the penn treebank and represent each word as a 100d glove embedding---dependency parsing is a very important nlp task and has wide usage in different tasks such as question answering , semantic parsing , information extraction and machine translation
0
we also apply layer normalization to the concatenated outputs---we use the adaptive moment estimation for the optimizer
1
a back-off 2-gram model with good-turing discounting and no lexical classes was also created from the training set , using the srilm toolkit---a 5-gram language model was built using srilm on the target side of the corresponding training corpus
1
the model was built using the srilm toolkit with backoff and kneser-ney smoothing---the 5-gram kneser-ney smoothed language models were trained by srilm , with kenlm used at runtime
1
zens and ney show that itg constraints yield significantly better alignment coverage than the constraints used in ibm statistical machine translation models on both german-english and french-english---zens and ney show that itg constraints allow a higher flexibility in word ordering for longer sentences than the conventional ibm model
1
however , little is known about what and how much these models learn about each language and its features---however , little is known about what these models learn about source and target languages
1
mcclosky et al applied the method later on out-of-domain texts which show good accuracy gains too---mcclosky et al applied the method later on english out-of-domain texts which show good accuracy gains too
1
one potential application of nli is in the field of forensic linguistics , a juncture where the legal system and linguistic stylistics intersect---ratinov and roth and turian et al also explored this approach for name tagging
0
finding some of the verbs in a text reliably is hard enough ; finding all of them reliably is well beyond the scope of this work---finding some of the verbs in a text reliably is hard enough ; finding all of them reliably is well beyond the scope of this work
1
our baseline is the smt toolkit moses run over letter strings rather than word strings---as two cascaded tasks , this paper presents a unified framework that can integrate semantic parsing
0
basic reordering models in phrase-based systems use linear distance as the cost for phrase movements---the third feature type is based on the politeness theory
0
we leverage latent dirichlet allocation for topic discovery and modeling in the reference source---we can learn a topic model over conversations in the training data using latent dirchlet allocation
1
all word vectors are trained on the skipgram architecture---we pre-train the 200-dimensional word embeddings on each dataset with skipgram
1
the development set is used to optimize feature weights using the minimum-error-rate algorithm---bilingual lexicons play an important role in many natural language processing tasks , such as machine translation and cross-language information retrieval
0
our 5-gram language model is trained by the sri language modeling toolkit---the language model is trained and applied with the srilm toolkit
1
in this paper we demonstrate that non-parametric models can complement supervised segmentation---although supervised segmentation is very competitive , we showed that it can be supplemented
1
relation extraction is the task of recognizing and extracting relations between entities or concepts in texts---relation extraction is the task of finding semantic relations between entities from text
1
distributional methods for meaning similarity are based on the observation that similar words occur in similar contexts and measure similarity based on patterns of word occurrence in large corpora---collobert et al and zhou and xu worked on the english constituent-based srl task using neural networks
0
coreference resolution is the task of clustering referring expressions in a text so that each resulting cluster represents an entity---coreference resolution is a key task in natural language processing ( cite-p-13-1-8 ) aiming to detect the referential expressions ( mentions ) in a text that point to the same entity
1
the srilm toolkit is used to train 5-gram language model---we use skipgram model to train the embeddings on review texts for k-means clustering
0
deep learning techniques have shown enormous success in sequence to sequence mapping tasks---recently , with the development of neural network , deep learning based models attract much attention in various tasks
1
twitter is a popular microblogging service , which , among other things , is used for knowledge sharing among friends and peers---for the cluster- based method , we use word2vec 2 which provides the word vectors trained on the google news corpus
0
the anaphor is a pronoun and the referent is in operating memory ( not in focus )---if the anaphor is a pronoun but no referent is found in the cache , it is then necessary to search operating memory
1
performance was calculated using quadratic weighted kappa , which is the standard evaluation metric used in automated scoring---performance was measured with quadratic weighted kappa , a common metric for measuring essay scoring performance
1
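Both sentences name quadratic weighted kappa as the scoring metric. A minimal sketch with scikit-learn; the two score vectors are placeholder data:

    from sklearn.metrics import cohen_kappa_score

    human = [1, 2, 3, 4, 4, 2]
    system = [1, 2, 3, 3, 4, 3]
    print(cohen_kappa_score(human, system, weights="quadratic"))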
if the anaphor is a definite noun phrase and the referent is in focus ( i.e . in the cache ) , anaphora resolution will be hindered---if the anaphor is a pronoun but no referent is found in the cache , it is then necessary to search operating memory
1
besides , we decide to make more efforts to explore how to reinforce the temporal order information for the proposed model---last but not least , we introduce the directional self-attention to model temporal order information for the proposed model
1
we trained a 5-gram sri language model using the corpus supplied for this purpose by the shared task organizers---we trained kneser-ney discounted 5-gram language models on each available corpus using the srilm toolkit
1
lapata and lascarides propose a probabilistic ranking model for logical metonymies---we evaluated the translation quality using the bleu-4 metric
0
we used the srilm software 4 to build langauge models as well as to calculate cross-entropy based features---in this paper we propose a data-driven approach for generating short children ’ s stories
0