text: string (lengths 82–736)
label: int64 (values 0 or 1)
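Each row below is a text line holding two citation sentences joined by "---", followed on the next line by its binary label; the label appears to mark whether the two sentences describe the same work (1) or unrelated ones (0). A minimal parsing sketch in Python, assuming the dump is saved verbatim as a text file (the file name pairs.txt and the helper load_pairs are hypothetical, not part of this dataset):

    # Parse the dump: rows alternate between a sentence pair joined by "---"
    # and its 0/1 label; the schema header above is skipped automatically
    # because it contains neither "---" nor a following bare 0/1 line.
    def load_pairs(path="pairs.txt"):
        with open(path, encoding="utf-8") as f:
            lines = [ln.strip() for ln in f if ln.strip()]
        rows = []
        for i in range(len(lines) - 1):
            if "---" in lines[i] and lines[i + 1] in ("0", "1"):
                sent_a, sent_b = lines[i].split("---", 1)
                rows.append((sent_a.strip(), sent_b.strip(), int(lines[i + 1])))
        return rows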
we evaluated the translation quality using the bleu-4 metric---as an evaluation metric , we used bleu-4 calculated between our model predictions and rpe
1
finally , we describe two ways to extend the model by incorporating three or more modalities---we provide two novel ways to extend the bimodal models to support three or more modalities
1
for this purpose , we used phrase tables learned by the standard statistical mt toolkit moses---for generating the translations from english into german , we used the statistical translation toolkit moses
1
for both languages , we used the srilm toolkit to train a 5-gram language model using all monolingual data provided---to deal with this problem , our proposed twin-candidate model recasts anaphora resolution
0
the maximum entropy approach presents a powerful framework for the combination of several knowledge sources---to implement the twin model , we adopt the log linear or maximum entropy model for its flexibility of combining diverse sources of information
1
we use the pre-trained glove 50-dimensional word embeddings to represent words found in the glove dataset---to learn the user-dependent word embeddings for stance classification and visualization , we train the 50-dimensional word embeddings via glove
1
this is called alignment through audience design---this kind of adaptation is called alignment through audience design
1
seki et al proposed a probabilistic model for zero pronoun detection and resolution that uses hand-crafted case frames---seki et al proposed a probabilistic model for zero pronoun detection and resolution that used hand-crafted case frames
1
wan proposed a co-training approach to address the cross-lingual sentiment classification problem---wan employed a co-training approach for cross-language sentiment classification
1
we use several classifiers including logistic regression , random forest and adaboost implemented in scikit-learn---we also use editor score as an outcome variable for a linear regression classifier , which we evaluate using 10-fold cross-validation in scikit-learn
1
we used the srilm toolkit to create 5-gram language models with interpolated modified kneser-ney discounting---we use srilm to train a 5-gram language model on the xinhua portion of the english gigaword corpus 5th edition with modified kneser-ney discounting
1
we used the stanford parser to generate dependency trees of sentences---we used the malt parser to obtain source english dependency trees and the stanford parser for arabic
1
the english side of the parallel corpus is trained into a language model using srilm---the language model is trained and applied with the srilm toolkit
1
we implemented this model using the srilm toolkit with the modified kneser-ney discounting and interpolation options---we used the sri language modeling toolkit to train a five-gram model with modified kneser-ney smoothing
1
however , with math-w-8-1-0-49 this would continue to happen due to the small non-zero gradient---the rule math-w-2-3-1-149 would result in a very large loss
1
we implement the weight tuning component according to the minimum error rate training method---we use our reordering model for n-best re-ranking and optimize bleu using minimum error rate training
1
we train a kn-smoothed 5-gram language model on the target side of the parallel training data with srilm---to calculate language model features , we train traditional n-gram language models with ngram lengths of four and five using the srilm toolkit
1
kaplan et al introduce a system designed for building a grammar by both extending and restricting another grammar---hence we use the expectation maximization algorithm for parameter learning
0
for the svm classifier we use the python scikit-learn library---we use the moses software to train a pbmt model
0
in the context of the web 2.0 , the importance of social media has been constantly growing in the past years---yessenalina and cardie represent each word as a matrix and use iterated matrix multiplication as phrase-level composition function
0
to tackle this problem , we construct a large-scale japanese image caption dataset based on images from mscoco , which is called stair captions---stair captions consists of 820,310 japanese captions for 164,062 images
1
we aim to extract frame-semantic structures from text---we consider the task of automatic extraction of semantic frames
1
we have presented a novel incremental relaxation algorithm that can be applied to marginal inference---statistical machine translation ( smt ) system is heavily dependent upon the amount of parallel sentences used in training
0
we use the moses toolkit to train various statistical machine translation systems---for our baseline we use the moses software to train a phrase based machine translation model
1
we train and evaluate a l2-regularized logistic regression classifier with the liblinear solver as implemented in scikit-learn---we applied liblinear via its scikit-learn python interface to train the logistic regression model with l2 regularization
1
following , we seek symmetric patterns to retrieve concept terms---following we seek symmetric patterns to retrieve concept terms
1
we use 5-grams for all language models implemented using the srilm toolkit---for language models , we use the srilm linear interpolation feature
1
we use skip-gram with negative sampling for obtaining the word embeddings---we use the word2vec tool to train monolingual vectors , and the cca-based tool for projecting word vectors
1
following , we assume that a discourse commitment represents any of the set of propositions that can necessarily be inferred to be true , given a conventional reading of a text passage---following , we assume discourse commitments represent the set of propositions which can necessarily be inferred to be true given a conventional reading of a text
1
the use of neural-networks language models was originally introduced in and successfully applied to largescale speech recognition and machine translation tasks---the use of unsupervised word embeddings in various natural language processing tasks has received much attention
1
coreference resolution is the problem of identifying which noun phrases ( nps , or mentions ) refer to the same real-world entity in a text or dialogue---latent dirichlet allocation is one of the most popular topic models used to mine large text data sets
0
we learn our word embeddings by using word2vec on unlabeled review data---we trained word vectors with the two architectures included in the word2vec software
1
within mt there has been a variety of approaches dealing with domain adaptation---within mt there has been a variety of approaches dealing with domain adaptation
1
since component entailment is not observed in the data , we apply the iterative em algorithm---therefore , we use em-based estimation for the hidden parameters
1
alfonseca et al combine several signals , including web anchor text , in an svm-based supervised splitter---alfonseca , bilac , and pharies combine several signals , including web anchor text , in an svm-based supervised splitter
1
sentence compression is the task of compressing long , verbose sentences into short , concise ones---sentence compression is a text-to-text generation task in which an input sentence must be transformed into a shorter output sentence which accurately reflects the meaning in the input and also remains grammatically well-formed
1
during the last decade , statistical machine translation systems have evolved from the original word-based approach into phrase-based translation systems---during the last few years , smt systems have evolved from the original word-based approach to phrase-based translation systems
1
for the classification task , we use pre-trained glove embedding vectors as lexical features---we use glove word embeddings , an unsupervised learning algorithm for obtaining vector representations of words
1
we used the srilm toolkit to create 5-gram language models with interpolated modified kneser-ney discounting---a trigram language model with modified kneser-ney discounting and interpolation was used as produced by the srilm toolkit
1
we used srilm for training the 5-gram language model with interpolated modified kneser-ney discounting---for the tree-based system , we applied a 4-gram language model with kneser-ney smoothing using the srilm toolkit trained on the whole monolingual corpus
1
crf is used to calculate the conditional probability of values on designated output nodes given values on other designated input nodes---yago is a large ontology based on wordnet and extended with concepts from wikipedia and other resources
0
we have presented a first attempt at learning an embedded frame lexicon from data , using no annotated information---we present a method to induce an embedded frame lexicon in an minimally supervised fashion
1
in particular , the discomt 2015 shared task on pronoun-focused translation included a protocol for human evaluation---a similar problem was addressed in the discomt 2015 shared task on pronoun translation as a cross-lingual pronoun prediction subtask
1
wikipedia is a free multilingual online encyclopedia and a rapidly growing resource---wikipedia is a free , collaboratively edited encyclopedia
1
it has been empirically shown that word embeddings can capture semantic and syntactic similarities between words---it is widely recognized that word embeddings are useful because both syntactic and semantic information of words are well encoded
1
rapp and fung discussed semantic similarity estimation using cross-lingual context vector alignment---we use the stanford nlp pos tagger to generate the tagged text
0
we trained a standard 5-gram language model with modified kneser-ney smoothing using the kenlm toolkit on 4 billion running words---we train a 4-gram language model on the xinhua portion of the gigaword corpus using the sri language toolkit with modified kneser-ney smoothing
1
word sense disambiguation ( wsd ) is the nlp task that consists in selecting the correct sense of a polysemous word in a given context---word sense disambiguation ( wsd ) is a problem of finding the relevant clues in a surrounding context
1
the pair of input-output strings 〈 l , u 〉 is : math-p-3-6-0 when u ∈ u---the pair of input-output strings 〈 l , u 〉 is : math-p-3-6-0 when u ∈ u
1
twitter is a widely used microblogging platform , where users post and interact with messages , “ tweets ”---twitter is a huge microblogging service with more than 500 million tweets per day from different locations of the world and in different languages ( cite-p-8-1-9 )
1
we used pos tags predicted by the stanford pos tagger---we build upon our previous approach for joint concept disambiguation and clustering
0
previous work showed that word clusters derived from an unlabelled dataset can improve the performance of many nlp applications---distributed word representations have been shown to improve the accuracy of ner systems
1
the use of prosody in speech understanding applications has been quite extensive---using prosodic knowledge for speech recognition is still quite limited
1
the backbone of our system is a statistical retrieval engine which performs automated indexing of documents , then search and ranking in response to user queries---the backbone of the system is a family of svm classifiers for pairs of mentions : each mention type receives its own classifier
1
coreference resolution is a task aimed at identifying phrases ( mentions ) referring to the same entity---coreference resolution is a key problem in natural language understanding that still escapes reliable solutions
1
we base our model on the recurrent neural network language model of mikolov et al which is factored into an input layer , a hidden layer with recurrent connections , and an output layer---our model has a similar structure to the recurrent neural network language model of mikolov et al which is factored into an input layer , a hidden layer with recurrent connections , and an output layer
1
this data split is different from other similar text classification shared tasks which provide much more training than test instances---our peer-learning agent , ksc-pal , has at its core the tutalk system , a dialogue management system that supports natural language dialogue in educational applications
0
string-based models include string-to-string and string-to-tree---string-based approaches include both string-to-string and string-to-tree systems
1
we use 300 dimensional glove embeddings trained on the common crawl 840b tokens dataset , which remain fixed during training---we use the pre-trained glove 50-dimensional word embeddings to represent words found in the glove dataset
1
we trained a 5-gram language model on the xinhua portion of gigaword corpus using the srilm toolkit---birke and sarkar use literal and non-literal seed sets acquired without human supervision to perform bootstrapping learning
0
for each connective we built a specialized classifier , by using the stanford maximum entropy classifier package---we use a maximum entropy classifier which allows an efficient combination of many overlapping features
1
the grammar design is based on the standard hpsg analysis of english---latent dirichlet allocation is one of the most popular topic models used to mine large text data sets
0
the language model was constructed using the srilm toolkit with interpolated kneser-ney discounting---a 5-gram language model with kneser-ney smoothing is trained using srilm on the target language
1
in particular , we use the liblinear package which has been shown to be efficient for text classification problems such as this---specifically , we use the liblinear svm package as it is well-suited to text classification tasks with large numbers of features and texts
1
in the remainder of this paper , we briefly review the models of selectional preferences we consider ( section 2 )---we use the 300-dimensional skip-gram word embeddings built on the google-news corpus
0
to obtain these features , we use the word2vec implementation available in the gensim toolkit to obtain word vectors with dimension 300 for each word in the responses---we also consider the recently popular word2vec tool to obtain vector representation of words which are trained on 300 million words of google news dataset and are of length 300
1
we apply the rules to each sentence with its dependency tree structure acquired from the stanford parser---we use the stanford parser to generate the grammar structure of review sentences for extracting syntactic d-features
1
we use srilm to train a 5-gram language model on the target side of our training corpus with modified kneser-ney discounting---we use srilm to train a 5-gram language model on the xinhua portion of the english gigaword corpus 5th edition with modified kneser-ney discounting
1
as a count-based baseline , we use modified kneser-ney as implemented in kenlm---we choose modified kneser-ney as the smoothing algorithm when learning the ngram model
1
the in-house phrase-based decoder is used to perform decoding---the kit system uses an in-house phrase-based decoder to perform translation
1
we calculated the language model probabilities using kenlm , and built a 5-gram language model from the english gigaword fifth edition---to rerank the candidate texts , we used a 5-gram language model trained on the europarl corpus using kenlm
1
in this work , we employ the toolkit word2vec to pre-train the word embedding for the source and target languages---we use a popular word2vec neural language model to learn the word embeddings on an unsupervised tweet corpus
1
the non-embedding weights are initialized using xavier initialization---we used the moses mt toolkit with default settings and features for both phrase-based and hierarchical systems
0
we extract the corresponding feature from the output of the stanford parser---we apply the rules to each sentence with its dependency tree structure acquired from the stanford parser
1
we used srilm to build a 4-gram language model with interpolated kneser-ney discounting---we used the srilm toolkit to train a 4-gram language model on the english side of the training corpus
1
we use the glove pre-trained word embeddings for the vectors of the content words---we use pre-trained 50-dimensional word embeddings vector from glove
1
relation extraction ( re ) is the process of generating structured relation knowledge from unstructured natural language texts---relation extraction is the task of detecting and classifying relationships between two entities from text
1
srilm toolkit was used to create up to 5-gram language models using the mentioned resources---a 5-gram language model was built using srilm on the target side of the corresponding training corpus
1
we initialize the word embedding matrix with pre-trained glove embeddings---for the mix one , we also train word embeddings of dimension 50 using glove
1
recently , shen et al have shown that dependency language model is beneficial for capturing long-distance relations between target words---shen et al proposed a string-to-dependency target language model to capture long distance word orders
1
text categorization is the classification of documents with respect to a set of predefined categories---text categorization is the task of assigning a text document to one of several predefined categories
1
word sense disambiguation ( wsd ) is a key enabling-technology---word sense disambiguation ( wsd ) is the task of determining the meaning of a word in a given context
1
the bleu score measures the precision of n-grams with respect to a reference translation with a penalty for short translations---by using the algorithm , the similarity between each pair of nodes is calculated , and figure 5 shows the resulting similarity matrix
0
for example , ( cite-p-19-3-5 ) applied recursive neural networks as a variant of the standard rnn structured by syntactic trees to the sentiment analysis task---for example , ( cite-p-19-3-5 ) uses recursive neural networks to build representations of phrases and sentences
1
most previous works addressing the task of bilingual lexicon extraction from comparable corpora are based on the standard approach---as has been noted earlier , the standard approach is proposed to extract bilingual lexica from comparable corpora
1
we use bleu to evaluate translation quality---we report bleu scores to compare translation results
1
in the 10p case , the new opinion words extracted by our approach could cover almost 75 % of the whole opinion set whereas the corresponding seed words only cover 8 % of the opinion words in the data ( see the init line )---in the 10p case , the new opinion words extracted by our approach could cover almost 75 % of the whole opinion set whereas the corresponding seed words only cover 8 % of the opinion words in the data
1
within this subpart of our ensemble model , we used an svm model from the scikit-learn library---we compared naïve bayes , linear svm , and rbf svm classifiers from the scikit-learn package
1
word embedding models are aimed at learning vector representations of word meaning---the decoding weights were optimized with minimum error rate training
0
our neural models achieve state-of-the-art results on the semeval 2010 relation classification task---we build an open-vocabulary language model with kneser-ney smoothing using the srilm toolkit
0
the system was trained using the moses toolkit---it was trained on the webnlg dataset using the moses toolkit
1
we use case-sensitive bleu-4 to measure the quality of translation result---we evaluate the performance of different translation models using both bleu and ter metrics
1
we measured the overall translation quality with 4-gram bleu , which was computed on tokenized and lowercased data for all systems---we measured performance using the bleu score , which estimates the accuracy of translation output with respect to a reference translation
1
reinforcement learning is a machine learning technique that defines how an agent learns to take optimal actions in a dynamic environment so as to maximize a cumulative reward---we dedicate to the topic of aspect ranking , which aims to automatically identify important aspects of a product from consumer reviews
0
most methods fall into three types : unordered models , sequence models , and convolutional neural networks models---neural models can be categorized into two classes : recursive models and convolutional neural networks ( cnn ) models
1
in recent years , neural machine translation based on encoder-decoder models has become the mainstream approach for machine translation---sentence compression is a text-to-text generation task in which an input sentence must be transformed into a shorter output sentence which accurately reflects the meaning in the input and also remains grammatically well-formed
0
the penn discourse treebank is a large corpus annotated with discourse relations---the penn discourse treebank is the largest resource to date that provides a discourse annotated corpus in english
1
chen et al adopt a deep gated neural model to capture the semantic interactions between argument pairs---chen et al adopted a gated relevance network to capture the semantic interaction between word pairs
1
inspired by centering theory , these annotations are used in a computational account of discourse focus to measure coherence---we perform pre-training using the skipgram nn architecture available in the word2vec tool
0
we use the stanford parser to derive the trees---we use the stanford parser for syntactic and dependency parsing
1
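Several rows above mention scikit-learn classifiers; in the same spirit, a minimal baseline sketch for predicting the 0/1 label from a sentence pair, assuming load_pairs from the sketch above. The TF-IDF features, pair concatenation, and pipeline are illustrative choices, not part of this dataset:

    # Hypothetical bag-of-words baseline: TF-IDF over the concatenated pair,
    # scored with 5-fold cross-validated logistic regression.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline

    rows = load_pairs("pairs.txt")               # (sent_a, sent_b, label) triples
    texts = [a + " ||| " + b for a, b, _ in rows]
    labels = [y for _, _, y in rows]

    clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                        LogisticRegression(max_iter=1000))
    print(cross_val_score(clf, texts, labels, cv=5).mean())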