text: string (lengths 82 to 736)
label: int64 (0 or 1)
we evaluate our model on a widely used dataset 1 which is developed by and has also been used by---the weights λ m in the log-linear model were trained using minimum error rate training with the news 2009 development set
0
we use the srilm toolkit to compute our language models---we use 5-grams for all language models implemented using the srilm toolkit
1
we extract the named entities from the web pages using the stanford named entity recognizer---we used the moses pbsmt system for all of our mt experiments
0
bahdanau et al made the first attempt to use an attention-based neural machine translation approach to jointly translate and align words---bahdanau et al proposed an attentional encoder-decoder architecture for machine translation
1
we used kneser-ney smoothing for training bigram language models---the srilm toolkit was used for training the language models using kneser-ney smoothing
1
in this study , we focus on the task of generating market comments from a time-series of stock prices---the glove 100-dimensional pre-trained word embeddings are used for all experiments
0
we use the named entity information and the predicate argument structures of the sentences to accomplish this goal---by exploiting the named entity information and the predicate argument structures of the sentences
1
we used standard classifiers available in scikit-learn package---we used the implementation of random forest in scikitlearn as the classifier
1
ravichandran and hovy proposed a method for learning untyped , anchored surface patterns in order to extract and rank answers for a given question type---ravichandran and hovy present an alternative ontology for type preference and describe a method for using this alternative ontology to extract particular answers using surface text patterns
1
the third step is application of the gtagger---the next step was the application of the gtagger ,
1
this paper describes our system participation in the semeval-2017 task 8 'rumoureval : determining rumour veracity and support for rumours'---we present our proposed system submitted as part of the semeval-2017 shared task on " rumoureval : determining rumour veracity and support for rumours "
1
the embedding layer in the model is initialized with 300-dimensional glove word vectors obtained from common crawl---this model first embeds the words using 300 dimensional word embeddings created using the glove method
1
these models can be tuned using minimum error rate training---the λ f are optimized by minimum-error training
1
we optimise the feature weights of the model with minimum error rate training against the bleu evaluation metric---then we use the standard minimum error-rate training to tune the feature weights to maximize the system's bleu score
1
liu et al suggested incorporating additional network architectures to further improve the performance of sdp-based methods , which uses a recursive neural network to model the sub-tree---liu et al developed a dependency-based neural network , in which a convolutional neural network has been used to capture features on the shortest path and a recursive neural network is designed to model subtrees
1
we estimate a 5-gram language model using interpolated kneser-ney discounting with srilm---the research described here is a further development of several strands of previous research
0
we used the wordsim353 test collection which consists of similarity judgments for word pairs---we use wordsim-353 , which contains 353 english word pairs with human similarity ratings
1
our discriminative model is a linear model trained with the margin-infused relaxed algorithm---we select the cutting-plane variant of the margin-infused relaxed algorithm with additional extensions described by eidelman
1
on the other hand , the /k/ sound of kitten is written with a letter k. nor is this lack of invariance between letters and phonemes the only problem---on the other hand , the / k / sound of kitten is written with a letter k . nor is this lack of invariance between letters and phonemes
1
we use logistic regression with l2 regularization , implemented using the scikit-learn toolkit---we use the wikipedia revision toolkit , an enhancement of the java wikipedia library , to gain access to the revision history of each article
0
following , we use the bootstrapresampling test to do significance testing---finally , we conduct paired bootstrap sampling to test the significance in bleu scores differences
1
a 4-gram language model is trained on the monolingual data by srilm toolkit---in the first part of the paper a novel , sortally-based approach to aspectual composition
0
in this paper , we modeled reg as a density-estimation problem---collobert et al first introduced an end-to-end neural-based approach with sequence-level training and uses a convolutional neural network to model the context window
0
venugopal et al propose a method to watermark the output of machine translation systems to aid this distinction---venugopal et al propose a method to watermark the output of machine translation systems to aid this distinction , with a negligible loss of quality
1
we adopted the case-insensitive bleu-4 as the evaluation metric---to train the parsing models , while we use subtree-based features and employ the original gold-standard data to train the models
0
we perform the above structured classification using linear-chain conditional random fields , a discriminative log-linear model for tagging and segmentation---given the above problem formulation , we trained the linear-chain undirected graphical model as conditional random fields , one of the best performing chunkers
1
subsequently , hashimoto et al introduced a method which jointly learns word and phrase embeddings by using a variety of predicateargument structures---hashimoto et al proposed a log-bilinear language model based on predicate-argument structures and report improvements on phrase similarity tasks compared to standard skipgram
1
as evaluation measures , we use the standard bleu as well as ribes , a reorderingbased metric that has been shown to have high correlation with human evaluations on the ntcir data---we evaluate the generated descriptions using sentence-level meteor and bleu4 , which have been shown to have moderate correlation with humans
1
we hypothesize that visual representations can be particularly useful for lexical entailment detection---for language model , we use a trigram language model trained with the srilm toolkit on the english side of the training corpus
0
we implemented the different aes models using scikit-learn---in the future , we will work on making the our approach scale to much larger vocabulary sizes using noise contrastive estimation
0
for evaluation , we compare each summary to the four manual summaries using rouge---we measure the quality of the automatically created summaries using the rouge measure
1
all the neural network models were optimized using adadelta , with mini-batches of 256 samples---the model parameters were optimized with adadelta , using a maximum sentence length of 80 and a minibatch size of 80
1
coreference resolution is the task of determining which mentions in a text refer to the same entity---coreference resolution is the task of determining which mentions in a text are used to refer to the same real-world entity
1
müller et al show that even higher-order crfs can be used for large tagsets when approximations are employed---indeed , müller et al and silfverberg et al show that sub-tag dependencies improve the performance of linear taggers
1
the meta-net project aims to ensure equal access to information by all european citizens---using only manually created lexical resources could lead to the performance improvement
0
we presented an unsupervised method for inferring the hierarchical grouping of the senses of a polysemous word---paper presents an unsupervised method for inferring the hierarchical grouping of the senses of a polysemous word
1
the language model is a 5-gram with interpolation and kneserney smoothing---the language model is a 5-gram lm with modified kneser-ney smoothing
1
we propose a vector representation technique that combines the complementary knowledge of both these types of resource---we put forward a novel concept representation technique , called n asari , which exploits the knowledge available in both types of resource
1
we used the sri language modeling toolkit to train a fivegram model with modified kneser-ney smoothing---we used 5-gram models , estimated using the sri language modeling toolkit with modified kneser-ney smoothing
1
semeval 2018 task 7 addresses this problem with a shared task on extracting and classifying semantic relations in scientific papers---translation results are evaluated using the word-based bleu score
0
we perform inference using point-wise gibbs sampling---to give additional expressiveness power to standard nns , many architectures have been proposed , such as lstm , gru , and cnn
0
further , we apply a 4-gram language model trained with the srilm toolkit on the target side of the training corpus---in our implementation , we train a tri-gram language model on each phone set using the srilm toolkit
1
to the best of our knowledge , the viterbi algorithm is the only algorithm widely adopted in the nlp field that offers exact decoding---however , the viterbi algorithm , which is the standard decoding algorithm in nlp , is not efficient
1
when applying trigram models , even with a rather low error rate of 7.1 % , semantic parsing performance degraded about 9 % absolute in f 1---when applying trigram models , even with a rather low error rate of 7 . 1 % , semantic parsing performance degraded about 9 % absolute
1
we use a count-based distributional semantics model and the continuous bag-of-words model to learn word vectors---for all pos tagging tasks we use the stanford log-linear part-ofspeech tagger
0
word sense disambiguation ( wsd ) is the task of determining the correct meaning ( “ sense ” ) of a word in context , and several efforts have been made to develop automatic wsd systems---turian et al found that the more clusters , the better the performance
0
our word embeddings is initialized with 100-dimensional glove word embeddings---for word embeddings , we consider word2vec and glove
1
we use the pre-trained glove vectors to initialize word embeddings---for all models , we use the 300-dimensional glove word embeddings
1
coreference resolution is the task of determining which mentions in a text refer to the same entity---in order to map queries and documents into the embedding space , we make use of recurrent neural network with the long short-term memory architecture that can deal with vanishing and exploring gradient problems
0
next we consider the context-predicting vectors available as part of the word2vec 6 project---for efficiency , we follow the hierarchical softmax optimization used in word2vec
1
a 4-gram language model which was trained on the entire training corpus using srilm was used to generate responses in conjunction with the phrase-based translation model---a 3-gram language model was trained from the target side of the training data for chinese and arabic , using the srilm toolkit
1
coreference resolution is the task of automatically grouping references to the same real-world entity in a document into a set---importantly , word embeddings have been effectively used for several nlp tasks
0
we propose a data-driven approach to story generation that does not require extensive manual involvement---we propose a data-driven approach for generating short children ' s stories that does not require extensive manual involvement
1
we pre-processed the data to add part-ofspeech tags and dependencies between words using the stanford parser---collobert and weston , in their seminal paper on deep architectures for nlp , propose a multilayer neural network for learning word embeddings
0
the parameters of the log-linear model are tuned by optimizing bleu on the development data using mert---the weights for the loglinear model are learned using the mert system
1
a 4-gram language model is trained on the monolingual data by srilm toolkit---the bleu score for all the methods is summarised in table 5
0
in this paper , we have also discussed some important implications of the notion of critical tokenization in the area of character string tokenization research and development---after discussing some helpful implications of critical tokenization in effective tokenization disambiguation and in efficient tokenization implementation , we suggest areas for future research
1
on the wmt'15 english to czech translation task , this hybrid approach offers an addition boost of +2.1 - 11.4 bleu points over models that already handle unknown words---on the wmt ' 15 english to czech translation task , such a hybrid approach provides an additional boost of + 2 . 1 - 11 . 4 bleu points over models that already handle unknown words
1
to solve the feature coverage problem with the em algorithm , meng et al leverage the unlabeled parallel data to learn unseen sentiment words---like lu et al , meng et al , 2012 also proposed their cross-lingual mixture model to leverage an unlabeled parallel dataset
1
we train a 4-gram language model on the xinhua portion of the english gigaword corpus using the srilm toolkits with modified kneser-ney smoothing---we used a 5-gram language model trained on 126 million words of the xinhua section of the english gigaword corpus , estimated with srilm
1
the feature weights for the log-linear combination of the features are tuned using minimum error rate training on the devset in terms of bleu---we then learn reranking weights using minimum error rate training on the development set for this combined list , using only these two features
1
for automated scoring of unrestricted spontaneous speech , speech proficiency has been evaluated primarily on aspects of pronunciation , fluency , vocabulary and language usage but not on aspects of content and topicality---for automated scoring of unrestricted , spontaneous speech , most automated systems have estimated the non-native speakers ’ speaking proficiency primarily based on low-level speaking-related features , such as pronunciation , intonation , rhythm , rate of speech , and fluency
1
in particular , we define an efficient tree kernel derived from the partial tree kernel , suitable for encoding structural representation of comments into support vector machines---we used the srilm toolkit to train a 4-gram language model on the xinhua portion of the gigaword corpus , which contains 238m english words
0
jeong et al use semi-supervised learning to transfer dialogue acts from labeled speech corpora to the internet media of forums and e-mail---jeong et al employed semi-supervised learning to transfer latent states from labeled speech corpora to the internet media and e-mail
1
we then process the whole chinese dataset using the stanford corenlp toolkit to get the pos and named entity tags---in this paper we presented a method to discover asymmetric entailment relations between verbs
0
for example , in extended wordnet , the rich glosses in wordnet are enriched by disambiguating the nouns , verbs , adverbs , and adjectives with synsets---complexity , i . e . , we propose a novel method to compress the embedding and prediction subnets in neural language models
0
adagrad with mini-batches is employed for optimization---a gradient based optimization named adagrad with mini-batch size of 32 is used
1
with this undoubted advantage come four major challenges when compared to standard frequency count smt models---with this undoubted advantage come four major challenges when compared to standard frequency count
1
bengio et al have proposed a neural network based model for vector representation of words---bengio et al proposed to use artificial neural network to learn the probability of word sequences
1
roth and lapata introduced dependency path embedding to model syntactic information and exhibited a notable success---roth and lapata employed dependency path embedding to model syntactic information and exhibited a notable success
1
reading comprehension is a general problem in the real world , which aims to read and comprehend a given article or context , and answer the questions based on it---reading comprehension is the ability to process some text and understand its contents , in order to form some beliefs about the world
1
table 2 shows the blind test results using bleu-4 , meteor and ter---text categorization is the task of automatically assigning predefined categories to documents written in natural languages
0
the semantic textual similarity task examines semantic similarity at a sentence-level---the task of semantic textual similarity measures the degree of semantic equivalence between two sentences
1
we use the open-source moses toolkit to build a phrase-based smt system trained on mostly msa data obtained from several ldc corpora including some limited da data---we used the moses mt toolkit with default settings and features for both phrase-based and hierarchical systems
1
the corpus-based approach is validated to work almost as well as the knowledge-based approach for computing word semantics---corpus-based approach or knowledge-based approach can be incorporated into the framework
1
li and yarowsky proposed an unsupervised method for extracting the mappings from chinese abbreviations and their full-forms---li and yarowsky propose an unsupervised method to extract the relations between full-form phrases and their abbreviations
1
specifically , we used the python scikit-learn module , which interfaces with the widely-used libsvm---we used svm classifier that implements linearsvc from the scikit-learn library
1
ceylan and kim compared a number of methods for identifying the language of search engine queries of 2 to 3 words---ceylan and kim compare a number of methods for identifying the language of search engine queries of 2 to 3 words
1
the bilingual embedding research origins in the word embedding learning , upon which zou et al utilize word alignments to constrain translational equivalence---zou et al learn bilingual word embeddings by designing an objective function that combines unsupervised training with bilingual constraints based on word alignments
1
we propose a new , simple model for the automatic induction of selectional preferences , using corpus-based semantic similarity metrics---we propose a new , simple model for selectional preference induction that uses corpus-based semantic similarity metrics , such as cosine or lin ’ s ( 1998 )
1
we computed the translation accuracies using two metrics , bleu score , and lexical accuracy on a test set of 30 sentences---we compared the performances of the systems using two automatic mt evaluation metrics , the sentence-level bleu score 3 and the document-level bleu score
1
this approach fits with samsa's stipulation , that an optimal structural simplification is one where each ( ucca- ) event in the input sentence is assigned a separate output sentence---the hidden vector state model is a discrete hidden markov model in which each hmm state represents the state of a push-down automaton with a finite stack size
0
we then review the related research on co-training---we describe our implementation of the co-training
1
the text samples include essays , emails , blogs , and chat---nature is quite different from the other text genres of emails , essays and blogs
1
we use moses , an open source toolkit for training different systems---our system is built using the open-source moses toolkit with default settings
1
we follow the description of the naive bayes classifier given in mccallum and nigam---we build on the framework of multinomial naive bayes text classification
1
the bleu score for all the methods is summarised in table 5---table 4 shows the bleu scores of the output descriptions
1
semantic parsing is the problem of mapping natural language strings into meaning representations---we divide the sentences into three types according to triplet overlap degree , including normal , entitypairoverlap
0
excluding ibm model 1 , the ibm translation models , and practically all variants proposed in the literature , have relied on the optimization of likelihood functions or similar functions that are non-convex---ibm translation models , and practically all variants proposed in the literature , have relied on the optimization of likelihood functions or similar functions that are non-convex
1
for this labeling , we estimate translation quality by the translation edit rate ter metric---we evaluate the performance of different translation models using both bleu and ter metrics
1
relation extraction is the task of extracting semantic relationships between entities in text , e.g . to detect an employment relationship between the person larry page and the company google in the following text snippet : google ceo larry page holds a press announcement at its headquarters in new york on may 21 , 2012---relation extraction ( re ) is a task of identifying typed relations between known entity mentions in a sentence
1
1 for example , "reserate" is correctly included in c rown as a hypernym of unlock % 2:35:00 : : ( to open the lock of ) and "awesometastic" as a synonym of fantastic % 3:00:00 : extraordinary:00 ( extraordinarily good or great )---for example , " reserate " is correctly included in c rown as a hypernym of unlock % 2 : 35 : 00 : : ( to open the lock of ) and " awesometastic " as a synonym of fantastic %
1
since coherence is a measure of how much sense the text makes , it is a semantic property of the text---coherence is a central aspect in natural language processing of multi-sentence texts
1
we used srilm to build a 4-gram language model with interpolated kneser-ney discounting---we measure the translation quality with automatic metrics including bleu and ter
0
bitpar employs a grammar engineered for german ,---the latter employs a grammar engineered for german
1
in fact , the lexiconbased methods can also be effective in sentiment classification---lexicon-based methods can be robust for cross-domain sentiment analysis
1
word sense disambiguation is the task of identifying the intended meaning of a given target word from the context in which it is used---word sense disambiguation is the task of assigning sense labels to occurrences of an ambiguous word
1
we show that in deployed dialog systems with real users , as in laboratory experiments , users adapt to the system's lexical and syntactic choices---in deployed dialog systems with real users , as in laboratory experiments , users adapt to the lexical and syntactic choices of the system
1
the choice of a support verb for a given nominalization is unpredictable , causing a problem for language learners as well as for natural language processing systems---this paper describes a simple pattern-matching algorithm for recovering empty nodes and identifying their coindexed antecedents in phrase structure
0
its weight is tuned via minimum error rate training---classifiers are then combined in a weighted ensemble to further enhance the cross-domain classification performance
0