Schema (reconstructed from the flattened dataset-viewer header):
  text: string (lengths 82 to 736), two sentences joined by "---"
  label: int64 (values 0 or 1)
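Each row below is a (text, label) pair: the text field holds two sentences joined by "---", and, judging from the examples, the label appears to mark whether the two sentences express the same content (1) or are unrelated (0). Below is a minimal loading sketch using the Hugging Face datasets library; the file name "pairs.jsonl" is an assumption, since this dump does not name its source.

```python
# Minimal sketch: load rows with the schema above from a local JSON Lines
# file. "pairs.jsonl" is a hypothetical file name; the dump gives no path.
from datasets import load_dataset

ds = load_dataset("json", data_files="pairs.jsonl", split="train")

for row in ds.select(range(3)):
    # Each "text" field holds two sentences joined by "---".
    sent_a, _, sent_b = row["text"].partition("---")
    print(row["label"], "|", sent_a.strip()[:60], "|", sent_b.strip()[:60])
```

The rows themselves are reproduced verbatim below (lowercasing, tokenization spacing, and placeholders such as "cite-p-16-3-24" are part of the data).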
smt training is automated using the moses experiment management system---the smt systems were trained using the moses toolkit and the experiment management system
1
following , we use the bootstrap resampling test to do significance testing---finally , we conduct paired bootstrap sampling to test the significance in bleu scores differences
1
we used srilm to build a 4-gram language model with interpolated kneser-ney discounting---firstly , we built a forward 5-gram language model using the srilm toolkit with modified kneser-ney smoothing
1
existing approaches to nested ner are mostly feature-based and thus suffer from heavy feature engineering---existing approaches to nested ner mainly rely on hand-crafted features
1
coreference resolution is a field in which major progress has been made in the last decade---coreference resolution is the task of grouping mentions to entities
1
relation extraction ( re ) has been defined as the task of identifying a given set of semantic binary relations in text---as it may be , strongly suggests that mtl is particularly beneficial for solving the word emotion induction problem
0
in our approach , the general sentiment information in sentiment lexicons is adapted to target domain with the help of a small number of labeled samples which are selected and annotated in an active learning mode---we adapt the general sentiment information in sentiment lexicons to target domain with the help of a small number of labeled samples which are selected and annotated in an active learning mode
1
previous research showed that the dependency based annotation scheme performs better than phrase based annotation scheme for such languages---given a set of mentions , our model tries to ensure that similar mentions are linked to similar entities
0
we begin with a maximum likelihood estimate of the joint based on a word aligned old -domain corpus and update this distribution using new -domain comparable data---derived from the old-domain parallel corpus , our method recovers a new joint distribution that matches the marginal distributions of the new-domain comparable
1
table 2 shows the blind test results using bleu-4 , meteor and ter---automatic evaluation results are shown in table 1 , using bleu-4
1
for both languages , we used the srilm toolkit to train a 5-gram language model using all monolingual data provided---we then created trigram language models from a variety of sources using the srilm toolkit , and measured their perplexity on this data
1
we used a 5-gram language model with modified kneser-ney smoothing implemented using the srilm toolkit---we use the sri language model toolkit to train a 5-gram model with modified kneser-ney smoothing on the target-side training corpus
1
we initialize the word embedding matrix with pre-trained glove embeddings---to keep consistent , we initialize the embedding weight with pre-trained word embeddings
1
the language model was a kneser-ney interpolated trigram model generated using the srilm toolkit---the language models are 4-grams with modified kneser-ney smoothing which have been trained with the srilm toolkit
1
unpruned language models were trained using lmplz which employs modified kneser-ney smoothing---huang et al , 2012 ) used the multi-prototype models to learn the vector for different senses of a word
0
the language model is a trigram-based backoff language model with kneser-ney smoothing , computed using srilm and trained on the same training data as the translation model---gram language models are trained over the target-side of the training data , using srilm with modified kneser-ney discounting
1
we employed the machine learning tool of scikit-learn 3 , for training the classifier---we used the logistic regression implemented in the scikit-learn library with the default settings
1
in the translation tasks , we used the moses phrase-based smt systems---for decoding , we used moses with the default options
1
the sentiment analysis is a field of study that investigates feelings present in texts---sentiment analysis is a growing research field , especially on web social networks
1
language models were built using the sri language modeling toolkit with modified kneser-ney smoothing---the language model used was a 5-gram with modified kneserney smoothing , built with srilm toolkit
1
the translation quality is evaluated by case-insensitive bleu-4---the translation quality is evaluated by case-insensitive bleu and ter metrics using multeval
1
sentence compression is a text-to-text generation task in which an input sentence must be transformed into a shorter output sentence which accurately reflects the meaning in the input and also remains grammatically well-formed---this dictionary is not easy to employ for nlp use but work in progress is aimed at addressing this problem
0
semantic parsing is the task of mapping a natural language ( nl ) sentence into a completely formal meaning representation ( mr ) or logical form---semantic parsing is the mapping of text to a meaning representation
1
we evaluated the translation quality using the case-insensitive bleu-4 metric---word embeddings are r 300 and initialized with pre-trained glove embeddings 4
0
the subtree ranking approach is a generalization of the perceptron-based approach---in this paper , we present a reinforcement learning framework for inducing mappings from text to actions
0
citation contexts were also used to improve the performance of citation recommendation systems and to study author influence---citation contexts were also used to improve the performance of citation recommendation systems and to study author influence in document networks
1
we used a logistic regression classifier provided by the liblinear software---we use the wrapper of the scikit learn python library over the liblinear logistic regression implementation
1
moreover , mmrbased feature selection sometimes produces some improvements of conventional machine learning algorithms over svm which is known to give the best classification accuracy---in this paper we present a text-to-text rewriting model that scales to non-isomorphic cases
0
v-measure assesses a cluster solution by considering its homogeneity and its completeness---v-measure assesses the quality of a clustering solution by explicitly measuring its homogeneity and its completeness
1
in this paper , we propose a statistical model to generate appropriate measure words of nouns for an englishto-chinese smt system---in a general smt system , this paper proposes a dedicated statistical model to generate measure words for englishto-chinese translation
1
information extraction ( ie ) is a fundamental technology for nlp---recently , with the development of neural network , deep learning based models attract much attention in various tasks
0
we also train an initial phrase-based smt system with the available seed corpus---we used a phrase-based smt model as implemented in the moses toolkit
1
in particular , we show that the class of string languages generated by linear context-free rewriting systems is equal to the class of output languages of deterministic tree-walking transducers [ 1 ]---we know that this class of languages is also equal to the string languages generated by context-free hypergraph grammars , multicomponent tree-adjoining grammars , and multiple context-free grammars and to the class of yields of images of the regular tree languages under finite-copying
1
for all three classifiers , we used the word2vec 300d pre-trained embeddings as features---for instance , bengio et al present a neural probabilistic language model that uses the n-gram model to learn word embeddings
0
for the classifiers we use the scikit-learn machine learning toolkit---for training the model , we use the linear kernel svm implemented in the scikit-learn toolkit
1
we used a 4-gram language model which was trained on the xinhua section of the english gigaword corpus using the srilm 4 toolkit with modified kneser-ney smoothing---on the remaining tweets , we trained a 10-gram word length model , and a 5-gram language model , using srilm with kneyser-ney smoothing
1
in this paper , i examine the benefits and possible disadvantages of using rich semantic representations as the basis for entailment recognition---in this paper , i have demonstrated how to build an entailment system from mrs graph alignment , combined with heuristic “ robust ”
1
the language model was a kneser-ney interpolated trigram model generated using the srilm toolkit---a statistical significance test based on a bootstrap resampling method , as shown in koehn , was performed
0
with reference to this system , we implement a data-driven parser with a neural classifier based on long short-term memory---we start with a bidirectional long short-term memory model that employs pretrained word embeddings
1
we used the srilm toolkit to train a 4-gram language model on the english side of the training corpus---we use srilm to train a 5-gram language model on the target side of our training corpus with modified kneser-ney discounting
1
all of our parsing models are based on the transition-based dependency parsing paradigm---our model is an extension of the transition-based parsing framework described by nivre for dependency tree parsing
1
to better evaluate our models , we also construct an idiom-enriched sentiment classification dataset with considerable scale and abundant peculiarities of idioms---into the real-world task , we introduce a sizeable idiom-enriched sentiment classification dataset , which covers abundant peculiarities of idioms
1
one of the biggest problems with dependency structure analysis in spontaneous speech is that clause boundaries are ambiguous---and consequently , problems peculiar to spontaneous speech arise in dependency structure analysis , such as ambiguous clause boundaries
1
relation extraction is the task of finding semantic relations between two entities from text---in our implementation , we train a tri-gram language model on each phone set using the srilm toolkit
0
our experiments use the ghkm-based string-totree pipeline implemented in moses---we used a phrase-based smt model as implemented in the moses toolkit
1
we used the target side of the parallel corpus and the srilm toolkit to train a 5-gram language model---we use srilm toolkit to build a 5-gram language model with modified kneser-ney smoothing
1
we evaluated translation quality with the case-insensitive bleu-4 and nist---we evaluated translation output using case-insensitive ibm bleu
1
we use word2vec to train the word embeddings---we use the word2vec skip-gram model to train our word embeddings
1
coreference resolution is a central problem in natural language processing with a broad range of applications such as summarization ( cite-p-16-3-24 ) , textual entailment ( cite-p-16-3-12 ) , information extraction ( cite-p-16-3-11 ) , and dialogue systems ( cite-p-16-3-25 )---coreference resolution is the task of partitioning the set of mentions of discourse referents in a text into classes ( or ‘ chains ’ ) corresponding to those referents ( cite-p-12-3-14 )
1
dependency parsing is a crucial component of many natural language processing ( nlp ) systems for tasks such as relation extraction ( cite-p-15-1-5 ) , statistical machine translation ( cite-p-15-5-7 ) , text classification ( o ? zgu ? r and gu ? ngo ? r , 2010 ) , and question answering ( cite-p-15-3-0 )---dependency parsing is the task of labeling a sentence math-w-2-1-0-10 with a syntactic dependency tree math-w-2-1-0-16 , where math-w-2-1-0-24 denotes the space of valid trees over math-w-2-1-0-35
1
bagga and baldwin , 1998 ) proposed a method using the vector space model to disambiguate references to a person , place , or event across multiple documents---recently , bagga and baldwin proposed a method for determining whether two names or events refer to the same entity by measuring the similarity between the document contexts in which they appear
1
we train a simple logistic regression classifier with regularization constant of 1 , l2 penalty with liblinear solver on the tf-idf representations of each sentence---for all machine learning results , we train a logistic regression classifier implemented in scikitlearn with l2 regularization and the liblinear solver
1
the target language model was a standard ngram language model trained by the sri language modeling toolkit---a kn-smoothed 5-gram language model is trained on the target side of the parallel data with srilm
0
word sense disambiguation ( wsd ) is a fundamental task and long-standing challenge in natural language processing ( nlp )---for evaluation we use mteval-v13a from the moses toolkit and tercom 3 to score our systems on the bleu respectively ter measures
0
finally , we extract the semantic phrase table from the augmented aligned corpora using the moses toolkit---kruengkrai et al proposed a hybrid model including character-based and word-based features
0
answer selection ( as ) is a crucial subtask of the open domain question answering ( qa ) problem---in our experiments , we also demonstrate the applicability of our approach to another language
0
users typically know the database structure and contents---a typical user can most readily supply and identify the tables
1
more importantly , as terms are defined vis-à-vis a specific domain with a restricted register , it is expected that the quality rather than the quantity of the corpus matters more in terminology mining---previous approaches have used search engine page counts as substitutes for co-occurrence information
0
we use glove word embeddings , which are 50-dimension word vectors trained with a crawled large corpus with 840 billion tokens---in a second baseline model , we also incorporate 300-dimensional glove word embeddings trained on wikipedia and the gigaword corpus
1
the experiments of the phrase-based smt systems are carried out using the open source moses toolkit---all models have been estimated using publicly available software , moses , and corpora
1
we used the sri language modeling toolkit to train lms on our training data for each ilr level---we trained a 5-gram language model on the english side of each training corpus using the sri language modeling toolkit
1
we used the target side of the parallel corpus and the srilm toolkit to train a 5-gram language model---the translation quality is evaluated by caseinsensitive bleu-4 metric
0
the punctuation prediction problem has attracted research interest in both the speech processing community and the natural language processing community---boundary detection and punctuation prediction have been extensively studied in the speech processing field and have attracted research interest in the natural language processing field
1
in this paper we presented an unsupervised dynamic bayesian modeling approach to modeling speech style accommodation in face-to-face interactions---in this paper , we present an unsupervised dynamic bayesian model that allows us to model stylistic style accommodation
1
second , we utilize word embeddings 3 to represent word semantics in dense vector space---this paper presents our work also on dialogue topic tracking
0
in this paper , we study the discriminative training of query spelling correction , which is potentially beneficial to many existing studies---in this paper , we propose a new discriminative model for query correction that maintains the advantage of a discriminative model in accommodating flexible combination of features
1
the third system approaches relation classification problem with bootstrapping on top of svm , proposed by zhang---zhang approaches the relation classification problem with bootstrapping on top of svm
1
mikolov et al found that the learned word representations capture meaningful syntactic and semantic regularities referred to as linguistic regularities---we used the svd implementation provided in the scikit-learn toolkit
0
for the semantic language model , we used the srilm package and trained a tri-gram language model with the default goodturing smoothing---we propose an endto-end question answering ( qa ) model that learns to correctly answer questions
0
our results show why it is important to be precise about exactly what tree-to-dependency conversion scheme is used---for detecting mwes and nes we use the crf sequence-labeling algorithm
0
to train our neural algorithm , we apply word embeddings of a look-up from 100-d glove pre-trained on wikipedia and gigaword---we use pretrained 100-d glove embeddings trained on 6 billion tokens from wikipedia and gigaword corpus
1
the final parsing step is performed using parsito , which is a transitionbased parser with a neural-network classifier---recent research in this area has resulted in the development of several large kgs , such as nell , yago , and freebase , among others
0
following , we use gru as the recurrent unit in this paper---in this paper , we use the nmt model described in
1
we implemented linear models with the scikit learn package---we used standard classifiers available in scikit-learn package
1
relation extraction ( re ) is the task of recognizing relationships between entities mentioned in text---in this paper , we propose a new document clustering approach
0
our neural generator follows the standard encoder-decoder paradigm---our model is based on the standard lstm encoder-decoder model with an attention mechanism
1
moreover , back translation approaches show efficient use of monolingual data to improve neural machine translation---the use of synthetic data produced by means of the backtranslation technique is an effective way of benefiting from additional monolingual data
1
in this paper , we propose to conduct question search by identifying question topic and question focus---in this paper , we have proposed an approach to question search which models question topic and question focus
1
recently , a new pre-trained model bert obtains new state-of-the-art results on a variety of natural language processing tasks---recently , bert , a pre-trained deep neural network , based on the transformer , has improved the state of the art for many natural language processing tasks
1
we have extended this part of the algorithm with various edit costs to penalise more important features with higher edit costs for being outside the interval , which tree automata learned at the inference stage---and the difference is that we have added various edit costs to penalise more important features with higher edit costs for being outside the interval , which tree automata learned at the inference stage
1
grammar induction is the task of learning grammatical structure from plain text without human supervision---grammar induction is a central problem in computational linguistics , the aim of which is to induce linguistic structures from an unannotated text corpus
1
and luong et al have proposed the attention-based translation model---the weights of the linear ranker are optimized using the averaged perceptron algorithm
0
pennell and liu , 2010 , used a crf sequence modeling approach for deletionbased abbreviation---pennell and liu used a crf sequence modeling approach for deletion-based abbreviations
1
a 5-gram language model with kneser-ney smoothing is trained using s-rilm on the target language---for the language model , we used srilm with modified kneser-ney smoothing
1
the translation quality is evaluated by bleu and ribes---we propose novel linear associative units ( lau ) to reduce the gradient propagation length inside the recurrent unit
0
dependency parsing is the task of building dependency links between words in a sentence , which has recently gained a wide interest in the natural language processing community---dependency parsing is a valuable form of syntactic processing for nlp applications due to its transparent lexicalized representation and robustness with respect to flexible word order languages
1
unlike the best performing grammar-based parsers studied in rimell et al , neither mstparser nor maltparser was developed specifically as a parser for english , and neither has any special mechanism for dealing with unbounded dependencies---one important difference between mstparser and maltparser , on the one hand , and the best performing parsers evaluated in rimell et al , on the other , is that the former were never developed specifically as parsers for english
1
however , training this discriminative model using large-scale parallel corpus might be computationally expensive---however , training this discriminative model using large-scale corpus might be computationally expensive
1
once we have extracted all the features , we train a linear svm using python based scikit learn library for the purpose of classification---for training the model , we use the linear kernel svm implemented in the scikit-learn toolkit
1
cite-p-17-5-5 used a linear-time incremental model which can also benefits from various kinds of features including word-based features---cite-p-17-1-3 used an lstm architecture to capture potential long-distance dependencies , which alleviates the limitation of the size of context window
1
automatic and interactive statistical machine translation ( smt )---the nnlm weights are optimized as the other feature weights using minimum error rate training
0
ramachandran et al propose a mechanism to automate the extraction of patterns from the reference answer as well as highscoring student answers---recently ramachandran et al demonstrated effectiveness of using student answers to a question to extract patterns for asag
1
we then use extended lexrank algorithm to rank the sentences in each cluster---we obtained both phrase structures and dependency relations for every sentence using the stanford parser
0
a 5-gram language model on the english side of the training data was trained with the kenlm toolkit---a 5-gram language model of the target language was trained using kenlm
1
for language model , we used sri language modeling toolkit to train a 4-gram model with modified kneser-ney smoothing---it is a standard phrasebased smt system built using the moses toolkit
0
we obtain an upper bound of 1.75 bits per character---at the expense of rare characters , we can reach 4 . 46 bits per character
1
we have empirically demonstrated that our model is able to learn the complex structure of document , abstract pairs---that show that our model is able to learn to reliably identify word-and phrase-level alignments
1
in synchronous training , batches on parallel gpu are run simultaneously and gradients aggregated to update master parameters before resynchronization on each gpu for the following batch---in synchronous training , batches on parallel gpu are run simultaneously and gradients aggregated to update master parameters before resynchronization on each gpu
1
the weights for these features are optimized using mert---the model weights are automatically tuned using minimum error rate training
1
we trained a standard 5-gram language model with modified kneser-ney smoothing using the kenlm toolkit on 4 billion running words---unpruned language models were trained using lmplz which employs modified kneser-ney smoothing
1