Columns: text (string, length 82-736) and label (int64, value 0 or 1). Each record below is a text line (two sentences joined by "---") followed by its label line.
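A minimal sketch of how such a dump could be parsed into (sentence, sentence, label) records, assuming the strict alternation of text and label lines shown below; the parse_rows helper and its record layout are illustrative, not part of the original data.

from typing import List, Tuple

def parse_rows(lines: List[str]) -> List[Tuple[str, str, int]]:
    """Pair each text line with the label line that follows it."""
    records = []
    for text_line, label_line in zip(lines[0::2], lines[1::2]):
        # each text field holds two sentences joined by "---"
        first, second = text_line.split("---", 1)
        records.append((first.strip(), second.strip(), int(label_line)))
    return records

# e.g. parse_rows(["sentence a---sentence b", "1"]) -> [("sentence a", "sentence b", 1)]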
we tune model weights using minimum error rate training on the wmt 2008 test data---we set the feature weights by optimizing the bleu score directly using minimum error rate training on the development set
1
for syntax-based approaches , riloff and wiebe performed syntactic pattern learning while extracting subjective expressions---second , the commonly used notion of a domain neglects the fact that topic and genre are two distinct properties of text
0
cite-p-20-1-16 extended the above model to handle other types of non-standard words---cite-p-20-1-0 treated sms as another language , and used mt methods to translate
1
we then propose a novel way to incorporate this information into the latent variable model---in this paper , we propose a novel latent variable model for viewpoint discovery
1
in comparison to paraphrase relations from general knowledge bases , relations acquired by our method are more effective as domain knowledge , demonstrating that we successfully learn from real users---than a query expansion baseline , our task-driven relations are more effective for solving science questions than relations from general knowledge sources
1
we evaluated the composite semi-supervised kpca model using data from the senseval-2 english lexical sample task---to calculate language model features , we train traditional n-gram language models with ngram lengths of four and five using the srilm toolkit
0
yang and eisenstein introduce an unsupervised log-linear model for the task of text normalization---the language model is trained and applied with the srilm toolkit
0
sentiment analysis is a collection of methods and algorithms used to infer and measure affection expressed by a writer---sentiment analysis is a multi-faceted problem
1
the baseline of our approach is a statistical phrase-based system which is trained using moses---for our baseline we use the moses software to train a phrase based machine translation model
1
finally we characterize inquiry semantics and the notion of meaning---we compare inquiry semantics to other kinds of semantics , and also identify the nature of meaning
1
in this study , we address the problem of extracting relations between entities from wikipedia ’ s english articles---study is intended to deal with the problem of extracting binary relations between entity pairs from wikipedia ’ s english version
1
it also has a competitive pos tagger that can be used alone or as part of collocation/idiom extraction---word sense disambiguation ( wsd ) is a key enabling-technology
0
wang et al focus on learning a word alignment model without a source-target corpus---borin and wang et al used pivot languages to improve word alignment
1
the approach is a statistical natural language generation system , trained discriminatively using sentences in the amr bank---the system is based on a statistical model whose parameters are trained discriminatively using annotated sentences in the amr bank corpus
1
to tackle this issue we propose an online framework for adaptive qe that targets reactivity and robustness to user and domain changes---to address this problem , we proposed the application of the online learning protocol to leverage users feedback and to tailor qe
1
metonymy is a figure of speech that uses “ one entity to refer to another that is related to it ” ( lakoff and johnson , 1980 , p.35 )---metonymy is typically defined as a figure of speech in which a speaker uses one entity to refer to another that is related to it ( cite-p-10-1-3 )
1
a tag is a rewriting system that derives trees starting from a finite set of elementary trees---we use a fourgram language model with modified kneser-ney smoothing as implemented in the srilm toolkit
0
commonly used word vectors are word2vec , glove and fasttext---typical language features are label en-coders and word2vec vectors
1
we obtained a vocabulary of 183,400 unique words after eliminating words which occur only once , stemming by a part-of-speech tagger , and stop word removal---we obtained a vocabulary of 320,935 unique words after eliminating words which occur only once , stemming by a part-of-speech tagger , and stop word removal
1
we use the well-known word embedding model that is a robust framework to incorporate word representation features---in addition , we utilize the pre-trained word embeddings with 300 dimensions from for initialization
1
word sense disambiguation is the process of determining which sense of a homograph is correct in a given context---we use a state-of-the-art open-source system , multir , as the relation extraction component
0
one of the main reasons why tense/aspect error correction is difficult is that the choice of tense/aspect is highly dependent on global context---because the choice of tense and aspect highly depends on global context , which makes correction difficult
1
all features were log-linearly combined and their weights were optimized by performing minimum error rate training---these features were optimized using minimum error-rate training and the same weights were then used in docent
1
the statistics for these datasets are summarized in settings we use glove vectors with 840b tokens as the pre-trained word embeddings---eisner proposed a generative model for dependency parsing
0
the decoding weights are optimized with minimum error rate training to maximize bleu scores---shared tasks show that the dynamic oracle significantly improves accuracy on many languages over a static oracle baseline
0
we use a sentence-clustering approach to multidocument summarization , where sentences in the input documents are clustered according to their similarity---our summarisation strategy mirrors the multidocument summarisation strategy of barzilay , where sentences in the input documents are clustered according to their similarity
1
in this paper , we use the maximum entropy framework to automatically predict the correctness of kbp sf intermediate responses---we utilize maximum entropy model to design the basic classifier used in active learning for wsd and tc tasks
1
we employ conditional random fields to predict the sentiment label for each segment---we rely on conditional random fields 1 for predicting one label per reference
1
we preinitialize the word embeddings by running the word2vec tool on the english wikipedia dump---all smt models were developed using the moses phrase-based mt toolkit and the experiment management system
0
we show for the first time that self-training is able to significantly improve the performance of a pcfg-la parser , a single generative parser , on both small and large amounts of labeled training data---in this paper , for the first time , that self-training is able to significantly improve the performance of the pcfg-la parser , a single generative parser , on both small and large amounts of labeled training data
1
our translation model is implemented as an n-gram model of operations using srilm-toolkit with kneser-ney smoothing---we use the sri language model toolkit to train a 5-gram model with modified kneser-ney smoothing on the target-side training corpus
1
maltparser is a transition-based dependency parser generator---maltparser is a freely available implementation of the parsing models described in
1
using all hidden layers crucial for structured perceptron---using the structured perceptron with beam-search decoding
1
the cluster n-gram model is a variant of the n-gram model in which similar words are classified in the same cluster---the cluster n-gram model is a variant of the word n-gram model in which similar words are classified in the same cluster
1
bendersky et al proposed a joint framework for annotating queries with pos tags and phrase chunks---bendersky et al , also used top search results to generate structured annotation of queries
1
we also used pre-trained word embeddings , including glove and 300d fasttext vectors---for english posts , we used the 200d glove vectors as word embeddings
1
our 5-gram language model was trained by srilm toolkit---the language models in our systems are trained with srilm
1
we used kenlm with srilm to train a 5-gram language model based on all available target language training data---a tree domain is a set of node addresses drawn from n* ( that is , a set of strings of natural numbers ) in which ε is the address of the root and the children of a node at address w occur at addresses w0 , w1 , ... , in left-to-right order
0
further , we apply a 4-gram language model trained with the srilm toolkit on the target side of the training corpus---to calculate language model features , we train traditional n-gram language models with ngram lengths of four and five using the srilm toolkit
1
we initialize the word embeddings for our deep learning architecture with the 100-dimensional glove vectors---we use 100-dimension glove vectors which are pre-trained on a large twitter corpus and fine-tuned during training
1
similarly , lazaridou et al improve the word representations of derivationally related words by composing vector space representations of stems and derivational suffixes---lazaridou et al induced embeddings for complex words by adapting phrase composition models , whereas soricut and och automatically constructed a morphological graph by exploiting regularities within a word embedding space
1
we train embeddings using continuous bag-of-words model which can be used also to predict target words from the context---we complement the neural approaches with a simple neural network that uses word representations , namely a continuous bag-of-words model
1
we use minimum error rate training to tune the feature weights of hpb for maximum bleu score on the development set with serval groups of different start weights---minimum error rate training under bleu criterion is used to estimate 20 feature function weights over the larger development set
1
zhang approached the much simpler relation classification sub-task by bootstrapping on the top of svm---zhang approaches the relation classification problem with bootstrapping on top of svm
1
medical ontology alignment addresses this need by identifying the semantically equivalent concepts across multiple medical ontologies---alignment of medical ontologies facilitates the integration of medical knowledge that is relevant to medical image
1
temporal annotation is the task of identifying temporal relationships between pairs of temporal entities , namely temporal expressions and events , within a piece of text---temporal annotation is a time-consuming task for humans , which has limited the size of annotated data in previous tempeval exercises
1
we also compare our results to those obtained using the system of durrett and denero on the same test data---for the first experiment , we use the datasets provided by durrett and denero
1
the attention strategies have been widely used in machine translation and question answering---further uses of the attention mechanism include parsing , natural language question answering , and image question answering
1
we built on the work and guidelines by alkuhlani and habash---we build on the work of alkuhlani and habash , and use their manual annotation guidelines
1
the translation outputs were evaluated with bleu and meteor---the evaluation metric for the overall translation quality was case-insensitive bleu4
1
early studies have suggested that lexical features , word pairs in particular , will be powerful predictors of discourse relations---word-pair features are known to work very well in predicting senses of discourse relations in an artificially generated corpus
1
lexical simplification is a technique that substitutes a complex word or phrase in a sentence with a simpler synonym---lexical simplification is a subtask of text simplification ( cite-p-16-3-3 ) concerned with replacing words or short phrases by simpler variants in a context aware fashion ( generally synonyms ) , which can be understood by a wider range of readers
1
to initialize , we used the harmonic initializer presented in klein and manning---we use the deterministic harmonic initializer from klein and manning
1
lexical-functional grammar is an early member of the family of constraint-based grammar formalisms---lexical functional grammar is a member of the family of constraint-based grammars
1
pos are normally considered useful information in shallow and full parsing---we report the mt performance using the original bleu metric
0
for feature extraction , we used the stanford pos tagger---we used pos tags predicted by the stanford pos tagger
1
zou et al developed a tree kernel-based system to resolve the scope of negation and speculation , which captures the structured information in syntactic parsing trees---we present an unsupervised model of da sequences in conversation
0
for the automatic evaluation we used the bleu and meteor algorithms---we evaluated the translation quality using the bleu-4 metric
1
mirkin et al introduced a system for learning entailment rules between nouns that combines distributional similarity and hearst patterns as features in a supervised classifier---we used the pre-trained google embedding to initialize the word embedding matrix
0
in li and roth , they used wordnet for english and built a set of class-specific words as semantic features and achieved the high precision---li and roth made use of semantic features including named entities , wordnet senses , class-specific related words , and distributional similarity based categories
1
in this baseline , we applied the word embedding trained by skipgram on wiki2014---we used the pre-trained word embeddings that are learned using the word2vec toolkit on google news dataset
1
in order to capture the property of such phrases , we introduce latent variables into the models---in order to capture the properties of semantic orientations of phrases , we introduce latent variables into the models
1
table 4 shows the comparison of the performances on bleu metric---for feature building , we use word2vec pre-trained word embeddings
0
active learning ( al ) consists of asking human annotators to annotate automatically selected data that are assumed to bring the most benefit in the creation of a classifier---active learning ( al ) is a technique that can reduce this cost by setting up an interactive training/annotation loop that selects and annotates training examples that are maximally useful for the classifier that is being trained
1
to address this problem , we propose coverage-based nmt in this paper---in this work , we propose a coverage mechanism to nmt ( nmt-c overage )
1
starting from a collection of tagged images , it is possible to automatically construct an image-based representation of concepts by using offthe-shelf vsem functionalities---hierarchical phrase-based translation models that utilize synchronous context free grammars have been widely adopted in statistical machine translation
0
in addition , the model contains extra connections between adjacent hidden softmax units to formulate the dependency between latent states---when visible units are given , hssm has extra connections utilized to formulate the dependency between adjacent softmax units
1
the weights of the different feature functions were optimised by means of minimum error rate training on the 2008 test set---the weights of the log-linear interpolation model were optimized via minimum error rate training on the ted development set , using 200 best translations at each tuning iteration
1
section 2 gives a review of related works on emotion analysis---in this section , we introduce related work on emotion analysis including emotion
1
we use the moses statistical mt toolkit to perform the translation---for our baseline we use the moses software to train a phrase based machine translation model
1
to recognize explicit connectives , we construct a list of existing connectives labeled in the penn discourse treebank---for this reason , we first exploit indirect annotations of these distinctions in the form of certain types of discourse relations annotated in the penn discourse treebank
1
in this paper we propose to extend the wordnet model by adding a new data structure called words ( as opposed to lexical units ) which are recurrently used to express a concept---we present a proposal to extend wordnet-like lexical databases by adding phrasets , i . e . sets of free combinations of words which are recurrently used to express a concept
1
title queries are found to be preferred in mt-based clir---work on clir shows a trend to adopt mt-based query translation
1
recently , peters et al introduced elmo , a system for deep contextualized word representation , and showed how it can be used in existing task-specific deep neural networks---peters et al propose a deep neural model that generates contextual word embeddings which are able to model both language and semantics of word use
1
the language model is trained with the sri lm toolkit , on all the available french data without the ted data---the language model is a 3-gram language model trained using the srilm toolkit on the english side of the training data
1
our aso approach also outperforms two commercial grammar checking software packages in a manual evaluation---aso approach also outperforms two commercial grammar checking software packages in a manual evaluation
1
in this paper , we proposed a latent class transliteration method which models source language origins as latent classes---in this paper , we propose a latent class transliteration model , which models the source language origin as unobservable latent classes
1
in all the experiments , we use the naïve bayes multinomial classifier and its weka implementation 2 , with term-frequencies as feature values---in all the experiments , we use the naïve bayes multinomial classifier and its weka implementation
1
the feature weights λ m are tuned with minimum error rate training---the log-linear feature weights are tuned with minimum error rate training on bleu
1
to compute statistical significance , we use the approximate randomization test---we compute statistical significance using the approximate randomization test
1
mimus follows the information state update approach to dialogue management , and has been developed under the eu-funded talk project ( cite-p-14-3-9 )---mimus follows the information state update approach to dialogue management , and supports english , german and spanish , with the possibility of changing language
1
assuming that composition is a linear function of the cartesian product of math-w-2-3-2-59 and math-w-2-3-2-61 allows to specify additive models which are by far the most common method of vector combination in the literature ( cite-p-9-3-16 , cite-p-9-3-8 , cite-p-9-3-14 )---a zero pronoun ( zp ) is a gap in a sentence , which refers to an entity that supplies the necessary information for interpreting the gap ( cite-p-16-3-25 )
0
one of the most frequently used methods for removing redundancy is maximal marginal relevance---a hybrid model of the word-based and the character-based model has also been proposed by luong and manning
0
these results led vieira and poesio to propose a definite description resolution algorithm incorporating independent heuristic strategies for recognizing dn definite descriptions---vieira and poesio proposed an algorithm for definite description resolution that incorporates a number of heuristics for detecting discourse-new descriptions
1
the feature weights λ i are trained in concert with the lm weight via minimum error rate training---the models h m are weighted by the weights λ m which are tuned using minimum error rate training
1
the dataset proposed in hu and liu is the most used resource in aspect-based opinion summarization---syntactic parsing is the process of determining the grammatical structure of a sentence as conforming to the grammatical rules of the relevant natural language
0
we use a 5-gram language model with modified kneser-ney smoothing , trained on the english side of set1 , as our baseline lm---in this paper , two instance weighting technologies , i . e . , sentence weighting and domain weighting with a dynamic weight learning strategy , are proposed for nmt
0
this paper describes the first work of context-aware endto-end morph decoding---1 a context consists of all the patterns of n-grams within a certain window around the corresponding entity mention
0
relation extraction is the problem of populating a target relation ( representing an entity-level relationship or attribute ) with facts extracted from natural-language text---relation extraction is the task of tagging semantic relations between pairs of entities from free text
1
twitter is a famous social media platform capable of spreading breaking news , thus most of rumour related research uses twitter feed as a basis for research---experimental results demonstrate that the distributional similarity based models can significantly outperform their baseline systems
0
coreference resolution is the task of determining which mentions in a text refer to the same entity---coreference resolution is the task of determining whether two or more noun phrases refer to the same entity in a text
1
our model learns the probability distribution over all the candidate words by leveraging the entity type information---in this work , we propose to leverage the type information of such named entities
1
the empirical speech data was taken from the switchboard corpus which is part of the penn treebank corpus---the data comes from the conll 2000 shared task , which consists of sentences from the penn treebank wall street journal corpus
1
clark and curran evaluate a number of log-linear parsing models for ccg---clark and curran describe a log-linear glm for ccg parsing , trained on the penn treebank
1
empirical results in this section are achieved by the following experimental setting---in this section are achieved by the following experimental setting
1
for word embeddings , we trained a skip-gram model over wikipedia , using word2vec---we use distributed word vectors trained on the wikipedia corpus using the word2vec algorithm
1
sentence compression is the task of compressing long , verbose sentences into short , concise ones---sentence compression is the task of producing a summary at the sentence level
1
the targetside 4-gram language model was estimated using the srilm toolkit and modified kneser-ney discounting with interpolation---the sri language modeling toolkit was used to train a trigram open-vocabulary language model with kneser-ney discounting on data that had boundary events inserted in the word stream
1
pun is a way of using the characteristics of the language to cause a word , a sentence or a discourse to involve two or more different meanings---we obtained distributed word representations using word2vec 4 with skip-gram
0
a set on the right-hand side of a rule is shorthand for all possible orderings of the elements of the set---on the right-hand side of a rule is shorthand for all possible orderings of the elements of the set
1