Columns: text (string, lengths 82 to 736) and label (int64, values 0 or 1).
Each record is one text line holding two sentences joined by "---", followed by its integer label on the next line.
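The sketch below is a hypothetical loader for this dump, assuming exactly the layout described above: a short schema header, then alternating text and label lines, with the two sentences of each text joined by "---". The file name pairs.txt and the names Pair and load_pairs are illustrative, not part of the source.

```python
# Minimal loader sketch for the alternating text/label dump described above.
# Assumptions: the schema header occupies the first lines before the data,
# blank lines carry no content, and every text line contains one "---".
from dataclasses import dataclass
from typing import List


@dataclass
class Pair:
    sentence_a: str
    sentence_b: str
    label: int  # 0 or 1, as given on the line after each text line


def load_pairs(path: str, header_lines: int = 2) -> List[Pair]:
    with open(path, encoding="utf-8") as f:
        lines = [ln.strip() for ln in f if ln.strip()]
    rows = lines[header_lines:]  # skip the schema header
    pairs = []
    for text, label in zip(rows[0::2], rows[1::2]):
        a, _, b = text.partition("---")  # split the two "---"-joined sentences
        pairs.append(Pair(a.strip(), b.strip(), int(label)))
    return pairs


pairs = load_pairs("pairs.txt")
print(len(pairs), pairs[0].label, pairs[0].sentence_a[:60])
```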
zens and ney show that itg constraints allow a higher flexibility in word ordering for longer sentences than the conventional ibm model---zens and ney showed that itg constraints allow a higher flexibility in word-ordering for longer sentences than the conventional ibm model
1
semantic parsing is the task of mapping natural language to machine interpretable meaning representations---semantic parsing is the task of translating natural language utterances into a machine-interpretable meaning representation
1
a common approach to the automatic extraction of semantically related words is to use distributional similarity---distributional similarity is used in many proposals to find semantically related words
1
in this article , we argue that kendall ’ s τ can be used as an automatic evaluation method for information-ordering tasks---kendall ’ s τ and explain how it can be employed for evaluating information ordering
1
we use our reordering model for n-best re-ranking and optimize bleu using minimum error rate training---we used minimum error rate training mert for tuning the feature weights
1
we have presented a generic phrase translation extraction procedure which is parameterized with feature functions---in this work , we present a generic discriminative phrase pair extraction framework that can integrate multiple features
1
research on the psychology of concepts shows that categories in the human mind are not simply sets with clear-cut boundaries---research on the human concept representation shows that categories in the human mind are not simply sets with clear-cut boundaries
1
stance detection has been defined as automatically detecting whether the author of a piece of text is in favor of the given target or against it---stance detection is the task of automatically determining from the text whether the author of the text is in favor of , against , or neutral towards a proposition or target
1
we use stanford log-linear part-of-speech tagger to produce pos tags for the english side---ushioda et al , 1993 , run a finite state np parser on a pos-tagged corpus to calculate the relative frequency of just six subcategorisation verb classes
0
we used kenlm with srilm to train a 5-gram language model based on all available target language training data---we describe a perceptron-style algorithm for training the neural networks , which not only speeds up the training of the networks with negligible loss in performance , but also can be implemented more easily
0
lin and he propose a method based on lda that explicitly deals with the interaction of topics and sentiments in text---for sentiment analysis , lin and he incorporate external information from a subjectivity lexicon
1
we build a 9-gram lm using srilm toolkit with modified kneser-ney smoothing---for building the baseline smt system , we used the open-source smt toolkit moses , in its standard setup
0
in this paper we presented a formal computational framework for modeling manipulation actions based on a combinatory categorial grammar---in this paper , we propose an approach for learning the semantic meaning of manipulation action
1
the tuning process was done using mert with minimum bayes-risk decoding on moses and focusing on maximizing the bleu score of the development set---the scaling factors of the features were optimized for bleu on the development set with minimum error rate training on 100-best lists
1
we derive 100-dimensional word vectors using word2vec skip-gram model trained over the domain corpus---we obtained distributed word representations using word2vec 4 with skip-gram
1
koo et al used word clusters trained on a large amount of unannotated data and designed a set of new features based on the clusters for dependency parsing models---koo et al used a clustering algorithm to produce word clusters on a large amount of unannotated data and represented new features based on the clusters for dependency parsing models
1
for language model , we used sri language modeling toolkit to train a 4-gram model with modified kneser-ney smoothing---we experiment with a machine learning strategy to model multilingual coreference for the conll-2012 shared task
0
for all experiments , we used a 4-gram language model with modified kneser-ney smoothing which was trained with the srilm toolkit---we used trigram language models with interpolated kneser-ney discounting trained using the sri language modeling toolkit
1
the task is to classify whether each comment is relevant to the question---qa , the task is to locate the smallest span in the given paragraph that answers the question
1
in this paper , we introduce a novel attentive node composition function that is based on slstm---in this study , we introduce neural tree indexers ( nti ) , a class of tree
1
we initialize the word embeddings for our deep learning architecture with the 100-dimensional glove vectors---we use glove 300-dimension embedding vectors pre-trained on 840 billion tokens of web data
1
we use the glove pre-trained word embeddings for the vectors of the content words---in our experiments , we use 300-dimension word vectors pre-trained by glove
1
in this paper , we present collaborative decoding ( or co-decoding ) , a new smt decoding scheme to leverage consensus information between multiple machine translation systems---in this paper , we present a framework of collaborative decoding , in which multiple mt decoders are coordinated to search for better translations
1
sentiment analysis in twitter is a particularly challenging task , because of the informal and “ creative ” writing style , with improper use of grammar , figurative language , misspellings and slang---sentiment analysis in twitter is the problem of identifying people ’ s opinions expressed in tweets
1
these two characteristics make our algorithm relatively easy to be extended to incorporate crossing-sensitive second-order features---compared to gchsw , our new algorithm has two characteristics that make it relatively easy to be extended to incorporate crossing-sensitive , second-order features
1
we used the svm implementation of scikit learn---a trigram language model with modified kneser-ney discounting and interpolation was used as produced by the srilm toolkit
0
the two baseline methods were implemented using scikit-learn in python---all models were implemented in python , using scikit-learn machine learning library
1
we evaluate text generated from gold mr graphs using the well-known bleu measure---we measure translation performance by the bleu and meteor scores with multiple translation references
1
specifically , we characterize the student ’ s knowledge as a vector of feature weights , which is updated as the student interacts with the system---because we take a student ’ s knowledge to be a vector of prediction parameters ( feature weights )
1
contrary to expectations , we find that nearest neighbour search on a stream based on clustering performs faster than lsh for the same level of accuracy---this showed how nearest neighbour search in data streams based on clustering performs faster than lsh , for the same level of accuracy
1
coreference resolution is the task of determining which mentions in a text refer to the same entity---for live chats , wu et al and forsyth defined 15 dialogue acts for casual online conversations based on previous sets
0
bengio et al proposed to use artificial neural network to learn the probability of word sequences---bengio et al propose a feedforward neural network to train a word-level language model with a limited n-gram history
1
we trained a trigram language model on the chinese side , with the srilm toolkit , using the modified kneser-ney smoothing option---we propose using maximum mutual information ( mmi ) as the objective function
0
previous work consistently reported that the word-based translation models yielded better performance than the traditional methods for question retrieval---translation model has been extensively employed in question search and has been shown to outperform the traditional ir methods significantly
1
the language model is a 5-gram with interpolation and kneser-ney smoothing---we created 5-gram language models for every domain using srilm with improved kneser-ney smoothing on the target side of the training parallel corpora
0
translation scores are reported using case-insensitive bleu with a single reference translation---the translation results are evaluated with case insensitive 4-gram bleu
1
culotta and sorensen described a slightly generalized version of this kernel based on dependency trees---zelenko et al and culotta and sorensen proposed kernels for dependency trees inspired by string kernels
1
we implemented linear models with the scikit learn package---we used the scikit-learn library for the svm model
1
marcu and echihabi proposed a method for cheap acquisition of training data for discourse relation sense prediction---we decided to explore the use of neural probabilistic language models ( nlpm ) for capturing this kind of behavior
0
as inputs we use a random sample of sentences from the penn treebank and represent each word as a 100d glove embedding---we begin by computing the similarity between words using word embeddings
0
we follow puduppully et al and , applying the learning and search framework of zhang and clark---we propose an unsupervised label propagation algorithm to collectively rank the opinion target
0
in practical treebanking , empty categories have been used to indicate long-distance dependencies , discontinuous constituents , and certain dropped elements---in treebanks , empty categories have been used to indicate long-distance dependencies , discontinuous constituents , and certain dropped elements
1
recent efforts in statistical machine translation have seen promising improvements in output quality , especially the phrase-based models and syntax-based models---hierarchical phrase-based translation models that utilize synchronous context free grammars have been widely adopted in statistical machine translation
1
for evaluation , case-insensitive nist bleu is used to measure translation performance---for our experiments we use the parallel europarl corpus
0
we evaluate our model on a widely used dataset 1 which is developed by and has also been used by---we evaluate our approach on the basis of nyt10 , a dataset developed by and then widely used in distantly supervised relation extraction
1
we extract utterances from the manchester corpus in the childes database---we used the dataset from the conll shared task for cross-lingual dependency parsing
0
for tree-to-string translation , we parse the english source side of the parallel data with the english berkeley parser---to pre-order the chinese sentences using the syntax-based reordering method proposed by , we utilize the berkeley parser
1
word sense disambiguation is the task to identify the intended sense of a word in a computational manner based on the context in which it appears---our work is most closely related to lee et al , li et al , who all present discriminative models for joint tagging and dependency parsing
0
we use latent dirichlet allocation to obtain the topic words for each lexical pos---we induce a topic-based vector representation of sentences by applying the latent dirichlet allocation method
1
we tune phrase-based smt models using minimum error rate training and the development data for each language pair---we adapt the minimum error rate training algorithm to estimate parameters for each member model in co-decoding
1
it uses flexible semantic templates to specify semantic patterns---for a detailed description of the system we have developed , the reader is referred to
0
we perform the mert training to tune the optimal feature weights on the development set---we set all feature weights using minimum error rate training , and we optimize their number on the development dataset
1
word sense disambiguation ( wsd ) is the task of determining the correct meaning for an ambiguous word from its context---word sense disambiguation ( wsd ) is a problem long recognised in computational linguistics ( yngve 1955 ) and there has been a recent resurgence of interest , including a special issue of this journal devoted to the topic ( cite-p-27-8-11 )
1
they used the web-based annotation tool brat for the annotation---in tree adjoining grammar , the extended domain of locality principle ensures that tag trees group together in a single structure
0
most of these works are devoted to phoneme-based transliteration modeling---this work uses either grapheme or phoneme based models to transliterate word lists
1
first , it clarifies the model , in the same way that knight and al-onaizan and kumar and byrne elucidate other machine translation models in easily-grasped fst terms---first , it makes the model very clear , in the same way that knight and al-onaizan and kumar and byrne elucidate other machine translation models in easily grasped fst terms
1
a particularly popular coherence model is the entity-based local coherence model of barzilay and lapata---the most prominent approach to entity-based coherence modeling nowadays is the entity grid model by barzilay and lapata
1
we use the skipgram model to learn word embeddings---we pre-train the word embeddings using word2vec
1
we initialize our word vectors with 300-dimensional word2vec word embeddings---we use the pre-trained word2vec embeddings provided by mikolov et al as model input
1
existing works are based on two basic models , plsa and lda---the long short-term memory was first proposed by hochreiter and schmidhuber that can learn long-term dependencies
0
in this work , we introduced a means for end-users to refine and improve the topics discovered by topic models---in this work , we develop a framework for allowing users to iteratively refine the topics discovered by models such as latent dirichlet allocation ( lda )
1
the experiment data used herein was the 35 nouns from the semeval-2007 english lexical sample task---we selected a subset of the ontonotes data , the semeval-2007 coarse-grained english lexical sample wsd task training data
1
we show later in the experiments that the proposed late fusion gives a better language modelling quality than the early fusion---we show in the experiments that the proposed late fusion gives a better language modelling quality than the early fusion
1
we used 5-gram models , estimated using the sri language modeling toolkit with modified kneser-ney smoothing---we train a 4-gram language model on the xinhua portion of the gigaword corpus using the sri language toolkit with modified kneser-ney smoothing
1
tsvetkov et al applied a random forest classifier to detect metaphorical and literal an phrases---the method of tsvetkov et al used both concreteness features and hand-coded domain information for words
1
co-training is a learning technique which combines classifiers that support different views of the data in a single learning mechanism---co-training is a semi-supervised learning technique that requires two different views of the data
1
poon and domingos introduced an unsupervised system in the framework of markov logic---we report both unlabeled attachment score and labeled attachment score
0
as an effort to fill this gap , in this paper we describe our contributions to the complex word identification task of semeval 2016---we have presented our contributions to the complex word identification task of semeval 2016
1
table 4 presents case-insensitive evaluation results on the test set according to the automatic metrics bleu , ter , and meteor---the bleu , rouge and ter scores by comparing the abstracts before and after human editing are presented in table 5
1
for the mix one , we also train word embeddings of dimension 50 using glove---we use 300-dimensional word embeddings from glove to initialize the model
1
the phrase structure trees produced by the parser are further processed with the stanford conversion tool to create dependency graphs---the stanford parser can output typed semantic dependencies that conform to the stanford dependencies
1
for all experiments , we used a 4-gram language model with modified kneser-ney smoothing which was trained with the srilm toolkit---we used 5-gram models , estimated using the sri language modeling toolkit with modified kneser-ney smoothing
1
in order to acquire syntactic rules , we parse the chinese sentence using the stanford parser with its default chinese grammar---we use the berkeley probabilistic parser to obtain syntactic trees for english and its adapted version for french
1
this paper proposes a generalized training framework of semi-supervised dependency parsing based on ambiguous labelings---alignment , can benefit from a wealth of effective , well established ip techniques , including convolution-based filters , texture analysis and hough transform
0
itspoke is a speech-enabled version of the text-based why2-atlas conceptual physics tutoring system---itspoke is a speech-enabled version of the why2-atlas text-based dialogue tutoring system
1
we train the parameters of the stages separately using adagrad with the perceptron loss function---the target language model was a trigram language model with modified kneser-ney smoothing trained on the english side of the bitext using the srilm toolkit
0
we present a generalized discriminative model for spelling error correction which targets character-level transformations---in this work , we present gsec , a generalized character-level spelling error correction
1
instead , we use an lstm to perform word-by-word matching of the hypothesis with the premise---for classification , our solution uses a match-lstm to perform word-by-word matching of the hypothesis with the premise
1
furthermore , we design a multi-layer directed graph to assign different trust levels to short texts for better performance---a multi-layer directed graph is designed to assign different trust levels to documents , which significantly improves the performance
1
in this paper , we present a chunk based partial parsing system for spontaneous , conversational speech in unrestricted domains---in this paper , we present a chunk based partial parser , following ideas from ( cite-p-21-1-0 ) , which is used to to generate shallow syntactic structures from speech
1
a 5-gram language model was built using srilm on the target side of the corresponding training corpus---senseval words , we showed that the wikipedia sense annotations can be used to build a word sense disambiguation system
0
we obtained parse trees using the stanford parser , and used jacana for word alignment---we trained a 4-gram language model on this data with kneser-ney discounting using srilm
0
as a classifier , we choose a first-order conditional random field model---we use a conditional random field sequence model , which allows for globally optimal training and decoding
1
domain adaptation is a challenge for ner and other nlp applications---saurí and saurí and pustejovsky proposed a rule-based model to identify event factuality on factbank
0
the cross lingual arabic blog alerts project is another large-scale effort to create dialectal arabic resources---the colaba project is another large effort to create dialectal arabic resources
1
the simile is a figure of speech that builds on a comparison in order to exploit certain attributes of an entity in a striking manner---a simile is a form of figurative language that compares two essentially unlike things ( cite-p-20-3-11 ) , such as “ jane swims like a dolphin ”
1
socher et al , 2012 , use a recursive neural network in relation extraction---using recurrent neural networks has become a very common technique for various nlp based tasks like language modeling
0
pang et al use machine learning methods to detect sentiments on movie reviews---pang et al applied machine learning based classifiers for sentiment classification on movie reviews
1
we initialize our word vectors with 300-dimensional word2vec word embeddings---our mt system was evaluated using the n-gram based bleu and nist machine translation evaluation software
0
latent dirichlet allocation is one of the widely adopted generative models for topic modeling---the benchmark model for topic modelling is latent dirichlet allocation , a latent variable model of documents
1
for the chunking task , we also employed generally used features in this case from sha and pereira---for chunking , we follow sha and pereira for the set of features , including token and pos information
1
what we have just described is a method for approximating the joint distribution of all variables with a model containing only the most important systematic interactions among variables---also described is a strategy for creating cooperative responses to user queries , incorporating an intelligent language generation capability that produces content-dependent verbal descriptions of listed items
1
xing et al presented topic aware response generation by incorporating topic words obtained from a pre-trained lda model---blei et al proposed lda as a general bayesian framework and gave a variational model for learning topics from data
1
rindflesch et al use hand-coded rule-based systems to extract the factual assertions from biomedical text---collobert et al propose a multi-task learning framework with dnn for various nlp tasks , including part-of-speech tagging , chunking , named entity recognition , and semantic role labelling
0
we use the well-known word embedding model that is a robust framework to incorporate word representation features---we use the word2vec framework in the gensim implementation to generate the embedding spaces
1
the 5-gram kneser-ney smoothed language models were trained by srilm , with kenlm used at runtime---these language models were built up to an order of 5 with kneser-ney smoothing using the srilm toolkit
1
the experiments were carried out using the chinese-english datasets provided within the iwslt 2006 evaluation campaign , extracted from the basic travel expression corpus---the experiments were carried out using the chinese-english datasets provided within the iwslt 2007 evaluation campaign , extracted from the basic travel expression corpus
1
as classifier we use a traditional model , a support vector machine with linear kernel implemented in scikit-learn---for training the model , we use the linear kernel svm implemented in the scikit-learn toolkit
1
our decoder uses a simple variant of the viterbi algorithm for solving a relaxed version of this model---we use the standard generative dependency model with valence
0
topics were generated using the latent dirichlet allocation implementation in mallet---however , character n-gram based approaches have largely outperformed function word based approaches indicating that some lexical words may also help with authorship attribution
0
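As a usage note, a purely illustrative baseline (not described anywhere in this dump) for these pairs is TF-IDF cosine similarity between the two sides of each pair, fed to a linear SVM via scikit-learn; it reuses the pairs list from the loader sketch above, and the split and model choice are assumptions.

```python
# Illustrative baseline sketch: TF-IDF cosine similarity between the two
# sides of each pair as a single feature for a linear SVM.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC
from sklearn.metrics import accuracy_score

texts_a = [p.sentence_a for p in pairs]
texts_b = [p.sentence_b for p in pairs]
labels = [p.label for p in pairs]

vec = TfidfVectorizer().fit(texts_a + texts_b)
A, B = vec.transform(texts_a), vec.transform(texts_b)
# TfidfVectorizer L2-normalizes rows by default, so the row-wise dot
# product of the two sides is exactly their cosine similarity.
X = np.asarray(A.multiply(B).sum(axis=1))  # shape (n_pairs, 1)

X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.2, random_state=0)
clf = LinearSVC().fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```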