text: string (lengths 82–736)
label: int64 (values 0 or 1)
semantic role labeling ( srl ) has been defined as a sentence-level natural-language processing task in which semantic roles are assigned to the syntactic arguments of a predicate ( cite-p-14-1-7 )---two of these features together , we finally outperform the continuous embedding features by nearly 2 points of f1 score
0
we initialize the embedding weights by the pre-trained word embeddings with 200 dimensional vectors---in this approach we are attempting to identify the importance of neural word embeddings to accurately capture the context of the main keywords of the abstracts
0
further , we apply a 4-gram language model trained with the srilm toolkit on the target side of the training corpus---we train a kn-smoothed 5-gram language model on the target side of the parallel training data with srilm
1
druck et al , 2008 ) propose a new generalized expectation criterion that learns a classification function from labeled features alone---druck et al described generalized expectation criteria in which a discriminative model can employ the labeled features and unlabeled instances
1
we use the sri language model toolkit to train a 5-gram model with modified kneser-ney smoothing on the target-side training corpus---english 4-gram language models with kneser-ney smoothing are trained using kenlm on the target side of the parallel training corpora and on the gigaword corpus
1
crfs are undirected graphical models trained to maximize a conditional probability---syntactic parsing is a central task in natural language processing because of its importance in mediating between linguistic expression and meaning
0
for language model , we use a trigram language model trained with the srilm toolkit on the english side of the training corpus---we use srilm for training the 5-gram language model with interpolated modified kneser-ney discounting
1
knowledge graphs like wordnet , freebase , and dbpedia have become extremely useful resources for many nlp-related applications---knowledge graphs such as wordnet , freebase and yago have been playing a pivotal role in many ai applications , such as relation extraction , question answering , etc
1
a 5-gram language model with kneser-ney smoothing is trained using srilm on the target language---we use srilm for training the 5-gram language model with interpolated modified kneser-ney discounting
1
cohen et al carry out a detailed analysis of argument realisation with respect to verbs and nominalisations , using the genia and pennbioie corpora---similarly , based on the genia and pennbioie corpora , cohen et al performed a study of argument realisation with respect to the nominalisation and alternation of biomedical verbs
1
we trained a 4-gram language model on the xinhua portion of gigaword corpus using the sri language modeling toolkit with modified kneser-ney smoothing---we used a 5-gram language model trained on 126 million words of the xinhua section of the english gigaword corpus , estimated with srilm
1
we used the bleu score to evaluate the translation accuracy with and without the normalization---we used the machine translation quality metric bleu to measure the similarity between machine generated tweets and the held out test sets
1
this paper introduces a new training set condensation technique designed for mixtures of labeled and unlabeled data---in this paper , we introduce a new semi-supervised learning algorithm that combines self-training and condensation to produce small subsets of labeled and unlabeled data
1
a 5-gram language model with kneser-ney smoothing was trained with srilm on monolingual english data---we used a 5-gram language model with modified kneser-ney smoothing implemented using the srilm toolkit
1
choudhury et al developed a supervised hidden markov model based approach for normalizing short message service texts---choudhury et al describe a supervised noisy channel model using hmms for sms normalization
1
we used a 5-gram language model with modified kneser-ney smoothing implemented using the srilm toolkit---we apply the rules to each sentence with its dependency tree structure acquired from the stanford parser
0
we compared the performances of the systems using two automatic mt evaluation metrics , the sentence-level bleu score 3 and the document-level bleu score---to verify sentence generation quantitatively , we evaluated the sentences automatically using bleu score
1
we used 5-gram models , estimated using the sri language modeling toolkit with modified kneser-ney smoothing---we built a 5-gram language model on the english side of europarl and used the kneser-ney smoothing method and srilm as the language model toolkit
1
the weights of the log-linear interpolation model were optimized via minimum error rate training on the ted development set , using 200 best translations at each tuning iteration---the weights of the different feature functions were tuned by means of minimum error-rate training executed on the europarl development corpus
1
therefore , we use the mean reciprocal rank , a standard metric used for evaluating ranked retrieval systems---here we suggest borrowing the mean reciprocal rank metric from the information retrieval domain
1
semantic parsing is the task of mapping natural language to machine interpretable meaning representations---semantic parsing is the task of translating text to a formal meaning representation such as logical forms or structured queries
1
the language model was a kneser-ney interpolated trigram model generated using the srilm toolkit---the annotation scheme leans on the universal stanford dependencies complemented with the google universal pos tagset and the interset interlingua for morphological tagsets
0
blei and mcauliffe and ramage et al used document label information in a supervised setting---blei and mcauliffe proposed supervised lda that can handle sentiments as observed labels
1
sentiment analysis is a collection of methods and algorithms used to infer and measure affection expressed by a writer---sentiment analysis is a natural language processing task whose aim is to classify documents according to the opinion ( polarity ) they express on a given subject ( cite-p-13-8-14 )
1
this work opens up avenues for use of word embeddings for sarcasm classification---in this paper , we explore use of word embeddings to capture context
1
a 4-gram language model was trained on the target side of the parallel data using the srilm toolkit---the language model is trained on the target side of the parallel training corpus using srilm
1
word sense disambiguation ( wsd ) is a widely studied task in natural language processing : given a word and its context , assign the correct sense of the word based on a predefined sense inventory ( cite-p-15-3-4 )---word sense disambiguation ( wsd ) is a difficult natural language processing task which requires that for every content word ( noun , adjective , verb or adverb ) the appropriate meaning is automatically selected from the available sense inventory 1
1
phrase-based smt models are tuned using minimum error rate training---semantic role labeling ( srl ) is the task of automatic recognition of individual predicates together with their major roles ( e.g . frame elements ) as they are grammatically realized in input sentences
0
unlike most previous work , which has used a small number of grammatical categories , we work with 680 morphosyntactic tags---which disregard semantic information , we integrate semantics by means of building textual entailment graphs over the topic clusters
0
morphological analysis is a staple of natural language processing for broad languages---morphological analysis is the segmentation of words into their component morphemes and the assignment of grammatical morphemes to grammatical categories and lexical morphemes to lexemes
1
bohnet and nivre introduced a transition-based system that jointly performed pos tagging and dependency parsing---bohnet and nivre derived a system that could produce both labeled dependency trees as well as part-of-speech tags in a joint transition system
1
subsequently we compare the model to previously proposed architectures and finally describe the experimental results on the performance of our model---in the future studies , we would explore the possibility of promoting diversity on the learning procedure , by directly optimizing diversity loss
0
we used a paragraph vector model to obtain these phrase embeddings---in this paper we use paragraph vector , proposed by , to build unsupervised language models
1
charniak and johnson , eg , supply a discriminative reranker that uses eg , features to capture syntactic parallelism across conjuncts---semantic role labeling ( srl ) is the task of automatically annotating the predicate-argument structure in a sentence with semantic roles
0
we also use a 4-gram language model trained using srilm with kneser-ney smoothing---the language model was constructed using the srilm toolkit with interpolated kneser-ney discounting
1
the two contributions together significantly improves unlabeled dependency accuracy from 90.82 % to 92.13 %---contributions combined significantly improves unlabeled dependency accuracy : 90.82 % to 92.13 %
1
sentiment analysis is the task of identifying the polarity ( positive , negative or neutral ) of review---in culotta and sorensen such kernels were slightly generalized by providing a matching function for the node pairs
0
in this paper , we investigate the problem of jointly learning categories and their feature types---in this paper we presented a cognitively motivated bayesian model which jointly learns categories and their features
1
the training objective of the skip-gram model is to find word representations that are useful to predict the surrounding---based on the distributional hypothesis , we train a skip-gram model to learn the distributional representations of words in a large corpus
1
more recently , mcdonald et al have investigated a model for jointly performing sentence- and document-level sentiment analysis , allowing the relationship between the two tasks to be captured and exploited---mcdonald et al propose a model which jointly identifies global polarity as well as paragraph- and sentence-level polarity , all of which are observed in training data
1
as shieber hoped , direct parsing is better than using earley 's algorithm on an expanded grammar---syntactic parsing is the process of determining the grammatical structure of a sentence as conforming to the grammatical rules of the relevant natural language
0
a prefix verb is a derived word with a bound morpheme as prefix---a prefix verb appears with a hyphen between the prefix and stem
1
in japanese sentences , commas play an important role in explicitly separating the constituents , such as words and phrases , of a sentence---in a compositional way , c-phrase word vectors , when combined through simple addition , produce sentence representations that are better than those obtained when adding other kinds of vectors , and competitive against ad-hoc compositional methods
0
we present blast , an open source tool for error analysis of machine translation ( mt ) output---we present blast , 1 a graphical tool for performing human error analysis , from any mt system
1
dependency parsing is the task to assign dependency structures to a given sentence math-w-4-1-0-14---dependency parsing is a crucial component of many natural language processing ( nlp ) systems for tasks such as relation extraction ( cite-p-15-1-5 ) , statistical machine translation ( cite-p-15-5-7 ) , text classification ( özgür and güngör , 2010 ) , and question answering ( cite-p-15-3-0 )
1
gong et al and xiao et al introduce topic-based similarity models to improve smt system---xiao et al proposed a topic similarity model for rule selection
1
we adopt a long short-term memory network for the word-level and sentence-level feature extraction---we use long short-term memory networks to build another semantics-based sentence representation
1
we utilized pre-trained global vectors trained on tweets---we used 300-dimensional pre-trained glove word embeddings
1
in tmhmm , tmhmms and tmhmmss , the number of “ topics ” in the latent states and a dialogue is a hyperparameter---in tmhmm , tmhmms and tmhmmss , the number of “ topics ” in the latent states
1
we use mini-batch update and adagrad to optimize the parameter learning---we optimize the objective by initializing the parameters θ to zero and running adagrad
1
for significance tests , we use the wilcoxon signed ranks test---a single-layer lstm is used for both encoder and decoder
0
we use the dictionary of affect in language , augmented with wordnet for coverage---we use the most frequent sense of wordnet to annotate all verbs in the direct speech
1
we use the sri language model toolkit to train a 5-gram model with modified kneser-ney smoothing on the target-side training corpus---the language model is trained on the target side of the parallel training corpus using srilm
1
we used the sri language modeling toolkit to train lms on our training data for each ilr level---we used the srilm toolkit to create 5-gram language models with interpolated modified kneser-ney discounting
1
recently , deep learning has also been introduced to propose an end-to-end convolutional neural network for relation classification---informally , a compound is a combination of two or more words that function as a single unit of meaning
0
our phrase-based mt system is trained by moses with standard parameters settings---we used the moses mt toolkit with default settings and features for both phrase-based and hierarchical systems
1
we use the moses phrase-based mt system with standard features---our direct system uses the phrase-based translation system
1
we also compare against the syntactic function baseline , which is considered difficult to outperform in the unsupervised setting---it is important to note that the syntactic baseline is not trivial to beat in the unsupervised setting
1
h i and math-w-2-7-0-62 are projected vectors of entities---math-w-4-4-0-24 and math-w-4-4-0-27 represent the number of entities
1
serban et al further introduced a stochastic latent variable at each dialogue turn to improve the diversity of the hred model---serban et al further introduced a stochastic latent variable at each dialogue turn to improve the ambiguity and uncertainty of the hred model for dialogue generation
1
word alignment , which can be defined as an object for indicating the corresponding words in a parallel text , was first introduced as an intermediate result of statistical translation models ( cite-p-13-1-2 )---word alignment is a fundamental problem in statistical machine translation
1
in this paper , we propose a novel emotion-aware lda ( ealda ) model to build a domain-specific lexicon for predefined emotions that include anger , disgust , fear , joy , sadness , surprise---in this paper , we have presented a novel emotion-aware lda model that is able to quickly build a fine-grained domain-specific emotion lexicon
1
sentiment analysis is the task in natural language processing ( nlp ) that deals with classifying opinions according to the polarity of the sentiment they express---soft logic ( psl ) is a recently developed framework for probabilistic logic
0
we demonstrate that concept drift is a real , pervasive issue for learning from issue tracker streams---firstly , we explicitly show that concept-drift is pervasive and serious in real bug report streams
1
we use the stanford parser with stanford dependencies---we extract the corresponding feature from the output of the stanford parser
1
twitter is a microblogging service that has 313 million monthly active users 1---twitter is a microblogging social network launched in 2006 with 310 million active users per month and where 340 million tweets are daily generated 1
1
existing active learning methods usually randomly select a set of unlabeled samples to annotate and then train the initial classifier on them---an active learner uses a small set of labeled data to iteratively select the most informative instances from a large pool of unlabeled data for human annotators to label
1
these responses address particular topics and reflect diverse sentiments towards them , in accordance to predefined user agendas---but these responses do not target particular topics and are not driven by a concrete user agenda
1
campbell proposed a rule-based post-processing method based on linguistically motivated rules---campbell developed a set of linguistically motivated hand-written rules for gap insertion
1
we used moses tokenizer 5 and truecaser for both languages---we used a standard pbmt system built using moses toolkit
1
vector-space models of lexical semantics have been a popular and effective approach to learning representations of word meaning---corpus-derived models of semantics have been extensively studied in the nlp and machine learning communities
1
wordnet is a general english thesaurus which additionally covers biological terms---wordnet is a byproduct of such an analysis
1
we trained the five classifiers using the svm implementation in scikit-learn---ccg is a linguistically-motivated categorial formalism for modeling a wide range of language phenomena
0
in both settings , adding sentiment information reduced the dialog length and improved the task success rate on a bus information search task---the translation results are evaluated with case insensitive 4-gram bleu
0
chambers and jurafsky and jans et al give methods for learning models of pairs , as described above---chambers and jurafsky give a method of modeling and inferring simple pair-events
1
sentiment classification is the task of detecting whether a textual item ( e.g. , a product review , a blog post , an editorial , etc . ) expresses a positive or a negative opinion in general or about a given entity , e.g. , a product , a person , a political party , or a policy---sentiment classification is the task of classifying an opinion document as expressing a positive or negative sentiment
1
conditional random fields are discriminatively-trained undirected graphical models that find the globally optimal labeling for a given configuration of random variables---conditional random fields are undirected graphical models to calculate the conditional probability of values on designated output nodes given values on designated input nodes
1
for msa , we use the penn arabic treebank---the data we use comes from the penn arabic treebank
1
the well-known phrase-based statistical translation model extends the basic translation units from single words to continuous phrases to capture local phenomena---the well-known phrase-based translation model has significantly advanced the progress of smt by extending translation units from single words to phrases
1
experimental results show the effectiveness of the clustering-based stratified seed sampling for semi-supervised relation classification---our clustering-based stratified seed sampling strategy significantly improves the performance of semi-supervised relation classification
1
the first part of the paper develops a novel , sortally-based approach to the problem of aspectual composition---in the first part of the paper a novel , sortally-based approach to aspectual composition
1
xing et al show a substantial gain by normalizing the embeddings and constraining the mapping to be orthogonal---xing et al propose the orthogonal transformation and the vector length normalization during the learning phase
1
for all models , we use the 300-dimensional glove word embeddings---we use pre-trained 50 dimensional glove vectors 4 for word embeddings initialization
1
it builds on the distributional hypothesis which states that words occurring in similar contexts are semantically similar---the cbow method is based on the distributional hypothesis , which states that words that occur in similar contexts often possess similar meanings
1
srilm toolkit is used to build these language models---the models are built using the sri language modeling toolkit
1
mikolov et al uses a continuous skip-gram model to learn a distributed vector representation that captures both syntactic and semantic word relationships---we used the moses mt toolkit with default settings and features for both phrase-based and hierarchical systems
0
for both languages , we used the srilm toolkit to train a 5-gram language model using all monolingual data provided---we used the srilm toolkit to train a 4-gram language model on the xinhua portion of the gigaword corpus , which contains 238m english words
1
we then follow published procedures to extract hierarchical phrases from the union of the directional word alignments---we extract translation rules from a hypergraph for the hierarchical phrase-based system
1
second , we propose a novel abstractive summarization ( cite-p-10-1-6 ) technique to summarize content from multiple snippets of relevant information---second , we propose a novel abstractive summarization technique based on an optimization framework that generates section-specific summaries for wikipedia
1
various recent attempts have been made to include non-local features into graph-based dependency parsing---distributed representations of words have been widely used in many natural language processing tasks
0
the log-linear parameter weights are tuned with mert on the development set---met iterative parameter estimation under ibm bleu is performed on the development set
1
we use word embedding pre-trained on newswire with 300 dimensions from word2vec---in ( 1 ) , the two instances of the variation nucleus satisfy the non-fringe heuristic because they are properly contained within the identical variation
0
we primarily compared our model with conditional random fields---we used conditional random fields for the machine learning task
1
we use the svm implementation from scikit-learn , which in turn is based on libsvm---we used the implementation of random forest in scikit-learn as the classifier
1
the feature weights λ m are tuned with minimum error rate training---our proposed method can extract precise sentiment and topic lexicons from the target domain
0
as a baseline for this comparison , we use morfessor categories-map---we train a word embedding using word2vec over a large corpus of 55 , 463 product reviews
0
we use the word2vec framework in the gensim implementation to generate the embedding spaces---we train the twitter sentiment classifier on the benchmark dataset in semeval 2013
0
we have presented an approach that uses a supervised learning method with a graph based representation---metaphor is a figure of speech in which a word or phrase that ordinarily designates one thing is used to designate another , thus making an implicit comparison ( cite-p-19-1-11 , cite-p-19-1-12 , cite-p-19-3-15 )
0
for support vector machines , we used the liblinear package---for implementation , we used the liblinear package with all of its default parameters
1
all the weights of those features are tuned by using minimal error rate training---for simplicity , we use the well-known conditional random fields for sequential labeling
0
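A minimal parsing sketch for the flattened rows above, assuming each record is one text line (two sentences joined by "---") followed by one label line (0 or 1), and that the two schema lines at the top have already been skipped; the function name and the sample record are hypothetical, not part of the dataset.

```python
# Hypothetical sketch: parse flattened (text, label) rows like the ones above.
# Assumes the schema header lines were skipped and each text line is
# immediately followed by its integer label line.

def parse_rows(lines):
    """Yield (sentence_a, sentence_b, label) triples from flattened rows."""
    stream = iter(line.strip() for line in lines if line.strip())
    for text in stream:
        label = int(next(stream))        # the label line follows its text line
        a, _, b = text.partition("---")  # each text holds a "---"-joined pair
        yield a.strip(), b.strip(), label

if __name__ == "__main__":
    sample = [
        "we use srilm to train a 5-gram model---the language model is trained with srilm",
        "1",
    ]
    for a, b, label in parse_rows(sample):
        print(label, "|", a, "|", b)
```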