text: string (lengths 82–736)
label: int64 (0 or 1)
we can also improve the representation used for spatial priors of objects in scenes---we present a spatial knowledge representation that can be learned from 3d scenes
1
we use 300-dimensional word embeddings from glove to initialize the model---we use pre-trained vectors from glove for word-level embeddings
1
pitler and nenkova show that discourse coherence features are more informative than other features for ranking texts with respect to their readability---pitler and nenkova investigated different features for text readability judgement and empirically demonstrated that discourse relations are highly correlated with perceived readability
1
sentence compression is a standard nlp task where the goal is to generate a shorter paraphrase of a sentence---sentence compression is the task of producing a summary at the sentence level
1
the feature weights for each system were tuned on development sets using the moses implementation of minimum error rate training---the weights of the different feature functions were tuned by means of minimum error-rate training executed on the europarl development corpus
1
by experiments , we show that the proposed model outperforms the bigram hidden markov model ( hmm ) -based tagging model---although the model leaves much room for improvement , it outperforms the hmm based model
1
also , taking advantage of properties of this corpus , cross-document inference is applied to obtain more “informative” probabilities---taking advantage of properties of this corpus , global inference is applied to provide more confident and informative data
1
language models are built using the sri-lm toolkit---the language models were trained using srilm toolkit
1
coreference resolution is a complex problem , and successful systems must tackle a variety of non-trivial subproblems that are central to the coreference task — e.g. , mention/markable detection , anaphor identification — and that require substantial implementation efforts---coreference resolution is a fundamental component of natural language processing ( nlp ) and has been widely applied in other nlp tasks ( cite-p-15-3-9 )
1
in the context of dependency parsing , bohnet and nivre and bohnet et al integrated tagging and dependency parsing , improving state-of-the-art accuracy for a set of typologically different languages---for dependency parsing , bohnet and nivre and bohnet et al present language-agnostic transition-based frameworks for jointly parsing and tagging input words , though without addressing the complex issue of retokenizing ambiguous input tokens
1
moreover , the nws scores show interesting correlations with perceived meaning of words indicated by concreteness and imageability psycholinguistic ratings---using the nws scores and pre-trained word embeddings shows a high degree of correlation with human similarity ratings
1
in this paper we present a formal computational framework for modeling manipulation actions---we present a novel approach to web search result clustering which is based on the automatic discovery of word senses from raw text – a task referred to as word sense induction ( wsi )
0
the 5-gram kneser-ney smoothed language models were trained by srilm , with kenlm used at runtime---the language model is trained and applied with the srilm toolkit
1
question answering ( qa ) is a challenging task that draws upon many aspects of nlp---feature weights themselves are learned via minimum error rate training as implemented in z-mert with the bleu metric
0
in addition , i plan to implement emotion modeling capabilities into itspoke and evaluate the effectiveness of doing so---experiments on the benchmark data set show that our model achieves comparable and even better performance
0
experimental results show that , our model achieves the state-of-the-art performance , and significantly outperforms previous text-enhanced models---experiment results show that our method can achieve the state-of-the-art performance , and significantly outperforms previous text-enhanced knowledge
1
these patterns are either manually identified or automatically extracted---duan et al made use of a tree-cut model to represent questions as graphs of topic terms
0
for implementation , we used the liblinear package with all of its default parameters---the scaling factors of the features were optimized for bleu on the development set with minimum error rate training on 100-best lists
0
we used the srilm toolkit to train a 4-gram language model on the english side of the training corpus---a 5-gram language model was built using srilm on the target side of the corresponding training corpus
1
a dyadic speaker-addressee model captures properties of interactions between two interlocutors---speaker-addressee model encodes the interaction patterns of two interlocutors
1
morphological analysis is the task of segmenting a word into morphemes , the smallest meaning-bearing elements of natural languages---our method of morphological analysis comprises a morpheme lexicon
1
hence , topics inferred by lda may not correlate well with human judgements even though they better optimize perplexity on held-out documents---however , perplexity on the heldout test set does not reflect the semantic coherence of topics and may be contrary to human judgments
1
gale et al . refer to this as the ‘one sense per discourse’ property ( cite-p-14-3-0 )---because of the ‘ one sense per discourse ’ claim ( cite-p-14-3-0 )
1
word sense disambiguation ( wsd ) is a particular problem of computational linguistics which consists in determining the correct sense for a given ambiguous word---word sense disambiguation ( wsd ) is the task of identifying the correct meaning of a word in context
1
of the three base systems , the feature-based model obtained the best results , outperforming the lstm-based models by .06---of the three base systems , the feature-based model obtained the best results , outperforming each lstm-based model ’s correlation by .06
1
abstract meaning representation is a semantic formalism where the meaning of a sentence is encoded as a rooted , directed graph---abstract meaning representation is a semantic representation where the meaning of a sentence is encoded as a rooted , directed graph
1
for building the baseline smt system , we used the open-source smt toolkit moses , in its standard setup---we have used the open source smt system , moses to implement the base decoder and the decoder that uses the proposed segmentation model
1
we applied bpe to all data using 32,000 merge operations---we trained a subword model using bpe with 29,500 merge operations
1
we proposed a minimally supervised method for multilingual paraphrase extraction---we propose a minimally supervised method for multilingual paraphrase extraction from definition sentences
1
our aim with the participation was to adapt language modeling techniques to this task---the aim of our participation was to adapt language modeling techniques to this task
1
the weights for these features are optimized using mert---the log-linear combination weights were optimized using mert
1
blitzer et al investigate domain adaptation for sentiment classifiers using structural correspondence learning---blitzer et al investigate domain adaptation for sentiment classifiers , focusing on online reviews for different types of products
1
neural network models have been exploited to learn dense feature representation for a variety of nlp tasks---convolutional neural networks are useful in many nlp tasks , such as language modeling , semantic role labeling and semantic parsing
1
semantic parsing is the task of translating natural language utterances into a machine-interpretable meaning representation ( mr )---semantic parsing is the task of mapping natural language utterances to machine interpretable meaning representations
1
in the vso constructions , the verb agrees with the syntactic subject in gender only , while in the svo constructions , the verb agrees with the subject in both number and gender---in the vso constructions , the verb agrees with the syntactic subject in gender only , while in the svo constructions , the verb agrees with the subject
1
wang et al have presented syntactic tree based matching for finding semantically similar questions---wang et al presented a syntactic tree matching method for finding similar questions
1
mikolov et al further proposed continuous bag-of-words and skip-gram models , which use a simple single-layer architecture based on inner product between two word vectors---in order to acquire syntactic rules , we parse the chinese sentence using the stanford parser with its default chinese grammar
0
in our work , we jointly learn and reason about relation-types , entities , and entity-types---we take chen and manning , which uses the arc-standard transition system
0
the heuristic rule assumes one sense per 3-gram , which we initially proposed through investigating a chinese sense-tagged corpus , stc---the heuristic rule assumes one sense per n-gram , which we initially verified through investigating a chinese sense-tagged corpus , stc
1
a context-free grammar ( cfg ) is a 4-tuple ( n , σ , s , r ) where n and σ are finite disjoint sets of nonterminal and terminal symbols , respectively , s is the start symbol and r is a finite set of rules---a context-free grammar ( cfg ) is a 4-tuple ( n , σ , r , s ) , where n is the set of nonterminals , σ the set of terminals , r the set of production rules and s a set of starting nonterminals ( i.e . multiple starting nonterminals are possible )
1
among them , twitter is the most popular service by far due to its ease for real-time sharing of information---twitter is a widely used social networking service
1
we perform inference using point-wise gibbs sampling---to train our model we use markov chain monte carlo sampling
1
with english gigaword corpus , we use the skip-gram model as implemented in word2vec to induce embeddings---we used the srilm toolkit to create 5-gram language models with interpolated modified kneser-ney discounting
0
in this paper we describe an approach to reducing the complexity of arabic morphology generation using discrimination trees and transformational rules---xie et al employs head-dependents relations as elementary structures and proposed a dependency-to-string model with good long distance reordering property
0
we also show that very high gradually decreasing exploration rates are required for convergence---very high gradually decreasing exploration rates are required for convergence
1
in this paper , we propose a new model based on the cbow , hence we focus attention on it---in this paper , we propose a new model based on the cbow
1
in section 2 , we review the existing approaches for categorical and arbitrary slot filling tasks and introduce related work---in section 2 , we review the existing approaches for categorical and arbitrary slot filling tasks
1
our model is thus a form of quasi-synchronous grammar---this paper presented a negative result about importance weighting for unsupervised domain adaptation of pos taggers
0
we used the moses toolkit to train the phrase tables and lexicalized reordering models---we preprocessed the training corpora with scripts included in the moses toolkit
1
nowadays , most conversational systems require extensive human annotation efforts in order to be fit for their task---nowadays , most conversational systems operate on a dialogue-act level and require extensive annotation efforts in order to be fit for their task
1
in this paper we report our work on anchoring temporal expressions in a novel genre , emails---in this paper we report our work on anchoring temporal expressions
1
we initialize our word vectors with 300-dimensional word2vec word embeddings---we use the word2vec tool to pre-train the word embeddings
1
on the pdtb data set , using dswe as features achieves significant improvements over baselines---in this paper , we propose to overcome this problem by replacing the source-language embedding layer of nmt with a bidirectional recurrent neural network that generates compositional representations of the input
0
in this work , we propose a multi-sentence qa challenge in which questions can be answered only using information from multiple sentences---li et al use sense paraphrases to estimate probabilities of senses and carry out wsd
0
in our feature set , we included linguistic features introduced by pitler and nenkova and partially overlapping with those used in cohmetrix for predicting text quality---we analyzed these features on the dataset created by pitler and nenkova which associates human readability ratings with each document
1
the language model was constructed using the srilm toolkit with interpolated kneser-ney discounting---the language model was a kneser-ney interpolated trigram model generated using the srilm toolkit
1
parameters were tuned using minimum error rate training---the standard minimum error rate training algorithm was used for tuning
1
word sense disambiguation ( wsd ) is a key enabling technology that automatically chooses the intended sense of a word in context---in natural language , a word often assumes different meanings , and the task of determining the correct meaning , or sense , of a word in different contexts is known as word sense disambiguation ( wsd )
1
translation quality is measured in truecase with bleu on the mt08 test sets---pronunciation dictionaries provide a readily available parallel corpus for learning to transduce between character strings and phoneme strings
0
we use the sri language model toolkit to train a 5-gram model with modified kneser-ney smoothing on the target-side training corpus---we then lowercase all data and use all sentences from the modern dutch part of the corpus to train an n-gram language model with the srilm toolkit
1
topic models , such as plsa and lda , have shown great success in discovering latent topics in text collections---since its introduction , topic modeling has been tailored to perform better on short texts such as microblogs
1
the log-linear combination weights were optimized using mert---the λ_f are optimized by minimum-error training
1
such a model can be used for identification of topics of unseen calls---such a domain model can be used for topic identification of unseen calls
1
we used 100 dimensional glove embeddings for this purpose---in this task , we use the 300-dimensional 840b glove word embeddings
1
evaluation shows that our integrated parsing approach outperforms the pipeline parsing approach on n-best parse trees , a natural extension of the widely-used pipeline parsing approach on the top-best parse tree---nombank shows that our integrated parsing approach outperforms the pipeline parsing approach on n-best parse trees , a natural extension of the widely used pipeline parsing approach
1
collins et al described six types of transforming rules to reorder the german clauses in german-to-english translation---collins et al ( 2005 ) analyze german clause structure and propose six types of rules for transforming german parse trees with respect to english word order
1
in this paper , we proposed an approach to represent rare words by sparse linear combinations of common ones---we propose an approach to represent uncommon words ’ embeddings by a sparse linear combination of common ones
1
the hidden unknown words could be identified using the approaches such as n-gram generation and phrase chunking---on the other hand , most obvious ways of reducing the bulk usually lead to a reduction in translation quality
0
turney and littman manually selected seven positive and seven negative words as a polarity lexicon and proposed using pointwise mutual information to calculate the polarity of a word---turney and littman proposed to compute pairwise mutual information between a target word and a set of seed positive and negative words to infer the so of the target word
1
furthermore , the bag-of-words methods we test are equivalent in retrieval accuracy to the more expensive segment order-sensitive methods , but superior in retrieval speed---in their optimum configuration , bag-of-words methods are shown to be equivalent to segment order-sensitive methods in terms of retrieval accuracy , but much faster
1
the results show that combining the huge space of tree fragments generalized at the lexical level provides an effective model for adapting re systems to new domains---lsa show that a suitable combination of syntax and lexical generalization is very promising for domain adaptation
1
each system is optimized using mert with bleu as an evaluation measure---the various smt systems are evaluated using the bleu score
1
the language model was a 5-gram language model estimated on the target side of the parallel corpora by using the modified kneser-ney smoothing implemented in the srilm toolkit---a 5-gram language model was created with the sri language modeling toolkit and trained using the gigaword corpus and english sentences from the parallel data
1
erk et al propose the exemplar-based model of selectional preferences , in turn based on erk---erk and erk et al describe a method that uses corpus-driven distributional similarity metrics for the induction of selectional preferences
1
we implement a distributed training strategy for the perceptron algorithm using the mapreduce framework---we adopt the iterative parameter mixing variation of the perceptron to scale to a large number of training examples
1
dependency parsing is a basic technology for processing japanese and has been the subject of much research---dependency parsing is a valuable form of syntactic processing for nlp applications due to its transparent lexicalized representation and robustness with respect to flexible word order languages
1
here we investigate a label propagation algorithm ( lp ) ( cite-p-16-3-4 ) for relation extraction task---we propose a novel ilp-based model using an interactive loop to create multi-document user-desired summaries
0
twitter is a microblogging service that has 313 million monthly active users---twitter is the medium where people post real-time messages to discuss different topics and express their sentiments
1
our model modifies the attention based architecture proposed by bahdanau et al , and implements it as a deep stack lstm framework---our system for this shared task is based on an encoder-decoder model proposed by bahdanau et al for neural machine translation
1
we use attitude predictions to construct an attitude vector for each discussant---relation extraction is the task of finding relational facts in unstructured text and putting them into a structured ( tabularized ) knowledge base
0
coreference resolution is the task of clustering a set of mentions in the text such that all mentions in the same cluster refer to the same entity---coreference resolution is the process of finding discourse entities ( markables ) referring to the same real-world entity or concept
1
together with this recognition mechanism , we used a heuristic similarity search method , to assign an unambiguous identifier to each concept recognized in the text---the system output is evaluated using the meteor and bleu scores computed against a single reference sentence
0
we learn our word embeddings by using word2vec on unlabeled review data---we use the skipgram model with negative sampling to learn word embeddings on the twitter reference corpus
1
liu et al proposed a context-sensitive rnn model that uses latent dirichlet allocation to extract topic-specific word embeddings---lin et al develop a sentence-level recurrent neural network language model that takes a sentence as input and tries to predict the next one based on the sentence history vector
1
word alignment is a fundamental problem in statistical machine translation---for training the model , we use the linear kernel svm implemented in the scikit-learn toolkit
0
recently , rnns with attention mechanisms have demonstrated success in various nlp tasks , such as machine translation , parsing , image captioning , and textual entailment---recently , using an attention mechanism with a neural networks has resulted in notable success in a wide range of nlp tasks , such as machine translation , speech recognition , and image captioning
1
the assumption is that a word vector is learned in such a way that it best predicts its surrounding words in a sentence or a document---while all of the target formalisms share a similar basic syntactic structure with penn treebank cfg ,
0
we evaluate the translation quality using the case-sensitive bleu-4 metric---since its introduction , topic modeling has been tailored to perform better on short texts such as microblogs
0
aida is the system presented with the conll-yago dataset and places emphasis on state-of-the-art ranking of candidate entity sets---the conll-yago dataset is an excellent target for end-to-end , wholedocument entity annotation
1
experiments on the naist text corpus demonstrate that without syntactic information , our model outperforms previous syntax-dependent models---performing experiments on the naist text corpus ( cite-p-31-3-9 ) , we demonstrate that even without syntactic information , our neural models outperform previous syntax-dependent models
1
this resource was created as a commissioned translation of the basic traveling expression corpus sentences from english and french to the different dialects---this resource was a commissioned translation of the basic traveling expression corpus sentences from english and french to the different dialects
1
the final results improve the state of the art in dependency parsing for all languages---all contribute to improved parsing accuracy , leading to new state-of-the-art results for all languages
1
bleu is the most commonly used metric for mt evaluation---bleu is the most commonly used metric for machine translation evaluation
1
to extract terms we used lingua english tagger for finding single and multi-token nouns and the stanford named entity recognizer to extract named entities---after sentence segmentation and tokenization , we used the stanford ner tagger to identify per and org named entities from each sentence
1
the idea is explored more comprehensively in ( cite-p-27-1-0 )---to address this issue , see e . g . ( cite-p-27-3-6 ) and ( cite-p-27-3-7 )
1
to generate these trees , we employ the stanford pos tagger and the stack version of the malt parser---our approach follows that of johnson et al , a multilingual mt approach that adds an artificial token to encode the target language to the beginning of each source sentence in the parallel corpus
0
we use srilm for training the 5-gram language model with interpolated modified kneser-ney discounting---we used srilm to build a 4-gram language model with kneser-ney discounting
1
the statistical phrase-based systems were trained using the moses toolkit with mert tuning---word sense disambiguation ( wsd ) is formally defined as the task of computationally identifying senses of a word in a context
0
the output of bigru is then used as the input to the capsule network---ravichandran and hovy extract semantic relations for various terms in a question answering system
0
our work is inspired by the successful application of word clustering in supervised nlp models---as discussed in the introduction , our work is related to previous work on integrating word embeddings into discrete models
1