text (string, lengths 82–736)
label (int64, values 0 or 1)
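Each row below is a "text" value, two citation sentences joined by "---", followed on the next line by its integer "label". A minimal parsing sketch in Python, assuming exactly this alternating text/label layout and a hypothetical file name; the label semantics (1 for a matched/paraphrase pair, 0 for an unrelated pair) are an assumption from inspecting the rows:

```python
# Minimal sketch: parse alternating text/label rows into records.
# Assumptions (not confirmed by the dump): the file is UTF-8, rows
# strictly alternate text then label, and label 1 marks a paraphrase
# pair while 0 marks an unrelated pair. "pairs.txt" is hypothetical.
def load_pairs(path="pairs.txt"):
    with open(path, encoding="utf-8") as f:
        lines = [ln.strip() for ln in f if ln.strip()]
    records = []
    for text, label in zip(lines[0::2], lines[1::2]):
        left, _, right = text.partition("---")  # split the sentence pair
        records.append({"text_a": left, "text_b": right, "label": int(label)})
    return records
```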
we use the word2vec framework in the gensim implementation to generate the embedding spaces---chinese-english tasks show that the proposed methods can substantially improve nmt
0
wieting et al use embedding models to identify paraphrastic sentences in such a mixed nlp task employing a large corpus of short phrases associated with paraphrastic relatives---wieting et al explored using supervision from paraphrase information to obtain custom-tailored word vectors that give rise to high-quality sentence embeddings
1
we make this speculation precise and define the problem of attachment to construct state constructions in the arabic treebank---we make their speculation precise and define the problem of attachment to construct state constructions in the atb by extracting out such idafa constructions
1
we model the generative architecture with a recurrent language model based on a recurrent neural network---for the decoder , we use a recurrent neural network language model , which is widely used in language generation tasks
1
since the work of pang et al , various classification models and linguistic features have been proposed to improve the classification performance---since the work of pang , lee , and vaithyanathan , various classification models and linguistic features have been proposed to improve classification performance
1
a 4-gram language model was trained on the monolingual data by the srilm toolkit---a residual connection is employed around each of two sub-layers , followed by layer normalization
0
in this paper , we address the task of cross-cultural deception detection---in this paper , we explore within-and across-culture deception detection
1
the trigram language model is implemented in the srilm toolkit---the target-side language models were estimated using the srilm toolkit
1
coreference resolution is the task of automatically grouping references to the same real-world entity in a document into a set---coreference resolution is the problem of identifying which mentions ( i.e. , noun phrases ) refer to which real-world entities
1
in the emu speech database system the hierarchical relation between levels has to be made explicit---the emu speech database system defines an annotation scheme involving temporal constraints of precedence and overlap
1
alternatively , deep learning has recently been tried for sequence-to-sequence transduction---all the weights of those features are tuned by using minimal error rate training
0
relation extraction is a key step towards question answering systems by which vital structured data is acquired from underlying free text resources---relation extraction is the task of automatically detecting occurrences of expressed relations between entities in a text and structuring the detected information in a tabularized form
1
relation extraction is the task of extracting semantic relationships between entities in text , e.g . to detect an employment relationship between the person larry page and the company google in the following text snippet : google ceo larry page holds a press announcement at its headquarters in new york on may 21 , 2012---relation extraction ( re ) has been defined as the task of identifying a given set of semantic binary relations in text
1
truncation size is set to math-w-14-8-0-55---n , productions r , start symbol math-w-4-1-0-54
1
however , as discussed by heift and schulze , most of the systems are research prototypes that have never seen real-life testing or use---we set the feature weights by optimizing the bleu score directly using minimum error rate training on the development set
0
finally , we conduct paired bootstrap sampling to test the significance in bleu scores differences---we apply statistical significance tests using the paired bootstrapped resampling method
1
we implement an in-domain language model using the sri language modeling toolkit---we use the sri language modeling toolkit for language modeling
1
sentiment classification is a special task of text categorization that aims to classify documents according to their opinion of , or sentiment toward a given subject ( e.g. , if an opinion is supported or not ) ( cite-p-11-1-2 )---sentiment classification is a well studied problem ( cite-p-13-3-6 , cite-p-13-1-14 , cite-p-13-3-3 ) and in many domains users explicitly provide ratings for each aspect making automated means unnecessary
1
it is used to support semantic analyses in the hpsg english resource grammar - , but also in other grammar formalisms like lfg---with the advent of recurrent neural network based language models , some rnn based nlg systems have been proposed
0
our systems were among the top performing systems in both subtasks---our systems participated in these two subtasks
1
the 5-gram kneser-ney smoothed language models were trained by srilm , with kenlm used at runtime---an english 5-gram language model is trained using kenlm on the gigaword corpus
1
however , it has been shown in that the amount of lexical and semantic information contained in such resources is typically insufficient for high-performance wsd---for data preparation and processing we use scikit-learn
0
following , α , γ is used to represent a synchronous context free grammar rule extracted from the training corpus , where α and γ are the source-side and target-side rule respectively---following , we use α , γ to represent a scfg rule extracted from the training corpus , where α and γ are source and target strings , respectively
1
in this paper , we overcome the above challenge by combining two or more algorithms instead of picking one of them to perform semi-supervised learning---in this paper , we address semi-supervised sentiment learning via semi-stacking , which integrates two or more semi-supervised learning algorithms
1
the parse trees for sentences in the test set were obtained using the stanford parser---we used the moses tree-to-string mt system for all of our mt experiments
0
firstly , we built a forward 5-gram language model using the srilm toolkit with modified kneser-ney smoothing---we used srilm to build a 4-gram language model with kneser-ney discounting
1
translation results are evaluated using the word-based bleu score---the translation quality is evaluated by case-insensitive bleu and ter metric
1
as a baseline system for our experiments we use the syntax-based component of the moses toolkit---our baseline is a phrase-based mt system trained using the moses toolkit
1
we trained a trigram model with the kenlm , again using all sentences from wikipedia---we trained a 3-gram language model on all the correct-side sentences using kenlm
1
we use srilm for n-gram language model training and hmm decoding---for the language model , we used srilm with modified kneser-ney smoothing
1
word2vec has been proposed for building word representations in vector space , which consists of two models , including continuous bag of word and skipgram---in this paper , we present two deep-learning systems for short text sentiment analysis developed for semeval-2017 task 4 “ sentiment analysis
0
word sense disambiguation ( wsd ) is the task to identify the intended sense of a word in a computational manner based on the context in which it appears ( cite-p-13-3-4 )---word sense disambiguation ( wsd ) is the task of determining the correct meaning ( “ sense ” ) of a word in context , and several efforts have been made to develop automatic wsd systems
1
on wmt german→english , we outperform the best single system reported on matrix.statmt.org by 0.8 % bleu absolute---on wmt german→english , we outperform the best single system reported on matrix.statmt.org by 0.8 %
1
accordingly , we use an adaptive recurrence mechanism to learn a dynamic node representation through attention structure---in our model , we use an attention mechanism to integrate the information from a set of comments into an action embedding vector
1
however , their model only focuses on subgraphs which cover continuous phrases---however , their models lack the ability to handle continuous phrases which are not connected in trees
1
we empirically show that a possible reason for its good performance is its alignment to dimensions specific to hypernymy : generality and similarity---baroni dataset may be in part due to its alignment to two dimensions relevant to hypernymy : generality and similarity
1
transition-based parsers for phrase structure grammars generally derive from the work of sagae and lavie---lexicalized transition-based constituent parsing generally derives from the work of sagae and lavie and subsequent work
1
semantic parsing is the task of mapping natural language sentences to complete formal meaning representations---semantic parsing is the task of mapping a natural language ( nl ) sentence into a completely formal meaning representation ( mr ) or logical form
1
we have tested our method by using homogeneous smt systems and a single pivot language---in this paper , we propose a new method of transductive inference , named cross-entity inference , for event extraction by well
0
hatzivassiloglou and mckeown proposed a supervised algorithm to determine the semantic orientation of adjectives---in previous work , hatzivassiloglou and mckeown propose a method to identify the polarity of adjectives
1
finally , based on recent results in text classification , we also experiment with a neural network approach which uses a long-short term memory network---in this paper , we present a potential approach for improving the performance of coreference resolution by using classifier
0
in this work , we investigated how neural mt models learn word structure---we train a linear support vector machine classifier using the efficient liblinear package
0
this terminology is similar to the one used in open information extraction systems , such as reverb---this form of event representation is widely used in open information extraction
1
sometimes a noun can refer to the entity denoted by a noun that has a different modifier---the noun phrase refers to the entity denoted by a previous noun phrase which has the same head noun
1
traditionally , a language model is a probabilistic model which assigns a probability value to a sentence or a sequence of words---a language model is a probability distribution that captures the statistical regularities of natural language use
1
in this paper , we have proposed a new hybrid kernel for re that combines two vector based kernels and a tree kernel---in this paper , we propose a novel hybrid kernel that combines ( automatically collected )
1
xiong et al and lin et al extracted hypernym features from hownet semantic knowledge and integrated the features into a generative model for chinese constituent parsing---xiong et al integrated first-sense and hypernym features in a generative parse model applied to the chinese penn treebank and achieved significant improvement over their baseline model
1
we also compare our results to those obtained using the system of durrett and denero on the same test data---in order to present a comprehensive evaluation , we evaluated the accuracy of each model output using both bleu and chrf3 metrics
0
we utilize a maximum entropy model to design the basic classifier used in active learning for wsd---when labeled training data is available , we can use the maximum entropy principle to optimize the λ weights
1
one of the first challenges in sentiment analysis is the vast lexical diversity of subjective language---entity linking ( el ) is a central task in information extraction — given a textual passage , identify entity mentions ( substrings corresponding to world entities ) and link them to the corresponding entry in a given knowledge base ( kb , e.g . wikipedia or freebase )
0
we used a bilingual corpus of travel conversation , which has japanese sentences and their english translations---we used the basic travel expression corpus , a collection of conversational travel phrases for korean and english
1
the dependency parse trees are finally obtained using a phrase structure parser , using the post-processing of the stanford corenlp package---the syntactic relations are obtained using the constituency and dependency parses from the stanford parser
1
word sense disambiguation ( wsd ) is formally defined as the task of computationally identifying senses of a word in a context---in such a case , the end-user may prefer a concise summary of the ongoing discussion
0
a discourse structure is a tree whose leaves correspond to elementary discourse units ( edu ) s , and whose internal nodes correspond to contiguous text spans ( called discourse spans )---the discourse structure is a directed graph , where nodes correspond to segments of a document ( which we will refer to as “ blocks ” of text ) , and the edges define the dependencies between them
1
first , arabic is a morphologically rich language ( cite-p-19-3-7 )---chklovski and pantel used patterns to extract a set of relations between verbs , such as similarity , strength and antonymy
0
alternatively , processing difficulty has been explained in terms of surprisal---from the perspective of online language comprehension , processing difficulty is quantified by surprisal
1
segmentation is a useful intermediate step in such applications as subjectivity analysis ( stoyanov and cardie , 2008 ) , automatic summarization ( cite-p-12-1-7 ) , question answering ( cite-p-12-3-2 ) and others---similarity between their hidden representations shows comparable performance with the state-of-the-art supervised models and in some cases outperforms them
0
we use the stanford parser to generate a dg for each sentence---the stanford parser was used to generate the dependency parse information for each sentence
1
coreference resolution is the task of clustering a set of mentions in the text such that all mentions in the same cluster refer to the same entity---coreference resolution is a set partitioning problem in which each resulting partition refers to an entity
1
the mt performance is measured with the widely adopted bleu and ter metrics---relation extraction is the task of finding semantic relations between entities from text
0
this paper proposes a new neural network , sdp-lstm , for relation classification---in this paper , we present sdp-lstm , a novel neural network to classify the relation of two entities
1
we do perform word segmentation in this work , using the stanford tools---we use stanford corenlp for chinese word segmentation and pos tagging
1
based on shallow syntax , used rules to reorder the source sentences on the chunk level and provide a source-reordering lattice instead of a single reordered source sentence as input to the smt system---we apply online training , where model parameters are optimized by using adagrad
0
word sense disambiguation ( wsd ) is a key enabling-technology---we report case-sensitive bleu and ter as the mt evaluation metrics
0
the types of events to extract are known in advance---the idea of distinguishing between general and domain-specific examples is due to daumé and marcu , who used a maximum-entropy model with latent variables to capture the degree of specificity
0
discourse parsing is the task of identifying the presence and the type of the discourse relations between discourse units---discourse parsing is the process of assigning a discourse structure to the input provided in the form of natural language
1
all word vectors are trained on the skipgram architecture---the model parameters of word embedding are initialized using word2vec
1
we use srilm toolkit to train a trigram language model with modified kneser-ney smoothing on the target side of training corpus---and such techniques as shrinkage and retraining have been used to increase recall from english wikipedia ’ s long tail of sparse infobox classes ( cite-p-27-3-19 , cite-p-27-3-22 )
0
for example , in extended wordnet , the rich glosses in wordnet are enriched by disambiguating the nouns , verbs , adverbs , and adjectives with synsets---they learned text embeddings using the neural language model from le and mikolov and used them to train a binary classifier
0
coreference resolution is the task of clustering referring expressions in a text so that each resulting cluster represents an entity---named entity recognition ( ner ) is the process by which named entities are identified and classified in an open-domain text
0
we use the sdsl library to implement all our structures and compare our indexes to srilm---we use 5-grams for all language models implemented using the srilm toolkit
1
we use the brown clustering algorithm to induce our word representations---we used the brown word clustering algorithm to obtain the word clusters
1
we use sri language model toolkit to train a 5-gram model with modified kneser-ney smoothing on the target-side training corpus---for the fluency and grammaticality features , we train 4-gram lms using the development dataset with the sri toolkit
1
collobert et al use a convolutional neural network over the sequence of word embeddings---collobert et al , kalchbrenner et al , and kim use convolutional networks to deal with varying length sequences
1
model fitting for our model is based on the expectation-maximization algorithm---ng examined the representation and optimization issues in computing and using anaphoricity information to improve learning-based coreference resolution
0
a multiword expression is an idiosyncratically interpreted linguistic unit which consists of more than a single word---a multiword expression is a combination of words with lexical , syntactic or semantic idiosyncrasy
1
a 5-gram language model with kneser-ney smoothing is trained using srilm on the target language---the english side of the parallel corpus is trained into a language model using srilm
1
a few approaches have emerged more recently that combine content selection and surface realization---besides concentrating on isolated components , a few approaches have emerged that tackle conceptto-text generation
1
rooth et al use an em-based clustering technique to induce a clustering based on the co-occurrence frequencies of verbs with their subjects and direct objects---rooth et al and torisawa showed that em-based clustering using verb-noun dependencies can produce semantically clean noun clusters
1
a 4-gram language model is trained by the srilm toolkit---the language models were trained using srilm toolkit
1
word embeddings are popular representations for syntax , semantics and other areas---seo et al solves a set of sat geometry questions with text and diagram provided
0
in fact , the gains are even stronger on out-of-domain tests than on in-domain tests---a pun is a form of wordplay in which one sign ( e.g. , a word or phrase ) suggests two or more meanings by exploiting polysemy , homonymy , or phonological similarity to another sign , for an intended humorous or rhetorical effect ( aarons , 2017 ; hempelmann and miller , 2017 )
0
we also include results over the penn treebank converted to stanford basic dependencies---we compute these using the manual parse annotations for the articles from the penn treebank corpus
1
transliteration is the task of converting a word from one alphabetic script to another---we have presented an approach that allows the unsupervised induction of dialogue structure from naturally-occurring open-topic
0
in this paper , we propose a novel nmt with source dependency representation to improve translation performance---we use the mallet implementation of conditional random fields
0
this success rests on a high-coverage dictionary---success , however , depends on a high-coverage dictionary
1
transliteration is a process of translating a foreign word into a native language by preserving its pronunciation in the original language , otherwise known as translationby-sound---transliteration is a process of rewriting a word from a source language to a target language in a different writing system using the word ’ s phonological equivalent
1
to train the network , we make use of stochastic gradient descent and the adam optimization algorithm---we use a minibatch stochastic gradient descent algorithm together with the adam optimizer
1
we used the sri language modeling toolkit to train lms on our training data for each ilr level---semantic parsing is the task of mapping a natural language ( nl ) sentence into a completely formal meaning representation ( mr ) or logical form
0
we used minimum error rate training to optimize the feature weights---the minimum error rate training was used to tune the feature weights
1
section 4 will first compare the results of these three approaches , for a total of 43 models---in this paper , we will investigate the performance of these two types of models
1
for phrase-based smt translation , we used the moses decoder and its support training scripts---we used the open source moses phrase-based mt system to test the impact of the preprocessing technique on translation results
1
the promt smt system is based on the moses open-source toolkit---the moses smt system allows for the use of user-defined features in its loglinear model
1
we use the 300-dimensional pre-trained word2vec word embeddings and compare the performance with that of glove embeddings---third , we convert the stanford glove twitter model to word2vec and obtain the word embeddings
1
the word embeddings required by our proposed methods were trained using the gensim implementation of the skip gram version of word2vec---for our implementation we use 300-dimensional part-of-speech-specific word embeddings v_i generated using the gensim word2vec package
1
we compute the spearman correlation coefficient between the similarity scores given by the embedding models and those given by human annotators---the standard approach to word alignment is to construct directional generative models , which produce a sentence in one language given the sentence in another language
0
the model parameters are trained using minimum error-rate training---crfs have been shown to perform well in a number of natural language processing applications , such as pos tagging , shallow parsing or np chunking , and named entity recognition
0
sentence scoring is critical since it is used to measure the saliency of a sentence---sentence scoring aims to assign an importance score to each sentence
1
in this paper , we use well-formed dependency structures to handle the coverage of non-constituent rules---with this new framework , we employ a target dependency language model during decoding
1
we used srilm to build a 4-gram language model with interpolated kneser-ney discounting---with “ broad coverage ” , i.e. , for any user-created nonstandard token , the system should be able to restore the correct word within its top
0