text: string, lengths 82 to 736 characters (two sentences joined by "---")
label: int64, values 0 or 1
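Given the schema above, each record is a pair of rows: a `text` row holding two sentences separated by "---", followed by a `label` row (0 or 1). A minimal sketch of parsing such alternating rows into triples, assuming this two-row layout and that the "---" delimiter never appears inside a sentence (the function and variable names are illustrative, not part of any official loader):

```python
def parse_pairs(rows):
    """Turn alternating (text, label) rows into (sentence_a, sentence_b, label) triples.

    Assumes each text row contains exactly one "---" separator between the two
    sentences, and that the following row is the integer label for that pair.
    """
    triples = []
    for text, label in zip(rows[0::2], rows[1::2]):
        sent_a, _, sent_b = text.partition("---")
        triples.append((sent_a.strip(), sent_b.strip(), int(label)))
    return triples


sample = [
    "we use mteval from the moses toolkit---we use mteval-v13a to score our systems",
    "1",
]
print(parse_pairs(sample))
```

For a real pipeline one would more likely load the dataset through a library loader and declare `label` as a categorical feature, but the two-row structure parsed here is all the format requires.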
we use mteval from the moses toolkit and tercom to evaluate our systems on the bleu and ter measures---for evaluation we use mteval-v13a from the moses toolkit and tercom 3 to score our systems on the bleu respectively ter measures
1
eisner proposes an odecoding algorithm for dependency parsing---we extract hierarchical rules from the aligned parallel texts using the constraints developed by chiang
0
word sense disambiguation ( wsd ) is the task to identify the intended sense of a word in a computational manner based on the context in which it appears ( cite-p-13-3-4 )---word sense disambiguation ( wsd ) is the task of assigning sense tags to ambiguous lexical items ( lis ) in a text
1
table 2 shows results for the strategies 1 , 2 and 3 in terms of bleu---table 4 shows the bleu scores of the output descriptions
1
we used the srilm toolkit to train a 4-gram language model on the english side of the training corpus---a 3-gram language model was trained from the target side of the training data for chinese and arabic , using the srilm toolkit
1
semantic role labeling is the task of determining the constituents of a sentence that represent semantic arguments with respect to a predicate and labeling each with a semantic role---semantic role labeling is the process of annotating the predicate-argument structure in text with semantic labels
1
we perform minimum-error-rate training to tune the feature weights of the translation model to maximize the bleu score on development set---the in-house phrase-based translation system is used for generating translations
0
cucerzan and brill clarified problems of spelling correction for search queries , addressing them using a noisy channel model with a language model created from query logs---bengio et al proposed a probabilistic neural network language model for word representations
0
coreference resolution is the next step on the way towards discourse understanding---coreference resolution is a key task in natural language processing ( cite-p-13-1-8 ) aiming to detect the referential expressions ( mentions ) in a text that point to the same entity
1
all the weights are initialized with xavier initialization method---all the parameters are initialized with xavier method
1
the model was built using the srilm toolkit with backoff and kneser-ney smoothing---the language model is trained and applied with the srilm toolkit
1
to the best of our knowledge this is the first attempt to incorporate world knowledge from a knowledge base for learning models---this is the first attempt at infusing general world knowledge for task specific training of deep learning
1
sentence planning is a set of interrelated but distinct tasks , one of which is sentence scoping , i.e . the choice of syntactic structure for elementary speech acts and the decision of how to combine them into one or more sentences---sentence planning is a set of interrelated but distinct tasks , one of which is sentence scoping , i.e . the choice of syntactic structure for elementary speech acts and the decision of how to combine them into sentences
1
for example , collobert et al effectively used a multilayer neural network for chunking , part-ofspeech tagging , ner and semantic role labelling---collobert et al propose a multi-task learning framework with dnn for various nlp tasks , including part-of-speech tagging , chunking , named entity recognition , and semantic role labelling
1
we evaluate our models with the standard rouge metric and obtain rouge scores using the pyrouge package---for evaluation , we compare each summary to the four manual summaries using rouge
1
the task sets a goal to automatically process text and identify objects of spatial scenes and relations between them---which sets a goal to automatically process text and identify objects of spatial scenes and relations between them
1
leveraging ensembles of models has proven effective in order to improve performance on native language identification and many other nlp tasks---ensemble methods , in particular , have proved crucial to reach top performance on this task and other related document categorization tasks like the discrimination of language variants
1
therefore , word segmentation is a crucial first step for many chinese language processing tasks such as syntactic parsing , information retrieval and machine translation---word segmentation is a fundamental task for processing most east asian languages , typically chinese
1
many words have multiple meanings , and the process of identifying the correct meaning , or sense of a word in context , is known as word sense disambiguation ( wsd )---word sense disambiguation ( wsd ) is a key task in computational lexical semantics , inasmuch as it addresses the lexical ambiguity of text by making explicit the meaning of words occurring in a given context ( cite-p-18-3-10 )
1
belinkov and bisk show that machine translation models trained on noisy source text are more robust to the corresponding type of noise---in this paper , we present an unsupervised methodology for propagating lexical co-occurrence vectors into an ontology
0
we show that sentiment analysis of english translations of arabic texts produces competitive results , w.r.t . arabic sentiment analysis---on two different datasets , we show that sentiment analysis of english translations of arabic texts produces competitive results , w . r . t . arabic sentiment analysis
1
relation classification is a crucial ingredient in numerous information extraction systems seeking to mine structured facts from text---relation classification is the task of finding semantic relations between pairs of nominals , which is useful for many nlp applications , such as information extraction ( cite-p-15-3-3 ) , question answering ( cite-p-15-3-6 )
1
training is done through stochastic gradient descent over shuffled mini-batches with adadelta update rule---the stochastic gradient descent with back-propagation is performed using adadelta update rule
1
parallel bilingual corpora are critical resources for statistical machine translation , and cross-lingual information retrieval---bilingual data are critical resources for building many applications , such as machine translation and cross language information retrieval
1
here , we summarize the main ideas of network-based dsms as proposed in [ iosif and potamianos ]---in this section , we generalize the ideas regarding network-based dsms presented in , for the case of more complex structures
1
we use sri language model toolkit to train a 5-gram model with modified kneser-ney smoothing on the target-side training corpus---to calculate language model features , we train traditional n-gram language models with ngram lengths of four and five using the srilm toolkit
1
the feature weights for each system were tuned on development sets using the moses implementation of minimum error rate training---turney and littman use pointwise mutual information and latent semantic analysis to determine the similarity of the word of unknown polarity with the words in both positive and negative seed sets
0
we propose a different approach , performing normalization in a maximum-likelihood framework---we propose a unified statistical model , which learns feature weights in a maximum-likelihood framework
1
we propose the first unsupervised approach to the problem of modeling dialogue acts in an open domain---we have presented an approach that allows the unsupervised induction of dialogue structure from naturally-occurring open-topic
1
all data is automatically annotated with syntactic tags using maltparser---the syntactic feature set is extracted after dependency parsing using the maltparser
1
bilingual dictionaries are an essential resource in many multilingual natural language processing tasks such as machine translation and cross-language information retrieval---bilingual dictionaries of technical terms are important resources for many natural language processing tasks including statistical machine translation and cross-language information retrieval
1
in addition , we use an english corpus of roughly 227 million words to build a target-side 5-gram language model with srilm in combination with kenlm---we then lowercase all data and use all sentences from the modern dutch part of the corpus to train an n-gram language model with the srilm toolkit
1
we tokenize and frequent-case the data with the standard scripts from the moses toolkit---we use the moses smt toolkit to test the augmented datasets
1
we use the skipgram model with negative sampling implemented in the open-source word2vec toolkit to learn word representations---this approach relies on word embeddings for the computation of semantic relatedness with word2vec
1
we propose a framework to quantitatively characterize competition and cooperation between ideas in texts , independent of how they might be represented---in texts , we propose the first framework to systematically characterize the relations between ideas based on their occurrence in a corpus of documents , independent of how these ideas are represented
1
the skip-gram model implemented by word2vec learns vectors by predicting context words from targets---the skip-gram model adopts a neural network structure to derive the distributed representation of words from textual corpus
1
recently , mikolov et al proposed novel model architectures to compute continuous vector representations of words obtained from very large data sets---many words have multiple meanings , and the process of identifying the correct meaning , or sense of a word in context , is known as word sense disambiguation ( wsd )
0
the n-gram language models are trained using the srilm toolkit or similar software developed at hut---the target-side language models were estimated using the srilm toolkit
1
an example of such a query is : ” asus laptop + opinions ” , another , more detailed query , might be ” asus laptop + positive opinions ”---asus laptop + opinions ” , another , more detailed query , might be ” asus laptop + positive opinions ”
1
for systems evaluation , we also use bleu score through the scripts at moses---we substitute our language model and use mert to optimize the bleu score
1
therefore , we propose a novel combination of post-processing morphology prediction with morpheme-based translation---we show that using a post-processing morphology generation model can improve translation
1
our word embeddings is initialized with 100-dimensional glove word embeddings---we initialize these word embeddings with glove vectors
1
roth and yih also described a classification-based framework in which they jointly learn to identify named entities and relations---roth and yih have combined named entity recognition and relation extraction in a structured prediction approach to improve both tasks
1
for all the experiments we used the weka toolkit---we use the weka toolkit for our supervised learning experiments
1
relation extraction is a subtask of information extraction that finds various predefined semantic relations , such as location , affiliation , rival , etc. , between pairs of entities in text---relation extraction is the task of finding relations between entities in text , which is useful for several tasks such as information extraction , summarization , and question answering ( cite-p-14-3-7 )
1
we evaluate the translation quality using the case-sensitive bleu-4 metric---we measure the translation quality using a single reference bleu
1
we also use a 4-gram language model trained using srilm with kneser-ney smoothing---peng and schuurmans proposed an unsupervised approach based on an improved expectation maximum learning algorithm and a pruning algorithm based on mi
0
in this paper , we introduce a supervised method for back-of-the-book index construction , using a novel set of linguistically motivated features---in this paper , we introduced a supervised method for back-of-the-book indexing which relies on a novel set of features , including features
1
and we use monolingual srl systems to produce argument candidates for each predicate---in our approach , we use monolingual srl systems to produce argument candidates for predicates
1
using espac medlineplus , we trained an initial phrase-based moses system---our phrase-based mt system is trained by moses with standard parameters settings
1
we use the stanford dependency parser to parse the statement and identify the path connecting the content words in the parse tree---we use srilm train a 5-gram language model on the xinhua portion of the english gigaword corpus 5th edition with modified kneser-ney discounting
0
we compare our graphbtm approach with the avitm and the lda model---the bilda model is a straightforward multilingual extension of the standard lda model
1
automatic evaluation measures are used in evaluating simulated dialog corpora---coreference resolution is the task of partitioning a set of mentions ( i.e . person , organization and location ) into entities
0
furthermore , we train a 5-gram language model using the sri language toolkit---we used a 5-gram language model with modified kneser-ney smoothing , built with the srilm toolkit
1
bunescu and mooney propose a shortest path dependency kernel for relation extraction---for the lda based method , adding other content words , combined with an increased number of topics , can further improve the performance , achieving up to 14 . 23 % perplexity reduction
0
for minimum error rate tuning , we use nist mt-02 as the development set for the translation task---we set the feature weights by optimizing the bleu score directly using minimum error rate training on the development set
1
most related to our approach , wu used inversion transduction grammars-a synchronous context-free formalism -for this task---the key to our solution is the inversion transduction grammars , a type of synchronous context free grammar limiting reordering to adjacent source spans
1
we show how such interaction of lexical and derivational semantics at the lexico-syntactic interface can be precomputed as a process of offline lexical compilation comprising cut elimination in partial proof-nets---we show how interaction of lexical and derivational semantics at the lexico-syntactic interface can be precomputed as a process of offline lexical compilation comprising cut elimination in partial proof-nets
1
a phrase consists of a content word and one or more suffixes , such as postpositional particles---a phrase is defined as a group of source words f ? that should be translated together into a group of target words e ?
1
while this is true for languages such as english , it is not true universally---and while this is true of languages such as english , it is not true universally
1
we use the distance based logistic triplet loss , which vo and hays report exhibits better performance in image similarity tasks---we use the distance based logistic triplet loss which gave better results than a contrastive loss
1
the correlations are above 95 % for all of the four runs , which means in general , a better performance on mt will lead to a better performance on retrieval---for all of the four runs , which means in general , a better performance on mt will lead to a better performance on retrieval
1
discourse parsing is a challenging natural language processing ( nlp ) task that has utility for many other nlp tasks such as summarization , opinion mining , etc . ( cite-p-17-3-3 )---this paper has presented a novel data-driven approach for building a melody-conditioned lyrics
0
for the evaluation of machine translation quality , some standard automatic evaluation metrics have been used , like bleu , nist and ribes in all experiments---figure 5 : percent postnominal placement for thirty most frequent adjectives
0
lodhi et al used string kernels for document categorization with very good results---the translation outputs were evaluated with bleu and meteor
0
we trained two 5-gram language models on the entire target side of the parallel data , with srilm---in this paper , we present that , word sememe information can improve word representation learning
0
we compare the final system to moses 3 , an open-source translation toolkit---we used a phrase-based smt model as implemented in the moses toolkit
1
coreference resolution is the task of clustering referring expressions in a text so that each resulting cluster represents an entity---coreference resolution is the problem of identifying which mentions ( i.e. , noun phrases ) refer to which real-world entities
1
the morphological disambiguator component of our parser is based on more and tsarfaty , modified only to accommodate ud pos tags and morphological features---the morphological disambiguation component of our parser is based on more and tsarfaty , modified to accommodate ud pos tags and morphological features
1
we used moses tokenizer 5 and truecaser for both languages---koo et al used the brown algorithm to learn word clusters from a large amount of unannotated data and defined a set of word cluster-based features for dependency parsing models
0
in this paper , we present a method which linearizes amr graphs in a way that captures the interaction of concepts and relations---in this paper , we described a sequenceto-sequence model for amr parsing and present different ways to tackle the data
1
both wan et al and our system use approximate search to solve the problem of input word ordering---wan et al use a dependency grammar to model word ordering and apply greedy search to find the best permutation
1
for other methods , we used the mstparser as the underlying dependency parsing tool---we used the mstparser as the basic dependency parsing model
1
liu and gildea added two types of semantic role features into a tree-to-string translation model---liu and gildea introduced two types of semantic features for tree-to-string machine translation
1
we model the generative architecture with a recurrent language model based on a recurrent neural network---to compensate this , we apply a strong recurrent neural network language model
1
text categorization is the task of classifying documents into a certain number of predefined categories---text categorization is the classificationof documents with respect to a set of predefined categories
1
we pre-trained the word embeddings with glove on english gigaword 2 and we fine-tune them during training---we use the 100-dimensional glove 4 embeddings trained on 2 billions tweets to initialize the lookup table and do fine-tuning during training
1
stance detection is the task of automatically determining from the text whether the author of the text is in favor of , against , or neutral towards a proposition or target---stance detection is the task of automatically determining from text whether the author of the text is in favor of , against , or neutral towards a proposition or target
1
ambiguity is a problem for the vector representation scheme used here , because the two components of an ambiguous vector can add up in a way that makes it by chance similar to an unambiguous word of a different syntactic category---coreference resolution is the task of determining which mentions in a text refer to the same entity
0
semeval-2015 task 3 targets semantically oriented solutions for answer selection in community question answering data---in semeval-2015 task 3 , subtask a . task 3 targets semantic solutions for answer selection in community question answering systems
1
on the remaining tweets , we trained a 10-gram word length model , and a 5-gram language model , using srilm with kneyser-ney smoothing---we estimated 5-gram language models using the sri toolkit with modified kneser-ney smoothing
1
for this target word the synonym 'record' was picked , which matches 'disc' in its musical sense---one should recognize that ' arm ' has a different sense than ' weapon ' in sentences such as "
1
for example , for the oov mention “ lukebryanonline ” , our model can find similar mentions like “ thelukebryan ” and “ lukebryan ”---we employed the product-of-grammars procedure of the berkeleyparser , where grammars are trained on the same dataset but with different initialization setups , which leads to different grammars
0
in order to quantify how well a particular argument class fits the verb , we adopted the selectional association measure proposed by resnik---we present an efficient variational inference algorithm for the hdp-pcfg based on a structured mean-field approximation of the true posterior
0
word sense disambiguation ( wsd ) is the task of identifying the correct meaning of a word in context---word sense disambiguation ( wsd ) is formally defined as the task of computationally identifying senses of a word in a context
1
for this experiment , we used word2vec on the same frwac corpus to obtain a dense matrix in which each word is represented by a numeric vector---we also used word2vec to generate dense word vectors for all word types in our learning corpus
1
in the general case , the integration of information from distinct mrd sources for use within a lexicon development environment is probably going to remain an unsolved problem for quite some time---in the general case , the integration of information from distinct mrds remains a problem hard , perhaps impossible , to solve without the aid of a complete , linguistically motivated database
1
our 5-gram language model was trained by srilm toolkit---we apply srilm to train the 3-gram language model of target side
1
weka which contains the implementation of all three algorithms was used in our study---for building our ap e b2 system , we set a maximum phrase length of 7 for the translation model , and a 5-gram language model was trained using kenlm
0
with reference to this system , we implement a data-driven parser with a neural classifier based on long short-term memory---to do this , we relied on a neural network with a long short-term memory layer , which is fed from the word embeddings
1
following sutskever et al and bahdanau et al , we decided to use a multi-layer lstm decoder with an attention mechanism---a lattice is a directed acyclic graph ( dag ) , a subclass of non-deterministic finite state automata ( nfa )
0
we use srilm for training the 5-gram language model with interpolated modified kneser-ney discounting---we use srilm toolkit to build a 5-gram language model with modified kneser-ney smoothing
1
a 4-gram language model was trained on the monolingual data by the srilm toolkit---the pre-processed monolingual sentences will be used by srilm or berkeleylm to train a n-gram language model
1
we show the important effect of synsets and antonyms in computing the sentiment similarity of words---in this paper , we propose a probabilistic approach to detect the sentiment similarity of words
1
a pun is a form of wordplay in which one sign ( e.g. , a word or phrase ) suggests two or more meanings by exploiting polysemy , homonymy , or phonological similarity to another sign , for an intended humorous or rhetorical effect ( aarons , 2017 ; hempelmann and miller , 2017 )---distribution allows for straightforward high-performance nlp processing
0
sentiment analysis is the task in natural language processing ( nlp ) that deals with classifying opinions according to the polarity of the sentiment they express---sentiment analysis is the task of automatically identifying the valence or polarity of a piece of text
1
dependencies in an input parse tree are revised by selecting , for a given dependent , the best governor from within a small set of candidates---dependency parse correction , attachments in an input parse tree are revised by selecting , for a given dependent , the best governor from within a small set of candidates
1
moro et al proposed another graph-based approach which uses wikipedia and wordnet in multiple languages as lexical resources---moro et al propose a graphbased approach which uses wikipedia and wordnet as lexical resources
1
heilman et al continued using language modeling to predict readability for first and second language texts---heilman et al combined a language modeling approach with grammarbased features to improve readability assessment for first and second language texts
1
for input representation , we used glove word embeddings---we use pre-trained glove vector for initialization of word embeddings
1