text: string (lengths 82 to 736)
label: int64 (values 0 or 1)
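Each record below pairs a text line (two citation sentences joined by ---) with an integer label on the following line. A minimal parsing sketch follows; the alternating text/label layout is an assumption read off the rows themselves, and pairs.txt is a hypothetical file name:

def parse_pairs(lines):
    # assumption: non-empty rows alternate a text line ("sent_a---sent_b")
    # and a label line ("0" or "1")
    rows = [ln.strip() for ln in lines if ln.strip()]
    records = []
    for text_line, label_line in zip(rows[0::2], rows[1::2]):
        sent_a, _, sent_b = text_line.partition("---")  # split the sentence pair
        records.append((sent_a.strip(), sent_b.strip(), int(label_line)))
    return records

with open("pairs.txt", encoding="utf-8") as f:  # hypothetical file name
    data = parse_pairs(f)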
furthermore , have shown that using a neural network based lexical translation model can help boost the quality of statistical machine translation---pang et al employed n-gram and pos features for ml methods to classify movie-review data
0
the language model was a 5-gram language model estimated on the target side of the parallel corpora by using the modified kneser-ney smoothing implemented in the srilm toolkit---the third baseline , a bigram language model , was constructed by training a 2-gram language model from the large english ukwac web corpus using the srilm toolkit with default good-turing smoothing
1
for example , riaz and girju have proposed an unsupervised metric effect-control dependency to determine causality between events in news scenarios---for example , riaz and girju and do et al have proposed unsupervised metrics for learning causal dependencies between two events
1
in this paper we present a computational analysis of the grapho-phonological system of written french , and an empirical validation of some of the obtained descriptive statistics---in this paper , we present a descriptive analysis of the grapheme-phoneme mapping system of the french orthography , and
1
for the chunking task , we also employed generally used features in this case from sha and pereira---taken into account , we proposed a set of lexical knowledge for idioms and implemented a recognizer that exploits the knowledge
0
on the other hand , sagae and tsujii propose a transition-based counterpart for dag parsing which makes it possible to parse multi-headed relations---bastings et al used neural monkey to develop a new convolutional architecture for encoding the input sentences using dependency trees
0
previously , tutorial dialogue systems such as auto-tutor and research methods tutor have used lsa to perform the same type of content analysis for student essays that we do in why2---previously , tutorial dialogue systems such as auto-tutor and research methods tutor have used lsa to perform an analysis of the correct answer aspects present in extended student explanations
1
we then use extended lexrank algorithm to rank the sentences in each cluster---we give an extended lexrank with integer linear programming to optimize sentence selection
1
to measure the translation quality , we use the bleu score and the nist score---we evaluate the performance of different translation models using both bleu and ter metrics
1
however , yarowsky proposed that strong collocations should be identified for wsd---however , yarowsky proposed an approach in which strong collocations were identified for wsd
1
word alignment is the task of identifying translational relations between words in parallel corpora , in which a word in one language is usually translated into several words in the other language ( fertility model ) ( cite-p-18-1-0 )---the alignment aspect of our model is similar to the hmm model for word alignment
0
socher et al introduce a semi-supervised approach that uses recursive autoencoders to learn the hierarchical structure and sentiment distribution of a sentence---socher et al introduced a deep learning framework called semi-supervised recursive autoencoders for predicting sentence-level sentiment distributions
1
semantic role labeling ( srl ) is a major nlp task , providing a shallow sentence-level semantic analysis---we use the word2vec tool with the skip-gram learning scheme
0
according to guo and berkhahn , the embeddings of categorical variables can reduce the network size and capture the intrinsic properties of the categorical variables---according to , the embeddings of categorical variables can reduce the network size while capturing the intrinsic properties of the categorical variables
1
we trained a 4-gram language model on the xinhua portion of gigaword corpus using the sri language modeling toolkit with modified kneser-ney smoothing---we used a 5-gram language model with modified kneser-ney smoothing , built with the srilm toolkit
0
in this work , we make the first attempt to define the semantic structure of noun phrase queries---we trained word embeddings for this dataset using word2vec on over around 10m documents of clinical records
0
following the setup of johnson et al , we prepend a to-target-language tag to the source side of each sentence pair and mix all language pairs in the nmt training data---our approach follows that of johnson et al , a multilingual mt approach that adds an artificial token to encode the target language to the beginning of each source sentence in the parallel corpus
1
we incorporate features of both lexical resource-based and vectorial semantics , including wordnet and verbnet sense-level information and vectorial word representations---by aggregating information across many unannotated examples , it is possible to find accurate distributional representations
0
munteanu and marcu use a bilingual lexicon to translate some of the words of the source sentence---the models are built using the sri language modeling toolkit
0
we evaluated the translation quality using the bleu-4 metric---we measured translation performance with bleu
1
evaluating the algorithm on the penn treebank shows an improvement of both precision and recall , compared to the results presented in ( cite-p-10-1-0 )---on data automatically derived from the penn treebank shows an increase in both precision and recall in recovery of non-local dependencies by approximately 10 % over the results reported in ( cite-p-10-1-0 )
1
our implementation of the segment-based imt protocol is based on the moses toolkit---we use the moses smt toolkit to test the augmented datasets
1
we use binary cross-entropy loss and the adam optimizer for training the nil-detection models---in order to cluster lexical items , we use the algorithm proposed by brown et al , as implemented in the srilm toolkit
0
we used the srilm toolkit to train a 4-gram language model on the english side of the training corpus---to “ negative ” or “ positive ” , then we iteratively calculate the score by making use of the accurate labels of old-domain data as well as the “ pseudo ” labels of new-domain data
0
relation extraction ( re ) has been defined as the task of identifying a given set of semantic binary relations in text---entity linking ( el ) is a central task in information extraction β€” given a textual passage , identify entity mentions ( substrings corresponding to world entities ) and link them to the corresponding entry in a given knowledge base ( kb , e.g . wikipedia or freebase )
0
in the first text , crime was metaphorically portrayed as a virus and in the second as a beast---for the evaluation , we used bleu , which is widely used for machine translation
0
we propose a framework to select and rank mandatory matching phrases ( mmp ) for question answering---we used the srilm toolkit to train a 4-gram language model on the xinhua portion of the gigaword corpus , which contains 238m english words
0
our phrase-based mt system is trained by moses with standard parameters settings---we use the moses toolkit to train our phrase-based smt models
1
the same data was used for tuning the systems with mert---kalchbrenner et al introduced a convolutional neural network for sentence modeling that uses dynamic k-max pooling to better model inputs of varying sizes
0
experimental results on te and as match our observation and show the effectiveness of our approach---experimental results show the soundness of our argument and the effectiveness of our attention
1
we introduce the tree-lstm , a generalization of lstms to tree-structured network topologies---in this paper , we propose a reinforcement learning based framework of dialogue system for automatic diagnosis
0
since this feature is affected by the source-side context , the decoder can choose a proper paraphrase and translate correctly---as a decoding feature , the decoder can choose proper paraphrases and translate properly
1
bunescu and mooney designed a kernel along the shortest dependency path between two entities by observing that the relation strongly relies on sdps---the scarcity of such corpora in particular for specialized domains and for language pairs not involving english pushed researchers to investigate the use of comparable corpora
0
in this paper , we develop attention mechanisms for uncertainty detection---in this paper , we develop novel ways to calculate attention
1
we used the 200-dimensional word vectors for twitter produced by glove---for word embeddings , we used popular pre-trained word vectors from glove
1
collobert and weston propose a unified deep convolutional neural network for different tasks by using a set of taskindependent word embeddings together with a set of task-specific word embeddings---we adapt expectation maximization to find an optimal clustering
0
we have also shown that phrase structure trees , even when deprived of the labels , retain in a certain sense all the structural information---which show that phrase structure trees , even when deprived of the labels , retain in a certain sense all the structural information
1
we apply srilm to train a 3-gram language model on the target side---in a number of languages , unlike in english , and it is beneficial for various nlp applications to split such noun compounds
0
coreference resolution is the task of clustering a set of mentions in the text such that all mentions in the same cluster refer to the same entity---coreference resolution is the process of linking multiple mentions that refer to the same entity
1
relation extraction ( re ) is the task of extracting instances of semantic relations between entities in unstructured data such as natural language text---relation extraction is the task of finding semantic relations between entities from text
1
the opencyc kb is an open source version of researchcyc that contains much of the definitional information and higher order predicates , but has had much of the lower level specific facts and the entire word lexicon removed---collobert et al designed a unified neural network to learn distributed representations that were useful for part-of-speech tagging , chunking , ner , and semantic role labeling
0
high quality word embeddings have been proven helpful in many nlp tasks---following , we retain only nouns that occur at least 1,000 times in our corpora
0
navigli proposed a sense clustering method by mapping wordnet senses to oxford english dictionary---navigli proposed an automatic approach for mapping wordnet senses to the coarse-grained sense distinctions of the oxford dictionary of english
1
each of these forums constitutes its own “ fine-grained domain ” in that the forums cover different market sectors with different properties , even though all forums are in the broad domain of cybercrime---one forum is applied to a different forum : in this sense , even two different cybercrime forums seem to represent different “ fine-grained domains
1
text summarization is the task of automatically condensing a piece of text to a shorter version while maintaining the important points---pun is a figure of speech that consists of a deliberate confusion of similar words or phrases for rhetorical effect , whether humorous or serious
0
we train the model by using a simple optimization technique called stochastic gradient descent over shuffled mini-batches with the adadelta rule---we apply the stochastic gradient descent algorithm with mini-batches and the adadelta update rule
1
we use 300 dimension word2vec word embeddings for the experiments---because hand-labeling individual words and word boundaries is very difficult , producing segmented chinese texts is very time-consuming and expensive
0
we evaluated the performance of the three pruning criteria in a real application of chinese text input ( cite-p-15-1-2 ) through cer---of previous years , we do not rely on hand-crafted features , sentiment lexicons
0
we used the berkeley parser 2 to train such grammars on sections 2-21 of the penn treebank---our clustering algorithm was applied to an ltag grammar automatically extracted from sections 02-21 of the penn treebank ,
1
we implement a semi-supervised learning ( ssl ) approach to demonstrate that utilization of more unlabeled data points can improve the answer-ranking task of qa---in this paper , we applied a graph-based ssl algorithm to improve the performance of qa task by exploiting unlabeled entailment
1
during decoding , the nmt decoder enquires the phrase memory and properly generates phrase translations---if the phrase generation is carried out , the nmt decoder generates a multi-word phrase and updates its decoding state
1
we propose a method based on support vector machine---for smt decoding , we use the moses toolkit with kenlm for language model queries
0
recently , the field has been influenced by the success of neural language models---however , common approaches have shown to be inefficient in learning long-term dependencies due to a vanishing gradient
0
furthermore , we analyzed the effect of genre on slot filling and showed that it needs to be carefully examined in research on slot filling---we analyze the effect of genre on slot filling and show that it is an important conflating variable that needs to be carefully examined in research on slot filling
1
crf is a well-known probabilistic framework for segmenting and labeling sequence data---the crf is a sequence modeling framework that can solve the label bias problem in a principled way
1
although negation is a very relevant and complex semantic aspect of language , current proposals to annotate meaning either dismiss negation or only treat it in a partial manner---we use pretrained 100-d glove embeddings trained on 6 billion tokens from wikipedia and gigaword corpus
0
our system consists of three modules : wsd , srl and ed---our system for the task consists of three modules : wsd , srl and ed
1
mihalcea et al use both corpusbased and knowledge-based measures of the semantic similarity between words---mihalcea et al presents results for several underlying measures of lexical semantic relatedness
1
we use the stanford pos-tagger and named entity recognizer---for feature extraction , we used the stanford pos tagger
1
for the gap-degree 1 case , we have proven several properties of these linearizations , and have used these properties to optimize our algorithm---for the minimally context-sensitive case of gap-degree 1 dependency trees , we prove several properties of minimal-length linearizations which allow us to improve the efficiency of our algorithm
1
for phrase-based smt translation , we used the moses decoder and its support training scripts---we use the moses toolkit with a phrase-based baseline to extract the qe features for the x l , x u , and testing
1
our second method is based on the recurrent neural network language model approach to learning word embeddings of mikolov et al and mikolov et al , using the word2vec package---we use the glove algorithm to obtain 300-dimensional word embeddings from a union of these corpora
0
very recently , researchers have started developing semantic parsers for large , general-domain knowledge bases like freebase and dbpedia---in recent years , there has been a drive to scale semantic parsing to large databases such as freebase
1
input layer word embeddings are initialized with glove embeddings pre-trained on twitter text---sentiment analysis is the task of identifying positive and negative opinions , sentiments , emotions and attitudes expressed in text
0
we use the sri language model toolkit to train a 5-gram model with modified kneser-ney smoothing on the target-side training corpus---for the fluency and grammaticality features , we train 4-gram lms using the development dataset with the sri toolkit
1
relation extraction is the task of finding relational facts in unstructured text and putting them into a structured ( tabularized ) knowledge base---relation extraction is the task of automatically detecting occurrences of expressed relations between entities in a text and structuring the detected information in a tabularized form
1
recently there has been tremendous interest in representing words via vector embeddings---recent works on word embedding show improvements in capturing semantic features of the words
1
coreference resolution is the task of partitioning a set of mentions ( i.e . person , organization and location ) into entities---framenet is a lexical-semantic resource manually built by fn experts
0
effectively identifying events in unstructured text is a very difficult task---accurately identifying events in unstructured text is an important goal
1
the 5-gram target language model was trained using kenlm---we used 4-gram language models , trained using kenlm
1
we selected the french sentences for the manual annotation from the parallel europarl corpus---we extract our paraphrase grammar from the french-english portion of the europarl corpus
1
we used sklearn-kittext to build our svm models---each system was tuned via mert before running it on the test set
0
in this work , we leverage a large amount of data to train a multi-layer cnn---we leverage a large amount of weakly-labelled training data
1
we use the word2vec framework in the gensim implementation to generate the embedding spaces---for feature building , we use word2vec pre-trained word embeddings
1
the parameters of the log-linear model were tuned by optimizing bleu on the development set using the batch variant of mira---in the experiments , the proposed late fusion gives a better language modelling quality than the early fusion
0
grefenstette and sadrzadeh followed this approach and proposed new method that obtains the representations of verb meaning as tensors---grefenstette and sadrzadeh then introduced composition functions using the verb matrices and the noun embeddings
1
on the resulting c , we apply max pooling and take the maximum feature as the representative one---here we use max-pooling , which simply selects the maximum value among the elements in the same feature map
1
yao et al diversified the response by a loss function in which words with high inverse document frequency values are preferred---mikolov et al used distributed representations of words to learn a linear mapping between vector spaces of languages and showed that this mapping can serve as a good dictionary between the languages
0
ucca is supported by extensive typological cross-linguistic evidence and accords with the leading cognitive linguistics theories---ucca ’ s representation is guided by conceptual notions and has its roots in the cognitive linguistics tradition
1
berger and lafferty introduce a probabilistic approach to ir based on statistical machine translation models---berger and lafferty proposed the use of translation models for document retrieval
1
word sense disambiguation ( wsd ) is the task to identify the intended sense of a word in a computational manner based on the context in which it appears ( cite-p-13-3-4 )---argumentation features such as premise and support relation appear to be better predictors of a speaker ’ s influence rank
0
each translation model is tuned using mert to maximize bleu---in this paper , we introduce a low-rank approximation based approach for learning joint embeddings of news stories and images
0
we use different pretrained word embeddings such as glove and fasttext as the initial word embeddings---we use the pre-trained glove 50-dimensional word embeddings to represent words found in the glove dataset
1
word sense disambiguation ( wsd ) is a key enabling-technology that automatically chooses the intended sense of a word in context---word sense disambiguation ( wsd ) is a key enabling-technology
1
word alignment is a central problem in statistical machine translation ( smt )---word alignment is a fundamental problem in statistical machine translation
1
the srilm toolkit is used to train a 5-gram language model---we evaluate accuracy by using the test set developed by mikolov et al
0
we use the glove vectors of 300 dimensions to represent the input words---to get word vectors , we used glove and the mean of these word vectors is used as the sentence embedding
1
dependency parsing is the task of predicting the most probable dependency structure for a given sentence---dependency parsing consists of finding the structure of a sentence as expressed by a set of directed links ( dependencies ) between words
1
word embedding models are aimed at learning vector representations of word meaning---several vector space models for word meaning have already been proposed
1
bengio et al proposed a statistical language model based on shallow neural networks---corpus-derived models of semantics have been extensively studied in the nlp and machine learning communities
0
following previous work on hierarchical mt , we solve the decoding problem with chart parsing---following previous work , we impose alignment constraint for rule extraction
1
we used stanford dependency parser for the purpose---generally , distant supervision is employed to generate training data by aligning knowledge bases with free texts
0
many nlp problems have benefited from having large amounts of data---approaches to solve nlp problems have always benefited from having large amounts of data
1
we investigate the differences between language models compiled from original target-language texts and those compiled from texts manually translated to the target language---we build two language models from two types of corpora : texts originally written in the target language , and human translations from the source language into the target language
1
augmenting the seq2seq models with a copy-mechanism improves performance on both data splits , establishing a new competitive baseline for the task---by extending the seq2seq approach with a copy mechanism , which was shown to be helpful in similar tasks
1
crfs are undirected graphical models trained to maximize a conditional probability---conditional random fields are undirected graphical models that are conditionally trained
1
we trained a linear log-loss model using stochastic gradient descent learning as implemented in the scikit-learn library---we trained linear classification models using logistic regression , and non-linear models using random forests , using implementations from the scikit-learn package
1
in fact , to the best of my knowledge , there is no formal comprehensive categorization of social interactions---however , to the best of my knowledge , there is no work that addresses approximation of kernel evaluation
1
for preprocessing the corpus , we use the stanford pos-tagger and parser included in the dkpro framework---to generate dependency links , we use the stanford pos tagger and the malt parser
1
coreference resolution is the task of identifying all mentions which refer to the same entity in a document---coreference resolution is the problem of identifying which mentions ( i.e. , noun phrases ) refer to which real-world entities
1