text: string, lengths 82 to 736
label: int64, values 0 or 1
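The rows below alternate between a text field (two sentences joined by "---") and its integer label; from the examples, 1 appears to mark pairs that make the same point and 0 unrelated pairs. A minimal parsing sketch, assuming the dump is saved as a plain-text file with the schema header removed; the filename "pairs.txt" is hypothetical:

# Minimal sketch: read an alternating text/label dump into triples.
# Assumes one text line (two sentences joined by "---") followed by
# one integer label line, with the schema header already stripped.
from typing import List, Tuple

def load_pairs(path: str) -> List[Tuple[str, str, int]]:
    with open(path, encoding="utf-8") as f:
        lines = [ln.strip() for ln in f if ln.strip()]
    pairs = []
    for text, label in zip(lines[0::2], lines[1::2]):
        left, _, right = text.partition("---")
        pairs.append((left.strip(), right.strip(), int(label)))
    return pairs

if __name__ == "__main__":
    for s1, s2, y in load_pairs("pairs.txt")[:3]:
        print(y, "|", s1[:60], "||", s2[:60])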
unfortunately , we have seen that this kind of theory can not explain opaque indexicals---but i will argue that the new theory explains the opacity of indexicals
1
our evaluation metric is bleu the overall result of our experiment is shown in table 2---the empirical evaluation of all our systems on the two standard metrics bleu and ter is presented in table 5
1
biased-svm is the state-of-the-art svm method , and often used for comparison---biased-svm is known as the state-of-the-art svms method , and often used for comparison
1
we first propose a simple yet powerful semi-supervised discriminative model appropriate for handling large scale unlabeled data---first , we present a simple , scalable , but powerful task-independent model for semi-supervised
1
kaji and kitsuregawa outline a method of building sentiment lexicons for japanese using structural cues from html documents---kaji and kitsuregawa describe a method for harvesting sentiment words from non-neutral sentences extracted from japanese web documents based on structural layout clues
1
semantic parsing is the task of converting a sentence into a representation of its meaning , usually in a logical form grounded in the symbols of some fixed ontology or relational database ( cite-p-21-3-3 , cite-p-21-3-4 , cite-p-21-1-11 )---semantic parsing is the task of mapping natural language sentences into logical forms which can be executed on a knowledge base ( cite-p-18-5-13 , cite-p-18-5-14 , cite-p-18-3-6 , cite-p-18-5-8 , cite-p-18-3-15 , cite-p-18-3-9 )
1
coreference resolution is the task of automatically grouping references to the same real-world entity in a document into a set---coreference resolution is the process of linking multiple mentions that refer to the same entity
1
following the line of work presented by bohnet et al we also replace the feature mapping function by a hash function which enables the use of negative features and yields a considerable speed improvement---we use the most frequent sense of wordnet to annotate all verbs in the direct speech
0
for this purpose , we used phrase tables learned by the standard statistical mt toolkit moses---in this paper , we have presented a decoding procedure for phrase-based smt
0
we use a random forest classifier for all experiments---we used the implementation of random forest in scikitlearn as the classifier
1
stalls and knight adapt this approach to arabic , with the modification that the english phonemes are mapped directly to arabic letters---stalls and knight adapted this approach for back transliteration from arabic to english of english names
1
we use stanford corenlp for pos tagging and lemmatization---we use stanford corenlp for feature generation
1
similarly to ud , gf uses shared syntactic descriptions for multiple languages---gf and ud , are two attempts to use shared syntactic descriptions for multiple languages
1
our experimental results show that our proposed approaches significantly outperform existing strong baselines ( e.g . dnorm ) across all of the three datasets---semantic parsing is the task of mapping a natural language ( nl ) sentence into a complete , formal meaning representation ( mr ) which a computer program can execute to perform some task , like answering database queries or controlling a robot
0
relation extraction is the task of tagging semantic relations between pairs of entities from free text---relation extraction ( re ) is a task of identifying typed relations between known entity mentions in a sentence
1
the natural language toolkit is a suite of program modules , data sets , tutorials and exercises , covering symbolic and statistical natural language processing---the natural language toolkit is a suite of program modules , data sets and tutorials supporting research and teaching in computational linguistics and natural language processing
1
both proposed refinement models have linear time complexity in set size allowing for practical online use in set expansion systems---refinement models have linear time complexity in set size allowing for practical online use in set expansion systems
1
word segmentation is a classic bootstrapping problem : to learn words , infants must segment the input , because around 90 % of the novel word types they hear are never uttered in isolation ( cite-p-13-1-0 , cite-p-13-3-8 )---like pavlopoulos et al , we initialize the word embeddings to glove vectors
0
this has proven useful previously in cases of unbalanced datasets---this has proven useful in cases of unbalanced datasets
1
in this paper we explore word-distribution embeddings for zsl---in this paper , we advocate using distribution-based embeddings of text and images
1
in this work , we have examined the utility of eye gaze and word confusion networks for reference resolution in situated dialogue within a virtual world---we explore the use of human eye gaze during real-time interaction to model attention and facilitate reference resolution
1
in addition , we apply the synonyms similarity to expand the fst model---third , adding the similarity of synonyms to extend the fst model
1
in addition , we use an english corpus of roughly 227 million words to build a target-side 5-gram language model with srilm in combination with kenlm---we used a 5-gram language model trained on 126 million words of the xinhua section of the english gigaword corpus , estimated with srilm
1
on the other end of the spectrum , machine translation metrics remain skeptical when text snippets are annotated with a score of 5 for being semantically analogous but syntactically the texts are expressed in a different form---on the other end of the spectrum , machine translation metrics remain skeptical when text snippets are annotated with a score of 5 for being semantically analogous but syntactically the texts are expressed in a different form
1
for both languages , we used the srilm toolkit to train a 5-gram language model using all monolingual data provided---for improving the word alignment , we use the word-classes that are trained from a monolingual corpus using the srilm toolkit
1
we implement our model on top of the miml code base---with a variety of manual features which are helpful in solving the problem that the correct answer can be easily found in the given document
0
in the parse chart , labels on the nodes represent local properties of a parse , such as the category of a span in figure 1a---in the parse chart , labels on the nodes represent local properties of a parse , such as the category of a span
1
our first choice is the bottom-up agglomerative word clustering algorithm of brown et al , which derives a hierarchical clustering of words from unlabeled data---word sense disambiguation ( wsd ) is a natural language processing ( nlp ) task in which the correct meaning ( sense ) of a word in a given context is to be determined
0
in comparison , mrlsa models multiple lexical relations holistically---mrlsa provides an elegant approach to combining multiple relations between words
1
the system was trained in a standard manner , using a minimum error-rate training procedure with respect to the bleu score on held-out development data to optimize the loglinear model weights---the feature weights of the log-linear models were trained with the help of minimum error rate training and optimized for 4-gram bleu on the development test set
1
we implement the pbsmt system with the moses toolkit---we used the moses toolkit with its default settings
1
li and roth reported a hierarchical approach based on the snow learning architecture---more recently , li and roth have developed a machine learning approach which uses the snow learning architecture
1
as a countbased baseline , we use modified kneser-ney as implemented in kenlm---we consider a phrase-based translation model and a hierarchical translation model
0
feature weights were set with minimum error rate training on a development set using bleu as the objective function---system tuning was carried out using minimum error rate training optimised with k-best mira on a held out development set
1
in addition to these two key indicators , we evaluated the translation quality using an automatic measure , namely bleu score---for the evaluation of translation quality , we used the bleu metric , which measures the n-gram overlap between the translated output and one or more reference translations
1
in this paper , we investigate a simple method to learn word representations by taking into account subword information---in this paper , we propose a new approach based on the skipgram model , where each word is represented as a bag of character
1
coreference resolution is a well known clustering task in natural language processing---coreference resolution is the process of linking multiple mentions that refer to the same entity
1
table 5 shows the bleu and per scores obtained by each system---the results evaluated by bleu score is shown in table 2
1
pang et al are the first to apply supervised machine learning methods to sentiment classification---pang et al for the first time applied machine learning techniques for sentiment classification
1
these sentences were randomly selected from the europarl corpus---all sentences are randomly selected from the en-fr part of the europarl collection
1
we trained two 5-gram language models on the entire target side of the parallel data , with srilm---in our method is constructed based on word co-occurrence
0
we show that alignment of related words in two sentences , if carried out in a principled and accurate manner , can yield state-of-the-art results for sentence-level semantic similarity---with individual words , we experimentally show that this hypothesis can lead to state-of-the-art results for sentence-level semantic similarity
1
we use stanford corenlp for preprocessing and a supervised learning approach for classification---we use the sentiment pipeline of stanford corenlp to obtain this feature
1
as classifier we use a traditional model , a support vector machine with linear kernel implemented in scikit-learn---we divide negations and their corresponding interpretations into training and test , and use svm with rbf kernel as implemented in scikit-learn
1
faruqui and dyer introduced canonical correlation analysis to project the embeddings in both languages to a shared vector space---faruqui and dyer introduce canonical correlation analysis to project the embeddings in both languages to a shared vector space
1
an effective solution for these problems is the long short-term memory architecture---coreference resolution is the task of determining which mentions in a text refer to the same entity
0
we also demonstrate that our independently trained models are portable , showing that they can improve both syntactic and phrasal smt systems---and show that our model improves the quality of smt over both phrasal and syntax-based smt systems
1
we used the treetagger for lemmatisation as well as part-of-speech tagging---we annotated both corpora with parts of speech using the tree tagger
1
this simple solution has been shown effective for named entity recognition ( cite-p-20-3-4 ) and dependency parsing ( cite-p-20-3-1 )---semi-supervised approach has been successfully applied to named entity recognition ( cite-p-20-3-4 ) and dependency parsing ( cite-p-20-3-1 )
1
the tagging results are for one query only , without aggregating the global information of all queries to generate the final templates---because the results are for one query only , without merging the information of all queries to generate the final templates
1
bahdanau et al incorporated the attention model into the sequence to sequence learning framework---the traditional attention mechanism was proposed by bahdanau et al in the nmt literature
1
in phase 1 , the unsupervised approach adopts the method of---the basic method of phase 1 adopts the method of
1
we measure translation performance by the bleu and meteor scores with multiple translation references---named entity recognition ( ner ) is the first step for many tasks in the fields of natural language processing and information retrieval
0
we used the moses toolkit to build an english-hindi statistical machine translation system---we used the open source moses phrase-based mt system to test the impact of the preprocessing technique on translation results
1
recently , several researchers proposed the use of the pivot language for phrase-based statistical machine translation---recently , several researchers proposed the use of the pivot language for phrase-based smt
1
in recent years , mln has been adopted for several natural language processing tasks and achieved a certain level of success---mln has been applied in several natural language processing tasks and demonstrated its advantages
1
we also used a generative model based on dependency model with valence---for the generative model , we used the dependency model with valence as it appears in klein and manning
1
the smt systems are tuned on the dev development set with minimum error rate training using bleu accuracy measure as the optimization criterion---relation extraction ( re ) is a task of identifying typed relations between known entity mentions in a sentence
0
we trained a 5-gram language model on the english side of each training corpus using the sri language modeling toolkit---mikolov et al showed that constant vector offsets of word pairs can represent linguistic regularities
0
additionally , features are used to implement auxiliary distributions for selectional preferences---in addition , features are used which implement auxiliary distributions for selectional preferences , as described in van noord
1
closed dialog systems work well in practice---dialogs can succeed without a closed dialog model
1
the baselines apply 4-gram lms trained by the srilm toolkit with interpolated modified kneser-ney smoothing---the lm is implemented as a five-gram model using the srilm-toolkit , with add-1 smoothing for unigrams and kneser-ney smoothing for higher n-grams
1
klebanov et al approach was based on optimal weighting to obtain optimal f-score which lead to comparatively higher recall---klebanov et al used concreteness as a feature with baseline features and optimal weighting technique
1
our model ranked first in the semeval-2017 task 10 ( scienceie ) for relation extraction in scientific articles ( subtask c )---we have presented an ann-based approach to relation extraction , which ranked first in the semeval-2017 task 10 ( scienceie ) for relation extraction in scientific articles ( subtask c )
1
when labeled training data is available , we can use the maximum entropy principle to optimize the λ weights---preparing an aligned abbreviation corpus , we obtain the optimal combination of the features by using the maximum entropy framework
1
sentiment analysis is a research area in the field of natural language processing---the sentiment analysis is a field of study that investigates feelings present in texts
1
question answering ( qa ) is a long-standing challenge in nlp , and the community has introduced several paradigms and datasets for the task over the past few years---dependency parsing is a core task in nlp , and it is widely used by many applications such as information extraction , question answering , and machine translation
0
an event schema is a set of actors ( also known as slots ) that play different roles in an event , such as the perpetrator , victim , and instrument in a bombing event---event schema is a high-level representation of a bunch of similar events
1
despite its simplicity , a product of eight automatically learned grammars improves parsing accuracy from 90.2 % to 91.8 % on english , and from 80.3 % to 84.5 % on german---we train a 4-gram language model on the xinhua portion of the english gigaword corpus using the srilm toolkits with modified kneser-ney smoothing
0
a 5-gram lm was trained using the srilm toolkit 12 , exploiting improved modified kneser-ney smoothing , and quantizing both probabilities and back-off weights---we used the 300-dimensional glove word embeddings learned from 840 billion tokens in the web crawl data , as general word embeddings
0
this paper proposes a novel composite kernel for relation extraction---in this paper , we address the problem of relation extraction using kernel
1
we have presented three different approaches for tackling the problem of semantic textual similarity---in this paper , we present three different approaches for the textual semantic similarity task
1
we used the svm implementation of scikit learn---we used the implementation of the scikit-learn 2 module
1
they generalize string transducers to the tree case and are defined in more detail---they generalize string transducers to the tree case and are defined in more detail in
1
dependency parsing is a longstanding natural language processing task , with its outputs crucial to various downstream tasks including relation extraction ( cite-p-12-3-9 , cite-p-12-1-1 ) , language modeling ( cite-p-12-1-10 ) , and natural logic inference ( cite-p-12-1-4 )---dependency parsing is a very important nlp task and has wide usage in different tasks such as question answering , semantic parsing , information extraction and machine translation
1
we leverage latent dirichlet allocation for topic discovery and modeling in the reference source---and thus we hypothesize ontology-based representation may facilitate obtaining better content
0
the statistics for these datasets are summarized in settings we use glove vectors with 840b tokens as the pre-trained word embeddings---we use glove vectors with 200 dimensions as pre-trained word embeddings , which are tuned during training
1
the scaling factors are tuned with mert with bleu as optimization criterion on the development sets---the feature weights are tuned to optimize bleu using the minimum error rate training algorithm
1
large context windows , we studied the relatedness and similarity subsets of the popular wordsim-353 reference dataset---in particular , we used the wordsim353 dataset containing pairs of similar words that reflect either relatedness or similarity relations
1
we present a new context representation for convolutional neural networks for relation classification ( extended middle context )---we present connectionist bidirectional rnn models which are especially suited for sentence classification tasks
1
four approximations were presented , which differ in size and the strictness of phrase-matching constraints---alternative approximations are presented , which differ in index size and the strictness of the phrase-matching constraints
1
in section 2 , we discuss previous work , followed by an explanation of our model and its implementation in sections 3 and 4---sentiment analysis is a recent attempt to deal with evaluative aspects of text
0
we use different pretrained word embeddings such as glove 1 and fasttext 2 as the initial word embeddings---we use 300-dimensional word embeddings from glove to initialize the model
1
the standard approach to word alignment is to construct directional generative models , which produce a sentence in one language given the sentence in another language---the standard approach to word alignment from sentence-aligned bitexts has been to construct models which generate sentences of one language from the other , then fitting those generative models with em
1
we used the svm implementation provided within scikit-learn---we used the implementation of the scikit-learn 2 module
1
in section 2 , we introduce and discuss the related work in this area---system tuning was carried out using minimum error rate training optimised with k-best mira on a held out development set
0
it has already been proposed for phrase-based , hierarchical , and syntax-based systems---this idea has been applied to phrase-based , hierarchical , and syntax-based models
1
the performance is measured in terms of character error rate ( cer )---task is to substitute a word in a language math-w-2-1-0-21 , which occurs in a particular context , by providing the best substitutions in a different language math-w-2-1-0-40
0
word sense disambiguation ( wsd ) is a widely studied task in natural language processing : given a word and its context , assign the correct sense of the word based on a predefined sense inventory ( cite-p-15-3-4 )---word sense disambiguation ( wsd ) is the task of assigning sense tags to ambiguous lexical items ( lis ) in a text
1
kim and hovy try to determine the final sentiment orientation of a given sentence by combining sentiment words within it---kim and hovy select candidate sentiment sentences and use word-based sentiment classifiers to classify unseen words into a negative or positive class
1
cite-p-16-1-11 use the source article at evaluation time and propose a correction only when the score of the classifier is high enough , but the source article is not used in training---at evaluation time , by proposing a correction only when the confidence of the classifier is high enough , but the article can not be used in training
1
the nnlm weights are optimized as the other feature weights using minimum error rate training---the feature weights λ i are trained in concert with the lm weight via minimum error rate training
1
word embeddings have proved useful in downstream nlp tasks such as part of speech tagging , named entity recognition , and machine translation---learned word representations are widely used in nlp tasks such as tagging , named entity recognition , and parsing
1
the parameter weights are optimized with minimum error rate training---all the weights of those features are tuned by using minimal error rate training
1
in the next section , we start by describing our symbolic representation of the literature---taxonomies that are backbone of structured ontology knowledge have been found to be useful for many areas such as question answering , document clustering and textual entailment
0
pos tagging is performed using the ims tree tagger---broad coverage and disambiguation quality are critical for wsd
0
our best performing method obtains a significant increase over the baseline ( 25.9 % f-1 )---while our results show improvement over the baseline ( up to 25 . 9 % )
1
we introduce a spectral learning algorithm for latent-variable pcfgs ( cite-p-15-3-0 )---we used srilm to build a 4-gram language model with kneser-ney discounting
0
in order to reduce the amount of annotated data to train a dependency parser , koo et al used word clusters computed from unlabelled data as features for training a parser---srilm toolkit has been used to develop the language models using target language sentences from the training and tuning sets of parallel corpora
0
for english , we convert the ptb constituency trees to dependencies using the stanford dependency framework---ittycheriah and roukos proposed to use only manual alignment links in a maximum entropy model , which is considered supervised
0