text: string (lengths 82-736)
label: int64 (values 0 or 1)
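Each row below is a pair of sentences joined by "---", followed on the next line by a binary label (1 when the two sentences appear to express the same content, 0 otherwise). A minimal parsing sketch under that assumption, in Python; the filename pairs.txt and the alternating text-line/label-line layout are assumptions based on this preview, not a confirmed loader for the dataset:

from typing import List, Tuple

def load_pairs(path: str) -> List[Tuple[str, str, int]]:
    # Read non-empty lines; rows are assumed to alternate: text line, label line.
    with open(path, encoding="utf-8") as f:
        lines = [ln.strip() for ln in f if ln.strip()]
    rows = []
    for text_line, label_line in zip(lines[0::2], lines[1::2]):
        # Each text line holds two sentences separated by "---".
        left, _, right = text_line.partition("---")
        rows.append((left.strip(), right.strip(), int(label_line)))
    return rows

if __name__ == "__main__":
    for a, b, label in load_pairs("pairs.txt")[:3]:
        print(label, "|", a[:60], "|", b[:60])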
these results suggest that this model formalizes underlying principles that account for speakers ’ choices of referring expressions---but it is not clear how these models would predict speakers ’ choices of referring expressions
1
we optimise the feature weights of the model with minimum error rate training against the bleu evaluation metric---modified kneser-ney trigram models are trained using srilm upon the chinese portion of the training data
0
biadsy et al describe a phonotactic approach that automatically identifies the arabic dialect of a speaker given a sample of speech---parameter tuning was carried out using both k-best mira and minimum error rate training on a held-out development set
0
we use 300d glove vectors trained on 840b tokens as the word embedding input to the lstm---the quality of translations is evaluated by the case insensitive nist bleu-4 metric
0
language models were built using the sri language modeling toolkit with modified kneser-ney smoothing---trigram language models were estimated using the sri language modeling toolkit with modified kneser-ney smoothing
1
so if we neglect the notion and revise the taxonomy of sanders et al , we can present the upper-level ontology as follows---following sanders et al , 1992 sanders et al , 1993 , we will construct an upper-level ontology
1
we used the default parameter in svm light for all trials---for the support vector machine , we used svm-light
1
we use the same metrics as described in wu et al , which is similar to those in---we use the same evaluation criterion as described in
1
for convenience we will use the rule notation of simple rcg , which is a syntactic variant of lcfrs , with an arguably more transparent notation---for convenience we will use the rule notation of simple rcg , which is a syntactic variant of lcfrs
1
vector space models of word meaning represent words as points in a high-dimensional semantic space---co-occurrence space models represent the meaning of a word as a vector in high-dimensional space
1
additionally , we evaluate different approaches for lexical representation---in section 5 . 2 , we explore an alternative approach for lexical representations
1
the encoder-decoder model has been shown effective in the field of machine translation---this idea has been recently introduced in many nlp tasks , such as machine translation
1
research on automatic semantic structure extraction has been widely studied since the pioneering work of gildea and jurafsky---the pioneering work on building an automatic semantic role labeler was proposed by gildea and jurafsky
1
relation extraction is a subtask of information extraction that finds various predefined semantic relations , such as location , affiliation , rival , etc. , between pairs of entities in text---we used the opennmt-tf framework 4 to train a bidirectional encoder-decoder model with attention
0
we used trigram language models with interpolated kneser-ney discounting trained using the sri language modeling toolkit---we use srilm toolkit to build a 5-gram language model with modified kneser-ney smoothing
1
word sense disambiguation ( wsd ) is a key enabling technology that automatically chooses the intended sense of a word in context---we used the srilm toolkit to create 5-gram language models with interpolated modified kneser-ney discounting
0
we use different pretrained word embeddings such as glove and fasttext as the initial word embeddings---for word embeddings , we used popular pre-trained word vectors from glove
1
smt has evolved from the original word-based approach into phrase-based approaches and syntax-based approaches---during the last few years , smt systems have evolved from the original word-based approach to phrase-based translation systems
1
named entity recognition ( ner ) is a frequently needed technology in nlp applications---named entity recognition ( ner ) is the task of detecting named entity mentions in text and assigning them to their corresponding type
1
in section 3 we then describe the probabilistic taxonomy learning model introduced by---we then describe how we introduced svd as natural feature selector in the probabilistic taxonomy learning model introduced by
1
case-insensitive nist bleu was used to measure translation performance---translation performances are measured with case-insensitive bleu4 score
1
we adapt the models of mikolov et al and mikolov et al to infer feature embeddings---we use the pre-trained word2vec embeddings provided by mikolov et al as model input
1
in such cases , as in , we only take the first sense of the word and the first hypernym listed for each level of the hierarchy---in such cases , as in gildea and jurafsky , we only take the first sense of the word and the first hypernym listed for each level of the hierarchy
1
we used minimum error rate training to tune the feature weights for maximum bleu on the development set---we used the pharaoh decoder for both the minimum error rate training and test dataset decoding
1
we use 300-dimensional word embeddings from glove to initialize the model---the objective of subjectivity analysis is to identify text that presents opinion as opposed to objective text that presents factual information
0
for parsing , only a set of possible vns has to be provided---to include vns into pdas , a set of vns has to be provided
1
we measure machine translation performance using the bleu metric---large scale knowledge bases like dbpedia and freebase provide structured information in diverse domains
0
in our experiment , word embeddings were 200-dimensional as used in , trained on gigaword with word2vec---to encode the original sentences we used word2vec embeddings pre-trained on google news
1
in this section , we describe the observed data , latent variables , and auxiliary variables of the problem and show an example in fig . 1---in this section , we describe the observed data , latent variables , and auxiliary variables of the problem
1
therefore , we employ negative sampling and adam to optimize the overall objective function---on bsus is constructed to capture the semantic information of texts
0
word sense disambiguation ( wsd ) is a key enabling technology that automatically chooses the intended sense of a word in context---word sense disambiguation ( wsd ) is the problem of assigning a sense to an ambiguous word , using its context
1
our model extends the rational speech act model from cite-p-21-3-1 to incorporate updates to listeners ’ beliefs as discourse proceeds---commonly used compositionality functions are vector addition and pointwise vector multiplication
0
therefore , the main extension towards a comprehensive model of the acquisition of allophonic rules would be to include acoustic indicators---and , based on theoretical and empirical desiderata , we outline a more comprehensive framework to model the acquisition of allophonic rules
1
we use the opensource moses toolkit to build a phrase-based smt system---on this latter condition , and only 5 % of 130 humans performed 100 or more classifications with higher accuracy than this machine
0
an attention-based nmt system uses a bidirectional rnn as an encoder and a decoder that emulates searching through a source sentence during decoding---we use the stanford pos tagger to obtain the lemmatized corpora for the parss task
0
sarcasm , commonly defined as ‘ an ironical taunt used to express contempt ’ , is a challenging nlp problem due to its highly figurative nature---sarcasm is a form of speech in which speakers say the opposite of what they truly mean in order to convey a strong sentiment
1
this paper proposed a method for inserting linefeeds into discourse speech data---the minimum error rate training was used to tune the feature weights
0
a disadvantage of the log-linear models is that they require cluster computing resources for practical training---a 5-gram language model with kneser-ney smoothing is trained using srilm on the target language
0
ling et al used bi-lstm combining words and characters vector representations to achieve comparable results to state-of-the-art english pos tagging---ling et al achieve state-of-the-art results in language modeling and part-of-speech tagging by utilizing these word representations
1
as a preliminary study , we treat this task as a special kind of document summarization based on sentence extraction---as a preliminary study , we treat this task as a special kind of document summarization : extracting sentences from live texts to form a match report
1
in phrase-based smt models , phrases are used as atomic units for translation---in phrase-based smt , the building blocks of translation are pairs of phrases
1
five-gram language model parameters are estimated using kenlm---note that visweswariah et al used only manually aligned data for training the tsp model
0
crfs has been used for sequential labeling problems such as text chunking and named entity recognition---the crf model has been widely used in nlp segmentation tasks , such as shallow parsing , named entity recognition , and word segmentation
1
we use srilm for n-gram language model training and hmm decoding---in which the anaphoric expression refers to an abstract object such as a proposition , a property , or a fact is known as abstract object
0
runyankore is a bantu language spoken in the south western part of uganda by over two million people , which makes it one of the top five most populous languages in uganda---as embedding vectors , we used the publicly available representations obtained from the word2vec cbow model
0
in this work , we build a very large dataset for fine-grained emotions and develop deep learning models on it---in this work , we seek to enable deep learning by creating a large dataset of fine-grained emotions
1
we use word2vec to train the word embeddings---then , we trained word embeddings using word2vec
1
we use the moses mt framework to build a standard statistical phrase-based mt model using our old-domain training data---we use moses to train our phrase-based statistical mt system using the same parallel text as the nmt model , with the addition of common crawl for phrase extraction
1
word alignment is a central problem in statistical machine translation ( smt )---word alignment is the task of identifying corresponding words in sentence pairs
1
coreference resolution is the process of linking together multiple expressions of a given entity---coreference resolution is the task of grouping all the mentions of entities in a document into equivalence classes so that all the mentions in a given class refer to the same discourse entity
1
also , grammar appears to play a more important role in second language readability than in first language readability---grammatical features may play a more important role in second language readability than in first language readability
1
the experiments were conducted with the scikit-learn tool kit---the experiment was set up and run using the scikit-learn machine learning library for python
1
a recent advance in this area is xue , in which the author uses a sliding-window maximum entropy classifier to tag chinese characters into one of four position tags , and then convert these tags into a segmentation using rules---further , we apply a 4-gram language model trained with the srilm toolkit on the target side of the training corpus
0
we use expectation-maximization for training---we adapt expectation maximization to find an optimal clustering
1
we used treetagger based on the english parameter files supplied with it---for all languages except spanish , we used the treetagger with its built-in lemmatizer
1
dave et al , riloff and wiebe , bethard et al , wilson et al , yu and hatzivassiloglou , choi et al , kim and hovy ,---dave et al , riloff and wiebe , bethard et al , wilson et al , yu and hatzivassiloglou , choi et al , kim and hovy , wiebe and riloff ,
1
socher et al introduced a family of recursive neural networks to represent sentence-level semantic composition---socher et al proposed a feature learning algorithm to discover explanatory factors in sentiment classification
1
word sense disambiguation is the task of identifying the intended meaning of a given target word from the context in which it is used---we used 5-gram models , estimated using the sri language modeling toolkit with modified kneser-ney smoothing
0
context unification is the satisfiability problem of context constraints---context unification is the problem of solving context constraints over finite trees
1
in fact , significant works have attributed emotional dynamics as an interactive phenomenon , rather than being within-person and one-directional---richards et al attribute emotional dynamics to be an interactive phenomenon , rather than being within-person
1
these features are then the input to a logistic classifier for pi---matrices are then the input to a logistic classifier for pi
1
a 4-gram language model was trained on the monolingual data by the srilm toolkit---we utilise state-of-the-art techniques to develop a method for automatic extraction of news values from headline text
0
we also report state-of-the-art results on the multi30k data set---these language models were built up to an order of 5 with kneser-ney smoothing using the srilm toolkit
0
it has been empirically shown that word embeddings can capture semantic and syntactic similarities between words---studies have also shown that the learned embedding captures both syntactic and semantic functions of words
1
moreover , when combining three state-of-the-art systems , the collaborative ensemble achieves the second-best results reported in the literature so far ( mela score of 64.47 )---on top of three state-of-the-art resolvers , we obtain the second-best coreference performance reported so far in the literature ( mela v08 score of 64 . 47 )
1
this paper proposes a generalized training framework of semi-supervised dependency parsing based on ambiguous labelings---this paper proposes a simple yet effective framework for semi-supervised dependency parsing
1
we used the glove embeddings for these features---we used crfsuite and the glove word vector
1
we address this by introducing a robust system based on the lambda calculus for deriving neo-davidsonian logical forms from dependency trees---we use srilm toolkit to build a 5-gram language model with modified kneser-ney smoothing
0
in addition , we have designed a hybrid model which combines the seq2seq model and a retrieval model to further improve performance---coreference resolution is the task of clustering a sequence of textual entity mentions into a set of maximal non-overlapping clusters , such that mentions in a cluster refer to the same discourse entity
0
sentiment analysis is a fundamental problem aiming to give a machine the ability to understand the emotions and opinions expressed in a written text---sentiment analysis is an nlp task that deals with extraction of opinion from a piece of text on a topic
1
with this scheme the disambiguation methods are considered as experts providing a preference ranking over the sense of the word---in wikipedia , we show how we can generate sense annotated corpora that can be used for building accurate and robust sense classifiers
0
we show how the expected similarity maximization can be efficiently computed for these kernels---and thus rational kernels , the expected similarity maximization problem can be solved efficiently
1
our model uses non-negative matrix factorization in order to find latent dimensions---then , we extend the seq2seq framework to jointly conduct template reranking and template-aware summary generation
0
kim et al apply a simple convolutional neural network model , which uses character level inputs for word representations---kim et al proposed a convolutional module to process complex inputs for the problem of language modeling
1
semantic parsing is the problem of deriving a structured meaning representation from a natural language utterance---semantic parsing is the task of mapping a natural language ( nl ) sentence into a complete , formal meaning representation ( mr ) which a computer program can execute to perform some task , like answering database queries or controlling a robot
1
we use the automatic mt evaluation metrics bleu , meteor , and ter , to evaluate the absolute translation quality obtained---we used the bleu score to evaluate the translation accuracy with and without the normalization
1
those so-called hearst patterns occur frequently in lexicons for describing a term---these so-called hearst patterns can be expected to occur frequently in lexicons for describing a term
1
part-of-speech tagging is the problem of determining the syntactic part of speech of an occurrence of a word in context---part-of-speech tagging is the assignment of syntactic categories ( tags ) to words that occur in the processed text
1
in order to train the argument identification and role label disambiguation classifiers , we used the english portion of the conll 2009 shared task---for training and testing our srl systems we used a version of the conll 2008 shared task dataset that only mentions verbal predicates , disregarding the nominal predicates available in the original corpus
1
various optimisations were made to each string comparison method to reduce retrieval time , of the type described by baldwin and tanaka---although word embeddings have been successfully employed in many nlp tasks , the application of word embeddings in re is very recent
0
we compare the final system to moses , an open-source translation toolkit---for building the baseline smt system , we used the open-source smt toolkit moses , in its standard setup
1
undersampling causes negative effect on active learning---undersampling causes negative effects on active learning
1
we trained a specific language model using srilm from each of these corpora in order to estimate n-gram log-probabilities---pitler and nenkova show that discourse coherence features are more informative than other features for ranking texts with respect to their readability
0
we test the statistical significance of differences between various mt systems using the bootstrap resampling method---to compute the statistical significance of the performance differences between qe models , we use paired bootstrap resampling following koehn
1
additionally , we compile the model using the adamax optimizer---we train our model using adam optimization for better robustness across different datasets
1
we use binary crossentropy loss and the adam optimizer for training the nil-detection models---we train the classifier with log-loss and adam optimization algorithm , including dropout and early stopping for regularization
1
our translation system uses cdec , an implementation of the hierarchical phrase-based translation model that uses the kenlm library for language model inference---our baseline russian-english system is a hierarchical phrase-based translation model as implemented in cdec
1
we used moses with the default configuration for phrase-based translation---we trained a 3-gram language model on all the correct-side sentences using kenlm
0
coreference resolution is a well known clustering task in natural language processing---coreference resolution is the task of grouping mentions to entities
1
in this case the environment of a learning agent is one or more other agents that can also be learning at the same time---we perform pre-training using the skipgram nn architecture available in the word2vec tool
0
we used the moses toolkit to build mt systems using various alignments---experiments ( section 5 ) show a statistically significant improvement of + 0 . 7 bleu points over a state-of-the-art forest-based tree-to-string system even with less translation rules
0
twitter is a microblogging service , which according to latest statistics , has 284 million active users , 77 % outside the us that generate 500 million tweets a day in 35 different languages---to get a dictionary of word embeddings , we use the word2vec tool and train it on the chinese gigaword corpus
0
our approach relies on long short-term memory networks---our rnn model uses a long short-term memory component
1
semantic parsing is the task of converting a sentence into a representation of its meaning , usually in a logical form grounded in the symbols of some fixed ontology or relational database ( cite-p-21-3-3 , cite-p-21-3-4 , cite-p-21-1-11 )---semantic parsing is the task of mapping natural language sentences to complete formal meaning representations
1
recently , cnns have been successfully applied to various text and semantic sentence classification tasks , and often achieved very good performance---it has been demonstrated that cnns produce state-of-the-art results in many nlp tasks such as text classification and sentiment analysis
1
turian et al used unsupervised word representations as extra word features to improve the accuracy of both ner and chunking---turian et al create powerful word embedding by training on real and corrupted phrases , optimizing for the replaceability of words
1
boostedmert is easy to implement , inherits mert ’ s efficient optimization procedure , and more effectively boosts the training score---boostedmert is easy to implement , inherits the efficient optimization properties of mert , and can quickly boost the bleu score
1
coreference resolution is the problem of identifying which mentions ( i.e. , noun phrases ) refer to which real-world entities---coreference resolution is a key problem in natural language understanding that still escapes reliable solutions
1
cite-p-17-1-2 proposed a simple customization of recursive neural networks---in this work , we propose a method for joint dependency parsing and disfluency detection
0
these energy functions are encoded from design guidelines or learned from scene data---these energy functions are encoded from interior design guidelines or learned from input scene data
1