text: string (lengths 82 to 736)
label: int64 (0 or 1)
for the task of event trigger prediction , we train a multi-class logistic regression classifier using liblinear---we use a multi-class logistic regression classifier , and concatenate multiple features into a single vector
1
in the joint learning framework , the contextual information is captured following the context prediction task introduced by---the values of the word embeddings matrix e are learned using the neural network model introduced by
1
in the second category , the context of subjective text is used---in the second category , subjectivity of a phrase or word is analyzed within its context
1
twitter is a popular microblogging service which provides real-time information on events happening across the world---twitter is a social platform which contains rich textual content
1
for all submissions , we used the phrase-based variant of the moses decoder---we evaluated the reordering approach within the moses phrase-based smt system
1
however , in many applications , we need to mine topics from unaligned text corpus---with the existing models , we can only extract topics from text
1
sentences are ranked by their salience according to specific strategies---textrank sentences are scored by their centrality in the graph with sentences as the nodes
1
we induce a topic-based vector representation of sentences by applying the latent dirichlet allocation method---this task can be formulated as a topic modeling problem for which we chose to employ latent dirichlet allocation
1
crowdsourcing is a cheap and increasingly-utilized source of annotation labels---we have described a new stochastic grammatical channel model for statistical machine translation that exhibits several nice properties
0
we build an open-vocabulary language model with kneser-ney smoothing using the srilm toolkit---when parsers are trained on ptb , we use the stanford pos tagger
0
furthermore , it is shown that additional precision gains may be achieved by incorporating feature sets of higher-order n-grams---while precision gains can be achieved by augmenting these feature sets with higher-order n-grams , a significant cost is incurred
1
we initialize the embedding layer by pretrained skipgram embeddings induced from the training set of ratebeer dataset---in addition , we utilize the pre-trained word embeddings with 300 dimensions from for initialization
1
they were acquired automatically using a domain-independent statistical parsing toolkit , rasp , and a classifier which identifies verbal scfs---valex was acquired automatically using a domain-independent statistical parsing toolkit , rasp , and a classifier which identifies verbal scfs
1
in this paper , we explore the possibilities of leveraging residual learning to improve the performances of recurrent structures , in particular , lstm rnn , in modeling fairly long sequences ( i.e. , whose lengths exceed 100 )---in this paper , we explore the possibility of leveraging residual networks ( resnet ) , a powerful structure in constructing extremely deep neural network
1
collobert and weston propose a unified deep convolutional neural network for different tasks by using a set of taskindependent word embeddings together with a set of task-specific word embeddings---collobert and weston used convolutional neural networks in a multitask setting , where their model is trained jointly for multiple nlp tasks with shared weights
1
we obtain new state-of-the-art performance in extracting standard fields from research papers , with a significant error reduction by several metrics---on a standard benchmark data set , we achieve new state-of-the-art performance , reducing error in average f1 by 36 % , and word error rate by 78 %
1
neural network modeling has been explored to some extent in the context of this task---neural network models are an attractive alternative for this task
1
the uima-based architecture of dkpro keyphrases allows users to easily evaluate keyphrase extraction configurations---we used classification of politeness factors in line with trosborg and díaz-pérez
0
the interpretation of event descriptions is highly contextually dependent---we evaluate global translation quality with bleu and meteor
0
this model uses multilingual word embeddings trained using fasttext and aligned using muse---the input to this network consists of pre-trained word embeddings extracted from the 300-dimensional fasttext embeddings
1
chang et al proposed a probabilistic first-order inductive learning algorithm for error classification and outperformed some basic classifiers---chang et al proposed a penalized probabilistic first-order inductive learning algorithm for error diagnosis
1
to pursue these questions , we started with constructing a document-level readability model---it has been shown that ebm practitioners often do not pursue evidence based answers to clinical questions because of the time required
0
figure 6 shows that our approaches consistently outperform the baseline and the state-of-the-art methods with diverse feature sparsity degrees---we lemmatise each word using the wordnet nltk lemmatiser
0
many works have shown that the additional semantics in word embeddings can enhance the performance of traditional topic models---generative models of word embeddings have recently been proposed in topic modeling in order to capture the semantic structure of words and documents
1
biadsy et al describe a phonotactic approach that automatically identifies the arabic dialect of a speaker given a sample of speech---biadsy et al present a system that identifies dialectal words in speech and their dialect of origin through the acoustic signals
1
the evaluation protocol and metrics were very similar to which allowed us to do indirect comparison to previous work---in this paper is to examine the utility of a paraphrase identification approach that relies solely on mt evaluation metrics
0
gram language model with modified kneser-ney smoothing is trained with the srilm toolkit on the epps , ted , newscommentary , and the gigaword corpora---we use sri language model toolkit to train a 5-gram model with modified kneser-ney smoothing on the target-side training corpus
1
the state-of-the-art baseline is a standard phrase-based smt system tuned with mert---the baseline system is a phrase-based smt system , built almost entirely using freely available components
1
we use the glove vector representations to compute cosine similarity between two words---we use pre-trained glove vector for initialization of word embeddings
1
we use glove vectors with 200 dimensions as pre-trained word embeddings , which are tuned during training---for the character-based model we use publicly available pre-trained character embeddings 3 derived from glove vectors trained on common crawl
1
the language model was a 5-gram language model estimated on the target side of the parallel corpora by using the modified kneser-ney smoothing implemented in the srilm toolkit---the system used a tri-gram language model built from sri toolkit with modified kneser-ney interpolation smoothing technique
1
this grammar consists of a lexicon which pairs words or phrases with regular expression functions---the grammar we have implemented consists of only 6 id schemata , 68 lexical entries ( assigned to functional words ) , and 63 lexical entry templates ( assigned to parts of speech ( pos ) )
1
the crf ptt and cl i per h methods successfully labeled these two examples correctly , but failed to produce the correct label for the example in figure 1---lstm units are firstly proposed by hochreiter and schmidhuber to overcome gradient vanishing problem
0
we used svm-light-tk , which enables the use of the partial tree kernel---to calculate the constituent-tree kernels st and sst we used the svm-light-tk toolkit
1
for both languages , we used the srilm toolkit to train a 5-gram language model using all monolingual data provided---for the fluency and grammaticality features , we train 4-gram lms using the development dataset with the sri toolkit
1
prefix probabilities and right prefix probabilities for pscfgs can be exploited to compute probability distributions for the next word or part-of-speech in left-to-right incremental translation , essentially in the same way as described by cite-p-7-1-9 for probabilistic context-free grammars , as discussed later in this paper---prefix probabilities and right prefix probabilities for pscfgs can be exploited to compute probability distributions for the next word or part-of-speech in left-to-right incremental translation of speech , or alternatively
1
our 5-gram language model was trained by srilm toolkit---profile hmms can be adapted to the task of aligning multiple words
0
word segmentation is a classic bootstrapping problem : to learn words , infants must segment the input , because around 90 % of the novel word types they hear are never uttered in isolation ( cite-p-13-1-0 , cite-p-13-3-8 )---word segmentation is a fundamental task for processing most east asian languages , typically chinese
1
we use srilm for training a trigram language model on the english side of the training data---for language model scoring , we use the srilm toolkit training a 5-gram language model for english
1
täckström et al investigate weakly supervised pos tagging in low-resource languages , combining dictionary constraints and labels projected across languages via parallel corpora and automatic alignment---we use the implementation of clark et al to compute the p-value via approximate randomization algorithms
0
we use a set of 318 english function words from the scikit-learn package---we implemented linear models with the scikit learn package
1
this variation poses challenges for natural language processing tasks---to overcome this problem , shen et al proposed a dependency language model to exploit longdistance word relations for smt
0
we learn the noise model parameters using an expectation-maximization approach---in this paper , we present a greedy non-directional parsing algorithm which doesn ’ t need a fully connected parse and can learn from partial parses
0
the language model was a kneser-ney interpolated trigram model generated using the srilm toolkit---the approach is analogous to the recently emerged lstm and gated neural network
0
mln framework has been adopted for several natural language processing tasks and achieved a certain level of success---mln has been applied in several natural language processing tasks and demonstrated its advantages
1
we extract named entities using a python wrapper for the stanford ner tool---we extract the named entities from the web pages using the stanford named entity recognizer
1
we implemented the different aes models using scikit-learn---we used crfsuite and the glove word vector
0
each discourse relation is a set of four : two arguments , connective words and senses---we list several systems and their performance on the task
0
japanese loanwords have attracted much interest from researchers---japanese loanwords would be an interesting subject to work on in the study of meaning change
1
for annotation tasks , snow et al showed that crowdsourced annotations are similar to traditional annotations made by experts---a phrase is defined as a group of source words f ? that should be translated together into a group of target words e ?
0
lu et al , 2009 , focuses on summarising short comments , each associated with an overall rating---lu et al , 2009 , used shallow parsing to identify aspects for short comments
1
to train our models , we use svm-light-tk 15 , which enables the use of structural kernels in svm-light---to train our reranking models we used svm-light-tk 7 , which encodes structural kernels in svmlight solver
1
for language model , we use a trigram language model trained with the srilm toolkit on the english side of the training corpus---we follow previous studies , conducting experiments by using the rst discourse treebank
0
for pos-tagging , we used the stanford pos-tagger---the model parameters in word embedding are pretrained using glove
0
we use the pre-trained glove 50-dimensional word embeddings to represent words found in the glove dataset---we use glove pre-trained word embeddings , a 100 dimension embedding layer that is followed by a bilstm layer of size 32
1
some of the work is not related to discourse at all , morphosyntactic similarities and word-based measures like tf-idf ,---some of these are not related to discourse at all , morphosyntactic similarities and word based measures like tf-idf ,
1
we employ the orthant-wise limited-memory quasi newton optimizer for l1 regularization---the crf parameters are regularized using an l1 penalty and optimized via orthant-wise limited-memory quasi-newton optimization
1
the numbers in the table are bleu scores of different neural models---the bleu score for all the methods is summarised in table 5
1
we propose a framework to quantitatively characterize competition and cooperation between ideas in texts , independent of how they might be represented---a pun is a word used in a context to evoke two or more distinct senses for humorous effect
0
while this was plausible on 2009 data that focused on the swine flu epidemic , it is clearly false for more typical flu seasons---semantic role features and pronominal ranking feature much improve the performance of pronoun resolution , especially when the detailed pronominal subcategory features
0
word sense disambiguation ( wsd ) is the task of determining the correct meaning ( “ sense ” ) of a word in context , and several efforts have been made to develop automatic wsd systems---word sense disambiguation ( wsd ) is the task of determining the meaning of an ambiguous word in its context
1
this paper proposed a novel angle to the problem by modeling pu ( positive unlabeled ) learning---this paper proposes a novel pu learning ( mpipul ) technique to identify deceptive reviews
1
a synchronous context-free grammar is extracted from the alignments---garera , et al defines context vectors on the dependency tree rather than using adjacency
0
word2vec 3 was trained with all 3 million sentences of aspec---those models were trained using word2vec skip-gram and cbow
1
park and levy proposed an em-based unsupervised approach to perform whole sentence grammar correction , but the types of errors must be predetermined to learn the parameters for their noisy channel model---park and levy proposed a language-modeling approach to whole sentence error correction but their model is not competitive with individually trained models
1
pseudo-projective parsing was proposed by nivre and nilsson as a way of dealing with nonprojective structures in a projective data-driven parser---pseudo-projective parsing , proposed by nivre and nilsson , is a general technique applicable to any data-driven parser
1
distortion is the sum of the distances between the representative sentence of the cluster at each node and the other sentences in the same cluster---distortion can be succinctly defined as the information loss in the meaning of the sentences due to their representation with other sentences
1
our algorithm models transitions rather than incremental derivations , and hence we don ’ t need an incremental ccgbank---as our algorithm does not model derivations , but rather models transitions , we do not need a treebank of incremental ccg derivations
1
information extraction ( ie ) is a fundamental technology for nlp---information extraction ( ie ) is a technology that can be applied to identifying both sources and targets of new hyperlinks
1
in section 5 , we present a different approach to phonestheme meaning induction that exploits the properties of word embeddings in a fully unsupervised manner and yields substantially better results---on this front , we have proposed a fully unsupervised meaning induction method that relies on extracting semantic nearest neighbors of a phonesthemic cluster
1
note that , in parallel to our efforts , cheng et al have explored the usage of both source and target monolingual data using a similar semi-supervised reconstruction method , in which two nmts are employed---in parallel to our work , cheng et al propose a similar semi-supervised framework to handle both source and target language monolingual data
1
many words have multiple meanings , and the process of identifying the correct meaning , or sense of a word in context , is known as word sense disambiguation ( wsd )---the vectors are given by a word2vec model and a glove model trained on german data
0
deep neural networks have seen widespread use in natural language processing tasks such as parsing , language modeling , and sentiment analysis---neural models have shown great success on a variety of tasks , including machine translation , image caption generation , and language modeling
1
the translation performance was measured using the bleu and the nist mt-eval metrics , and word error rate---the quality of the translation was assessed by the bleu index , calculated using a perl script provided by nist
1
coreference resolution is a key task in natural language processing ( cite-p-13-1-8 ) aiming to detect the referential expressions ( mentions ) in a text that point to the same entity---coreference resolution is a key problem in natural language understanding that still escapes reliable solutions
1
the constraint grammar paradigm is a popular formalism for performing partof-speech disambiguation , surface syntactic tagging , and certain forms of dependency analysis---e ∈ e is a triple math-w-6-6-0-121 is its head node , t ( e ) ∈ n ∗ is a set of tail nodes and f ( e ) is a monotonic weight function
0
semantic parsing is the task of mapping a natural language ( nl ) sentence into a completely formal meaning representation ( mr ) or logical form---semantic parsing is a domain-dependent process by nature , as its output is defined over a set of domain symbols
1
the experimental results show overall high performance---our experimental results have overall high performance
1
our word embeddings is initialized with 100-dimensional glove word embeddings---for the mix one , we also train word embeddings of dimension 50 using glove
1
transliteration is the task of converting a word from one writing script to another , usually based on the phonetics of the original word---phonetic translation across these pairs is called transliteration
1
word sense disambiguation ( wsd ) is the nlp task that consists in selecting the correct sense of a polysemous word in a given context---word sense disambiguation ( wsd ) is a key enabling-technology
1
finally , we propose lea , a link-based entity aware evaluation metric that is designed to overcome problems of the existing metrics---we propose lea , a link-based entity-aware evaluation metric that is designed to overcome the shortcomings of the current evaluation
1
in contrast , our system adopts bidirectional lstm on the concatenation of feature embeddings---our model adopts bidirectional lstm for capturing both forward and backward orders
1
we train our own word alignment model using the state-of-the-art word alignment tool berkeley aligner---this type of features are based on a trigram model with kneser-ney smoothing
0
word similarity is typically low for synonyms that have many word senses since information about different senses are mashed together---word similarity is typically low for synonyms having many word senses since information about different senses are mashed together
1
the out-of-vocabulary is defined as the words in the test set that are not in the training set---the out-of-vocabulary is defined as tokens in the test set that are not in the training set
1
kenlm is used to train a 5-gram language model on english gigaword---discourse is a structurally organized set of coherent text segments
0
we also use a 4-gram language model trained using srilm with kneser-ney smoothing---we estimated 5-gram language models using the sri toolkit with modified kneser-ney smoothing
1
zhou and xu use a bidirectional wordlevel lstm combined with a conditional random field for semantic role labeling---neural network models have been exploited to learn dense feature representation for a variety of nlp tasks
0
the composite kernel consists of an entity kernel and a convolution parse tree kernel---our composite kernel consists of a history sequence and a domain context tree kernels , both of which are composed based on similar textual units in wikipedia articles to a given dialog context
1
a core feature of learning to write is receiving feedback and making revisions based on that feedback---a core feature of learning to write is receiving feedback and making revisions based on the information provided
1
we also use a 4-gram language model trained using srilm with kneser-ney smoothing---we estimate a 5-gram language model using interpolated kneser-ney discounting with srilm
1
in this work , we present a multi-pass coarse-to-fine architecture for graph-based dependency parsing---we propose a multi-pass coarse-to-fine architecture for dependency parsing
1
although this work represents the first formal study of relationship questions that we are aware of , by no means are we claiming a solution—we see this as merely the first step in addressing a complex problem---although this work represents the first formal study of relationship questions that we are aware of , by no means are we claiming a solution —
1
semantic role labeling ( srl ) is the process of assigning semantic roles to strings of words in a sentence according to their relationship to the semantic predicates expressed in the sentence---semantic role labeling ( srl ) is a kind of shallow sentence-level semantic analysis and is becoming a hot task in natural language processing
1
in this paper , we first describe our task setting of opinion extraction---in this paper , we described our opinion extraction task , which extract opinion
1
we present an evaluation on a neural machine translation task that shows improvements of up to 5.89 bleu points for domain adaptation from simulated bandit feedback---on the task of neural machine translation domain adaptation , we found relative improvements of up to 5.89 bleu points over out-of-domain seed
1
in our experiments the mt system used is hierarchical phrase-based system---we use our implementation of hierarchical phrase-based smt , with standard features , for the smt experiments
1
phrase-based models have been strong in local translation and reordering---word alignment models have been widely used for lexical acquisition in smt
1
the evaluation shows that each type of sequences is useful to temporal relation classification between events---evaluation shows that our sequential models are promising in distinguishing among fine-grained temporal relations
1