Columns: text (string, length 82-736), label (int64, values 0 or 1)
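The rows below are records of two citation sentences joined by `---`, followed by a binary label (1 appears to mark pairs describing the same work or claim, 0 unrelated pairs). A minimal Python sketch for splitting such a record into its parts; the delimiter and label semantics are assumptions from inspecting the rows, not a documented schema:

```python
# Parse one dataset record of the form "sentence_a---sentence_b" plus a label.
# NOTE: the "---" pair delimiter and the meaning of the label are assumptions
# inferred from the rows shown below, not from an official dataset card.

def parse_record(text: str, label: int) -> dict:
    """Split a 'sent_a---sent_b' record into its two sentences and label."""
    sent_a, _, sent_b = text.partition("---")
    return {
        "sentence_a": sent_a.strip(),
        "sentence_b": sent_b.strip(),
        "label": int(label),
    }

record = parse_record(
    "lda is the most popular unsupervised topic model---"
    "generative topic models widely used for ir include plsa and lda",
    1,
)
print(record["sentence_a"])
print(record["label"])
```

`str.partition` splits only on the first occurrence of the delimiter, so a stray `--` inside either sentence does not break the parse as long as the full `---` separator appears exactly once.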
an event schema is a set of actors ( also known as slots ) that play different roles in an event , such as the perpetrator , victim , and instrument in a bombing event---it is a standard phrasebased smt system built using the moses toolkit
0
in our experiments we use word2vec as a representative scalable model for unsupervised embeddings---for a fair comparison to our model , we used word2vec , that pretrain word embeddings at a token level
1
we use the crf learning algorithm , which consists in a framework for building probabilistic models to label sequential data---we use a conditional random field formalism to learn a model from labeled training data that can be applied to unseen data
1
recent research in this area has resulted in the development of several large kgs , such as nell , yago , and freebase , among others---over the last few years , several large scale knowledge bases such as freebase , nell , and yago have been developed
1
word sense disambiguation ( wsd ) is the task of assigning sense tags to ambiguous lexical items ( lis ) in a text---we also examine the possibility of using similarity metrics defined on wordnet
0
lda is the most popular unsupervised topic model---generative topic models widely used for ir include plsa and lda
1
the language models were interpolated kneser-ney discounted trigram models , all constructed using the srilm toolkit---brockett et al employed phrasal statistical machine translation techniques to correct countability errors
0
the system used in this study is alpino , a wide-coverage stochastic attribute value grammar for dutch---the performance of our approach is tested in a case study with the wide-coverage alpino grammar of dutch
1
we use 300-dimensional word embeddings from glove to initialize the model---a gaussian process is a generative model of bayesian inference that can be used for function regression
0
semantic role labeling ( srl ) is a task of analyzing predicate-argument structures in texts---semantic role labeling ( srl ) is defined as the task to recognize arguments for a given predicate and assign semantic role labels to them
1
takamura et al used the spin model to extract word semantic orientation---takamura et al also have reported a method for extracting polarity of words
1
it mimics the incremental initialization of johnson and goldwater---this is the same set-up used by klein , goldwater et al , and johnson and goldwater
1
gildea and jurafsky presented an early framenet-based srl system that targeted both verbal and nominal predicates---gildea proposed a probabilistic discriminative model to assign a semantic roles to the constituent
1
a 4-grams language model is trained by the srilm toolkit---stance detection is the task of classifying the attitude previous work has assumed that either the target is mentioned in the text or that training data for every target is given
0
since passage information relevant to question is more helpful to infer the answer in reading comprehension , we apply self-matching based on question-aware representation and gated attention-based recurrent networks---based on question-aware passage representation , we employ gated attention-based recurrent networks on passage against passage itself , aggregating evidence relevant to the current passage
1
second , following kamvar et al , we evaluate the clusters produced by our approach against the gold-standard clusters using the adjusted rand index---on this task , we also introduce an automated metric that strongly correlates with human judgments
0
the purpose of the task is to find the best point of attachment in wordnet for a set of out of vocabulary ( oov ) terms---on the task of , given an out-of-vocabulary ( oov ) term , and an associated definition and part of speech , find the best point of attachment in wordnet
1
since the generated data is based on discrete symbols , we usually adopt policy gradient method to update model parameters of the generator---in practice , policy gradient method is usually used to calculate gradients for the generator due to discrete symbols
1
the parameters are initialized by the techniques described in---all parameters are initialized using glorot initialization
1
in order to find the shortest path between two concepts , the ontoscore system employs the single source shortest path algorithm of dijkstra---in order to find the shortest path between two concepts , ontoscore employs the single source shortest path algorithm of dijkstra
1
conversation is a joint social process , with participants cooperating to exchange information---we extract our paraphrase grammar from the french-english portion of the europarl corpus
0
for instance , the top performing system on the conll-2009 shared task employs over 50 language-specific feature templates---chen et al applied user preferences and product characteristics as attentions to words and sentences in reviews to learn the final representation for the sentences and reviews
0
we use the word2vec vectors with 300 dimensions , pre-trained on 100 billion words of google news---our experiments directly utilize the embeddings trained by the cbow model on 100 billion words of google news
1
this paper presents neural probabilistic parsing models which explore up to third-order graph-based parsing with maximum likelihood training criteria---work presents neural probabilistic graph-based models for dependency parsing , together with a convolutional part
1
it is also related to the markov random field methods for parsing suggested in , and the boosting methods for parsing in , collins 2000---the method is related to the boosting approach to ranking problems , the markov random field methods of , and the boosting approaches for parsing in
1
first , we propose new features based on neural networks to model various non-local translation phenomena---first , we model more features using neural networks , including two novel ones : a joint model with offset source context and a translation context
1
for this we explored one-class svm , pebl and found that pebl performs much better than any of the approaches discussed in one-class svm---previous literature was explored for the above and found pebl performs much better than other approaches
1
in section 2 , we describe the perceptron algorithm as a special case of the stochastic gradient descent algorithm---when we treat the perceptron algorithm as a special case of the sgd algorithm
1
we used scikit-lean toolkit , and we developed a framework to define functional classification models---we used svm implementations from scikit-learn and experimented with a number of classifiers
1
this paper presented a novel framework called error case frames for correcting preposition errors with feedback messages---in view of this background , this paper presents a novel error correction framework called error case frames
1
gram language models are trained over the target-side of the training data , using srilm with modified kneser-ney discounting---the baselines apply 4-gram lms trained by the srilm toolkit with interpolated modified kneser-ney smoothing
1
corpus pattern analysis attempts to catalog norms of usage for individual words , specifying them in terms of context patterns---word alignment is a well-studied problem in natural language computing
0
for sentence segmentation and tokenization up to and including full morphological disambiguation for all languages , we rely on the udpipe---in summary , we rely for most but not all languages on the tokenization and sentence splitting provided by the udpipe baseline
1
word segmentation is a prerequisite for many natural language processing ( nlp ) applications on those languages that have no explicit space between words , such as arabic , chinese and japanese---word segmentation is a fundamental task for chinese language processing
1
in this paper , we extend the model with a global model which takes the hyperlink structure of wikipedia into account---we combine the global hyperlink structure of wikipedia with a local bag-of-words probabilistic model
1
our translation model is implemented as an n-gram model of operations using srilm-toolkit with kneser-ney smoothing---for language model , we used sri language modeling toolkit to train a 4-gram model with modified kneser-ney smoothing
1
wang and manning , 2010 , develop a probabilistic model to learn tree-edit operations on dependency parse trees---wang and manning , 2010 , designed a probabilistic model to learn tree-edit operations on dependency parse trees
1
however , there is a much larger quantity of freely available web text to exploit---using the same framework describe here , it is possible to collect a much larger corpus of freely available web text
1
in contrast , we present a corpus-driven framework using which a user-adaptive reg policy can be learned using rl from a small corpus of non-adaptive human-machine interaction---to train on , we show that effective adaptive policies can be learned from a small dialogue corpus of non-adaptive human-machine interaction , by using a rl framework
1
we use the 200-dimensional global vectors , pre-trained on 2 billion tweets , covering over 27-billion tokens---we use glove 300-dimension embedding vectors pre-trained on 840 billion tokens of web data
1
following , we lower-case the text and remove all punctuations and partial words 1---following honnibal and johnson , we lower-case the text and remove all punctuations and partial words 2
1
coreference resolution is the task of determining when two textual mentions name the same individual---although coreference resolution is a subproblem of natural language understanding , coreference resolution evaluation metrics have predominately been discussed in terms of abstract entities and hypothetical system errors
1
abbreviation is defined as a shortened description of the original fully expanded form---we implement logistic regression with scikit-learn and use the lbfgs solver
0
the ubiquity of metaphor in our everyday communication makes it an important problem for natural language understanding---given its ubiquity , metaphorical language poses an important problem for natural language understanding
1
the search engine module performs language classification based on the maximum normalised score of the number of hits returned for two searches per token , one for each language---the latter performs language classification based on the maximum normalised score of the number of hits returned for two searches per token , one for each language
1
word sense disambiguation is the task of assigning a sense to a word based on the context in which it occurs---word sense disambiguation is the process of selecting the most appropriate meaning for a word , based on the context in which it occurs
1
moreover , word preference is captured and incorporated into our co-ranking algorithm---we add word preference information into our algorithm and make our co-ranking algorithm
1
furthermore , we train a 5-gram language model using the sri language toolkit---the language model is trained and applied with the srilm toolkit
1
the task is to classify whether each sentence provides the answer to the query---qa , the task is to locate the smallest span in the given paragraph that answers the question
1
named entity typing is the task of detecting the type ( e.g. , person , location , or organization ) of a named entity in natural language text---named entity typing is a fundamental building block for many natural-language processing tasks
1
to generate dependency links , we use the stanford pos tagger 18 and the malt parser---for all pos tagging tasks we use the stanford log-linear part-ofspeech tagger
1
recently approaches using neural networks have shown great improvements in a number of areas such as parsing ( cite-p-25-3-11 ) , machine translation ( cite-p-25-1-10 ) , and image captioning ( cite-p-25-3-4 )---neural networks with relatively simple structure have shown great gains in both dependency parsing ( cite-p-25-1-7 ) and machine translation ( cite-p-25-1-10 )
1
we use a fourgram language model with modified kneser-ney smoothing as implemented in the srilm toolkit---coreference resolution is a set partitioning problem in which each resulting partition refers to an entity
0
we use the logistic regression classifier as implemented in the skll package , which is based on scikitlearn , with f1 optimization---we implemented the algorithms in python using the stochastic gradient descent method for nmf from the scikit-learn package
1
we trained a specific language model using srilm from each of these corpora in order to estimate n-gram log-probabilities---we trained a 5-gram sri language model using the corpus supplied for this purpose by the shared task organizers
1
for both languages , we used the srilm toolkit to train a 5-gram language model using all monolingual data provided---based on a real life blog data set collected from a large number of blog hosting sites show that the two new techniques enable classification algorithms to significantly improve the accuracy of the current state-of-the-art techniques
0
for the tree-based system , we applied a 4-gram language model with kneserney smoothing using srilm toolkit trained on the whole monolingual corpus---word alignment is a critical first step for building statistical machine translation systems
0
this paper introduces an unsupervised vector approach to disambiguate words in biomedical text that can be applied to all-word disambiguation---in this paper , we introduce an unsupervised vector approach to disambiguate words in biomedical text
1
we use pegasos algorithm , an instance of the stochastic gradient descent , to optimize the new objective---in this work , we employ the toolkit word2vec to pre-train the word embedding for the source and target languages
0
we propose to use the researchcyc knowledge base as a source of semantic information about nominals---in this paper , i have demonstrated how to build an entailment system from mrs graph alignment , combined with heuristic β€œ robust ”
0
a particular generative model , which is well suited for the modeling of text , is called latent dirichlet allocation---metonymy is typically defined as a figure of speech in which a speaker uses one entity to refer to another that is related to it ( cite-p-10-1-3 )
0
in this paper , we propose a set of efficient and scalable neural shortlisting-reranking models for large-scale domain classification in ipdas---in this paper addresses both of these limitations with a scalable and efficient two-step shortlisting-reranking approach , which has a neural ranking model
1
pang and lee propose a graph-based method which finds minimum cuts in a document graph to classify the sentences into subjective or objective---pang and lee use a graph-based technique to identify and analyze only subjective parts of texts
1
we achieve this by using the recently proposed domain adversarial training methods of neural networks---in this paper , we address the problem of finding the most probable target language representation of a given source language
0
within such an architecture each level reached during the analysis computes its meaningfulness value ; this result is then handled according to modalities that are peculiar to that level---in our experiments , we use 300-dimension word vectors pre-trained by glove
0
the irstlm toolkit was used to build the 5-gram language model---word segmentation is a fundamental task for chinese language processing
0
furthermore , we train a 5-gram language model using the sri language toolkit---persian is a language with about 110 million speakers all over the world ( cite-p-12-3-10 ) , yet in terms of the availability of teaching materials and annotated data for text processing , it is undoubtedly a low-resourced language
0
part-of-speech ( pos ) tagging is a fundamental natural-language-processing problem , and pos tags are used as input to many important applications---part-of-speech ( pos ) tagging is a job to assign a proper pos tag to each linguistic unit such as word for a given sentence
1
lluís et al use a joint arcfactored model that predicts full syntactic paths along with predicate-argument structures via dual decomposition---sentiment analysis is a research area where does a computational analysis of people ’ s feelings or beliefs expressed in texts such as emotions , opinions , attitudes , appraisals , etc . ( cite-p-12-1-3 )
0
in contrast , we present a corpus-driven framework using which a user-adaptive reg policy can be learned using rl from a small corpus of non-adaptive human-machine interaction---as word vectors the authors use word2vec embeddings trained with the skip-gram model
0
we used the moses machine translation decoder , using the default features and decoding settings---chambers and jurafsky proposed a narrative chain model based on scripts
0
in the case of the trigram model , we expand the lattice with the aid of the srilm toolkit---for the language model , we used srilm with modified kneser-ney smoothing
1
we used a phrase-based smt model as implemented in the moses toolkit---social media is a valuable source for studying health-related behaviors ( cite-p-11-1-8 )
0
our goal in this paper is to study conversational features that lead to egregious conversations---in this paper , we outline an approach to detecting such egregious conversations , using behavioral cues
1
the state-of-the-art baseline is a standard phrase-based smt system tuned with mert---to train our model we use markov chain monte carlo sampling
0
we use three common evaluation metrics including bleu , me-teor , and ter---we use case-insensitive bleu as evaluation metric
1
shallow semantic representations , bearing a more compact information , could prevent the sparseness of deep structural approaches and the weakness of bow models---shallow semantic representations can prevent the sparseness of deep structural approaches and the weakness of cosine similarity based models
1
we initialize all setups with the 300-dimensional word embeddings provided by mikolov et al , which were trained on the common crawl corpus---mei et al proposed an encoder-aligner-decoder framework for generating weather broadcast
0
our approach explicitly determines the words which are equally significant with a consistent polarity across source and target domains---coreference resolution is the task of automatically grouping references to the same real-world entity in a document into a set
0
nowadays , only a few techniques exist for inferring finite-state transducers---however , only a few techniques to learn finite-state transducers for machine translation purposes can be found
1
the sentiment analysis is a field of study that investigates feelings present in texts---sentiment analysis is the process of identifying and extracting subjective information using natural language processing ( nlp )
1
semantic role labeling ( srl ) is the process of extracting simple event structures , i.e. , “ who ” did “ what ” to “ whom ” , “ when ” and “ where ”---semantic role labeling ( srl ) is a task of analyzing predicate-argument structures in texts
1
princeton wordnet is an english lexical database that groups nouns , verbs , adjectives and adverbs into sets of cognitive synonyms , which are named as synsets---princeton wordnet 1 is an english lexical database that groups nouns , verbs , adjectives and adverbs into sets of cognitive synonyms , which are named as synsets
1
relation extraction is the task of detecting and classifying relationships between two entities from text---relation extraction is the task of finding semantic relations between entities from text
1
in this paper , we presented a system for identifying opinion subgroups in arabic online discussions---in this paper is to use natural language processing techniques to detect opinion subgroups in arabic discussions
1
the language models were created using the srilm toolkit on the standard training sections of the ccgbank , with sentenceinitial words uncapitalized---in this study , we attempt to automatically generate a related work section
0
in our previous work , we established the predictiveness of several interaction parameters derived from discourse structure---in , we demonstrate the predictiveness of several discourse structurebased parameters
1
semantic role labeling ( srl ) is a kind of shallow sentence-level semantic analysis and is becoming a hot task in natural language processing---semantic role labeling ( srl ) is the task of labeling the predicate-argument structures of sentences with semantic frames and their roles ( cite-p-18-1-2 , cite-p-18-1-19 )
1
however , combining our approach with other methods results in an ensemble that performs the best on most datasets---to this end , we have developed an ensemble approach that performs better than the baseline models
1
framing is a phenomenon largely studied and debated in the social sciences , where , for example , researchers explore how news media shape debate around policy issues by deciding what aspects of an issue to emphasize , and what to exclude---framing is a political strategy in which politicians carefully word their statements in order to control public perception of issues
1
given that fast practical bmm algorithms are unlikely to exist , we have established a limitation on practical cfg parsing---relation extraction is the task of finding semantic relations between entities from text
0
for automatic evaluations , we use bleu and meteor to evaluate the generated comments with ground-truth outputs---we use the automatic mt evaluation metrics bleu , meteor , and ter , to evaluate the absolute translation quality obtained
1
this work provides the essential foundations for modular construction of signatures in typed unification grammars---work provides the essential foundations for modular construction of ( typed ) unification grammars
1
callison-burch et al used pivot languages for paraphrase extraction to handle the unseen phrases for phrase-based smt---callison-burch et al propose the use of paraphrases as a means of dealing with unseen source phrases
1
semantic similarity is a measure that specifies the similarity of one text ’ s meaning to another ’ s---at the same time , even our baseline models perform on par with or better than the brown models , so it is likely that other factors not accounted for are also affecting the results reported in øvrelid and skjaerholt
0
the conll dataset is taken form the wall street journal portion of the penn treebank corpus---the data comes from the conll 2000 shared task , which consists of sentences from the penn treebank wall street journal corpus
1
it combines various techniques developed for sequence comparison with an appropriate scoring scheme for computing phonetic similarity on the basis of multivalued features---that combines a number of techniques developed for sequence comparison with a scoring scheme for computing phonetic similarity on the basis of multivalued features
1
honnibal et al use a non-monotonic parser that allows actions that are inconsistent with previous actions---honnibal et al allow the parser to correct prior misclassifications between the shift and right-arc actions
1
user adaptation to the system ’ s lexical and syntactic choices can be particularly useful in flexible input dialog systems---in deployed dialog systems with real users , as in laboratory experiments , users adapt to the system ’ s lexical and syntactic choices
1
in this paper , we focus on one of the key subtasks -- answer sentence selection---in this paper , we present an experimental study on solving the answer selection problem
1