| text (string, 82 to 736 chars) | label (int64: 0 or 1) |
|---|---|
we use the glove vectors of 300 dimension to represent the input words---the pre-trained word vectors with the dimension of 100 released by pennington et al are used | 1 |
experiments show that the proposed methods significantly outperform the standard vaes and can discover meaningful latent actions from these datasets---experiments show the proposed methods outperform strong baselines in learning discrete latent variables | 1 |
parameters were tuned using minimum error rate training---the parameter weights are optimized with minimum error rate training | 1 |
despite being almost entirely unsupervised , our model yields the best reported endto-end results on a range of standard coreference data sets---at the level of abstract entity types , our model is able to substantially reduce semantic compatibility errors , resulting in the best results to date on the complete endto-end coreference task | 1 |
relation extraction is a core task in information extraction and natural language understanding---we use a pbsmt model built with the moses smt toolkit | 0 |
as a part of our research , we had collected 12,000 news reports from five different international news sources over a period of ten years , to study systematic differences in news coverage on the rise of china , between western and chinese media---as a part of our research , we had collected 12 , 000 news reports from five different international news sources over a period of ten years , to study systematic differences in news coverage | 1 |
we start with 300 dimension glove representations trained on the 840 billion word common crawl---we use glove 300-dimension embedding vectors pre-trained on 840 billion tokens of web data | 1 |
we propose an adaptive ensemble method to adapt coreference resolution across domains---we use 300-dimensional word embeddings from glove to initialize the model | 0 |
we follow the standard machine translation procedure of evaluation , measuring bleu for every system---since bleu is the main ranking index for all submitted systems , we apply bleu as the evaluation matrix for our translation system | 1 |
such bilingual word-based n-gram models were initially described in---such bilingual word-based n-gram models were initially described in and extended in | 1 |
we used the moses toolkit to build an english-hindi statistical machine translation system---we used moses , a phrase-based smt toolkit , for training the translation model | 1 |
the language model was a kneser-ney interpolated trigram model generated using the srilm toolkit---we use srilm toolkit to build a 5-gram language model with modified kneser-ney smoothing | 1 |
in this paper , the unitor system participating in the semeval-2013 sentiment analysis in twitter task is presented---clustering is a popular technique for unsupervised text analysis , often used in industrial settings to explore the content of large amounts of sentences | 0 |
we use the stanford dependency parser with the collapsed representation so that preposition nodes become edges---we apply the rules to each sentence with its dependency tree structure acquired from the stanford parser | 1 |
word sense disambiguation ( wsd ) is the nlp task that consists in selecting the correct sense of a polysemous word in a given context---word sense disambiguation ( wsd ) is a particular problem of computational linguistics which consists in determining the correct sense for a given ambiguous word | 1 |
for this informal presentation , and occasionally elsewhere , we shall mark a trigger symbol a by overlining it , thus : a---we use glove vectors with 100 dimensions trained on wikipedia and gigaword as word embeddings , which we do not optimize during training | 0 |
for english , we use the stanford parser for both pos tagging and cfg parsing---we use the berkeley probabilistic parser to obtain syntactic trees for english and its adapted version for french | 1 |
sentence compression is a standard nlp task where the goal is to generate a shorter paraphrase of a sentence---sentence compression is a complex paraphrasing task with information loss involving substitution , deletion , insertion , and reordering operations | 1 |
adjuncts are defined to be optional arguments appearing with a wide variety of verbs and frames---adjuncts are optional arguments which , like adverbs , modify the meaning of the described event | 1 |
we train 300 dimensional word embedding using word2vec on all the training data , and fine-tuning during the training process---for example , using a parallel english-german corpus , versley attempted to disambiguate german connectives via projection | 0 |
we use the skip-gram model , trained to predict context tags for each word---we use the skipgram model to learn word embeddings | 1 |
we used the stanford parser to generate dependency trees of sentences---we used a caseless parsing model of the stanford parser for a dependency representation of the messages | 1 |
an amr is a graph with nodes representing the concepts of the sentence and edges representing the semantic relations between them---amr is a formalism of sentence semantic structure by directed , acyclic , and rooted graphs , in which semantic relations such as predicate-argument relations and noun-noun relations are expressed | 1 |
in this paper , we study active learning with resampling methods addressing the class imbalance problem for wsd---in this paper , we analyze the effect of resampling techniques , including undersampling and oversampling used in active learning | 1 |
word sense disambiguation ( wsd ) is a particular problem of computational linguistics which consists in determining the correct sense for a given ambiguous word---word sense disambiguation ( wsd ) is the problem of assigning a sense to an ambiguous word , using its context | 1 |
in order to do so , we use the moses statistical machine translation toolkit---as a baseline system for our experiments we use the syntax-based component of the moses toolkit | 1 |
this is the first work , to our knowledge , to deliver a parallel gpu implementation of the fst composition algorithm---in this paper , we introduce the first ( to our knowledge ) gpu implementation of the fst composition operation , and | 1 |
we train a 4-gram language model on the xinhua portion of english gigaword corpus by srilm toolkit---we train a 4-gram language model on the xinhua portion of the gigaword corpus using the sri language toolkit with modified kneser-ney smoothing | 1 |
we use pre-trained glove vector for initialization of word embeddings---we use pre-trained vectors from glove for word-level embeddings | 1 |
we also use a 4-gram language model trained using srilm with kneser-ney smoothing---verbnet is a very large lexicon of verbs in english that extends levin with explicitly stated syntactic and semantic information | 0 |
a shallow or partial parser , in the sense of is also implemented and always activated before the complete parse takes place , in order to produce the default baseline output to be used by further computation in case of total failure---the fasttext pre-trained vectors are used for word embedding with an embedding size of 300 | 0 |
wordnet is a manually created lexical database that organizes a large number of english words into sets of synonyms ( i.e . synsets ) and records conceptual relations ( e.g. , hypernym , part of ) among them---the nodes are concepts ( or synsets as they are called in the wordnet ) | 1 |
this paper is , to the best of our knowledge , the first work to address the problem of sst for twitter---we have provided , to the best of our knowledge , the first supersense tagger for twitter | 1 |
part-of-speech tagging is the assignment of syntactic categories ( tags ) to words that occur in the processed text---in this paper , we describe a different approach to the problem of dependency grammar | 0 |
the german text was further preprocessed by splitting german compound words using the frequency-based method described in---in order to reduce the source vocabulary size translation , the german text was preprocessed by splitting german compound words with the frequency-based method described in | 1 |
we use word embedding pre-trained on newswire with 300 dimensions from word2vec---we perform pre-training using the skip-gram nn architecture available in the word2vec 13 tool | 1 |
sememes are minimum semantic units of word meanings , and the meaning of each word sense is typically composed by several sememes---sememes are defined as minimum semantic units of word meanings , and there exists a limited close set of sememes to compose the semantic | 1 |
we perform minimum-error-rate training to tune the feature weights of the translation model to maximize the bleu score on development set---we set all feature weights by optimizing bleu directly using minimum error rate training on the tuning part of the development set | 1 |
the baseline further contains a hierarchical reordering model and a 7-gram word class language model---the system includes moses baseline feature functions , plus eight hierarchical lexicalized reordering model feature functions | 1 |
in our experiments of unsupervised dependency grammar learning , we show that unambiguity regularization is beneficial to learning , and in combination with annealing ( of the regularization strength ) and sparsity priors it leads to improvement over the current state of the art---in our experiments of unsupervised dependency grammar learning , we show that unambiguity regularization is beneficial to learning , and in combination with annealing ( of the regularization strength ) and sparsity priors | 1 |
to encode the original sentences we used word2vec embeddings pre-trained on google news---we train a trigram language model with modified kneser-ney smoothing from the training dataset using the srilm toolkit , and use the same language model for all three systems | 0 |
in this run , we use a sentence vector derived from word embeddings obtained from word2vec---then uses the word2vec model to find the vector representation of each word | 1 |
each entity has its embedding , and the embeddings are updated according to the result of both of these analyses dynamically---when the result of both analyses refers to an entity , the entity embedding is updated | 1 |
information extraction ( ie ) is the task of extracting information from natural language texts to fill a database record following a structure called a template---the penn discourse treebank is the largest corpus richly annotated with explicit and implicit discourse relations and their senses | 0 |
these models tend to generate safe , commonplace responses ( e.g. , i don't know ) regardless of the input---responses tend to generate safe , commonplace responses ( e . g . , i don't know ) regardless of the input | 1 |
we use the penn treebank corpus with the standard section splits for training , development and testing---we have used penn tree bank parsing data with the standard split for training , development , and test | 1 |
inspired by this idea , we introduce in this paper a deep learning approach for discourse parsing---instead , we compute the relatedness of two words based on their distributed representations , which are learned using the word2vec toolkit | 0 |
the semeval semantic textual similarity tasks are a popular evaluation venue for the sts problem---predicting semantic textual similarity has been a recurring task in semeval challenges | 1 |
experiments show that features derived from semantic frame parsing have significantly better performance across years on the polarity task---on the polarity task , the semantic frame features encoded as trees perform significantly better across years and sectors than bag-of-words vectors ( bow ) , and outperform bow vectors | 1 |
the similarity used for clustering is based on a divergence-like distance between two language models that was originally proposed by juang and rabiner---the distance used for clustering is based on a divergence-like distance between two language models that was originally proposed by juang and rabiner | 1 |
the mmrreranker module is based on the maximal margin relevance criterion---mmr is an implementation of maximal marginal relevance | 1 |
furthermore , we train a 5-gram language model using the sri language toolkit---this means in practice that the language model was trained using the srilm toolkit | 1 |
first , the linguistic units of student inputs range from single words to multiple sentences---for this language model , we built a trigram language model with kneser-ney smoothing using srilm from the same automatically segmented corpus | 0 |
semantic frames are a rich linguistic resource---semantic frames can address these issues | 1 |
relation extraction ( re ) has been defined as the task of identifying a given set of semantic binary relations in text---relation extraction is a key step towards question answering systems by which vital structured data is acquired from underlying free text resources | 1 |
this paper proposes a novel embedding method to separately model “clean” and “noisy” mentions , and incorporates the given type hierarchy to induce loss functions---while we are the first to exploit commonsense knowledge in user characterization | 0 |
li et al manually built a review dataset from their crawled reviews , and exploited semi-supervised co-training algorithm to identify deceptive reviews---we use the long short-term memory architecture for recurrent layers | 0 |
these character-based representations are then fed into a two-layer bidirectional long shortterm memory recurrent neural network---the underlying model used is a long shortterm memory recurrent neural network in a bidirectional configuration | 1 |
situated question answering is often formulated in terms of parsing both the question and environment into a common meaning representation where they can be combined to select the answer---situated question answering can be formulated as semantic parsing with an execution model that is a learned function of the environment | 1 |
the decoder uses a cky-style parsing algorithm to integrate the language model scores---the decoder uses cky-style parsing with cube pruning to integrate the language model | 1 |
empirical experiments on a movie dataset demonstrated the effectiveness of our proposed method with respect to several competitive baselines---the log-linear parameter weights are tuned with mert on a development set to produce the baseline system | 0 |
in previous work , a corpus of sentences from the wall street journal treebank corpus was manually annotated with subjectivity classifications by multiple judges---in previous work , a corpus of sentences from the wall street journal treebank corpus was manually annotated with subjectivity classifications by multiple judges | 1 |
conditional random fields is a statistical method based on undirected graphical models---for input representation , we used glove word embeddings | 0 |
over the recent years , distributional and distributed representations of words have become a critical component of many nlp systems---word embeddings capture syntactic and semantic properties of words , and are a key component of many modern nlp models | 1 |
the irstlm toolkit is used to build ngram language models with modified kneser-ney smoothing---as a baseline system for our experiments we use the syntax-based component of the moses toolkit | 0 |
in addition , a 5-gram lm with kneser-ney smoothing and interpolation was built using the srilm toolkit---a 4-gram language model was trained on the target side of the parallel data using the srilm toolkit from stolcke | 1 |
as we introduced in the first section , we represent the knowledge math-w-7-1-0-12 as a set of examples of a binary relation math-w-7-1-0-22 associating a nl utterance to a fl command---relation extraction is the task of predicting attributes and relations for entities in a sentence ( zelenko et al. , 2003 ; bunescu and mooney , 2005 ; guodong et al. , 2005 ) | 0 |
central to the approach is a novel formulation of open ie as a sequence tagging problem , addressing challenges such as encoding multiple extractions for a predicate---central to our approach is the construction of high-accuracy , high-coverage multilingual wikipedia entity type mappings | 1 |
a pun is the exploitation of the various meanings of a word or words with phonetic similarity but different meanings---a pun is a form of wordplay in which a word suggests two or more meanings by exploiting polysemy , homonymy , or phonological similarity to another word , for an intended humorous or rhetorical effect ( cite-p-15-3-1 ) | 1 |
within this subpart of our ensemble model , we used a svm model from the scikit-learn library---in all cases , we used the implementations from the scikitlearn machine learning library | 1 |
in this example , a snippet of a longer sentence pair is shown with ner and word alignment results---in this example , a snippet of a longer sentence pair is shown with ner and word alignment | 1 |
we implemented linear models with the scikit learn package---we use a random forest classifier , as implemented in scikit-learn | 1 |
the danish dependency treebank comprises 100k words of text selected from the danish parole corpus , with annotation of primary and secondary dependencies based on discontinuous grammar---ddt comprises 100k words of text selected from the danish parole corpus , with annotation of primary and secondary dependencies based on discontinuous grammar | 1 |
we use the pre-trained 300-dimensional word2vec embeddings trained on google news 1 as input features---we train skip-gram word embeddings with the word2vec toolkit 1 on a large amount of twitter text data | 1 |
in this paper , we are interested in uncertainty sampling for pool-based active learning , in which an unlabeled example x with maximum uncertainty is selected to augment the training data at each learning cycle---in this work , we are interested in uncertainty sampling for pool-based active learning , in which an unlabeled example x with maximum uncertainty is selected for human annotation at each learning cycle | 1 |
further , we apply a 4-gram language model trained with the srilm toolkit on the target side of the training corpus---robust processing capabilities of the parser have also been shown to be able to provide a small but significant increase in the accuracy of a speech recognizer | 0 |
dreyer and eisner proposed a log-linear model to identify paradigms---dreyer and eisner propose an infinite dirichlet mixture model for capturing paradigms | 1 |
a systematic study to tap the implicit functional information of ctb has been introduced by xue---xue introduced a systematic study to tap the implicit functional information of ctb | 1 |
nevertheless , studies have shown that a steady change in the linguistic nature of the symptoms and the degree in speech and writing are early and could be identified by using language technology analysis---we present a learning method for word embeddings specifically designed to be useful for relation classification | 0 |
although swsd is a promising tool , it suffers from the knowledge acquisition bottleneck---swsd is defined as a supervised task , and follows a targeted approach common in the wsd literature for performance reasons | 1 |
the system discussed in this paper performs both named entity identification and disambiguation---this paper presents a large-scale system for the recognition and semantic disambiguation of named entities | 1 |
we use an nmt-small model from the opennmt framework for the neural translation---we implement our lstm encoder-decoder model using the opennmt neural machine translation toolkit | 1 |
in mikolov et al , the authors are able to successfully learn word translations using linear transformations between the source and target word vector-spaces---in the case of bilingual word embedding , mikolov et al propose a method to learn a linear transformation from the source language to the target language for the task of lexicon extraction from bilingual corpora | 1 |
we also show that considering machine learning outcomes with and without the difficult cases , it is possible to identify specific weaknesses of the problem representation---annotation was conducted on a modified version of the brat web-based annotation tool | 0 |
poor initial policy can easily lead to bad user experience and consequently fail to attract sufficient real users for policy training---vector-based distributional semantic models of word meaning have gained increased attention in recent years | 0 |
we used the sri language modeling toolkit for this purpose---we used the sri language modeling toolkit with kneser-kney smoothing | 1 |
our method is based on constraining a shift-reduce parser using the arc-eager strategy---we propose a transition-based parser for spinal parsing , based on the arc-eager strategy | 1 |
we used minimum error rate training to optimize the feature weights---stance detection is the task of estimating whether the attitude expressed in a text towards a given topic is ‘ in favour ’ , ‘ against ’ , or ‘ neutral ’ | 0 |
we use phrase-based and hierarchical mt systems as implemented by koehn et al for our experiments---we used a phrase-based smt model as implemented in the moses toolkit | 1 |
lindberg et al introduced a sophisticated template based system which merges semantic role labels into a system that automatically generates natural language questions to support online learning---lindberg et al employed a template-based approach while taking advantage of semantic information to generate natural language questions for on-line learning support | 1 |
in this paper , we investigate matching a response with its multi-turn context using dependency information based entirely on attention---we propose a new matching model for multi-turn response selection with self-attention and cross-attention | 1 |
mikolov et al further proposed continuous bagof-words and skip-gram models , which use a simple single-layer architecture based on inner product between two word vectors---we train the twitter sentiment classifier on the benchmark dataset in semeval 2013 | 0 |
we used svm-light-tk , which enables the use of the partial tree kernel---the pre-processed monolingual sentences will be used by srilm or berkeleylm to train a n-gram language model | 0 |
for pretraining , restricted boltzmann machine , auto-encoding and sparse coding are proposed and popularly used---for pre-training , restricted boltzmann machine , auto-encoding and sparse coding are most frequently used | 1 |
coreference resolution is the task of grouping mentions to entities---coreference resolution is the task of determining which mentions in a text are used to refer to the same real-world entity | 1 |
we present a general graph representation for automatically deriving these features from labeled data---in this work , we propose a general graph representation for automatically extracting structured features from tokens and prior annotations | 1 |
word alignment is a key component of most endto-end statistical machine translation systems---word alignment is a crucial early step in the training of most statistical machine translation ( smt ) systems , in which the estimated alignments are used for constraining the set of candidates in phrase/grammar extraction ( cite-p-9-3-5 , cite-p-9-1-4 , cite-p-9-3-0 ) | 1 |
stance detection is the task of assigning stance labels to a piece of text with respect to a topic , i.e . whether a piece of text is in favour of “ abortion ” , neutral , or against---stance detection is the task of automatically determining whether the authors of a text are against or in favour of a given target | 1 |
like recent work , we use the lstm variant of recurrent neural networks as language modeling architecture---we use lstm units as Φ for our implementation based on its recent success in language processing tasks | 1 |
this approach was first suggested in , where parameterized heuristic rules are combined with a genetic algorithm into a system for keyphrase extraction that automatically identifies keywords in a document---previous approaches have used a hand-crafted finite set of features to represent the parse history | 0 |