| text (string, length 82–736) | label (int64, 0 or 1) |
|---|---|
riloff and wiebe extracted subjective expressions from sentences using a bootstrapping pattern learning process---riloff and wiebe use a bootstrapping algorithm to perform a sentence-based opinion classification on the mpqa corpus | 1 |
for example , when attested in the intransitive frame , the subject of an object-drop verb is an agent , whereas the subject of an unaccusative verb is a theme---in the intransitive frame , the subject of an object-drop verb is an agent , whereas the subject of an unaccusative verb is a theme | 1 |
word embeddings are initialized from glove 100-dimensional pre-trained embeddings---the model parameters in word embedding are pretrained using glove | 1 |
zelenko et al developed a kernel over parse trees for relation extraction---zelenko et al used the kernel methods for extracting relations from text | 1 |
we formulate the system as an rnn encoder-decoder---regarding word embeddings , we use the ones trained by baziotis et al using word2vec and 550 million tweets | 0 |
all the feature weights and the weight for each probability factor are tuned on the development set with minimumerror-rate training---the optimisation of the feature weights of the model is done with minimum error rate training against the bleu evaluation metric | 1 |
we use hsmq-learning for learning a hierarchy of generation policies---for learning a generation policy , we use hierarchical q-learning | 1 |
crfs have been shown to perform well in a number of natural language processing applications , such as pos tagging , shallow parsing or np chunking , and named entity recognition---for language modeling , we computed 5-gram models using irstlm 7 and queried the model with kenlm | 0 |
experimental results on real-world datasets show that our model achieves significant and consistent improvements on relation extraction as compared with baselines---to classify the nps according to their type in biomedical terms , we have adopted the sequence ontology 2 | 0 |
shen et al proposed a target dependency language model for smt to employ target-side structured information---to overcome this problem , shen et al proposed a dependency language model to exploit longdistance word relations for smt | 1 |
the attention model boosts performance for various tasks---this system is based on the attention-based nmt | 1 |
we train our model using the europarl v7 multilingual corpora , in particular the english-german corpus---we use the europarl parallel corpus as the basis for our small-scale cross-lingual experiments | 1 |
training is done using stochastic gradient descent over mini-batches with the adadelta update rule---parameter optimisation is done by mini-batch stochastic gradient descent where back-propagation is performed using adadelta update rule | 1 |
the target fourgram language model was built with the english part of training data using the sri language modeling toolkit---to encode the original sentences we used word2vec embeddings pre-trained on google news | 0 |
to measure translation accuracy , we use the automatic evaluation measures of bleu and ribes measured over all sentences in the test corpus---to evaluate the evidence span identification , we calculate f-measure on words , and bleu and rouge | 1 |
we use approximate randomization for significance testing---we use randomization test to calculate statistical significance | 1 |
we implemented linear models with the scikit learn package---these models were implemented using the package scikit-learn | 1 |
in general , a combination of word embeddings and a convolutional neural network performs well for sentence classification tasks---due to the success of word embeddings in word similarity judgment tasks , this work also makes use of global vector word embeddings | 1 |
in section 5 , we describe how we extend this approach to allow for structural insertion and deletion , without the need for content word anchors---in section 5 , we describe how we extend this approach to allow for structural insertion and deletion , without the need for content | 1 |
constituent and dependency parses are obtained by stanford parser---constituent and dependency parses are obtained by the stanford parser | 1 |
related experimental analyses validate that our training approach can improve the robustness of nmt models---translation tasks show that our approaches can not only achieve significant improvements over strong nmt systems | 1 |
while simple and principled , our model achieves performance competitive with a state-of-the-art ensemble system combining latent semantic representations and surface similarity---a 5-gram language model was created with the sri language modeling toolkit and trained using the gigaword corpus and english sentences from the parallel data | 0 |
the first is the compression corpus of knight and marcu derived automatically from document-abstract pairs of the ziff-davis corpus---the first is the compression corpus of knight and marcu derived automatically from the document-abstract pairs of the ziff-davis corpus | 1 |
all language models were trained using the srilm toolkit---the target-side language models were estimated using the srilm toolkit | 1 |
training on 519k sentence pairs in 0.03 seconds per sentence , we achieve significantly improvement over the traditional pipeline by 0.84 b leu---in 0 . 03 second per sentence , our system achieves significant improvement by 0 . 84 b leu over the baseline system | 1 |
for nmt , we applied byte pair encoding to split word into subword segments for both source and target languages---we use byte pair encoding with 45k merge operations to split words into subwords | 1 |
takamura et al proposed a method based on the spin models in physics for extracting semantic orientations of words---takamura et al proposed using spin models for extracting semantic orientation of words | 1 |
we used srilm to build a 4-gram language model with interpolated kneser-ney discounting---for all experiments , we used a 4-gram language model with modified kneser-ney smoothing which was trained with the srilm toolkit | 1 |
multiword expressions are word combinations which have idiosyncratic properties relative to their component words , such as taken aback or red tape---multiword expressions are lexical items that can be decomposed into single words and display idiosyncratic features | 1 |
we competed in subtask 1 and 2 , which consist , respectively , in identifying all the key phrases in scientific publications and label them---we competed in subtask 1 and 2 which consist respectively in identifying all the key phrases in scientific publications and label them | 1 |
the first and most effective method is to simply use an objective measure of translation quality , such as bleu---by utilizing the sub-labels , we gain significant improvement in model accuracy | 0 |
for the fluency and grammaticality features , we train 4-gram lms using the development dataset with the sri toolkit---we trained a 4-gram language model with kneser-ney smoothing and unigram caching using the sri-lm toolkit | 1 |
chen et al proposed a gated recursive neural network to incorporate context information---chen et al proposed gated recursive neural networks to model complicated combinations of characters | 1 |
for monolingual treebank data we relied on the conll-x and conll-2007 shared tasks on dependency parsing---we used the srilm toolkit to create 5-gram language models with interpolated modified kneser-ney discounting | 0 |
we begin by discussing related work in section 2---in this section , we briskly cover related work | 1 |
we used mecab as a morphological analyzer and cabocha 14 as the dependency parser to find the boundaries of the bunsetsu---we used the japanese data to extract the noun-verb collocation candidates using a dependency parser , cabocha | 1 |
the language models were 5-gram models with kneser-ney smoothing built using kenlm---in addition , a 5-gram lm with kneser-ney smoothing and interpolation was built using the srilm toolkit | 1 |
we propose to combine the matrix sketching algorithm with random hashing to completely remove limitations on data sizes---in this work , we apply the recently proposed matrix sketching algorithm to entirely obviate the problem with scalability | 1 |
this system is a basic encoderdecoder with an attention mechanism---it is based on the encoder-decoder with attention | 1 |
lin and hovy built the neats multi-document summarization system using term frequency , sentence position , stigma words and simplified maximal marginal relecvance---lin and hovy used term frequency , sentence position , stigma words and simplified maximal marginal relevance to build the neats multi-document summarization system | 1 |
the translation model of the smt system uses ibm4 word alignments with grow-diag-final-and phrase extraction heuristics---the translation model of each smt system uses ibm4 word alignments with grow-diag-finaland phrase extraction heuristics | 1 |
we also propose a fast decoding algorithm to speed up the joint search---since depechemood is aligned with ewn , is publicly available and has a better coverage and claimed performance compared to existing emotion lexicons , we decide to expand it using ewn semantic relations as described below | 0 |
ding and palmer introduce the notion of a synchronous dependency insertion grammar as a tree substitution grammar defined on dependency trees---ding and palmer propose a syntax-based translation model based on a probabilistic synchronous dependency insert grammar , a version of synchronous grammars defined on dependency trees | 1 |
ccg is a linguistic formalism that tightly couples syntax and semantic---ccg is a linguistically motivated categorial formalism for modeling a wide range of language phenomena | 1 |
for english , we used the pre-trained word2vec by on google news---we use word2vec from as the pretrained word embeddings | 1 |
framenet ( cite-p-22-1-8 ) is a rich linguistic resource containing considerable information about lexical and predicate-argument semantics in english---framenet ( cite-p-22-1-0 ) is a lexical database that describes english words using frame semantics ( cite-p-22-1-3 ) | 1 |
elkiss et al , perform a large-scale study on citations in the free pubmed central and show that they contain information that may not be present in abstracts---elkiss et al carried out a largescale study and confirmed that citation summaries contain extra information that does not appear in paper abstracts | 1 |
we show results for combining the models for the two aforementioned subtasks into the overall task of social network extraction---experimental results showed that the proposed methods could effectively recognize the defined discourse relations and achieve significant improvement in sentence-level polarity classification | 0 |
we obtain word clusters from word2vec k-means word clustering tool---event-similarity tasks are encouraging , indicating that our approach can outperform traditional vector-space model , and is suitable for distinguishing between topically very similar events | 0 |
relation extraction is the problem of populating a target relation ( representing an entity-level relationship or attribute ) with facts extracted from natural-language text---relation extraction is a challenging task in natural language processing | 1 |
the n-gram language models are trained using the srilm toolkit or similar software developed at hut---each multi-bilstm internal implies a dropout layer to prevent over-fitting | 0 |
we used the iwslt-2005 japanese-english translation task for evaluating the proposed global phrase reordering model---we built a translation model on a corpus for iwslt 2005 chinese-to-english translation task , which consists of 40k pairs of sentences | 1 |
we use the maximum entropy model as a classifier---for our al framework we decided to employ a maximum entropy classifier | 1 |
we presented a sentence-level classification approach for mt system selection for diglossic languages---we study the use of sentence-level dialect identification in optimizing machine translation system selection | 1 |
xie et al explored content measures based on the lexical similarity between the response and a set of reference responses---xie et al and cheng et al assessed content using similarity scores between test responses and highly proficient sample responses , based on content vector analysis | 1 |
we use a set of 318 english function words from the scikit-learn package---the fw feature set consists of 318 english fws from the scikit-learn package | 1 |
if we can efficiently identify that two inference problems have the same solution , then we can reuse previously computed structures for newer examples , thus giving us a speedup---if we can efficiently characterize and identify inference instances that have the same solution , we can take advantage of previously performed computation | 1 |
we have participated in semeval-2017 task 4 on sentiment analysis in twitter , subtasks a ( message polarity classification ) , b ( topicbased message polarity classification ) ( cite-p-11-1-9 )---we have used for participating in subtasks a ( message polarity classification ) and b ( topicbased message polarity classification according to a two-point scale ) of semeval2017 task 4 sentiment analysis in twitter | 1 |
ding and palmer propose a syntax-based translation model based on a probabilistic synchronous dependency insert grammar , a version of synchronous grammars defined on dependency trees---translation tasks show that our approaches can not only achieve significant improvements over strong nmt systems | 0 |
we use word embeddings of dimension 100 pretrained using word2vec on the training dataset---in this research , we use the pre-trained google news dataset 2 by word2vec algorithms | 1 |
one stream of work focuses on learning a general representation for different domains based on the co-occurrences of domain-specific and domain-independent features---one line of work focuses on inducing a general lowdimensional cross-domain representation based on the co-occurrences of domain-specific and domainindependent features | 1 |
gigaword corpus we use the exact annotated gigaword corpus provided by rush et al---we use the annotated gigaword corpus provided by rush et al | 1 |
bilingual lexicons play an important role in many natural language processing tasks , such as machine translation and cross-language information retrieval---bilingual lexica provide word-level semantic equivalence information across languages , and prove to be valuable for a range of cross-lingual natural language processing tasks | 1 |
semantic role labeling ( srl ) is a major nlp task , providing a shallow sentence-level semantic analysis---the srilm toolkit was used to build the trigram mkn smoothed language model | 0 |
it can be used to search for semantically compatible candidate answers , thus greatly reducing the search space---it can be used to search for semantically compatible candidate an- swers in document passages , thus greatly reducing the search space | 1 |
recently , inversion transduction grammars , namely itg , have been used to constrain the search space for word alignment---in particular , it has been proven that inversion transduction grammar , which captures structural coherence between parallel sentences , helps in word alignment | 1 |
we first segment a chinese sentence into a word lattice , then process the lattice using a lattice-based pos tagger and a lattice-based parser---chinese sentence is first segmented into a word lattice , and then a lattice-based pos tagger and a lattice-based parser are used to process the lattice | 1 |
we used trigram language models with interpolated kneser-kney discounting trained using the sri language modeling toolkit---we built a 5-gram language model from it with the sri language modeling toolkit | 0 |
word sense disambiguation ( wsd ) is a task to identify the intended sense of a word based on its context---word sense disambiguation ( wsd ) is the task of identifying the correct meaning of a word in context | 1 |
to this end , cohen et al and cohen and smith investigated logistic normal priors , and headden iii et al used a backoff scheme---cohen et al and cohen and smith employed the logistic normal prior to model the correlations between grammar symbols | 1 |
on the test set , our best run achieves an f 1 of 76 % using the partial evaluation schema---we achieved f 1 measures ranging from 73 % to almost 76 % depending on the run | 1 |
we propose a new model to address this imbalance , based on a word-based markov model of translation which generates target translations leftto-right---we propose a new model to drop the independence assumption , by instead modelling correlations between translation decisions , which we use to induce translation | 1 |
text categorization is the classificationof documents with respect to a set of predefined categories---we show how a large body of affective stereotypes can be acquired from the web | 0 |
for word-level embedding e w , we utilize pre-trained , 300-dimensional embedding vectors from glove 6b---we use pre-trained glove vector for initialization of word embeddings | 1 |
zens and ney use a disk-based prefix tree , enabling efficient access to phrase tables much too large to fit in main memory---a 5-gram language model was built using srilm on the target side of the corresponding training corpus | 0 |
the framework of translation-model based retrieval has been introduced by berger and lafferty---this model was first proposed by berger and lafferty for monolingual document retrieval | 1 |
we use srilm for training a trigram language model on the english side of the training corpus---for language model , we use a trigram language model trained with the srilm toolkit on the english side of the training corpus | 1 |
to overcome these drawbacks , we abolish the syntactic information for the source side and develop a stringto-tree variant of ` mbots---to overcome the typically lower translation quality of tree-to-tree systems and minimal rules , we abolish the syntactic annotation on the source side and develop a stringto-tree variant | 1 |
table 2 presents the results from the automatic evaluation , in terms of bleu and nist scores , of 4 system setups---we use the stanford corenlp caseless tagger for part-of-speech tagging | 0 |
we propose recurrent memory network ( rmn ) , a novel rnn architecture that combines the strengths of both lstm and memory network ( cite-p-17-5-3 )---we use the scikit-learn toolkit as our underlying implementation | 0 |
in addition to that , weller et al describe methods for terminology extraction and bilingual term alignment from comparable corpora---weller et al describe methods for terminology extraction and bilingual term alignment from comparable corpora | 1 |
relation extraction is the task of recognizing and extracting relations between entities or concepts in texts---to train our model we use markov chain monte carlo sampling | 0 |
there is some recent work investigating features that directly indicate implicit sentiments---there is some work investigating features that directly indicate implicit sentiments | 1 |
we show that in this case the decipherment problem is equivalent to the quadratic assignment problem ( qap )---for the quadratic assignment problem can be directly used to solve the decipherment problem | 1 |
semantic parsing is the problem of translating human language into computer language , and therefore is at the heart of natural language understanding---semantic parsing is the task of converting natural language utterances into their complete formal meaning representations which are executable for some application | 1 |
crfs are undirected graphical models which define a conditional distribution over labellings given an observation---the parsing model used for intra-sentential parsing is a dynamic conditional random field shown in figure 7 | 0 |
we also used word2vec to generate dense word vectors for all word types in our learning corpus---we obtained these scores by training a word2vec model on the wiki corpus | 1 |
below we describe our approach in greater detail , provide experimental evidence of its value for performing inference in nell¡¯s knowledge base , and discuss implications of this work and directions for future research---we describe our approach in greater detail , provide experimental evidence of its value for performing inference in nell ¡¯ s knowledge base , and discuss implications of this work | 1 |
stance detection is the task of assigning stance labels to a piece of text with respect to a topic , i.e . whether a piece of text is in favour of “ abortion ” , neutral , or against---stance detection is the task of classifying the attitude previous work has assumed that either the target is mentioned in the text or that training data for every target is given | 1 |
crucially , this work has typically focused on specific kinds of mwes , and has not considered identification of the full spectrum of mwes---mwe identification has focused on methods that are applicable to the full spectrum of kinds of mwes | 1 |
islam and inkpen proposed a corpus-based sentence similarity measure as a function of string similarity , word similarity and common word order similarity---islam and inkpen , 2008 ) determined sentence similarity by combining string similarity , semantic similarity and common-word order similarity with normalization | 1 |
a 5-gram language model built using kenlm was used for decoding---the 5-gram target language model was trained using kenlm | 1 |
word sense disambiguation ( wsd ) is the task of assigning sense tags to ambiguous lexical items ( lis ) in a text---word sense disambiguation ( wsd ) is a key enabling-technology | 1 |
lu et al , 2009 , focuses on summarising short comments , each associated with an overall rating---a total of 42 systems were submitted from 21 distinct teams , and nine | 0 |
the embedded word vectors are trained over large collections of text using variants of neural networks---that makes it possible to also address the question of how these changes happened by uncovering the cognitive mechanisms and cultural processes that drive language evolution | 0 |
we trained a specific language model using srilm from each of these corpora in order to estimate n-gram log-probabilities---we trained kneser-ney discounted 5-gram language models on each available corpus using the srilm toolkit | 1 |
it can also be viewed as a way to build a class n-gram language model directly on strings , without any ¡°word¡± information a priori---and considered as a method to build a class n-gram language model directly from strings , while integrating character and word level information | 1 |
first , we interpolate language models trained on the target language and on the related language---first , we interpolate language models from in-domain and out-of-domain data | 1 |
semantic parsing is the task of converting a sentence into a representation of its meaning , usually in a logical form grounded in the symbols of some fixed ontology or relational database ( cite-p-21-3-3 , cite-p-21-3-4 , cite-p-21-1-11 )---semantic parsing is the task of transducing natural language ( nl ) utterances into formal meaning representations ( mrs ) , commonly represented as tree structures | 1 |
distinct approaches , such as tl or distant supervision have been particularly explored to overcome this limit---to overcome this issue , recent work has concentrated on distant supervision and multiple instance learning | 1 |
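Each row above pairs two sentences with `---` and ends with a `| <label> |` column. A minimal parsing sketch under that assumption (the row format is inferred from the preview, not from any official loader):

```python
def parse_row(row: str):
    """Split one preview row into (text_a, text_b, label).

    Assumes the row joins a sentence pair with '---' and ends
    with '| <label> |', as in the preview table above.
    """
    body, label, _ = row.rsplit("|", 2)   # strip the trailing '| 1 |' column
    text_a, text_b = body.split("---", 1) # split the sentence pair
    return text_a.strip(), text_b.strip(), int(label.strip())

example = ("all language models were trained using the srilm toolkit---"
           "the target-side language models were estimated using the srilm toolkit | 1 |")
print(parse_row(example))
```

The `rsplit` with a limit of 2 keeps any `|` characters that might appear inside the sentences intact, splitting only on the final label column.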