| text (string, length 82-736) | label (int64, 0 or 1) |
|---|---|
we obtained a phrase table out of this data using the moses toolkit---we used the moses toolkit to build mt systems using various alignments | 1 |
our preliminary experiments show that both methods can improve smt performance without using any additional data---in this work , we handle the medical concept normalisation | 0 |
in section 3 , we propose a new criterion for lm pruning based on n-gram distribution , and discuss in detail how to estimate the distribution---we improved over the baselines ; in some cases we obtained greater than 30 % improvement for mean rouge scores over the best performing baseline | 0 |
a pcfg is proper if math-w-3-1-3-40 for each math-w-3-1-3-55---for any pcfg math-w-7-1-0-40 , there are equivalent ppdts | 1 |
we train a trigram language model with modified kneser-ney smoothing from the training dataset using the srilm toolkit , and use the same language model for all three systems---we train a 5-gram language model with the xinhua portion of english gigaword corpus and the english side of the training set using the srilm toolkit | 1 |
we pre-processed the data to add part-of-speech tags and dependencies between words using the stanford parser---we parsed all source side sentences using the stanford dependency parser and trained the preordering system on the entire bitext | 1 |
the language model is trained on the target side of the parallel training corpus using srilm---in this paper , we show that the weak generative capacity of this ' pure ' form of ccg is strictly smaller than that of ccg with grammar-specific rules , and of other mildly context-sensitive grammar | 0 |
the search space for metaphor identification was the british national corpus that was parsed using the rasp parser of briscoe et al---because it often requires much less training time in practice than batch training algorithms | 0 |
we present a generative model for unsupervised coreference resolution that views coreference as an em clustering process---by these results , we present a generative , unsupervised model for probabilistically inducing coreference | 1 |
in this study , we used the lang-8 learner corpora created by mizumoto et al---in this study , we use the lang-8 learner corpora created by mizumoto et al | 1 |
nevertheless , we believe it is possible to do better by using a constrained topic model instead of traditional attribute selection methods---in this paper , we adopt a constrained topic model incorporating prior knowledge to select attribute | 1 |
in this paper we propose a new graph-based method that uses the knowledge in a lkb ( based on wordnet ) in order to perform unsupervised word sense disambiguation---in this paper we present a novel graph-based wsd algorithm which uses the full graph of wordnet efficiently , performing significantly better that previously published approaches in english | 1 |
we use the glove vectors of 300 dimension to represent the input words---we represent each word by a vector with length 300 | 1 |
language models were built with srilm , modified kneser-ney smoothing , default pruning , and order 5---these language models were built up to an order of 5 with kneser-ney smoothing using the srilm toolkit | 1 |
however , the framework of our proposed approach can be generalized to deal with a mix of review texts of more than one product---the tagger is based on the implementation of conditional random fields in the mallet toolkit | 0 |
soricut and marcu address the task of parsing discourse structures within the same sentence---soricut and marcu firstly addressed the task of parsing discourse structure within the same sentence | 1 |
a trigram model was built on 20 million words of general newswire text , using the srilm toolkit---for all data sets , we trained a 5-gram language model using the sri language modeling toolkit | 0 |
stance detection has been defined as automatically detecting whether the author of a piece of text is in favor of the given target or against it---stance detection is the task of automatically determining from text whether the author is in favor of the given target , against the given target , or whether neither inference is likely | 1 |
kobayashi et al adopted a supervised learning technique to search for useful syntactic patterns as contextual clues---kobayashi et al identified opinion relations by searching for useful syntactic contextual clues | 1 |
recently , mikolov et al introduced an efficient way for inferring word embeddings that are effective in capturing syntactic and semantic relationships in natural language---mikolov et al introduced cbow model to learn vector representations which captures a large number of syntactic and semantic word relationships from unstructured text data | 1 |
amr relations consist of core semantic roles drawn from the propbank as well as very fine-grained semantic relations defined specifically for amr---amr relations consist of core semantic roles drawn from the propbank as well as fine-grained semantic relations defined specifically for amr | 1 |
in particular , the rank of the systems is calculated on the official twitter 2015 test set---rank on the progress set is calculated on the performance on the twitter 2014 subset | 1 |
we feed our features to a multinomial naive bayes classifier in scikit-learn---we used the svd implementation provided in the scikit-learn toolkit | 1 |
taxonomies play an important role in many applications by organizing domain knowledge into a hierarchy of ‘ is-a ’ relations between terms---taxonomies , which serve as backbones for structured knowledge , are useful for many nlp applications | 1 |
summac has established definitively in a large-scale evaluation that automatic text summarization is very effective in relevance assessment tasks---text summarization evaluation ( summac ) has established definitively that automatic text summarization is very effective in relevance assessment tasks | 1 |
a pun is a form of wordplay , which is often profiled by exploiting polysemy of a word or by replacing a phonetically similar sounding word for an intended humorous effect---when a pun is a spoken utterance , two types of puns are commonly distinguished : homophonic puns , which exploit different meanings of the same word , and heterophonic puns , in which one or more words have similar but not identical pronunciations to some other word or phrase that is alluded to in the pun | 1 |
morphological analysis is the basis for many nlp applications , including syntax parsing , machine translation and automatic indexing---a 5-gram language model with kneser-ney smoothing is trained using srilm on the target language | 0 |
the parameters of the log-linear model are tuned by optimizing bleu on the development data using mert---the scaling factors are tuned with mert with bleu as optimization criterion on the development sets | 1 |
as is the case with the multi-task system , we apply the cross entropy loss function and the adam optimizer to train the energy-based network---we use binary cross-entropy as the objective function and the adam optimization algorithm with the parameters suggested by kingma and ba for training the network | 1 |
then , they searched the propbank wall street journal corpus for sentences containing such lexical items and annotated them with respect to metaphoricity---a skip-gram model from mikolov et al was used to generate a 128-dimensional vector of a particular word | 0 |
we used a phrase-based smt model as implemented in the moses toolkit---in this work , we apply a standard phrase-based translation system | 1 |
we estimated 5-gram language models using the sri toolkit with modified kneser-ney smoothing---we trained a trigram language model on the chinese side , with the srilm toolkit , using the modified kneser-ney smoothing option | 1 |
on the remaining tweets , we trained a 10-gram word length model , and a 5-gram language model , using srilm with kneser-ney smoothing---associated with each phrasal pattern is a conceptual template | 0 |
we first trained a trigram bnlm as the baseline with interpolated kneser-ney smoothing , using srilm toolkit---for all experiments , we used a 4-gram language model with modified kneser-ney smoothing which was trained with the srilm toolkit | 1 |
le and mikolov have used a paragraph vector methodology with an unsupervised algorithm based on feed-forward neural networks that learns fixed-length vector representations from variable-length texts---le and mikolov introduce an unsupervised algorithm that learns fixed-length feature representations from variable-length pieces of texts | 1 |
our phrase-based mt system is trained by moses with standard parameter settings---our baseline system is a standard phrase-based smt system built with moses | 1 |
the word embeddings are identified using the standard glove representations---the word embeddings are initialized using the pre-trained glove , and the embedding size is 300 | 1 |
brown , et al describe a statistical algorithm for partitioning word senses into two groups---brown et al described a statistical algorithm for partitioning the senses of a word into two groups | 1 |
at the same time , it provides an easy-to-use interface to access the revision data---complexity of this algorithm is linear in the sentence length | 0 |
the proposed approach trains models based on only a part of the training set that is more similar to the target domain---with a full training set , this approach extracts portions of the training data that are most similar to the target data | 1 |
semantic role labeling is the problem of analyzing clause predicates in open text by identifying arguments and tagging them with semantic labels indicating the role they play with respect to the verb---word sense disambiguation ( wsd ) is the task of assigning sense tags to ambiguous lexical items ( lis ) in a text | 0 |
we train the models for 20 epochs using categorical cross-entropy loss and the adam optimization method---we update the model parameters by minimizing l_c and l_k with adam optimizer | 1 |
the idea behind gold is to facilitate a more standardised use of basic grammatical features---both strategies produce f-score gains of more than 3 % across the three coreference evaluation metrics ( muc , b3 , and ceaf ) | 0 |
finally , the graph is clustered using chinese whispers---the clustering is done with the chinese whispers algorithm | 1 |
for the semantic language model , we used the srilm package and trained a tri-gram language model with the default good-turing smoothing---we use sri language model toolkit to train a 5-gram model with modified kneser-ney smoothing on the target-side training corpus | 1 |
mihalcea et al propose a method to learn multilingual subjective language via crosslanguage projections---nivre and scholz proposed a variant of the model of yamada and matsumoto that reduces the complexity , from the worst case quadratic to linear | 0 |
brown clustering is a greedy , hierarchical , agglomerative hard clustering algorithm to partition a vocabulary into a set of clusters with minimal loss in mutual information---the brown algorithm is a hierarchical clustering algorithm which clusters words to maximize the mutual information of bigrams | 1 |
we trained the parser on the training portion of patb part 3---we trained the pos tagger using the aforementioned sections of the atb | 1 |
to reduce overfitting , we apply the dropout method to regularize our model---coreference resolution is the process of finding discourse entities ( markables ) referring to the same real-world entity or concept | 0 |
the first two methods are language independent and we argue that the third method can be adapted to other morphologically complex languages---and we argued that the third method can be adapted to other morphologically complex languages | 1 |
in contrast , collobert et al explore a cnn architecture to solve various sequential and non-sequential nlp tasks such as part-of-speech tagging , named entity recognition and also language modeling---collobert et al propose a multi-task learning framework with dnn for various nlp tasks , including part-of-speech tagging , chunking , named entity recognition , and semantic role labelling | 1 |
further , we apply a 4-gram language model trained with the srilm toolkit on the target side of the training corpus---we use the sri language model toolkit to train a 5-gram model with modified kneser-ney smoothing on the target-side training corpus | 1 |
for core task , we collect 6 types of similarity measures , i.e. , string similarity , number similarity , knowledge-based similarity , corpus-based similarity , syntactic dependency similarity and machine translation similarity---in core task , using 6 types of similarity measures , i . e . , string similarity , number similarity , knowledge-based similarity , corpus-based similarity , syntactic dependency similarity and machine translation similarity | 1 |
for support vector machines , we used the liblinear package---we use the svm implementation available in the liblinear package | 1 |
training duration was decided using early stopping---training time was decided using early stopping | 1 |
the log-linear feature weights are tuned with minimum error rate training on bleu---the log-linear parameter weights are tuned with mert on the development set | 1 |
word sense disambiguation ( wsd ) is the task of assigning sense tags to ambiguous lexical items ( lis ) in a text---word sense disambiguation ( wsd ) is a fundamental task and long-standing challenge in natural language processing ( nlp ) | 1 |
we train a 4-gram language model on the xinhua portion of the english gigaword corpus using the srilm toolkits with modified kneser-ney smoothing---we adopt the phrase definition in , that each phrase is composed by a pair of head term and modifier | 0 |
relation extraction is the task of recognizing and extracting relations between entities or concepts in texts---relation extraction ( re ) is the task of extracting semantic relationships between entities in text | 1 |
the results show that the rule-based chunk approach is superior---regarding svm we used linear kernels implemented in svm-light | 0 |
on the free-topic dataset , pid performs better than sid as expected ( 77.6 vs 72.3 in f-score ) but adding the features derived from the word embedding clustering underlying the automatic sid increases the results considerably , leading to an f-score of 84.8---the pid performs better than the automatically extracted sid , but adding the features derived from the word embedding clustering underlying the sid , modeling the broad discussion topics , increases the results considerably | 1 |
distributional semantic models represent lexical meaning in vector spaces by encoding corpora derived word co-occurrences in vectors---distributional semantic models encode word meaning by counting co-occurrences with other words within a context window and recording these counts in a vector | 1 |
the vector math-w-5-1-0-31 is the filter of the convolution---pooling over a linear sequence of values returns the subsequence of math-w-2-5-1-108 | 1 |
we used the stanford parser to generate the grammatical structure of sentences---we used the stanford lexicalized parser to parse the question | 1 |
the experiments were conducted with the scikit-learn tool kit---in human machine conversation , our work also motivates systematic investigations on how eye gaze contributes to attention prediction and its implications in automated language processing | 0 |
we trained word vectors with the two architectures included in the word2vec software---in this study , we propose a new co-regression algorithm to address the above problem by leveraging unlabeled reviews | 0 |
we investigate the use of deep bidirectional lstms for joint extraction of opinion entities and the is-from and is-about relations that connect them , the first such attempt using a deep learning approach---on this and other problems in nlp , we investigate here the use of deep bidirectional lstms for joint extraction of opinion expressions , holders , targets and the relations that connect them | 1 |
we use the glove pre-trained word embeddings for the vectors of the content words---multitask learning models have been proven very useful for several nlp tasks and applications , | 0 |
the rasp toolkit is used for sentence boundary detection , tokenisation , pos tagging and finding grammatical relations between words in the text---the pos tags , grammatical relations and phrase structure rules are derived from the rasp toolkit | 1 |
semantic parsing is the task of mapping natural language sentences to complete formal meaning representations---semantic parsing is the task of automatically translating natural language text to formal meaning representations ( e.g. , statements in a formal logic ) | 1 |
by adding a cnn architecture collobert et al built the senna application that uses representation in language modeling tasks---collobert et al adapted the original cnn proposed by lecun and bengio for modelling natural language sentences | 1 |
semantic role labeling ( srl ) is the task of automatically annotating the predicate-argument structure in a sentence with semantic roles---semantic role labeling ( srl ) is the process of assigning semantic roles to strings of words in a sentence according to their relationship to the semantic predicates expressed in the sentence | 1 |
we train the concept identification stage using infinite ramp loss with adagrad---cite-p-11-1-6 presented a specialized word embedding by employing an external | 0 |
finally , based on recent results in text classification , we also experiment with a neural network approach which uses a long-short term memory network---we use the word2vec tool to pre-train the word embeddings | 0 |
we used srilm to build a 4-gram language model with interpolated kneser-ney discounting---lepage proposed an algorithm for solving an analogical equation | 0 |
however , declarative knowledge is still created in a costly manual process---in this paper we present dkpro wsd , a freely licensed , general-purpose framework for wsd | 0 |
in this paper we presented a word sense disambiguation based system for multilingual lexical substitution---the discourse structure is a directed graph , where nodes correspond to segments of a document ( which we will refer to as “ blocks ” of text ) , and the edges define the dependencies between them | 0 |
in this paper , we use two web databases set1 and set2 for simplicity---in this paper , we use two web databases set1 and set2 | 1 |
coreference resolution is a challenging task , that involves identification and clustering of noun phrases mentions that refer to the same real-world entity---with regard to surface realisation , decisions are often made according to a language model of the domain | 0 |
we estimate a 5-gram language model using interpolated kneser-ney discounting with srilm---we used srilm to build a 4-gram language model with interpolated kneser-ney discounting | 1 |
in this paper , we propose schema induction using coupled tensor factorization ( sictf ) , a novel tensor factorization method for relation schema induction---for this language , which has limited the number of possible tags , we used a very rich tagset of 680 morphosyntactic tags | 0 |
coreference resolution is the task of identifying all mentions which refer to the same entity in a document---coreference resolution is a key task in natural language processing ( cite-p-13-1-8 ) aiming to detect the referential expressions ( mentions ) in a text that point to the same entity | 1 |
a first step towards making use of such data would be to automatically align spoken words with their translations---a first step towards this goal would be to automatically align spoken words with their translations | 1 |
the obtained triple translation model is also used for collocation translation extraction---triple translation model is used to extract collocation translations | 1 |
we use the glove pre-trained word embeddings for the vectors of the content words---we use the glove vectors of 300 dimension to represent the input words | 1 |
studies in this line of work include haghighi et al. , 2009 ; denero and klein , 2010 ; setiawan et al. , 2010 , just to name a few---in particular , abstract meaning representation , is a novel representation of semantics | 0 |
cahill et al presents penn-ii treebankbased lfg parsing resources---however , yarowsky proposed an approach in which strong collocations were identified for wsd | 0 |
in this paper , our evaluation objects are the oral english picture compositions in english as a second language ( esl ) examination---in this paper , our evaluation objects are the oral english picture compositions in english | 1 |
once users are motivated to find specific information related to their information goals , the interaction context can provide useful cues for the system to automatically identify problematic situations and user intent---in finding specific information related to their information goals , user behavior and interaction context can help automatically identify problematic situations and user intent | 1 |
each system is optimized using mert with bleu as an evaluation measure---each translation model is tuned using mert to maximize bleu | 1 |
in this paper , we investigate the problem of automatic generation of scientific surveys starting from keywords provided by a user---in this paper , we explore strategies for generating and evaluating such surveys of scientific topics automatically | 1 |
word embeddings are popular representations for syntax , semantics and other areas---in our work , we adopt a knowledge-based word similarity method with wsd to measure the semantic similarity between two sentences | 0 |
in this case the environment of a learning agent is one or more other agents that can also be learning at the same time---in this case the environment of a learning agent is one or more other agents that can also be learning | 1 |
the syntax tree features were calculated using the stanford parser trained using the english caseless model---lexical and syntactic features were automatically extracted from the utterances using the stanford parser default tokenizer and part of speech tagger | 1 |
phoneme based models , such as , the ones based on weighted finite state transducers and extended markov window treat transliteration as a phonetic process rather than an orthographic process---phoneme based models like the ones based on weighted finite state transducers and extended markov window treat transliteration as a phonetic process rather than an orthographic process | 1 |
in order to measure translation quality , we use bleu and ter scores---for the evaluation of the results we use the bleu score | 1 |
we use the scikit-learn toolkit as our underlying implementation---sentiment analysis is the task of identifying positive and negative opinions , sentiments , emotions and attitudes expressed in text | 0 |
for the generation of the parse trees we used the stanford parser---for the shallow parsing step , required for coplda np , we used the stanford parser | 1 |
the disadvantage of word-to-word translation is overcome by phrase-based translation and log-linear model combination---user : i want to prevent tom from reading my file | 0 |
the log-linear model is then tuned as usual with minimum error rate training on a separate development set coming from the same domain---the feature weights are tuned to optimize bleu using the minimum error rate training algorithm | 1 |
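
Each row above packs a citation-sentence pair into the `text` column, with the two sentences joined by `---`, plus a binary `label` (1 appears to mark pairs describing the same method or claim, 0 unrelated pairs). As a minimal illustration of that row format, the sketch below parses such rows into `(sentence_a, sentence_b, label)` triples; the filename `rows.txt` and the helper `parse_row` are hypothetical, not part of the dataset.

```python
# Minimal sketch: parse rows of the table above into sentence-pair triples.
# Assumes the rows were saved verbatim to a plain-text file "rows.txt"
# (hypothetical name); each row looks like "first sentence---second sentence | 1 |".

def parse_row(line: str):
    """Split one table row into its two sentences and integer label."""
    body, label, _ = line.rsplit("|", 2)   # "... | 1 |" -> body, " 1 ", trailing ""
    first, second = body.split("---", 1)   # "---" delimits the sentence pair
    return first.strip(), second.strip(), int(label)

with open("rows.txt", encoding="utf-8") as f:
    rows = [
        parse_row(line)
        for line in f
        # keep only data rows: skip the header (no "---") and the "|---|---|" separator
        if "---" in line and line.strip() != "|---|---|"
    ]

print(rows[0])  # e.g. ('we obtained a phrase table ...', 'we used the moses toolkit ...', 1)
```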