text (string, length 82–736) | label (int64: 0 or 1) |
|---|---|
we train word embeddings using the continuous bag-of-words and skip-gram models described in mikolov et al as implemented in the open-source toolkit word2vec---for all the experiments below , we utilize the pretrained word embeddings word2vec from mikolov et al to initialize the word embedding table | 1 |
recently , deep learning has also been introduced to propose an end-to-end convolutional neural network for relation classification---a number of convolutional neural network , recurrent neural network , and other neural architectures have been proposed for relation classification | 1 |
we optimise the feature weights of the model with minimum error rate training against the bleu evaluation metric---in this task , we used conditional random fields | 0 |
in this paper , we deal with the problem of product aspect rating prediction---other than similarity features , we also use evaluation metrics for machine translation as suggested in for paraphrase recognition on microsoft research paraphrase corpus | 0 |
in this work , we employ the toolkit word2vec to pre-train the word embedding for the source and target languages---in our word embedding training , we use the word2vec implementation of skip-gram | 1 |
semantic parsing is the task of mapping natural language to machine interpretable meaning representations---in this paper we explore a row-less extension of universal schema that forgoes explicit row representations | 0 |
yang et al introduced an attention mechanism using a single matrix and outputting a single vector---yang et al and proposed a hierarchical rnn model to learn attention weights based on the local context using an unsupervised method | 1 |
for the language model , we used sri language modeling toolkit to train a trigram model with modified kneser-ney smoothing on the 31 , 149 english sentences---we built a 5-gram language model on the english side of europarl and used the kneser-ney smoothing method and srilm as the language model toolkit | 1 |
we use word2vec to train the word embeddings---the most commonly used word embeddings were word2vec and glove | 1 |
we observe that a good question is a natural composition of interrogatives , topic words , and ordinary words---good questions in conversational systems are a natural composition of interrogatives , topic words , and ordinary words | 1 |
the binary syntactic features were automatically extracted using the stanford parser---this syntactic information is obtained from the stanford parser | 1 |
we propose a ranking model that combines a translation model with the cosine similarity-based method---in this paper , we propose a ranking model that combines a translation model with the cosine-based similarity method | 1 |
we used moses with the default configuration for phrase-based translation---we used moses , a state-of-the-art phrase-based smt model , in decoding | 1 |
dependency parsing is a way of structurally analyzing a sentence from the viewpoint of modification---we use the transformer model from vaswani et al which is an encoder-decoder architecture that relies mainly on a self-attention mechanism | 0 |
we report case-sensitive bleu and ter as the mt evaluation metrics---we report mt performance in table 1 by case-insensitive bleu | 1 |
the goal of multi-task learning is to learn related tasks jointly in order to improve their models over independently learned one---multi-task learning using a related auxiliary task can lead to stronger generalization and better regularized models | 1 |
we use the glove vector representations to compute cosine similarity between two words---we use the glove algorithm to obtain 300-dimensional word embeddings from a union of these corpora | 1 |
srilm can be used to compute a language model from ngram counts---srilm toolkit is used to build these language models | 1 |
we used srilm to build a 4-gram language model with interpolated kneser-ney discounting---we further used a 5-gram language model trained using the srilm toolkit with modified kneser-ney smoothing | 1 |
in this paper , we presented a new approach for domain adaptation using ensemble decoding---in this paper , we evaluate performance on a domain adaptation setting | 1 |
the weights of the different feature functions were optimised by means of minimum error rate training on the 2013 wmt test set---the log linear weights for the baseline systems are optimized using mert provided in the moses toolkit | 1 |
we created word embeddings from googles pretrained word2vec and created topic embeddings from a trained lda specific to the corpus---we used the pre-trained word embeddings that are learned using the word2vec toolkit on google news dataset | 1 |
first , papers using comparatively sized corpora have reported encouraging results for similar experiments---first , papers using comparatively sized corpora reported encouraging results for similar experiments | 1 |
the work by mccallum demonstrates a method for iteratively constructing feature conjunctions that would increase conditional log-likelihood if added to the model---mccallum suggested an efficient method of feature induction by iteratively increasing conditional loglikelihood for discrete features | 1 |
we use latent dirichlet allocation to obtain the topic words for each lexical pos---we can learn a topic model over conversations in the training data using latent dirchlet allocation | 1 |
cite-p-13-5-0 extend this idea using a recurrent language model to generate responses in a context-sensitive manner---cite-p-13-5-4 propose a model to map from dialogue acts to natural language sentences and use bleu to evaluate the quality of the generated sentences | 1 |
in this paper , we have discussed possibilities to translate via pivot languages on the character level---we use moses , an open source toolkit for training different systems | 0 |
unsupervised parsing has attracted researchers for over a quarter of a century for reviews )---unsupervised parsing has attracted researchers for decades for recent reviews ) | 1 |
the english side of the parallel corpus is trained into a language model using srilm---the language model is trained on the target side of the parallel training corpus using srilm | 1 |
sentiment analysis is the study of the subjectivity and polarity ( positive vs. negative ) of a text ( cite-p-7-1-10 )---sentiment analysis is the process of identifying and extracting subjective information using natural language processing ( nlp ) | 1 |
word sense disambiguation ( wsd ) is a key enabling-technology that automatically chooses the intended sense of a word in context---word sense disambiguation ( wsd ) is a particular problem of computational linguistics which consists in determining the correct sense for a given ambiguous word | 1 |
word embeddings have proven to be effective models of semantic representation of words in various nlp tasks---multi-task learning has resulted in successful systems for various nlp tasks , especially in cross-lingual settings | 1 |
the training set is used to train the phrase-based translation model and language model for moses---the phrase based model in moses is trained on the parallel data created from the training part of htb | 1 |
one of the most popular instantiations of loglinear models in smt are phrase-based models---most recent approaches in smt , eg , use a log-linear model to combine probabilistic features | 1 |
the dependency parser we use is an implementation of a transition-based dependency parser---our model is an extension of the transition-based parsing framework described by nivre for dependency tree parsing | 1 |
in this work we aim to computationally capture linguistic cues that predict a conversation's future health---in these approaches , our work is concerned with predicting the future trajectory of an ongoing conversation | 1 |
we build an open-vocabulary language model with kneser-ney smoothing using the srilm toolkit---we built a 5-gram language model from it with the sri language modeling toolkit | 1 |
this method , described in the next section , learns transformations that capture non-linearity but vary smoothly as the input changes---in the next section , learns transformations that capture non-linearity but vary smoothly | 1 |
word sense disambiguation ( wsd ) is the task of determining the correct meaning ( “ sense ” ) of a word in context , and several efforts have been made to develop automatic wsd systems---word sense disambiguation ( wsd ) is a natural language processing ( nlp ) task in which the correct meaning ( sense ) of a word in a given context is to be determined | 1 |
we use skip-gram representation for the training of word2vec tool---we use the word2vec tool with the skip-gram learning scheme | 1 |
word sense disambiguation ( wsd ) is a key enabling-technology that automatically chooses the intended sense of a word in context---sentiment analysis is the task of automatically identifying the valence or polarity of a piece of text | 0 |
we present an approximated conditional random field using coarse-to-fine decoding and early updating---we combined heterogeneous unsupervised algorithms to obtain competitive performance | 0 |
for training the translation model and for decoding we used the moses toolkit---we used a standard pbmt system built using moses toolkit | 1 |
metaphor is ubiquitous in text , even in highly technical text---word usage is important for reasoning about the implications of text | 1 |
we use the pre-trained word2vec embeddings provided by mikolov et al as model input---we use the word2vec tool to pre-train the word embeddings | 1 |
topic models have great potential for helping users understand document corpora---topic models implicitly use document level co-occurrence information | 1 |
word alignment is a critical first step for building statistical machine translation systems---word alignment is a key component of most endto-end statistical machine translation systems | 1 |
we further used a 5-gram language model trained using the srilm toolkit with modified kneser-ney smoothing---we then created trigram language models from a variety of sources using the srilm toolkit , and measured their perplexity on this data | 1 |
natural language is a medium presumably known by most users---natural language consists of a number of relational structures , many of which can be obscured by lexical idiosyncrasies , regional variation and domain-specific conventions | 1 |
the weights of the different feature functions were optimised by means of minimum error rate training---the systems were tuned using a small extracted parallel dataset with minimum error rate training and then tested with different test sets | 1 |
following the work of koo et al , we used a tagger trained on training data to provide part-of-speech tags for the development and test sets , and used 10-way jackknifing to generate part-of-speech tags for the training set---in the given example , * ss would be a finite-state acceptor that allows sibilant-sibilant sequences , but only at a cost | 0 |
we use the svm implementation from scikit-learn , which in turn is based on libsvm---we use the scikit-learn toolkit as our underlying implementation | 1 |
here we call a sequence of words which have lexical cohesion relation with each other a lexical chain like---we call a sequence of words which are in lexical cohesion relation with each other a lexical chain like | 1 |
user adaptation to the system ’ s lexical and syntactic choices can be particularly useful in flexible input dialog systems---we pre-train the word embeddings using word2vec | 0 |
the phrasebased machine translation uses the grow-diag-final heuristic to extend the word alignment to phrase alignment by using the intersection result---for phrase extraction the grow-diagfinal heuristics described in is used to derive the refined alignment from bidirectional alignments | 1 |
as cite-p-20-7-12 puts it , coreference resolution is a "difficult , but not intractable problem , " and we have been making "slow , but steady progress" on improving machine learning approaches to the problem in the past fifteen years---as cite-p-20-7-12 puts it , coreference resolution is a " difficult , but not intractable problem , " and we have been making " slow , but steady progress " on improving machine learning approaches to the problem | 1 |
text categorization is the problem of automatically assigning predefined categories to free text documents---text categorization is a crucial and well-proven method for organizing the collection of large scale documents | 1 |
we estimate a 5-gram language model using interpolated kneser-ney discounting with srilm---while promising , this technique often introduces noise to the generated training data , which can severely affect the model | 0 |
we used the moses toolkit for performing statistical machine translation---we used the phrasebased translation system in moses 5 as a baseline smt system | 1 |
for training our system classifier , we have used scikit-learn---for data preparation and processing we use scikit-learn | 1 |
recently , rnn-based models have been successfully used in machine translation and dialogue systems---lstm networks have been used successfully for language modelling , sentiment analysis , textual entailment , and machine translation | 1 |
we rely on the sentiment analysis module in the stanford corenlp---specifically , we use the stanford sentiment treebank | 1 |
in this paper , a novel singleton detection system which makes use of word embeddings and neural networks is presented---by 3 . 85 % ) , and in medium-resource scenarios the performance was 65 . 06 % ( almost the same as baseline ) | 0 |
in this paper , we propose an inference method , the ldi , which is able to decode the optimal label sequence on latent conditional models---in this paper , we propose a new inference algorithm , latent dynamic inference ( ldi ) , by systematically | 1 |
the language model was a 5-gram language model estimated on the target side of the parallel corpora by using the modified kneser-ney smoothing implemented in the srilm toolkit---this architecture is very similar to the framework of uima | 0 |
social media is a rich source of rumours and corresponding community reactions---social media is a natural place to discover new events missed by curation , but mentioned online by someone planning to attend | 1 |
we initialize the embedding weights by the pre-trained word embeddings with 200 dimensional vectors---we embed all words and characters into low-dimensional real-value vectors which can be learned by language model | 1 |
the hsql was a valuable experience in the effort to make transportable natural language interfaces---implementation represents a major effort in bringing natural language into practical use | 1 |
we learn our word embeddings by using word2vec 3 on unlabeled review data---we train skip-gram word embeddings with the word2vec toolkit 1 on a large amount of twitter text data | 1 |
a unified model is proposed to fuse different types of sentiment information and train sentiment classifier for target domain---the general sentiment information extracted from sentiment lexicons is adapted to target domain using domain-specific sentiment | 1 |
we have demonstrated targeted methods for extracting world knowledge that is necessary for making quantifier scope disambiguation decisions---we provide an empirical demonstration that our system is able to resolve quantifier scope ambiguities | 1 |
the quality of translations is evaluated by the case insensitive nist bleu-4 metric---translation quality is evaluated by case-insensitive bleu-4 metric | 1 |
first , to establish our baseline tagging performance , we take the classification algorithm outlined earlier in section 4 , and apply it to the switchboard corpus for both training and testing , replicating the work reported in webb et al---to that end , we take the classification algorithm outlined earlier in section 4 , and apply it to the switchboard corpus for both training and testing , replicating the work reported in webb et al | 1 |
we have presented a novel framework where word alignment is framed as submodular maximization subject to matroid constraints---in this paper , we moreover show that submodularity naturally arises in word alignment problems | 1 |
in all cases , we used the implementations from the scikitlearn machine learning library---we use the scikit-learn toolkit as our underlying implementation | 1 |
conditional random fields are undirected graphical models represented as factor graphs---conditional random fields are undirected graphical models trained to maximize a conditional probability | 1 |
the 'grammar ' consists of a lexicon where each lexical item is associated with a finite number of structures for which that item is the 'head '---the grammar consists of head-dependent relations between words and can be learned automatically from a raw corpus using the reestimation algorithm which is also introduced in this paper | 1 |
we used the srilm toolkit to train a 4-gram language model on the xinhua portion of the gigaword corpus , which contains 238m english words---we built a 5-gram language model on the english side of europarl and used the kneser-ney smoothing method and srilm as the language model toolkit | 1 |
many methods have been proposed to compute distributional similarity between words---many methods have been proposed to compute distributional similarity between words , eg , and | 1 |
we use the l2-regularized logistic regression of liblinear as our term candidate classifier---we use the wrapper of the scikit learn python library over the liblinear logistic regression implementation | 1 |
the model parameters are trained using minimum error-rate training---unreliable scores does not result in a reliable one | 0 |
we introduce the sv000gg systems : two ensemble methods for the complex word identification task of semeval 2016---also described is a strategy for creating cooperative responses to user queries , incorporating an intelligent language generation capability that produces content-dependent verbal descriptions of listed items | 0 |
our system is competitive with the best systems , obtaining the highest reported f-scores on a number of the bakeoff corpora---and consequently , problems peculiar to spontaneous speech arise in dependency structure analysis , such as ambiguous clause boundaries | 0 |
the latent dirichlet allocation is a topic model that is assumed to provide useful information for particular subtasks---latent dirichlet allocation , first introduced by , is a type of topic model that performs the so-called latent semantic analysis | 1 |
the output of bigru is then used as the input to the capsule network---then , the output of bigru is fed as input to the capsule network | 1 |
lda is a generative probabilistic model where documents are viewed as mixtures over underlying topics , and each topic is a distribution over words---lda is a probabilistic model of text data which provides a generative analog of plsa , and is primarily meant to reveal hidden topics in text documents | 1 |
translation performance was measured by case-insensitive bleu---case-insensitive bleu4 was used as the evaluation metric | 1 |
pv is an unsupervised framework that learns distributed representations for sentences and documents---doc2vec is an unsupervised algorithm to learn distributed representation of multi-word sequences in semantic space | 1 |
finally , we propose a scalable low-rank approximation approach for learning joint embeddings of news stories and images---in this paper , we introduce a low-rank approximation based approach for learning joint embeddings of news stories and images | 1 |
the exponential log-linear model weights of our system are set by tuning the system on development data using the mert procedure by means of the publicly available zmert toolkit 1---the exponential log-linear model weights of both the smt and re-scoring stages of our system were set by tuning the system on development data using the mert procedure by means of the publicly available zmert toolkit 1 | 1 |
commonly used models such as hmms , n-gram models , markov chains , probabilistic finite state transducers and pcfgs all fall in the broad family of pfsms---commonly used models such as hmms , n-gram models , markov chains and probabilistic finite state transducers all fall in the broad family of pfsms | 1 |
we use belief propagation or bp for inference in our graphical models---we use belief propagation for inference in our crfs | 1 |
we obtained a phrase table out of this data using the moses toolkit---we use the moses smt toolkit to test the augmented datasets | 1 |
the experimental results show that our method achieves better performance than the state-of-the-art methods---experimental results show that our proposed method outperforms the state-of-the-art methods | 1 |
in this paper , we follow the work of and to extract alliteration chains and rhyme chains in text by using cmu speech dictionary 1---following , we build a feature set which includes alliteration chain and rhyme chain by using cmu speech dictionary 1 | 1 |
although entity linking is a widely researched topic , the same can not be said for entity linking geared for languages other than english---in this paper , we present a unified model for both word sense representation and disambiguation | 0 |
we employ glove , a state-of-the-art model of distributional lexical semantics to obtain vector representations for all corpus words---we use glove word embeddings , an unsupervised learning algorithm for obtaining vector representations of words | 1 |
the method is a naive-bayes classifier which learns from noisy data---campbell developed a set of linguistically motivated hand-written rules for gap insertion | 0 |
sequence labeling is the simplest subclass of structured prediction problems---although sequence labeling is the simplest subclass , a lot of real-world tasks are modeled as problems of this simplest subclass | 1 |
sentiment analysis ( sa ) is the research field that is concerned with identifying opinions in text and classifying them as positive , negative or neutral---sentiment analysis ( sa ) is the task of determining the sentiment of a given piece of text | 1 |
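Each row above packs two sentences into one `text` field, joined by a `---` delimiter, with a binary `label` marking whether the pair expresses the same claim. A minimal sketch of how one might split a row back into its parts (the `parse_row` helper is hypothetical, not part of the dataset; the `---` delimiter and `(text, label)` schema follow the table above):

```python
def parse_row(text: str, label: int) -> tuple[str, str, int]:
    """Split the 'text' field on the first '---' delimiter into its
    two sentences and attach the binary label (hypothetical helper)."""
    left, right = text.split("---", 1)
    return left.strip(), right.strip(), label

# Example row, taken verbatim from the table above:
a, b, y = parse_row(
    "we use word2vec to train the word embeddings---"
    "the most commonly used word embeddings were word2vec and glove",
    1,
)
```

Splitting with `maxsplit=1` keeps any later `---` runs inside the second sentence intact, which matters because the sentence texts themselves are raw lowercase extractions and occasionally contain punctuation noise.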