| text (string, length 82 to 736) | label (int64, 0 or 1) |
|---|---|
for all classifiers , we used the scikit-learn implementation---we use a random forest classifier , as implemented in scikit-learn | 1 |
we use the logistic regression implementation of liblinear wrapped by the scikit-learn library---we use a java implementation 2 of svm from liblinear , with the original parameter values used by the nrc canada system | 1 |
we use berkeley pcfg parser to parse sentences---for pcfg parsing , we select the berkeley parser | 1 |
we find that the learner's uncertainty is a robust predictive criterion that can be easily applied to different learning models---and that uncertainty is a robust predictive criterion that can be easily applied to different learning models | 1 |
dong et al employs three fixed cnns to represent questions , while ours is able to express the focus of each unique answer aspect to the words in the question---dong et al use three columns of cnns to represent questions respectively when dealing with different answer aspects | 1 |
we use the stanford parser for english language data---we parse all documents using the stanford parser | 1 |
we implemented this model using the srilm toolkit with the modified kneser-ney discounting and interpolation options---we use sri language model toolkit to train a 5-gram model with modified kneser-ney smoothing on the target-side training corpus | 1 |
the irstlm toolkit is used to build ngram language models with modified kneser-ney smoothing---cabrio and villata combined textual entailment with argumentation theory to automatically extract the arguments from online debates | 0 |
in this paper we study a challenging task to automatically construct sports news from live text commentary---costa-jussa and fonollosa considered the source reordering as a translation task which translates the source sentence into reordered source sentence | 0 |
we apply reinforce to directly optimize the task reward of this structured prediction problem---taxonomies are widely used for knowledge standardization , knowledge sharing , and inferencing in natural language processing tasks | 0 |
ixa pipeline provides a simple , efficient , accurate and ready to use set of nlp tools---ixa pipeline provides ready to use modules to perform efficient and accurate | 1 |
most recently , yang et al introduced hierarchical attention networks for document classification---inspired by this , yang et al introduced hierarchical attention networks where the representation of a document is hierarchically built up | 1 |
word sense disambiguation , the task of automatically assigning predefined meanings to words occurring in context , is a fundamental task in computational lexical semantics---word sense disambiguation is a key task in computational lexical semantics , inasmuch as it addresses the lexical ambiguity of text by making explicit the meaning of words occurring in a given context | 1 |
for the classification task , we use pre-trained glove embedding vectors as lexical features---for the first two features , we adopt a set of pre-trained word embedding , known as global vectors for word representation | 1 |
we evaluated our approaches using the englishfrench hansards data from the 2003 naacl shared task---natural language generation ( nlg ) plays a critical role in spoken dialogue systems ( sds ) | 0 |
chen et al proposes a character-enhanced word representation model by adding the averaged character embeddings to the word embedding---chen et al proposed a character-enhanced chinese word embedding model , which splits a chinese word into several characters and add the characters into the input layer of their models | 1 |
baroni et al show that word embeddings are able to outperform count based word vectors on a variety of nlp tasks---baroni et al found that trained , general-purpose word embeddings-bdk2014-systematically outperform count-based representations on most of these tasks | 1 |
psl is a new statistical relational learning method that has been applied to many nlp and other machine learning tasks in recent years---psl is a new model of statistical relation learning and has been quickly applied to solve many nlp and other machine learning tasks in recent years | 1 |
we used the bleu score to evaluate the translation accuracy with and without the normalization---we evaluate the translation quality using the case-insensitive bleu-4 metric | 1 |
sentence compression is the task of generating a grammatical and shorter summary for a long sentence while preserving its most important information---sentence compression is the task of shortening a sentence while preserving its important information and grammaticality | 1 |
latent dirichlet allocation is a generative model in which a document is modeled as a finite mixture of topics , where each topic is represented as a multinomial distribution of words---latent dirichlet allocation is a widely used type of topic model in which documents can be viewed as probability distributions over topics , θ | 1 |
zhou et al further extend it to context-sensitive shortest path-enclosed tree , which dynamically includes necessary predicate-linked path information---zhou et al further extend it to context-sensitive shortest path-enclosed tree , which includes necessary predicate-linked path information | 1 |
we use pre-trained glove vector for initialization of word embeddings---in this task , we use the 300-dimensional 840b glove word embeddings | 1 |
erkan and radev use it to compute the sentence importance based on the concept of eigenvector centrality in a graph representation of sentences---similarity is a kind of association implying the presence of characteristics in common | 0 |
in this paper , we present reranking models for discourse parsing based on support vector machines ( svms ) and tks---in this paper , we present a discriminative approach for reranking discourse trees generated by an existing probabilistic discourse | 1 |
to minimize the objective , we use stochastic gradient descent with the diagonal variant of adagrad---we use stochastic gradient descent with adagrad , l 2 regularization and minibatch training | 1 |
these character-based representations are then fed into a two-layer bidirectional long short-term memory recurrent neural network---we process the embedded words through a multi-layer bidirectional lstm to obtain contextualized embeddings | 1 |
the annotation was performed using the brat 2 tool---the annotation was performed manually using the brat annotation tool | 1 |
the task of cross-language document summarization is to create a summary in a target language from documents in a different source language---named entity ( ne ) transliteration is the process of transcribing a ne from a source language to a target language based on phonetic similarity between the entities | 0 |
furthermore , we train a 5-gram language model using the sri language toolkit---we used a 5-gram language model with modified kneser-ney smoothing implemented using the srilm toolkit | 1 |
we use a phrase-based statistical machine translation system which is similar to---we use the pre-trained glove 50-dimensional word embeddings to represent words found in the glove dataset | 0 |
goldwasser et al took an unsupervised approach for semantic parsing based on self-training driven by confidence estimation---goldwasser et al presented a confidence-driven approach to semantic parsing based on self-training | 1 |
meanwhile , model-refinement is employed to reduce the bias incurred by ecoc---modality scopes in arabic are most likely realized as clauses , deverbal nouns or to-infinitives , according to al-sabbagh et al | 0 |
for our purpose we use word2vec embeddings trained on a google news dataset and find the pairwise cosine distances for all words---event coreference resolution is the task of identifying event mentions and clustering them such that each cluster represents a unique real world event | 0 |
mbr decoding aims to find the candidate hypothesis that has the least expected loss under a probability model---an mbr decoder seeks the hypothesis with the least expected loss under a probability model | 1 |
in our model , we introduce a sentinel to control the tradeoff between background knowledge and information from the text---this approach was successfully used in large vocabulary continuous speech recognition and in a phrase-based system for a small task | 0 |
specifically , we characterize the student's knowledge as a vector of feature weights , which is updated as the student interacts with the system---because we take a student's knowledge to be a vector of prediction parameters ( feature weights ) | 1 |
for the sick and msrvid experiments , we used 300-dimension glove word embeddings---we downloaded glove data as the source of pre-trained word embeddings | 1 |
we use 300 dimension word2vec word embeddings for the experiments---we use the pre-trained word2vec embeddings provided by mikolov et al as model input | 1 |
our approach replaces the opaque word types usually modeled in lda with continuous space embeddings of these words , which are generated as draws from a multivariate gaussian---otero et al showed how wikipedia could be used as a source of comparable corpora in different language pairs | 0 |
consequently , alignment is a central component of a number of important tasks involving text comparison : textual entailment recognition , textual similarity identification , paraphrase detection , question answering and text summarization , to name a few---alignment is a preliminary step for amr parsing , and our aligner improves current amr parser performance | 1 |
we propose a deep learning approach that automatically learns context-entity similarity measure for entity disambiguation---we propose a novel method to learn context entity association enriched with deep architecture | 1 |
as expected , the glass-box features help to reduce mae and rmse for both err and n ?---as expected , the glass-box features help to reduce mae and rmse | 1 |
coreference resolution is the task of identifying all mentions which refer to the same entity in a document---additionally , coreference resolution is a pervasive problem in nlp and many nlp applications could benefit from an effective coreference resolver that can be easily configured and customized | 1 |
after this we parse articles using the stanford parser---we use the stanford parser to derive the trees | 1 |
based on these previous attempts , this study proposes a multimodal interaction model by focusing on task manipulation , and predicts conversation states using probabilistic reasoning---through natural conversational interaction , this paper proposes a probabilistic model that computes timing dependencies among different types of behaviors | 1 |
we train an english language model on the whole training set using the srilm toolkit and train mt models mainly on a 10k sentence pair subset of the acl training set---we train a 5-gram language model with the xinhua portion of english gigaword corpus and the english side of the training set using the srilm toolkit | 1 |
it has previously been shown that word embeddings represent the contextualised lexical semantics of words---it has been empirically shown that word embeddings could capture semantic and syntactic similarities between words | 1 |
in this paper , the phrase-based machine translation system is utilized---in this work , we apply a standard phrase-based translation system | 1 |
named entity recognition ( ner ) is the task of identifying named entities in free text—typically personal names , organizations , gene-protein entities , and so on---we develop translation models using the phrase-based moses smt system | 0 |
in this experiment we have customized freely available maltparser which follows a data-driven approach---we have participated in this task using the freely available maltparser which follows the data-driven approach | 1 |
the continuous bag-of-words approach described by mikolov et al is learned by predicting the word vector based on the context vectors---the model builds on the continuous bag-of-words model which learns embeddings by predicting words given their contexts | 1 |
we use pre-trained vectors from glove for word-level embeddings---we use glove vectors with 100 dimensions trained on wikipedia and gigaword as word embeddings | 1 |
therefore , we adopt a greedy feature selection algorithm as described in to pick up positive features incrementally according to their contributions on the development data---therefore , we adopt the greedy feature selection algorithm as described in jiang et al to pick up positive features incrementally according to their contributions | 1 |
the system was trained using moses with default settings , using a 5-gram language model created from the english side of the training corpus using srilm---target language models were trained on the english side of the training corpus using the srilm toolkit | 1 |
we also use a 4-gram language model trained using srilm with kneser-ney smoothing---as a count-based baseline , we use modified kneser-ney as implemented in kenlm | 1 |
our model can thus easily be trained to detect semantic divergences in any parallel corpus---the translation quality is evaluated by case-insensitive bleu-4 metric | 0 |
bleu is one of the most popular metrics for automatic evaluation of machine translation , where the score is calculated based on the modified n-gram precision---the bleu metric has been widely accepted as an effective means to automatically evaluate the quality of machine translation outputs | 1 |
at the same time , it provides an easy-to-use interface to access the revision data---at the same time provides an easy-to-use interface to access the revision data | 1 |
we used the srilm toolkit to train a 4-gram language model on the english side of the training corpus---we trained a 4-gram language model on this data with kneser-ney discounting using srilm | 1 |
we use the logistic regression classifier in the skll package , which is based on scikit-learn , optimizing for f 1 score---we use the logistic regression classifier as implemented in the skll package , which is based on scikitlearn , with f1 optimization | 1 |
coreference resolution is a challenging task , that involves identification and clustering of noun phrases mentions that refer to the same real-world entity---coreference resolution is the task of automatically grouping references to the same real-world entity in a document into a set | 1 |
experiment results show that our approach achieves satisfactory performance against the baseline models---experiments on the benchmark data set show that our model achieves comparable and even better performance | 1 |
we trained a 5-grams language model by the srilm toolkit---we trained a tri-gram hindi word language model with the srilm tool | 1 |
the system 's semantic interpretation component can in particular deal with scoping problems involving coordination---in general , the use of modifier structures and the associated semantic interpretation component permits a good treatment of scoping problems involving coordination | 1 |
word sense disambiguation ( wsd ) is formally defined as the task of computationally identifying senses of a word in a context---word sense disambiguation ( wsd ) is the task of determining the meaning of a word in a given context | 1 |
we propose a weakly supervised framework to extract relations from chinese ugcs---in this work , we further propose a word embedding based model that consider the word formation of ugcs | 1 |
twitter is a widely used microblogging platform , where users post and interact with messages , “ tweets ”---twitter consists of a massive number of posts on a wide range of subjects , making it very interesting to extract information and sentiments from them | 1 |
combining the two modeling techniques yields the best known result on the benchmark which shows that the two models are complementary---complementary : combining the two modeling techniques yields the best known result on the one billion word benchmark | 1 |
for nb and svm , we used their implementation available in scikit-learn---we used the implementation of the scikit-learn 2 module | 1 |
the word embeddings are pre-trained , using word2vec 3---for creating the word embeddings , we used the tool word2vec 1 | 1 |
the words in the document , question and answer are represented using pre-trained word embeddings---the word embeddings are initialized with the publicly available word vectors trained through glove 5 and updated through back propagation | 1 |
we use the open source moses phrase-based mt system to test the impact of the preprocessing technique on translation quality---we use the moses toolkit with a phrase-based baseline to extract the qe features for the x l , x u , and testing | 1 |
ccgs are a linguistically-motivated formalism for modeling a wide range of language phenomena---ccg is a linguistic formalism that tightly couples syntax and semantics | 1 |
hatzivassiloglou and mckeown proposed a method for identifying the word polarity of adjectives---hatzivassiloglou and mckeown proposed a supervised algorithm to determine the semantic orientation of adjectives | 1 |
we use sri language model toolkit to train a 5-gram model with modified kneser-ney smoothing on the target-side training corpus---we use srilm to train a 5-gram language model on the target side of our training corpus with modified kneser-ney discounting | 1 |
among previous distant supervision methods , formally proposed a multi-instance multi-label framework in a bayesian framework---to address this shortcoming , developed the relaxed distant supervision assumption for multi-instance learning | 1 |
we implement our approach in the framework of phrase-based statistical machine translation---in this work , we apply a standard phrase-based translation system | 1 |
in this paper , we proposed a new task of japanese noun phrase segmentation---recently , stevens et al used an aggregate version of this metric to evaluate large amounts of topic models | 0 |
takamura et al used the spin model to extract word semantic orientation---we use a joint source and target byte-pair encoding with 10k merge operations | 0 |
given two distributions represented by two scatter matrices Σ1 and Σ2 , a number of measures can be used to compute the distance between Σ1 and Σ2 , such as chernoff and bhattacharyya distances---given two distributions represented by two scatter matrices Σ1 and Σ2 , a number of measures can be used to compute the distance between Σ1 and Σ2 , such as chernoff and bhattacharyya distances | 1 |
script knowledge is defined as the knowledge about everyday activities which is mentioned in narrative documents---script knowledge is a body of knowledge that describes a typical sequence of actions people do in a particular situation ( cite-p-7-1-6 ) | 1 |
we train a 4-gram language model on the xinhua portion of the english gigaword corpus using the srilm toolkit with modified kneser-ney smoothing---we use srilm to train a 5-gram language model on the xinhua portion of the english gigaword corpus 5th edition with modified kneser-ney discounting | 1 |
a trigram model was built on 20 million words of general newswire text , using the srilm toolkit---in the future , this work needs to be further developed to deal with anaphora in other types of texts and the use of connectives in generated text | 0 |
for nb and svm , we used their implementation available in scikit-learn---we used scikit-learn library for all the machine learning models | 1 |
next we consider the context-predicting vectors available as part of the word2vec 6 project---all parameters are initialized using glorot initialization | 0 |
we use srilm for training a trigram language model on the english side of the training corpus---we train trigram language models on the training set using the sri language modeling tookit | 1 |
coreference resolution is the task of identifying all mentions which refer to the same entity in a document---coreference resolution is a key problem in natural language understanding that still escapes reliable solutions | 0 |
we extract features from the social networks and examine their correlation with one another , as well as with metadata such as the novel's setting---in question , we compute various characteristics of the dialogue-based social network and stratify these results by categories such as the novel's setting | 1 |
the process of determining the antecedent of an anaphor is called anaphora resolution---anaphora resolution is the process of determining the referent of anaphors | 1 |
relation extraction ( re ) is the process of generating structured relation knowledge from unstructured natural language texts---relation extraction is a challenging task in natural language processing | 1 |
recently , socher et al worked on phrase level sentiment classification in english using recursive neural tensor networks over a fine grained phrase level annotated corpus---we trained the statistical phrase-based systems using the moses toolkit with mert tuning | 0 |
this score measures the precision of unigrams , bigrams , trigrams and fourgrams with respect to a reference translation with a penalty for too short sentences---coreference resolution is the process of linking multiple mentions that refer to the same entity | 0 |
we build our aspect-based sentiment polarity classification systems using deep neural networks including long short-term memory networks and convolutional neural networks---our 5-gram language model is trained by the sri language modeling toolkit | 0 |
word sense disambiguation ( wsd ) is the nlp task that consists in selecting the correct sense of a polysemous word in a given context---word sense disambiguation ( wsd ) is a natural language processing ( nlp ) task in which the correct meaning ( sense ) of a word in a given context is to be determined | 1 |
we implement an in-domain language model using the sri language modeling toolkit---we build an open-vocabulary language model with kneser-ney smoothing using the srilm toolkit | 1 |
we use 300 dimension word2vec word embeddings for the experiments---we adopt pretrained embeddings for word forms with the provided training data by word2vec | 1 |
dependency parsing is the task of building dependency links between words in a sentence , which has recently gained a wide interest in the natural language processing community---however , dependency parsing , which is a popular choice for japanese , can incorporate only shallow syntactic information , i.e. , pos tags , compared with the richer syntactic phrasal categories in constituency parsing | 1 |
this model finds that some word categories , specifically pronouns used to establish group identity and common ground , are negatively aligned---bannard and callison-burch first exploited bilingual corpora for phrasal paraphrase extraction | 0 |
graph connectivity measures are employed for unsupervised parameter tuning---graph connectivity measures can be successfully employed to perform unsupervised parameter tuning | 1 |
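Each row above packs a sentence pair (separated by `---`) and a binary label into a single table cell. A minimal parsing sketch for this layout; the function name is illustrative and the row format is assumed only from what is shown above:

```python
def parse_rows(lines):
    """Parse rows shaped like "sentence_a---sentence_b | label |"
    into (sentence_a, sentence_b, label) tuples."""
    pairs = []
    for line in lines:
        # Drop surrounding whitespace and the table's outer pipes.
        line = line.strip().strip("|").strip()
        # The label is the cell after the last remaining pipe.
        text, sep, label = line.rpartition("|")
        if not sep or not label.strip().isdigit():
            continue  # skip header and |---|---| separator rows
        # The pair itself is split on the literal "---" marker.
        a, _, b = text.strip().partition("---")
        pairs.append((a.strip(), b.strip(), int(label.strip())))
    return pairs

rows = [
    "we use berkeley pcfg parser to parse sentences---for pcfg parsing , we select the berkeley parser | 1 |",
    "|---|---|",
]
print(parse_rows(rows))  # one (sentence_a, sentence_b, label) tuple
```

Skipping any row whose final cell is not a bare integer keeps the parser robust to the header and separator lines without special-casing them.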