| text (string, 82 to 736 characters; two sentences joined by a literal "---", see the parsing sketch below the table) | label (int64, 0 or 1) |
|---|---|
our 5-gram language model is trained by the sri language modeling toolkit---work makes a first attempt at investigating the evaluation of narrative quality | 0 |
we use the diagonal variant of adagrad with minibatches , which is widely applied in deep learning literature ,---we use stochastic gradient descent with adagrad , l 2 regularization and minibatch training | 1 |
the first contribution in this paper is that a novel language model , the binarized embedding language model ( belm ) is proposed to reduce the memory consumption---a & m university had succeeded in cloning a whitetail deer | 0 |
to construct the local embeddings , we use two neural network architectures introduced by mikolov et al on our corpus , namely , the cbow and the skip-gram architectures shown in figure 1---we use binary crossentropy loss and the adam optimizer for training the nil-detection models | 0 |
as each edge in the confusion network only has a single word , it is possible to produce inappropriate translations such as “ he is like of apples ”---as each edge in the confusion network only has a single word , it is possible to produce inappropriate translations such as “ | 1 |
wu et al proposed relative position and parse template language models to detect chinese errors written by us learner---wu et al proposed a combination of relative position and analytic template language model to detect chinese errors written by american learners | 1 |
as a fundamental task in natural language processing , wsd can benefit applications such as machine translation and information retrieval---however , its application to document compression is novel | 0 |
for ner , we use a bengali news corpus , developed from the archive of a leading bengali newspaper available in the web---we have used a bengali news corpus developed from the webarchives of a widely read bengali newspaper | 1 |
we applied a 5-gram mixture language model with each sub-model trained on one fifth of the monolingual corpus with kneser-ney smoothing using srilm toolkit---for the tree-based system , we applied a 4-gram language model with kneserney smoothing using srilm toolkit trained on the whole monolingual corpus | 1 |
most sentence embedding models typically represent each sentence only using word surface , which makes these models indiscriminative for ubiquitous homonymy and polysemy---most sentence embedding models represent each sentence only using word surface , which makes these models indiscriminative for ubiquitous polysemy ; ( ii ) for short-text , | 1 |
we use sri language model toolkit to train a 5-gram model with modified kneser-ney smoothing on the target-side training corpus---english 4-gram language models with kneser-ney smoothing are trained using kenlm on the target side of the parallel training corpora and on the gigaword corpus | 1 |
in order to measure translation quality , we use bleu 7 and ter scores---we use case-sensitive bleu-4 to measure the quality of translation result | 1 |
lda is a topic model that generates topics based on word frequency from a set of documents---lda is a statistical method that learns a set of latent variables called topics from a training corpus | 1 |
naturally , lexical substitution is a very common first step in textual entailment recognition , which models semantic inference between a pair of texts in a generalized application independent setting ( cite-p-19-1-0 )---additionally , lexical substitution is a more natural task than similarity ratings , it makes it possible to evaluate meaning composition at the level of individual words , and provides a common ground to compare cdsms with dedicated lexical substitution models | 1 |
semantic role labeling ( srl ) is a task of automatically identifying semantic relations between predicate and its related arguments in the sentence---in this paper , we proposed a new approach for analyzing the sentiment of figurative language | 0 |
in the second pass , detailed information pieces are further extracted within the boundary of certain blocks---in the second pass , the detailed information , such as name and address , are identified in certain blocks | 1 |
moreover , by using lattice decoding , we can employ the source-side language model as a decoding feature---we train a 4-gram language model on the xinhua portion of english gigaword corpus by srilm toolkit | 0 |
we use pre-trained word vectors of glove for twitter as our word embedding---we use pre-trained 100 dimensional glove word embeddings | 1 |
in our experiments , we show that our distantly supervised approach matches the state-of-the-art performance while joint inference further improves on it by 3.2 f-score points---we show that our distantly supervised approach matches the state-of-the-art performance while joint inference further improves on it by 3 . 2 f-score points | 1 |
however , finding the best string ( e.g. , during decoding ) is then computationally intractable---unfortunately , finding the best string is then computationally intractable | 1 |
cite-p-25-3-10 explored the use of label propagation ( cite-p-25-3-18 )---cite-p-25-3-10 explored the use of label propagation ( lp ) ( zhu and ghahramani , 2002 ) | 1 |
we apply this novel learning algorithm to pos tagging---as we have seen from the other systems , graph based local measures may be the appropriate answer to reach the level of the best systems on this task , however | 0 |
moreover , we also implemented intra-sentence discourse relations for polarity identification---in our paper , we also implemented intra-sentence discourse relations for polarity identification | 1 |
we used srilm -sri language modeling toolkit to train several character models---in order to acquire syntactic rules , we parse the chinese sentence using the stanford parser with its default chinese grammar | 0 |
for the word-embedding based classifier , we use the glove pre-trained word embeddings---we use different pretrained word embeddings such as glove 1 and fasttext 2 as the initial word embeddings | 1 |
we presented a novel model to automatically mine transliteration pairs---we present a novel model of transliteration mining | 1 |
in this work , we focus on extracting subtasks from a given collection of on-task search queries---framenet is a knowledgebase of frames , describing prototypical situations | 0 |
bleu is a popular metric for evaluating statistical machine translation systems and fits our needs well---bleu is the most commonly used metric for machine translation evaluation | 1 |
this criterion removes some of the approximations employed in seymore and rosenfeld---this method is an entropy-based cutoff method , and can be considered an extension of the work of seymore and rosenfeld | 1 |
in this paper we report results of srl experiments on nominalized predicates in chinese , using a newly completed corpus , the chinese nombank---in this paper , we report srl experiments performed on nominalized predicates in chinese , taking advantage of a newly completed corpus , the chinese nombank | 1 |
tai et al utilize tree-structured longshort memory networks to learn semantic representation for sentiment classification---tai et al and zhu et al extended sequential lstms to tree-structured lstms by adding branching factors | 1 |
we estimated 5-gram language models using the sri toolkit with modified kneser-ney smoothing---we trained kneser-ney discounted 5-gram language models on each available corpus using the srilm toolkit | 1 |
relation extraction is the task of detecting and characterizing semantic relations between entities from free text---relation extraction is the task of finding relational facts in unstructured text and putting them into a structured ( tabularized ) knowledge base | 1 |
for learning language models , we used srilm toolkit---we use the sri language modeling toolkit for language modeling | 1 |
feature weights were set with minimum error rate training on a development set using bleu as the objective function---parameter tuning was carried out using both k-best mira and minimum error rate training on a held-out development set | 1 |
we introduce a transition-based ( cite-p-25-3-15 ) method for joint deep input surface realisation integrating linearization , function word prediction and morphological generation---in this work , we tackle addressee and response selection for multi-party conversation , in which systems are expected to select whom they address | 0 |
in order to preserve the contextual information , we further encode the text segment to its positional representation through a recurrent neural network---in order to preserve contextual information , we encode a sentence to its positional representation via a recurrent neural network | 1 |
the language model was trained using srilm toolkit---system , metaromance , is a fast rule-based parser suited to analyze romance languages with no training data | 0 |
a lattice is a connected directed acyclic graph in which each edge is labeled with a term hypothesis and a likelihood value ( cite-p-19-3-5 ) ; each path through a lattice gives a hypothesis of the sequence of terms spoken in the utterance---a lattice is a directed acyclic graph ( dag ) , a subclass of non-deterministic finite state automata ( nfa ) | 1 |
we also use a 4-gram language model trained using srilm with kneser-ney smoothing---the language model is a 5-gram with interpolation and kneserney smoothing | 1 |
this problem was illustrated using a german lfg grammar constructed as part of the pargram project---the grammar used for this experiment was developed in the pargram project | 1 |
lda is a generative model that learns a set of latent topics for a document collection---lda is a statistical method that learns a set of latent variables called topics from a training corpus | 1 |
brodsky et al suggest a simple definition of a variation set as a sequence of utterances where each successive pair of utterances has a lexical overlap of at least one element , excluding words on a stoplist---we follow berant et al . ’ s proposal , and present a novel entailment-based text exploration system , which we applied to the healthcare domain | 0 |
we evaluated the translation quality using the case-insensitive bleu-4 metric---we evaluated the translation quality using the bleu-4 metric | 1 |
in the results of the closed test in bakeoff 2005 , using crfs for the iob tagging , yielded a very high r-oov in all of the four corpora used , but the r-iv rates were lower---in the results of the closed test in bakeoff 2005 , the work of , using conditional random fields for the iob tagging , yielded very high r-oovs in all of the four corpora used , but the r-iv rates were lower | 1 |
in this run , we use a sentence vector derived from word embeddings obtained from word2vec---we use word2vec as the vector representation of the words in tweets | 1 |
coreference resolution is the task of partitioning a set of mentions ( i.e . person , organization and location ) into entities---coreference resolution is the task of grouping mentions to entities | 1 |
neural networks have played a big role in multiple nlp tasks recently owing to its nonlinear mapping ability and the avoidance of human-engineered features---these include the karma system and the att-meta project | 0 |
we show that distributional features are effective at distinguishing bracket labels , but not determining bracket locations---we have shown that distributional prototype features can allow one to specify a target labeling scheme | 1 |
mellebeek et al introduced a hybrid mt system that utilised online mt engines for msmt---ahsan and kolachina introduce a hybrid mt system that utilised online mt engines for msmt | 1 |
to take advantage of large available text resource from the web , the unknown-word boundary identification is based on the statistical pattern-matching algorithm---on the web , our unknown-word boundary identification approach is based on the statistical string pattern-matching algorithm | 1 |
this paper describes a prototype of an automatic system to assist users in writing specialized texts---against this backdrop , this article aims to present a prototype for an automatic system that provides assistance in writing specialized texts | 1 |
the sentiment analysis is a field of study that investigates feelings present in texts---the fourth and fifth benchmarks are the rg-65 and the mc-30 datasets that contain 65 and 30 pairs of nouns respectively and have been given similarity rankings by humans | 0 |
all above work leads to significant improvement on parsing accuracy---work leads to significant improvement on parsing accuracy | 1 |
these features are the output from the srilm toolkit---a 4-grams language model is trained by the srilm toolkit | 1 |
additionally , we compile the model using the adamax optimizer---we train the model using the adam optimizer with the default hyper parameters | 1 |
we trained the statistical phrase-based systems using the moses toolkit with mert tuning---we used moses as the implementation of the baseline smt systems | 1 |
the occurrences of the senses of a word usually have skewed distribution in text---components of the graph represent the different senses of the target word | 1 |
text categorization is the problem of automatically assigning predefined categories to free text documents---text categorization is the task of automatically assigning predefined categories to documents written in natural languages | 1 |
our results show that although content alone is predictive of a speaker’s influence rank , persuasive argumentation also affects such indices---argumentation features such as premise and support relation appear to be better predictors of a speaker ’ s influence rank compared to basic content | 1 |
experiments show that the proposed method achieves comparable gain in translation quality to the state-of-the-art method but without a manual feature design---the experiments confirmed that the proposed method achieved a translation quality comparable to the state-of-the-art preordering method | 1 |
a 4-gram language model generated by sri language modeling toolkit is used in the cube-pruning process---the srilm toolkit was used to build the trigram mkn smoothed language model | 1 |
our approach to atr is based on the c-and nc-value methods , which extract multi-word terms---we use the scikit-learn machine learning library to implement the entire pipeline | 0 |
in this paper , we provide a neural architecture to model the semantics of emojis , exploring the relation between words and emojis---in this paper , we investigate the relation between words and emojis , studying the novel task of predicting which emojis are evoked by text-based tweet | 1 |
in these approaches , terms in the centroid vector are treated as a bag of words based on the independent assumption---vsm is based on an independence assumption , which assumes that terms in a vector | 1 |
the main assumption behind translation extraction from comparable corpora is that a source word and its translation appear in similar contexts---the approach relies on the assumption that the term and its translation appear in similar contexts | 1 |
in section 2 , we provide a brief background on eliciting rationales in the context of active learning---in this section , we provide a brief background on data annotation with rationales in the context of active learning | 1 |
in addition , we use an english corpus of roughly 227 million words to build a target-side 5-gram language model with srilm in combination with kenlm---for the n-gram lm , we use srilm toolkits to train a 4-gram lm on the xinhua portion of the gigaword corpus | 1 |
we used the scikit-learn implementation of svrs and the skll toolkit---we trained a 5-gram sri language model using the corpus supplied for this purpose by the shared task organizers | 0 |
for the language model , we used sri language modeling toolkit to train a trigram model with modified kneser-ney smoothing on the 31 , 149 english sentences---we applied a 5-gram mixture language model with each sub-model trained on one fifth of the monolingual corpus with kneser-ney smoothing using srilm toolkit | 1 |
to extract phrases we use hmm alignments along with higher quality alignments from a supervised aligner---surface generation is an np-complete problem | 0 |
table 2 gives the results measured by caseinsensitive bleu-4---automatic evaluation results are shown in table 1 , using bleu-4 | 1 |
the translation quality is evaluated by case-insensitive bleu-4---we use the word2vec tool to train monolingual vectors , 6 and the cca-based tool for projecting word vectors | 0 |
twitter is a subject of interest among researchers in behavioral studies investigating how people react to different events , topics , etc. , as well as among users hoping to forge stronger and more meaningful connections with their audience through social media---twitter is a rich resource for information about everyday events – people post their tweets to twitter publicly in real-time as they conduct their activities throughout the day , resulting in a significant amount of mundane information about common events | 1 |
blitzer et al used the structural correspondence learning algorithm with mutual information---blitzer et al apply the structural correspondence learning algorithm to train a crossdomain sentiment classifier | 1 |
other terms used in the literature include implied meanings , implied alternatives and semantically similars---other terms used in the literature include implied meanings , implied alternatives and semantically similar | 1 |
hearst found individual pairs of hypernyms and hyponyms from text using pattern-matching techniques---hearst used a small number of regular expressions over words and part-of-speech tags to find examples of the hypernym relation | 1 |
here we emphasize constraints that are analogous to the universal linguistic constraints from naseem et al---we consider universal pos tag subsequences analogous to the universal syntactic rules of naseem et al | 1 |
collobert et al use a convolutional neural network over the sequence of word embeddings---collobert et al adapted the original cnn proposed by lecun and bengio for modelling natural language sentences | 1 |
our model is a structured conditional random field---in this task , we used conditional random fields | 1 |
many nlp problems have benefited from having large amounts of data---in many nlp problems , researchers have shown that having large amounts of data is beneficial | 1 |
in this paper , we propose a novel cognition based attention model to improve the state-of-the-art neural sentiment analysis model through cognition grounded eye-tracking data---we propose a novel cognition grounded attention model to improve the state-of-the-art neural network based sentiment analysis models | 1 |
we perform named entity tagging using the stanford four-class named entity tagger---we use the stanford named entity recognizer to identify named entities in s and t | 1 |
semantic parsing is the task of converting a sentence into a representation of its meaning , usually in a logical form grounded in the symbols of some fixed ontology or relational database ( cite-p-21-3-3 , cite-p-21-3-4 , cite-p-21-1-11 )---semantic parsing is a fundamental technique of natural language understanding , and has been used in many applications , such as question answering ( cite-p-18-3-13 , cite-p-18-3-4 , cite-p-18-5-16 ) and information extraction ( cite-p-18-3-7 , cite-p-18-1-11 , cite-p-18-3-16 ) | 1 |
sentiment analysis is the study of the subjectivity and polarity ( positive vs. negative ) of a text ( cite-p-7-1-10 )---sentiment analysis is a technique to classify documents based on the polarity of opinion expressed by the author of the document ( cite-p-16-1-13 ) | 1 |
mcdonald and pereira presented a graph-based parser that can generate dependency graphs in which a word may depend on multiple heads---mcdonald and pereira presented a graph-based parser that can generate graphs in which a word may depend on multiple heads , and evaluated it on the danish treebank | 1 |
support vector machines is a state-of-the-art machine learning approach based on decision plans---grouping hypotheses by these similar words enables our algorithm | 0 |
we used the srilm toolkit to generate the scores with no smoothing---we used srilm to build a 4-gram language model with kneser-ney discounting | 1 |
to train our models , we have used the sequential conditional generalized iterative scaling technique---to estimate the optimal α j values , we train our maxent model using the sequential conditional generalized iterative scaling technique | 1 |
coreference resolution is the problem of partitioning a sequence of noun phrases ( or mentions ) , as they occur in a natural language text , into a set of referential entities---although coreference resolution is a subproblem of natural language understanding , coreference resolution evaluation metrics have predominately been discussed in terms of abstract entities and hypothetical system errors | 1 |
we use a 5-gram language model with modified kneser-ney smoothing , trained on the english side of set1 , as our baseline lm---we train a trigram language model with modified kneser-ney smoothing from the training dataset using the srilm toolkit , and use the same language model for all three systems | 1 |
propbank ( cite-p-17-3-4 ) is the corpus of reference for verb-argument relations---propbank ( cite-p-15-1-6 ) is the most widely used corpus for training srl systems , probably because it contains running text from the penn treebank corpus with annotations on all verbal predicates | 1 |
although negation is a very relevant and complex semantic aspect of language , current proposals to annotate meaning either dismiss negation or only treat it in a partial manner---word embeddings were set to size 300 and initialized with pre-trained glove embedding | 0 |
the similarity between two speech samples , which are represented as vectors , was calculated based on the cosine similarity measure---the difference between two speech samples , which are represented as vectors , is calculated based on the cosine similarity measure | 1 |
our multi-modal architecture builds on the continuous log-linear skipgram language model proposed by mikolov et al---we rely on distributed representation based on the neural network skip-gram model of mikolov et al | 1 |
for preprocessing the corpus , we use the stanford pos-tagger and parser included in the dkpro framework---we process the book text using freely available components of the dkpro framework | 1 |
abstract meaning representation is a semantic formalism where the meaning of a sentence is encoded as a rooted , directed graph---abstract meaning representation is a semantic representation that expresses the logical meaning of english sentences with rooted , directed , acylic graphs | 1 |
drezde et al applied structural correspondence learning to the task of domain adaptation for sentiment classification of product reviews---blitzer et al employ the structural correspondence learning algorithm for sentiment domain adaptation | 1 |
relation extraction is the task of finding relations between entities in text , which is useful for several tasks such as information extraction , summarization , and question answering ( cite-p-14-3-7 )---relation extraction is the problem of populating a target relation ( representing an entity-level relationship or attribute ) with facts extracted from natural-language text | 1 |
sentiment analysis is a natural language processing task whose aim is to classify documents according to the opinion ( polarity ) they express on a given subject ( cite-p-13-8-14 )---sentiment analysis is a ‘ suitcase ’ research problem that requires tackling many nlp subtasks , e.g. , aspect extraction ( cite-p-26-3-15 ) , named entity recognition ( cite-p-26-3-6 ) , concept extraction ( cite-p-26-3-20 ) , sarcasm detection ( cite-p-26-3-16 ) , personality recognition ( cite-p-26-3-7 ) , and more | 1 |
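Each row packs a sentence pair into the single `text` column, with the two sentences joined by a literal `---` separator, and a binary `label` that, judging from the examples above, marks whether the two sentences report the same method or finding (1) or are unrelated (0). The sketch below shows one way such a row could be parsed; the separator convention and the label semantics are inferred from the rows shown here rather than from any official documentation, and the helper names are purely illustrative.

```python
from typing import NamedTuple


class SentencePair(NamedTuple):
    sentence_a: str
    sentence_b: str
    label: int


def parse_row(text: str, label: int) -> SentencePair:
    """Split the 'text' field on the literal '---' separator into its two sentences."""
    sentence_a, _, sentence_b = text.partition("---")
    return SentencePair(sentence_a.strip(), sentence_b.strip(), label)


# Example row taken verbatim from the table above (label 1: both sentences
# describe using the SRILM toolkit for language modeling).
example_text = (
    "for learning language models , we used srilm toolkit---"
    "we use the sri language modeling toolkit for language modeling"
)
pair = parse_row(example_text, label=1)
print(pair.sentence_a)  # for learning language models , we used srilm toolkit
print(pair.sentence_b)  # we use the sri language modeling toolkit for language modeling
print(pair.label)       # 1
```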