| text (string, length 82–736) | label (int64: 0 or 1) |
|---|---|
in this paper we proposed a new feature selection algorithm that selects features in kernel spaces in a coarse to fine order---coreference resolution is the problem of partitioning a sequence of noun phrases ( or mentions ) , as they occur in a natural language text , into a set of referential entities | 0 |
the language model is a trigram model with modified kneser-ney discounting and interpolation---similarity between sentences is a central concept of text analysis , however previous studies about semantic similarities have mainly focused either on single word similarity or complete document similarity | 0 |
in this paper , we propose a novel hierarchically aligned cross-modal attentive network ( haca ) to learn and align both global and local contexts among different modalities of the video---in this paper , we propose a novel hierarchically aligned cross-modal attention ( haca ) framework to learn and selectively fuse bo... | 1 |
for part of speech tagging and dependency parsing of the text , we used the toolset from stanford corenlp---we use stanford corenlp for preprocessing and a supervised learning approach for classification | 1 |
the hybrid tree gives a natural joint tree representation of a natural language sentence and its meaning representation---on top of the hybrid tree representation , we are able to explicitly model phrase-level dependencies amongst neighboring natural language phrases and meaning representation | 1 |
we evaluated the intermediate outputs using bleu against human references as in table 3---we evaluated system output with multireference bleu 4 , using sentences from the extended gold-standard as references | 1 |
choi et al examine opinion holder extraction using crfs with various manually defined linguistic features and patterns automatically learnt by the autoslog system---choi et al explore oh extraction using crfs with several manually defined linguistic features and automatically learnt surface patterns | 1 |
we use the sri language model toolkit to train a 5-gram model with modified kneser-ney smoothing on the target-side training corpus---we use srilm to train a 5-gram language model on the target side of our training corpus with modified kneser-ney discounting | 1 |
text summarization is the task of automatically condensing a piece of text to a shorter version while maintaining the important points---table 4 shows the bleu scores of the output descriptions | 0 |
while general-purpose embeddings are widely used in the nlp community , task-specific embeddings are known to lead to better results for various tasks , including sentiment analysis---we evaluate the performance of different translation models using both bleu and ter metrics | 0 |
barbosa and feng make use of three different sentiment detection websites to label twitter data , while davidov et al , kouloumpis et al and pak and paroubek use twitter hashtags and emoticons as labels---barbosa and feng make use of three different sentiment detection websites to label messages and use mostly non-lexi... | 1 |
a small improvement was obtained when this feature was used in conjunction with syntactic features in supervised classification---considerable additional improvement can be obtained by using semantic features in automatic classification | 1 |
for example hirschberg and litman found that intonational phrasing and pitch accent play a role in disambiguating cue phrases , and hence in helping determine discourse structure---in fact , hirschberg and litman found that discourse markers tend to occur at the beginning of intonational phrases , while sentential usag... | 1 |
the language models are 4-grams with modified kneser-ney smoothing which have been trained with the srilm toolkit---which is composed of three cascaded components : the tagging of sr phrase , the identification of semantic-role-phrase and semantic dependency parsing | 0 |
for example , bahdanau et al have proposed an attentive neural approach to machine translation based on gated recurrent units---bahdanau et al propose a neural translation model that learns vector representations for individual words as well as word sequences | 1 |
we use sri language model toolkit to train a 5-gram model with modified kneser-ney smoothing on the target-side training corpus---we use srilm train a 5-gram language model on the xinhua portion of the english gigaword corpus 5th edition with modified kneser-ney discounting | 1 |
lee and seneff proposed an approach based on pattern matching on trees combined with word n-gram counts for correcting agreement misuse and some types of verb form errors---besides , lee and seneff propose a method to correct verb form errors through combining the features of parse trees and n-gram counts | 1 |
the problem of measuring relational similarity is to determine the degree of correspondence between two word pairs---between a pair of words is a natural approach to the task of measuring relational similarity | 1 |
goldberg and zhu proposed a semisupervised learning approach to the rating inference problem in scenarios where labeled training data is scarce---word sense disambiguation is the task of determining the particular sense of a word from a given set of pre-defined senses | 0 |
the parsing algorithm is extended to handle translation candidates and to incorporate language model scores via cube pruning---the decoder uses cky-style parsing with cube pruning to integrate the language model | 1 |
further analysis of the most informative n-gram contexts for each model shows that in comparison with the v isual pathway , the language models react more strongly to abstract contexts that represent syntactic constructions---we use the ctb dataset from the pos tagging task of the fourth international chinese language ... | 0 |
we used weka to experiment with several classifiers---we used weka for all our classification experiments | 1 |
to solve this dynamic state tracking problem , we propose a sequential labeling approach using linear-chain conditional random fields---for our experiments , we use moses as the baseline system which can support lattice decoding | 0 |
we also analyze several cases in wsd and wrl , which confirms our models are capable of selecting appropriate word senses with the favor of sememe attention---in this paper is to provide nlp researchers with a survey of the major milestones in supervised coreference research | 0 |
for training the trigger-based lexicon model , we apply the expectation-maximization algorithm---in this paper , we present an unsupervised combination approach to the aw wsd problem that relies on wn | 0 |
in this paper , we proposed an approach to represent rare words by sparse linear combinations of common ones---in addition , using the small bilingual corpus in l1 and l2 , we train another word alignment | 0 |
we present a novel approach to fsd that operates in math-w-2-1-0-91 per tweet---experiments were run with a variety of machine learning algorithms using the scikit-learn toolkit | 0 |
in order to build the englishfrench parallel corpus with discourse annotations , we used the europarl corpus---for all data sets , we trained a 5-gram language model using the sri language modeling toolkit | 0 |
part-of-speech tagging is the act of assigning each word in a sentence a tag that describes how that word is used in the sentence---part-of-speech tagging is the problem of determining the syntactic part of speech of an occurrence of a word in context | 1 |
here , the authors employ a hybrid approach , combining supervised learning with the knowledge on sentiment-bearing words , which they extract from the dal sentiment dictionary---here , the authors employ a hybrid approach , combining super-vised learning with the knowledge on sentimentbearing words , which they extrac... | 1 |
latent dirichlet allocation is a generative model in which a document is modeled as a finite mixture of topics , where each topic is represented as a multinomial distribution of words---the latent dirichlet allocation is the most basic topic model , which generates each word in a document based on a unigram word distri... | 1 |
to implement svm algorithm , we have used the publicly available python based scikit-learn package---we use scikit learn python machine learning library for implementing these models | 1 |
semeval is a yearly event in which international teams of researchers work on tasks in a competition format where they tackle open research questions in the field of semantic analysis---semeval is the international workshop on semantic evaluation , formerly senseval | 1 |
this combinatorial optimisation problem can be solved in polynomial time through the hungarian algorithm---this combinatorial optimization can be solved in polynomial time by modifying the hungarian assignment algorithm | 1 |
we used kenlm with srilm to train a 5-gram language model based on all available target language training data---statistical machine translation , especially the phrase-based model , has developed very fast in the last decade | 0 |
abeillé and abeillé and schabes identified the linguistic and computational attractiveness of lexicalized grammars for modeling non-compositional constructions in french well before dop---the linguistic and computational attractiveness of lexicalized grammars for modeling idiosyncratic constructions in french was ident... | 1 |
a 4-grams language model is trained by the srilm toolkit---all language models were trained using the srilm toolkit | 1 |
collobert and weston propose a unified deep convolutional neural network for different tasks by using a set of taskindependent word embeddings together with a set of task-specific word embeddings---collobert et al presented a model that learns word embedding by jointly performing multi-task learning using a deep convol... | 1 |
we obtained distributed word representations using word2vec 4 with skip-gram---we use skipgram model to train the embeddings on review texts for k-means clustering | 1 |
in this paper , we examined applying the latent-variable berkeley parser to the task of topological field parsing of german , which aims to identify the high-level surface structure of sentences---in this paper , we examine topological field parsing , a shallow form of parsing which identifies the major sections of a s... | 1 |
such information is also vital during language acquisition , when much of the linguistic content perceived by the child refers to their immediate visual environment ( cite-p-11-3-0 )---that occurs in a visual environment , and is crucial for language acquisition , when much of the linguistic content refers to the visua... | 1 |
the data collection methods used to compile the dataset used in offenseval is described in zampieri et al---our translation model is implemented as an n-gram model of operations using srilm-toolkit with kneser-ney smoothing | 0 |
dependency parsing consists of finding the structure of a sentence as expressed by a set of directed links ( dependencies ) between words---dependency parsing is a very important nlp task and has wide usage in different tasks such as question answering , semantic parsing , information extraction and machine translation | 1 |
täckström et al also used parallel data to induce cross-lingual word clusters which added as features for their delexicalized parser---täckström et al used unlabeled parallel sentences to induce crosslingual word clusterings and used these word clusterings as interlingual features | 1 |
for training the translation model and for decoding we used the moses toolkit---the weights λ_m in the log-linear model were trained using minimum error rate training with the news 2009 development set | 0 |
table 2 shows the translation quality measured in terms of bleu metric with the original and universal tagset---table 2 shows results in lowercase bleu for both the baseline and the improved baseline systems on development and held-out evaluation sets | 1 |
we use a set of 318 english function words from the scikit-learn package---we used standard classifiers available in scikit-learn package | 1 |
we implement an in-domain language model using the sri language modeling toolkit---the trigram language model is implemented in the srilm toolkit | 1 |
7 the classification-based approach is consistently better in translating words with multiple translations as evident from higher all-mode scores in tab---7 the classification-based approach is consistently better in translating words with multiple translations | 1 |
increasing the context length at the input layer thus only causes a linear growth in complexity in the worst case---with nnlm however , the increase in context length at the input layer results in only a linear growth in complexity in the worst case | 1 |
since text categorization is a task based on predefined categories , we know the categories for classifying documents---conditional random fields are a convenient formalism for sequence labeling tasks common in nlp | 0 |
it is desirable that conversational systems can learn new words automatically during human machine conversation---conversational systems must be able to learn new words automatically during human machine conversation | 1 |
we measure the overall translation quality using 4-gram bleu , which is computed on tokenized and lowercased data for all systems---we measured the overall translation quality with the help of 4-gram bleu , which was computed on tokenized and lowercased data for both systems | 1 |
we propose a novel abstractive summarization system for product reviews by taking advantage of their discourse structure---we propose a novel abstractive summarization framework that generates an aspect-based abstract from multiple reviews of a product | 1 |
according to the conceptual metaphor theory , metaphoricity is a property of concepts in a particular context of use , not of specific words---according to the conceptual metaphor theory , metaphors are not merely a linguistic , but also a cognitive phenomenon | 1 |
as a step towards better metrics , we also propose gleu , a simple variant of bleu , modified to account for both the source and the reference , and show that it hews much more closely to human judgments---as a step in the direction of better metrics , we develop the generalized language evaluation understanding metric... | 1 |
one of the most important resources for discourse connectives in english is the penn discourse treebank---another corpus has been annotated for discourse phenomena in english , the penn discourse treebank | 1 |
we use pre-trained word embeddings of moen et al , which are publicly available---phoneme connectivity table supports the grammaticality of the adjacency of two phonetic morphemes | 0 |
it outperforms existing state-of-the-art techniques dramatically---doc performs dramatically better than the state-of-the-art methods | 1 |
finin et al use amazons mechanical turk service 3 and crowdflower 4 to annotate named entities in tweets and train a crf model to evaluate the effectiveness of human labeling---finin et al use amazons mechanical turk service 2 and crowdflower 3 to annotate named entities in tweets and train a crf model to evaluate the ... | 1 |
to address this problem , long short-term memory network was proposed in where the architecture of a standard rnn was modified to avoid vanishing or exploding gradients---the lm is implemented as a five-gram model using the srilm-toolkit , with add-1 smoothing for unigrams and kneser-ney smoothing for higher n-grams | 0 |
huang et al presented an rnn model that uses document-level context information to construct more accurate word representations---huang et al , 2012 ) used the multi-prototype models to learn the vector for different senses of a word | 1 |
we use the logistic regression implementation of liblinear wrapped by the scikit-learn library---we experiment with linear kernel svm classifiers using liblinear | 1 |
machine translation models typically require large , sentence-aligned bilingual texts to learn good translation models---phrase-based translation models are widely used in statistical machine translation | 1 |
we also report state-of-the-art results on the multi30k data set---semantic role labeling ( srl ) is the task of labeling the predicate-argument structures of sentences with semantic frames and their roles ( cite-p-18-1-2 , cite-p-18-1-19 ) | 0 |
we used a 5-gram language model with modified kneser-ney smoothing implemented using the srilm toolkit---twitter is a social platform which contains rich textual content | 0 |
we rely on the stanford parser , a treebank-trained statistical parser , for tokenization , part-of-speech tagging , and phrase-structure parsing---coherence is a common 'currency ' with which to measure the benefit of applying a schema | 0 |
to evaluate the reliability of the annotations , we used weighted kappa at the word level , excluding stop words---we used weighted kappa with linear weights to measure the interrater agreement | 1 |
lin and he propose a method based on lda that explicitly deals with the interaction of topics and sentiments in text---lin and he proposed a joint sentimenttopic model for unsupervised joint sentiment topic detection | 1 |
pos are normally considered useful information in shallow and full parsing---use of pos tags in parsing is that they provide useful generalizations | 1 |
the weights of the different feature functions were optimised by means of minimum error rate training on the 2013 wmt test set---grammar induction is a central problem in computational linguistics , the aim of which is to induce linguistic structures from an unannotated text corpus | 0 |
for two question sets , a context for the target word is provided , and we examine a number of word similarity measures that exploit this context---we used 5-gram models , estimated using the sri language modeling toolkit with modified kneser-ney smoothing | 0 |
major discourse annotated resources in english include the rst treebank and the penn discourse treebank---one of the most important resources for discourse connectives in english is the penn discourse treebank | 1 |
coreference resolution is the task of determining when two textual mentions name the same individual---as wikidata did not exist at that time , the authors relied on the structured infoboxes included in some wikipedia articles | 0 |
finally , we evaluated our overall method against a state of the art sentence paraphraser , which generates candidates by using several commercial machine translation systems and pivot languages---when paraphrasing rules apply to the input sentences , our paraphrasing method is competitive to a state of the art paraphr... | 1 |
additionally , syntax-based approaches have been proposed which concern parsing and disfluency detection together---recently , syntax-based models such as transition-based parser have been used for detecting disfluencies | 1 |
coreference resolution is the task of determining when two textual mentions name the same individual---a domain is broadly defined as a set of documents demonstrating a similar distribution of words and linguistic patterns | 0 |
the representations are based on systems such as spl and drt---this software is based on the discourse representation theory by kamp and reyle | 1 |
coreference resolution is the task of grouping mentions to entities---coreference resolution is the task of determining which mentions in a text refer to the same entity | 1 |
tuning is performed to maximize bleu score using minimum error rate training---the model weights are automatically tuned using minimum error rate training | 1 |
the default phrasal search algorithm is cube pruning---the default is the phrase-based variant of cube pruning | 1 |
in all cases , we used the implementations from the scikitlearn machine learning library---word sense disambiguation ( wsd ) is a problem long recognised in computational linguistics ( yngve 1955 ) and there has been a recent resurgence of interest , including a special issue of this journal devoted to the topic ( cite... | 0 |
the smt systems are tuned on the dev development set with minimum error rate training using bleu accuracy measure as the optimization criterion---we elaborate the syntax-driven bracketing model , including feature generation and the integration of the sdb model into phrase-based smt | 0 |
an extensive set of experiments conducted on trec-kba-2013 dataset has demonstrated the effectiveness of the proposed mixture model---an extensive set of experiments has been conducted on trec-kba-2013 dataset , and the results demonstrate that this model can yield a significant performance gain in recommendation quali... | 1 |
we present a novel architecture that considers other relations in the sentence as a context for predicting the label of the target relation---recently , it has been approached with neural sequence-to-sequence methods , inspired by the advances in neural machine translation | 0 |
täckström et al also used parallel data to induce cross-lingual word clusters which added as features for their delexicalized parser---the maximum likelihood estimates are smoothed using good-turing discounting | 0 |
we train a 4-gram language model on the xinhua portion of the english gigaword corpus using the srilm toolkits with modified kneser-ney smoothing---word sense disambiguation ( wsd ) is a natural language processing ( nlp ) task in which the correct meaning ( sense ) of a word in a given context is to be determined | 0 |
our a ∗ algorithm is 5 times faster than cky parsing , with no loss in accuracy---in this study , we propose novel syntactic measures which are relatively robust against speech recognition errors | 0 |
davidov and rappoport describe an algorithm for unsupervised discovery of word categories and evaluate it on russian and english corpora---davidov and rappoport developed a framework which discovers concepts based on high frequency words and symmetry-based pattern graph properties | 1 |
we used moses with the default configuration for phrase-based translation---building on this frame-semantic model , the berkeley framenet project has been developing a frame-semantic lexicon for the core vocabulary of english since 1997 | 0 |
we used the moses toolkit for performing statistical machine translation---by making the parser slightly more robust , the accuracy of the system rises to 93 . 5 % , and by adding one single word to the lexicon , the accuracy is boosted to 98 . 0 % | 0 |
goldwater et al used hierarchical dirichlet processes to induce contextual word models---goldwater et al explored a bigram model built upon a dirichlet process to discover contextual dependencies | 1 |
conditional random fields are undirected graphical models trained to maximize the conditional probability of the desired outputs given the corresponding inputs---sentiment analysis is a natural language processing ( nlp ) task ( cite-p-10-3-0 ) which aims at classifying documents according to the opinion expressed abou... | 0 |
the translation systems were evaluated by bleu score---systems using content-based filtering use the content information of recommendation items | 0 |
chen et al used features derived from short dependency pairs based on large-scale auto-parsed data to enhance dependency parsing---lin et al has explored the 2-level production rules for discourse analysis | 0 |
our baseline russian-english system is a hierarchical phrase-based translation model as implemented in cdec---we present a coreference resolver called babar that uses contextual role knowledge to evaluate possible antecedents | 0 |
they have been used for many tasks , including semantic role labeling , named entity recognition , parsing , and for the facebook qa tasks sukhbaatar et al , 2015 )---it has been shown that the continuous space representations improve performance in a variety of nlp tasks , such as pos tagging , semantic role labeling ... | 1 |
the log-linear model is then tuned as usual with minimum error rate training on a separate development set coming from the same domain---in this paper , we propose a novel semi-supervised approach to addressing the problem by transforming the base features into high-level features ( i . e . meta features ) | 0 |
the f1 of 0.0 on this dataset is not a fault of ubl , but rather it shows the difficulty of the task---on this dataset is not a fault of ubl , but rather it shows the difficulty of the task | 1 |
the framenet database provides an inventory of semantic frames together with a list of lexical units associated with these frames---framenet is a comprehensive lexical database that lists descriptions of words in the frame-semantic paradigm | 1 |
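Each row above pairs two citation sentences separated by `---`, followed by a binary label (1 if the two sentences describe the same cited work or method, 0 otherwise). A minimal sketch of parsing one such row, assuming the row format shown in the table (`sentence_a---sentence_b | label |`); the function name and column handling are illustrative, not part of any published loader:

```python
def parse_row(row: str):
    """Split one table row into (sentence_a, sentence_b, label).

    Assumes the row format used in the table above:
        "sentence_a---sentence_b | <label> |"
    """
    # Drop surrounding whitespace and the trailing pipe of the label cell.
    row = row.strip().rstrip("|").strip()          # "sentence_a---sentence_b | <label>"
    # The label is everything after the last remaining pipe.
    text, _, label = row.rpartition("|")
    # The two sentences are separated by the first "---" delimiter.
    a, _, b = text.strip().partition("---")
    return a.strip(), b.strip(), int(label)
```

For example, `parse_row("we used weka to experiment with several classifiers---we used weka for all our classification experiments | 1 |")` yields the two sentences and the label `1`. Using `rpartition` for the label and `partition` for the sentence split keeps the parse robust even if a sentence itself happens to contain a `-` run shorter than the delimiter.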