| text (string, length 82–736) | label (int64, 0 or 1) |
|---|---|
we have shown that state of the art performance can be achieved by using this approach---we show that this method achieves state of the art performance | 1 |
the language models were built using srilm toolkits---srilm toolkit is used to build these language models | 1 |
yet the choice of similarity metric interacts with the choice of clustering method---the choice of similarity metric interacts with both the choice of clustering method | 1 |
nlp-driven analysis of clinical language data has been used to assess language development , language impairment and cognitive status---language models have been used previously for language impairment on children and language dominance prediction | 1 |
we used the srilm toolkit to create 5-gram language models with interpolated modified kneser-ney discounting---we trained kneser-ney discounted 5-gram language models on each available corpus using the srilm toolkit | 1 |
sentence compression is a task of creating a short grammatical sentence by removing extraneous words or phrases from an original sentence while preserving its meaning---sentence compression is a standard nlp task where the goal is to generate a shorter paraphrase of a sentence | 1 |
we use stanford corenlp for chinese word segmentation and pos tagging---for vpe detection , we improve upon the accuracy of the state-of-the-art system | 0 |
the probability model defined by a probabilistic grammar is said to be consistent if the probabilities assigned to all the strings in the language sum to one---the probability model defined by a probabilistic grammar is said to be consistent if the probabilities assigned to all the strings in the language sum to 1 | 1 |
we use a cws-oriented model modified from the skip-gram model to derive word embeddings---we use the skip-gram model , trained to predict context tags for each word | 1 |
we used the scikit-learn implementation of a logistic regression model using the default parameters---sentiment analysis is a multi-faceted problem | 0 |
topic models , such as plsa and lda , have shown great success in discovering latent topics in text collections---topic models such as latent dirichlet allocation have emerged as a powerful tool to analyze document collections in an unsupervised fashion | 1 |
the system mostly follows the standard encoder-decoder architecture using rnn layers and attention mechanism---reinforcement learning is an attractive framework for optimising nlg systems , where situations are mapped to actions by maximising a long term reward signal | 0 |
kitano and van ess-dykema extend the plan recognition model of litman and allen to consider mixed-initiative dialogue---kitano and van ess-dykema extend the plan recognition model of litman and allen to consider variable initiative dialog | 1 |
we apply a skip-gram model of windowsize 5 and filter words that occur less than 15 times---we apply a skipgram model of window size 5 and filter words that occur less than 5 times | 1 |
however , polysemy is a fundamental problem for distributional models---while polysemy is the immediate cause of the first problem , it indirectly contributes to the second problem as well by preventing the effective use of thesauri | 1 |
twitter is a popular microblogging service , which , among other things , is used for knowledge sharing among friends and peers---twitter is a social platform which contains rich textual content | 1 |
for phrase-based smt translation , we used the moses decoder and its support training scripts---we used moses , a state-of-the-art phrase-based smt model , in decoding | 1 |
in most cases our patterns correspond to linguistic phenomena---in most cases directly correspond to specific linguistic phenomena | 1 |
we then process the whole chinese dataset using the stanford corenlp toolkit to get the pos and named entity tags---for these data , we preprocess the text including using stanford corenlp to split the review documents into sentences and tokenizing all words | 1 |
we use different pretrained word embeddings such as glove 1 and fasttext 2 as the initial word embeddings---we apply a state-of-the-art language-independent entity linker to link each transliteration hypothesis to an english kb | 0 |
as mentioned in , the metrics are desirable but flawed when a corrupted triple exists in the kb---as mentioned in , the metrics are desirable but flawed when a corrupted triple exists in the kg | 1 |
then in section 5 we discuss related work , followed by the conclusion and future work in section 6---the promt smt system is based on the moses open-source toolkit | 0 |
for our experiments reported here , we obtained word vectors using the word2vec tool and the text8 corpus---we used the pre-trained word embeddings that are learned using the word2vec toolkit on google news dataset | 1 |
as monolingual baselines , we use the skip-gram and cbow methods of mikolov et al as implemented in the gensim package---we use the perplexity computation method of mikolov et al suitable for skip-gram models | 1 |
in section 3 , we review related work in data-driven dialog modeling---in this paper , we discuss methods for automatically creating models of dialog structure using dialog | 1 |
the bleu score is based on the geometric mean of n-gram precision---all back-off lms were built using modified kneserney smoothing and the sri lm-toolkit | 0 |
dyer et al introduce stack-lstms , which have the ability to recover earlier hidden states---semantic parsing is the task of converting natural language utterances into formal representations of their meaning | 0 |
yu and hatzivassiloglou use semanticallyoriented words for identification of polarity at the sentence level---yu and hatzivassiloglou used semantic orientation of words to identify polarity at sentence level | 1 |
in this paper , we propose a participant-based event summarization approach that “ zooms-in ” the twitter event streams to the participant level , detects the important sub-events associated with each participant using a novel mixture model that combines the “ burstiness ” and “ cohesiveness ” properties of the event tweets , and generates the event summaries progressively---in this work , we propose a novel participant-based event summarization approach , which dynamically identifies the participants from data streams , then “ zooms-in ” the event stream to participant level , detects the important sub-events related to each participant using a novel time-content mixture model , and generates the event summary progressively | 1 |
we used the srilm toolkit to train a 4-gram language model on the english side of the training corpus---we trained kneser-ney discounted 5-gram language models on each available corpus using the srilm toolkit | 1 |
we report the mt performance using the original bleu metric---we adopt two standard metrics rouge and bleu for evaluation | 1 |
in statistical machine translation , word alignment plays an essential role in obtaining phrase tables or syntactic transformation rules---word alignment is an important component of statistical machine translation systems such as phrase-based smt and hierarchical phrase-based smt | 1 |
this paper presented various neural network architectures for dialogue topic tracking---this paper presents our work also on dialogue topic tracking | 1 |
our best system officially ranked number 11 among 90 participating system reporting a pearson mean correlation score of 0.5502---and they ranked number 11 , 15 and 19 among the 90 participating systems according to the official mean pearson correlation metric for the task | 1 |
in parsing , adjacent spans are combined using a small number of binary combinatory rules like forward application or composition---in parsing , adjacent spans are combined using a small number of binary combinatory rules like forward application or composition on the spanning categories | 1 |
in each plot , the green solid line indicates the best accuracy found so far , while the dotted orange line shows accuracy at each trial---in each plot , the green solid line indicates the best accuracy found so far , while the dotted orange line shows accuracy | 1 |
nenkova et al noted that the entrainment score between dialogue partners is higher than the entrainment score between non-partners in dialogue---nenkova et al found that high frequency word entrainment in dialogue is correlated with engagement and task success | 1 |
the dependency parser we use is an implementation of a transition-based dependency parser---the core dependency parser we use is an implementation of a transition-based dependency parser using an arc-eager transition strategy | 1 |
sentence compression is the task of shortening a sentence while preserving its important information and grammaticality---sentence compression is a complex paraphrasing task with information loss involving substitution , deletion , insertion , and reordering operations | 1 |
we presented an approach to automatic authorship attribution of real-world texts---we present an approach to automatic authorship attribution dealing with real-world ( or unrestricted ) text | 1 |
traditional semantic space models represent meaning on the basis of word co-occurrence statistics in large text corpora---traditional corpus-based models of semantic representation base their analysis on textual input alone | 1 |
as embedding vectors , we used the publicly available representations obtained from the word2vec cbow model---discourse structure is the hidden link between surface features and document-level properties , such as sentiment polarity | 0 |
we also use a 4-gram language model trained using srilm with kneser-ney smoothing---for both languages , we used the srilm toolkit to train a 5-gram language model using all monolingual data provided | 1 |
relation extraction is the task of finding semantic relations between entities from text---relation extraction is the task of finding relational facts in unstructured text and putting them into a structured ( tabularized ) knowledge base | 1 |
word sense disambiguation ( wsd ) is a key enabling technology that automatically chooses the intended sense of a word in context---many words have multiple meanings , and the process of identifying the correct meaning , or sense of a word in context , is known as word sense disambiguation ( wsd ) | 1 |
in this paper , a novel approach is introduced that fully utilizes semantic similarity between dialogue utterances and the ontology terms , allowing the information to be shared across domains---by exploiting semantic similarities between dialogue utterances and ontology terms , the model alleviates the need for ontology-dependent parameters | 1 |
we thus establish laso as a special case within our framework---and is thus a special case in our framework | 1 |
for all experiments , we used a 4-gram language model with modified kneser-ney smoothing which was trained with the srilm toolkit---for language model , we used sri language modeling toolkit to train a 4-gram model with modified kneser-ney smoothing | 1 |
the experiments showed that an accurate question classifier plays an essential role in question answering system---the experiments show that the question classifier plays an important role in determining the performance of a question answering system | 1 |
this work has tried to shed light on the contribution of semantic information to dependency parsing---work presents a set of experiments to investigate the use of lexical semantic information in dependency parsing | 1 |
we tackle this problem by testing three different variants of a semi-supervised method previously proposed for orientation detection---we tackle this problem by testing three different variants of the semi-supervised method for orientation detection | 1 |
we extracted these relations for a set of domain relevant verbs from parses of the corpus obtained with the stanford parser---we obtained both phrase structures and dependency relations for every sentence using the stanford parser | 1 |
we leverage sparse codes of words to compress neural lms---at last , a global optimization framework is proposed to generate the related work section | 0 |
we use a pbsmt model where the language model is a 5-gram lm with modified kneser-ney smoothing---the language model is a 5-gram with interpolation and kneser-ney smoothing | 1 |
we used trigram language models with interpolated kneser-kney discounting trained using the sri language modeling toolkit---we train a 4-gram language model on the xinhua portion of the gigaword corpus using the sri language toolkit with modified kneser-ney smoothing | 1 |
sri language modeling toolkit was employed to train 5-gram english and japanese lms on the training set---the system was trained using moses with default settings , using a 5-gram language model created from the english side of the training corpus using srilm | 1 |
we trained a 3-gram language model on all the correct-side sentences using kenlm---in this paper , we present a novel approach which performs high quality filtering automatically , through modelling not just words | 0 |
for content features , a central question is how speech content can be represented in appropriate means to facilitate automated speech scoring---ontology-based representation may facilitate obtaining better content features for speech scoring | 1 |
wikipedia is a free multilingual online encyclopedia and a rapidly growing resource---after presenting a formal definition of the acm , we described in detail | 0 |
we used 5-gram models , estimated using the sri language modeling toolkit with modified kneser-ney smoothing---for all data sets , we trained a 5-gram language model using the sri language modeling toolkit | 1 |
zeng et al propose the use of position feature for improving the performance of cnn in relation classification---zeng et al , 2014 ) exploit position feature as a substitute for traditional structure features in relation classification | 1 |
we pre-train the word embedding via word2vec on the whole dataset---we pre-train the 200-dimensional word embeddings on each dataset in with skipgram | 0 |
we use srilm for training a trigram language model on the english side of the training corpus---for language model scoring , we use the srilm toolkit training a 5-gram language model for english | 1 |
we use moses , an open source toolkit for training different systems---we use the popular moses toolkit to build the smt system | 1 |
we use the word2vec framework in the gensim implementation to generate the embedding spaces---we use the stanford parser to generate the grammar structure of review sentences for extracting syntactic d-features | 0 |
the smt system was tuned on the development set newstest10 with minimum error rate training using the bleu error rate measure as the optimization criterion---all model weights were trained on development sets via minimum-error rate training with 200 unique n-best lists and optimizing toward bleu | 1 |
we use the mert algorithm for tuning and bleu as our evaluation metric---we presented our study on research proceedings of approximately two decades from the leading nlp conference | 0 |
we use the glove word vector representations of dimension 300---in this task , we use the 300-dimensional 840b glove word embeddings | 1 |
second , in this model the detection of emerging genres can be done indirectly through the analysis of an unexpected combination of text types and/or genres---with this model , emerging genres can be hypothesized through the analysis of unexpected combinations of text types and / or other traits | 1 |
stemming is the process of normalizing word variations by removing prefixes and suffixes---stemming is a heuristic approach to reducing form-related sparsity issues | 1 |
zeng et al use a convolutional deep neural network to extract lexical features learned from word embeddings and then fed into a softmax classifier to predict the relationship between words---zeng et al use convolutional neural network for learning sentence-level features of contexts and obtain good performance even without using syntactic features | 1 |
we use case-sensitive bleu to assess translation quality---moreover , we propose a simple yet effective way to utilize phrase-level information that is expensive to use | 0 |
we used nwjc2vec 10 , which is a 200 dimensional word2vec model---we used word2vec , a powerful continuous bag-of-words model to train word similarity | 1 |
we also use a 4-gram language model trained using srilm with kneser-ney smoothing---we use a fourgram language model with modified kneser-ney smoothing as implemented in the srilm toolkit | 1 |
in this paper , a novel approach for modeling the semantic relevance for qa pairs in the social media sites is proposed---in this paper , we have proposed a deep belief network based approach to model the semantic relevance for the question answering pairs | 1 |
we used the scikit-learn implementation of svrs and the skll toolkit---relation extraction is a fundamental task in information extraction | 0 |
we use the glove vector representations to compute cosine similarity between two words---in this work , we use tf-idf and glove to represent sentences respectively | 1 |
qian et al proposed a bilingual active learning paradigm for chinese and english relation classification with pseudo parallel corpora and entity alignment---qian et al , 2014 ) proposed an active learning approach for bilingual relation extraction with pseudo parallel corpora | 1 |
denkowski developed a method for real time integration of post-edited mt output into the translation model by extracting a grammar for each input sentence---denkowski proposed a method for real time integration of post-edited mt output into the translation model | 1 |
in this paper , we employ the centering theory in pronoun resolution from the semantic perspective---we will further employ the centering theory in pronoun resolution from both grammatical and semantic perspectives | 1 |
in this paper , we present a new model for disfluency detection from spontaneous speech transcripts---in this paper , we introduce a new model for detecting restart and repair disfluencies in spontaneous speech transcripts | 1 |
we proposed an extension to the basic feature logic of variables , features , atoms , and equational constraints---we investigate the extension of basic feature logic with subsumption ( or matching ) constraints | 1 |
a user of this system can explore the result space of her query , by drilling down/up from one proposition to another , according to a set of entailment relations described by an entailment graph---a user of our system can explore the result space of a query by drilling down / up from one statement to another , according to entailment relations specified by an entailment graph | 1 |
the weights of the different feature functions were optimised by means of minimum error rate training---the feature weights are tuned to optimize bleu using the minimum error rate training algorithm | 1 |
in this paper , we approach the word embedding task from a different perspective by formulating it as a ranking problem---in this paper , we argue that word embedding can be naturally viewed as a ranking problem | 1 |
we feed our features to a multinomial naive bayes classifier in scikit-learn---we use the svm implementation from scikit-learn , which in turn is based on libsvm | 1 |
compared with previous work , we focus on addressing the limitation caused by the inaccurate concept mapping---in this paper , we focus on addressing the limitations caused by the imperfect mapping results | 1 |
they hypothesise that a word and its translation tend to appear in similar lexical context---in the second stage , we use this assumption that a word and its translation tend to appear in similar context across languages | 1 |
we propose to train our model using sestra , a learning algorithm that takes advantage of single-step reward observations to overcome learned biases in on-policy learning---without access to demonstrations , we propose sestra , a learning algorithm that takes advantage of single-step reward observations | 1 |
recently , methods inspired by neural language modeling received much attentions for representation learning---more recently , neural networks have become prominent in word representation learning | 1 |
probabilistic latent semantic indexing is one such model---for all models , we use the 300-dimensional glove word embeddings | 0 |
in this paper , we present an algorithm for plan recognition that is based on the sharedplan model of collaboration ( cite-p-5-16-2 , cite-p-5-16-3 ) and that satisfies these constraints---in this paper , we present an algorithm for intended recognition that is based on the sharedplan model of collaboration ( cite-p-5-16-2 , cite-p-5-16-3 ) | 1 |
collobert and weston showed that neural networks can perform well on sequence labeling language processing tasks while also learning appropriate features---in this paper , we contrast the properties of two knowledge graphs that have clean , human-vetted facts | 0 |
these methods do not exploit internal information of words , and fail to handle low-frequency words and out-of-vocabulary words---typically rely on the external context of words to represent the meaning , which usually fails to deal with low-frequency and out-of-vocabulary words | 1 |
leskovec et al perform clustering of quotations and their variations , uncovering patterns in the temporal dynamics of how memes spread through the media---leskovec et al use the evolution of quotes reproduced online to identify memes and track their spread overtime | 1 |
a 5-gram lm was trained using the srilm toolkit 12 , exploiting improved modified kneser-ney smoothing , and quantizing both probabilities and back-off weights---we parse each document using stanford corenlp in order to acquire both dependency , named entity , and coreference resolution features | 0 |
headden iii et al introduce the extended valence grammar and add lexicalization and smoothing---headden , johnson and mcclosky introduced the extended valence grammar and added lexicalization and smoothing | 1 |
therefore , we constructed a new query treebank consisting of 5,000 cqa queries , manually annotated according to our extended grammar---so that it accounts for such queries , and constructed a new query treebank , annotated according to the extended grammar | 1 |
we used the target side of the parallel corpus and the srilm toolkit to train a 5-gram language model---word sense disambiguation ( wsd ) is a difficult natural language processing task which requires that for every content word ( noun , adjective , verb or adverb ) the appropriate meaning is automatically selected from the available sense inventory 1 | 0 |
importantly , word embeddings have been effectively used for several nlp tasks---embeddings , have recently shown to be effective in a wide range of tasks | 1 |