| text (string, lengths 82–736) | label (int64, 0 or 1) |
|---|---|
we achieve significant improvements on topic coherence evaluation , document clustering and document classification tasks , especially on corpora of short documents and corpora with few documents---in natural language , a word often assumes different meanings , and the task of determining the correct meaning , or sense , of a word in different contexts is known as word sense disambiguation ( wsd ) | 0 |
word alignment is an important component of a complete statistical machine translation pipeline---word alignment is an essential step in phrase-based statistical machine translation | 1 |
we use the glove word vector representations of dimension 300---luong et al segment words using morfessor , and use recursive neural networks to build word embeddings from morph embeddings | 0 |
we trained a 3-gram language model on all the correct-side sentences using kenlm---we used the kenlm language model toolkit with character 7-grams | 1 |
in this paper we introduce the task of detecting content-heavy sentences in cross-lingual context---we provide an operational characterization of content-heavy sentences in the context of chinese-english translation | 1 |
a tri-gram language model is estimated using the srilm toolkit---the trigram language model is implemented in the srilm toolkit | 1 |
relation extraction is the task of recognizing and extracting relations between entities or concepts in texts---we train 300 dimensional word embedding using word2vec on all the training data , and fine-turning during the training process | 0 |
we advance a bayesian nonparametric model of extractive multi-document summarization to achieve this goal---in this paper , we propose a bayesian nonparametric model for multi-document summarization | 1 |
the negative words and positive words come from the dictionary provided by hu and liu---adaptor grammars can be used to study a variety of different linguistic | 0 |
coreference resolution is the process of linking together multiple referring expressions of a given entity in the world---latent dirichlet allocation is one of the most popular topic models used to mine large text data sets | 0 |
z score can distinguish the importance of each term in each class , their performances have been proved---z score can distinguish the importance of each term in each class , their performances have been proved in | 1 |
we define noun phrase translation as a subtask of machine translation---system tuning was carried out using both k-best mira and minimum error rate training on the held-out development set | 0 |
all parameters are initialized using glorot initialization---parameters are initialized using the method described by glorot and bengio | 1 |
phrase-based models treat phrase as the basic translation unit---in phrase-based smt , words may be grouped together to form so-called phrases | 1 |
neural networks , working on top of conventional n-gram back-off language models , have been introduced in as a potential mean to improve discrete language models---neural networks , working on top of conventional n-gram back-off language models , have been introduced in as a potential means to improve discrete language models | 1 |
we use 300 dimensional glove embeddings trained on the common crawl 840b tokens dataset , which remain fixed during training---we use glove vectors with 100 dimensions trained on wikipedia and gigaword as word embeddings , which we do not optimize during training | 1 |
finkel et al used gibbs sampling , a simple monte carlo method used to perform approximate inference in factored probabilistic models---finkel et al used gibbs sampling to add non-local dependencies into linear-chain crf model for information extraction | 1 |
as previously reported in , a person may express the same stance towards a target by using negative or positive language---this is also in line with what has been previously observed in that a person may express the same stance towards a target by using negative or positive language | 1 |
wang et al propose a regional cnn-lstm model for dimensional sentiment analysis---wang et al proposed a regional cnn-lstm-based approach to documentlevel emotion regression | 1 |
in this way , these " garbage collector effects " are a form of overfitting---however , these algorithms have a problem of overfitting , leading to " garbage collector effects | 1 |
we proposed a statistically sound replicability analysis framework for cases where algorithms are compared across multiple datasets---we propose a replicability analysis framework for a statistically sound analysis of multiple comparisons between algorithms | 1 |
all the feature weights and the weight for each probability factor are tuned on the development set with minimumerror-rate training---lopcrf can provide a competitive alternative to conventional regularisation with a prior while avoiding the requirement to search a hyperparameter space | 0 |
a narrative event chain is a partially ordered set of events related by a common protagonist---a story is usually viewed as a sequence of events based on information extraction | 1 |
for both languages , we used the srilm toolkit to train a 5-gram language model using all monolingual data provided---in our experiments , we used the srilm toolkit to build 5-gram language model using the ldc arabic gigaword corpus | 1 |
trigram language models were estimated using the sri language modeling toolkit with modified kneser-ney smoothing---we used trigram language models with interpolated kneser-kney discounting trained using the sri language modeling toolkit | 1 |
cite-p-24-3-6 propose a joint model to process word segmentation and informal word detection---cite-p-24-1-12 proposed a joint model for word segmentation , pos tagging and normalization | 1 |
for instance , mihalcea et al compare two corpus-based and six knowledge-based measures on the task of text similarity computation---mihalcea et al use both corpusbased and knowledge-based measures of the semantic similarity between words | 1 |
we propose using alignment distance to validate transliterations---we propose a novel evaluation metric for transliteration alignment | 1 |
we use the 300-dimensional pre-trained word2vec 3 word embeddings and compare the performance with that of glove 4 embeddings---we use pre-trained 50 dimensional glove vectors 4 for word embeddings initialization | 1 |
the word-based approach searches for all possible segmentations , usually created using a dictionary , for the optimal one that maximizes a certain utility---word-based approach searches in all possible segmentations for one that maximizes a predefined utility | 1 |
surdeanu et al propose a two-layer multi-instance multi-label framework to capture the dependencies among relations---surdeanu et al describe an extended model , where each entity pair may link multiple instances to multiple relations | 1 |
all weights are initialized by the xavier method---all weights are initialised using the approach in glorot and bengio | 1 |
to reduce error propagation , we use beam-search and scheduled sampling , respectively---and since the concept of iterated action is central to planning , the generalisation across iteration and distributives , along with the observations about their nature , have interesting implications for work in this area | 0 |
replacing a conjunct with the whole coordination phrase usually produce a coherent sentence ( huddleston et al. , 2002 )---conjuncts tend to be similar and ( b ) that replacing the coordination phrase with a conjunct results in a coherent sentence | 1 |
relation extraction is a fundamental task that enables a wide range of semantic applications from question answering ( cite-p-13-3-12 ) to fact checking ( cite-p-13-3-10 )---relation extraction is the key component for building relation knowledge graphs , and it is of crucial significance to natural language processing applications such as structured search , sentiment analysis , question answering , and summarization | 1 |
this paper presents constraint projection ( cp ) , another method for disjunctive unification---this paper presents constraint projection , a new method for unification of disjunctive feature | 1 |
neural network approaches to language modelling have made remarkable performance gains over traditional count-based ngram lms---our system already has a positive effect on extractive summarization | 0 |
we implement our approach in the framework of phrase-based statistical machine translation---in particular , we adopt the approach of phrase-based statistical machine translation | 1 |
deep neural networks , emerging recently , can learn underlying features automatically , and have attracted growing interest in the literature---deep neural networks , emerging recently , provide a way of highly automatic feature learning ( cite-p-18-1-1 ) , and have exhibited considerable potential | 1 |
in this paper , we proposed a new task of japanese noun phrase segmentation---with these resources , we propose a new task of japanese noun phrase segmentation | 1 |
the hierarchical phrase-based model has been widely adopted in statistical machine translation---the target-normalized hierarchical phrase-based model is based on a more general hierarchical phrase-based model | 1 |
the grammar is the general dart of the syntactic box , the part concerned with syntactic structures---mitchell and lapata presented a framework for representing the meaning of phrases and sentences in vector space | 0 |
most previous studies on meeting summarization have focused on extractive summarization---most current research on meeting summarization has focused on extractive summarization | 1 |
in the official evaluation , our system achieves an f1 score of 26.90 % in overall performance on the blind test set---in the official evaluation , our system achieves an f1 score of 26 . 90 % in overall performance | 1 |
where math-w-16-5-1-1 is the sentence extracted at step math-w-16-5-1-11 , and math-w-16-5-1-14 is the indicator function defined as : math-p-16-6---f1 gain , specifically : math-p-4-6-0 where math-w-4-7-0-1 is the set of previously selected sentences , and we omit the condition math-w-4-7-0-17 of math-w-4-7-0-20 | 1 |
while the majority of ccg parsers are chart-based , there has been some work on shift-reduce ccg parsing---japanese wsdthe semeval-2010 japanese wsd task consists of 50 polysemous words for which examples were taken from the bc-cwj tagged corpus | 0 |
lexical chains provide a representation of the lexical cohesion structure of a text---lexical chains provide a representation of the lexical cohesion structure of the target document that is to be generated | 1 |
as for je translation , we use a popular japanese dependency parser to obtain japanese abstraction trees---for japanese-to-english task , we use a chunkbased japanese dependency tree | 1 |
as our machine learning component we use liblinear with a l2-regularised l2-loss svm model---we use liblinear with l2 regularization and default parameters to learn a model | 1 |
the nnlm weights are optimized as the other feature weights using minimum error rate training---then we split compounds with the lattice-based model in cdec | 0 |
we evaluated the translation quality of the system using the bleu metric---we compute the interannotator agreement in terms of the bleu score | 1 |
relation extraction is the task of finding relationships between two entities from text---the srilm toolkit was used to build the 5-gram language model | 0 |
al-onaizan and knight present a hybrid model for arabic-to-english transliteration , which is a linear combination of phoneme-based and grapheme-based models---al-onaizan and knight find that a model mapping directly from english to arabic letters outperforms the phoneme-toletter model | 1 |
this paper presents the results of a study on the semantic constraints imposed on lexical choice by certain contextual indicators---this paper presents the results of a study of the correlation between named entities ( people , places , or organizations | 1 |
we apply sri language modeling toolkit to train a 4-gram language model with kneser-ney smoothing---for the translation from german into english , german compounds were split using the frequencybased method described in | 0 |
relation extraction is a core task in information extraction and natural language understanding---relation extraction ( re ) is the task of extracting instances of semantic relations between entities in unstructured data such as natural language text | 1 |
we estimated unfiltered 5-gram language models using lmplz and loaded them with kenlm---our 5-gram language model is trained by the sri language modeling toolkit | 0 |
in our extension of lcseg , we use a similar method to consolidate different segments ; however , in our case the linearity constraint is absent---in our extension of lcseg , we use a similar method to consolidate different segments ; however , in our case | 1 |
we use glove vectors with 200 dimensions as pre-trained word embeddings , which are tuned during training---we use pre-trained 50 dimensional glove vectors 4 for word embeddings initialization | 1 |
the trigram language model is implemented in the srilm toolkit---the language model is trained and applied with the srilm toolkit | 1 |
we use the open source moses phrase-based mt system to test the impact of the preprocessing technique on translation quality---we first obtain word representations using the popular skip-gram model with negative sampling introduced by mikolov et al and implemented in the gensim package | 0 |
we used the moses toolkit with its default settings---for decoding , we used moses with the default options | 1 |
for the textual sources , we populate word embeddings from the google word2vec embeddings trained on roughly 100 billion words from google news---we also consider the recently popular word2vec tool to obtain vector representation of words which are trained on 300 million words of google news dataset and are of length 300 | 1 |
in this paper , we presented an alternative method based on decision tree learning and longest match---we have demonstrated many promising features of returnn | 0 |
in addition , we utilize the pre-trained word embeddings with 300 dimensions from for initialization---to keep consistent , we initialize the embedding weight with pre-trained word embeddings | 1 |
basic reordering models in phrase-based systems use linear distance as the cost for phrase movements---ensembles have been applied to parsing , word sense disambiguation , sentiment analysis and information extraction | 0 |
language models were built with srilm , modified kneser-ney smoothing , default pruning , and order 5---the language models are 4-grams with modified kneser-ney smoothing which have been trained with the srilm toolkit | 1 |
the parsers are trained out-of-domain and contain a significant amount of noise---parses are trained on out-of-domain data and often contain a significant amount of noise | 1 |
relation classification is the task of identifying the semantic relation holding between two nominal entities in text---relation classification is the task of assigning sentences with two marked entities to a predefined set of relations | 1 |
this can also be interpreted as a generalization of standard class-based models---this can be regarded as the clustering criterion usually used in a class-based n-gram language model | 1 |
to reflect this observation , in this paper we explore the value-based formulation approach for arbitrary slot filling tasks---the united kingdom is a country in northwest europe | 0 |
however , parameter tuning is a tricky issue for tracking ( cite-p-19-1-14 ) because the number of initial positive training stories is very small ( one to four ) , and topics are localized in space and time---parameter tuning is a key problem for statistical machine translation ( smt ) | 1 |
the latent dirichlet allocation is a topic model that is assumed to provide useful information for particular subtasks---latent dirichlet allocation is one of the most popular topic models used to mine large text data sets | 1 |
we have presented an approach that uses a supervised learning method with a graph based representation---we create an approach that uses a graph based representation to extract relevant words that are used in a supervised learning method | 1 |
kalchbrenner et al proposed to extend cnns max-over-time pooling to k-max pooling for sentence modeling---kalchbrenner et al showed that their dcnn for modeling sentences can achieve competitive results in this field | 1 |
parameter optimisation is done by mini-batch stochastic gradient descent where back-propagation is performed using adadelta update rule---the stochastic gradient descent with back-propagation is performed using adadelta update rule | 1 |
the attentional structure of a discourse can be modeled as a stack of focus spaces that contains the individuals salient at each point in a discourse---one substructure of a coherent discourse structure is its attentional structure , which can be modeled as a stack of focus spaces | 1 |
we use glove vectors with 200 dimensions as pre-trained word embeddings , which are tuned during training---we use the pre-trained glove 50-dimensional word embeddings to represent words found in the glove dataset | 1 |
we have presented a unifying framework of " violation-fixing " perceptron which guarantees convergence with inexact search---based on the structured perceptron , we propose a general framework of " violation-fixing " perceptrons for inexact search with a theoretical guarantee for convergence | 1 |
sentiment classification is a task to predict a sentiment label , such as positive/negative , for a given text and has been applied to many domains such as movie/product reviews , customer surveys , news comments , and social media---sentiment classification is the task of classifying an opinion document as expressing a positive or negative sentiment | 1 |
we use the set of shallow parsing features described by sha and pereira , in addition to the brown clusters mentioned above---since the task is basically identical to shallow parsing by crfs , we follow the feature sets used in the previous work by sha and pereira | 1 |
the bilstm-gcn encoder part of our model resembles the bilstm-treelstm model proposed by miwa and bansal , as they also stack a dependency tree on top of sequences to jointly model entities and relations---we also compare our model to an endto-end lstm model by miwa and bansal which comprises of a sequence layer for entity extraction and a tree-based dependency layer for relation classification | 1 |
we are able to get to within 5 % of an exact system 's performance while using only 30 % of the memory required---we are able to get a ceafe score within 5 % of a non-streaming system while using only 30 % of the memory | 1 |
rooth et al used pseudodisambiguation to evaluate a class-based model that is derived from unlabeled data using the expectation maximization algorithm---therefore , rooth et al propose a probabilistic latent variable model using expectation-maximization clustering algorithm to induce class-based sps | 1 |
in this paper , we present a method for learning the basic patterns contained within a plan and the ordering among them---in this paper we presented a technique for extracting order constraints among plan elements | 1 |
we have investigated the impact of cohesion for identifying discourse elements in student essays---we focus on identifying discourse elements for sentences in persuasive essays | 1 |
we first build an optimization model to infer the topics of microblogs by employing the topic-word distribution of the external knowledge---we enrich the content of microblogs by inferring the association between microblogs and external words | 1 |
feature weights were set with minimum error rate training on a development set using bleu as the objective function---the weights of the different feature functions were optimised by means of minimum error rate training | 1 |
the hr algebra provides the building blocks for the manipulation of s-graphs---we use case-sensitive bleu to assess translation quality | 0 |
genetic algorithms are known to be more effective than classical methods such as weighted metrics , goal programming , for solving multiobjective problems primarily because of their population-based nature---genetic algorithms are known to be more effective than classical methods such as weighted metrics , goal programming , for solving moo primarily because of their populationbased nature | 1 |
lstms were introduced by hochreiter and schmidhuber in order to mitigate the vanishing gradient problem---to solve this problem , hochreiter and schmidhuber introduced the long short-term memory rnn | 1 |
coreference resolution is the process of linking multiple mentions that refer to the same entity---coreference resolution is the task of clustering a sequence of textual entity mentions into a set of maximal non-overlapping clusters , such that mentions in a cluster refer to the same discourse entity | 1 |
the second kernel is the intersection string kernel 2 , which was first used in a text mining task by , although it is much more popular in computer vision---despite the fact that the intersection kernel is very popular in computer vision , it has never been used before in text mining | 1 |
the benchmark corpus were made available with the semeval-2013 shared task on sentiment analysis in twitter---davidov and rappoport developed a framework which discovers concepts based on high frequency words and symmetry-based pattern graph properties | 0 |
for training the translation model and for decoding we used the moses toolkit---we translated each german sentence using the moses statistical machine translation toolkit | 1 |
we optimise the feature weights of the model with minimum error rate training against the bleu evaluation metric---we set all feature weights using minimum error rate training , and we optimize their number on the development dataset | 1 |
polysynthetic languages pose unique challenges for traditional computational systems---they also pose unique challenges for traditional computational systems | 1 |
the log-linear model is then tuned as usual with minimum error rate training on a separate development set coming from the same domain---the relative weight δ½ is adjusted to maximize the performance on the development set , using an algorithm similar to minimum error-rate training | 1 |
we begin with a maximum likelihood estimate of the joint based on a word aligned old -domain corpus and update this distribution using new -domain comparable data---this goes beyond previous work on semantic parsing such as lu et al or zettlemoyer and collins which rely on unambiguous training data where every sentence is paired only with its meaning | 0 |
the proposed model automatically induces features sensitive to multi-predicate interactions from the word sequence information of a sentence---which automatically induces features sensitive to multi-predicate interactions exclusively from the word sequence information of a sentence | 1 |