text (string, lengths 82–736) | label (int64, 0 or 1) |
|---|---|
we use the rmsprop optimization algorithm to minimize a loss function over the training data---the distance used for clustering is based on a divergence-like distance between two language models that was originally proposed by juang and rabiner | 0 |
in this work we propose injecting information about predicate-argument structures of sentences in nmt models---information about predicate-argument structure of source sentences can be integrated into standard attention-based nmt models | 1 |
our preliminary results have shown the potential of eye gaze in improving spoken language processing---we use the moses toolkit with a phrase-based baseline to extract the qe features for the x l , x u , and testing | 0 |
in this study , we propose an innovative sentence compression model based on expanded constituent parse trees---by incorporating this sentence compression model , our summarization system can yield significant performance gain in linguistic quality | 1 |
arabic is a morphologically rich language that is much more challenging to work , mainly due to its significantly larger vocabulary---moreover , arabic is a morphologically complex language | 1 |
zhang et al proposed synchronous binarization , a principled method to binarize an scfg in such a way that both the source-side and target-side virtual non-terminals have contiguous spans---zhang et al introduced a synchronous binarization technique that improved decoding efficiency and accuracy by ensuring that rule binarization avoided gaps on both the source and target sides | 1 |
by testing a state–of–the–art srl system with the two alternative role annotations , we show that the propbank role set is more robust to the lack of verb–specific semantic information and generalizes better to infrequent and unseen predicates---and unseen predicates , we study the performance of a state-of-the-art srl system trained on either codification of roles and some specific settings , i . e . including / excluding verb-specific information | 1 |
kalchbrenner et al propose a convolutional architecture for sentence representation that vertically stacks multiple convolution layers , each of which can learn independent convolution kernels---kalchbrenner et al introduced a convolutional neural network for sentence modeling that uses dynamic k-max pooling to better model inputs of varying sizes | 1 |
currently , recurrent neural network based models are widely used on natural language processing tasks for excellent performance---recently , a recurrent neural network architecture was proposed for language modelling | 1 |
richardson and domingos propose a method for reasoning about databases and logical constraints using markov random fields---poon and domingos proposed a model for unsupervised semantic parsing that transforms dependency trees into semantic representations using markov logic | 1 |
in addition , we utilize the pre-trained word embeddings with 300 dimensions from for initialization---we adopt pretrained embeddings for word forms with the provided training data by word2vec | 1 |
we use long shortterm memory networks to build another semanticsbased sentence representation---we use a standard long short-term memory model to learn the document representation | 1 |
a discourse structure is a tree whose leaves correspond to elementary discourse units ( edu ) s , and whose internal nodes correspond to contiguous text spans ( called discourse spans )---discourse structure is the hidden link between surface features and document-level properties , such as sentiment polarity | 1 |
we use the standard corpus for this task , the penn treebank---for training and evaluating the itsg parser , we employ the penn wsj treebank | 1 |
bethard et al identify opinion propositions and their holders by semantic parsing techniques---we present a generative semantic parser that considers the structure of table and the syntax of sql language | 0 |
a 4-gram language model is trained on the monolingual data by srilm toolkit---to accommodate this result , we sought to de-to this , it should provide explicit support for velop an architecture that is more general than representing alternative specifications | 0 |
we also obtain the embeddings of each word from word2vec---we initialize our word vectors with 300-dimensional word2vec word embeddings | 1 |
the smt system was tuned on the development set newstest10 with minimum error rate training using the bleu error rate measure as the optimization criterion---for acquisition of better conversion rules , xia et al proposed a method to automatically extract conversion rules from a target treebank | 0 |
results show that the system using the phrase-based error model outperforms significantly its baseline systems---we extend the constrained lattice training of tackstrom et al . ( 2013 ) to non-linear conditional | 0 |
an n-gram language model was then built from the sinica corpus released by the association for computational linguistics and chinese language processing using the srilm toolkit---the third baseline , a bigram language model , was constructed by training a 2-gram language model from the large english ukwac web corpus using the srilm toolkit with default good-turing smoothing | 1 |
the model parameters are trained using minimum error-rate training---the λ f are optimized by minimum-error training | 1 |
socher et al model the two sentences with recursive neural networks , and then feed similarity scores between words and phrases to a cnn with dynamic pooling to capture sentence interactions---on eight different languages , show that the ncrf-ae model can outperform competitive systems in both supervised and semi-supervised scenarios | 0 |
we use 5-gram models with modified kneser-ney smoothing and interpolated back-off---in our implementation , we employ a kn-smoothed 7-gram model | 1 |
the empty string is the unique string of length zero denoted math-w-3-1-2-99---math-w-15-1-1-45 itself is efficient in the length of the string | 1 |
we competed in both subtasks and ranked 4 th in terms of accuracy in subtask a and 7 th in subtask b---we achieved fourth place in subtask a and seventh in subtask b in terms of accuracy | 1 |
compositional meaning representations may also be computationally more advantageous , since they can be computed very efficiently from syntactic representations ( e.g . in unification-based formalisms )---compositional formation of meaning representations may be computationally more attractive in some cases ( e . g . in unification-based formalisms ) | 1 |
we use the 100-dimensional pre-trained word embeddings trained by word2vec 2 and the 100-dimensional randomly initialized pos tag embeddings---we use the word2vec tool to train monolingual vectors , 6 and the cca-based tool for projecting word vectors | 1 |
this work helps understand the limitations of both classes of models , and suggest directions for improving recurrent models---we initialize our word vectors with 300-dimensional word2vec word embeddings | 0 |
the experimental results show overall high performance---evaluation results show overall high performance | 1 |
we used the phrase-based smt model , as implemented in the moses toolkit , to train an smt system translating from english to arabic---for the training of the smt model , including the word alignment and the phrase translation table , we used moses , a toolkit for phrase-based smt models | 1 |
choi and cardie developed inference rules to capture compositional effects at the lexical level on phrase-level polarity classification---zhang and lee used the same taxonomy as li and roth , as well as the same training and testing data | 0 |
we use 100-dimension glove vectors which are pre-trained on a large twitter corpus and fine-tuned during training---we use glove vectors with 100 dimensions trained on wikipedia and gigaword as word embeddings , which we do not optimize during training | 1 |
the parameters of the log-linear model are tuned by optimizing bleu on the development data using mert---the log-linear parameter weights are tuned with mert on a development set to produce the baseline system | 1 |
relational similarity measures the correspondence between word-word relations---we adapt the models of mikolov et al and mikolov et al to infer feature embeddings | 0 |
task 4 subtask c of semeval-2016 seeks to classify the sentiment of tweets into an ordinal five-point scale---subtask c of semeval-2016 is to classify the sentiment of tweets into an ordinal five-point scale | 1 |
the system dictionary of our word-pair identifier is comprised of 155,746 chinese words taken from the moe-mandarin dictionary and 29,408 unknown words auto-found in udn2001 corpus by a chinese word autoconfirmation system---the system dictionary of our wsm is comprised of 82,531 chinese words taken from the ckip dictionary and 15,946 unknown words autofound in the udn2001 corpus by a chinese word auto-confirmation system | 1 |
much work around features in nlp is aimed at improving classifier accuracy---choice of features can substantially improve classifier performance | 1 |
from results of our experiments , our method showed reasonably comparable performance compared with a supervised method---though our method uses only title words and unlabeled data , it shows reasonably comparable performance in comparison with that of the supervised naive | 1 |
therefore , word segmentation is a crucial first step for many chinese language processing tasks such as syntactic parsing , information retrieval and machine translation---the first one is the ws-353 dataset , which contains 353 pairs of english words that have been assigned similarity ratings by humans | 0 |
we compare the final system to moses 3 , an open-source translation toolkit---we use the opensource moses toolkit to build a phrase-based smt system | 1 |
we evaluated translation quality using uncased bleu and ter---these sequences of words are lexical chains , and they have been successfully used in research areas such as information retrieval and document summarization | 0 |
in this paper , we introduce a neural network model for the coherence task based on distributed sentence representation---in this paper , we apply two neural network approaches to the sentence-ordering ( coherence ) task , using compositional sentence | 1 |
in a tree adjoining grammar , a feature structure is associated with each node in an elementary tree---in a unification frame , a feature structure is associated with each node in an elementary tree | 1 |
the translation quality is evaluated by bleu and ribes---the translation quality is evaluated by case-insensitive bleu-4 metric | 1 |
the global wordnet association , built on the results of princeton wordnet and euro wordnet , is a free and public association that provides a platform that shares and connects all languages in the world---the global wordnet association 2 built on the results of princeton wordnet and euro wordnet is a free and public association that provides a platform to share and connect all languages in the world | 1 |
our first choice is the bottom-up agglomerative word clustering algorithm of brown et al , which derives a hierarchical clustering of words from unlabeled data---brown et al present a hierarchical word clustering algorithm that can handle a large number of classes and a large vocabulary | 1 |
the lexicalized reordering models have become the de facto standard in modern phrase-based systems---among them , lexicalized reordering models have been widely used in practical phrase-based systems | 1 |
our approach to atr is based on the c-and nc-value methods , which extract multi-word terms---our approach to atr is based on the c-value method , which extracts multi-word terms | 1 |
we presented an unsupervised graph-based model for coreference resolution---we present an unsupervised model for coreference resolution | 1 |
xiong et al integrated first-sense and hypernym features in a generative parse model applied to the chinese penn treebank and achieved significant improvement over their baseline model---xiong et al experimented with first-sense and hypernym features from hownet and cilin in a generative parse model applied to the chinese penn treebank | 1 |
haagsma and bjerva use violations of selectional preferences to find novel metaphors---socher et al present a novel recursive neural network for relation classification that learns vectors in the syntactic tree path that connects two nominals to determine their semantic relationship | 0 |
meanwhile , we propose an intuitionistic model for dependency parsing , which uses a classifier to determine whether a pair of words form a dependency edge---we describe an intuitionistic method for dependency parsing , where a classifier is used to determine whether a pair of words forms a dependency edge | 1 |
we use the 300-dimensional pre-trained word2vec 3 word embeddings and compare the performance with that of glove 4 embeddings---supervised models for re require adequate amounts of annotated data for their training | 0 |
semantic parsing is the task of translating text to a formal meaning representation such as logical forms or structured queries---semantic parsing is the task of translating natural language utterances into a machine-interpretable meaning representation ( mr ) | 1 |
model fitting for our model is based on the expectation-maximization algorithm---all word alignment models we consider are normally trained using the expectation maximization algorithm | 1 |
we used srilm to build a 4-gram language model with interpolated kneser-ney discounting---we used trigram language models with interpolated kneser-kney discounting trained using the sri language modeling toolkit | 1 |
experimental results on the wat ’ 15 englishto-japanese translation dataset demonstrate that our proposed model achieves the best ribes score and outperforms the sequential attentional nmt model---word alignment is a well-studied problem in natural language computing | 0 |
distributional semantic models extract vectors representing word meaning by relying on the distributional hypothesis , that is , the idea that words that are related in meaning will tend to occur in similar contexts---in phrase-based smt , the building blocks of translation are pairs of phrases | 0 |
from the perspective of online language comprehension , processing difficulty is quantified by surprisal---pcfg surprisal is a measure of incremental hierarchic syntactic processing | 1 |
wu et al compared machine learning methods for abbreviation detection---therefore wu et al compared machine learning methods for abbreviation detection | 1 |
vinyals et al proposed an idg model that uses a vector , encoding the image as input based on the sequence-to-sequence framework---vinyals et al used a convolutional neural network to encode an image , followed by an lstm decoder to produce an output sequence | 1 |
zhang and mcdonald generalized the eisner algorithm to handle arbitrary features over higher-order dependencies---in particular , haussler proposed the well-known convolution kernels for a discrete structure | 0 |
finally , we have made several theoretical contributions---we make several theoretical contributions | 1 |
our maxent and dtree parsers run at speeds 40-270 times faster than state-of-the-art parsers , but with 5-6 % losses in accuracy---in accuracy , our dtree and maxent parsers run at speeds 40-270 times faster than state-of-the-art parsers | 1 |
we confirm prior results showing that users adapt to the system's lexical and syntactic choices---shrestha and mckeown proposed a supervised rule induction method to detect interrogative questions in email conversations based on part-of-speech features | 0 |
coreference resolution is the task of identifying all mentions which refer to the same entity in a document---coreference resolution is the process of linking multiple mentions that refer to the same entity | 1 |
kalchbrenner et al show that a cnn for modeling sentences can achieve competitive results in polarity classification---the target language model is built on the target side of the parallel data with kneser-ney smoothing using the irstlm tool | 0 |
niessen and ney have used morphological decomposition to improve alignment quality---niessen and ney used morphological decomposition to get better alignments | 1 |
in comparison , we develop an ir system to find proper existing summaries as soft templates---to this end , we use a popular ir platform to retrieve proper summaries | 1 |
we used srilm for training the 5-gram language model with interpolated modified kneser-ney discounting ,---we use srilm for training the 5-gram language model with interpolated modified kneser-ney discounting | 1 |
the model was built using the srilm toolkit with backoff and good-turing smoothing---these language models were built up to an order of 5 with kneser-ney smoothing using the srilm toolkit | 1 |
all of the english sentences were parsed using the charniak parser---the reranking parser of charniak and johnson was used to parse the bnc | 1 |
this work introduces a new strategy to compare the numerous representations that have been proposed over the years for expressing dependency structures and discover the one that is easiest to learn---work introduces a new strategy to compare the numerous conventions that have been proposed over the years for expressing dependency structures and discover the one for which | 1 |
we implement the pbsmt system with the moses toolkit---for the n-gram lm , we use srilm toolkits to train a 4-gram lm on the xinhua portion of the gigaword corpus | 0 |
experiments using real-life online debate data showed the effectiveness of the model---the resolveipa approach of indicating possible reference ambiguities resembles that proposed by kameyama | 0 |
we use moses , a statistical machine translation system that allows training of translation models---we use the moses toolkit to train various statistical machine translation systems | 1 |
relation extraction ( re ) is the task of identifying instances of relations , such as nationality ( person , country ) or place of birth ( person , location ) , in passages of natural text---relation extraction is a fundamental step in many natural language processing applications such as learning ontologies from texts ( cite-p-12-1-0 ) and question answering ( cite-p-12-3-6 ) | 1 |
summarisation of the comments allows interaction at a higher level and can lead to an understanding of the overall discussion---summarising the content of these comments allows users to interact with the data at a higher level , providing a transparency to the underlying data | 1 |
we use scikitlearn as machine learning library---we used the svm implementation of scikit learn | 1 |
we demonstrate that concept drift is an important consideration---sapkota et al showed that classical character n-grams lose some information in merging instances of ngrams like the which could be a prefix , a suffix , or a standalone word | 0 |
word alignment is a critical component in training statistical machine translation systems and has received a significant amount of research , for example , ( cite-p-17-1-0 , cite-p-17-1-8 , cite-p-17-1-4 ) , including work leveraging syntactic parse trees , e.g. , ( cite-p-17-1-1 , cite-p-17-1-2 , cite-p-17-1-3 )---word alignment , which can be defined as an object for indicating the corresponding words in a parallel text , was first introduced as an intermediate result of statistical translation models ( cite-p-13-1-2 ) | 1 |
we used the scikit-learn implementation of svrs and the skll toolkit---our departure point is the skip-gram neural embedding model introduced in trained using the negative-sampling procedure presented in | 0 |
this type of model is closely related to several other approaches---our models are similar to several other approaches | 1 |
coreference resolution is the process of determining whether two expressions in natural language refer to the same entity in the world---in recent years , mln has been adopted for several natural language processing tasks and achieved a certain level of success | 0 |
this section describes nonlocal dependencies in the penn treebank---table 1 shows statistics from sections 2-21 of the penn wsj treebank | 1 |
knowledge graphs such as freebase , yago and wordnet are among the most widely used resources in nlp applications---knowledge graphs , such as freebase , contain a wealth of structured knowledge in the form of relationships between entities and are useful for numerous end applications | 1 |
semantic role labeling ( srl ) is the task of automatically annotating the predicate-argument structure in a sentence with semantic roles---semantic role labeling ( srl ) is a form of shallow semantic parsing whose goal is to discover the predicate-argument structure of each predicate in a given input sentence | 1 |
more recently , rama combined the subsequence features and a number of word shape similarity scores as features to train a svm model---rama combines subsequence feature with the system developed by hauer and kondrak , which employs a number of word shape similarity scores as features to train a svm model | 1 |
we have introduced a novel hybrid neural model with two nested levels of attention : word-level and character-level---developing upon recent work on neural machine translation , we propose a new hybrid neural model with nested attention layers | 1 |
also , we will use recent advances in learning representations based on deep contextualized embeddings such as elmo and bert---it will be also interesting to adopt even stronger input models , especially , those enhanced with contextualized representations from elmo or bert | 1 |
the berkeley framenet is an ongoing project for building a large lexical resource for english with expert annotations based on frame semantics---the berkeley framenet project is an ongoing effort of building a semantic lexicon for english based on the theory of frame semantics | 1 |
for the phrase based system , we use moses with its default settings---we use the opensource moses toolkit to build a phrase-based smt system | 1 |
these applications depend heavily on the quality of the word alignment---the solutions of these problems depend heavily on the quality of the word alignment | 1 |
instead of using crf model , we use the hidden markov support vector machines , which is also a sequence labeling model like crf---in addition , instead of using the popular crf model , we use another sequence labeling model in this paper -- -the hidden markov support vector machines model | 1 |
the translation model is induced by combining the maximum similarity alignment with the competitive linking algorithm of melamed---a translation model is induced between phonemes in two wordlists by combining the maximum similarity alignment with the competitive linking algorithm of melamed | 1 |
for training our system classifier , we have used scikit-learn---we use the linear svm classifier from scikit-learn | 1 |
the language model was a kneser-ney interpolated trigram model generated using the srilm toolkit---coreference resolution is a well known clustering task in natural language processing | 0 |
to solve this task we use a multi-class support vector machine as implemented in the liblinear library---we release a wide-coverage chinese zero anaphora corpus of 100 documents , which adds a layer of annotation to the manually-parsed sentences in the chinese treebank ( ctb ) | 0 |
feature function scaling factors λ m are optimized based on a maximum likely approach or on a direct error minimization approach---feature function scaling factors λ m are optimized based on a maximum likelihood approach or on a direct error minimization approach | 1 |
in this paper , we propose an efficient method for implementing ngram models based on double-array structures---in this paper , we propose the double-array language model ( dalm ) which uses double-array structures | 1 |