| text (string, length 82–736) | label (int64: 0 or 1) |
|---|---|
our new a ∗ parsing algorithm is 5 times faster than cky parsing , without loss of accuracy---a ∗ parsing algorithm is 5 times faster than cky parsing , without loss of accuracy | 1 |
we use the sequential minimal optimization algorithm from weka and the feature set mentioned above for all experiments---for our baseline , we used a small parallel corpus of 30k english-spanish sentences from the europarl corpus | 0 |
socher et al introduce a family of recursive neural networks for sentence-level semantic composition---socher et al used recursive neural networks to model sentences for different tasks , including paraphrase detection and sentence classification | 1 |
the language model was trained using srilm toolkit---language models were built using the srilm toolkit 16 | 1 |
daumé and jagarlamudi , zhang and zong , and irvine et al use new-domain comparable corpora to mine translations for unseen words---daumé and jagarlamudi use contextual and string similarity to mine translations for oov words in a high resource language domain adaptation for a machine translation setting | 1 |
in particular , we chose to start with the aspect of "coverage"---in this work , we chose to start with criteria related to content | 1 |
the language model is a trigram model with modified kneser-ney discounting and interpolation---we use a fourgram language model with modified kneser-ney smoothing as implemented in the srilm toolkit | 1 |
in initial experiments , this surpassed em for training a simple feature-poor generative model , and also improved the performance of a feature-rich , conditionally estimated model where em could not easily have been applied---for training a simple feature-poor generative model , and also improved the performance of a feature-rich , conditionally estimated model where em could not easily have been applied | 1 |
we used bleu as our evaluation criteria and the bootstrapping method for significance testing---therefore , we used bleu and rouge as automatic evaluation measures | 1 |
the language model used was a 5-gram with modified kneserney smoothing , built with srilm toolkit---the language model is a 5-gram with interpolation and kneserney smoothing | 1 |
we use glove vectors with 200 dimensions as pre-trained word embeddings , which are tuned during training---we initialize the embedding weights by the pre-trained word embeddings with 200 dimensional vectors | 1 |
we learn our word embeddings by using word2vec 3 on unlabeled review data---we proposed an unsupervised method for finding lexical variations | 0 |
these results are still at word level and are based on the noisy context---still at word / phrase level and are based on the noisy context | 1 |
for lm training and interpolation , the srilm toolkit was used---the model was built using the srilm toolkit with backoff and kneser-ney smoothing | 1 |
language models were built using the sri language modeling toolkit with modified kneser-ney smoothing---the srilm toolkit was used to build the trigram mkn smoothed language model | 1 |
in this paper we presented an unsupervised dynamic bayesian modeling approach to modeling speech style accommodation in face-to-face interactions---we used a regularized maximum entropy model | 0 |
as a fundamental task in natural language processing , wsd can benefit applications such as machine translation and information retrieval---in this paper we introduce m awps , an online repository of math word problems | 0 |
we follow the architecture proposed in ding and palmer for synchronous dependency insertion grammars , reproduced in fig---our text simplification system follows the architecture proposed in ding and palmer for synchronous dependency insertion grammars , reproduced in fig | 1 |
the standard phrase-based machine translation system focuses on finding the most probable target sentence given the source sentence---weights are optimized by mert using bleu as the error criterion | 0 |
the target language model is trained by the sri language modeling toolkit on the news monolingual corpus---the english side of the parallel corpus is trained into a language model using srilm | 1 |
following sutskever et al and bahdanau et al , we decided to use a multi-layer lstm decoder with an attention mechanism---we use the attentive nmt model introduced by bahdanau et al as our text-only nmt baseline | 1 |
introduced by bengio et al , the authors proposed a statistical language model based on shallow neural networks---schwenk and gauvain , bengio et al , mnih and hinton , and collobert et al proposed language models based on feedforward neural networks | 1 |
as an example of a simple substitution , suppose the dialogue preceding the query the various cargo holds---as an example of a simple substitution , suppose the dialogue preceding the query | 1 |
based on such a dual-view representation , we design a dual-view co-training approach---in this work , we propose a dual-view co-training algorithm based on dual-view | 1 |
levy and goldberg further reveal that the attractive properties observed in word embeddings are not restricted to neural models such as word2vec and glove---in the value of the input fan-out bound math-w-19-1-0-19 | 0 |
futrelle and nikolakis , 1995 ) developed a constraint grammar formalism for parsing vector-based visual displays and producing structured representations of the elements comprising the display---textual entailment ( te ) is a directional relationship between an entailing text fragment t and an entailed hypothesis , h , saying that the meaning of t entails ( or implies ) the meaning of h | 0 |
because shorter sentences are generally better processed by nlp systems , it could be used as a preprocessing step which facilitates and improves the performance of parsers , semantic role labelers and machine translation systems---we used the svd implementation provided in the scikit-learn toolkit | 0 |
the experimental results demonstrate that our models outperform the baselines on five word similarity datasets---tasks , our models also surpass all the baselines including a morphology-based model | 1 |
language models were estimated using the sri language modeling toolkit with modified kneser-ney smoothing---the language model was generated from the europarl corpus using the sri language modeling toolkit | 1 |
next we consider the context-predicting vectors available as part of the word2vec 6 project---third , we convert the stanford glove twitter model to word2vec and obtain the word embeddings | 1 |
a dimensionality reduction creates a space representing the syntactic categories of unambiguous words---dimensionality reduction makes the global distributional pattern of a word available in a profile | 1 |
mikolov et al proposed to use recurrent neural network to construct language model---mikolov et al demonstrate a recurrent neural network language model for word ordering | 1 |
the bleu metric has been widely accepted as an effective means to automatically evaluate the quality of machine translation outputs---an early mt evaluation metric , bleu , is still the most commonly used metric in automatic machine translation evaluation | 1 |
our work is positioned at the intersection of noisy text parsing and grammatical error correction---finally , we position this effort at the intersection of noisy text parsing and grammatical error correction | 1 |
we apply the proposed approach to opinion summarization , a typical opinion mining task---we apply the proposed approach to enhance opinion summarization | 1 |
recently , convolutional neural networks are reported to perform well on a range of nlp tasks---a trigram language model with modified kneser-ney discounting and interpolation was used as produced by the srilm toolkit | 0 |
we posit that there is a latent mapping of the question-answer meaning representation graph onto the text meaning representation graph that explains the answer---as distributions , we propose to minimize their earth mover ’ s distance , a measure of divergence between distributions | 0 |
we initialize our word vectors with 300-dimensional word2vec word embeddings---the nodes are concepts ( or synsets as they are called in the wordnet ) | 0 |
for the machine translation framework , we used phrase-based smt with the moses toolkit as a decoder---we used the phrase-based smt model , as implemented in the moses toolkit , to train an smt system translating from english to arabic | 1 |
recovering these entities in text is a hard problem , and the most recently reported numbers in literature for chinese are around a f-score of 50---verbnet is a very large lexicon of verbs in english that extends levin with explicitly stated syntactic and semantic information | 0 |
for english , we use the pre-trained glove vectors---evaluations show that the generated paraphrases almost always follow their target specifications , while paraphrase quality does not significantly deteriorate compared to vanilla | 0 |
keyphrase extraction is the problem of automatically extracting important phrases or concepts ( i.e. , the essence ) of a document---keyphrase extraction is a fundamental technique in natural language processing | 1 |
we use the datasets , experimental setup , and scoring program from the conll 2011 shared task , based on the ontonotes corpus---we follow a standard machine learning approach , and use the training , development and test sets released by the organizers of the conll-2011 shared task | 1 |
we employed the machine learning tool of scikit-learn 3 , for training the classifier---a pun is a form of wordplay in which a word suggests two or more meanings by exploiting polysemy , homonymy , or phonological similarity to another word , for an intended humorous or rhetorical effect ( cite-p-15-3-1 ) | 0 |
we used the first-stage pcfg parser of charniak and johnson for english and bitpar for german---we used the first-stage parser of charniak and johnson for english and bitpar for german | 1 |
regarding to this , cite-p-20-3-11 explicitly feed this target word into the attention model , and demonstrate the significant improvements in alignment accuracy---with the charniak ( cite-p-8-1-1 ) language model , our results exceed those of the previous best ( cite-p-8-3-6 ) | 0 |
the sentence pairs with top scores are selected to train the system---weber et al used three-dimensional tensor-based networks to construct the event representations | 0 |
our translation model is implemented as an n-gram model of operations using srilm-toolkit with kneser-ney smoothing---for this language model , we built a trigram language model with kneser-ney smoothing using srilm from the same automatically segmented corpus | 1 |
word sense disambiguation ( wsd ) is the task to identify the intended sense of a word in a computational manner based on the context in which it appears ( cite-p-13-3-4 )---word sense disambiguation ( wsd ) is a natural language processing ( nlp ) task in which the correct meaning ( sense ) of a word in a given context is to be determined | 1 |
the documents were parsed using the stanford parser---relation extraction ( re ) is the task of assigning a semantic relationship between a pair of arguments | 0 |
as evaluation measures , we use the standard bleu as well as ribes , a reorderingbased metric that has been shown to have high correlation with human evaluations on the ntcir data---word sense disambiguation ( wsd ) is the task of identifying the correct meaning of a word in context | 0 |
jiang et al , 2007 ) put forward a ptc framework based on the svm model---jiang et al put forward a ptc framework based on support vector machine | 1 |
we used the moses toolkit for performing statistical machine translation---we use the moses statistical mt toolkit to perform the translation | 1 |
as with , we train the language model on the penn treebank---we trained on the standard penn treebank wsj corpus | 1 |
in all cases , we used the implementations from the scikitlearn machine learning library---in the framework of the phorevox project supported by the french national research agency | 0 |
the weights associated to feature functions are optimally combined using the minimum error rate training---the srilm toolkit was used to build the trigram mkn smoothed language model | 0 |
furthermore , we propose a graph-based microblog entity linking ( gmel ) method---in nlp , mikolov et al show that a linear mapping between vector spaces of different languages can be learned to infer missing dictionary entries by relying on a small amount of bilingual information | 0 |
although each phrase consists of multiple words , the semantic orientation of the phrase is not a mere sum of the orientations of the component words---a phrase consists of a content word and one or more suffixes , such as postpositional particles | 1 |
in this paper we presented a new methodology to identify relations between entities in text---in this paper , we address the problem of identifying implicit relations in text | 1 |
our nmt model follows the common attentional encoder-decoder networks---our model is based on the standard lstm encoder-decoder model with an attention mechanism | 1 |
we use skip-gram with negative sampling for obtaining the word embeddings---in our model , we use negative sampling discussed in to speed up the computation | 1 |
we first consider the stochastic gradient langevin dynamics sampler to generate posterior samples---to address the costs of inference step , we apply an efficient sampling procedure via stochastic gradient langevin dynamics | 1 |
two technologies , i.e. , sentence weighting and domain weighting , are proposed to apply instance weighting to nmt---in this paper , two instance weighting technologies , i . e . , sentence weighting and domain weighting with a dynamic weight learning strategy , are proposed for nmt | 1 |
dependency parsing is the task of predicting the most probable dependency structure for a given sentence---sentiment analysis is a technique to classify documents based on the polarity of opinion expressed by the author of the document ( cite-p-16-1-13 ) | 0 |
we implement the classifiers using the text classification framework dkpro tc which includes all of the abovementioned classifiers---we use the word2vec tool to pre-train the word embeddings | 0 |
ju et al designed a sequential stack of flat ner layers that detects nested entities---ju et al dynamically stack multiple flat ner layers and extract outer entities based on the inner ones | 1 |
galley et al propose the ghkm scheme to model the string-to-tree mapping---galley et al proposes a method for extracting tree transducer rules from a parallel corpus | 1 |
relation extraction is the task of finding relations between entities in text , which is useful for several tasks such as information extraction , summarization , and question answering ( cite-p-14-3-7 )---we evaluated the models trained on the task data for the extraction of the relations mentioned above from biomedical abstracts | 0 |
we described two models for relation classification with which participated in the semeval-2018 task 7 , subtasks 1.1 and 1.2 on relation classification : an svm model and a cnn model---with which we participated in the semeval 2018 task 7 , subtask 1 on semantic relation classification : an svm model and a cnn model | 1 |
a 5-gram language model on the english side of the training data was trained with the kenlm toolkit---an english 5-gram language model is trained using kenlm on the gigaword corpus | 1 |
relation extraction is the task of detecting and characterizing semantic relations between entities from free text---relation extraction ( re ) has been defined as the task of identifying a given set of semantic binary relations in text | 1 |
we have presented efficient algorithms for maximum expected f-score decoding---we show that the maximum expected f-score decoding problem can be solved in polynomial time | 1 |
zhang et al impose a sparsity prior over the rule probabilities to prevent the search from having to consider all the rules found in the viterbi biparses---zhang et al use variational bayes with a sparsity prior over the parameters to prevent the size of the grammar to explode when allowing for adjacent terminals in the viterbi biparses to chunk together | 1 |
koehn and knight automatically induce the initial seed bilingual dictionary by using identical spelling features such as cognates and similar contexts---koehn and knight construct the seed dictionary automatically based on identical spelled words in the two languages | 1 |
in the above mentioned apple , orange , microsoft example , we encourage apple and orange to share the same topic label a and try to push apple and microsoft to the same topic b---in the above mentioned apple , orange , microsoft example , we encourage apple and orange to share the same topic label a and try to push apple and microsoft to the same topic | 1 |
in our paper , we use te to compute connectivity between nodes of the graph and apply the weighted minimum vertex cover ( w mvc ) algorithm on the graph to select the sentences for the summary---in our paper , we use te to compute connectivity between nodes of the graph and apply the weighted minimum vertex cover ( w mvc ) algorithm on the graph | 1 |
the model is able to exploit phrasal and structural system-weighted consensus and also to utilize existing information about word ordering present in the target hypotheses---model is able to exploit phrasal and structural system-weighted consensus and also able to utilize existing information about word ordering present in the target hypotheses | 1 |
as in we apply our approach to a linear chain conditional random field model using the mallet toolkit 1 with default parameters---we solve this sequence tagging problem using the mallet implementation of conditional random fields | 1 |
basic , lexical and heads features are standard in role labeling---coreference resolution is the task of determining which mentions in a text are used to refer to the same real-world entity | 0 |
the ud annotation has evolved by reconstruction of the standford dependencies and it uses a slightly extended version of google universal tag set for part of speech---the annotation scheme is based on an evolution of stanford dependencies and google universal part-of-speech tags | 1 |
yarowsky proposes a method for word sense disambiguation , which is based on monolingual bootstrapping---yarowsky presented an approach that significantly reduces the amount of labeled data needed for word sense disambiguation | 1 |
table 1 summarizes test set performance in bleu , nist and ter---table 1 shows the performance for the test data measured by case sensitive bleu | 1 |
finkel and manning apply this method to dependency parsing , by using a hierarchical bayesian model---recent work using span-level end-to-end models have seen success in nlp tasks following the same pattern as re and semantic role labeling , | 0 |
a pun is a word used in a context to evoke two or more distinct senses for humorous effect---pun is a figure of speech that consists of a deliberate confusion of similar words or phrases for rhetorical effect , whether humorous or serious | 1 |
we further show that this model is useful for disambiguating polysemous verbs in context---sentiment analysis is a technique to classify documents based on the polarity of opinion expressed by the author of the document ( cite-p-16-1-13 ) | 0 |
relation extraction ( re ) is the task of extracting semantic relationships between entities in text---and an argument model finds trees that are linguistically more plausible | 0 |
feature engineering is a critical part for supervised model---wordnet has been used in many tasks relying on word-based similarity , including document and image retrieval systems | 0 |
we use srilm for training a trigram language model on the english side of the training corpus---we trained a tri-gram hindi word language model with the srilm tool | 1 |
mikolov et al reported their vector-space word representation is able to reveal linguistic regularities and composite semantics using simple vector addition and subtraction---mikolov et al found that the learned word representations capture meaningful syntactic and semantic regularities referred to as linguistic regularities | 1 |
we used a 5-gram language model with modified kneser-ney smoothing , built with the srilm toolkit---we employed the glove as the word embedding for the esim | 0 |
for language modeling , we used the trigram model of stolcke---coreference resolution is the task of partitioning a set of mentions ( i.e . person , organization and location ) into entities | 0 |
we use the 100-dimensional pre-trained word embeddings trained by word2vec 2 and the 100-dimensional randomly initialized pos tag embeddings---in addition , we utilize the pre-trained word embeddings with 300 dimensions from for initialization | 1 |
dakka and cucerzan trained an svm classifier by using features related to the structure of wikipedia articles---finally , we used kenlm to create a trigram language model with kneser-ney smoothing on that data | 0 |
we use adadelta to update the parameters during training---to train our models , which are fully differentiable , we use the adadelta optimizer | 1 |
we further used a 5-gram language model trained using the srilm toolkit with modified kneser-ney smoothing---these language models were built up to an order of 5 with kneser-ney smoothing using the srilm toolkit | 1 |
for all classifiers , we used the scikit-learn implementation---we implemented the different aes models using scikit-learn | 1 |
kataria et aland sen used a latent topic model to learn the context model of entities---sen proposed a latent topic model to learn the context entity association | 1 |
optimizing for clustering-level accuracy---without necessarily optimizing for clustering-level accuracy | 1 |
for query-focused summarization , we use word vectors from word2vec which allows us to obtain better similarity scores between the sentences and the queries---for the cluster- based method , we use word2vec 2 which provides the word vectors trained on the google news corpus | 1 |
the word embeddings are pre-trained , using word2vec 3---transitionbased and graph-based models have attracted the most attention of dependency parsing in recent years | 0 |