| text (string, length 82–736) | label (int64, 0 or 1) |
|---|---|
we also report the results using bleu and ter metrics---second , we propose a novel abstractive summarization technique based on an optimization framework that generates section-specific summaries for wikipedia | 0 |
ghosh et al , 2014 , used a linear tagging approach based on conditional random fields---ghosh et al proposed a linear tagging approach for argument identification using conditional random fields and n-best results | 1 |
we use srilm to train a 5-gram language model on the xinhua portion of the english gigaword corpus 5th edition with modified kneser-ney discounting---feng and cohn present another generative word-based markov chain translation model which exploits a hierarchical pitman-yor process for smoothing , but it is only applied to induce word alignments | 0 |
we use 4-gram language models in both tasks , and conduct minimum-error-rate training to optimize feature weights on the dev set---we perform minimum-error-rate training to tune the feature weights of the translation model to maximize the bleu score on development set | 1 |
the target-side language models were estimated using the srilm toolkit---target language models were trained on the english side of the training corpus using the srilm toolkit | 1 |
we use stanford log-linear part-of-speech tagger to produce pos tags for the english side---we also extract subject-verb-object event representations , using the stanford part-of-speech tagger and maltparser | 1 |
we use the logistic regression implementation of liblinear wrapped by the scikit-learn library---we use scikit learn python machine learning library for implementing these models | 1 |
in this paper , we addressed this problem by developing expected f-measure training for an rnn shift-reduce parsing model---in this paper , we present a global neural network parsing model , optimized for a task-specific loss | 1 |
to this end , we postulate a language model for generating reviews---for this , we propose a language model for generating reviews | 1 |
the word alignment models are trained using fast-align---the word alignment models were trained with fastalign | 1 |
our empirical results have shown that this approach outperforms a previous graph-based approach with an absolute gain of 9 %---for all three classifiers , we used the word2vec 300d pre-trained embeddings as features | 0 |
we used moses for pbsmt and hpbsmt systems in our experiments---we used the moses toolkit to build mt systems using various alignments | 1 |
word2vec offers efficient methods to pre-train word representations in an unsupervised fashion such that they reflect word similarities and relations---in this work we use the open-source toolkit moses | 0 |
then we use the standard minimum error-rate training to tune the feature weights to maximize the system's bleu score---we utilize minimum error rate training to optimize feature weights of the paraphrasing model according to ndcg | 1 |
our system is based on the phrase-based part of the statistical machine translation system moses---our baseline is an in-house phrase-based statistical machine translation system very similar to moses | 1 |
we train a kn-smoothed 5-gram language model on the target side of the parallel training data with srilm---we then lowercase all data and use all unique headlines in the training data to train a language model with the srilm toolkit | 1 |
the authors in suggest a number of features , some of which we incorporate in our current da-ner system , namely , the head and trailing 2-grams , 3-grams , and 4-grams characters in a word---the authors in suggest a number of features , that we incorporate a subset of in our da ner system , namely , the head and trailing bigrams , trigrams , and 4-grams characters | 1 |
it can be seen that we achieved 40 % improvements over our legacy system---experiments point out an array of issues that future qa systems may need to solve | 0 |
accurately identifying events in unstructured text is a very difficult task---we show experimentally that this technique gives substantially better performance than pra and its variants , improving mean average precision from 0.432 to 0.528 | 0 |
i refer to the task of identifying these independent threads and untangling them from one another as multiple narrative disentanglement ( mnd )---i introduce the task of multiple narrative disentanglement ( mnd ) , in which the aim is to tease these narratives apart | 1 |
a voice building process using the hidden markov model -based speech synthesis technique has been investigated to create personalized vocas---recently , a new voice building process using the hidden markov model -based speech synthesis technique has been investigated to create personalized vocas | 1 |
we use srilm for training a trigram language model on the english side of the training corpus---we train a 4-gram language model on the xinhua portion of english gigaword corpus by srilm toolkit | 1 |
training is done using stochastic gradient descent over mini-batches with the adadelta update rule---they are trained via stochastic gradient descent with shuffled mini-batches and the adadelta update rule | 1 |
it is possible to compute the moore-penrose pseudoinverse using the svd in the following way---the feature weights are tuned to optimize bleu using the minimum error rate training algorithm | 0 |
in this work , we present a sentence similarity using esa and syntactic similarities---we use word2vec from as the pretrained word embeddings | 0 |
we use the moses toolkit with a phrase-based baseline to extract the qe features for the x l , x u , and testing---we used the disambig tool provided by the srilm toolkit | 0 |
to express such algorithms as deduction systems , we use the notion of d-rules---to represent these algorithms as deduction systems , we use the notion of d-rules | 1 |
we used l2-regularized logistic regression classifier as implemented in liblinear---we used an l2-regularized l2-loss linear svm to learn the attribute predictions | 1 |
word sense disambiguation ( wsd ) is a problem of finding the relevant clues in a surrounding context---our 5-gram language model is trained by the sri language modeling toolkit | 0 |
the disadvantage of word-to-word translation is overcome by phrase-based translation and log-linear model combination---it is straightforward to integrate the predicate translation model into phrase-based smt | 1 |
experimental results suggest that they rival standard reference-based metrics in terms of correlations with human judgments on new test instances---abstract meaning representation is a compact , readable , whole-sentence semantic annotation | 0 |
the system was evaluated in terms of bleu score , word error rate and sentence error rate---the output was evaluated against reference translations using bleu score which ranges from 0 to 1 | 1 |
neelakantan et al proposed the multisense skip-gram model , that jointly learns context cluster prototypes and word sense embeddings---neelakantan et al proposed an extension of the skip-gram model combined with context clustering to estimate the number of senses for each word as well as learn sense embedding vectors | 1 |
we evaluate the translation quality using the case-insensitive bleu-4 metric---socher et al extend the recursive neural networks with matrix-vector spaces , and use mv-rnn to learn representations along the constituency tree for relation classification | 0 |
importantly , word embeddings have been effectively used for several nlp tasks , such as named entity recognition , machine translation and part-of-speech tagging---the probabilistic language model is constructed on google web 1t 5-gram corpus by using the srilm toolkit | 0 |
turney and littman decide on semantic orientation of a word using statistical association with a set of positive and negative paradigm words---turney and littman use pointwise mutual information and latent semantic analysis to determine the similarity of the word of unknown polarity with the words in both positive and negative seed sets | 1 |
to set up our systems , we employ the open source statistical machine translation toolkit jane , which is freely available for non-commercial use---semantic role labeling ( srl ) is a major nlp task , providing a shallow sentence-level semantic analysis | 0 |
we used the corpus of 52 million tweets used in with the tokenizer described in the same work---pre-trained word embeddings provide a simple means to attain semi-supervised learning in natural language processing tasks | 0 |
semantic role labeling ( srl ) is defined as the task to recognize arguments for a given predicate and assign semantic role labels to them---semantic role labeling ( srl ) is a form of shallow semantic parsing whose goal is to discover the predicate-argument structure of each predicate in a given input sentence | 1 |
in our experiments , we use 300-dimension word vectors pre-trained by glove---for word embeddings , we used popular pre-trained word vectors from glove | 1 |
despite the important outcomes associated with alignment , its sources are not clear---and its degree correlates with important social factors such as power and likability , its sources are still uncertain | 1 |
to solve the feature coverage problem with the em algorithm , meng et al leverage the unlabeled parallel data to learn unseen sentiment words---sarcasm is a form of speech in which speakers say the opposite of what they truly mean in order to convey a strong sentiment | 0 |
we used the srilm toolkit to create 5-gram language models with interpolated modified kneser-ney discounting---we use srilm for training the 5-gram language model with interpolated modified kneser-ney discounting | 1 |
we trained kneser-ney discounted 5-gram language models on each available corpus using the srilm toolkit---for all data sets , we trained a 5-gram language model using the sri language modeling toolkit | 1 |
we have test our method by using homogeneous smt systems and a single pivot language---for this task , we use glove pre-trained word embedding trained on common crawl corpus | 0 |
we used the srilm toolkit to create 5-gram language models with interpolated modified kneser-ney discounting---we use srilm toolkit to train a trigram language model with modified kneser-ney smoothing on the target side of training corpus | 1 |
collobert et al used a large amount of unlabeled data to map words to high-dimensional vectors and a neural network architecture to generate an internal representation---collobert et al first introduced an end-to-end neural-based approach with sequence-level training and uses a convolutional neural network to model the context window | 1 |
furthermore , we train a 5-gram language model using the sri language toolkit---we train trigram language models on the training set using the sri language modeling tookit | 1 |
kiperwasser and goldberg use birnns to obtain node representation with sentence-level information---kiperwasser and goldberg describe a dependency parser based on a bilstm layer representing the input sentence | 1 |
we used the scikit-learn implementation of a logistic regression model using the default parameters---we applied liblinear via its scikitlearn python interface to train the logistic regression model with l2 regularization | 1 |
in our work , we have used the stanford log-linear part-of-speech tagger to do pos tagging---we implement the weight tuning component according to the minimum error rate training method | 0 |
we use the stanford parser for english language data---after this we parse articles using the stanford parser | 1 |
srilm toolkit is used to build these language models---goldwater and griffiths propose a bayesian approach for learning the hmm structure | 0 |
the fourth category of auxiliary features uses model-specific explanations---for the fourth category of features , we have proposed and evaluated the novel idea of using explanations | 1 |
srilm toolkit has been used to develop the language models using target language sentences from the training and tuning sets of parallel corpora---the srilm toolkit is used to build the character-level language model for generating the lm features in nsw detection system | 1 |
relation extraction is the task of finding relationships between two entities from text---the un-pre-marked japanese corpus is used to train a language model using kenlm | 0 |
for language model , we use a trigram language model trained with the srilm toolkit on the english side of the training corpus---acquiring such a corpus is expensive and time-consuming | 0 |
in this work , we propose a new approach to obtain temporal relations from time anchors , i.e. absolute time value , of all mentions---in this paper , we propose a new approach to obtain temporal relations from absolute time value ( a.k.a. time anchors ) , which is suitable for texts containing rich temporal information | 1 |
the embedding layer in the model is initialized with 300-dimensional glove word vectors obtained from common crawl---for the character-based model we use publicly available pre-trained character embeddings derived from glove vectors trained on common crawl | 1 |
we implemented linear models with the scikit learn package---we use the linear svm classifier from scikit-learn | 1 |
semantic role labeling ( srl ) is the task of automatically annotating the predicate-argument structure in a sentence with semantic roles---semantic role labeling ( srl ) is the task of labeling the predicate-argument structures of sentences with semantic frames and their roles ( cite-p-18-1-2 , cite-p-18-1-19 ) | 1 |
word embeddings have been empirically shown to preserve linguistic regularities , such as the semantic relationship between words---continuous representations of words have been found to capture syntactic and semantic regularities in language | 1 |
in our experiments we used ims as the representative supervised wsd system---since we are interested in a fully supervised wsd tool , ims is selected in our work | 1 |
we propose a framework to select and rank mandatory matching phrases ( mmp ) for question answering---we propose a new model that identifies important terms and phrases in a natural language question , providing better query analysis | 1 |
the machine translation back-end is powered by the open source moses decoder---the smt system is implemented using moses and the nmt system is built using the fairseq toolkit | 1 |
in our example , one should treat " page " , " plant " and " gibson " also as named-entity mentions and aim to disambiguate them together with " kashmir "---in our example , one should treat " page " , " plant " and " gibson " also as named-entity mentions | 1 |
we also obtain the embeddings of each word from word2vec---we first train a word2vec model on fr-wikipedia 11 to obtain non contextual word vectors | 0 |
conditional random fields are probabilistic models for labelling sequential data---conditional random fields are a probabilistic framework for labeling structured data and model p(y|x) | 1 |
we use the partial tree kernel to compute k_tk as it is the most general convolution tree kernel , which at the same time shows rather good efficiency---we use the partial tree kernel to measure the similarity between two trees , since it is suitable for dependency parsing | 1 |
ganchev et al propose another approach for agreement between the directed models by adding constraints on the alignment posteriors---ganchev et al propose postcat which uses posterior regularization to enforce posterior agreement between the two models | 1 |
it is rather frustrating to language engineers that the n-gram model is the workhorse of virtually every speech recognition system---into the models , the n-gram model remains the state of the art , used in virtually all speech recognition systems | 1 |
we also report the results using bleu and ter metrics---we use bleu as the metric to evaluate the systems | 1 |
morphological analysis is a staple of natural language processing for broad languages---morphological analysis is the basis for many nlp applications , including syntax parsing , machine translation and automatic indexing | 1 |
a core of order k of a graph g is a maximal connected subgraph of g in which every vertex v has at least degree k---a core of order k of g is a maximal connected subgraph of g in which every vertex v has at least degree k | 1 |
we used srilm for training the 5-gram language model with interpolated modified kneser-ney discounting---in this paper , we extended the integer linear programming to a quadratic formulation , arguing that it simplifies the modeling | 0 |
for the sick and msrvid experiments , we used 300-dimension glove word embeddings---for word embeddings , we used popular pre-trained word vectors from glove | 1 |
we use mstparser of mcdonald et al and focus on non-projective dependency parse trees with nontyped edges---throughout this work , we use mstperl , an implementation of the mstparser of mcdonald et al , with first-order features and non-projective parsing | 1 |
speech is a single step within a larger system---speech is a major component of modern user interfaces as it is the natural means of human communication | 1 |
furthermore , several annotation efforts have been devoted to developing resources for different languages , needed for supervised learning---several annotation efforts have been devoted to developing resources for different languages , needed for supervised learning | 1 |
compressing deep learning models is an active area of current research---mihalcea et al compared knowledgebased and corpus-based methods , using word similarity and word specificity to define one general measure of text semantic similarity | 0 |
we also use editor score as an outcome variable for a linear regression classifier , which we evaluate using 10-fold cross-validation in scikit-learn---we used the logistic regression implementation in scikit-learn for the maximum entropy models in our experiments | 1 |
for chinese-english , we train a standard phrase-based smt system over the available 21,863 sentences---our english-french system is a phrase-based smt system with a combination of two decoders , moses and docent | 1 |
xu et al learned robust relation representations from sdp through a cnn , and proposed a straightforward negative sampling strategy to improve the assignment of subjects and objects---xu et al propose to learn a robust representation using a convolutional neural network that works on the dependency path between subjects and objects , and propose a negative sampling strategy to address the relation directionality | 1 |
in this paper , we proposed two algorithms for automatically ontologizing binary semantic relations into wordnet : an anchoring approach and a clustering approach---in this paper , we take the next step and explore two algorithms for ontologizing binary semantic relations into wordnet | 1 |
we use glove word embeddings , which are 50-dimension word vectors trained with a crawled large corpus with 840 billion tokens---we use glove word embeddings , an unsupervised learning algorithm for obtaining vector representations of words | 1 |
an energy-based model was proposed by bordes et al to create disambiguated meaning embeddings , and neelakantan et al and tian et al extended the skip-gram model to learn multiple word embeddings---neelakantan et al proposed the mssg model which extends the skip-gram model to learn multi-prototype word embeddings by clustering the word embeddings of context words around each word | 1 |
mohammad and hirst show how distributional measures can be used to compute distance between very coarse word senses or concepts , and even obtain better results than traditional distributional similarity---marathe and hirst use distributional measures of conceptual distance , based on the methodology of mohammad and hirst to compute the relation between two words | 1 |
yang and kirchhoff used a back off model in a phrase-based smt system which translated word forms in the source language by hierarchical morphological abstractions---yang and kirchhoff proposed a backoff model for phrase-based smt that translated word forms in the source language by hierarchical morphological phrase level abstractions | 1 |
coreference resolution is the process of determining whether two expressions in natural language refer to the same entity in the world---coreference resolution is the problem of identifying which mentions ( i.e. , noun phrases ) refer to which real-world entities | 1 |
this paper complements the original paper by studying the algorithm empirically---this paper describes an empirical study of the phrase-based decoding algorithm | 1 |
unfortunately , this approach is difficult to utilize because it requires multiple segmenters that behave differently on the same input---ju et al designed a sequential stack of flat ner layers that detects nested entities | 0 |
we trained several language models on character and word level using kenlm from moses using default parameters---we trained a 3-gram language model on all the correct-side sentences using kenlm | 1 |
the probability of a word is governed by its latent topic , which is modeled as a categorical distribution in lda---we used the srilm toolkit to create 5-gram language models with interpolated modified kneser-ney discounting | 0 |
the standard minimum error rate training algorithm was used for tuning---90 % of the weights results in a more appreciable decrease of 1.0 bleu , the model is drastically smaller with 8m parameters , which is 26× fewer than the original teacher model | 0 |
we propose a framework to model human comprehension of discourse connectives---in this paper , we first describe an annotation transformation algorithm to automatically transform a human-annotated corpus | 0 |
even in light of all these advancements , there is still interest in a completely unsupervised method for pos induction for several reasons---in light of all these advancements , there is still interest in a completely unsupervised method for pos induction | 1 |
the disadvantage of word-to-word translation is overcome by phrase-based translation and log-linear model combination---we consider a phrase-based translation model and a hierarchical translation model | 1 |
the results show that our topic modelling approach outperforms the other two methods---we demonstrate that an lda-based topic modelling approach outperforms a baseline distributional semantic approach | 1 |
coreference resolution is the task of automatically grouping references to the same real-world entity in a document into a set---coreference resolution is the next step on the way towards discourse understanding | 1 |
the subtree ranking approach is a generalization of the perceptron-based approach---which is a generalization of current perceptron-based reranking methods | 1 |
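
Each data row above stores a sentence pair in the `text` column, with the two sentences joined by `---`, followed by a binary `label` (1 appears to mark pairs that describe the same method or claim, 0 unrelated pairs). Below is a minimal parsing sketch for rows of that shape; the file name `pairs.md` and the `sent_a`/`sent_b` keys are illustrative assumptions, not part of the source.

```python
# Minimal parsing sketch for rows shaped like the table above:
#   "<sentence a>---<sentence b> | <label> |"
# The file name "pairs.md" and the dict keys are illustrative
# assumptions, not part of the source.
import re

ROW = re.compile(r"^(?P<a>.*?)---(?P<b>.*?)\s*\|\s*(?P<label>[01])\s*\|$")

def load_pairs(path):
    """Return one {'sent_a', 'sent_b', 'label'} dict per data row."""
    pairs = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            m = ROW.match(line.strip())
            if m:  # the header and |---|---| separator do not match and are skipped
                pairs.append({
                    "sent_a": m["a"].strip(),
                    "sent_b": m["b"].strip(),
                    "label": int(m["label"]),
                })
    return pairs

if __name__ == "__main__":
    pairs = load_pairs("pairs.md")
    positives = sum(p["label"] for p in pairs)
    print(f"{len(pairs)} pairs, {positives} labeled 1")
```

Splitting on the first `---` assumes the separator never occurs inside a sentence itself, which holds for every row shown here.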