| text (string, lengths 82–736) | label (int64: 0 or 1) |
|---|---|
our word embeddings is initialized with 100-dimensional glove word embeddings---we compared sn models with two different pre-trained word embeddings , using either word2vec or fasttext | 0 |
however , we use a large 4-gram lm with modified kneser-ney smoothing , trained with the srilm toolkit , stolcke , 2002 and ldc english gigaword corpora---we used the srilm toolkit to create 5-gram language models with interpolated modified kneser-ney discounting | 1 |
in this paper , we consider the the task of unsupervised prediction of acceptability---in this paper we present the task of unsupervised prediction of speakers ’ acceptability | 1 |
event schema induction is the task of learning a representation of events ( e.g. , bombing ) and the roles involved in them ( e.g , victim and perpetrator )---event schema induction is the task of learning high-level representations of complex events ( e.g. , a bombing ) and their entity roles ( e.g. , perpetrator and victim ) from unlabeled text | 1 |
we use the nltk stopwords corpus to identify function words---we use the group average agglomerative clustering package within nltk | 1 |
more recently , the works in the area of bengali ner can be found in ekbal et al , and ekbal and bandyopadhyay with the crf , and svm approach , respectively---more recently , the related works in this area can be found in ekbal et al , ekbal and bandyopadhyay with the crf , and svm approach , respectively | 1 |
we observe that sictf is not only significantly more accurate than such baselines , but also much faster---on multiple real-world datasets , we observe that sictf is not only more accurate than kb-lda but also significantly faster | 1 |
the learning rule was adam with standard parameters---the learning rule was adam with default tensorflow parameters | 1 |
bunescu and mooney introduce multiple instance learning to handle the weak confidence in the assigned label---bunescu and mooney connect weak supervision with multi-instance learning and extend their relational extraction kernel to this context | 1 |
each system is optimized using mert with bleu as an evaluation measure---weights are optimized by mert using bleu as the error criterion | 1 |
semantic parsing is the problem of deriving a structured meaning representation from a natural language utterance---semantic parsing is the task of automatically translating natural language text to formal meaning representations ( e.g. , statements in a formal logic ) | 1 |
we use a gibbs sampling method for performing inference on our model---the development set is used to optimize feature weights using the minimum-error-rate algorithm | 0 |
multiword expressions or mwes can be understood as idiosyncratic interpretations or words with spaces wherein concepts cross the word boundaries or spaces---multiword expressions are lexical items that can be decomposed into single words and display idiosyncratic features | 1 |
tokenization and detokenization for both source and target texts were performed by our in-house text processing tools---we used in-house text processing tools for the tokenization and detokenization steps | 1 |
in this paper we have presented a maximum entropy ranking-based approach to russian stress prediction---pang et al applied machine learning based classifiers for sentiment classification on movie reviews | 0 |
we use dutch and spanish data sets from the conll 2002 the source language---we use the dutch data set from the conll 2002 shared task | 1 |
to this end , we use first-and second-order conditional random fields---to exploit these kind of labeling constraints , we resort to conditional random fields | 1 |
we use the stanford parser to derive the trees---we use stanford corenlp to obtain dependencies | 1 |
the models are estimated using srilm and converted to wfsts for use in ttm translation---the target-side language models were estimated using the srilm toolkit | 1 |
we train the concept identification stage using infinite ramp loss with adagrad---we apply online training , where model parameters are optimized by using adagrad | 1 |
in addition , we use an english corpus of roughly 227 million words to build a target-side 5-gram language model with srilm in combination with kenlm---we also use 200 million words from ldc arabic gigaword corpus to generate a 5-gram language model using srilm toolkit , stolcke , 2002 translation to be our source in each case | 1 |
wordnets play a central role in many natural language processing tasks---wordnets ( wns ) , play a central role in many natural language processing ( nlp ) tasks | 1 |
the optimisation of the feature weights of the model is done with minimum error rate training against the bleu evaluation metric---we introduce an algorithm that uses this hypothesis to classify a word sense in a given context | 0 |
all language models are created with the srilm toolkit and are standard 4-gram lms with interpolated modified kneser-ney smoothing---a more recent development was the use of conditional random field for pos tagging | 0 |
we extract continuous vector representations for concepts using the continuous log-linear skipgram model of mikolov et al , trained on the 100m word british national corpus---to train the link embeddings , we use the speedy , skip-gram neural language model of mikolov et al via their toolkit word2vec | 1 |
we presented a novel , fast approach for incorporating first-order implication rules into distributed representations of relations---we present a highly efficient method for incorporating implication rules into distributed representations | 1 |
we use the mallet implementation of a maximum entropy classifier to construct our models---as a model learning method , we adopt the maximum entropy model learning method | 1 |
3 in the literature ( cite-p-13-3-5 , cite-p-13-3-12 ) , translating romanized japanese or chinese names to chinese characters is also known as back-transliteration---recovering the original word from the transliterated target is called back-transliteration | 1 |
for lm training and interpolation , the srilm toolkit was used---all language models were trained using the srilm toolkit | 1 |
bannard and callison-burch introduced the pivoting approach , which relies on a 2-step transition from a phrase , via its translations , to a paraphrase candidate---bannard and callison-burch use a method that is also rooted in phrase-based statistical machine translation | 1 |
in this paper , we proposed a history-based structured learning approach that jointly detects entities and relations---this paper proposes a history-based structured learning approach that jointly extracts entities and relations | 1 |
we expect that a better binarization will also help improve the efficiency of chart parsing---we show that it is feasible to combine existing parsing speedup techniques with our binarization to achieve even better performance | 1 |
in addition , this measure takes into account context dependent word importance information---the reason for choosing svm is that it currently is the best performing machine learning technique across multiple domains and for many tasks , including language identification | 0 |
for the word-embedding based classifier , we use the glove pre-trained word embeddings---we use pre-trained 50 dimensional glove vectors 4 for word embeddings initialization | 1 |
this leads to an improved statistical word alignment performance , and has the advantages of improving the translation model and generalizing to unseen verb forms , during translation---by the base form of the head verb , we achieve a better statistical word alignment performance , and are able to better estimate the translation model and generalize to unseen verb forms during translation | 1 |
coreference resolution is the process of linking together multiple referring expressions of a given entity in the world---although coreference resolution is a subproblem of natural language understanding , coreference resolution evaluation metrics have predominately been discussed in terms of abstract entities and hypothetical system errors | 1 |
our system learns word and sentence embeddings jointly by training a multilingual skip-gram model together with a cross-lingual sentence similarity model---since our multilingual skip-gram and cross-lingual sentence similarity models are trained jointly , they can inform each other through the shared word embedding layer | 1 |
we evaluate our method on a range of languages taken from the conll shared tasks on multilingual dependency parsing---following pitler et al , we report in table 1 figures for the training sets of six languages used in the conll-x shared task on dependency parsing | 1 |
srilm was used for 5-gram language modeling and kneserney smoothing for both german-to-english and english-to-german translation---the srilm toolkit was used for training the language models using kneser-ney smoothing | 1 |
it has been proven to be useful for many applications , including microblog retrieval , query expansion , a---hashtags have been proven to be useful for many applications , including microblog retrieval , query expansion , a | 1 |
word alignment is the task of identifying corresponding words in sentence pairs---word alignment is a key component of most endto-end statistical machine translation systems | 1 |
semantic role labeling ( srl ) is a form of shallow semantic parsing whose goal is to discover the predicate-argument structure of each predicate in a given input sentence---semantic role labeling ( srl ) is the task of labeling predicate-argument structure in sentences with shallow semantic information | 1 |
sennrich et al proposed a method using synthetic parallel texts , in which target monolingual corpora are translated back into the source language---in a second stage , it chooses the best among the candidate compressions using a support vector machine regression ( svr ) model | 0 |
the idea behind our method is to utilize certain layout structures and linguistic pattern---behind our method is to utilize certain layout structures and linguistic pattern | 1 |
grosz and sidner claim that a robust model of discourse understanding must use multiple knowledge sources in order to recognize the complex relationships that utterances have to one another---we then created trigram language models from a variety of sources using the srilm toolkit , and measured their perplexity on this data | 0 |
for our primary results , we perform random replications of parameter tuning , as suggested by clark et al---we perform random replications of parameter tuning , as suggested by clark et al | 1 |
g lossy ¡¯s extractions have proven useful as seed definitions in an unsupervised wsd task---in spite of this broad attention , the open ie task definition has been lacking | 0 |
coreference resolution is a task aimed at identifying phrases ( mentions ) referring to the same entity---we used svm multiclass from svm-light toolkit as the classifier | 0 |
we built a trigram language model with kneser-ney smoothing using kenlm toolkit---we estimated unfiltered 5-gram language models using lmplz and loaded them with kenlm | 1 |
the framework of translation-model based retrieval has been introduced by berger and lafferty---statistical translation models for retrieval have first been introduced by berger and lafferty | 1 |
the language model is a trigram-based backoff language model with kneser-ney smoothing , computed using srilm and trained on the same training data as the translation model---for single systems ; while we noted that reranking provides a general approach applicable to any system that can generate n-best lists | 0 |
bilingual lexicons play a vital role in many natural language processing applications such as machine translation or crosslanguage information retrieval---bilingual lexicons serve as an indispensable source of knowledge for various cross-lingual tasks such as cross-lingual information retrieval or statistical machine translation | 1 |
for evaluation metric , we used bleu at the character level---for evaluation , we used the case-insensitive bleu metric with a single reference | 1 |
after standard preprocessing of the data , we train a 3-gram language model using kenlm---for language modeling , we computed 5-gram models using irstlm 7 and queried the model with kenlm | 1 |
our experimental results further confirm the strength of the good grief model---our empirical results further confirm the strength of the model | 1 |
for bi we use 2-gram kenlm models trained on the source training data for each domain---the process of identifying the correct meaning , or sense of a word in context , is known as word sense disambiguation ( wsd ) | 0 |
the baseline of our approach is a statistical phrase-based system which is trained using moses---we use the moses mt framework to build a standard statistical phrase-based mt model using our old-domain training data | 1 |
text summarization is the task of automatically condensing a piece of text to a shorter version while maintaining the important points---text summarization is a task to generate a shorter and concise version of a text while preserving the meaning of the original text | 1 |
sentiment classification is the task of labeling a review document according to the polarity of its prevailing opinion ( favorable or unfavorable )---sentiment classification is a task to predict a sentiment label , such as positive/negative , for a given text and has been applied to many domains such as movie/product reviews , customer surveys , news comments , and social media | 1 |
for the automatic evaluation we used the bleu and meteor algorithms---for the automatic evaluation , we used the bleu metric from ibm | 1 |
due to the lack of benchmark data for implicit discourse relation analysis , earlier work used unlabeled data to generate synthetic implicit discourse data---identification of user intent also has important implications in building intelligent conversational qa systems | 0 |
for the evaluation of the results we use the bleu score---in the experiments presented in this paper , we use bleu scores as training labels | 1 |
for all models , we use fixed pre-trained glove vectors and character embeddings---as a classifier , we choose a first-order conditional random field model | 0 |
hence , our model has a more powerful representation capability than the traditional mention-pair or entity-mention model---in contrast to the traditional mention-pair model , our model can capture information beyond single mention pairs | 1 |
this paper proposes how to automatically identify korean comparative sentences from text documents---in this paper , we have presented how to extract comparative sentences from korean text documents | 1 |
furthermore , we train a 5-gram language model using the sri language toolkit---we use srilm for training a trigram language model on the english side of the training data | 1 |
the feature weights 位 m are tuned with minimum error rate training---feature weights are tuned using minimum error rate training on the 455 provided references | 1 |
the experiments discussed in section 6 show promising results for these directions---in section 6 show promising results for these directions | 1 |
we extract all word pairs which occur as 1-to-1 alignments , and later refer to them as the list of word pairs---we extract all word pairs which occur as 1-to-1 alignments and later refer to them as a list of word pairs | 1 |
the selected plain sentence pairs are further parsed by stanford parser on both the english and chinese sides---finally , we conduct paired bootstrap sampling to test the significance in bleu scores differences | 0 |
regarding to this , cite-p-20-3-11 explicitly feed this target word into the attention model , and demonstrate the significant improvements in alignment accuracy---to take long-term dependencies into account , cite-p-20-5-1 propose a lookahead attention by additionally modeling | 1 |
to generate the greatest breadth of synonyms , the tool uses a distributional thesaurus , wordnet and a paraphrase generation tool---the system automatically generates a thesaurus using a measure of distributional similarity and an untagged corpus | 1 |
however , because accommodation reflects social processes that extend over time within an interaction , one may expect a certain consistency of motion within the stylistic shift---because accommodation reflects social processes that extend over time within an interaction , one may expect a certain consistency of motion within the stylistic shift | 1 |
arabic is a highly inflectional language with 85 % of words derived from trilateral roots ( alfedaghi and al-anzi 1989 )---morphologically , arabic is a non-concatenative language | 1 |
coreference resolution is the task of automatically grouping references to the same real-world entity in a document into a set---coreference resolution is the task of determining whether two or more noun phrases refer to the same entity in a text | 1 |
our 5-gram language model is trained by the sri language modeling toolkit---in this paper , we propose a new document clustering approach | 0 |
larochelle and lauly proposed a neural autoregressive topic model to compute the hidden units of the network efficiently---table 2 displays the quality , of the automatic translations generated for the test partitions | 0 |
judgments of groups , however , can be more reliably predicted using a siamese neural network , which outperforms all other approaches by a wide margin---but that judgments of groups can be more reliably predicted using a siamese neural network , which outperforms all other approaches by a wide margin | 1 |
distributional semantic models extract vectors representing word meaning by relying on the distributional hypothesis , that is , the idea that words that are related in meaning will tend to occur in similar contexts---distributional models use statistics of word cooccurrences to predict semantic similarity of words and phrases , based on the observation that semantically similar words occur in similar contexts | 1 |
a 4-grams language model is trained by the srilm toolkit---srilm toolkit is used to build these language models | 1 |
we trained a 4-gram language model on the xinhua portion of gigaword corpus using the sri language modeling toolkit with modified kneser-ney smoothing---we train a 4-gram language model on the xinhua portion of the gigaword corpus using the sri language toolkit with modified kneser-ney smoothing | 1 |
semantic parsing is the mapping of text to a meaning representation---for our baseline , we used a small parallel corpus of 30k english-spanish sentences from the europarl corpus | 0 |
beside , we also present a novel feature type based on word embeddings that are induced using neural language models over a large raw cor-pus---in addition , we can use pre-trained neural word embeddings on large scale corpus for neural network initialization | 1 |
we trained a 5-gram language model on the xinhua portion of gigaword corpus using the srilm toolkit---we built a 5-gram language model from it with the sri language modeling toolkit | 1 |
we could also do more than simply use the sentences and paragraphs as their own definitions---we used the text of the phrase , sentence or paragraph to serve as its own definition | 1 |
model fitting for our model is based on the expectation-maximization algorithm---hence we use the expectation maximization algorithm for parameter learning | 1 |
the language models were interpolated kneser-ney discounted trigram models , all constructed using the srilm toolkit---the system used a tri-gram language model built from sri toolkit with modified kneser-ney interpolation smoothing technique | 1 |
we use the pre-trained glove 50-dimensional word embeddings to represent words found in the glove dataset---for the character-based model we use publicly available pre-trained character embeddings 3 de- rived from glove vectors trained on common crawl | 1 |
like we used support vector machines via the classifier svmlight---in the experiments reported here we use support vector machines through the svm light package | 1 |
therefore , we can learn embeddings for all languages in wikipedia without any additional annotation or supervision---because we can obtain multilingual word and title embeddings for all languages in wikipedia without any additional data beyond wikipedia | 1 |
ppdb we use lexical features from the paraphrase database---we created 5-gram language models for every domain using srilm with improved kneserney smoothing on the target side of the training parallel corpora | 0 |
we adapted the moses phrase-based decoder to translate word lattices---acquiring such a corpus is expensive and time-consuming | 0 |
cite-p-15-1-10 present another extension of weak nets , downward connected nets---as mentioned in section 5 , another extension of weak nets , downward connected nets , has been proposed by | 1 |
coreference resolution is the task of partitioning the set of mentions of discourse referents in a text into classes ( or ‘ chains ’ ) corresponding to those referents ( cite-p-12-3-14 )---we consider a phrase-based translation model and a hierarchical translation model | 0 |
this work presented four ensemble methods for learning metaembeddings from multiple embedding sets : conc , svd , 1 to n and 1 to n +---learning , this paper proposes an ensemble approach of combining different public embedding sets | 1 |
the system takes friend seeds provided by users and generates a ranked list according to the likelihood of a test user being in the group---by first providing a small number of seeding users , then the system ranks the friend list according to how likely a user belongs to the group indicated by the seeds | 1 |
following the release of snli , there has been tremendous interest in the task , and many endto-end neural models were developed , achieving promising results---following the release of the large-scale snli dataset ( cite-p-17-1-0 ) , many endto-end neural models have been developed for the task , achieving high accuracy | 1 |
a 4-gram language model is trained on the monolingual data by srilm toolkit---the language model is trained on the target side of the parallel training corpus using srilm | 1 |
unification grammars are widely accepted as an expressive means for describing the structure of natural languages---grammars have been proposed , and they are used extensively by computational linguists to describe the structure of a variety of natural languages | 1 |
we used srilm for training the 5-gram language model with interpolated modified kneser-ney discounting ,---we use the sri language model toolkit to train a 5-gram model with modified kneser-ney smoothing on the target-side training corpus | 1 |