Columns:
  text: string, 82 to 736 characters; two sentence snippets joined by "---"
  label: int64, 0 or 1
Each example below gives the text field on one line, followed by its label on the next line.
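For concreteness, the sketch below parses this alternating text/label layout into records. It is a minimal illustration, not part of the dataset release: the file name rows.txt, the PairExample record, and the parse_rows helper are names introduced here; the only thing taken from the preview itself is the line layout and the "---" separator.

from dataclasses import dataclass

@dataclass
class PairExample:
    sentence_a: str  # first snippet of the text field
    sentence_b: str  # second snippet of the text field
    label: int       # 0 or 1

def parse_rows(lines):
    """Turn alternating text/label lines into PairExample records."""
    examples = []
    # pair each even-indexed text line with the odd-indexed label line after it
    for text_line, label_line in zip(lines[::2], lines[1::2]):
        a, _, b = text_line.partition("---")
        examples.append(PairExample(a.strip(), b.strip(), int(label_line)))
    return examples

if __name__ == "__main__":
    # rows.txt is a hypothetical file that follows the layout shown in this preview
    with open("rows.txt", encoding="utf-8") as f:
        lines = [ln.strip() for ln in f if ln.strip()]
    for ex in parse_rows(lines):
        print(ex.label, ex.sentence_a[:40], "|", ex.sentence_b[:40])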
we use the stanford pos tagger to obtain the perspectives p and l---we use the stanford pos tagger to obtain the lemmatized corpora for the parss task
1
bleu is a precision measure based on m-gram count vectors---specifically , we tested the methods word2vec using the gensim word2vec package and pretrained glove word embeddings
0
nouns , verbs , adjectives and adverbs are grouped into sets of cognitive synonyms , each expressing a distinct concept---nouns , verbs , adjectives , and adverbs are grouped into sets of cognitive synonyms , each expressing a distinct concept
1
word embeddings can be pre-trained using tools such as word2vec or glove , in which case a table lookup is enough---word embeddings can either be initialized randomly or use the output of a tool like word2vec or glove
1
gale et al . refer to this as the ‘ one sense per discourse ’ property ( cite-p-14-3-0 )---because of the ‘ one sense per discourse ’ claim ( cite-p-14-3-0 )
1
we have used a simple heuristic-based baseline parser as done by lin et al for implicit connectives---we make use of the automatic pdtb discourse parser from lin et al to obtain the discourse relations over an input article
1
distant supervision is a well-known idea for training robust statistical classifiers---distant supervision is a scheme to generate noisy training data for relation extraction by aligning entities of a knowledge base with text
1
we implement logistic regression with scikit-learn and use the lbfgs solver---we used the scikit-learn implementation of a logistic regression model using the default parameters
1
we proposed a novel neural method for ddi extraction using both textual and molecular information---in this study , we propose a novel method to utilize both textual and molecular information for ddi extraction
1
we apply sri language modeling toolkit to train a 4-gram language model with kneser-ney smoothing---we further used a 5-gram language model trained using the srilm toolkit with modified kneser-ney smoothing
1
we used srilm to build a 4-gram language model with interpolated kneser-ney discounting---we used the srilm software 4 to build language models as well as to calculate cross-entropy based features
1
furthermore , we train a 5-gram language model using the sri language toolkit---we used the target side of the parallel corpus and the srilm toolkit to train a 5-gram language model
1
our mt system is a phrase-based system that is developed using the moses statistical machine translation toolkit---to get a dictionary of word embeddings , we use the word2vec tool 2 and train it on the chinese gigaword corpus
0
a trigram language model with modified kneser-ney discounting and interpolation was used as produced by the srilm toolkit---the lms are built using the srilm language modelling toolkit with modified kneser-ney discounting and interpolation
1
we call 'but' and 'therefore' explicit discourse connectives ( dcs )---in this paper , we proposed a novel neural inductive teaching framework ( nite ) to transfer knowledge from existing domain-specific ner models into an arbitrary deep neural network
0
in this paper , we propose a new method of event extraction by well using cross-entity inference---in this paper , we propose a new method of transductive inference , named cross-entity inference , for event extraction by well
1
leacock and chodorow used an nb classifier , and indicated that by combining topic context and local context they could achieve higher accuracy---in comparison with other studies , leacock and chodorow lacked collocations , ng and lee lacked local context , and escudero used local context and collocations with smaller sizes
1
for language models , we use the srilm linear interpolation feature---we used the srilm toolkit to train a 4-gram language model on the xinhua portion of the gigaword corpus , which contains 238m english words
0
in this paper , we used the decision list to solve the homophone problem---it has been demonstrated that cnns produce state-of-the-art results in many nlp tasks such as text classification and sentiment analysis
0
we observe that a good question is a natural composition of interrogatives , topic words , and ordinary words---in standard supervised learning problems , we explore reducing a regular supervised learning problem to the few-shot meta-learning scenario
0
most prominently , it has been used for wsd , noun learning and amr parsing and generation---most prominently , they have been used for word sense disambiguation , noun learning and recently , amr parsing and generation
1
we use the aligned english and german sentences in europarl for our experiments---in our experiments , we use the english-french part of the europarl corpus
1
we used moses for pbsmt and hpbsmt systems in our experiments---we used moses as the implementation of the baseline smt systems
1
we implement the pbsmt system with the moses toolkit---we used the moses toolkit to build mt systems using various alignments
1
yang et al borrowed negative instances from different genres such as news websites and proverbs---mihalcea and strapparava and yang et al borrowed negative instances from different genres such as news websites or proverbs
1
we used the liblinear-java library 2 with the l2-regularized logistic regression method for both trigger detection and edge detection---we trained the classifiers for relation extraction using l1-regularized logistic regression with default parameters using the liblinear package
1
our data is taken from the conll 2006 and 2007 shared tasks---in the multi-agent decentralized pomdp reach implicature-rich interpretations simply as a by-product of the way they reason about each other to maximize joint utility
0
twitter is a huge microblogging service with more than 500 million tweets per day from different locations of the world and in different languages ( cite-p-8-1-9 )---twitter is a social platform which contains rich textual content
1
gru and lstm have been shown to yield comparable performance---nevertheless , gru has been experimentally proven to be comparable in performance to lstm
1
coreference resolution is the task of partitioning the set of mentions of discourse referents in a text into classes ( or ‘ chains ’ ) corresponding to those referents ( cite-p-12-3-14 )---active learning is a framework that makes it possible to efficiently train statistical models by selecting informative examples from a pool of unlabeled data
0
to generate the n-gram language models , we used the kenlm n-gram , language modeling tool---for all systems , we trained a 6-gram language model smoothed with modified kneser-ney smoothing using kenlm
1
in order to do so , we use the moses statistical machine translation toolkit---for our baseline we use the moses software to train a phrase based machine translation model
1
our translation model is implemented as an n-gram model of operations using the srilm toolkit with kneser-ney smoothing---the language model is a trigram-based backoff language model with kneser-ney smoothing , computed using srilm and trained on the same training data as the translation model
1
after standard preprocessing of the data , we train a 3-gram language model using kenlm---we use kenlm 3 for computing the target language model score
1
the penn discourse treebank provides annotations for the arguments and relation senses of one hundred pre-selected discourse connectives over the news portion of the penn treebank corpus---the penn discourse treebank is the largest available annotated corpora of discourse relations over 2,312 wall street journal articles
1
such models perform well in single domain---this phenomenon is quite common in many domains
1
in order to find the predominant sense of a target word we use a thesaurus acquired from automatically parsed text based on the method of lin---the method uses a thesaurus acquired from automatically parsed text based on the method described by lin
1
the log-linear parameter weights are tuned with mert on the development set---for pcfg parsing , we select the berkeley parser
0
thus , in section 4 , we present a tool to efficiently access wikipedia's edit history---in section 4 , we describe tools allowing to efficiently access wikipedia 's edit history
1
thanks to the refined translation models , this approach produces better translations with a much shorter re-decoding time---decoding paths adopted by other mt systems , this framework achieves better translation quality with much less re-decoding time
1
specifically , we tested the methods word2vec using the gensim word2vec package and pretrained glove word embeddings---we use the pre-trained glove 50-dimensional word embeddings to represent words found in the glove dataset
1
we use word embedding pre-trained on newswire with 300 dimensions from word2vec---we trained a continuous bag of words model of 400 dimensions and window size 5 with word2vec on the wiki set
1
the machine translation engines for language translation use the marian decoder for translation , with neural models trained with the nematus toolkit---the machine translation engines for language translation currently use the marian 7 decoder for translation with neural mt models trained with the nematus toolkit
1
semantic role labeling ( srl ) is a major nlp task , providing a shallow sentence-level semantic analysis---semantic role labeling ( srl ) is defined as the task to recognize arguments for a given predicate and assign semantic role labels to them
1
on the penn chinese treebank 5.0 , it achieves an f-measure of 98.43 % , significantly outperforms previous works although using a single classifier with only local features---that word ( containing at least one character ) is the appropriate unit for chinese language processing
0
the second and third benchmarks are the rg-65 and the mc-30 datasets that contain 65 and 30 pairs of nouns respectively and have been given similarity rankings by humans---the fourth and fifth benchmarks are the rg-65 and the mc-30 datasets that contain 65 and 30 pairs of nouns respectively and have been given similarity rankings by humans
1
ner is a task to identify names in texts and to assign names with particular types ( cite-p-12-3-17 , cite-p-12-3-19 , cite-p-12-3-18 , cite-p-12-3-2 )---ner is a fundamental task in many natural language processing applications , such as question answering , machine translation , text mining , and information retrieval ( cite-p-15-3-11 , cite-p-15-3-6 )
1
all features used by pavlick et al for formality detection and by danescu et al for politeness detection are included in our analysis---for formality detection and by danescu et al for politeness detection have been included in our analysis for a comparison against baselines
1
our experiments show the models are able to achieve notable improvements over baselines containing a recurrent lm---our direct system uses the phrase-based translation system
0
finally , following bousmalis et al , we further encourage the domain-specific features to be mutually exclusive with the shared features by imposing soft orthogonality constraints---bousmalis et al extend the framework of ganin et al by additionally encouraging the private and shared features to be mutually exclusive
1
these ciphers use a substitution table as the secret key---key ciphers also use a secret substitution function
1
we also show how this approach can be combined with discourse features previously shown to be beneficial for the task of answer reranking---and find that the combination of all four feature types is most beneficial for answer reranking
1
we use the 300-dimensional skip-gram word embeddings built on the google-news corpus---of all the errors , determiner and preposition errors are the two main research topics
0
we use the glove word vector representations of dimension 300---we use the publicly available glove vectors 2 of length 100
1
we use pre-trained vectors from glove for word-level embeddings---in this task , we use the 300-dimensional 840b glove word embeddings
1
the characteristic of this method is that it is fully automatic and can be applied to arbitrary html documents---because this method is fully automatic and can be applied to arbitrary html documents
1
coreference resolution is the task of automatically grouping references to the same real-world entity in a document into a set---coreference resolution is the process of linking together multiple expressions of a given entity
1
second , the sentence-plan-ranker ( spr ) ranks the list of output sentence plans , and then selects the top-ranked plan---in the second phase , the sentence-plan-ranker ( spr ) ranks the sample sentence plans , and then selects the top-ranked output to input to the surface
1
we used the srilm toolkit to train a 4-gram language model on the xinhua portion of the gigaword corpus , which contains 238m english words---for all data sets , we trained a 5-gram language model using the sri language modeling toolkit
1
by fixing this , we get new measures that improve performance over not just pmi but on other popular co-occurrence measures as well---entity linking ( el ) is the task of automatically linking mentions of entities ( e.g . persons , locations , organizations ) in a text to their corresponding entry in a given knowledge base ( kb ) , such as wikipedia or freebase
0
named entity ( ne ) transliteration is the process of transcribing a ne from a source language to some target language based on phonetic similarity between the entities---named entity ( ne ) transliteration is the process of transcribing a ne from a source language to a target language based on phonetic similarity between the entities
1
semantic parsing is the task of mapping natural language to a formal meaning representation---semantic parsing is the mapping of text to a meaning representation
1
chen and ji applied various kinds of lexical , syntactic and semantic features to address the special issues in chinese---chen and ji applied various kinds of lexical , syntactic and semantic features to address the specific issues in chinese
1
spam has been historically studied in the contexts of web text or email---spam has historically been investigated in the contexts of e-mail and the web
1
in future work , we plan to extend the parameterization of our models to not only predict phrase orientation , but also the length of each displacement as in ( cite-p-10-1-0 )---in future work , we plan to extend the parameterization of our models to not only predict phrase orientation , but also the length of each displacement
1
compared with the source content , the annotated summary is short and well written---summary generation remains a significant open problem for natural language processing
0
after standard preprocessing of the data , we train a 3-gram language model using kenlm---to rerank the candidate texts , we used a 5-gram language model trained on the europarl corpus using kenlm
1
lau et al leverage a common framework to address sense induction and disambiguation based on topic models---our results show consistent improvement over a monolingual baseline
0
then we interpolate the above two models to further improve word alignment between l1 and l2---as a pivot language , we can build a word alignment model for l1 and l2 based on the above two models
1
different dialogue act labeling standards and datasets have been provided , including switchboard-damsl , icsi-mrda and ami---different dialogue act labeling standards and datasets have been provided in recent years , including switchboard-damsl , icsi-mrda and ami
1
the method produces performance higher than the previous best results on conll ’ 00 syntactic chunking and conll ’ 03 named entity chunking ( english and german )---on conll ’ 00 syntactic chunking and conll ’ 03 named entity chunking ( english and german ) , the method exceeds the previous best systems ( including those which rely on hand-crafted resources
1
in a language generation system , a content planner embodies one or more "plans" that are usually hand-crafted , sometimes through manual analysis of target text---in a language generation system , a content planner typically uses one or more " plans " to represent the content to be included in the output
1
we use 300-dimensional vectors that were trained and provided by word2vec tool using a part of the google news dataset 4---here we use the word vectors trained by skip-gram model on 100 billion words of google news 6
1
however , their model has a high order of time complexity , and thus can not be applied in practice---despite its superior performance , their model is infeasible in most realistic situations
1
hasegawa et al introduce the task of relation discovery---hasegawa et al tried to extract multiple relations by choosing entity types
1
we used kenlm with srilm to train a 5-gram language model based on all available target language training data---we trained a 5-gram sri language model using the corpus supplied for this purpose by the shared task organizers
1
yarowsky presented an unsupervised wsd system which rivals supervised techniques---( yarowsky , 1995 ) demonstrated that semi-supervised wsd could be successful
1
simulated asr errors are typically used to improve asr applications---authorship attribution is the task of determining the author of a disputed text given a set of candidate authors and samples of their writing
0
we initialize the word embeddings for our deep learning architecture with the 100-dimensional glove vectors---we use 300-dimensional glove vectors trained on 6b common crawl corpus as word embeddings , setting the embeddings of outof-vocabulary words to zero
1
this baseline is adapted from yang et al , who applied attention on both the word level and the sentence level in a hierarchical lstm network for document representation---yang et al extends the hierarchical lstm network of li et al , applying attention for weighting different words and sentences , giving state-of-the-art accuracies for document classification
1
le and mikolov introduce an unsupervised algorithm that learns fixed-length feature representations from variable-length pieces of texts---le and mikolov presented the paragraph vector algorithm to learn a fixed-size feature representation for documents
1
a frontier node is a node of which the span and the complement span do not overlap with each other---frontier node is the key in the ssmt model , as it identifies the bilingual information which is consistent with both the parse tree and alignment matrix
1
for example , on a book review website , each book entry contains a title , the author ( s ) and an introduction of the book---on a book review website , each book entry contains a title , the author ( s ) and an introduction of the book
1
the results produced by this method were slightly better than those of other approaches---but this approach showed significantly lower performance than alternatives
1
the word vectors of vocabulary words are trained from a large corpus using the glove toolkit---we use pre-trained glove vector for initialization of word embeddings
1
chambers et al focused on classifying the temporal relation type of event-event pairs using previously learned event attributes as features---chambers et al and tatu and srikanth identify event attributes and event-event features which are used to describe temporal relations between events
1
we first examine various correlates of perceived creativity based on information theoretic measures and the connotation of words , then present experiments based on supervised learning that give us further insights on how different aspects of lexical composition collectively contribute to the perceived creativity---based on information theoretic measures and the connotation of words , then present experiments based on supervised learning that give us further insights on how different aspects of lexical composition collectively contribute to the perceived creativity
1
this paper proposes a new hardware algorithm for high speed morpheme extraction , and also describes its implementation on a specific machine---we present the mineral ( medical information extraction and linking ) system for recognizing and normalizing mentions of clinical conditions , with which we participated in task 14 of semeval 2015 evaluation campaign
0
mikolov et al and mikolov et al introduce efficient methods to directly learn high-quality word embeddings from large amounts of unstructured raw text---negation is a grammatical category that comprises devices used to reverse the truth value of propositions
0
we analyze the relationships of focus with speech acts to tone marks---by analyzing speech acts , we can understand how speech with prosody can convey distinct speaker
1
we used weka for all our classification experiments---for all the experiments we used the weka toolkit
1
in the unsupervised setting , only a handful of seeds is used to define the two polarity classes---in an unsupervised setting where a handful of seeds is used to define the two polarity classes
1
the weight parameter 位 is tuned by a minimum error-rate training algorithm---the corresponding weight is trained through minimum error rate method
1
the correlations are above 95 % for all of the four runs , which means in general , a better performance on mt will lead to a better performance on retrieval---table 2 presents the translation performance in terms of various metrics such as bleu , meteor and translation edit rate
0
truncation size is set to math-w-14-8-0-55---we also use 200 million words from ldc arabic gigaword corpus to generate a 5-gram language model using the srilm toolkit ( stolcke , 2002 ) translation to be our source in each case
0
we used the open source moses decoder package for word alignment , phrase table extraction and decoding for sentence translation---lm features gave rise to significant improvement on arabic-to-english and chinese-to-english translation on nist mt06 and mt08 newswire data
0
the main goal of this paper is to show that discontinuous phrases can greatly improve the performance of phrase-based systems---in this paper , we presented a generalization of conventional phrase-based decoding to handle discontinuities
1
lsa has remained a popular approach for asag and been applied in many variations---lsa has remained as a popular approach for asag and been applied in many variations
1
idioms are also relatively non-compositional---while not all semantic outliers are idioms , non-compositional
1
the language model was trained using kenlm---the 5-gram target language model was trained using kenlm
1