text: string (lengths 82 to 736)
label: int64 (0 or 1)
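The rows below alternate between a text field, two citation sentences joined by "---", and its binary label on the following line. A minimal parsing sketch for this layout, in Python; the filename "pairs.txt" is a hypothetical placeholder, and the alternating text/label layout is an assumption read off the rows shown here:

```python
# Minimal sketch: parse the dump below into (sentence_a, sentence_b, label) triples.
# Assumptions: a plain-text file ("pairs.txt" is a hypothetical name) whose
# non-empty lines alternate between a text row (two sentences joined by "---")
# and a 0/1 label row, as in the data rows that follow.
from typing import List, Tuple

def load_pairs(path: str) -> List[Tuple[str, str, int]]:
    with open(path, encoding="utf-8") as f:
        lines = [ln.strip() for ln in f if ln.strip()]
    examples = []
    # Pair each text line with the label line that follows it.
    for text, label in zip(lines[0::2], lines[1::2]):
        left, _, right = text.partition("---")
        examples.append((left.strip(), right.strip(), int(label)))
    return examples

if __name__ == "__main__":
    for a, b, y in load_pairs("pairs.txt")[:3]:
        print(y, "|", a[:60], "||", b[:60])
```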
we implement an in-domain language model using the sri language modeling toolkit---we train a 4-gram language model on the xinhua portion of english gigaword corpus by srilm toolkit
1
we work with the phrase-based smt framework as the baseline system---we trained word embeddings using word2vec on 4 corpora of different sizes and types
0
we use the l2-regularized logistic regression of liblinear as our term candidate classifier---classifier we use the l2-regularized logistic regression from the liblinear package , which we accessed through weka
1
keskar et al . observe for the mnist , timit , and cifar dataset , that the generalization gap is not due to overfitting or overtraining , but due to different generalization capabilities of the local minima the networks converge to---for the mnist , timit , and cifar dataset , that the generalization gap is not due to overfitting or overtraining , but due to different generalization capabilities of the local minima
1
we used the moses toolkit to build mt systems using various alignments---our joint model provides a precise mathematical formulation of answer chunk quality
0
relation extraction is a subtask of information extraction that finds various predefined semantic relations , such as location , affiliation , rival , etc. , between pairs of entities in text---relation extraction ( re ) is the task of recognizing relationships between entities mentioned in text
1
we apply our approach to train a semantic parser that uses 77 relations from freebase in its knowledge representation---sources of supervision allows us to train an accurate semantic parser for any knowledge base
1
mihalcea et al use both corpusbased and knowledge-based measures of the semantic similarity between words---mihalcea et al proposed a method to measure the semantic similarity of words or short texts , considering both corpus-based and knowledge-based information
1
relation extraction is the task of tagging semantic relations between pairs of entities from free text---recently , distributed representations have been widely used in a variety of natural language processing tasks
0
we first train a word2vec model on fr-wikipedia 11 to obtain non contextual word vectors---we use skipgram model to train the embeddings on review texts for k-means clustering
1
analogously , cui et al proposed a joint model for scfg rule selection---but we find that our syntactic tailoring can lead to embeddings that match the parsing performance of brown ( on all test sets ) in a fraction of the training time
0
in this work , we calculated automatic evaluation scores for the translation results using a popular metrics called bleu---we measured performance using the bleu score , which estimates the accuracy of translation output with respect to a reference translation
1
we use the stanford parser for english language data---we use the stanford parser for obtaining all syntactic information
1
feature hashing is a technique of converting string features to vectors---in this paper , we discuss methods for automatically creating models of dialog structure
0
smor is a finite-state based morphological analyzer covering the productive word formation processes of german , namely inflection , derivation and compounding---thus , we can efficiently solve the algorithm by using the hungarian method
0
as a classifier , we choose a first-order conditional random field model---for simplicity , we use the well-known conditional random fields for sequential labeling
1
li and yarowsky proposed an unsupervised method for extracting the mappings from chinese abbreviations and their full-forms---li and yarowsky introduced an unsupervised method used to extract phrases and their abbreviation pair using parallel dataset and monolingual corpora
1
the srilm toolkit is used to train 5-gram language model---our 5-gram language model is trained by the sri language modeling toolkit
1
to evaluate the effectiveness of our proposed method , we conduct experiments on a widely used chinese word-segmented corpora , namely pku , from the second sighan international chinese word segmentation bakeoff---in the experiments , we use two widely used and freely available 1 manually word-segmented corpora , namely , pku and msr , from the second sighan international chinese word segmentation bakeoff
1
distinguishing between south-slavic languages has been researched by ljubešic et al , tiedemann and ljubešic , ljubešic and kranjcic , and ljubešic and kranjcic---we use the moses smt toolkit to test the augmented datasets
0
liu et al , 2012 ) formulated identifying opinion relations between words as an alignment process---to resolve these problems , liu et al formulated identifying opinion relations between words as an monolingual alignment process
1
socher et al present a model for compositionality based on recursive neural networks---socher et al , 2012 , uses a recursive neural network in relation extraction , and further use lstm
1
we evaluated on the data set with real errors---we report performance on data with real typing errors
1
the stochastic gradient descent with back-propagation is performed using adadelta update rule---in particular , the stochastic gradient descent with back-propagation is performed using adadelta update rule
1
word embeddings have proven to be effective models of semantic representation of words in various nlp tasks---finally , word embeddings have also been used as features to improve performance in a variety of supervised tasks such as sequence labeling and dependency parsing
1
we applied our system to the xtag english grammar 3 , which is a large-scale fb-ltag grammar for english---we apply our system to the latest version of the xtag english grammar , which is a large-scale fb-ltag grammar
1
we test this hypothesis with an approximate randomization approach---we compute statistical significance using the approximate randomization test
1
the word embeddings required by our proposed methods were trained using the gensim 5 implementation of the skip gram version of word2vec---we then used word2vec to train word embeddings with 512 dimensions on each of the prepared corpora
1
a kernel is a measure of similarity between every pair of examples in the data and a kernel-based machine learning algorithm accesses the data only through these kernel values---a kernel is a function that calculates the inner product of two transformed vectors of a high dimensional feature space using the original feature vectors as shown in eq
1
we use evaluation metrics similar to those in---we use the evaluation criterion described in
1
empirical results from testing on ntcir factoid questions show a 40 % performance improvement in chinese answer selection and a 45 % improvement in japanese answer selection---chinese and japanese factoid questions show that the framework significantly improved answer selection performance
1
twitter is a microblogging social network launched in 2006 with 310 million active users per month and where 340 million tweets are daily generated 1---twitter is the medium where people post real time messages to discuss on the different topics , and express their sentiments
1
we implemented the different aes models using scikit-learn---we used scikit-learn library for all the machine learning models
1
such networks demand a considerable amount of labeled data for each specific task---since these networks require a large amount of labeled data
1
matsuyoshi et al organized a hierarchical japanese fe dictionary , named tsutsuji---matsuyoshi et al first built a dictionary of japanese fes named tsutsuji
1
it uses the linguistic knowledge of possible conjuncts and diphthongs in bengali and their equivalents in english---whereas the present work uses linguistic knowledge in the form of possible conjuncts and diphthongs in bengali
1
in this paper , we present two deep-learning systems that competed at semeval-2017 task 4 ( cite-p-18-3-16 )---in this paper we present two deep-learning systems that competed at semeval-2017 task 4
1
in figure 1 we define the position of m4 to be right after m3 ( because “the” is after “held” in left-to-right order on the target side )---we used a 5-gram language model with modified kneser-ney smoothing implemented using the srilm toolkit
0
the rules were generated using the apriori tool 4 , an implementation of the apriori algorithm for association rule mining---comma splices are one of the errors addressed in the 2014 conll shared task on grammatical error correction
0
in this paper , we propose to translate from video pixels to natural language with a single deep neural network---in this paper , we propose to translate videos directly to sentences using a unified deep neural network
1
we presented a simple multi-task learning algorithm that jointly trains three word alignment models over disjoint bitexts---we present a multi-task learning approach that jointly trains three word alignment models over disjoint bitexts
1
keyphrases can be extracted automatically by generating a list of keyphrase candidates , ranking these candidates , and selecting the top-ranked candidates as keyphrases---automatically can be performed by generating a list of keyphrase candidates , ranking these candidates , and selecting the top-ranked candidates as keyphrases
1
our baseline discriminative model uses first-and second-order features provided in---the first-stage model we use is a first-order dependency model , with labeled dependencies , as described in
1
we showed that the use of non-standard ccg combinators is highly effective for parsing sentences with the types of phenomena seen in spontaneous , unedited natural language---we describe a learning algorithm that retains the advantages of using a detailed grammar , but is highly effective in dealing with phenomena seen in spontaneous natural language
1
this paper has proposed a novel noisy channel model of speech repairs and has used it to identify reparandum words---this paper describes a noisy channel model of speech repairs , which can identify and correct repairs
1
the clustering method used in this work is latent dirichlet allocation topic modelling---we have applied topic modeling based on latent dirichlet allocation as implemented in the mallet package
1
barzilay and mckeown used a monolingual parallel corpus to obtain paraphrases---barzilay and mckeown acquire paraphrases from a monolingual parallel corpus using a co-training algorithm
1
we have participated in semeval-2017 task 4 on sentiment analysis in twitter , subtasks a ( message polarity classification ) , b ( topicbased message polarity classification ) ( cite-p-11-1-9 )---ensemble methods , in particular , have proved crucial to reach top performance on this task and other related document categorization tasks like the discrimination of language variants
0
ji et al proposed a latent variable rnn for modeling discourse relations between sentences---ji et al introduced an extra latent variable to a hierarchical rnn model to represent discourse relation
1
the current state-of-the-art methods regard word segmentation as a sequence labeling problem---the popular method is to regard word segmentation as a sequence labeling problems
1
to extract part-of-speech tags , phrase structure trees , and typed dependencies , we use the stanford parser on both train and test sets---we use the part-ofspeech tagger , the named-entity recognizer , the parser , and the coreference resolution system
1
we estimate a 5-gram language model using interpolated kneser-ney discounting with srilm---recently , syntax-based models such as transition-based parser have been used for detecting disfluencies
0
bond et al use grammars to paraphrase the whole source sentence , covering aspects like word order and minor lexical variations , but not content words---bond et al use grammars to paraphrase the source side of training data , covering aspects like word order and minor lexical variations but not content words
1
we also use a 4-gram language model trained using srilm with kneser-ney smoothing---a 4-gram language model is trained on the monolingual data by srilm toolkit
1
for around 25 % of the events , the most informative temporal expression is even five or more sentences away---the most informative temporal expression is more than one sentence apart from the event
1
we use 300-dimensional word embeddings from glove to initialize the model---also , we initialized all of the word embeddings using the 300 dimensional pre-trained vectors from glove
1
framing is a phenomenon largely studied and debated in the social sciences , where , for example , researchers explore how news media shape debate around policy issues by deciding what aspects of an issue to emphasize , and what to exclude---relation extraction is a fundamental task in information extraction
0
those models were trained using word2vec skip-gram and cbow---these were trained using the word2vec implementation in the gensim toolkit
1
the weights of the different feature functions were optimised by means of minimum error rate training---we apply our model to the english portion of the conll 2012 shared task data , which is derived from the ontonotes corpus
0
we adopt a neural crf with a long-short-termmemory feature layer for baseline pos tagger---mikolov et al demonstrate a recurrent neural network language model for word ordering
0
zelenko et al proposed a tree kernel over shallow parse tree representations of sentences---zelenko et al developed a kernel over parse trees for relation extraction
1
our measure can be exactly calculated in quadratic time---on a type level , this method does not give satisfying results for verbs whose aspectual value varies across readings ( henceforth ‘ aspectually polysemous verbs ’ ) , which are far from exceptional ( see section 3 )
0
we evaluated translation quality using uncased bleu and ter---we evaluated translation quality with the case-insensitive bleu-4 and nist
1
the baseline system was trained on all available bilingual data and used a 4-gram lm with modified kneserney smoothing , trained with the srilm toolkit---the iwslt phrase-based baseline system is trained on all available bilingual data , and uses a 4-gram lm with modified kneser-ney smoothing , trained with the srilm toolkit
1
further analyses showed that the compound features reduced errors on rare-words and ambiguous words and could be better utilized by linear models---experiments showed that the compound features not only improved the performances on several nlp tasks
1
the second system of our ensemble uses word embeddings---the second system of our ensemble uses features based on word embeddings
1
in recent years , corpus based approaches to machine translation have become predominant , with phrase based statistical machine translation being the most actively progressing area---corpus-based approaches to machine translation have become predominant , with phrase-based statistical machine translation being the most actively progressing area
1
the language models were interpolated kneser-ney discounted trigram models , all constructed using the srilm toolkit---the target language model was a standard ngram language model trained by the sri language modeling toolkit
1
distant supervision has been successfully used for the problem of relation extraction---the idea of distant supervision has widely used in the task of relation extraction
1
both files are concatenated and learned by word2vec---the embeddings have been trained with word2vec on twitter data
1
polanyi and zaenen investigate the usage of contextual valence shifters and discourse connectives inside a text---polanyi and zaenen argue that discourse structure is important in polarity classification
1
the german text was further preprocessed by splitting german compound words using the frequency-based method described in---in order to reduce the source vocabulary size translation , the german text was preprocessed by splitting german compound words with the frequencybased method described in
1
lakoff and johnson state that conceptual metaphor is a language phenomenon in which a speaker understands a particular concept through the use of another concept---in practical use , aggressive memory reuse in opennmt provides a saving of 70 % of gpu memory
0
we used the sri language modeling toolkit to train a fivegram model with modified kneser-ney smoothing---for the language model , we used srilm with modified kneser-ney smoothing
1
first , arabic is a morphologically rich language ( cite-p-19-3-7 )---we estimate the parameters by maximizingp using the expectation maximization algorithm
0
by further coupling such relations , cpra significantly outperforms pra , in terms of both predictive accuracy and model interpretability---by further coupling such relations , cpra substantially outperforms pra , in terms of not only predictive accuracy
1
the corpus for this experiment consists of 172,481 bilingual sentences of english and japanese extracted from a large-scale travel conversation corpus---the corpus for the experiment was extracted from the basic travel expression corpus , a collection of conversational travel phrases for chinese , english , japanese and korean
1
subsequently , hashimoto et al introduced a method which jointly learns word and phrase embeddings by using a variety of predicateargument structures---we use the stanford pos-tagger and name entity recognizer
0
we used kenlm with srilm to train a 5-gram language model based on all available target language training data---the 4-gram language model was trained with the kenlm toolkit on the english side of the training data and the english wikipedia articles
1
he et al investigate stacked denoising auto-encoders to learn entity representation---he et al learn enttiy representation via stacked denoising auto-encoders
1
the state-ofthe-art baseline is a standard phrase-based smt system tuned with mert---the phrase-based baseline is a standard phrasebased smt system tuned with mert and contains a hierarchical reordering model
1
wu proposes a bilingual segmentation grammar extending the terminal rules by including phrase pairs---wu presents a better-constrained grammar designed to only produce tail-recursive parses
1
the weights of the log-linear interpolation model were optimized via minimum error rate training on the ted development set , using 200 best translations at each tuning iteration---minimum error training under bleu was used to optimise the feature weights of the decoder with respect to the dev2006 development set
1
alternatively , blacoe and lapata show that latent word representations can be combined with simple elementwise operations to identify the semantic similarity of larger units of text---blacoe and lapata , 2012 ) demonstrate the effectiveness of combining latent representations with simple element-wise operations , for the purpose of identifying semantic similarity amongst larger text units
1
for english , we convert the ptb constituency trees to dependencies using the stanford dependency framework---we apply the rules to each sentence with its dependency tree structure acquired from the stanford parser
1
we used the moses toolkit for performing statistical machine translation---we used the moseschart decoder and the moses toolkit for tuning and decoding
1
to our knowledge , triviaqa is the first dataset where questions are authored by trivia enthusiasts , independently of the evidence documents---to our knowledge , triviaqa is the first dataset where full-sentence questions are authored organically ( i . e . independently of an nlp task ) and evidence documents
1
log linear models have been proposed to incorporate those features---relation extraction ( re ) is the task of determining semantic relations between entities mentioned in text
0
we use srilm toolkit to build a 5-gram language model with modified kneser-ney smoothing---we built a 5-gram language model from it with the sri language modeling toolkit
1
lexical simplification is a popular task in natural language processing and it was the topic of a successful semeval task in 2012 ( cite-p-14-1-9 )---they designed class-type transformation templates and used the transformation-based error-driven learning method of brill to learn what word delimiters should be modified
0
to that end , several lexical resources have been created , such as wordnet-affect and sentiwordnet---examples include wordnetaffect and sentiwordnet , both of which stem from expert annotation
1
relation extraction ( re ) is the task of extracting instances of semantic relations between entities in unstructured data such as natural language text---relation extraction is the task of finding relational facts in unstructured text and putting them into a structured ( tabularized ) knowledge base
1
to adapt the lssvm model to enable the efficient search of query spelling correction , we study how features can be designed---the translation quality is evaluated by case-insensitive bleu-4 metric
0
sentence compression is the task of producing a shorter form of a single given sentence , so that the new form is grammatical and retains the most important information of the original one ( cite-p-15-3-1 )---sentence compression is a task of creating a short grammatical sentence by removing extraneous words or phrases from an original sentence while preserving its meaning
1
similarly , the third-best team , qcri , used features that model a comment in the context of the entire comment thread , focusing on user interaction---similarly , the third-best team , qcri , used features to model a comment in the context of the entire comment thread , focusing on user interaction
1
morante et al also discuss the need for corpora which cover other domains---for example , morante et al discuss the need for corpora which covers different domains apart from biomedical
1
knowledge-based work , such as used hand-coded rules or supervised machine learning based on an annotated corpus to perform wsd---knowledge-based work , such as used hand-coded rules or supervised machine learning based on annotated corpus to perform wsd
1
in this research , we use the pre-trained google news dataset 2 by word2vec algorithms---on all datasets and models , we use 300-dimensional word vectors pre-trained on google news
1
zhou et al explore various features in relation extraction using support vector machine---i will explore to make convolution kernels more scalable
0
using the navigational context , spacebook pushes point-of-interest information which can then initiate tourist information tasks using the qa module---using the navigational context , spacebook can push point-of-interest information which can then initiate touristic exploration tasks using the qa module
1