text: string (length 82 to 736)
label: int64 (values 0 or 1)
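The two columns above describe a sentence-pair classification set: each text value holds two sentences joined by --- and, judging from the rows below, label appears to mark whether the two sentences report the same method or finding (1) or are unrelated (0). A minimal loading sketch follows; the file name pairs.csv and the use of pandas are assumptions, since the dump does not say how the rows are stored.

    # minimal sketch: load and split rows with the schema above
    # "pairs.csv" is a hypothetical file name, not given in the dump
    import pandas as pd

    df = pd.read_csv("pairs.csv")  # expected columns: text (string), label (int64)

    for _, row in df.head(3).iterrows():
        # each text value is two sentences joined by "---"
        a, b = row["text"].split("---", 1)
        print(f"label={row['label']}")
        print("  a:", a.strip())
        print("  b:", b.strip())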
the output was evaluated against reference translations using bleu score which ranges from 0 to 1---we created 5-gram language models for every domain using srilm with improved kneser-ney smoothing on the target side of the training parallel corpora
0
the weights associated to feature functions are optimally combined using the minimum error rate training---we set the feature weights by optimizing the bleu score directly using minimum error rate training on the development set
1
the n-gram language models are trained using the srilm toolkit or similar software developed at hut---all language models are created with the srilm toolkit and are standard 4-gram lms with interpolated modified kneser-ney smoothing
1
we trained a 4-gram language model with kneser-ney smoothing and unigram caching using the sri-lm toolkit---we used 5-gram models , estimated using the sri language modeling toolkit with modified kneser-ney smoothing
1
contrast to joint methods , this paper proposes to exploit argument information explicitly for ed---in this work , we propose to exploit argument information explicitly for ed
1
recently , with the development of neural networks , deep learning based models attract much attention in various tasks---additionally , a back-off 2-gram model with good-turing discounting and no lexical classes was built from the same training data , using the srilm toolkit
0
the use of word unigrams is a standard approach in text classification , and has also been successfully used to predict reading difficulty---using appropriate word weighting functions is known to improve the performance of text categorization
1
we propose min ( memory interaction network ) , a novel lstm-based deep multi-task learning framework for the ate task---yessenalina and cardie model each word as a matrix and combine words using iterated matrix multiplication
0
twitter is a microblogging service , which according to latest statistics , has 284 million active users , 77 % outside the us that generate 500 million tweets a day in 35 different languages---twitter is a microblogging site where people express themselves and react to content in real-time
1
the hierarchical phrase-based translation model , which adopts a synchronous context-free grammar , is considered to be prominent in capturing global reorderings---we present connectionist bidirectional rnn models which are especially suited for sentence classification tasks
0
latent dirichlet allocation is a generative probabilistic topic model where documents are represented as random mixtures over latent topics , characterized by a distribution over words---latent dirichlet allocation is a popular probabilistic model that learns latent topics from documents and words , by using dirichlet priors to regularize the topic distributions
1
we use the penn wsj treebank for our experiments---as with , we train the language model on the penn treebank
1
the task is to classify whether each sentence provides the answer to the query---qa , the task is to pick sentences that are most relevant to the question
1
we do so because character n-gram based approaches have largely outperformed function word based approaches indicating that lexical words may also help with authorship attribution---however , character n-gram based approaches have largely outperformed function word based approaches indicating that some lexical words may also help with authorship attribution
1
the original pmg implementation has utilised conditional random fields , due to the considerable representation capabilities of this model---we use logistic regression as the per-class binary classifier , implemented using liblinear
0
to the best of our knowledge the connection between the decipherment problem and the quadratic assignment problem was not known---we use srilm toolkit to build a 5-gram language model with modified kneser-ney smoothing
0
in this paper , we present a machine learning approach to the identification and resolution of chinese anaphoric zero pronouns---and our present work is the first to perform both identification and resolution of chinese anaphoric zero pronouns using a machine learning approach
1
a 3-gram language model was trained from the target side of the training data for chinese and arabic , using the srilm toolkit---a 4-gram language model was trained on the target side of the parallel data using the srilm toolkit
1
the results reported here indicate that the proposed methodology yields usable results in understanding the qur ’ an on the basis of its lexical semantics---preliminary results indicate that construction and semantic interpretation of cluster trees based on lexical frequency is a useful approach to discovering thematic interrelationships among the suras that constitute the qur ’ an
1
we use srilm toolkits to train two 4-gram language models on the filtered english blog authorship corpus and the xinhua portion of gigaword corpus , respectively---we used the srilm toolkit to create 5-gram language models with interpolated modified kneser-ney discounting
1
we used the penn treebank to perform empirical experiments on the proposed parsing models---in our previous work , we used the wsj penn treebank to train the mixed trigram model
1
our machine translation system is a phrase-based system using the moses toolkit---we used moses , a phrase-based smt toolkit , for training the translation model
1
collobert et al initially introduced neural networks into the srl task---in our experiments we use a publicly available implementation of conditional random fields
0
to capture the hierarchical relationship among codes , we build a tree lstm along the code tree---in this paper , drawing intuition from the turing test , we propose using adversarial training for open-domain dialogue generation
0
firstly , we built a forward 5-gram language model using the srilm toolkit with modified kneser-ney smoothing---we used a 5-gram language model with modified kneser-ney smoothing implemented using the srilm toolkit
1
syntactic knowledge is important for discourse relation recognition---the feature weights are tuned to optimize bleu using the minimum error rate training algorithm
0
täckström et al derive cross-lingual clusters from bitext to help delexicalized parser transfer---täckström et al use cross-lingual word clusters to show transfer of linguistic structure
1
we present in detail the framework of the twin-candidate model for anaphora resolution---we have introduced in detail the framework of the twin-candidate model for anaphora resolution
1
importantly , word embeddings have been effectively used for several nlp tasks---unsupervised word embeddings trained from large amounts of unlabeled data have been shown to improve many nlp tasks
1
word sense disambiguation ( wsd ) is a natural language processing ( nlp ) task in which the correct meaning ( sense ) of a word in a given context is to be determined---in natural language , a word often assumes different meanings , and the task of determining the correct meaning , or sense , of a word in different contexts is known as word sense disambiguation ( wsd )
1
the target four-gram language model was built with the english part of training data using the sri language modeling toolkit---with shared parameters , the model is able to learn a general way to act in slots , increasing its scalability to large domains
0
these predictions-as-features style methods model high order label dependencies and obtain high performance---multi-faceted approach , motivated by the diversity of input data imperfections , can eliminate a large proportion of the spurious outputs
0
we used the srilm toolkit to create 5-gram language models with interpolated modified kneser-ney discounting---we use srilm to train a 5-gram language model on the xinhua portion of the english gigaword corpus 5th edition with modified kneser-ney discounting
1
the network was trained using stochastic gradient descent with adam---all neural networks were trained using adam optimizer
1
bleu is a precision measure based on m-gram count vectors---bleu is used as a standard evaluation metric
1
in this paper , we presented a self-attentive hybrid gru-based network for predicting valence intensity for short text---in this paper , we present a self-attentive hybrid gru-based network ( sahgn ) that competed at semeval-2018 task
1
the phrase-based model segments a bilingual sentence pair into phrases that are continuous sequences of words---phrase-based smt segments a bilingual sentence pair into phrases that are continuous sequences of words or discontinuous sequences of words
1
early approaches to mwes identification concentrated on their collocational behavior---early approaches to identifying mwes concentrated on their collocational behavior
1
our training data is the switchboard portion of the english penn treebank corpus , which consists of telephone conversations about assigned topics---we describe an alternative , a latent variable model , to learn long range dependencies
0
we used the first-stage parser of charniak and johnson for english and bitpar for german---we primarily used the charniak-johnson generative parser to parse the english europarl data and the test data
1
latent dirichlet allocation is one of the widely adopted generative models for topic modeling---latent dirichlet allocation is one of the most popular topic models used to mine large text data sets
1
recurrent neural networks have successfully been used in sequence learning problems , for example machine translation , and language modeling---neural networks have been successfully applied to nlp problems , specifically , sequence-to-sequence or models applied to machine translation and word-to-vector
1
the weights of the different feature functions were optimised by means of minimum error rate training on the 2013 wmt test set---the weights of the log-linear interpolation were optimized by means of mert , using the news-commentary test set of the 2008 shared task as a development set
1
all linear models were trained with the perceptron update rule---these models were implemented using the package scikit-learn
1
as such , masc is the first large-scale , open , community-based effort to create much needed language resources for nlp---as such , masc is the first large-scale , open , community-based effort to create a much-needed language resource for nlp
1
we use stanford log-linear part-of-speech tagger to produce pos tags for the english side---we use the stanford nlp pos tagger to generate the tagged text
1
in this paper , we present gated self-matching networks for reading comprehension and question answering---we propose a self-matching attention mechanism to refine the representation
1
we also use a 4-gram language model trained using srilm with kneser-ney smoothing---we build a 9-gram lm using srilm toolkit with modified kneser-ney smoothing
1
moreover , mmr-based feature selection sometimes produces some improvements of conventional machine learning algorithms over svm which is known to give the best classification accuracy---following foulds et al , we perform simulated annealing which varies the m-h acceptance ratio to improve mixing
0
in this paper we focus on a new problem of event coreference resolution across television news videos---coreference resolution is a set partitioning problem in which each resulting partition refers to an entity
0
for language model , we used sri language modeling toolkit to train a 4-gram model with modified kneser-ney smoothing---we use sri language modeling toolkit to train a 5-gram language model on the english sentences of fbis corpus
1
the language models were trained using srilm toolkit---the target-side language models were estimated using the srilm toolkit
1
the experimental results and analysis demonstrate that our model is effective in exploiting both source and target document context , and statistically significantly outperforms the previous work in terms of bleu and meteor---documents show that our model is effective in exploiting both source and target document context , and statistically significantly outperforms the previous work in terms of bleu and meteor
1
recently there are some efforts in applying machine learning approaches to the acquisition of dialogue strategies---machine learning techniques , and particularly reinforcement learning , have recently received great interest in research on dialogue management
1
for part of speech tagging and dependency parsing of the text , we used the toolset from stanford corenlp---we preprocess the texts using the stanford corenlp suite for tokenization , lemmatization , part-of-speech tagging , and named entity recognition
1
lda is a simple model for topic modeling where topic probabilities are assigned to words in documents---in this paper , we outline an approach to detecting such egregious conversations , using behavioral cues
0
named entity recognition ( ner ) is the task of identifying and typing phrases that contain the names of persons , organizations , locations , and so on---named entity recognition ( ner ) is a fundamental information extraction task that automatically detects named entities in text and classifies them into predefined entity types such as person , organization , gpe ( geopolitical entities ) , event , location , time , date , etc
1
the experimental results demonstrate that our approach outperforms the template extraction based approaches---the latter approach represents word contexts as vectors in some space and uses similarity measures and automatic clustering in that space
0
liu et al allow for application of nonsyntactic phrase pairs in their tree-to-string alignment template system---liu et al also add non-syntactic pbsmt phrases into their tree-to-string translation system
1
semeval is the international workshop on semantic evaluation that has evolved from senseval---flat tags can be relaxed , using context , with the resulting polysemous clustering outperforming gold part-of-speech tags for the english dependency grammar induction task
0
finally , our work is similar to the comparison of the chart-based mstparser and shift-reduce maltparser for dependency parsing---the comparison reported in this section is similar to the comparison between the chart-based mstparser and shift-reduce maltparser for dependency parsing
1
for the compilation , we focus on travel blogs , which are defined as travel journals written by bloggers in diary form---for the compilation , we focused on travel blogs , which are defined as travel journals
1
we also present a novel baseline that performs remarkably well without using topic identification---here , we present an approach that instead uses distant supervision
1
in this sense , our work follows foster et al , who weigh out-of-domain phrase pairs according to their relevance to the target domain---in this sense , our work follows foster , goutte , and kuhn , who weigh out-of-domain phrase pairs according to their relevance to the target domain
1
we trained word embeddings for this dataset using word2vec on around 10m documents of clinical records---we used the pre-trained word embeddings that are learned using the word2vec toolkit on google news dataset
1
coreference resolution is the task of determining when two textual mentions name the same individual---since coreference resolution is a pervasive discourse phenomenon causing performance impediments in current ie systems , we considered a corpus of aligned english and romanian texts to identify coreferring expressions
1
it has been shown that user opinions about products , companies and politics can be influenced by opinions posted by other online users in online forums and social networks---it has been shown that user opinions about products , companies and politics can be influenced by opinions posted by other online users
1
this paper describes a system that allows users to explore large cultural heritage collections---this paper describes a system for navigating large collections of information about cultural heritage
1
our approach is similar to conneau et al where authors investigate transfer learning to find universal sentence representation---as erhan et al reported , word embeddings learned from a significant amount of unlabeled data are more powerful for capturing the meaningful semantic regularities of words
0
the n-gram language models are trained using the srilm toolkit or similar software developed at hut---the pre-processed monolingual sentences will be used by srilm or berkeleylm to train a n-gram language model
1
we use srilm for training a trigram language model on the english side of the training corpus---with the proposed discriminative model , we can directly optimize the search phase of query spelling correction
0
openccg uses a hybrid symbolic-statistical chart realizer which takes logical forms as input and produces sentences by using ccg combinators to combine signs---we used glove to learn 300-dimensional word embeddings
0
informally , nlg is the production of a natural language text from computer-internal representation of information , where nlg can be seen as a complex -- potentially cascaded -- decision making process---nlg is a critical component in a dialogue system , where its goal is to generate the natural language given the semantics provided by the dialogue manager
1
for language modeling , we use the english gigaword corpus with 5-gram lm implemented with the kenlm toolkit---the un-pre-marked japanese corpus is used to train a language model using kenlm
1
we use the word2vec vectors with 300 dimensions , pre-trained on 100 billion words of google news---to encode the original sentences we used word2vec embeddings pre-trained on google news
1
the types of events to extract are known in advance---we use pre-trained glove vector for initialization of word embeddings
0
mert was used to tune development set parameter weights and bleu was used on test sets to evaluate the translation performance---the mert was used to tune the feature weights on the development set and the translation performance was evaluated on the test set with the tuned weights
1
however , no single approach alone can cover the entire smart selection problem---none alone solves the entire smart selection problem
1
the visargue framework provides a novel visual analytics toolbox for exploratory and confirmatory analyses of multi-party discourse data---we present a novel visual analytics framework that encodes various layers of discourse properties and allows for an analysis of multi-party discourse
1
to gather examples from these parallel corpora , we followed the approach in---to gather examples from parallel corpora , we followed the approach in
1
translation results are evaluated using the word-based bleu score---koo et al used the brown algorithm to learn word clusters from a large amount of unannotated data and defined a set of word cluster-based features for dependency parsing models
0
in addition , the concept of iterated action is relevant to planning , so that a generalisation across distributives and iteratives plus what has been said about their temporal nature should have interesting implications in this area---and since the concept of iterated action is central to planning , the generalisation across iteration and distributives , along with the observations about their nature , have interesting implications for work in this area
1
we evaluated translation quality using uncased bleu and ter---the translation outputs were evaluated with bleu and meteor
1
coreference resolution is a task aimed at identifying phrases ( mentions ) referring to the same entity---coreference resolution is a set partitioning problem in which each resulting partition refers to an entity
1
we represent each word as a vector using twitter glove embedding---we represent terms using pre-trained glove wikipedia 6b word embeddings
1
the outline of this paper is as follows : in section 2 , we review current approaches to building dialog systems---in this paper , we discuss methods for automatically creating models of dialog structure
1
the main goal of this paper is to propose automatic schemes for the translation paired comparison method---in this paper , we propose an automatization scheme for the translation paired comparison method that employs available automatic
1
we created 5-gram language models for every domain using srilm with improved kneser-ney smoothing on the target side of the training parallel corpora---using multi-layered neural networks to learn word embeddings has become standard in nlp
0
the discussed model in this contribution is an extension of the classical top-down tree transducer , which was introduced by rounds and thatcher---the model discussed in this contribution is an extension of the classical top-down tree transducer , which was introduced by rounds and thatcher
1
collobert et al used a large amount of unlabeled data to map words to high-dimensional vectors and a neural network architecture to generate an internal representation---in this work , we uncover several latent semantic structures behind humor , in terms of meaning
0
for samt grammar extraction , we parsed the english training data using the berkeley parser with the provided treebank-trained grammar---to extract the features of the rule selection model , we parse the english part of our training data using the berkeley parser
1
we train a kn-smoothed 5-gram language model on the target side of the parallel training data with srilm---we use srilm toolkit to train a trigram language model with modified kneser-ney smoothing on the target side of training corpus
1
the phrase-based machine translation uses the grow-diag-final heuristic to extend the word alignment to phrase alignment by using the intersection result---the international corpus of learner english was widely used until recently , despite its shortcomings being widely noted
0
in this paper we propose a novel approach to identify asymmetric relations between verbs---in this paper we presented a method to discover asymmetric entailment relations between verbs
1
to provide a standard benchmark for english sts , we present the sts benchmark , a careful selection of the english data sets from previous sts tasks ( 2012-2017 )---for assessing new methods , we present the sts benchmark , a publicly available selection of data from english sts
1
in recent years , phrase-based systems for statistical machine translation have delivered state-of-the-art performance on standard translation tasks---during the last four years , various implementations and extensions to phrase-based statistical models have led to significant increases in machine translation accuracy
1
we also conduct human evaluations on abstractive summarization and find that our method outperforms a purely supervised learning baseline in a statistically significant manner---the translations are evaluated in terms of bleu score
0
brown , et al describe a statistical algorithm for partitioning word senses into two groups---we ran mt experiments using the moses phrase-based translation system
0
we use minimum error rate training to tune the feature weights of hpb for maximum bleu score on the development set with several groups of different start weights---in practice , policy gradient method is usually used to calculate gradients for the generator due to discrete symbols
0
in this paper we propose a method for defining kernels in terms of a probabilistic model of parsing---pseudo-word is a kind of multi-word expression ( includes both unary word and multi-word )
0