Columns: text (string, lengths 82 to 736) and label (int64, values 0 or 1). A minimal parsing sketch follows the data rows below.
we chose the optimal model that achieves the best bleu score over the dev corpus---we evaluated the system using bleu score on the test set
1
bengio et al use distributed representations for words to fight the curse of dimensionality when training a neural probabilistic language model---bengio et al propose a feedforward neural network to train a word-level language model with a limited n-gram history
1
can be evaluated by maximising the pseudo-likelihood on a training corpus---can be evaluated by maximizing the pseudo-likelihood on a training corpus ,
1
in this paper we present lobbyback , a system to reconstruct the “dark corpora” that is comprised of model bills which are copied ( and modified ) by resource constrained state legislatures---in this paper we propose lobbyback , a system that automatically identifies clusters of documents that exhibit text reuse , and generates “ prototypes ”
1
all systems are evaluated using case-insensitive bleu---case-insensitive bleu4 was used as the evaluation metric
1
translation quality is evaluated by case-insensitive bleu-4 metric---results are reported using case-insensitive bleu with a single reference
1
to remedy this problem , a modification called regularized winnow has been proposed---performance can be achieved by using the newly proposed regularized winnow method
1
later , several works explore global features , trying to capture coherence among concepts that appear in close proximity in the text---earlier works proposed to explore global features , trying to capture coherence among titles that appear in the text
1
wan et al use a dependency grammar to model word ordering and apply greedy search to find the best permutation---wan et al uses a dependency grammar to solve word ordering , and zhang and clark uses ccg for word ordering and word choice
1
lei et al proposed to learn features by representing the cross-products of some primitive units with low-rank tensors for dependency parsing---lei et al also use low-rank tensor learning in the context of dependency parsing , where like in our case dependencies are represented by conjunctive feature spaces
1
we propose a measure that takes into account each word’s contribution to fluency and meaning---we used a linear chain crf as implemented in crfsuite package for training all our models
0
we built a linear svm classifier using svm light package---we used the default parameter in svm light for all trials
1
in our experiments , we evaluate our model on the semeval-2010 task 8 dataset , which is one of the most widely used benchmarks for relation classification---a 4-gram language model was trained on the target side of the parallel data using the srilm toolkit from stolcke
0
a 5-gram language model was created with the sri language modeling toolkit and trained using the gigaword corpus and english sentences from the parallel data---the probabilistic language model is constructed on google web 1t 5-gram corpus by using the srilm toolkit
1
we adopt glove vectors as the initial setting of word embeddings v---we employ the pretrained word vector , glove , to obtain the fixed word embedding of each word
1
third , for different applications , methods of designing handcrafted representations may be quite different , lacking of a general guideline---for different applications , methods of designing handcrafted representations may be quite different , lacking of a general guideline
1
in this paper , we present a novel model , structure regularized brcnn , to classify the relation of two entities in a sentence---in this paper , we focus on the study of applying structure regularization to the relation classification task of chinese literature
1
we selected support vector machine and naive bayes as classifiers for our base systems to be optimally ensembled---we used a support vector machine classifier with radial basis function kernels to classify the data
1
coreference resolution is a multi-faceted task : humans resolve references by exploiting contextual and grammatical clues , as well as semantic information and world knowledge , so capturing each of these will be necessary for an automatic system to fully solve the problem---coreference resolution is the process of finding discourse entities ( markables ) referring to the same real-world entity or concept
1
we trained an english 5-gram language model using kenlm---a 5-gram language model was created with the sri language modeling toolkit and trained using the gigaword corpus and english sentences from the parallel data
0
the word embeddings can provide word vector representation that captures semantic and syntactic information of words---we use srilm for n-gram language model training and hmm decoding
0
newman and blitzer also address the problem of summarizing archived discussion lists---newman and blitzer , 2003 ) also address the problem of summarizing archived discussion lists
1
we used kenlm with srilm to train a 5-gram language model based on all available target language training data---semantic role labeling ( srl ) is the task of labeling the predicate-argument structures of sentences with semantic frames and their roles ( cite-p-18-1-2 , cite-p-18-1-19 )
0
other training criteria , such as maximum likelihood or max-margin , could also be employed---common training criteria include the maximum likelihood , averaged structured perceptron , and max-margin
1
we evaluated translation quality with the case-insensitive bleu-4 and nist---we evaluate the translation quality using the case-insensitive bleu-4 metric
1
we used the part of speech tagged for tweets with the twitter nlp tool---we use the cmu twitter tagger 2 to recognize named entities
1
in this paper we present a faq-based question answering system over a sms interface that solves this problem for two languages---in this paper , we present a service that allows a user to query a frequentlyasked-questions ( faq ) database built in a local language ( hindi ) using noisy sms
1
the baseline of our approach is a statistical phrase-based system which is trained using moses---our baseline system is phrase-based moses with feature weights trained using mert
1
our running example is the following misspelling of a search query , involving multiple types of errors---1 our running example is a truncated variant of an item from the shared task training data
1
visweswariah et al and tromble and eisner have considered the source reordering problem to be a problem of learning word reordering from word-aligned data---visweswariah et al regarded the preordering problem as a traveling salesman problem and applied tsp solvers for obtaining reordered words
1
we train a 4-gram language model on the xinhua portion of english gigaword corpus by srilm toolkit---we use a fourgram language model with modified kneser-ney smoothing as implemented in the srilm toolkit
1
the proposed approach trains models based on only a part of the training set that is more similar to the target domain---for this set of experiments , we use the combination of sense and words as features
0
named entity disambiguation is the task of linking an entity mention in a text to the correct real-world referent predefined in a knowledge base , and is a crucial subtask in many areas like information retrieval or topic detection and tracking---named entity disambiguation ( ned ) is the task of resolving ambiguous mentions of entities to their referent entities in a knowledge base ( kb ) ( e.g. , wikipedia )
1
for the language model , we used sri language modeling toolkit to train a trigram model with modified kneser-ney smoothing on the 31 , 149 english sentences---we train a 4-gram language model on the xinhua portion of the english gigaword corpus using the srilm toolkits with modified kneser-ney smoothing
1
in addition , we construct a webbased open system for teachers to prepare their own games to best meet their teaching goals---for preprocessing , we used corenlp to automatically parse the raw text of wsj for feature extraction
0
for the generative model , we used the dependency model with valence as it appears in klein and manning---bilingual lexica provide word-level semantic equivalence information across languages , and prove to be valuable for a range of cross-lingual natural language processing tasks
0
in this paper , we focus on the first-and secondorder graph models---in this paper , we implement our approach based on graph-based parsing models
1
the common inventory incorporates some of the general relation types defined by gildea and jurafsky for their experiments in classifying semantic relations in framenet using a reduced inventory---hierarchical phrase-based translation was proposed by chiang
0
we present the text to the encoder as a sequence of word2vec word embeddings from a word2vec model trained on the hrwac corpus---to get a dictionary of word embeddings , we use the word2vec tool 2 and train it on the chinese gigaword corpus
1
word sense disambiguation ( wsd ) is the task of determining the correct meaning ( “ sense ” ) of a word in context , and several efforts have been made to develop automatic wsd systems---word sense disambiguation ( wsd ) is a difficult natural language processing task which requires that for every content word ( noun , adjective , verb or adverb ) the appropriate meaning is automatically selected from the available sense inventory 1
1
following boye and saurí and pustejovsky , we characterize evidential justification in terms of epistemic support---word alignment is a crucial early step in the training of most statistical machine translation ( smt ) systems , in which the estimated alignments are used for constraining the set of candidates in phrase/grammar extraction ( cite-p-9-3-5 , cite-p-9-1-4 , cite-p-9-3-0 )
0
socher et al defined a recurrent neural network model , which , in essence , learns those polarity shifters relying on sentence-level sentiment labels---socher et al and socher et al present a framework based on recursive neural networks that learns vector space representations for multi-word phrases and sentences
1
the need for information systems to support physicians at the point of care has been well studied---given that the mean probability used in the mean probability rule is sensitive to outliers , an alternative is to use the median as a more robust estimate of the central value
0
empirical results show their method can improve nmt performance , and this approach provides a natural baseline---for unigram models , k-gram models , and topic models , each of which represents its perplexity with respect to a reduced vocabulary , under the assumption that the corpus follows zipf ’ s law
0
syntactic universals are a well studied concept in linguistics , and were recently used in similar form by naseem et al for multilingual grammar induction---these structural correspondences , referred to as syntactic universals , have been extensively studied in linguistics and underlie many approaches in multilingual parsing
1
text categorization is the task of classifying documents into a certain number of predefined categories---kiros et al propose a skip-gram-like objective function at the sentence level to obtain the sentence embeddings
0
named entity recognition ( ner ) is the task of identifying named entities in free text—typically personal names , organizations , gene-protein entities , and so on---we use the moses toolkit to train our phrase-based smt models
0
we used 300 dimensional skip-gram word embeddings pre-trained on pubmed---we used the pre-trained google embedding to initialize the word embedding matrix
1
the system then extracts various dependency mappings between the source and target trees---system then extracts various dependency mappings between the source and target trees
1
the reason behind this intention is that if the recall for any method is around 0.5 , this means that the method fails to detect or correct around 50 percent of the errors---for any method is around 0 . 5 , this means that the method fails to detect or correct around 50 percent of the errors
1
our baseline system is a popular phrase-based smt system , moses , with 5-gram srilm language model , tuned with minimum error training---specifically , in testing , we replace the charniak parser with a more accurate reranking parser
0
then , a lattice-based pos tagger and a lattice-based parser are used to process the word lattice from two different viewpoints---into a word lattice , and then a lattice-based pos tagger and a lattice-based parser are used to process the lattice
1
in this paper , we have engineered and studied several models for relation learning---among many natural language processing ( nlp ) tasks , such as text classification , question answering and machine translation , a common problem is modelling the relevance/similarity of a pair of texts , which is also called text semantic matching
0
in this paper , we present a semantic parsing framework for question answering using a knowledge base---in this work , we aim to learn a semantic parser that maps a natural language question
1
event extraction is the task of extracting and labeling all instances in a text document that correspond to a predefined event type---event extraction is a particularly challenging information extraction task , which intends to identify and classify event triggers and arguments from raw text
1
in comparison , although concat performs consistently well for 1 → 2 , 3 → 4 , and 5 → 6 , its qwk scores for 7 → 8 are quite poor and even lower than those of targetonly for 25 or more target essays---for 1 → 2 , 3 → 4 , and 5 → 6 , its qwk scores for 7 → 8 are quite poor and even lower than those of targetonly for 25 or more target essays
1
in the worst case , the fan-out of math-w-8-7-0-164 can be as large as math-w-8-7-0-171---for probabilities , we trained 5-gram language models using srilm
0
we used the stanford parser to parse the corpus---we used stanford corenlp to generate dependencies for the english data
1
performance is measured based on the bleu scores , which are reported in table 4---neural networks , working on top of conventional n-gram back-off language models , have been introduced in as a potential means to improve discrete language models
0
for all the models trained in this paper , we have used the skip-gram , cbow and fasttext algorithms---for this paper , we directly utilize the pre-trained fasttext word embeddings model which is trained on wikipedia data
1
word sense disambiguation ( wsd ) is a key task in computational lexical semantics , inasmuch as it addresses the lexical ambiguity of text by making explicit the meaning of words occurring in a given context ( cite-p-18-3-10 )---word sense disambiguation ( wsd ) is the task of determining the meaning of a word in a given context
1
a trigram language model with modified kneser-ney discounting and interpolation was used as produced by the srilm toolkit---the language models are 4-grams with modified kneser-ney smoothing which have been trained with the srilm toolkit
1
we train the models for 20 epochs using categorical cross-entropy loss and the adam optimization method---we use binary crossentropy loss and the adam optimizer for training the nil-detection models
1
bannard and callison-burch described a pivoting approach that can exploit bilingual parallel corpora in several languages---bannard and callison-burch used the bilingual pivoting method on parallel corpora for the same task
1
chen et al and koo et al proposed the methods to obtain new features from large-scale unlabeled data---chen et al presented a method of extracting short dependency pairs from large-scale autoparsed data
1
abstract meaning representation is a semantic representation that expresses the logical meaning of english sentences with rooted , directed , acylic graphs---in this paper , we propose a new inference algorithm , latent dynamic inference ( ldi ) , by systematically
0
the language model is trained on the target side of the parallel training corpus using srilm---the language model is trained and applied with the srilm toolkit
1
in the future , this work needs to be further developed to deal with anaphora in other types of texts and the use of connectives in generated text to create cohesive discourse---in the future , this work needs to be further developed to deal with anaphora in other types of texts and the use of connectives in generated text
1
for the extraction of translation tables , we use the de facto standard smt toolbox moses with default settings---for decoding , we used the state-of-theart pbsmt toolkit moses with default options , except for the phrase length limit following
1
dependency parsing is the task of building dependency links between words in a sentence , which has recently gained a wide interest in the natural language processing community---dependency parsing is the task of building dependency links between words in a sentence , which has recently gained a wide interest in the natural language processing community and has been used for many problems ranging from machine translation ( cite-p-12-1-4 ) to question answering ( zhou et al. , 2011a )
1
named entity recognition ( ner ) is a frequently needed technology in nlp applications---named entity recognition ( ner ) is a fundamental task in text mining and natural language understanding
1
for the tree-based system , we applied a 4-gram language model with kneserney smoothing using srilm toolkit trained on the whole monolingual corpus---for language model , we used sri language modeling toolkit to train a 4-gram model with modified kneser-ney smoothing
1
input layer word embeddings are initialized with glove embeddings pre-trained on twitter text---the word embeddings and attribute embeddings are trained on the twitter dataset using glove
1
recent work has focused on a much larger set of fine-grained types---recent work has focused on a much larger set of fine grained labels
1
we use 100-dimension glove vectors which are pre-trained on a large twitter corpus and fine-tuned during training---entity-mention model is effective for the coreference resolution task
0
in this work we presented a method that enables using discriminative learning methods for refining generative language models---we extend this line of work to study the extent to which discriminative learning methods can lead to better generative language models
1
cleartktimeml ranked 1 st for temporal relation f1 , time extent strict f1 and event tense accuracy---cleartk-timeml ranked 1 st in relation f1 , time extent strict f1 and event tense accuracy
1
fasttext pre-trained vector is used for word embedding with embed size is 300---the fasttext pre-trained vectors are used for word embedding with embed size is 300
1
ksc-pal , has at its core the tutalk system , a dialogue management system that supports natural language dialogue in educational applications---dialoguewise , its core is tutalk , a dialogue management system that supports natural language dialogue in educational applications
1
arabic is a morphologically complex language---arabic is a morphologically rich language where one lemma can have hundreds of surface forms ; this complicates the tasks of sa
1
we tuned the weights in the log-linear model by optimizing bleu on the tuning dataset , using mert , pro , or mira---tokenization of the english data was done using the berkeley tokenizer
0
intrinsic nlg evaluations often involve ratings of text quality or responses to questionnaires , with some studies using post-editing by human experts---intrinsic evaluation in nlg has often relied on human input , typically in the form of ratings of or responses to questionnaires
1
the language model was trained using kenlm---the 5-gram language models were built using kenlm
1
finally , we extend feature noising for structured prediction to a transductive or semi-supervised setting---so we can estimate it more accurately via a semi-supervised or transductive extension
1
relation extraction is a subtask of information extraction that finds various predefined semantic relations , such as location , affiliation , rival , etc. , between pairs of entities in text---relation extraction is a fundamental step in many natural language processing applications such as learning ontologies from texts ( cite-p-12-1-0 ) and question answering ( cite-p-12-3-6 )
1
however , these approaches tend to generate clusters that contain a single element depending on a certain criterion of merging similar clusters---in these approaches , a lot of clusters that contain only one element tend to be generated , depending on a certain criterion for merging similar clusters
1
we use sri language model toolkit to train a 5-gram model with modified kneser-ney smoothing on the target-side training corpus---we have used the srilm with kneser-ney smoothing for training a language model of order five and mert for tuning the model with development data
1
a knsmoothed 5-gram language model is trained on the target side of the parallel data with srilm---a 4-gram language model is trained on the monolingual data by srilm toolkit
1
we train a 4-gram language model on the xinhua portion of english gigaword corpus by srilm toolkit---we used a 5-gram language model trained on 126 million words of the xinhua section of the english gigaword corpus , estimated with srilm
1
the stanford parser 1 was used to produce all dependency parses---the stanford parser was used to generate the dependency parse information for each sentence
1
with additional lexical knowledge from wordnet , performance is further improved to surpass the state-of-the-art results---we use word vectors produced by the cbow approach-continuous bagof-words
0
the last several years have seen phrasal statistical machine translation systems outperform word-based approaches by a wide margin---we used the wordsim353 test collection which consists of similarity judgments for word pairs
0
twitter consists of a massive number of posts on a wide range of subjects , making it very interesting to extract information and sentiments from them---we used the sri language modeling toolkit to train a fivegram model with modified kneser-ney smoothing
0
we present a generalized discriminative model for spelling error correction which targets character-level transformations---we used the srilm toolkit to create 5-gram language models with interpolated modified kneser-ney discounting
0
as input to the aforementioned model , we are going to use dense representations , and more specifically pre-trained word embeddings , such as glove---for the actioneffect embedding model , we use pre-trained glove word embeddings as input to the lstm
1
we use the term-sentence matrix to train a simple generative topic model based on lda---we have applied topic modeling based on latent dirichlet allocation as implemented in the mallet package
1
similarity is a kind of association implying the presence of characteristics in common---in this work , we present a generic discriminative phrase pair extraction framework that can integrate multiple features
0
it emphasizes the role of zero subject detection as the part of mention detection – the initial step of endto-end coreference resolution---using such an advanced separate classifier for zero subject detection improves the mention detection and , furthermore , endto-end coreference resolution
1
djuric et al , 2015 ) highlighted the effectiveness of comment embeddings in detection of hate speech , by joint modelling comments and words using continuous-bag of words to generate a low dimensional embedding---djuric et al propose an approach that learns low-dimensional , distributed representations of user comments in order to detect expressions of hate speech
1
sentiment analysis is the natural language processing ( nlp ) task dealing with the detection and classification of sentiments in texts---we did experiments with the samt model with the moses
0
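Note on the rows above: each text value holds two sentences joined by "---", and the line that follows each row gives the pair's label. Judging from the preview, label 1 appears to mark pairs that describe the same method or claim and label 0 appears to mark unrelated pairs; that reading is an assumption, since the dump itself does not document the label semantics. The minimal Python sketch below shows one way to split a row into its two sentences; PairExample and parse_example are hypothetical helper names, not part of the dataset.

    from dataclasses import dataclass

    @dataclass
    class PairExample:
        sentence_a: str  # left side of the "---" separator
        sentence_b: str  # right side of the "---" separator
        label: int       # 0 or 1, as in the label column above

    def parse_example(text: str, label: int) -> PairExample:
        # Split on the first "---"; in the rows shown above the
        # separator occurs exactly once per row.
        left, _, right = text.partition("---")
        return PairExample(left.strip(), right.strip(), label)

    # Usage with one row from the preview (label 0):
    row = ("we trained an english 5-gram language model using kenlm---"
           "a 5-gram language model was created with the sri language "
           "modeling toolkit and trained using the gigaword corpus and "
           "english sentences from the parallel data")
    print(parse_example(row, 0))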