Columns: text (string, length 82 to 736 characters); label (int64, 0 or 1)
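Each record below is a text row holding two citation sentences joined by "---", followed on the next row by its integer label; judging from the examples, label 1 appears to mark pairs that describe the same method or claim and label 0 unrelated pairs, though the file itself does not say so. A minimal parsing sketch under those assumptions; the path pairs.txt and the helper name load_pairs are illustrative, not part of the dataset:

```python
# Minimal sketch for reading the alternating text/label rows of this file.
# Assumptions (not stated in the file): a text row contains two sentences
# joined by "---", the next non-empty row is its 0/1 label, and the path
# "pairs.txt" is a placeholder for wherever this dump is saved.

def load_pairs(path):
    """Return (sentence_a, sentence_b, label) triples."""
    records, text = [], None
    with open(path, encoding="utf-8") as f:
        for raw in f:
            line = raw.strip()
            if "---" in line:
                text = line                      # candidate text row
            elif line in ("0", "1") and text is not None:
                a, _, b = text.partition("---")  # split on the first separator
                records.append((a.strip(), b.strip(), int(line)))
                text = None                      # wait for the next text row
    return records

if __name__ == "__main__":
    pairs = load_pairs("pairs.txt")
    print(len(pairs), "records; first:", pairs[0])
```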
the bnnjm uses the current target word as input , so the information about the current target word can be combined with the context word information and processed in hidden layers---in this paper , we describe zebra , an svm-based system for segmenting the body text of email messages into nine zone types
0
the parsing system has been implemented and has confirmed the feasibility of our approach to the modeling of these phenomena---word reordering between source and target sentences has been a research focus since the emergence of statistical machine translation
0
huang et al presented an rnn model that uses document-level context information to construct more accurate word representations---in this paper we present a novel graph-based wsd algorithm which uses the full graph of wordnet efficiently , performing significantly better than previously published approaches in english
0
title queries are found to be preferred in mt-based clir---that title queries are preferred for mt-based clir
1
we trained the five classifiers using the svm implementation in scikit-learn---we implemented linear models with the scikit learn package
1
we used conditional random fields for the machine learning task---we use the mallet implementation of conditional random fields
1
we tune all feature weights automatically to maximize the bleu score on the dev set---we tune the feature weights with batch k-best mira to maximize bleu on a development set
1
our smt-based query expansion techniques are based on a recent implementation of the phrasebased smt framework---moreover , al-sabbagh and girju described an approach of mining the web to build a da-to-msa lexicon
0
the language model is a large interpolated 5-gram lm with modified kneser-ney smoothing---we further used a 5-gram language model trained using the srilm toolkit with modified kneser-ney smoothing
1
table 2 shows the blind test results using bleu-4 , meteor and ter---table 4 shows the comparison of the performances on bleu metric
1
we use the cube pruning method to approximately intersect the translation forest with the language model---we extract translation rules from a hypergraph for the hierarchical phrase-based system
1
word sense disambiguation ( wsd ) is the task of identifying the correct meaning of a word in context---the process of identifying the correct meaning , or sense of a word in context , is known as word sense disambiguation ( wsd )
1
cite-p-14-1-2 proposed methods for acquiring multimodal representations by applying svd to distributional semantics and bag-of-visual-words ( bovw )---cite-p-14-3-19 proposed a deep learning method for learning multimodal representations by solving pseudo-supervised tasks
1
in experiments with nlp tasks , we show that the proposed method can extract effective combination features , and achieve high performance with very few features---we applied the proposed methods to nlp tasks , and found that our methods can achieve the same high performance
1
our work demonstrates an alternative way to improve blstm-rnn ’ s performance by learning useful word representations---in this work , we propose a novel approach to learn distributed word representations by training blstm-rnn
1
twitter is a popular microblogging service , which , among other things , is used for knowledge sharing among friends and peers---twitter is a widely used social networking service
1
combinatory categorial grammar is a lexicalized grammar formalism that has been used for both broad coverage syntactic parsing and semantic parsing---typically , shen et al propose a string-todependency model , which integrates the targetside well-formed dependency structure into translation rules
0
the n-gram models were built using the irstlm toolkit on the dewac corpus , using the stopword list from nltk---the target language model is built on the target side of the parallel data with kneser-ney smoothing using the irstlm tool
1
their output can be used either by itself , or as training material for modern supervised srl algorithms---in this context , as instances tagged by high quality annotation could be later used as training data for supervised srl algorithms
1
le and mikolov , 2014 ) proposed the paragraph vector that learns fixed-length representations from variable-length pieces of texts---most previous work relies on the use of crfs
0
the chinese word embeddings are pre-trained using skip-gram model on the raw cqa corpus---the word embeddings are initialized as 50 dimensions , trained on chinese wikipedia dump 5 via the skip-gram model
1
in this way , these “ garbage collector effects ” are a form of overfitting---transliteration is the task of converting a word from one alphabetic script to another
0
note that , unlike active learning used in the nlp community , non-interactive active learning algorithms exclude expert annotators ’ human labels from the protocol---for sampling nodes , non-interactive active learning algorithms exclude expert annotators ’ human labels from the protocol
1
by treating time as a continuous variable , we can capture this gradual shift---as is done in previous work , we represent time as a continuous variable
1
we have also exploited random and manually trained embeddings for initialization---in addition , we utilize the pre-trained word embeddings with 300 dimensions from for initialization
1
figure 1 : an example of the sentences with entity attributes annotated in timebank---that suggest syllable weight encodes largely the same information for word segmentation that dictionary stress information does
0
mturk has been adopted for a variety of uses both in industry and academia , ranging from user studies to image labeling---mturk has been adopted for a variety of uses both in industry and academia from user studies to image labeling
1
our mt decoder is a proprietary engine similar to moses---we use a pbsmt model built with the moses smt toolkit
1
this work uses either grapheme or phoneme based models to transliterate words lists---most of the works are devoted to phoneme-based transliteration modeling
1
the metrics that were used to evaluate the model were bleu , ne dist and nist---the bleu metric has been used to evaluate the performance of the systems
1
in run3 , we averaged run1 with a previously proposed surface-based approach as a kind of integration---semantic role labeling ( srl ) is the task of automatic recognition of individual predicates together with their major roles ( e.g . frame elements ) as they are grammatically realized in input sentences
0
a kernel is a measure of similarity between every pair of examples in the data and a kernel-based machine learning algorithm accesses the data only through these kernel values---a fisher kernel is a function that measures the similarity between two data items not in isolation , but rather in the context provided by a probability distribution
1
we use large 300-dim skip gram vectors with bag-of-words contexts and negative sampling , pre-trained on the 100b google news corpus---we use publicly-available 300-dimensional embeddings trained on part of the google news dataset using skip-gram with negative sampling
1
we find that both methods can reconstruct elided predicates with very high accuracy from gold standard dependency trees---we find that both methods can reconstruct elided material from dependency trees with high accuracy
1
relation extraction is the task of finding semantic relations between two entities from text---in our work , we develop our active dual supervision framework using constrained non-negative tri-factorization
0
we evaluate the translation quality using the case-insensitive bleu-4 metric---we use case-insensitive bleu-4 and rouge-l as evaluation metrics for question decomposition
1
for our parsing experiments , we use the berkeley parser---we use the berkeley parser word signatures
1
this is the approach mentioned briefly in johnson and wood---relation extraction ( re ) is the task of determining semantic relations between entities mentioned in text
0
the output of these systems has been used to support many nlp tasks such as learning selectional preference , acquiring sense knowledge , and recognizing entailment---the output of open ie systems has been used to support tasks like learning selectional preferences , acquiring common sense knowledge , and recognizing entailment
1
coreference resolution is the task of partitioning a set of mentions ( i.e . person , organization and location ) into entities---coreference resolution is the process of linking multiple mentions that refer to the same entity
1
as our machine learning component we use liblinear with a l2-regularised l2-loss svm model---we use the l2-regularized logistic regression of liblinear as our term candidate classifier
1
below we describe our approach in greater detail , provide experimental evidence of its value for performing inference in nell ’ s knowledge base , and discuss implications of this work and directions for future research---we describe our approach in greater detail , provide experimental evidence of its value for performing inference in nell ’ s knowledge base , and discuss implications of this work
1
in this study , we propose a new co-regression algorithm to address the above problem by leveraging unlabeled reviews in the target language---in future work , we will apply the proposed co-regression algorithm to other cross-language or cross-domain regression problems
1
to minimize the objective , we use stochastic gradient descent with the diagonal variant of adagrad---for the optimization process , we apply the diagonal variant of adagrad with mini-batches
1
this means in practice that the language model was trained using the srilm toolkit---the language model is trained and applied with the srilm toolkit
1
the weights for the loglinear model are learned using the mert system---the log-linear parameter weights are tuned with mert on a development set to produce the baseline system
1
we use srilm toolkits to train two 4-gram language models on the filtered english blog authorship corpus and the xinhua portion of gigaword corpus , respectively---in this paper , we propose a new model based on the cbow , hence
0
our results show that the vision-based model outperforms the language-only model on our dataset---our results show that the visual model outperforms the language-only model
1
in addition , we use an english corpus of roughly 227 million words to build a target-side 5-gram language model with srilm in combination with kenlm---our labeled data comes from the penn treebank and consists of about 40,000 sentences from wall street journal articles annotated with syntactic information
0
in some cases , the performance of adaptation is even lower than that without adaptation , which is usually known as negative transfer ( cite-p-19-1-20 )---in some cases , negative transfer may happen ( cite-p-19-1-1 , cite-p-19-1-17 ) , which means the performance of adaptation is worse than that without adaptation
1
we experiment with the model on english celex data and german derivbase ( cite-p-19-4-3 ) data---on the english portion of celex ( cite-p-18-1-2 ) , we achieve a 5 point improvement in segmentation accuracy
1
sentiment analysis is the computational analysis of people ’ s feelings or beliefs expressed in texts such as emotions , opinions , attitudes , appraisals , etc . ( cite-p-11-3-3 )---minimum error rate training under bleu criterion is used to estimate 20 feature function weights over the larger development set
0
yang and kirchhoff anticipated oov words that are potentially morphologically related using phrase-based backoff models---yang and kirchhoff proposed a backoff model for phrase-based smt that translated word forms in the source language by hierarchical morphological phrase level abstractions
1
we proposed a formal annotation graph representation that can be used to derive these features automatically---in this work , we propose a general graph representation for automatically extracting structured features from tokens and prior annotations
1
lastly , i will attempt to make convolution kernels more scalable and interpretable---i will explore to make convolution kernels more scalable
1
this paper presents an innovative unsupervised method for automatic sentence extraction using graph-based ranking algorithms---in this paper , we investigate a range of graph-based ranking algorithms , and evaluate their application to automatic unsupervised sentence extraction
1
choi and cardie first proposed a joint sequence labeling approach to extract opinion expressions and label them with polarity and intensity---choi and cardie first developed a joint sequence labeler that jointly tags opinions , polarity and intensity by training crfs with hierarchical features
1
assamese is a morphologically rich , agglutinative and relatively free word order indic language---assamese is a morphologically rich , free word order , inflectional language
1
ne recognition is a fundamental ie task , that detects some named constituents in sentences , for instance names of persons , places , organizations , dates , times , and so on---recognition is a classic computer vision ( cv ) problem including tasks such as recognizing instances of object classes in images ( such as car , cat , or sofa ) ; classifying images by scene ( such as beach or forest ) ; or detecting attributes in an image ( such as wooden or feathered )
1
we use a pbsmt model where the language model is a 5-gram lm with modified kneser-ney smoothing---in this and our other n-gram models , we used kneser-ney smoothing
1
to prevent overfitting , we apply dropout operators to non-recurrent connections between lstm layers---we apply dropout on the lstm layer to prevent network parameters from overfitting and control the co-adaptation of features
1
the feature weights of the log-linear models were trained with the help of minimum error rate training and optimized for 4-gram bleu on the development test set---the scaling factors of the features were optimized for bleu on the development set with minimum error rate training on 100-best lists
1
word embeddings are a crucial component in many nlp approaches since they capture latent semantics of words and thus allow models to better train and generalize---one of the most useful neural network techniques for nlp is the word embedding , which learns vector representations of words
1
we estimated 5-gram language models using the sri toolkit with modified kneser-ney smoothing---we estimated lexical surprisal using trigram models trained on 1 million hindi sentences from emille corpus using the srilm toolkit
1
deep learning models have demonstrated successful results in many nlp tasks such as language translation , image captioning and sentiment analysis---deep learning is used to automatically learn representations , which has achieved some promising results on sentiment analysis
1
coreference resolution is the task of clustering a set of mentions in the text such that all mentions in the same cluster refer to the same entity---we propose 1-best a * , 1-best iterative a * , k-best a * and k-best iterative viterbi a * algorithms for sequential decoding
0
our parser produces a full syntactic parse of every sentence , and furthermore produces logical forms for portions of the sentence that have a semantic representation within the parser ’ s predicate vocabulary---part-of-speech ( pos ) tagging is a critical task for natural language processing ( nlp ) applications , providing lexical syntactic information
0
our cdsm feature is based on word vectors derived using a skip-gram model---using word2vec , we compute word embeddings for our text corpus
1
the binary syntactic features were automatically extracted using the stanford parser---the reordering rules are based on parse output produced by the stanford parser
1
we classically used the error metric p k proposed in and its variant windowdiff to measure segmentation accuracy---as for the boundary detection problem , we use the windowdiff and p k metrics
1
toutanova et al and punyakanok et al presented a re-ranking model and an integer linear programming model respectively to jointly learn a global optimal semantic roles assignment---toutanova et al presented a re-ranking model to jointly learn the semantic roles of multiple constituents in the srl task
1
we used the sri language modeling toolkit for this purpose---we use the sri language modeling toolkit for language modeling
1
coreference resolution is a challenging task , that involves identification and clustering of noun phrases mentions that refer to the same real-world entity---coreference resolution is the problem of partitioning a sequence of noun phrases ( or mentions ) , as they occur in a natural language text , into a set of referential entities
1
we use the publicly available word2vec vectors trained on 100 billion words from google news using the continuous bag-of-words architecture to initialize word embeddings , but randomly initialize character embeddings---we train word embeddings using the continuous bag-of-words and skip-gram models described in mikolov et al as implemented in the open-source toolkit word2vec
1
on the remaining tweets , we trained a 10-gram word length model , and a 5-gram language model , using srilm with kneser-ney smoothing---during the last few years , smt systems have evolved from the original word-based approach to phrase-based translation systems
0
we propose an effective content enriching method for microblog , to enhance classification accuracy---we present an lda-based enriching method using the news corpus , and apply it to the task of microblog classification
1
in their model , citing articles “ vote ” on each cited article ’ s topic distribution in retrospect , via a network flow model---in their model , citing articles “ vote ” on each cited article ’ s topic distribution
1
using a case study , we show that variation in oral reading rate across passages for professional narrators is consistent across readers and much of it can be explained using features of the texts being read---using a case study that variation in reading rate across passages for professional narrators is consistent across readers and much of it can be explained using features of the texts being read
1
the diverse nature of input noise leads us to pursue a multi-faceted approach to robustness---multi-faceted approach , motivated by the diversity of input data imperfections , can eliminate a large proportion of the spurious outputs
1
relation extraction is a subtask of information extraction that finds various predefined semantic relations , such as location , affiliation , rival , etc. , between pairs of entities in text---relation extraction ( re ) has been defined as the task of identifying a given set of semantic binary relations in text
1
we measure translation quality via the bleu score---we report decoding speed and bleu score , as measured by sacrebleu
1
history-based feature models for predicting the next parser action---history-based models for predicting the next parser action
1
in this work , we first present the construction of a large test collection extracted from systematic literature reviews---this task usually requires aspect segmentation , followed by prediction or summarization
0
the core part of our algorithm is a cognitively-motivated scheduler according to which training instances and their “ reviews ” are spaced over time---core part of our algorithm is a scheduler that ensures a given neural network spends more time working on difficult training instances
1
our model jointly controls the contributions from the source and target contexts---semantic parsing is the task of translating natural language utterances into a machine-interpretable meaning representation ( mr )
0
the most common word embeddings used in deep learning are word2vec , glove , and fasttext---other popular , pre-trained word embeddings include glove , word2vec over twitter , and fasttext
1
our experiment shows that even for small thresholds , quite good results can be obtained---our experiments confirm our hypothesis and show that this simple rule gives quite good results for chinese word extraction
1
we train a trigram language model with the srilm toolkit---we used srilm ( sri language modeling toolkit ) to train several character models
1
keyphrase extraction is a basic text mining procedure that can be used as a ground for other , more sophisticated text analysis methods---keyphrase extraction is a fundamental technique in natural language processing
1
the hierarchical phrase-based translation model has been widely adopted in statistical machine translation tasks---hierarchical phrase-based translation models that utilize synchronous context free grammars have been widely adopted in statistical machine translation
1
jansen et al proposed a reranking model that used both shallow and deep discourse features to identify answer structures in large answer collections across different tasks and genres---jansen et al describe answer reranking experiments on ya using a diverse range of lexical , syntactic and discourse features
1
for our purpose we use word2vec embeddings trained on a google news dataset and find the pairwise cosine distances for all words---we use publicly-available 300-dimensional embeddings trained on part of the google news dataset using skip-gram with negative sampling
1
word sense disambiguation ( wsd ) is the task to identify the intended sense of a word in a computational manner based on the context in which it appears ( cite-p-13-3-4 )---in natural language , a word often assumes different meanings , and the task of determining the correct meaning , or sense , of a word in different contexts is known as word sense disambiguation ( wsd )
1
by removing the tensor ’ s surplus parameters , our methods learn better and faster as was shown in experiments---by removing the tensor ’ s surplus parameters , our methods learn better and faster
1
lepage proposed an algorithm for computing the solutions of a formal analogical equation---lepage proposed an algorithm for solving an analogical equation
1
in this work , we apply a standard phrase-based translation system---we consider a phrase-based translation model and a hierarchical translation model
1
deep neural networks have seen widespread use in natural language processing tasks such as parsing , language modeling , and sentiment analysis---the frequent frame you it , for example , largely identifies verbs , as shown in , taken from the childes database of child-directed speech
0
we use srilm to train a 5-gram language model on the target side of our training corpus with modified kneser-ney discounting---we use srilm toolkit to train a trigram language model with modified kneser-ney smoothing on the target side of training corpus
1
exact marginalization is made tractable through dynamic programming over shift-reduce parsing and minimal rnn-based feature sets---exact decoding and globally-normalized discriminative training is tractable with dynamic programming
1
vinyals and le adopted the sequence-tosequence model used in machine translation in the task of automatic response generation---sutskever et al introduce this sequence-to-sequence architecture for machine translation
1
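Several rows above mention training linear SVMs with scikit-learn on text data; as a usage example for load_pairs from the sketch near the top, here is a hedged baseline for the pair labels, assuming scikit-learn is installed. The pipeline is illustrative, not the setup of any paper cited in the rows:

```python
# Hypothetical TF-IDF + linear SVM baseline over the concatenated sentence
# pairs, in the spirit of the scikit-learn SVM rows above. Illustrative only;
# load_pairs comes from the earlier parsing sketch.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

pairs = load_pairs("pairs.txt")
texts = [a + " ||| " + b for a, b, _ in pairs]   # one string per sentence pair
labels = [y for _, _, y in pairs]

X_tr, X_te, y_tr, y_te = train_test_split(
    texts, labels, test_size=0.2, random_state=0)
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
clf.fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```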