columns: text (string, lengths 82-736), label (int64, values 0 or 1)
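each row below holds two citation sentences joined by the literal marker "---", followed by a binary label. a minimal parsing sketch, assuming the rows are stored as json lines with `text` and `label` fields matching the header above; the file name "pairs.jsonl" is hypothetical:

```python
# Minimal sketch: parsing rows with this schema (a text pair joined by "---", plus a binary label).
# Assumes a JSON-lines file; the name "pairs.jsonl" and the field names follow the header above.
import json

pairs = []
with open("pairs.jsonl", encoding="utf-8") as f:
    for line in f:
        row = json.loads(line)
        # Each "text" value holds two sentences separated by the literal marker "---".
        left, _, right = row["text"].partition("---")
        pairs.append((left.strip(), right.strip(), int(row["label"])))

print(len(pairs), "pairs loaded;", sum(lbl for _, _, lbl in pairs), "labeled 1")
```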
we implemented scaling , which is similar to that for hmms , in the forward-backward phase of crf training to deal with long sequences due to sentence concatenation---we implemented scaling , which is similar to that for hmms , in the forward-backward phase of crf training to deal with very long sequences due to sentence concatenation
1
for translation experiments , we use a phrase-based decoder that incorporates a set of standard features and a hierarchical reordering model---as a baseline we use a translation system with distortion limit 6 and a lexicalized reordering model
1
in section 6 , the proposed word embeddings show evident improvements on sentiment classification , as compared to the base model word2vec and other baselines using the same lexical resource---in section 6 , the proposed word embeddings show evident improvements on sentiment classification , as compared to the base model
1
we applied bpe to all data using 32,000 merge operations---we used a script from with 89 , 500 merge operations
1
several researchers ( cite-p-20-1-4 , cite-p-20-3-1 , van der cite-p-20-3-9 ) have used large monolingual corpora to extract distributionally similar words---distributional similarity is used in many proposals to find semantically related words ( cite-p-20-1-4 , cite-p-20-3-1 , van der cite-p-20-3-9 )
1
we use a frame based parser similar to the dypar parser used by carbonell , et al to process ill-formed text , semantic information is represented in a set of frames---for simplicity , we use the well-known conditional random fields for sequential labeling
0
katiyar and cardie proposed a neural network-based approach that learns hypergraph representation for nested entities using features extracted from a recurrent neural network---katiyar and cardie presented a standard lstm-based sequence labeling model to learn the nested entity hypergraph structure for an input sentence
1
by contrast , our approach directly uses and optimizes nmt parameters using the "supervised" alignments---in this paper , we utilize the " supervised " alignments , and put the alignment cost to the nmt objective
1
semantic parsing is the task of automatically translating natural language text to formal meaning representations ( e.g. , statements in a formal logic )---semantic parsing is the task of converting natural language utterances into formal representations of their meaning
1
ng et al exploit category-specific information for multi-document summarization---ng et al exploited category-specific information for multi-document summarization
1
we use the perplexity computation method of mikolov et al suitable for skip-gram models---we adapt the models of mikolov et al and mikolov et al to infer feature embeddings
1
to measure the translation quality , we use the bleu score and the nist score---we use case-sensitive bleu-4 to measure the quality of translation result
1
one of the clear successes in computational modeling of linguistic patterns has been finite state transducer models for morphological analysis and generation---in addition to that we use pre-trained embeddings , by training word2vec skip-gram model on wikipedia texts
0
we initialize the word embedding matrix with pre-trained glove embeddings---we initialize the embedding weights by the pre-trained word embeddings with 200 dimensional vectors
1
the 50-dimensional pre-trained word embeddings are provided by glove , which are fixed during our model training---the representations of words are pre-trained by glove , and all these embeddings are fine-tuned in the training process
1
the training module , shown in figure 1 , is based on the language modeler presented in---the feature weights are tuned to optimize bleu using the minimum error rate training algorithm
0
lui et al proposed a system that does language identification in multilingual documents , using a generative mixture model that is based on supervised topic modeling algorithms---lui et al proposed a system for language identification in multilingual documents using a generative mixture model that is based on supervised topic modeling algorithms
1
persing and ng introduced an approach for recognizing the argumentation strength of an essay---lin and pantel describe an unsupervised algorithm for discovering inference rules from text
0
we use the berkeley probabilistic parser to obtain syntactic trees for english and its adapted version for french---we use the berkeley probabilistic parser to obtain syntactic trees for english and its bonsai adaptation for french
1
we conclude in section 5 and identify avenues we believe deserve investigations---in section 5 and identify avenues we believe deserve investigations
1
previous research has shown that complex word identification considerably improves lexical simplification---the task of complex word identification has often been regarded as a critical first step for automatic lexical simplification
1
the weights associated to feature functions are optimally combined using the minimum error rate training---minimum error rate training under bleu criterion is used to estimate 20 feature function weights over the larger development set
1
we have used latent dirichlet allocation model as our main topic modeling tool---we have applied topic modeling based on latent dirichlet allocation as implemented in the mallet package
1
it is a speech-enhanced version of the why2-atlas tutoring system---word sense disambiguation ( wsd ) is formally defined as the task of computationally identifying senses of a word in a context
0
a 4-grams language model is trained by the srilm toolkit---our letter ngram is a standard letter-ngram model trained using the srilm toolkit
1
we used the srilm toolkit to generate the scores with no smoothing---to measure the importance of the generated questions , we use lda to identify the important sub-topics from the given body of texts
0
to reduce the number of features , we employ the l1-regularization in training to enforce sparse solutions , using the off-the-shelf lib-linear toolkit---to keep the number features to a manageable size , we employ the l1-regularization in training to enforce sparse solutions , using the off-the-shelf lib-linear toolkit
1
sentence compression is the task of producing a shorter form of a single given sentence , so that the new form is grammatical and retains the most important information of the original one ( cite-p-15-3-1 )---sentence compression is a standard nlp task where the goal is to generate a shorter paraphrase of a sentence
1
for the mix one , we also train word embeddings of dimension 50 using glove---the model parameters in word embedding are pretrained using glove
1
the language is a form of modal propositional logic---language is the primary tool that people use for establishing , maintaining and expressing social relations
1
we adapted the moses phrase-based decoder to translate word lattices---we evaluated the reordering approach within the moses phrase-based smt system
1
hearst used a small number of regular expressions over words and part-of-speech tags to find examples of the hypernym relation---hearst utilized a list of patterns indicative for the hyponym relation in general texts
1
we adapted the moses phrase-based decoder to translate word lattices---phrase-based translation models ( chiang , 2007 ) are widely used in machine translation systems due to their ability to achieve local fluency through phrasal translation and handle non-local phrase reordering
0
these features were extracted using stanford corenlp---we conducted baseline experiments for phrase-based machine translation using the moses toolkit
0
domain specific language and translation models are created from the data within each bilingual cluster---our trigram word language model was trained on the target side of the training corpus using the srilm toolkit with modified kneser-ney smoothing
0
the language model was a kneser-ney interpolated trigram model generated using the srilm toolkit---we then made videos of every schedule for every sentence , using the festival speech synthesiser and the ruth talking head
0
for evaluation , case-insensitive nist bleu is used to measure translation performance---the evaluation metric for the overall translation quality is case-insensitive bleu4
1
in particular , the cooccurrence based embeddings of words in a corpus has been demonstrated to encode meaningful semantic relationships between them---word embeddings have been empirically shown to preserve linguistic regularities , such as the semantic relationship between words
1
we consider a simple constraint that a verb should not have multiple subjects/objects as its children---we consider a simple linguistic constraint that a verb should not have multiple subjects / objects as its children
1
we used the target side of the parallel corpus and the srilm toolkit to train a 5-gram language model---we use the sri language model toolkit to train a 5-gram model with modified kneser-ney smoothing on the target-side training corpus
1
the work we present in this article focuses on the automatic building of a thesaurus from a corpus---in this article , we propose a more direct approach focusing on the identification of the neighbors of a thesaurus
1
word embeddings have proven to be effective models of semantic representation of words in various nlp tasks---word vector embeddings have become a standard building block for nlp applications
1
our model extends the rational speech act model from cite-p-21-3-1 to incorporate updates to listeners' beliefs as discourse proceeds---that extends the rational speech act model from cite-p-21-3-1 to incorporate updates to listeners ' beliefs as discourse proceeds
1
to resolve the problem that translation systems generates grammatically dubious sentence , our method utilizes dependency structures and japanese dependency constraints to determine the word order of a translation---our method utilizes the feature that word order is flexible in japanese , and determines the word order of a translation based on dependency structures and japanese dependency constraints
1
pennington et al shows that the word embeddings produced by the model achieves state-of-the-art performance in word analogy task---newman et al found that aggregate pairwise pmi scores over the top-n topic words correlated well with human ratings
0
we use the moses package to train a phrase-based machine translation model---as a baseline model we develop a phrase-based smt model using moses
1
as embedding vectors , we used the publicly available representations obtained from the word2vec cbow model---for all three classifiers , we used the word2vec 300d pre-trained embeddings as features
1
han and baldwin proposed a supervised method to detect ill-formed words and used morphophonemic similarity to generate correction candidates---( han and baldwin , 2011 ) developed classifiers for detecting the ill-formed words and generated corrections based on the morphophonemic similarity
1
we use srilm for training a trigram language model on the english side of the training data---we use srilm for training a trigram language model on the english side of the training corpus
1
furthermore , we train a 5-gram language model using the sri language toolkit---we use the sri language modeling toolkit for language modeling
1
coreference resolution is a task aimed at identifying phrases ( mentions ) referring to the same entity---transe ( cite-p-13-1-3 ) is a typical model considering relation vector as translating operations between head and tail vector , i.e. , math-w-2-3-0-13 when math-w-2-3-0-21 holds
0
to this end , we use first-and second-order conditional random fields---as a sequence labeler we use conditional random fields
1
the component features are weighted to minimize a translation error criterion on a development set---the parameter weights are optimized with minimum error rate training
1
coreference resolution is a multi-faceted task : humans resolve references by exploiting contextual and grammatical clues , as well as semantic information and world knowledge , so capturing each of these will be necessary for an automatic system to fully solve the problem---coreference resolution is a challenging task , that involves identification and clustering of noun phrases mentions that refer to the same real-world entity
1
we use several classifiers including logistic regression , random forest and adaboost implemented in scikit-learn---we use the logistic regression classifier as implemented in the skll package , which is based on scikit-learn , with f1 optimization
1
in this paper , we propose an attention-based hierarchical neural network for discourse parsing---in this paper , we propose to use a hierarchical bidirectional long short-term memory ( bi-lstm ) network
1
in the context of the web 2.0 , the importance of social media has been constantly growing in the past years---we use rouge , an automatic evaluation metric that was originally used for summarization evaluation and was recently found useful for evaluating definitional question answering
0
a tag is a rewriting system that derives trees starting from a finite set of elementary trees---although tag is a class of tree rewriting systems , a derivation relation can be defined on strings in the following way
1
in this article , we are also concerned with improving tagging efficiency at test time---in this article , we are also concerned with improving tagging efficiency
1
mwes are defined as idiosyncratic interpretations that cross word boundaries---mwes consist of combinations of several words that show some idiosyncrasy
1
all the data were extracted from the penn treebank using the tgrep tools---the texts were pos-tagged , using the same tag set as in the penn treebank
1
a 4-gram language model is trained on the monolingual data by srilm toolkit---we use srilm toolkit to build a 5-gram language model with modified kneser-ney smoothing
1
the reason for choosing svms is that it currently is the best performing machine learning technique across multiple domains and for many tasks , including language identification---the reason for choosing svm is that it currently is the best performing machine learning technique across multiple domains and for many tasks , including language identification
1
li et al jointly models chinese pos tagging and dependency parsing , and report the best tagging accuracy on ctb---li et al and bohnet and nivre use joint models for pos tagging and dependency parsing , significantly outperforming their pipeline counterparts
1
following previous work , we design a blocked metropolis-hastings sampler that samples derivations per entire parse trees all at once in a joint fashion---since tag derivations are highly structured objects , we design a blocked metropolis-hastings sampler that samples derivations per entire parse trees all at once in a joint fashion
1
the release of the penn discourse treebank has advanced the development of english discourse relation recognition---the senses in wordnet are ordered according to the frequency data in the manually tagged resource semcor
0
ner is a sequence tagging task that consists in selecting the words that describe entities and recognizing their types ( e.g. , a person , location , company , etc . )---ner is defined as the computational identification and classification of named entities ( nes ) in running text
1
semantic parsing is the task of mapping natural language to machine interpretable meaning representations---semantic parsing is the task of mapping a natural language ( nl ) sentence into a completely formal meaning representation ( mr ) or logical form
1
since verbnet uses argument labels that are more consistent across verbs , we are able to demonstrate that these new labels are easier to learn---by taking advantage of verbnet 's more consistent set of labels , we can generate more useful role label annotations
1
text simplification ( ts ) is a monolingual text-to-text transformation task where an original ( complex ) text is transformed into a target ( simpler ) text---text simplification ( ts ) is the task of modifying an original text into a simpler version of it
1
they are undirected graphical models trained to maximize a conditional probability---these models are an instance of conditional random fields and include overlapping features
1
nouns , verbs , adjectives and adverbs are grouped into sets of cognitive synonyms , each expressing a distinct concept---large-scale knowledge bases like freebase , yago , nell can be useful in a variety of applications like natural language question answering , semantic search engines , etc
0
an important feature of the approach is the use of a supervised learning method , without the need for manual tagging of training data---the word embeddings are initialized with pre-trained word vectors using word2vec 2 and other parameters are randomly initialized including pos embeddings
0
these features were optimized using minimum error-rate training and the same weights were then used in docent---we then lowercase all data and use all unique headlines in the training data to train a language model with the srilm toolkit
0
the penn discourse treebank is a large corpus annotated with discourse relations ,---we used the srilm toolkit to train a 4-gram language model on the xinhua portion of the gigaword corpus , which contains 238m english words
0
xia et al automatically extracted conversion rules from a target treebank and proposed strategies to handle the case when more than one conversion rule are applicable---for acquisition of better conversion rules , xia et al proposed a method to automatically extract conversion rules from a target treebank
1
unlike previous work , pblm exploits the structure of its input , and its output consists of a vector per input word---but , in contrast to previous models , it relies on sequential nns to exploit the structure of the input text
1
we use glove word embeddings , which are 50-dimension word vectors trained with a crawled large corpus with 840 billion tokens---meanwhile , we adopt glove pre-trained word embeddings 5 to initialize the representation of input tokens
1
to the best of our knowledge , there is no previous work on approaching the offensive language problem using style transfer methods---relatedness measure computed in a multilingual space is able to acquire and leverage additional information from the multilingual representation , and thus be strengthened
0
we use glove vectors with 100 dimensions trained on wikipedia and gigaword as word embeddings---we use the pre-trained glove 50-dimensional word embeddings to represent words found in the glove dataset
1
we trained the statistical phrase-based systems using the moses toolkit with mert tuning---we used the moses mt toolkit with default settings and features for both phrase-based and hierarchical systems
1
for our purpose we use word2vec embeddings trained on a google news dataset and find the pairwise cosine distances for all words---in this work , we employ the toolkit word2vec to pre-train the word embedding for the source and target languages
1
zeng et al developed a deep convolutional neural network to extract lexical and sentence level features , which are concatenated and fed into the softmax classifier---zeng et al use a convolutional deep neural network to extract lexical features learned from word embeddings and then fed into a softmax classifier to predict the relationship between words
1
we implement logistic regression with scikit-learn and use the lbfgs solver---we create mwes with word2vec skipgram 1 and estimate w with scikit-learn
1
we apply sri language modeling toolkit to train a 4-gram language model with kneser-ney smoothing---we train a 4-gram language model on the xinhua portion of the gigaword corpus using the sri language toolkit with modified kneser-ney smoothing
1
further , we apply a 4-gram language model trained with the srilm toolkit on the target side of the training corpus---we propose an approach that consists in directly replacing unknown source terms , using source-language resources and models
0
we train 300 dimensional word embedding using word2vec on all the training data , and fine-turning during the training process---we adopt pretrained embeddings for word forms with the provided training data by word2vec
1
the follow-up needs to be related to the content of the previous interaction---which is required to be contextually relevant to the content of the previous interaction
1
parsers are reporting impressive numbers these days , but coordination remains an area with room for improvement---generic parameters are useful predictors of user satisfaction
0
at present , our implementation of the training and tagging components is based on the conditional random fields---in our experiments we use a publicly available implementation of conditional random fields
1
many existing active learning methods are based on selecting the most uncertain examples using various measures---in a practical spoken dialogue system called ovis
0
we obtain reasonable performance on complexquestions , and analyze the types of compositionality that are challenging for a web-based qa model---on complexquestions , a dataset designed to focus on compositional language , and find that our model obtains reasonable performance
1
framing is a sophisticated form of discourse in which the speaker tries to induce a cognitive bias through consistent linkage between a topic and a specific context ( frame )---framing is a political strategy in which politicians carefully word their statements in order to control public perception of issues
1
cite-p-18-1-3 proposed to use a tree-based constituency parsing model to handle nested entities---cite-p-18-1-0 proposed a cascading approach using multiple linear-chain crf models , each handling a subset of all the possible mention types
1
for sequence modeling in all three components , we use the long short-term memory recurrent neural network---biadsy et al presented a system that identifies dialectal words in speech through acoustic signals
0
another popular method for continuous sentence representation is based on the recursive neural network---the most noticeable models may be the recursive autoencoder neural network which builds the representation of a sentence from subphrases recursively
1
heilman et al combined unigram models with grammatical features and trained machine learning models for readability assessment---heilman et al combined a language modeling approach with grammar-based features to improve readability assessment for first and second language texts
1
experiments show that our proposed model outperforms the standard attention-based neural machine translation baseline---we provide an experimental study comparing ltag-based features
0
linguistically , metaphor is defined as a language expression that uses one or several words to represent another concept , rather than taking their literal meanings of the given words in the context ( cite-p-14-1-6 )---we use the sentiment pipeline of stanford corenlp to obtain this feature
0
the metamap program identifies all words and terms in a text which could be mapped onto a umls cui---the metamap program is available to map text to the concepts and semantic type
1