text: string (length 82 to 736)
label: int64 (0 or 1)
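The rows below alternate one text line and one label line; each text line holds two sentences joined by ---, and judging from the rows themselves, label 1 appears to mark pairs that restate the same point while 0 marks unrelated pairs. A minimal parsing sketch in Python follows, assuming the rows (with the schema header above stripped) are saved to a plain-text file; pairs.txt is a hypothetical name.

    # Minimal sketch, not an official loader: parse the alternating
    # text/label dump into records. Assumes the schema header lines
    # have been removed so the file contains only data rows.
    def load_pairs(path):
        with open(path, encoding="utf-8") as f:
            lines = [ln.strip() for ln in f if ln.strip()]
        records = []
        # Lines alternate: text, label, text, label, ...
        for text, label in zip(lines[0::2], lines[1::2]):
            left, right = text.split("---", 1)
            records.append({
                "sentence_a": left.strip(),
                "sentence_b": right.strip(),
                "label": int(label),  # 1 appears to mark matching pairs, 0 unrelated
            })
        return records

    if __name__ == "__main__":
        for row in load_pairs("pairs.txt")[:3]:
            print(row["label"], "|", row["sentence_a"][:60])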
translation performance was measured by case-insensitive bleu---the evaluation metric for the overall translation quality was case-insensitive bleu4
1
however , as liang et al note , and we confirm , sequence-based predictors are often not necessary when an appropriately rich feature set is used---however , while structure does provide valuable information , liang et al have shown that gains provided by structured prediction can be largely recovered by using a richer feature set
1
this data split is different from other similar text classification shared tasks which provide much more training than test instances---we obtained both phrase structures and dependency relations for every sentence using the stanford parser
0
zens and ney show that itg constraints yield significantly better alignment coverage than the constraints used in ibm statistical machine translation models on both german-english and french-english---for the n-gram lm , we use srilm toolkits to train a 4-gram lm on the xinhua portion of the gigaword corpus
0
semantic parsing is the task of mapping a natural language query to a logical form ( lf ) such as prolog or lambda calculus , which can be executed directly through database query ( zettlemoyer and collins , 2005 , 2007 ; haas and riezler , 2016 ; kwiatkowksi et al. , 2010 )---semantic parsing is the task of mapping natural language sentences to a formal representation of meaning
1
to that end , we use the state-of-the-art phrase based statistical machine translation system moses---for our baseline we use the moses software to train a phrase based machine translation model
1
le and mikolov extended the word embedding learning model by incorporating paragraph information---le and mikolov extends the neural network of word embedding to learn the document embedding
1
we obtained parse trees using the stanford parser , and used jacana for word alignment---in this paper , the extensions do not affect the complexity or the basic tenets of the two-level model
0
much of the additional work on generative modeling of 1-to-n word alignments is based on the hmm model---that shows how only the baroni dataset provides consistent results
0
moreover , the integration of visual , acoustic , and linguistic features can improve significantly over the use of one modality at a time , with incremental improvements observed for each added modality---use of linguistic , acoustic , and visual modalities allows us to better sense the sentiment being expressed as compared to the use of only one modality at a time
1
the detection model is implemented as a conditional random field , with features over the morphology and context---we present a specialized dataset that specifically tests a human ’ s coreference
0
time normalization is the task of translating natural language expressions of time to computer-readable forms---time normalization is a crucial part of almost any information extraction task that needs to place entities or events along a timeline
1
the english side of the parallel corpus is trained into a language model using srilm---a kn-smoothed 5-gram language model is trained on the target side of the parallel data with srilm
1
moreover , our approach uses no hand-crafted features or sentiment lexicons---of previous years , we do not rely on hand-crafted features , sentiment lexicons
1
we used kenlm with srilm to train a 5-gram language model based on all available target language training data---our trigram word language model was trained on the target side of the training corpus using the srilm toolkit with modified kneser-ney smoothing
1
neural machine translation has recently become the dominant approach to machine translation---neural network models for machine translation are now largely successful for many language pairs and domains
1
this model shows a significant improvement over the state-of-the-art hierarchical phrase-based system---this model significantly outperforms the state-of-the-art hierarchical phrasebased model
1
k-best iterative a * algorithm can be several times or orders of magnitude faster than the state-of-the-art k-best decoding algorithm---a * algorithm can be several times or orders of magnitude faster than the state-of-the-art k-best decoding algorithm
1
we used the implementation of the scikit-learn 2 module---we use the skll and scikit-learn toolkits
1
spooren and degand argue that low agreement scores may contribute to the fact that reliability scores are often not reported in corpus-based discourse studies---spooren and degand note that sufficiently reliable annotation appears to be an issue within the field of discourse coherence
1
we use the selectfrommodel 4 feature selection method as implemented in scikit-learn---we implemented linear models with the scikit learn package
1
experimental results show that our f-scores of 85.45 on chinese and 92.62 on english outperform the previously best-reported systems by 1.21 and 0.52 , respectively---experimental results show that our final results , an f-score of 92 . 62 on english and 85 . 45 on chinese , outperform the previously best-reported systems by 0 . 52
1
for automated scoring of unrestricted spontaneous speech , speech proficiency has been evaluated primarily on aspects of pronunciation , fluency , vocabulary and language usage but not on aspects of content and topicality---moses is a phrase-based system with lexicalized reordering
0
we tune model weights using minimum error rate training on the wmt 2008 test data---we tune weights by minimizing bleu loss on the dev set through mert and report bleu scores on the test set
1
wei and gulla , 2010 ) modeled the hierarchical relation between product aspects---wei and gulla modeled the hierarchical relation between product aspects
1
we trained a 5-grams language model by the srilm toolkit---we used srilm to build a 4-gram language model with kneser-ney discounting
1
in this paper , we have successfully applied the discriminative reranking to machine translation---in this paper , we will present some novel discriminative reranking techniques applied to machine translation
1
besides , we used the character language model that and proposed , on the vlsp dataset and our vtner dataset---in particular , we integrated character language model that and proposed , with our system
1
word sense has been successfully used in many natural language processing tasks , such as machine translation---as a fundamental task in natural language processing , wsd can benefit applications such as machine translation and information retrieval
1
our framework has made clear advancements with respect to existing structured topic models---we report bleu and ter evaluation scores
0
we previously resort to a heuristic measure to segment noun phrases---to segment each noun phrase , we use non-parametric bayesian language
1
a resulting summary consists of abstractive sentences representing the phrasal query information and the overall content of the conversation---a resulting summary consists of one or more mini-summaries , each on a subtopic from the discussion
1
akiva and koppel investigated this limitation and presented a generic unsupervised method---the authors of akiva and koppel addressed the drawbacks of the above approach by proposing a generic unsupervised approach for abdd
1
word sense disambiguation ( wsd ) is the task of determining the correct meaning or sense of a word in context---with the convolutional neural network , we summarize the information of a phrase pair and its context , and further compute the pair ’ s matching score
0
sentiment analysis ( sa ) is a hot-topic in the academic world , and also in the industry---sentiment analysis ( sa ) is a fundamental problem aiming to allow machines to automatically extract subjectivity information from text ( cite-p-16-5-8 ) , whether at the sentence or the document level ( cite-p-16-3-3 )
1
all preprocessing was performed using the stanford corenlp toolkit---extraction of pos tags was performed using the postaggerannotator from the stanford corenlp suite
1
in algorithm 1 , we consistently use the sparse averaged perceptron algorithm as the “ learn ” function---in algorithm 1 , we consistently use the sparse averaged perceptron algorithm
1
we use the stanford pos tagger to obtain the lemmatized corpora for the sre task---to generate dependency links , we use the stanford pos tagger 18 and the malt parser
1
finally , the ape system was tuned on the development set , optimizing ter with minimum error rate training---we use 300d glove vectors trained on 840b tokens as the word embedding input to the lstm
0
the unique characteristics of tree-adjoining grammars , its elementary objects found in the lexicon ( extended trees ) and the derivational history of derived trees ( also a tree ) , require a specially crafted interface in which the perspective has shifted from a string-based to a tree-based system---and the distinction between derived tree and its derivational history ( also a tree ) , require a specially crafted interface in which the perspective must be shifted from a string-based to a tree-based system
1
first , the training data for the parser is projectivized by applying a number of lifting operations and encoding information about these lifts in arc labels---first , the training data for the parser is projectivized by applying a minimal number of lifting operations and encoding information about these lifts in arc labels
1
we compute the spearman correlation coefficient between the similarity scores given by the embedding models and those given by human annotators---we compute the spearman correlation between the human-labeled scores and similarity scores computed by embeddings
1
word alignments were induced using a hidden markov model based alignment model initialized with bilexical parameters from ibm model 1---word alignments were induced from the hmmbased alignment model , initialized with the bilexical parameters of ibm model 1
1
in this paper , we address the task of cross-cultural deception detection---in this paper , we addressed the task of deception detection
1
the language models were built using srilm toolkits---the language models were trained using srilm toolkit
1
we trained a 5-grams language model by the srilm toolkit---the language models in our systems are trained with srilm
1
rather , our focus is the integration of multiple tools into a single pipeline---our focus is the hierarchical structure of a sentence : each sentence consists of chunks , and each chunk consists of words
1
in our study of similes in tweets , we found that 92 % of similes are open similes so the property must be inferred---in our study of similes in tweets , we found that 92 % of similes are open similes
1
anderson et al construct semantic models using visual data and show a high correlation to brain activation patterns from fmri---however , the experiments in anderson et al failed to detect differential interactions of semantic models with brain areas
1
we use a cws-oriented model modified from the skip-gram model to derive word embeddings---we use skipgram model to train the embeddings on review texts for k-means clustering
1
coreference resolution is the task of determining when two textual mentions name the same individual---coreference resolution is the task of determining which mentions in a text are used to refer to the same real-world entity
1
the translation selection task may also be modified slightly to output a ranked list of translations---experimental results demonstrate that our proposed method outperforms three kb-qa baseline methods
0
to do this , we use the standard topic modeling technique , lda---we use the term-sentence matrix to train a simple generative topic model based on lda
1
we employ the data selection method of , which builds upon---we employ the data selection method , which is inspired by
1
however , sdp is a special structure in which every two neighbor words are separated by a dependency relations---prettenhofer and stein proposed a cross-language structural correspondence learning method to induce language-independent features by using word translation oracles
0
for spanish-english and italian-english , we choose to use treetagger 9 for preprocessing , as in---for spanish-english and italian-english , we choose to use treetagger 6 for preprocessing , as in
1
we employed the glove as the word embedding for the esim---we used crfsuite and the glove word vector
1
for language model scoring , we use the srilm toolkit training a 5-gram language model for english---we build an open-vocabulary language model with kneser-ney smoothing using the srilm toolkit
1
we use the mallet implementation of conditional random fields---as a sequence labeler we use conditional random fields
1
twitter is a popular microblogging service which provides real-time information on events happening across the world---twitter is a huge microblogging service with more than 500 million tweets per day 1 from different locations in the world and in different languages
1
previous work has already regarded ner as a knowledge intensive task---previous work shows that ner is a knowledge intensive task
1
specifically , we employ the seq2seq model with attention implemented in opennmt---we use an nmt-small model from the opennmt framework for the neural translation
1
we used a script from with 89 , 500 merge operations---we give an efficient polynomial-time algorithm to calculate unigram bleu on confusion networks , but show that even small generalizations of this data
0
lin and he propose the joint sentiment topic model to model the dependency between sentiment and topics---as a pivot language , we can build a word alignment model for l1 and l2
0
we adopt this solution , according to , since it is simple and effective---we adopted this solution , according to , since it is simple and effective
1
we use the stanford pos-tagger and name entity recognizer---we tag the source language with the stanford pos tagger
1
we then evaluate the effect of word alignment on machine translation quality using the phrase-based translation system moses---for this experiment , we train a standard phrase-based smt system over the entire parallel corpus
1
moreover , our approach uses no hand-crafted features or sentiment lexicons---sentiment analysis is a research area where does a computational analysis of people ’ s feelings or beliefs expressed in texts such as emotions , opinions , attitudes , appraisals , etc . ( cite-p-12-1-3 )
0
this has been done by representing the word meaning in context as a point in a high-dimensional semantics space---indeed , müller et al and silfverberg et al show that sub-tag dependencies improve the performance of linear taggers
0
in addition , we propose two novel models which combine the best of both residual learning and lstm---madamira is a system developed for morphological analysis and disambiguation of arabic text
0
a minimum of this function can be found using the em algorithm---hence we use the expectation maximization algorithm for parameter learning
1
for the generation of the parse trees we used the stanford parser---we employed the stanford parser to produce parse trees
1
we use the mert algorithm for tuning and bleu as our evaluation metric---we measure translation quality via the bleu score
1
loglinear weighs were estimated by minimum errorrate training on the tune partition---next , we use dropout as a regularization technique for reducing overfitting in neural networks
0
the conclusion here is that none of the prior methods for named-entity disambiguation is robust enough to cope with such difficult inputs---here is that none of the prior methods for named-entity disambiguation is robust enough to cope with such difficult inputs
1
le and mikolov , 2014 ) proposed the paragraph vector that learns fixed-length representations from variable-length pieces of texts---before we conclude , we briefly describe other research challenges we are actively working on in order to improve the quality of the literature
0
the parameters of the log-linear model are tuned by optimizing bleu on the development data using mert---maximum phrase length is set to 10 words and the parameters in the log-linear model are tuned by mert
1
for word-level embedding e w , we utilize pre-trained , 300-dimensional embedding vectors from glove 6b---we use glove vectors with 200 dimensions as pre-trained word embeddings , which are tuned during training
1
this leads to a straightforward account of the semantics of attitude verbs---contributions combined significantly improves unlabeled dependency accuracy : 90 . 82 % to 92 . 13 %
0
the statistics for these datasets are summarized in settings we use glove vectors with 840b tokens as the pre-trained word embeddings---neg-finder successfully removes the necessity of including manually crafted supervised knowledge
0
for the semantic language model , we used the srilm package and trained a tri-gram language model with the default goodturing smoothing---for the fluency and grammaticality features , we train 4-gram lms using the development dataset with the sri toolkit
1
in order to reduce the cost of pragmatics , vikner and jensen apply the qualia structure of the possessee noun and type-shift even a non-inherently relational np 2 into a relational noun---instead of selective binding , vikner and jensen type-shift the possessor noun using one of the qualia roles to explain the meaning of the genitive phrases following partee
1
traditional mt metrics such as bleu are based on a comparison of the translation hypothesis to one or more human references---in this work , we present a generic discriminative phrase pair extraction framework that can integrate multiple features
0
following , we minimize the objective by the diagonal variant of adagrad with minibatchs---on the output layer , rsvm s optimize the sequence-level max-margin training criterion used by structured support vector machines
0
semantic role labeling ( srl ) is a task of automatically identifying semantic relations between predicate and its related arguments in the sentence---we trained a 4-gram language model on this data with kneser-ney discounting using srilm
0
relation extraction ( re ) is the task of extracting instances of semantic relations between entities in unstructured data such as natural language text---relation extraction ( re ) is the task of determining semantic relations between entities mentioned in text
1
for the fst representation , we used the opengrm-ngram language modeling toolkit and used an n-gram order of 4 , with kneser-ney smoothing---we have used the srilm with kneser-ney smoothing for training a language model of order five and mert for tuning the model with development data
1
random forest is an ensemble method that learns many classification trees and predicts an aggregation of their result---the word embedding is pre-trained using the skip-gram model in word2vec and fine-tuned during the learning process
0
one such source of cognitive information is gaze behaviour---we trained a 5-gram language model on the xinhua portion of gigaword corpus using the srilm toolkit
0
our proposed cp-decomposition method can operate on edge-weighted graphs---we present a novel algorithm for the cp-decomposition
1
all weights are initialized by the xavier method---the parameters are initialized by the techniques described in
1
following , we use gru as the recurrent unit in this paper---in grammar , a part-of-speech ( pos ) is a linguistic category of words , which is generally defined by the syntactic or morphological behavior of the word in question
0
syntactic language models have the potential to fill this modelling gap---syntactic language models try to overcome the limitation to a local n-gram context
1
in this paper , we are interested in explicitly modeling sentiment knowledge for translation---in this paper , we take a lexicon-based , unsupervised approach to considering sentiment consistency for translation
1
mann and thompson introduce rhetorical structure theory , which was originally developed during the study of automatic text generation---mann and thompson introduce rhetorical structure theory , which was originally developed during the study of automatic text generation
1
the skipgram is a feed-forward network with localist input and output layers , and one hidden layer which determines the dimensionality of the final vectors---however , skip-gram is a discriminative model ( due to the use of negative sampling ) , not generative
1
the best model achieved an overall wer improvement of 10 % relative to the 3-gram baseline---to be the only parse , the reduction in ppl — relative to a 3-gram baseline
1
the training corpus was parsed by the stanford parser---therefore , the training corpus was parsed by the stanford parser
1
importantly , word embeddings have been effectively used for several nlp tasks , such as named entity recognition , machine translation and part-of-speech tagging---word embeddings have also been effectively employed in several tasks such as named entity recognition , adjectival scales and text classification
1
second , we utilize word embeddings 3 to represent word semantics in dense vector space---then , we use word embedding generated by skip-gram with negative sampling to convert words into word vectors
1