text: string (length 82 to 736 characters)
label: int64 (0 or 1)
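Each text row joins two sentence excerpts with the literal separator --- , and the line that follows carries the integer label; from the samples, 1 appears to mark two excerpts describing the same method or resource, and 0 an unrelated pairing. Below is a minimal Python parsing sketch under those assumptions; the file name train.txt and the Pair record are hypothetical, not part of the dump.

from dataclasses import dataclass
from typing import Iterable, Iterator

@dataclass
class Pair:
    sentence_a: str
    sentence_b: str
    label: int  # 0 or 1, as given in the dump

def parse_dump(lines: Iterable[str]) -> Iterator[Pair]:
    # A row containing "---" is a sentence pair; the next non-empty line,
    # when it is "0" or "1", is its label. The schema header is skipped
    # automatically because none of its lines contain "---".
    rows = [ln.strip() for ln in lines if ln.strip()]
    for i, row in enumerate(rows[:-1]):
        if "---" in row and rows[i + 1] in ("0", "1"):
            left, _, right = row.partition("---")
            yield Pair(left.strip(), right.strip(), int(rows[i + 1]))

# usage (hypothetical file name):
# with open("train.txt", encoding="utf-8") as f:
#     pairs = list(parse_dump(f))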
we use the treebanks from the conll shared tasks on dependency parsing for evaluation---we used the dataset from the conll shared task for cross-lingual dependency parsing
1
for our implementation we use 300-dimensional part-of-speech-specific word embeddings v_i generated using the gensim word2vec package---for the embeddings trained on stack overflow corpus , we use the word2vec implementation of gensim toolkit
1
in this paper , we tackle the above-mentioned issue by introducing a novel model for joint mention extraction and classification---in this work , we have introduced a novel model for the task of joint modeling of mention
1
particularly , the learning-based system enriched with more features does not yield much improvement over the rule-based system---into the learning-based system yields a minor improvement over the rule-based system
1
these are automatically annotated with state-of-the-art taggers of standard language for slovene and croatian and serbian---transliteration is a key building block for multilingual and cross-lingual nlp since it is useful for user-friendly input methods and applications like machine translation and cross-lingual information retrieval
0
we perform the analysis with data from 110 different language pairs drawn from the europarl project---the model parameters of word embedding are initialized using word2vec
0
we implemented this model using the srilm toolkit with the modified kneser-ney discounting and interpolation options---we used a 5-gram language model with modified kneser-ney smoothing implemented using the srilm toolkit
1
in addition , a 5-gram lm with kneser-ney smoothing and interpolation was built using the srilm toolkit---for given images , we aim to generate more natural japanese captions
0
an empirical evaluation using ntcir test questions showed that the framework significantly improves baseline answer selection performance---chinese and japanese factoid questions show that the framework significantly improved answer selection performance
1
however , we have been unable to use unlabeled data to improve the accuracy---we trained a 4-gram language model with kneser-ney smoothing and unigram caching using the sri-lm toolkit
0
for example have shown that adding human-provided emotional scaffolding to an automated reading tutor increases student persistence---for example , have shown that adding human-provided emotional scaffolding to an automated reading tutor increases student persistence
1
we then postprocessed the parses to obtain stanford dependencies---we extract the corresponding feature from the output of the stanford parser
1
one for a task of interest , such as named entity recognition , is critical for practitioners and researchers---for a task of interest , such as named entity recognition ( ner ) , it is crucial for practitioners and researchers
1
our mt system is a phrase-based system that is developed using the moses statistical machine translation toolkit---our machine translation system is a phrase-based system using the moses toolkit
1
a trigram language model with modified kneser-ney discounting and interpolation was used as produced by the srilm toolkit---a survey of the most relevant approaches to sa on twitter can be seen in ,
0
we trained a specific language model using srilm from each of these corpora in order to estimate n-gram log-probabilities---we used the target side of the parallel corpus and the srilm toolkit to train a 5-gram language model
1
we used the srilm toolkit to train a 4-gram language model on the english side of the training corpus---we used a 5-gram language model trained on 126 million words of the xinhua section of the english gigaword corpus , estimated with srilm
1
for the svm classifier we use the python scikitlearn library---dependency parsing is a fundamental task for language processing which has been investigated for decades
0
we use the collapsed tree formalism of the stanford dependency parser---we extract syntactic dependencies using stanford parser and use its collapsed dependency format
1
for all three classifiers , we used the word2vec 300d pre-trained embeddings as features---constituent and dependency parses are obtained by the stanford parser
0
the recent conll shared tasks have been focusing on semantic dependency parsing along with the traditional syntactic dependency parsing---in particular , the recent shared tasks of conll 2008 tackled joint parsing of syntactic and semantic dependencies
1
we find that the learner ’ s uncertainty is a robust predictive criterion that can be easily applied to different learning models---and that uncertainty is a robust predictive criterion that can be easily applied to different learning models
1
questions concerning people , dates , numerical quantities etc , which can generally be answered by a short sentence or phrase---questions concerning people , dates , etc , which can generally be answered by a short sentence or phrase
1
we first use the popular toolkit word2vec 1 provided by mikolov et al to train our word embeddings---we use the pre-trained word2vec embeddings provided by mikolov et al as model input
1
jacy is a type of hand-crafted japanese grammar based on hpsg that can compute a detailed semantic representation---jacy is a hand-crafted japanese hpsg grammar that provides semantic information as well as linguistically motivated analysis of complex constructions
1
we use a word2vec model pretrained on 100 billion words of google news---we trained a 5-gram sri language model using the corpus supplied for this purpose by the shared task organizers
0
word sense disambiguation ( wsd ) is the task of determining the correct meaning for an ambiguous word from its context---word sense disambiguation ( wsd ) is the task of determining the correct meaning ( “ sense ” ) of a word in context , and several efforts have been made to develop automatic wsd systems
1
for language model , we used sri language modeling toolkit to train a 4-gram model with modified kneser-ney smoothing---we used the sri language modeling toolkit to train a five-gram model with modified kneser-ney smoothing
1
in recent years , searches and processing of data beyond the limiting level of surface words are becoming more important than it used to be---as a result , searches and processing of data beyond the limiting level of surface words are becoming increasingly important
1
we use the evaluation criterion described in---we use the same evaluation criterion as described in
1
sentiment analysis is the study of the subjectivity and polarity ( positive vs. negative ) of a text ( cite-p-7-1-10 )---sentiment analysis is an nlp task that deals with extraction of opinion from a piece of text on a topic
1
in this paper , we propose a novel method to model sememe information for learning better word representations---in this paper , we show that word sememe information can improve word representation learning
1
socher et al introduce a matrix-vector recursive neural network model that learns compositional vector representations for phrases and sentences---socher et al extend the recursive neural networks with matrix-vector spaces , and use mv-rnn to learn representations along the constituency tree for relation classification
1
we use pre-trained 100 dimensional glove word embeddings---we used crfsuite and the glove word vector
1
the minimum error rate training was used to tune the feature weights---the weights for these features are optimized using mert
1
sun and xu enhanced the segmentation results by interpolating the statistics-based features derived from unlabeled data to a crfs model---sun and xu utilized the features derived from large-scaled unlabeled text to improve chinese word segmentation
1
we have also presented an approach to learning the edit operations and a classification-based approach---we also present an approach where the edit operations are trained from data
1
the training data are tagged with pos tags and lemmatized with treetagger---we tested it on : for english , it outperforms the best published method we are aware of
0
bannard and callison-burch introduced the pivoting approach , which relies on a 2-step transition from a phrase , via its translations , to a paraphrase candidate---a 4-gram language model is trained on the monolingual data by srilm toolkit
0
as discussed in section 4 , k bonferroni is the appropriate estimator of the number of cases one algorithm outperforms another---as discussed in section 4 , k bonferroni is the appropriate estimator of the number of cases
1
we use the transformer model which translates through an encoder-decoder framework , with each layer involving an attention network followed by a feed-forward network---we use the transformer model from vaswani et al which is an encoder-decoder architecture that relies mainly on a self-attention mechanism
1
however , these agents are often ignored and abused in collaborative learning scenarios involving multiple students---in collaborative learning scenarios causes the agents to compete for the attention of the students
1
the approach computes the highest probability permutation of the input bag of words under an n-gram language model---approach finds the highest probability permutation of the input bag of words under an n-gram language model
1
we use the opensource moses toolkit to build a phrase-based smt system---we use the popular moses toolkit to build the smt system
1
for example , dagan and itai carried out wsd experiments using monolingual corpora , a bilingual lexicon and a parser for the source language---dagan and itai proposed an approach to wsd using monolingual corpora , a bilingual lexicon and a parser for the source language
1
pointer + init means we initialize the model with the lm weights---init means we initialize the model with the lm weights
1
for the classification task , we use pre-trained glove embedding vectors as lexical features---in this task , we use the 300-dimensional 840b glove word embeddings
1
to obtain the confidence interval of the bleu score , we resort to the bootstrap resampling described by koehn---to compute the statistical significance of the performance differences between qe models , we use paired bootstrap resampling following koehn
1
we estimated 5-gram language models using the sri toolkit with modified kneser-ney smoothing---in this paper , we re-embed pre-trained word embeddings with a stage of manifold learning
0
we rely on the partial tree kernel to handle feature engineering over the structural representations---our kernel is based on the partial tree kernel proposed by moschitti
1
semantic parsing is the problem of mapping natural language strings into meaning representations---semantic parsing is a domain-dependent process by nature , as its output is defined over a set of domain symbols
1
we used the srilm toolkit to train a 4-gram language model on the english side of the training corpus---we trained a trigram language model on the chinese side , with the srilm toolkit , using the modified kneser-ney smoothing option
1
semantic role labeling ( srl ) is a kind of shallow sentence-level semantic analysis and is becoming a hot task in natural language processing---semantic role labeling ( srl ) is the task of identifying the arguments of lexical predicates in a sentence and labeling them with semantic roles ( cite-p-13-3-3 , cite-p-13-3-11 )
1
word sense disambiguation ( wsd ) is a key enabling technology that automatically chooses the intended sense of a word in context---word sense disambiguation ( wsd ) is the task of identifying the correct meaning of a word in context
1
this noisy labeled data causes poor extraction performance---topic models implicitly use document level co-occurrence information
0
tsvetkov et al presented a language-independent approach to metaphor identification---the method of tsvetkov et al used both concreteness features and hand-coded domain information for words
1
translation results are evaluated using the word-based bleu score---translation scores are reported using case-insensitive bleu with a single reference translation
1
our baseline system is a state-of-the-art smt system , which adapts bracketing transduction grammars to phrasal translation and equips itself with a maximum entropy based reordering model---our baseline is a state-of-the-art smt system which adapts bracketing transduction grammars to phrasal translation and augments itself with a maximum entropy based reordering model
1
the performance of semantic parsing can be potentially improved by using discriminative reranking , which explores arbitrary global features---performance of semantic parsing can be potentially improved by using discriminative reranking , which explores arbitrary global features
1
in a semantic role labeling task , the syntax and semantics are correlated with each other , that is , the global structure of the sentence is useful for identifying ambiguous semantic roles---the language models are 4-grams with modified kneser-ney smoothing which have been trained with the srilm toolkit
0
word embeddings have shown promising results in nlp tasks , such as named entity recognition , sentiment analysis or parsing---for the second step , sentence selection adopts a particular strategy to choose content
0
svms have proven to be an effective means for text categorization as they are capable to robustly deal with high-dimensional , sparse feature spaces---svms have been shown to be robust in classification tasks involving text where the dimensionality is high
1
informally , a compound is a combination of two or more words that function as a single unit of meaning---the model is slightly modified from the word-lattice-based character bigram model of lee et al
0
to measure the translation quality , we use the bleu score and the nist score---we evaluate the translation quality using the case-insensitive bleu-4 metric
1
in this paper we present a formal computational framework for modeling manipulation actions---in this paper , we are concerned with manipulation actions , that is actions performed by agents ( humans or robots )
1
we report the mt performance using the original bleu metric---in this paper , we propose a label-aware double transfer learning framework ( ladtl ) for cross-specialty ner , so that a medical ner system designed for one specialty
0
shen et al presented a conditional variational framework for generating specific responses based on specific attributes---similarly , shen et al presented a conditional variational framework to generate specific responses based on the dialog context
1
we created a new large benchmark data set by utilizing a new annotation scheme and several filtering strategies for crowdsourced data---we create a new large crowdsourced benchmark data set containing 9 , 111 argument pairs multi-labeled with 17 categories which is improved by local and global filtering techniques
1
a letter-trigram language model with sri lm toolkit was then built using the target side of ne pairs tagged with the above position information---a 4-gram language model was trained on the target side of the parallel data using the srilm toolkit
1
experimental results show that the proposed methods are effective to improve the retrieval performance , and their performances are comparable to other top-performing systems in the trec medical records track---we build upon our previous approach for joint concept disambiguation and clustering
0
in this work , we employ the toolkit word2vec to pre-train the word embedding for the source and target languages---we initialize our word vectors with 300-dimensional word2vec word embeddings
1
in this paper , we introduce a framework for incorporating declarative knowledge in word problem solving---in this paper , we develop declarative rules which govern the translation of natural language description of these concepts
1
we used a 5-gram language model with modified kneser-ney smoothing , built with the srilm toolkit---we used srilm to build a 4-gram language model with kneser-ney discounting
1
in order to prevent overfitting , we used early stopping based on the performance on the development set---zha proposes a method for simultaneous keyphrase extraction and text summarization by using only the heterogeneous sentence-to-word relationships
0
furthermore , we train a 5-gram language model using the sri language toolkit---we use srilm toolkit to build a 5-gram language model with modified kneser-ney smoothing
1
for this model , we use a binary logistic regression classifier implemented in the liblinear package , coupled with the ovo scheme---classifier we use the l2-regularized logistic regression from the liblinear package , which we accessed through weka
1
in this paper , we presented a system for identifying opinion subgroups in arabic online discussions---coreference resolution is the problem of identifying which noun phrases ( nps , or mentions ) refer to the same real-world entity in a text or dialogue
0
our approach explicitly determines the words which are equally significant with a consistent polarity across source and target domains---for the cluster- based method , we use word2vec 2 which provides the word vectors trained on the google news corpus
0
we study the problem of textual relation embedding with distant supervision---in this work , we study the problem of embedding textual relations , defined as the shortest dependency path
1
semantic role labeling ( srl ) has been defined as a sentence-level natural-language processing task in which semantic roles are assigned to the syntactic arguments of a predicate ( cite-p-14-1-7 )---semantic role labeling ( srl ) is a task of automatically identifying semantic relations between predicate and its related arguments in the sentence
1
evaluation on a standard data set shows that the performance of our approach is consistently superior to previously reported methods---evaluation on a standard data set shows that our method consistently outperforms the best performing previously reported method , which is supervised
1
a lattice is a directed acyclic graph that is used to compactly represent the search space for a speech recognition system---a lattice is a directed acyclic graph , a subclass of non-deterministic finite state automata
1
stance detection is the task of classifying the attitude . previous work has assumed that either the target is mentioned in the text or that training data for every target is given---stance detection is the task of automatically determining whether the authors of a text are against or in favour of a given target
1
a 5-gram language model with kneser-ney smoothing is trained using srilm on the target language---we use the stanford pos-tagger and named entity recognizer
0
we trained a neural encoder-decoder network using the attention model from to perform neural machine translation---in a second baseline model , we also incorporate 300-dimensional glove word embeddings trained on wikipedia and the gigaword corpus
0
correct stress placement is important in text-to-speech systems , in terms of both the overall accuracy and the naturalness of pronunciation---correct stress placement is important in text-to-speech systems because it affects the accuracy of human word recognition
1
for the training of the smt model , including the word alignment and the phrase translation table , we used moses , a toolkit for phrase-based smt models---we used the moses mt toolkit with default settings and features for both phrase-based and hierarchical systems
1
however , we use a large 4-gram lm with modified kneser-ney smoothing , trained with the srilm toolkit ( stolcke , 2002 ) and ldc english gigaword corpora---we also use 200 million words from ldc arabic gigaword corpus to generate a 5-gram language model using srilm toolkit ( stolcke , 2002 ) translation to be our source in each case
1
translation results are evaluated using the word-based bleu score---in this paper , we present a training method for building a dependency parser for a resource-poor language
0
throughout this work , we use the datasets from the conll 2011 shared task 2 , which is derived from the ontonotes corpus---we apply our model to the english portion of the conll 2012 shared task data , which is derived from the ontonotes corpus
1
the srilm toolkit was used for training the language models using kneser-ney smoothing---by representing each document as a graph-of-words , we are able to model these relationships
0
we use the hierarchical phrase-based machine translation model from the open-source cdec toolkit , and datasets from the workshop on machine translation---moreover , throughout this paper we use the hierarchical phrase-based translation system , which is based on a synchronous contextfree grammar model
1
the method adopted to achieve this goal is the equivalence class method---the approach taken is innovative , since it is based on the equivalence class method
1
we used kenlm with srilm to train a 5-gram language model based on all available target language training data---we used europarl and wikipedia as parallel resources and all of the finnish data available from wmt to train five-gram language models with srilm and kenlm
1
we then train a single multi-class linear-kernel support vector machine using liblinear with the language identifiers as labels---from this point of view , conventional automatic evaluation metrics of translation quality disregard word order mistakes
0
we have used latent dirichlet allocation model as our main topic modeling tool---we used latent dirichlet allocation as our exploratory tool
1
in an example shown above , “ sad ” is an emotion word , and the cause of “ sad ” is “ i lost my phone ”---in an example shown above , “ sad ” is an emotion word , and the cause of “ sad ” is “
1
we presented a supervised framework that learns automatic pyramid scores and uses them for optimization-based summary extraction---we present a new supervised framework that learns to estimate automatic pyramid scores and uses them for optimization-based extractive
1
we used glove word embeddings with 300 dimensions pre-trained using commoncrawl to get a vector representation of the evidence sentence---we use 100-dimension glove vectors which are pre-trained on a large twitter corpus and fine-tuned during training
1
in this paper , we present a novel discriminative model for query spelling correction---in this paper is a novel unified way to directly optimize the search phase of query spelling correction
1