Dataset schema (two columns):
  text  : string, lengths 82 to 736
  label : int64, values 0 and 1
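Each row below pairs a `text` field with a `label` on the following line. Judging from the examples, the `text` field holds two sentences joined by `---`, and the label appears to mark whether the two sides paraphrase each other (1) or are unrelated (0). A minimal parsing sketch, assuming that format (the field names `sentence1`/`sentence2` and the function name are illustrative, not part of the dataset):

```python
def parse_example(text, label):
    """Split a raw text field on the '---' separator and attach the label.

    Assumes the dataset convention seen below: two sentences joined by
    '---', plus an integer label (1 = paraphrase pair, 0 = unrelated).
    """
    left, sep, right = text.partition("---")
    if not sep:
        raise ValueError("expected two sentences joined by '---'")
    return {
        "sentence1": left.strip(),
        "sentence2": right.strip(),
        "label": int(label),
    }


# Example using a row from the data below:
row = ("wordnet is a byproduct of such an analysis---"
       "to compare translations , the bleu measure is used")
example = parse_example(row, 0)
print(example["label"])  # prints 0
```

Note that `str.partition` splits on only the first occurrence of `---`, which matters if a sentence itself ever contained that token; a stricter loader might validate that exactly one separator is present.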
our aligner not only outperforms existing aligners in each task , but also approaches top systems for the extrinsic tasks---our aligner outperforms existing aligners , and even a naive application of the aligner approaches state-of-the-art performance in each extrinsic task
1
this thesaurus has been applied to many tasks relying on word-based similarity , including document and image retrieval systems---wordnet has been used in many tasks relying on word-based similarity , including document and image retrieval systems
1
wordnet is a byproduct of such an analysis---to compare translations , the bleu measure is used
0
pang et al , turney , we are interested in fine-grained subjectivity analysis -the identification , extraction and characterization of subjective language at the phrase or clause level---pang et al , turney , we are interested in fine-grained subjectivity analysis , which is concerned with subjectivity at the phrase or clause level
1
in other cases , these modules are integrated by means of statistical or uncertainty reasoning teclmiques---in other cases , these modules are integrated by means of statistical or uncertainty reasoning techniques
1
this paper presents a simple , robust and ( almost ) unsupervised dictionary-based method , qwn-ppv ( q-wordnet as personalized pageranking vector ) to automatically generate polarity lexicons---this paper presents a simple , robust and ( almost ) unsupervised dictionary-based method , qwordnet-ppv ( qwordnet by personalized pagerank vector ) to automatically generate polarity lexicons
1
we used the penn treebank to perform empirical experiments on the proposed parsing models---we have used penn tree bank parsing data with the standard split for training , development , and test
1
it has been empirically shown that word embeddings can capture semantic and syntactic similarities between words---word embeddings capture syntactic and semantic properties of words , and are a key component of many modern nlp models
1
semantic role labeling ( srl ) is defined as the task to recognize arguments for a given predicate and assign semantic role labels to them---mihalcea et al used various text based similarity measures , including wordnet and corpus based similarity methods , to determine if two phrases are paraphrases
0
we evaluate our method on a range of languages taken from the conll shared tasks on multilingual dependency parsing---distributional semantic models represent lexical meaning in vector spaces by encoding corpora derived word co-occurrences in vectors
0
in recent years , neural word embeddings have proved very effective in improving various nlp tasks ( e.g . part-of-speech tagging , chunking , named entity recognition and semantic role labeling ) ( cite-p-22-1-2 )---in recent years , vector space models ( vsms ) have been proved successful in solving various nlp tasks including named entity recognition , part-of-speech tagging , parsing , semantic role-labeling
1
relation extraction is a traditional information extraction task which aims at detecting and classifying semantic relations between entities in text ( cite-p-10-1-18 )---relation extraction is a challenging task in natural language processing
1
we use glove vectors with 200 dimensions as pre-trained word embeddings , which are tuned during training---we use glove vectors with 100 dimensions trained on wikipedia and gigaword as word embeddings
1
as inputs we use a random sample of sentences from the penn treebank and represent each word as a 100d glove embedding---for sentences , we tokenize each sentence by stanford corenlp and use the 300-d word embeddings from glove to initialize the models
1
similarly , turian et al collectively used brown clusters , cw and hlbl embeddings , to improve the performance of named entity recognition and chucking tasks---we use the long short-term memory architecture for recurrent layers
0
based on this , we propose the first endto-end incremental parser that jointly parses at both constituency and discourse levels---we propose the first endto-end discourse parser that jointly parses in both syntax and discourse levels , as well as the first syntacto-discourse treebank
1
in this paper , we propose multi-relational latent semantic analysis ( mrlsa ) which generalizes latent semantic analysis ( lsa ) for lexical semantics---in order to identify such useful patterns , for each pattern we build a graph following
0
we apply canonical lexical head projection rules in order to lexicalize syntactic trees---in addition , we utilize the pre-trained word embeddings with 300 dimensions from for initialization
0
according to pickering and garrod , the act of engaging in a dialog facilitates the use of similar representations at all linguistic levels , and these representations are shared between speech production and comprehension processes---pickering and garrod propose that the automatic alignment at many levels of linguistic representation is key for both production and comprehension in dialogue , and facilitates interaction
1
coreference resolution is the process of linking multiple mentions that refer to the same entity---recently , zhou et al proposed a query expansion framework based on individual user profiles
0
some examples are the colemanliau index , which was specifically designed for automated assessment of readability , the smog formula and the fry readability formula---word segmentation is the first step of natural language processing for japanese , chinese and thai because they do not delimit words by whitespace
0
similar to the issue-response relationship , shrestha et al proposed methods to identify the question-answer pairs from an email thread---shrestha and mckeown proposed a supervised rule induction method to detect interrogative questions in email conversations based on part-of-speech features
1
in this paper , we propose a method for constructing a dictionary of lexical variants of known words that facilitates lexical normalisation via simple string substitution ( e.g . tomorrow for tmrw )---in this paper , we describe a method for automatically constructing a normalisation dictionary that supports normalisation of microblog text through direct substitution of lexical variants
1
garera et al use a vector space model with dependency links as dimensions instead of cooccurring words---garera , et al defines context vectors on the dependency tree rather than using adjacency
1
sentence compression is the task of compressing long , verbose sentences into short , concise ones---qian and liu presented a multi-step learning method using weighted m 3 n model for disfluency detection
0
we study how to summarize email conversations based on the conversational cohesion and the subjective opinions---of this paper , we study how to make use of the subjective opinions expressed in emails
1
we use the 300-dimensional pre-trained word2vec 3 word embeddings and compare the performance with that of glove 4 embeddings---srilm toolkit is used to build these language models
0
this metagrammar can generate all possible combinations of these analyses automatically , creating different versions of a grammar that cover the same phenomena---to the metagrammar , the engineer can automatically generate versions of the grammar containing different combinations of previous analyses
1
other recent examples of the utility of finite-state constraints for parsing pipelines include glaysher and moldovan , djordjevic et al , hollingshead and roark , and roark and hollingshead---to get a dictionary of word embeddings , we use the word2vec tool 2 and train it on the chinese gigaword corpus
0
we used the sri language modeling toolkit with kneser-kney smoothing---as the text databases available to users become larger and more heterogeneous , genre becomes increasingly important for computational linguistics
0
after special handling of unknown words and model ensembling , we obtain the best score reported to date on this task with bleu=40.4---after utilizing unknown word processing and model ensemble of three models , we obtained a bleu score of 40 . 4 , an improvement of 2 . 9 bleu points
1
word embeddings are initialised using pre-trained glove vectors , and their weights are fixed during training---on the other hand , smt systems require large quantities of parallel text
0
for both systems , we used the berkeley aligner with default settings to align the parallel data---the text is a joke that relies on the ambiguity of phrasing
0
we used minimum error rate training to tune the feature weights for maximum bleu on the development set---we used minimum error rate training for tuning on the development set
1
unlike in , we consider all punctuation characters as hfws---unlike , we consider all punctuation characters as hfws
1
an empirical evaluation using ntcir test questions showed that the framework significantly improves baseline answer selection performance---empirical results from testing on ntcir factoid questions show a 40 % performance improvement in chinese answer selection
1
the probabilistic verb class model underlying the semantic classes is trained by a combination of the em algorithm and the mdl principle , providing soft clusters with two dimensions ( verb senses and subcategorisation frames with selectional preferences ) as a result---for the word-embedding based classifier , we use the glove pre-trained word embeddings
0
we train a 4-gram language model on the xinhua portion of the english gigaword corpus using the srilm toolkits with modified kneser-ney smoothing---we show the effectiveness of partial-label learning in digesting the encoded knowledge from wikipedia data
0
we used moses with the default configuration for phrase-based translation---in the translation tasks , we used the moses phrase-based smt systems
1
named entity disambiguation ( ned ) is the task of linking mentions of entities in text to a given knowledge base , such as freebase or wikipedia---we use the logistic regression implementation of liblinear wrapped by the scikit-learn library
0
relation extraction is a core task in information extraction and natural language understanding---daumé proposed a feature space transformation method for domain adaptation based on a simple idea of feature augmentation
0
the training module , shown in figure 1 , is based on the language modeler presented in---the base language evaluation submodule , shown in figure 3 , is a modified version of the evaluation process in
1
we observe that a good question is a natural composition of interrogatives , topic words , and ordinary words---we used the logistic regression implementation in scikit-learn for the maximum entropy models in our experiments
0
we show that the proposed method offers superior accuracy over rule-based methods , as well as significant improvement in search recall---we demonstrate superior performance over rule-based methods , as well as a significant reduction in the number of queries that yield null search
1
in a unified architecture for nlp that learns features relevant to the tasks at hand given very limited prior knowledge is presented---in a unified architecture for natural language processing that learns features relevant to the tasks at hand given very limited prior knowledge is presented
1
we trained a 4-gram language model on the xinhua portion of gigaword corpus using the sri language modeling toolkit with modified kneser-ney smoothing---faruqui et al introduce a graph-based retrofitting method where they post-process learned vectors with respect to semantic relationships extracted from additional lexical resources
0
zeng et al proposed a cnn network integrating with position embeddings to make up for the shortcomings of cnn missing contextual information---zeng et al exploit a convolutional neural network to extract lexical and sentence level features for relation classification
1
we used pos tags predicted by the stanford pos tagger---weights are optimized by mert using bleu as the error criterion
0
experimental results show that the proposed approach consistently achieves great success---the experimental results reveal that our approach achieves significant improvement
1
we used svm classifier that implements linearsvc from the scikit-learn library---from the combination point of view , our proposed scheme can be considered as a novel system combination method which goes beyond the existing post-decoding style
0
we presented a method of improving japanese dependency parsing by using large-scale statistical information---in this paper , we present a method for improving the accuracy of japanese dependency analysis
1
we used the stanford parser to extract dependency features for each quote and response---we used the stanford parser to generate dependency trees of sentences
1
the results of our experiments on two datasets show that our system was able to outperform other logic-based systems---we use srilm toolkit to train a trigram language model with modified kneser-ney smoothing on the target side of training corpus
0
in arabic , there is a reasonable number of sentiment lexicons but with major deficiencies---arabic is a morphologically rich language , in which a word carries not only inflections but also clitics , such as pronouns , conjunctions , and prepositions
1
in a second baseline model , we also incorporate 300-dimensional glove word embeddings trained on wikipedia and the gigaword corpus---we use glove 300-dimension embedding vectors pre-trained on 840 billion tokens of web data
1
the bleu metric and the closely related nist metric , along with wer and per , have been widely used by many machine translation researchers---in semeval 2018 task9 , our results , achieve 1st on spanish , 2nd on italian , 6th on english
0
to evaluate segment translation quality , we use corpus level bleu---as a baseline system for our experiments we use the syntax-based component of the moses toolkit
0
so far , we have crowdsourced a dataset of more than 14k comparison paragraphs comparing entities from nine major categories---through comparison comprehension , we have crowdsourced a dataset of more than 14k comparison paragraphs comparing entities from nine broad categories
1
we use the opensource moses toolkit to build a phrase-based smt system---we use the moses software to train a pbmt model
1
word alignment models were first introduced in statistical machine translation---word alignments were first introduced as an intermediate result of statistical machine translation systems
1
the pre-trained word embeddings were learned with the word2vec toolkit on a domain corpus which consists of about 490,000 student essays---the word embedding is pre-trained using the skip-gram model in word2vec and fine-tuned during the learning process
1
callison-burch et al tackle the problem of unseen phrases in smt by adding source language paraphrases to the phrase table with appropriate probabilities---for this , we used the combination of the entire swedish-english europarl corpus and the smultron data
0
in this study , we adopt the event extraction task defined in the bionlp 2009 shared task as a model information extraction task---the biomedical event extraction task in this work is adopted from the genia event extraction subtask of the well-known bionlp shared task ,
1
latent dirichlet allocation is a widely adopted generative model for topic modeling---lda is a widely used topic model , which views the underlying document distribution as having a dirichlet prior
1
the log-linear parameter weights are tuned with mert on a development set to produce the baseline system---all the feature weights and the weight for each probability factor are tuned on the development set with minimum-error-rate training
1
in the last phase , we will look at ways of extending our lexicon and ontology to less familiar words---with the improved the grammar and ontology , we will use the knowledge learned to extend our model to words not in lexeed , using definition
1
we describe how the attentional state properties modeled by centering can account for these differences---attentional state properties modeled by centering can account for these differences
1
in addition to language models , heilman et al and schwarm and ostendorf also use some syntactic features to estimate the grade level of texts---in this paper , we attempted to define a measure of distributional semantic content
0
we use the sri language model toolkit to train a 5-gram model with modified kneser-ney smoothing on the target-side training corpus---we apply sri language modeling toolkit to train a 4-gram language model with kneser-ney smoothing
1
in this paper , we introduce the task of sentence dependency tagging---in this paper , we investigated sentence dependency tagging of question and answer ( qa ) threads
1
coreference resolution is a set partitioning problem in which each resulting partition refers to an entity---katiyar and cardie presented a standard lstm-based sequence labeling model to learn the nested entity hypergraph structure for an input sentence
0
we use glove vectors with 200 dimensions as pre-trained word embeddings , which are tuned during training---we use glove vectors with 100 dimensions trained on wikipedia and gigaword as word embeddings , which we do not optimize during training
1
semantic role labeling ( srl ) is a task of analyzing predicate-argument structures in texts---semantic role labeling ( srl ) is the task of labeling predicate-argument structure in sentences with shallow semantic information
1
we collect monolingual data for each language from the machine translation workshop data , 5 europarl and eu bookshop corpus---we used the google news pretrained word2vec word embeddings for our model
0
our baseline is an in-house phrase-based statistical machine translation system very similar to moses---our baseline system is an standard phrase-based smt system built with moses
1
we have also presented an approach to learning the edit operations and a classification-based approach---our experiments show empirical evidence that the directionality hypothesis with rsps can indeed be used to filter incorrect inference rules
0
given the word alignment in figure 1 , table 1 demonstrates the difference between hierarchical rules in chiang and hd-hrs defined here---our baseline is a phrase-based mt system trained using the moses toolkit
0
using espac medlineplus , we trained an initial phrase-based moses system---we evaluated the reordering approach within the moses phrase-based smt system
1
like soricut and marcu , they formulate the discourse segmentation task as a binary classification problem of deciding whether a word is the boundary or no-boundary of edus---paradigmatic gaps are puzzling because they seemingly contradict the highly productive nature of inflectional systems
0
in an experimental study by cite-p-13-1-2 , each essay was scored by 16 professional raters on a scale of 1 to 6 , allowing plus and minus scores as well , quantified as 0.33 – thus , a score of 4- is rendered as 3.67---in an experimental study by cite-p-13-1-2 , each essay was scored by 16 professional raters on a scale of 1 to 6 , allowing plus and minus scores as well , quantified as 0 . 33 –
1
we build a baseline error correction system , using the moses smt system---we then conduct training by maximizing f using iterative expectationmaximization algorithm
0
sarcasm is a sophisticated speech act which commonly manifests on social communities such as twitter and reddit---sarcasm is a sophisticated form of communication in which speakers convey their message in an indirect way
1
in this paper , we extent pv by introducing concept information---in the experiments , we evaluate the cse models
1
cite-p-18-1-3 proposed to use a tree-based constituency parsing model to handle nested entities---in this paper , we name the problem of choosing the correct word from the homophone set
0
this has the benefit of reducing the total number of parameters in our model---we find a reduction in the total number of parameters
1
yang and kirchhoff , 2006 ) decomposed the unknown source words at the test time into morphological subwords and translated these subwords that are unknown to the decoder by using phrasebased back-off models---word embeddings have been used to help to achieve better performance in several nlp tasks
0
in this work , we tackle addressee and response selection for multi-party conversation : given a context , predict an addressee and response---in this work , we tackle addressee and response selection for multi-party conversation , in which systems are expected to select whom they address
1
we follow the hyperparameter setting from vaswani et al , limiting the embeddings to 512 dimensions---supervised methods shows that while supervised methods generally outperform the unsupervised ones
0
we used a 5-gram language model with modified kneser-ney smoothing implemented using the srilm toolkit---for the language model , we used srilm with modified kneser-ney smoothing
1
sentiment analysis is the task of identifying the polarity ( positive , negative or neutral ) of review---semantic role labeling ( srl ) is a task of analyzing predicate-argument structures in texts
0
additionally , coreference resolution is a pervasive problem in nlp and many nlp applications could benefit from an effective coreference resolver that can be easily configured and customized---we enhance the neural model with discourse chunk features that were previously found useful for this task
0
named entity recognition ( ner ) is the task of identifying named entities in free text—typically personal names , organizations , gene-protein entities , and so on---named entity recognition ( ner ) is a fundamental information extraction task that automatically detects named entities in text and classifies them into predefined entity types such as person , organization , gpe ( geopolitical entities ) , event , location , time , date , etc
1
we propose a novel neural couplet machine to tackle this problem based on neural network structures---for couplet generation , we propose different neural models for different concerns
1
as a baseline for this comparison , we use morfessor categories-map---we propose a cascaded linear model for joint chinese
0
sentiment analysis is the study of the subjectivity and polarity ( positive vs. negative ) of a text ( cite-p-7-1-10 )---sentiment analysis is the task of identifying positive and negative opinions , sentiments , emotions and attitudes expressed in text
1
tiny sensors within this field allow the inference of articulator positions and velocities to within 1 mm of error---tiny sensors within this field induce small electric currents whose energy allows the inference of articulator positions and velocities to within 1 mm of error
1
by exploiting generic patterns , system recall substantially increases with little effect on precision---taglda is a representative latent topic model by extending latent dirichlet allocation
0
we used a 5-gram language model with modified kneser-ney smoothing implemented using the srilm toolkit---we used srilm to build a 4-gram language model with kneser-ney discounting
0
the language model component uses the srilm lattice-tool for weight assignment and nbest decoding---the srilm language modelling toolkit was used with interpolated kneser-ney discounting
1
there have not been clear results on whether adding more layers to nlms helps---and there have not been clear results on whether having more layers helps
1