text: string (length 82 to 736 characters)
label: int64 (0 or 1)
A minimal parsing sketch for the rows below is given after the data.
in this paper we extended the spectral learning ideas to learn a simple yet powerful dependency parser---in this paper , we propose a spectral learning algorithm where latent states are not restricted to hmm-like distributions of modifier sequences
1
in this paper , we propose entity linking using densified knowledge graphs ( elden )---in this paper , we propose elden , an el system which increases nodes and edges of the kg
1
following the setup of duan et al , zhang and clark and huang and sagae , we split ctb5 into training , development , and test sets---we used the maximum entropy approach 5 as a machine learner for this task
0
in this paper , we propose multi-relational latent semantic analysis ( mrlsa ) which generalizes latent semantic analysis ( lsa ) for lexical semantics---in this paper , we propose multi-relational latent semantic analysis ( mrlsa ) , which strictly generalizes lsa
1
it has obvious advantage to model the compositional semantics and to capture the long distance dependencies between words---to represent the document , lstms have obvious advantage to model the compositional semantics and to capture the long distance dependencies between words
1
for language model , we use a trigram language model trained with the srilm toolkit on the english side of the training corpus---the english side of the parallel corpus is trained into a language model using srilm
1
we optimise the feature weights of the model with minimum error rate training against the bleu evaluation metric---then we perform minimum error rate training on validation set to give different features corresponding reasonable weights
1
stance detection is the task of automatically determining from the text whether the author of the text is in favor of , against , or neutral towards a proposition or target---stance detection is the task of automatically determining from text whether the author is in favor of the given target , against the given target , or whether neither inference is likely
1
for example , gross shows that whereas french dictionaries contain about 1,500 single-word adverbs there are over 5,000 multiword adverbs---for example , gross shows that dictionaries contain about 1,500 single-word adverbs but that french contains over 5,000 multiword adverbs
1
word embeddings are considered one of the key building blocks in natural language processing and are widely used for various applications---in dependency parsers , the structure of the sentence is represented as dependency trees consisting of directed dependency
0
a morphological analysis consists of a part-of-speech tag ( pos ) , possibly other morphological features , and a lemma ( basic form ) corresponding to this tag and features combination ( see table 1 for examples )---we assume that a morphological analysis consists of three processes : tokenization , dictionary lookup , and disambiguation
1
socher et al introduce a matrix-vector recursive neural network model that learns compositional vector representations for phrases and sentences---we use the skll and scikit-learn toolkits
0
for automatic evaluation , we employed bleu by following---for evaluation , we used the case-insensitive bleu metric with a single reference
1
for the tree-based system , we applied a 4-gram language model with kneserney smoothing using srilm toolkit trained on the whole monolingual corpus---we used the srilm toolkit to create 5-gram language models with interpolated modified kneser-ney discounting
1
semantic role labeling ( srl ) is the task of identifying the predicate-argument structure of a sentence---in our implementation , we train the stance classifier using svm light
0
as with , we train the language model on the penn treebank---this model is based on document vectorization using doc2vec
0
we used the moses toolkit to build an english-hindi statistical machine translation system---it can be used to search for semantically compatible candidate an- swers in document passages , thus greatly reducing the search space
0
for all submissions , we used the phrase-based variant of the moses decoder---we used moses , a phrase-based smt toolkit , for training the translation model
1
all models used interpolated modified kneser-ney smoothing---the language model is a 5-gram lm with modified kneser-ney smoothing
1
features in such a space may suffer from the data sparseness problem and thus have less discriminative power on unseen data---in a relatively high-dimensional feature space may suffer from the data sparseness problem and thus exhibit less discriminative power on unseen data
1
this would prefer analysis toward semantically appropriate word sequences---in the base model by capturing semantic plausibility of word sequences
1
the candidate with the highest probability was chosen as the target entity---the candidate answer with the highest probability will be selected as the target
1
in this work , we presented a novel bayesian decipherment approach that can effectively solve a variety of substitution ciphers---using the bayesian decipherment , we show for the first time a truly automated system that successfully solves the zodiac-408 cipher
1
for word embeddings , we use an in-house java re-implementation of word2vec to build 300-dimensional vector representations for all types that occur at least 10 times in our unannotated corpus---to obtain these features , we use the word2vec implementation available in the gensim toolkit to obtain word vectors with dimension 300 for each word in the responses
1
convolutional neural networks have obtained good results in text classification , which usually consist of convolutional and pooling layers---convolutional neural networks have recently achieved remarkably strong performance also on the practically important task of sentence classification
1
this maximum matching problem can be solved using the hungarian algorithm---prediction method demonstrates that further improvements in language modeling for word prediction are likely to appreciably increase communication rate
0
the smt system was tuned on the development set newstest10 with minimum error rate training using the bleu error rate measure as the optimization criterion---in this paper , we present a unified model for both word sense representation and disambiguation
0
additionally , a back-off 2-gram model with goodturing discounting and no lexical classes was built from the same training data , using the srilm toolkit---in this work , we present a novel task for grounded language understanding : disambiguating a sentence given a visual scene
0
in this paper , we present de riv b ase , a derivational resource for german based on a rule-based framework---in this paper , we describe the project of obtaining derivational knowledge for german
1
the first step of the method is to parse the source language string that is being translated---first step of the method is to parse the source language string that is being translated
1
in order to measure translation quality , we use bleu 7 and ter scores---we evaluate the translation quality using the case-insensitive bleu-4 metric
1
additionally , coreference resolution is a pervasive problem in nlp and many nlp applications could benefit from an effective coreference resolver that can be easily configured and customized---coreference resolution is the task of determining which mentions in a text refer to the same entity
1
we use pre-trained glove embeddings to represent the words---we use theano and pretrained glove word embeddings
1
we present a novel learning method for word embeddings designed for relation classification---we present a learning method for word embeddings specifically designed to be useful for relation classification
1
the experimental results demonstrate the effectiveness of our approach---experimental results have demonstrated the effectiveness of our approach
1
these experiments demonstrate that fbrnn achieves competitive results compared to the current state-of-the-art---experimental results confirm that fbrnn is competitive compared to the state-of-the-art
1
coreference resolution is a field in which major progress has been made in the last decade---although coreference resolution is a subproblem of natural language understanding , coreference resolution evaluation metrics have predominately been discussed in terms of abstract entities and hypothetical system errors
1
framenet is a knowledgebase of frames , describing prototypical situations---framenet is a comprehensive lexical database that lists descriptions of words in the frame-semantic paradigm
1
our empirical results show that eye gaze has a potential in improving automated language processing---experiments have shown that eye gaze is tightly linked to human language processing
1
the language models were interpolated kneser-ney discounted trigram models , all constructed using the srilm toolkit---we focused on identifying causal relations between events in a given text document
0
table 2 presents the results from the automatic evaluation , in terms of bleu and nist scores , of 4 system setups---table 1 presents the results from the automatic evaluation , in terms of bleu and nist test
1
the statistical significance test is performed using the re-sampling approach---long short term memory is a variant of recurrent neural network , which enables to address the gradient vanishing and exploding problems in rnn via introducing gate mechanism and memory cell
0
translation quality is measured by case-insensitive bleu on newstest13 using one reference translation---translation performances are measured with case-insensitive bleu4 score
1
we apply sri language modeling toolkit to train a 4-gram language model with kneser-ney smoothing---for probabilities , we trained 5-gram language models using srilm
1
in this study , we attempt to develop a boltzmann machine based undirected generative model for dialogue structure analysis---in this study , we examined our model via qualitative visualization and quantitative analysis
1
a 3-gram language model was trained from the target side of the training data for chinese and arabic , using the srilm toolkit---the pre-processed monolingual sentences will be used by srilm or berkeleylm to train a n-gram language model
1
it is possible to compute the moore-penrose pseudoinverse using the svd in the following way---it is possible to compute the moorepenrose pseudoinverse using the svd in the following way
1
information extraction ( ie ) is the process of finding relevant entities and their relationships within textual documents---information extraction ( ie ) is a main nlp aspects for analyzing scientific papers , which includes named entity recognition ( ner ) and relation extraction ( re )
1
from the introspection aspect , luo et al propose to select supportive law articles and use the articles to enhance the charge prediction accuracy---luo et al proposes a hierarchical attentional network to predict charges and extract relevant articles jointly
1
sarcasm is a pervasive phenomenon in social media , permitting the concise communication of meaning , affect and attitude---as with , we train the language model on the penn treebank
0
moreover , using temporal information together with semantic relatedness rescoring further improves word acquisition---semantic and temporal information are incorporated in statistical translation models for word acquisition
1
we used the uiuic dataset 5 which contains 5952 factoid questions from different sources---in this task we used the trec question dataset 10 which contains 5952 questions
1
we first propose a simple yet powerful semi-supervised discriminative model appropriate for handling large scale unlabeled data---sentiment analysis is a growing research field , especially on web social networks
0
while many idioms do have these properties , all idioms fall on the continuum from being compositional to being partly unanalyzable to completely non-compositional---while many idioms do have these properties , many idioms fall on the continuum from being compositional to being partly unanalyzable to completely noncompositional
1
in this paper , we suggest a framework for evaluating inference-rule resources---as a case study , we applied our method to evaluate algorithms for learning inference rules
1
for input representation , we used glove word embeddings---we represent terms using pre-trained glove wikipedia 6b word embeddings
1
word sense disambiguation ( wsd ) is a key enabling-technology that automatically chooses the intended sense of a word in context---word sense disambiguation ( wsd ) is a problem of finding the relevant clues in a surrounding context
1
automatic evaluation shows that our system is both less repetitive and more diverse than baselines---evaluation shows that the quality of the text produced by our model exceeds that of competitive baselines by a large margin
1
translation results are evaluated using the word-based bleu score---translation quality is evaluated by case-insensitive bleu-4 metric
1
relation extraction is the task of finding relations between entities in text , which is useful for several tasks such as information extraction , summarization , and question answering ( cite-p-14-3-7 )---relation extraction ( re ) is the task of recognizing relationships between entities mentioned in text
1
in this paper we propose a novel method for dealing with the word order problem that is efficient and does not rely on a source or target side parse being available---in this paper we propose a model that does not require either source or target side syntax while also preserving the efficiency of reordering
1
twitter is a huge microblogging service with more than 500 million tweets per day from different locations of the world and in different languages ( cite-p-8-1-9 )---twitter is a famous social media platform capable of spreading breaking news , thus most of rumour related research uses twitter feed as a basis for research
1
for the evaluation of the results we use the bleu score---we report the mt performance using the original bleu metric
1
we also evaluate a number of methods based directly on word vectors of the continuous bag-of-words model---the continuous bag-of-words approach described by mikolov et al is learned by predicting the word vector based on the context vectors
1
the results of automatic evaluation and manual assessment confirm the benefits of this design : our system is consistently ranked higher than non-hierarchical baselines---the results of automatic evaluation and manual assessment of title quality show that the output of our system is consistently ranked higher than that of non-hierarchical baselines
1
the language models are 4-grams with modified kneser-ney smoothing which have been trained with the srilm toolkit---the language model used was a 5-gram with modified kneserney smoothing , built with srilm toolkit
1
based on uima , it allows for efficient parallel processing of large volumes of text---twitter is a well-known social network service that allows users to post short 140 character status update which is called β€œ tweet ”
0
cheng , et al propose a similar method for translating unknown queries with web corpora for cross-language information retrieval---both cheng et al have explored language-mixed search-result pages for extracting translations of frequent unknown queries
1
we use the moses toolkit to train various statistical machine translation systems---we use the open-source moses toolkit to build four arabic-english phrase-based statistical machine translation systems
1
we pre-trained embeddings using word2vec with the skip-gram training objective and nec negative sampling---we train 300 dimensional word embedding using word2vec on all the training data , and fine-turning during the training process
1
event extraction is a task in information extraction where mentions of predefined events are extracted from texts---however , for generalized higher order graphical models , a lightweight decomposition is not at hand
0
we use the skip-gram model with negative sampling to learn word embeddings from a corpus of 400 million tweets also used in---we use the skipgram model with negative sampling to learn word embeddings on the twitter reference corpus
1
word sense disambiguation ( wsd ) is a key enabling technology that automatically chooses the intended sense of a word in context---word sense disambiguation ( wsd ) is a difficult natural language processing task which requires that for every content word ( noun , adjective , verb or adverb ) the appropriate meaning is automatically selected from the available sense inventory 1
1
stacking with auxiliary features ( swaf ) is an ensembling technique that combines outputs from multiple systems using their confidence scores and task-relevant features---we implement classification models using keras and scikit-learn
0
word sense disambiguation ( wsd ) is a particular problem of computational linguistics which consists in determining the correct sense for a given ambiguous word---word sense disambiguation ( wsd ) is the task of assigning sense tags to ambiguous lexical items ( lis ) in a text
1
experimentation on chinese-to-english translation demonstrates that all proposed approaches are able to improve the translation accuracy---theoretically , one can directly apply em to solve the problem
0
the experiment data used herein consisted of the 35 nouns from the semeval-2007 english lexical sample task---the experiment data used herein was the 35 nouns from the semeval-2007 english lexical sample task
1
the experiments were conducted with the scikit-learn tool kit---in this work , we present an ann architecture that combines the effectiveness of typical ann models to classify sentences
0
named entity recognition was first defined as recognizing proper names---named entity recognition was initially defined as recognizing proper names
1
similar approaches were applied in multiple other languages , including italian , german and basque---these systems have been created for english , portuguese , italian and german
1
sentiment analysis is the computational analysis of people ’ s feelings or beliefs expressed in texts such as emotions , opinions , attitudes , appraisals , etc . ( cite-p-11-3-3 )---sentiment analysis is a recent attempt to deal with evaluative aspects of text
1
language models are built using the sri-lm toolkit---language models were built using the srilm toolkit 16
1
twitter is a widely used microblogging platform , where users post and interact with messages , β€œ tweets ”---twitter is a social platform which contains rich textual content
1
language models were built using the sri language modeling toolkit with modified kneser-ney smoothing---the 5-gram kneser-ney smoothed language models were trained by srilm , with kenlm used at runtime
1
we use hsmq-learning for learning a hierarchy of generation policies---word sense disambiguation ( wsd ) is the problem of assigning a sense to an ambiguous word , using its context
0
word embeddings have recently led to improvements in a wide range of tasks in natural language processing---word embedding has been proven of great significance in most natural language processing tasks in recent years
1
in the restricted condition , all non-concat models perform near the cosine baseline , suggesting that in the standard setting they were memorizing antonyms of semantically similar words---in the restricted condition , all non-concat models perform near the cosine baseline , suggesting that in the standard setting
1
yannakoudakis et al formulate aes as a pair-wise ranking problem by ranking the order of pair essays---yannakoudakis et al formulated aes as a pairwise ranking problem by ranking the order of pair essays based on their quality
1
we integrate our proposed model into a state-of-the-art translation system and demonstrate the efficacy of our proposal in a large-scale chinese-to-english translation task---we use srilm to build 5-gram language models with modified kneser-ney smoothing
0
to address these problems , the sentiment vector space model ( s-vsm ) is proposed to represent song lyric document---the vectors are pre-trained using the skipgram model 1
0
we use 300-dimensional word embeddings from glove to initialize the model---third , we convert the stanford glove twitter model to word2vec and obtain the word embeddings
1
other examples include , qa-by-dossier with constraints , a method of improving qa accuracy by asking auxiliary questions related to the original question in order to temporally verify and restrict the original answer---other examples include qa-bydossier with constraints , a method of improving qa accuracy by asking auxiliary questions related to the original question in order to temporally verify and restrict the original answer
1
table 2 shows the blind test results using bleu-4 , meteor and ter---testing results in terms of bleu , lrscore and ter are shown in table 4
1
we use the stanford pos tagger to obtain the perspectives p and l---we use the stanford pos tagger to obtain the lemmatized corpora for the sre task
1
based on hypothesis 1 , we learn sense-based embeddings from a large data set , using the continuous skip-gram model---a key feature of our approach is the comparison of dependency relation paths attested in the framenet annotations and raw text
0
words were downcased and lemmatized using the wordnet lemmatizer in the nltk 2 toolkit---the english text was tokenized using the word tokenize routine from nltk
1
semantic role labeling ( srl ) is the task of labeling the predicate-argument structures of sentences with semantic frames and their roles ( cite-p-18-1-2 , cite-p-18-1-19 )---semantic role labeling ( srl ) is the task of labeling predicate-argument structure in sentences with shallow semantic information
1
we describe our experience with automatic alignment of sentences in parallel english-chinese texts---in this paper , we describe our experience with automatic alignment of sentences in parallel english-chinese texts
1
some researchers have found that transliteration is quite useful in proper name translation---some researchers have applied the rule of transliteration to automatically translate proper names
1
since the computation of full softmax is time consuming , the techniques of hierarchical softmax and negative sampling are proposed for approximation---information extraction ( ie ) is the task of generating structured information , often in the form of subject-predicate-object relation triples , from unstructured information such as natural language text
0
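The rows above follow a simple alternating layout (one text line, then its 0/1 label), so they can be parsed without special tooling. The sketch below is one possible way to do this in Python; it assumes the two schema lines at the top have been stripped, that every text field joins exactly two sentences with "---", and a hypothetical file name pairs_dump.txt. Reading label 1 as "the two sentences report the same method or claim" is an assumption from inspecting the rows, not a documented definition.

# Minimal parsing sketch for the alternating text/label dump shown above.
# Assumptions: the schema header lines have already been removed, each text
# line joins two sentences with "---", and the following line holds its 0/1 label.
from dataclasses import dataclass
from typing import List

@dataclass
class PairExample:
    sentence_a: str
    sentence_b: str
    label: int  # 0/1 as given in the dump; exact semantics assumed, not documented here

def parse_dump(lines: List[str]) -> List[PairExample]:
    """Turn alternating text/label lines into PairExample records."""
    examples = []
    for text_line, label_line in zip(lines[0::2], lines[1::2]):
        a, _, b = text_line.partition("---")
        examples.append(PairExample(a.strip(), b.strip(), int(label_line.strip())))
    return examples

if __name__ == "__main__":
    # "pairs_dump.txt" is a hypothetical file holding the rows above, header removed.
    with open("pairs_dump.txt", encoding="utf-8") as f:
        rows = [ln.rstrip("\n") for ln in f if ln.strip()]
    data = parse_dump(rows)
    print(len(data), data[0].label, data[0].sentence_a[:60])

Splitting each text field on "---" keeps the two sentences separate, which is the form a sentence-pair classifier would normally expect as input.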