Columns: text (string, lengths 82 to 736 characters); label (int64, values 0 and 1).
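Each record below is a pair of sentences separated by ---, followed on the next line by its integer label (1 appears to mark pairs that describe the same work or claim, 0 unrelated pairs). A minimal parsing sketch, assuming the raw dump is saved locally as pairs.txt (a hypothetical filename):

```python
# Minimal sketch: parse this dump into (sentence_a, sentence_b, label) records.
# Assumes the text is stored as "pairs.txt" (hypothetical name); the schema
# line at the top is skipped because it contains no "---"-separated pair.

def load_pairs(path="pairs.txt"):
    records = []
    with open(path, encoding="utf-8") as f:
        lines = [ln.strip() for ln in f if ln.strip()]
    i = 0
    while i + 1 < len(lines):
        # A data record is a "---"-separated pair followed by a 0/1 label line.
        if "---" in lines[i] and lines[i + 1] in {"0", "1"}:
            left, _, right = lines[i].partition("---")
            records.append((left.strip(), right.strip(), int(lines[i + 1])))
            i += 2
        else:
            i += 1  # skip header or schema lines that are not data records
    return records

if __name__ == "__main__":
    pairs = load_pairs()
    print(len(pairs), "records loaded")
```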
by adding word-knowledge features and refining the inference , we improve the performance of a state-of-the-art system of ( cite-p-19-1-1 ) by 3 muc , 2 b 3 and 2 ceaf f1 points on the non-transcript portion of the ace 2004 dataset---by adding word-knowledge features and using learning-based multi-sieve approach , we improve the performance of the state-of-the-art system of ( cite-p-19-1-1 ) by 3 muc , 2 b 3 and 2 ceaf f1 points
1
under this framework , we introduce a wide range of composition models which we evaluate empirically on a sentence similarity task---under this framework , we introduce novel composition models which we compare empirically against previous work
1
hearst used a small number of regular expressions over words and part-of-speech tags to find examples of the hypernym relation---relation extraction is the task of predicting semantic relations over entities expressed in structured or semi-structured text
0
klein and manning presented an unlexicalized parser that eliminated all lexicalized parameters---klein and manning presented an unlexicalized pcfg parser that eliminated all the lexicalized parameters
1
the proposed models empirically show consistent improvement over the previous methods in both the bleu and err evaluation metrics---that empirically shows significant improved performances in comparison with the previous approaches
1
in several tasks , fasttext obtains performance on par with recently proposed methods inspired by deep learning , while being much faster---fasttext is often on par with deep learning classifiers in terms of accuracy , and many orders of magnitude faster
1
we used the srilm toolkit to create 5-gram language models with interpolated modified kneser-ney discounting---we build an open-vocabulary language model with kneser-ney smoothing using the srilm toolkit
1
we notice erratic behavior when optimizing sparse feature weights with m 2 and offer partial solutions---sentiment analysis is the process of identifying and extracting subjective information using natural language processing ( nlp )
0
the novelty of our approach lies in the feature generation and weighting , using not only single words and ngrams as features but also skipgrams---grammar induction is a task within the field of natural language processing that attempts to construct a grammar of a given language solely on the basis of positive examples of this language
0
shimbo and hara considered many features for coordination disambiguation and automatically optimized their weights , which were heuristically determined in kurohashi and nagao , using a discriminative learning model---shimbo and hara and hara et al considered many features for coordination disambiguation and automatically optimized their weights , which were heuristically determined in kurohashi and nagao , by using a discriminative learning model
1
sentiment analysis is the computational analysis of people ’ s feelings or beliefs expressed in texts such as emotions , opinions , attitudes , appraisals , etc . ( cite-p-11-3-3 )---sentiment analysis is a nlp task that deals with extraction of opinion from a piece of text on a topic
1
word-level measures were not able to differentiate between different senses of one word , while sense-level measures could even increase correlation when shifting to sense similarities---word-level measures were not able to differentiate between different senses of one word , while sense-level measures actually increase correlation when shifting to sense similarities
1
hu and liu proposed a statistical approach to capture object features using association rules---hu and liu proposed a technique based on association rule mining to extract product features
1
it is thus crucial to be able to determine accurately how similar two documents are by defining a document similarity measure---as a graph-of-words , we are able to model these relationships and then determine how similar two documents are
1
the irstlm toolkit was used to build the 5-gram language model---the irstlm toolkit is used to build ngram language models with modified kneser-ney smoothing
1
the data and leaderboard are available at http://lic.nlp.cornell.edu/nlvr---leader board are available at http://lic.nlp.cornell.edu/nlvr
1
abstractive summarization is the ultimate goal of document summarization research , but previously it is less investigated due to the immaturity of text generation techniques---in this work , we explore the task of acquiring and incorporating external evidence to improve extraction accuracy
0
we evaluated these summarisation approaches with the rouge-1 method , a widely used summarisation evaluation metric that correlates well with human evaluation---feature weights are tuned using minimum error rate training on the 455 provided references
0
the incremental parsing process of our parser is based on the shift-reduce parsers of sagae and lavie and wang et al , with slight modifications---our transition-based parser is based on a study by zhu et al , which adopts the shift-reduce parsing of sagae and lavie and zhang and clark
1
tiedemann proposed a cache-model to enforce consistent translation of phrases across the document---for example , collobert et al used a feed-forward neural network to effectively identify entities in a newswire corpus by classifying each word using contexts within a fixed number of surrounding words
0
predicate models such as framenet are core resources in most advanced nlp tasks , such as question answering , textual entailment or information extraction---predicate models such as framenet , verbnet or propbank are core resources in most advanced nlp tasks , such as question answering , textual entailment or information extraction
1
wikipedia is the central infrastructure for knowledge curation , as exemplified by freebase and wikification---wikipedia , as the largest comprehensive online encyclopedia , is the most used corpus for creating entity-aware resources such as yago , dbpedia and freebase
1
for most language pairs in the world , large bilingual corpora are unavailable , which causes a bottleneck for machine translation on such language pairs---nevertheless , such large bilingual corpora are unavailable for most language pairs in the world , which causes a bottleneck for both of the smt and nmt machine translation methods
1
more recently , neural networks have become prominent in word representation learning---convolutional neural networks have been shown to be effective in modeling natural language semantics
1
automated annotation of social behavior in conversation is necessary for large-scale analysis of real-world conversational data---annotation of conversation can power adaptive intervention in collaborative learning settings
1
ma et al extended the model using time series to capture the variation of features over time---ma et al adapted features from earlier studies and proposed to model them over time
1
to generate the textual view of each document , we combine the benefits of both word2vec and tf-idf---to get a dictionary of word embeddings , we use the word2vec tool 2 and train it on the chinese gigaword corpus
1
paraphrases can be viewed as bidirectional entailment rules---we used standard classifiers available in scikit-learn package
0
kilicoglu and bergler apply a linguistically motivated approach to the same classification task by using knowledge from existing lexical resources and incorporating syntactic patterns---kilicoglu and bergler proposed a linguistically motivated approach based on syntactic information to semi-automatically refine a list of hedge cues
1
word embedding models are aimed at learning vector representations of word meaning---the matrix factorization approach builds word embeddings by factorizing wordcontext co-occurrence matrices
1
in our experiment , word embeddings were 200-dimensional as used in , trained on gigaword with word2vec---we used the google news pretrained word2vec word embeddings for our model
1
learning large and accurate resources of entailment rules is essential in many semantic inference applications---performing textual inference is in the heart of many semantic inference applications
1
multi-task learning is a common approach for neural domain adaptation---more recently , neural networks have become prominent in word representation learning
1
for this task , we use glove pre-trained word embedding trained on common crawl corpus---several authors investigate neural network models that learn a vector of latent variables to represent each word
0
by adding these retrieved parallel sentences to already available human translated parallel corpora we were able to improve the bleu score on the test set by almost 2.5 points---by adding these retrieved parallel sentences to already available human translated parallel corpora we were able to improve the bleu score on the test set
1
sentiment analysis ( sa ) is a field of knowledge which deals with the analysis of people ’ s opinions , sentiments , evaluations , appraisals , attitudes and emotions towards particular entities ( liu , 2012 )---sentiment analysis ( sa ) is a hot-topic in the academic world , and also in the industry
1
bahdanau et al proposed an attentional encoder-decoder architecture for machine translation---bahdanau et al propose a neural translation model that learns vector representations for individual words as well as word sequences
1
we extend this approach from phrase-based translation to syntax-based translation by generalizing the evaluation metrics for partial translations to handle tree-structured derivations in a way inspired by inside-outside algorithm---which beyonds the capability of phrase-based mt , we extend the search-aware tuning framework from phrase-based mt to syntax-based mt , in particular the hierarchical phrase-based translation model
1
for language model , we used sri language modeling toolkit to train a 4-gram model with modified kneser-ney smoothing---relation extraction is a crucial task in the field of natural language processing ( nlp )
0
our second approach is based on a notion of feature coverage---for our second method , we develop the concept of feature coverage
1
we use the 300-dimensional pre-trained word2vec 3 word embeddings and compare the performance with that of glove 4 embeddings---for this task , we use glove pre-trained word embedding trained on common crawl corpus
1
the circles denote fixations , and the lines are saccades---circles denote events , squares denote arguments , solid arrows represent event-event relations , and dashed arrows represent event-argument relations
1
we tie the input embeddings of both the encoder and the decoder , as well as the softmax weights---in the decoder , we tie the embeddings with the output softmax layer
1
lattices are learned from a dataset of automatically-annotated definitions from wikipedia---the srilm toolkit was used to build the trigram mkn smoothed language model
0
coreference resolution is the task of partitioning a set of mentions ( i.e . person , organization and location ) into entities---coreference resolution is the next step on the way towards discourse understanding
1
to evaluate the full abstract generation system , the bleu score is computed with human abstracts as reference---the system output is evaluated using the meteor and bleu scores computed against a single reference sentence
1
our framework is motivated by distant supervision for learning relation extraction models---in this paper , we present an algorithm that transforms an lcfrs into a strongly equivalent form in which all productions have rank
0
to remedy this problem , we propose a neural model which automatically induces features sensitive to multi-predicate interactions exclusively from the word sequence information of a sentence---kalchbrenner et al introduced a dynamic k-max pooling to handle variable length sequences
0
in an example shown above , “ sad ” is an emotion word , and the cause of “ sad ” is “ i lost my phone ”---in an example shown above , “ sad ” is an emotion word , and the cause of “ sad ” is “ i lost my phone ”
1
the decoder and encoder word embeddings are of size 500 , the encoder uses a bidirectional lstm layer with 1k units to encode the source side---the decoder and encoder word embeddings are of size 620 , the encoder uses a bidirectional layer with 1000 lstms to encode the source side
1
a pun is the exploitation of the various meanings of a word or words with phonetic similarity but different meanings---a pun is a form of wordplay , which is often profiled by exploiting polysemy of a word or by replacing a phonetically similar sounding word for an intended humorous effect
1
to alleviate this shortcoming , we performed smoothing of the phrase table using the good-turing smoothing technique---to compensate this shortcoming , we performed smoothing of the phrase table using the goodturing smoothing technique
1
bayesian inference methods have become popular in natural language processing---bayesian inference has been widely used in natural language processing
1
the data come from the conll-x and conll 2007 shared tasks---the data format is based on conll shared task on dependency parsing
1
word sense disambiguation ( wsd ) is the task to identify the intended sense of a word in a computational manner based on the context in which it appears ( cite-p-13-3-4 )---the process of identifying the correct meaning , or sense of a word in context , is known as word sense disambiguation ( wsd )
1
for word embeddings , we use an in-house java re-implementation of word2vec to build 300-dimensional vector representations for all types that occur at least 10 times in our unannotated corpus---word representations to learn word embeddings from our unlabeled corpus , we use the gensim im-plementation of the word2vec algorithm
1
we adopted the case-insensitive bleu-4 as the evaluation metric---we use case-insensitive bleu as evaluation metric
1
coreference resolution is a well known clustering task in natural language processing---we used moses tokenizer 5 and truecaser for both languages
0
the srilm toolkit is used to train 5-gram language model---a tri-gram language model is estimated using the srilm toolkit
1
we follow zhang and clark to integrate search and learning---while the notion of scene construction is not new , our insight is that this can be done with a simple “ knowledge graph ” representation
0
latent dirichlet allocation is a representative of topic models---costa-jussa and fonollosa considered the source reordering as a translation task which translates the source sentence into reordered source sentence
0
stance detection is a difficult task since it often requires reasoning in order to determine whether an utterance is in favor of or against a specific issue---stance detection is the task of automatically determining from the text whether the author of the text is in favor of , against , or neutral towards a proposition or target
1
in an experiment using spoken business listing name queries from a business directory assistance service , we achieve a 16.8 % absolute improvement in recognition accuracy and a 3-fold speedup in recognition time with geocentric language models when compared with a nationwide language model---in an experiment using lbvs queries , we achieve : a 16 . 8 % absolute improvement in recognition accuracy and a 3-fold speedup in recognition time with geo-centric language models when compared with a nationwide language model
1
text segmentation is the task of automatically segmenting texts into parts---text segmentation is the task of splitting text into segments by placing boundaries within it
1
cnns have proven useful for various nlp tasks because of their effectiveness in identifying patterns in their input---recently , cnns have been shown to be useful in many natural language processing and information retrieval tasks by effectively modeling natural language semantics
1
foster et al further raised the granularity by weighting at the level of phrase pairs---foster et al extended this by weighting phrases rather than sentence pairs
1
the gain is a significant reduction in the size number of transformational rules , as much as a factor of three for certain verb classes---we showed that romanian stress is predictable , though not deterministic , by using data-driven machine learning techniques
0
we verify the correctness of our theory on synthetic corpora and examine the gap between theory and practice on real corpora---in section 4 , we verify the correctness of our theory on synthetic corpora and examine the gap between theory and practice
1
we train a 4-gram language model on the xinhua portion of english gigaword corpus by srilm toolkit---for language model scoring , we use the srilm toolkit training a 5-gram language model for english
1
in realistic settings in which the geolinguistic dependence is obscured by noise , this can dramatically diminish the power of the test---we used the chunker yamcha , which is based on support vector machines
0
li and abe propose a tree cut model based on minimal description length principle for the induction of semantic classes---li and abe used a tree cut model over wordnet , based on the principle of minimum description length
1
coreference resolution is the process of linking together multiple expressions of a given entity---mcclosky et al used self-training for constituency parsing
0
we show that a simple method disambiguates some subject-object ambiguities in german , while making few errors---we show that our method disambiguates a significant proportion of subject-object ambiguities in german
1
in this paper , we build a system that allows information to flow in both directions---in this paper , we discuss the procedure for identifying semantic roles at parse time
1
to mitigate overfitting , we apply the dropout method to the inputs and outputs of the network---we apply dropout on the lstm layer to prevent network parameters from overfitting and control the co-adaptation of features
1
we train a trigram language model with the srilm toolkit---we apply this method to english part-of-speech tagging and japanese morphological analysis
0
this paper presents a model that extends semantic role labeling---this paper presents a strategy for extending semantic role labeling
1
semantic role labeling ( srl ) is a kind of shallow sentence-level semantic analysis and is becoming a hot task in natural language processing---semantic role labeling ( srl ) is defined as the task to recognize arguments for a given predicate and assign semantic role labels to them
1
for the semantic language model , we used the srilm package and trained a tri-gram language model with the default goodturing smoothing---between events , this paper focuses on the joint extraction of temporal and causal relations
0
we used a 5-gram language model with modified kneser-ney smoothing implemented using the srilm toolkit---the tsvm , a representative of transductive inference method , was introduced by joachims
0
we evaluated the translation quality using the bleu-4 metric---we measure translation quality via the bleu score
1
the co-training algorithm is a specific semi-supervised learning approach which starts with a set of labeled data and increases the amount of labeled data using the unlabeled data by bootstrapping---co-training is a representive bootstrapping method , which starts with a set of labeled data , and increase the amount of annotated data using some amounts of unlabeled data in an incremental way
1
in section 2 , we discuss related work in building endto-end task-oriented dialogue systems---in this work , we present a hybrid learning method for training task-oriented dialogue systems
1
dye et al introduce a system that utilizes scripts for specific situations---we obtained word embeddings for our experiments by using the open source google word2vec 1
0
we used nwjc2vec 10 , which is a 200 dimensional word2vec model---we obtained distributed word representations using word2vec 4 with skip-gram
1
the semantic orientation of a phrase is not a mere sum of its component words---semantic orientation of the phrase is not a mere sum of the orientations of the component words
1
srilm toolkit was used to create up to 5-gram language models using the mentioned resources---we use srilm toolkit to build a 5-gram language model with modified kneser-ney smoothing
1
the embedded word vectors are trained over large collections of text using variants of neural networks---the skip-gram model adopts a neural network structure to derive the distributed representation of words from textual corpus
1
language models were estimated using the sri language modeling toolkit with modified kneser-ney smoothing---the language models were interpolated kneser-ney discounted trigram models , all constructed using the srilm toolkit
1
automatic essay scoring ( aes ) is the task of assigning grades to essays written in an educational setting , using a computer-based system with natural language processing capabilities---we used a caseless parsing model of the stanford parser for a dependency representation of the messages
0
in this paper , we propose a hierarchical neural network which incorporates user and product information via word and sentence level attentions---with the consideration of user and product information , our model can significantly improve the performance of sentiment classification
1
language is a weaker source of supervision for colorization than user clicks---language is the primary tool that people use for establishing , maintaining and expressing social relations
1
in particular , we perform experiments with dependency-based contexts , and show that they produce markedly different embeddings---contexts are replaced with arbitrary ones , and experimented with dependency-based contexts , showing that they produce markedly different kinds of similarities
1
word sense disambiguation ( wsd ) is the task of determining the correct meaning or sense of a word in context---word sense disambiguation ( wsd ) is the task of determining the meaning of an ambiguous word in its context
1
we explore deception detection in interview dialogues---we presented a study of deceptive language in interview dialogues
1
in this paper we present a framework that given a few seed locations as a specification of a region , discovers additional locations ( including alternate location names ) and map-like travel paths through this region labeled by transport type labels---in this paper we presented a framework which , given a small set of seed terms describing a geographical region , discovers an underlying connectivity and transport graph
1
dependency parsing is a way of structurally analyzing a sentence from the viewpoint of modification---dependency parsing is a core task in nlp , and it is widely used by many applications such as information extraction , question answering , and machine translation
1
our word embeddings is initialized with 100-dimensional glove word embeddings---we use the glove algorithm to obtain 300-dimensional word embeddings from a union of these corpora
1
the 50-dimensional pre-trained word embeddings are provided by glove , which are fixed during our model training---we use glove vectors with 200 dimensions as pre-trained word embeddings , which are tuned during training
1
we use binary crossentropy loss and the adam optimizer for training the nil-detection models---for the loss function , we used the mean square error and adam optimizer
1