text: string (lengths 82 to 736)
label: int64 (0 or 1)
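Each record below pairs a text field with an integer label; the text field joins two citation sentences with the literal separator "---". Below is a minimal parsing sketch in Python, assuming the two schema lines above have been stripped and the strict text/label alternation holds; parse_rows and the data.txt filename are illustrative, not part of any released loader.

def parse_rows(lines):
    """Yield (sentence_a, sentence_b, label) triples from the raw dump."""
    rows = [ln.strip() for ln in lines if ln.strip()]
    # even rows are texts, odd rows are their integer labels
    for text, label in zip(rows[0::2], rows[1::2]):
        # each text holds two sentences joined by the literal "---"
        a, b = text.split("---", 1)
        yield a.strip(), b.strip(), int(label)

# usage (hypothetical filename):
# pairs = list(parse_rows(open("data.txt", encoding="utf-8")))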
aman and szpakowicz classify emotional and non-emotional sentences based on a knowledge-based approach---for example , aman and szpakowicz classified emotional and non-emotional sentences with a predefined emotion lexicon
1
coreference resolution is the task of clustering referring expressions in a text so that each resulting cluster represents an entity---coreference resolution is the problem of identifying which noun phrases ( nps , or mentions ) refer to the same real-world entity in a text or dialogue
1
coreference resolution is the task of determining when two textual mentions name the same individual---we perform pre-training using the skipgram nn architecture available in the word2vec tool
0
for the feature-based system we used logistic regression classifier from the scikit-learn library---we used svm classifier that implements linearsvc from the scikit-learn library
1
haagsma and bjerva use violations of selectional preferences to find novel metaphors---haagsma and bjerva employed clustering and neural network approaches using selectional preferences to detect novel metaphors
1
recently , large corpora have been manually annotated with semantic roles in framenet and propbank---in this line , several domain-independent semantic representations have been developed , propbank , framenet ,
1
this loss function allows us to integrate syntactic structure into the statistical mt framework without building detailed models of syntactic features and retraining models from scratch---an anaphoric zero pronoun ( azp ) is a zp that corefers with one or more preceding noun phrases ( nps ) in the associated text
0
we evaluated our mt output using the surface based evaluation metrics bleu , meteor , cder , wer , and ter---we evaluated our mt output using the surface based evaluation metric bleu and the edit distance evaluation metric ter
1
we use theano and pretrained glove word embeddings---we used 300-dimensional pre-trained glove word embeddings
1
the standard approach to word alignment from sentence-aligned bitexts has been to construct models which generate sentences of one language from the other , then fitting those generative models with em---the evaluation metric is case-sensitive bleu-4
0
for the word-embedding based classifier , we use the glove pre-trained word embeddings---the language model was trained using kenlm toolkit with modified kneser-ney smoothing
0
msa is the language used in education , scripted speech and official settings while da is the native tongue of arabic speakers---msa is the formal arabic that is mostly used in news broadcasting channels and magazines to address the entire arab region
1
in an evaluation on 826 argumentative essays , our learning-based approach , which combines our novel features with n-gram features and faulkner ’ s features , significantly outperformed four baselines , including our reimplementation of faulkner ’ s system---in an evaluation on 826 essays , our approach significantly outperforms four baselines , one of which relies on features previously developed specifically for stance classification
1
we showed experimentally that we can reduce running time by an order of magnitude , while at the same time improving mean average precision from .432 to .528 and mean reciprocal rank from .850 to .933---we used minimum error rate training to optimize the feature weights
0
while there is a large body of work on bilingual comparable corpora , most of it is focused on learning word translations---much of the work involving comparable corpora has focused on extracting word translations
1
her contributions were made during an internship at ibm research---her contributions were made during an internship
1
relation extraction is the problem of populating a target relation ( representing an entity-level relationship or attribute ) with facts extracted from natural-language text---relation extraction ( re ) is the task of identifying instances of relations , such as nationality ( person , country ) or place of birth ( person , location ) , in passages of natural text
1
the srilm toolkit was used for training the language models using kneser-ney smoothing---the language model is a large interpolated 5-gram lm with modified kneser-ney smoothing
1
for language model scoring , we use the srilm toolkit training a 5-gram language model for english---the results evaluated by bleu score is shown in table 2
0
with this in mind , we have set out to build an interface system that could operate a television via spoken dialogue in place of manual operations---with ease , we built a prototype interface system that operates a television through voice interactions
1
in general , the crf model parameters w are estimated using a training set of annotated text using , for example , the maximum likelihood criterion as in---the model parameters w are estimated discriminatively from the annotated data set d using iterative learning algorithms
1
a general , configurable platform was designed for our model---we train a 4-gram language model on the xinhua portion of the gigaword corpus using the sri language toolkit with modified kneser-ney smoothing
0
our evaluation shows significant performance gains over a state-of-the-art monolingual baseline---our results show consistent improvement over a monolingual baseline
1
coreference resolution is a fundamental component of natural language processing ( nlp ) and has been widely applied in other nlp tasks ( cite-p-15-3-9 )---coreference resolution is the task of determining which mentions in a text refer to the same entity
1
these instances were then converted to semantic sequential representations ( ssrs )---which were later converted to semantic sequential representations ( ssrs )
1
we also showed that our monolingual features add 1.5 bleu points when combined with standard bilingually estimated features---we further show that our monolingual features add 1.5 bleu points when combined with standard bilingually estimated
1
in this paper , we investigate the use of selectional preferences to detect compositionality---as evaluation metrics , we use mean average precision and mean reciprocal rank , following recent work evaluating relation extraction performance
0
the penn discourse treebank includes annotations of 18,459 explicit and 16,053 implicit discourse relations in texts from the wall street journal---we use the skipgram model to learn word embeddings
0
therefore , dependency parsing is a potential “ sweet spot ” that deserves investigation---dependency parsing is a central nlp task
1
in this paper , we introduce allvec , an exact and efficient word embedding method based on full batch learning---in this paper , we presented allvec , an efficient batch learning based word embedding model that is capable to leverage all positive and negative training examples
1
transliteration is the conversion of a text from one script to another---transliteration is the task of converting a word from one alphabetic script to another
1
word sense disambiguation ( wsd ) is a key enabling technology that automatically chooses the intended sense of a word in context---word sense disambiguation ( wsd ) is a natural language processing ( nlp ) task in which the correct meaning ( sense ) of a word in a given context is to be determined
1
we use glove vectors with 100 dimensions trained on wikipedia and gigaword as word embeddings---we represent input words using pre-trained glove wikipedia 6b word embeddings
1
one of the most effective feature combinations is the word posterior probability as proposed by ueffing et al associated with ibm-model based features---one of the most effective feature combinations is the word posterior probability as suggested by ueffing et al associated with ibm-model based features
1
we hope that the same “cluster and label” strategy will be applicable to word sense disambiguation---in this work , we use a “ cluster and label ” strategy to generate labeled data
1
the skip-gram model has become one of the most popular manners of learning word representations in nlp---skipgrams are a relatively new approach in nlp , most notable for their effectiveness in approximating word meaning in vector space models
1
the log linear model is defined as a conditional probability distribution of a corrected word and a rule set for the correction conditioned on the misspelled word---the log linear model is defined as a conditional probability distribution of a corrected word and a rule set for the correction given the misspelled word
1
both files are concatenated and learned by word2vec---the word embeddings are pre-trained by skip-gram
1
this study has presented an hal-based cascaded model for variable-length semantic pattern induction---in this work , we present a text mining framework capable of inducing variable-length semantic patterns
1
the quality of retrieved segments was evaluated using the machine translation evaluation metric bleu---pronoun resolution is guided by extending the centering theory from the grammatical level to the semantic level
0
furthermore , the bag-of-words methods we test are equivalent in retrieval accuracy to the more expensive segment order-sensitive methods , but superior in retrieval speed---in order to model topics of news article bodies , we apply standard latent dirichlet allocation
0
for this model , we use a binary logistic regression classifier implemented in the lib-linear package , coupled with the ovo scheme---we use logistic regression as the per-class binary classifier , implemented using liblinear
1
the word embeddings were built from 200 million tweets using the word2vec model---domain adaptation is a challenge for ner and other nlp applications
0
cucchiarini et al describe a system for dutch pronunciation scoring along similar lines---cucchiarini et al designed a system for scoring dutch pronunciation along a similar line
1
an example of such a query is : ” asus laptop + opinions ” , another , more detailed query , might be ” asus laptop + positive opinions ”---we used svm classifier that implements linearsvc from the scikit-learn library
0
the original princeton wordnet for english defines a set of word senses , which many other wordnets map to other languages---for this reason , we used glove vectors to extract the vector representation of words
0
relation extraction is a core task in information extraction and natural language understanding---topic models such as latent dirichlet allocation have emerged as a powerful tool to analyze document collections in an unsupervised fashion
0
sarcasm is a sophisticated speech act which commonly manifests on social communities such as twitter and reddit---sarcasm , commonly defined as ‘ an ironical taunt used to express contempt ’ , is a challenging nlp problem due to its highly figurative nature
1
a dialogue strategy is the procedure by which a system chooses its next action given the current state of the dialogue---most dialogue system have a characteristic behaviour with respect to dialogue management , which is known as dialogue strategy
1
we use 5-grams for all language models implemented using the srilm toolkit---srilm toolkit is used to build these language models
1
in this paper , we present a test collection for mathematical information retrieval composed of real-life , research-level mathematical information needs---in this paper we present a test collection composed of real-life , research-level mathematical topics and associated relevance judgements procured from the online collaboration website mathoverflow
1
we used 200 dimensional glove word representations , which were pre-trained on 6 billion tweets---we used glove vectors trained on common crawl 840b with 300 dimensions as fixed word embeddings
1
in this paper , our coreference resolution system for conll-2012 shared task is summarized---english-to-japanese dataset demonstrate that our proposed model considerably outperforms sequence-to-sequence attentional nmt models
0
the data was processed using the standard moses pipeline , specifically , punctuation normalization , tokenization and truecasing---the pipeline consisted in normalizing punctuation , tokenization and truecasing using the standard moses scripts
1
we model meaning as an ontologically richly sorted , relational structure , using a description logic-like framework---these meaning representations are ontologically richly sorted , relational structures , formulated in a description logic , more precisely in the hlds formalism
1
however , consider the interactive information-access application described above---in this task , we use the 300-dimensional 840b glove word embeddings
0
the process of identifying the correct meaning , or sense of a word in context , is known as word sense disambiguation ( wsd )---word sense disambiguation ( wsd ) is the problem of assigning a sense to an ambiguous word , using its context
1
for this , we utilize the publicly available glove word embeddings , specifically ones trained on the common crawl dataset---as a baseline , we employ a publicly available set of 300-dimensional word embeddings trained with glove on the common crawl data
1
we use moses , a statistical machine translation system that allows training of translation models---we use the moses statistical mt toolkit to perform the translation
1
in this paper , we introduce a low-rank multimodal fusion method that performs multimodal fusion with modality-specific low-rank factors---in this paper , we propose the low-rank multimodal fusion , a method leveraging low-rank weight tensors to make multimodal fusion efficient
1
for our experiments , we create a manually labeled dataset of dialogues from tv series ‘friends’---we create a manually-labeled dataset of dialogue from tv series ‘ friends ’
1
relation extraction is a subtask of information extraction that finds various predefined semantic relations , such as location , affiliation , rival , etc. , between pairs of entities in text---relation extraction is the task of predicting semantic relations over entities expressed in structured or semi-structured text
1
the restaurants dataset contains 3,710 english sentences from the restaurant reviews of ganu et al---the restaurants dataset contains 3,710 english sentences from the reviews of ganu et al
1
the need of annotations results in extremely high cost and poor scalability in system development---and the annotation process results in extremely high cost and poor scalability in system development
1
moreover , we augment our model with the attention mechanism to push the model to distill the relevant information from context---furthermore , we introduce the attention mechanism that encourages the model to focus on the important information
1
sentiment analysis is a research area where does a computational analysis of people ’ s feelings or beliefs expressed in texts such as emotions , opinions , attitudes , appraisals , etc . ( cite-p-12-1-3 )---sentiment analysis is a nlp task that deals with extraction of opinion from a piece of text on a topic
1
for language modeling , we used the trigram model of stolcke---for the language model , we used srilm with modified kneser-ney smoothing
1
additionally , by comparing vot values for stops produced by native and non-native speakers for specific languages , researchers have provided some suggestions for language learning and teaching---we use glove vectors with 200 dimensions as pre-trained word embeddings , which are tuned during training
0
however , this finding has been refuted to a certain extent by levy et al , stating that much of the perceived superiority of word embeddings is due to hyperparameter optimizations rather than principled advantages---however , their claim was challenged by levy et al , who showed that superiority of neural word embeddings is not due to the embedding algorithm , but due to certain design choices and hyperparameters optimizations
1
we use a sequential lstm to encode this description---in our case , the encoder is a two layer bidirectional lstm network
1
capturing these changes is problematic for current language technologies , which are typically developed for speakers of the standard dialect only---in natural language , a word often assumes different meanings , and the task of determining the correct meaning , or sense , of a word in different contexts is known as word sense disambiguation ( wsd )
0
in order to deal with ambiguity , morpa has been provided with a probabilistic context-free grammar ( pcfg ) , i.e . it combines a conventional context-free morphological grammar to filter out ungrammatical segmentations with a probability-based scoring function which determines the likelihood of each successful parse---morpa is provided with a probabilistic context-free grammar ( pcfg ) , i . e . it combines a conventional context-free morphological grammar to filter out ungrammatical segmentations with a probability-based scoring function which determines the likelihood of each successful parse
1
we use bigram and biterm language models to capture the term dependence---in our approach , we propose bigram and biterm models to capture the term dependence
1
in addition , we add an attention mechanism to make the seq2seq baseline stronger---furthermore , we introduce the attention mechanism that encourages the model to focus on the important information
1
in processing the holj documents we have built a pipeline using as key components the programs distributed with the lt ttt and lt xml toolsets , and the xmlperl program---in processing medline abstracts we have built a number of such pipelines using as key components the programs distributed with the lt ttt and lt xml toolsets
1
zhang et al impose a sparsity prior over the rule probabilities to prevent the search from having to consider all the rules found in the viterbi biparses---like wikipedia and wiktionary , which have been applied in computational methods only recently , offer new possibilities to enhance information retrieval
0
we introduced a reinforcement learning framework for task-oriented automatic query reformulation---in this work , we introduce a query reformulation system based on a neural network that rewrites a query
1
one such approach , reported in is based on the class based n-gram models---to avoid this problem we use the concept of class proposed for a word n-gram model
1
accordingly , we have used the rmsprop optimization algorithm to minimize the mean squared error loss function over the training data---in this paper , we focus on the inference rules contained in the dirt resource
0
a trigram model was built on 20 million words of general newswire text , using the srilm toolkit---these language models were built up to an order of 5 with kneser-ney smoothing using the srilm toolkit
1
we evaluate the translation quality using the case-insensitive bleu-4 metric---we report the mt performance using the original bleu metric
1
we begin with a brief overview of the standard phrase-based statistical machine translation model---and we report the performance of a phrase-based statistical model
1
munteanu and marcu use a bilingual lexicon to translate some of the words of the source sentence---munteanu and marcu , 2005 , use a bilingual lexicon to translate some of the words of the source sentence
1
we trained our default model using the widely used tool word2vec with the default parameters values on the bnc corpus---we obtained word embeddings for our experiments by using the open source google word2vec
1
unlike other translation models , it can automatically produce dictionary-sized translation lexicons , and it can do so with over 99 % accuracy---model can automatically produce dictionary-sized translation lexicons , and it can do so with over 99 % accuracy
1
coreference resolution is the task of clustering referring expressions in a text so that each resulting cluster represents an entity---relation extraction is the task of automatically detecting occurrences of expressed relations between entities in a text and structuring the detected information in a tabularized form
0
we adapted the moses phrase-based decoder to translate word lattices---in the translation tasks , we used the moses phrase-based smt systems
1
the data consist of sections of the wall street journal part of the penn treebank , with information on predicate-argument structures extracted from the propbank corpus---the data consists of sections of the wall street journal part of the penn treebank , with information on predicate-argument structures extracted from the propbank corpus
1
bannard and callison-burch introduced the pivot approach to extracting paraphrase phrases from bilingual parallel corpora---bannard and callison-burch introduced the pivoting approach , which relies on a 2-step transition from a phrase , via its translations , to a paraphrase candidate
1
the most influential generative word alignment models are the ibm models 1-5 and the hmm model---the classic generative model approach to word alignment is based on ibm models 1-5 and the hmm model
1
persing and ng introduced an approach for recognizing the argumentation strength of an essay---persing and ng annotated the argumentative strength of essays composing multiple arguments with notable agreement
1
we propose a neural architecture which learns a distributional semantic representation that leverages a greater amount of semantic context – both document and sentence level information – than prior work---we propose a neural architecture which learns a distributional semantic representation that leverage both document and sentence level information
1
coreference resolution is a challenging task , that involves identification and clustering of noun phrases mentions that refer to the same real-world entity---coreference resolution is a field in which major progress has been made in the last decade
1
relation extraction ( re ) is the task of extracting semantic relationships between entities in text---relation extraction is the task of finding semantic relations between entities from text
1
we use the word2vec skip-gram model to learn initial word representations on wikipedia---we used the implementation of random forest in scikit-learn as the classifier
0
we use bleu to evaluate translation quality---the bleu metric was used for translation evaluation
1
in this paper , we describe the system we submitted to the semeval-2012 lexical simplification task---in this paper we presented the mmsystem for lexical simplification we submitted to the semeval-2012 task
1
translation quality is measured by case-insensitive bleu on newstest13 using one reference translation---to train the link embeddings , we use the speedy , skip-gram neural language model of mikolov et al via their toolkit word2vec
0
the translation quality is evaluated by bleu and ribes---the translation quality is evaluated by case-insensitive bleu-4 metric
1
we used europarl and wikipedia as parallel resources and all of the finnish data available from wmt to train five-gram language models with srilm and kenlm---we built a 5-gram language model on the english side of europarl and used the kneser-ney smoothing method and srilm as the language model toolkit
1