Columns:
text: string (lengths 82 to 736), two citation sentences joined by "---"
label: int64 (0 or 1)
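A minimal parsing sketch for the rows below, assuming the flattened layout shown (one `sentence_a---sentence_b` text line followed by its 0/1 label line, with the schema header above already stripped). The filename `pairs.txt` is hypothetical, and reading label 1 as "the two sentences describe the same work or claim" and 0 as "unrelated" is an inference from the examples, not documented:

```python
from dataclasses import dataclass

@dataclass
class Pair:
    sentence_a: str
    sentence_b: str
    label: int  # assumption: 1 = same claim/work, 0 = unrelated

def load_pairs(path: str) -> list[Pair]:
    """Parse the flattened dump: a text line, then its integer label line."""
    with open(path, encoding="utf-8") as f:
        lines = [ln.strip() for ln in f if ln.strip()]
    pairs = []
    # Walk the lines two at a time: even indices are text rows, odd are labels.
    for text, label in zip(lines[0::2], lines[1::2]):
        a, b = text.split("---", 1)  # the two citation sentences share one line
        pairs.append(Pair(a.strip(), b.strip(), int(label)))
    return pairs

if __name__ == "__main__":
    data = load_pairs("pairs.txt")  # hypothetical local copy of the rows below
    print(f"{len(data)} pairs; first label = {data[0].label}")
```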
word sense disambiguation ( wsd ) is the nlp task that consists in selecting the correct sense of a polysemous word in a given context---word sense disambiguation ( wsd ) is a task to identify the intended sense of a word based on its context
1
however , a neural network architecture can be hard to train---while a neural network language model can be painful and long to train
1
since coreference resolution is a pervasive discourse phenomenon causing performance impediments in current ie systems , we considered a corpus of aligned english and romanian texts to identify coreferring expressions---coreference resolution is the process of linking together multiple referring expressions of a given entity in the world
1
the 5-gram kneser-ney smoothed language models were trained by srilm , with kenlm used at runtime---a 5-gram language model was built using srilm on the target side of the corresponding training corpus
1
for example , bengio et al introduced a model that learns word vector representations as part of a simple neural network architecture for language modeling---for building our ap e b2 system , we set a maximum phrase length of 7 for the translation model , and a 5-gram language model was trained using kenlm
0
for disambiguation and clustering we build upon our previous work---we present a new supervised framework that learns to estimate automatic pyramid scores and uses them for optimization-based extractive
0
fine-grained sentiment analysis methods have been developed by hatzivassiloglou and mckeown , hu and liu and popescu and etzioni , among others---methods for fine-grained sentiment analysis are developed by hu and liu , ding et al and popescu and etzioni
1
in addition , a 5-gram lm with kneser-ney smoothing and interpolation was built using the srilm toolkit---the lm is implemented as a five-gram model using the srilm-toolkit , with add-1 smoothing for unigrams and kneser-ney smoothing for higher n-grams
1
hassan et al proposed a method for identifying the polarity of nonenglish words using multilingual semantic graphs---hassan et al , 2011 , present a method to identify the sentiment polarity of foreign words by using wordnet in the target foreign language
1
in this paper we consider several estimation methods for probabilistic context-free grammars , and we show that the resulting grammars have the consistency property---we have investigated a number of methods for the empirical estimation of probabilistic context-free grammars , and have shown that the resulting grammars have the so-called consistency property
1
in our implementation , we train a tri-gram language model on each phone set using the srilm toolkit---we trained a 5-gram sri language model using the corpus supplied for this purpose by the shared task organizers
1
lda is a generative model that learns a set of latent topics for a document collection---lda is a widely used topic model , which views the underlying document distribution as having a dirichlet prior
1
waseem et al proposed a typology for various sub-types of abusive language---waseem et al , 2017 ) proposed a typology of abusive language sub-tasks
1
on the same dataset , it improves our part-of-speech tagger from 74 % to 80 % accuracy on rare words---on the same dataset , it improves our part-of-speech tagger from 74 % to 80 % accuracy
1
labeling of sentence boundaries is a necessary prerequisite for many natural language processing tasks , including part-of-speech tagging and sentence alignment---we have proposed a model for video description which uses neural networks for the entire pipeline from pixels to sentences
0
we use the stanford pos tagger to obtain the lemmatized corpora for the sre task---li et al use sense paraphrases to estimate probabilities of senses and carry out wsd
0
unsupervised parsing attracts researchers for many years ,---unsupervised parsing has been explored for several decades for a recent review )
1
we used a 5-gram language model with modified kneser-ney smoothing implemented using the srilm toolkit---we created 5-gram language models for every domain using srilm with improved kneserney smoothing on the target side of the training parallel corpora
1
we perform pre-training using the skip-gram nn architecture available in the word2vec 13 tool---we perform pre-training using the skipgram nn architecture available in the word2vec tool
1
schwarm and ostendorf and feng suggested that mixed models combining words and parts of speech are more effective for readability assessment than simple word based models---schwarm and ostendorf suggested that syntactic complexity of a sentence can be used as a feature for reading level assessment
1
as for je translation , we use a popular japanese dependency parser to obtain japanese abstraction trees---we used the statistical japanese dependency parser cabocha for parsing
1
to our knowledge , this paper is the first to show experimentally that reinforcement learning can reduce error propagation in an nlp task---that applied reinforcement learning to nlp has , to our knowledge , not shown that it improved results by reducing error propagation
1
twitter is a social platform which contains rich textual content---opinion mining can be regarded as either a token-level sequence labeling problem or as a semantic compositional task
0
wiebe focused on the problem of identifying subjective adjectives with the help of the corpus---as for subjective information , wiebe proposed a method to identify strong clues of subjectivity on adjectives
1
recently , riezler et al and zhou et al proposed a phrase-based translation model for question and answer retrieval---guo and agichtein investigated the hierarchical structure of a search task with a series of search actions based on search sessions
0
we present a multi-task learning approach that jointly trains three word alignment models over disjoint bitexts of three languages : source , target and pivot---in a low-resource setting , we design a multitask learning approach that utilizes parallel data of a third language , called the pivot language
1
meanwhile , confusion sets of chinese words play an important role in chinese spelling correction---some language-specific properties in chinese have impact on errors
1
the target language model is trained by the sri language modeling toolkit on the news monolingual corpus---the language model is trained on the target side of the parallel training corpus using srilm
1
kalchbrenner et al proposed to extend cnns max-over-time pooling to k-max pooling for sentence modeling---to capture the relation between words , kalchbrenner et al propose a novel cnn model with a dynamic k-max pooling
1
kupiec proposed to extract bilingual noun phrases using statistical analysis of co-occurrence of phrases---kupiec proposes a method for extracting translation patterns of noun phrases from english-french parallel corpora
1
the weights associated to feature functions are optimally combined using the minimum error rate training---experiments show that our system was able to outperform other logic-based systems
0
experiments show that our model achieves significant improvements---experimental results show that our model leads to significant improvements
1
the weights of the different feature functions were optimised by means of minimum error rate training---the model weights were trained using the minimum error rate training algorithm
1
luong et al address the oov problem by looking up unknown words in an automatically generated dictionary , and use an external word aligner to map words in the target sequence to ones in the source sequence---luong et al preprocess the data and replace each unknown word in the target sentence by a placeholder token also containing a positional pointer to the corresponding word in the source sentence
1
continuous representations have been shown to be helpful in a wide range of tasks in natural language processing---distributed word representations induced through deep neural networks have been shown to be useful in several natural language processing applications
1
coreference resolution is the problem of identifying which noun phrases ( nps , or mentions ) refer to the same real-world entity in a text or dialogue---to maximize the objective in , we employ a stochastic gradient descent algorithm
0
our experiments show that ltag-based features can improve srl accuracy significantly---we show that ltag-based features improve on the best known set of features used in current srl
1
our method learns vector space representations for multi-word phrases---while our method can also learn multi-word phrases
1
conditional random fields are undirected graphical models of a conditional distribution---conditional random fields are undirected graphical models used for labeling sequential data
1
for the out-of-domain testsets , we obtained statistically significant overall improvements , but we were hampered by the small sizes of the testsets in evaluating unseen/wh words---for the out-of-domain testsets , we obtained statistically significant overall improvements , but we were hampered by the small sizes of the testsets
1
abstract meaning representation is a semantic formalism in which the meaning of a sentence is encoded as a rooted , directed , acyclic graph---abstract meaning representation is a semantic formalism that expresses the logical meanings of english sentences in the form of a directed , acyclic graph
1
we use the standard seq2seq with content-based attention model and we describe our hyperparmeters in appendix b---specifically , we employ the seq2seq model with attention implemented in opennmt
1
biadsy et al present a system that identifies dialectal words in speech and their dialect of origin through the acoustic signals---we test this hypothesis with an approximate randomization approach
0
the language model was a kneser-ney interpolated trigram model generated using the srilm toolkit---a trigram language model with modified kneser-ney discounting and interpolation was used as produced by the srilm toolkit
1
coreference resolution is the problem of identifying which noun phrases ( nps , or mentions ) refer to the same real-world entity in a text or dialogue---coreference resolution is the task of grouping all the mentions of entities 1 in a document into equivalence classes so that all the mentions in a given class refer to the same discourse entity
1
the second step aims at selecting and extracting the feature set---second step aims at selecting and extracting the feature set
1
furthermore , we train a 5-gram language model using the sri language toolkit---we used svm classifier that implements linearsvc from the scikit-learn library
0
ravi and knight , 2011b ) and have shown that-even for larger vocabulary sizes-it is possible to learn a full translation model from non-parallel data---ravi and knight , 2011b ) have shown that one can use decipherment to learn a full translation model from non-parallel data
1
we conduct large-scale translation quality experiments on arabic-english and chinese-english---we used the moses decoder for word segmentation of the english corpus and kytea for the japanese corpus
0
transformer-based neural machine translation has recently outperformed recurrent neural network -based models in many tasks---neural machine translation has demonstrated impressive performance when trained on large-scale corpora
1
we use bleu scores to measure translation accuracy---we substitute our language model and use mert to optimize the bleu score
1
for this task , we used the svm implementation provided with the python scikit-learn module---automatic metrics , such as bleu , are widely used in machine translation as a substitute for human evaluation
0
multiword expressions are defined as idiosyncratic interpretations that cross word boundaries or spaces---we applied our approaches to parsing errors given by the hpsg parser enju , which was trained on the penn treebank section 2-21
0
results in terms of average rewards and a human rating study show that our learnt strategy outperforms several baselines that are not sensitive to id by more than 23 %---our results in terms of average rewards and a human rating study show that a learning agent that is sensitive to id can learn when it is most beneficial
1
coreference resolution is the task of identifying all mentions which refer to the same entity in a document---the 5-gram target language model was trained using kenlm
0
cite-p-7-1-7 obtain 86.0 % word-based accuracy using maximum entropy models from acoustic and syntactic information on the burnc---we create mwes with word2vec skipgram 1 and estimate w with scikit-learn
0
in this paper , we examine the problem of latent attribute inference outside the english-language context---the most common word embeddings used in deep learning are word2vec , glove , and fasttext
0
a 4-gram language model was trained on the monolingual data by the srilm toolkit---a 4-gram language model is trained on the monolingual data by srilm toolkit
1
pronunciation dictionaries provide natural parallel corpora , with strings of characters paired to strings of phones---pronunciation dictionaries provide a readily available parallel corpus for learning to transduce between character strings and phoneme strings
1
the smt system deployed in our approach is an implementation of the alignment template approach of och and ney---our phrase-based smt system is similar to the alignment template system described in och and ney
1
we extract named entities using a python wrapper for the stanford ner tool---we use the stanford named entity recognizer to identify named entities in s and t
1
the idea of distant supervision has widely used in the task of relation extraction---distant supervision has proved to be a popular approach to relation extraction
1
this paper describes our participation in the semeval-2014 tasks 1 , 3 and 10---in this paper we describe our participating systems in the semeval-2014 tasks 1 , 3 , and 10
1
on real-world tasks , our method achieves 7 times speedup on citation matching , and 13 times speedup on large-scale author disambiguation---on real-world tasks , our method achieves 7 times speedup on citation matching , and 13 times speedup on large-scale author
1
we used a 5-gram language model with modified kneser-ney smoothing implemented using the srilm toolkit---since code mixing speech data is scarce , we propose to instead learn the code mixing language model from bilingual data
0
sentiment analysis ( sa ) is a field of knowledge which deals with the analysis of people ’ s opinions , sentiments , evaluations , appraisals , attitudes and emotions towards particular entities ( cite-p-17-1-0 )---suggested upper merged ontology is the largest freely available ontology which is linked to the entire english wordnet
0
the translation quality in our experiments is evaluated using bleu , as well as using human assessment---our mt system was evaluated using the n-gram based bleu and nist machine translation evaluation software
1
izumi et al proposed a maximum entropy model , using lexical and pos features , to recognize a variety of errors , including verb form errors---dredze et al showed the possibility that many parsing errors in the domain adaptation tasks came from inconsistencies between annotation manners of training resources
0
we use the scikit-learn machine learning library to implement the entire pipeline---the classifier we use in this paper is support vector machines in the implementation of svm light
0
for all experiments , we used a 4-gram language model with modified kneser-ney smoothing which was trained with the srilm toolkit---we used a 5-gram language model trained on 126 million words of the xinhua section of the english gigaword corpus , estimated with srilm
1
we use the lesk algorithm , provided through the nltk package , to perform wordsense disambiguation---we use the group average agglomerative clustering package within nltk
1
we present a study of the relationship between quality of writing and word association profiles---work , we intend to investigate in more detail the contribution of various kinds of words to word association profiles
1
we demonstrate empirically that there can be large discrepancies between topic coherence and document-topic associations---we empirically demonstrate that there can be large discrepancies between topic-and document-level topic model
1
haagsma and bjerva use violations of selectional preferences to find novel metaphors---next we group these words using wordnet to obtain more general concepts
0
we used srilm -sri language modeling toolkit to train several character models---we used the srilm toolkit to simulate the behavior of flexgram models by using count files as input
1
the log-linear feature weights are tuned with minimum error rate training on bleu---the log-lineal combination weights were optimized using mert
1
this naturally calls for a measure of distribution closeness , for which we introduce the earth mover's distance---as distributions , we propose to minimize their earth mover's distance , a measure of divergence between distributions
1
as an example for these hierarchical relationships , figure 1 shows a german noun phrase taken from the german tiger corpus---for example , figure 1 shows a part of the variation n-grams found in the german tiger corpus
1
we would like to have a classification approach that enjoys the representational power of a syntactic method and efficiency of statistical classification---we would like a classification approach to enjoy the representational power of a syntactic method and the efficiency of statistical classification
1
we used srilm to build a 4-gram language model with interpolated kneser-ney discounting---for the n-gram lm , we use srilm toolkits to train a 4-gram lm on the xinhua portion of the gigaword corpus
1
to the best of our knowledge , our model is the first attention model that can produce explainable results in the sarcasm detection task---sarcasm is a sophisticated speech act which commonly manifests on social communities such as twitter and reddit
0
coreference resolution is the task of determining when two textual mentions name the same individual---we applied liblinear via its scikitlearn python interface to train the logistic regression model with l2 regularization
0
the data in this corpus was annotated by a total of five annotators using the brat annotation framework---annotation was conducted on a modified version of the brat web-based annotation tool
1
we implement an in-domain language model using the sri language modeling toolkit---we used the sri language modeling toolkit to train a fivegram model with modified kneser-ney smoothing
1
we used the srilm toolkit to generate the scores with no smoothing---we trained a tri-gram hindi word language model with the srilm tool
1
dinu and lapata introduced a probabilistic model for computing word representations in context---dinu and lapata and séaghdha and korhonen introduced a probabilistic model to represent word meanings by a latent variable model
1
language identification is the task of identifying the language a given document is written in---language identification is the task of automatically detecting the language ( s ) present in a document based on the content of the document
1
mihalcea et al use both corpusbased and knowledge-based measures of the semantic similarity between words---in this work , we propose a coverage mechanism to nmt ( nmt-coverage )
0
in this paper , we propose the features of trustiness , and synonym and contrastive collective evidence for the task of taxonomy construction , and show that these features help the system improve the performance significantly---in this paper , we present a method of taxonomic relation identification that incorporates the trustiness of source texts measured with such techniques as pagerank and knowledge-based trust , and the collective evidence of synonyms and contrastive terms
1
morfessor is a family of probabilistic machine learning methods for finding the morphological segmentation from raw text data---morfessor is a family of methods for unsupervised morphological segmentation
1
for each production , an svm classifier is trained using a string subsequence kernel---system tuning was carried out using both k-best mira and minimum error rate training on the held-out development set
0
the neural embeddings were created using the word2vec software 3 accompanying---the word embeddings for all the models were initialized with the word2vec tool on 30 million tweets
1
we use the mallet implementation of conditional random fields---the second decoding method is to use conditional random field
1
because c lean l ists is able to use typed lists , it can successfully identify typed functionality---in fact , in , it has been claimed that knowing the domain of the text in which the word is located is a crucial information for wsd
0
conditional random fields are a convenient formalism for sequence labeling tasks common in nlp---words in a word embedding model pose a serious challenge to the underlying learning algorithm
0
all four algorithms were compared on two domains taken from the penn treebank annotated corpus---results were obtained by training and evaluating each system on the full wsj portion of the penn treebank corpus
1
although , a number of segementators are able to yield very promising results , certain of them might be unsuitable for smt task due to the influence of segmentation scheme---language models were built with srilm , modified kneser-ney smoothing , default pruning , and order 5
0
we report a 0.9 point improvement in terms of bleu score on english-chinese technical documents---we report a statistically significant 0 . 9 absolute improvement in bleu score
1
a lot of work has gone into developing powerful optimization methods for solving these combinatorial problems---we adopt pretrained embeddings for word forms with the provided training data by word2vec
0
as expected , the glass-box features help to reduce mae and rmse for both err and n ?---zhu et al propose a syntax-based translation model for ts that learns operations over the parse trees of the complex sentences
0