| text (string, lengths 82–736) | label (int64: 0 or 1) |
|---|---|
word representations , especially brown clustering , have been shown to improve the performance of ner system when added as a feature---unsupervised word embeddings trained from large amounts of unlabeled data have been shown to improve many nlp tasks | 1 |
the advantages of our method are ( 1 ) distributed sense vectors are taken as the knowledge representations which are trained discriminatively , and usually have better performance than traditional count-based distributional models , and ( 2 ) a general model for the whole vocabulary is jointly trained to induce sense centroids under the multi-task learning framework---distributed sense embeddings are taken as the knowledge representations which are trained discriminatively , and usually have better performance than traditional count-based distributional models ( cite-p-12-1-0 ) , and ( 2 ) a general model for the whole vocabulary is jointly trained to induce sense centroids under the multi-task learning framework | 1 |
for the cluster- based method , we use word2vec 2 which provides the word vectors trained on the google news corpus---we use the word2vec tool to train monolingual vectors , 6 and the cca-based tool for projecting word vectors | 1 |
recognizing textual entailment between two sentences is also addressed by rocktäschel et al , using lstms and word-by-word neural attention mechanisms on the snli data set---in this paper we presented a technique for extracting order constraints among plan elements | 0 |
the evaluation metric for the overall translation quality is caseinsensitive bleu4---the translation quality is evaluated by caseinsensitive bleu-4 metric | 1 |
we suggest a compositional vector representation of parse trees that relies on a recursive combination of recurrent-neural network encoders---to translate , we firstly use a tm system to retrieve the most similar ‘ example ’ source sentences together with their translations | 0 |
we use a combination of negative sampling and hierarchical softmax via backpropagation---dropouts are applied on the outputs of bi-lstm | 0 |
word alignment is the task of identifying corresponding words in sentence pairs---word alignment is the task of identifying word correspondences between parallel sentence pairs | 1 |
we also present a novel visualisation interface for browsing collaborations---we present a novel interactive visualisation that we have developed for displaying collaborations | 1 |
in our experiments using bleu as a metric , the system achieves a relative improvement of 11.7 % over the best rbmt system that is used to produce the synthetic bilingual corpora---and the models trained on the synthetic bilingual corpora , the interpolated model achieves an absolute improvement of 0 . 0245 bleu score ( 13 . 1 % relative ) | 1 |
during training , early stopping , l2-regularization and dropout are used to prevent overfitting---to avoid overfitting during training , l2 regularization and dropout are used | 1 |
in addition , the proposed metric can be easily extended to evaluate other sequence labelling based nlp tasks---for a case study , it can be easily extended to other sequence labelling based nlp tasks | 1 |
ideally , we would like to propose a unified approach to all the four problems---second , we present a unified approach to these problems | 1 |
for part of speech tagging and dependency parsing of the text , we used the toolset from stanford corenlp---we use the sentiment pipeline of stanford corenlp to obtain this feature | 1 |
we used a phrase-based smt model as implemented in the moses toolkit---for decoding , we used moses with the default options | 1 |
many words have multiple meanings , and the process of identifying the correct meaning , or sense of a word in context , is known as word sense disambiguation ( wsd )---word sense disambiguation ( wsd ) is the problem of assigning a sense to an ambiguous word , using its context | 1 |
pang and lee propose a graph-based method which finds minimum cuts in a document graph to classify the sentences into subjective or objective---pang and lee describe a sentence subjectivity detector that is trained on sets of labelled subjective and objective sentences | 1 |
we have described an efficient and scalable shortlisting-reranking neural models for large-scale domain classification---in this paper , we propose a set of efficient and scalable neural shortlisting-reranking models for large-scale domain classification | 1 |
the language model was constructed using the srilm toolkit with interpolated kneser-ney discounting---we used the srilm toolkit to create 5-gram language models with interpolated modified kneser-ney discounting | 1 |
as science domain monolingual data for the science translation task , we used the english side of the aspec parallel corpus---in this study , we used the japanese-english portion of the asian scientific paper excerpt corpus | 1 |
ensembling multiple systems is a well known standard approach to improving accuracy in several machine learning applications---using ensembles of multiple systems is a standard approach to improving accuracy in machine learning | 1 |
the word embeddings were built from 200 million tweets using the word2vec model---the word embeddings for all the models were initialized with the word2vec tool on 30 million tweets | 1 |
we train a 4-gram language model on the xinhua portion of english gigaword corpus by srilm toolkit---for the fluency and grammaticality features , we train 4-gram lms using the development dataset with the sri toolkit | 1 |
translation with explicit ordering ) we use the latest version of meteor that find alignments between sentences based on exact , stem , synonym and paraphrase matches between words and phrases---in this paper , we propose a set of efficient and scalable neural shortlisting-reranking models for large-scale domain classification | 0 |
the translation performance was measured using the bleu and the nist mt-eval metrics , and word error rate---the system was evaluated in terms of bleu score , word error rate and sentence error rate | 1 |
we modelled each word by a set of lexical , semantic and contextual features and evaluated distinct binary classification algorithms---we modelled each word to evaluate as a numeric vector populated with a set of lexical , semantic and contextual features | 1 |
to capture the relation between words , kalchbrenner et al propose a novel cnn model with a dynamic k-max pooling---kalchbrenner et al proposed a dynamic convolution neural network with multiple layers of convolution and k-max pooling to model a sentence | 1 |
in this paper , we propose a novel instance-based evaluation framework for inference rules that takes advantage of crowdsourcing---by simplifying the previously-proposed instance-based evaluation framework we are able to take advantage of crowdsourcing services | 1 |
we use a minibatch stochastic gradient descent algorithm together with the adam optimizer---we use a minibatch stochastic gradient descent algorithm together with the adam method to train each model | 1 |
the srilm toolkit is used to train 5-gram language model---the srilm toolkit was used to build this language model | 1 |
mcdonald et al exploit both delexicalized parsing and parallel data , using an english delexicalized parser as the seed parser for the target languages , and updating it according to word alignments---choudhury et al proposed a hidden markov model based text normalization approach for sms texts and texting language | 0 |
the language model p is implemented as an n-gram model using the srilm-toolkit with kneser-ney smoothing---all language models are created with the srilm toolkit and are standard 4-gram lms with interpolated modified kneser-ney smoothing | 1 |
for part of speech tagging and dependency parsing of the text , we used the toolset from stanford corenlp---a 4-gram language model was trained on the target side of the parallel data using the srilm toolkit | 0 |
lstm units are firstly proposed by hochreiter and schmidhuber to overcome gradient vanishing problem---the lstm were introduced by hochreiter and schmidhuber and were explicitly designed to avoid the longterm dependency problem | 1 |
as a strong baseline , we trained the skip-gram model of mikolov et al using the publicly available word2vec 5 software---we use the pre-trained word2vec embeddings provided by mikolov et al as model input | 1 |
for the mix one , we also train word embeddings of dimension 50 using glove---for the neural models , we use 100-dimensional glove embeddings , pre-trained on wikipedia and gigaword | 1 |
we assume that a morphological analysis consists of three processes : tokenization , dictionary lookup , and disambiguation---we train a 5-gram language model with the xinhua portion of english gigaword corpus and the english side of the training set using the srilm toolkit | 0 |
coster and kauchak and wubben et al use a modified phrase-based model based on a machine translation framework---coster and kauchak and specia , drawing on work by caseli et al , use standard statistical machine translation machinery for text simplification | 1 |
our mt system is a phrase-based , that is developed using the moses statistical machine translation toolkit---the spelling normalisation component is a character-based statistical machine translation system implemented with the moses toolkit | 1 |
this model learns the most frequent and general dialog features present across the various domains---using diverse dialog domains allows the model to better capture general dialog dynamics applicable to different domains at once | 1 |
the penn discourse treebank , developed by prasad et al , is currently the largest discourse-annotated corpus , consisting of 2159 wall street journal articles---the penn discourse treebank is the largest available annotated corpora of discourse relations over 2,312 wall street journal articles | 1 |
multiword expressions are defined as idiosyncratic interpretations that cross word boundaries or spaces---chen et al collected gene names from various source databases and calculated intra-and inter-species ambiguities | 0 |
detecting irony in web texts is an important task to mine fine-grained sentiment information---to facilitate comparison with future work , we released the source code of our normalization system | 0 |
644 examples identified in the parsed penn treebank---in the 644 examples identified in the parsed penn treebank | 1 |
to the best of our knowledge , this work makes a first attempt at investigating the evaluation of narrative quality using automated methods---in this work , we propose a novel participant-based event summarization approach , which dynamically identifies the participants from data streams , then “ zooms-in ” the event stream to participant level , detects the important sub-events related to each participant using a novel time-content mixture model , and generates the event summary progressively | 0 |
relation extraction is the task of recognizing and extracting relations between entities or concepts in texts---relation extraction ( re ) is the task of extracting instances of semantic relations between entities in unstructured data such as natural language text | 1 |
translation quality is evaluated by case-insensitive bleu-4 metric---we implemented this model using the srilm toolkit with the modified kneser-ney discounting and interpolation options | 0 |
wei and gulla modelled the hierarchical relation between product aspects---wei and gulla modeled the hierarchical relation between product aspects | 1 |
for the newsgroups and sentiment datasets , we used stopwords from the nltk python package---we use nltk to tokenize the english corpora and the jieba python module to segment the chinese data | 1 |
we also show that mt systems based on translatedfrom-source-language lms outperform mt systems based on originals lms or lms translated from other languages---as we show in section 4 . 2 , lms based on translations from the source language outperform lms compiled from non-source translations | 1 |
over the last decade , phrase-based statistical machine translation systems have demonstrated that they can produce reasonable quality when ample training data is available , especially for language pairs with similar word order---during the last decade , statistical machine translation systems have evolved from the original word-based approach into phrase-based translation systems | 1 |
we used the srilm software 4 to build language models as well as to calculate cross-entropy based features---to train our model we use markov chain monte carlo sampling | 0 |
named entity recognition ( ner ) is the task of identifying named entities in free text—typically personal names , organizations , gene-protein entities , and so on---the translation quality is evaluated by caseinsensitive bleu-4 metric | 0 |
adapting lda for selectional preference modeling was suggested independently by ó séaghdha and ritter , mausam , and etzioni---ritter and etzioni proposed a generative approach to use extended lda to model selectional preferences | 1 |
sentiment analysis is a research area in the field of natural language processing---sentiment analysis is a collection of methods and algorithms used to infer and measure affection expressed by a writer | 1 |
moreover , we release a chinese zero anaphora corpus of 100 documents , which adds a layer of annotation to the manually-parsed sentences in the chinese treebank ( ctb ) 6.0---moreover , we release a chinese zero anaphora corpus of 100 documents , which adds a layer of annotation to the manually-parsed sentences in the chinese treebank ( ctb ) | 1 |
in this section we relate our work with the existing literature and further discuss our result---in this section we relate our work with the existing literature | 1 |
summarization is the process of condensing text to its most essential facts---summarization is the task of condensing a piece of text to a shorter version that contains the main information from the original | 1 |
semantic parsing is the task of mapping natural language sentences to a formal representation of meaning---semantic parsing is the task of converting natural language utterances into formal representations of their meaning | 1 |
we use the scikit-learn machine learning library to implement the entire pipeline---we use scikit learn python machine learning library for implementing these models | 1 |
coreference resolution is a task aimed at identifying phrases ( mentions ) referring to the same entity---coreference resolution is the task of partitioning a set of entity mentions in a text , where each partition corresponds to some entity in an underlying discourse model | 1 |
following this cache-based approach , gong et al further introduce two additional caches---gong et al extend this by further introducing two additional caches | 1 |
shallow semantic representations could prevent the sparseness of deep structural approaches and the weakness of bow models---we train a kn-smoothed 5-gram language model on the target side of the parallel training data with srilm | 0 |
we employ the polylingual topic model , which is originally used to model corresponding documents in different languages that are topically comparable , but not parallel translations---we use the polylingual topic model from mimno et al , which was developed to model multilingual corpora that are topically comparable between languages -the documents are not direct translations , but they cover the same ideas | 1 |
in our paper , we show that massive amounts of data can have a major impact on discourse processing research as well---in our paper , we show that massive amounts of data can have a major impact on discourse processing research | 1 |
in conversational systems , understanding user intent is critical to the success of interaction---identification of user intent has played an important role in conversational systems | 1 |
the results from a crowdsourced survey indicated that news values influence people's decisions to click on a headline---a crowdsourcing survey indicates that news values affect people 's decisions to click on a headline | 1 |
our system not only identified the clinical temporal events , but also their detailed properties and their temporal relations with other events---outcomes of our system are not only the clinical temporal events , but also their detailed properties and their temporal relations with other events | 1 |
cite-p-17-1-6 extend the scheme to frame identification , for which they obtain satisfying results---cite-p-17-3-1 also take the grammar constraints into consideration | 1 |
the framework of linear models is derived from linear discriminant functions widely used for pattern classification and has been recently introduced into nlp tasks by collins and duffy---the method is derived from linear discriminant functions widely used for pattern classification , and has been recently introduced into nlp tasks by collins and duffy | 1 |
we show that a variety of ls models and representations , including alignment and language models , over both words and syntactic structures , can be adapted to the proposed higher-order formalism---from yahoo ! answers , we experimentally demonstrate that higher-order methods are broadly applicable to alignment and language models , across both word and syntactic representations | 1 |
for language model , we use a trigram language model trained with the srilm toolkit on the english side of the training corpus---five-gram language models are trained using kenlm | 0 |
these preference rules can be incorporated into a polynomial time generation algorithm , while some alternative formalizations of conversational impficature make the generation task np-hard---under these preference rules can be found in polynomial time , while some alternative formalizations of the free-of-false-implicatures constraint make the generation task np-hard | 1 |
following the approach in , we employ the morfessor 4 categories-map algorithm---following the approach in , we use the morfessor categories-map algorithm | 1 |
we propose a ranking strategy to select the best path in the constructed graph as a query-based abstract sentence for each cluster---for the generation phase , we propose a ranking strategy which selects the best path in the constructed word graph | 1 |
coreference resolution is the process of linking together multiple expressions of a given entity---coreference resolution is the process of finding discourse entities ( markables ) referring to the same real-world entity or concept | 1 |
to tune feature weights minimum error rate training is used , optimized against the neva metric---the feature weights are tuned to optimize bleu using the minimum error rate training algorithm | 1 |
in this paper , we study feature-based chinese relation extraction---in this paper , we propose a novel feature-based chinese relation extraction | 1 |
for representing words , we used 100 dimensional pre-trained glove embeddings---for english posts , we used the 200d glove vectors as word embeddings | 1 |
for each one of the 6 languages which our approach covers , we built a phrase-based machine translation model using the moses toolkit---as a baseline system , we used the moses statistical machine translation package to build grapheme-based and phoneme-based translation systems , using a bigram language model | 1 |
we estimated 5-gram language models using the sri toolkit with modified kneser-ney smoothing---the target-side language models were estimated using the srilm toolkit | 1 |
we use word embedding pre-trained on newswire with 300 dimensions from word2vec---where a compound is the idiomatic word choice in the translation , a mt system can instead produce separate words , genitive or other alternative constructions , or only translate one part of the compound | 0 |
in addition , the highly irregular japanese orthography as is analyzed in poses a challenge for machine translation tasks---as is shown in , the japanese orthography is highly irregular , which contributes to a substantial number of out-of-vocabulary words in the machine translation output | 1 |
word sense disambiguation ( wsd ) is a problem long recognised in computational linguistics ( yngve 1955 ) and there has been a recent resurgence of interest , including a special issue of this journal devoted to the topic ( cite-p-27-8-11 )---word sense disambiguation ( wsd ) is a fundamental task and long-standing challenge in natural language processing ( nlp ) | 1 |
in addition , we combine the final results of the above two semi-supervised boosting methods---results by combining the results of the two semi-supervised boosting methods | 1 |
we used the moses toolkit for performing statistical machine translation---we used the moses toolkit to build an english-hindi statistical machine translation system | 1 |
semantic role labeling ( srl ) is a task of automatically identifying semantic relations between predicate and its related arguments in the sentence---semantic role labeling ( srl ) is a kind of shallow sentence-level semantic analysis and is becoming a hot task in natural language processing | 1 |
word alignment is a key component in most statistical machine translation systems---word alignment is a fundamental problem in statistical machine translation | 1 |
hpsg is a feature-based grammatical framework which is characterized by a modular specification of linguistic generalizations through extensive use of principles and lexicalization of grammatical information---some studies have analysed how to extract hashtags from a microblogging environment | 0 |
then we propose two approaches in order to improve the performance of chinese chunking---we proposed two approaches in order to improve the performance of chinese chunking | 1 |
laws et al use graph-based models to represent linguistic relations and induce translations---laws et al used linguistic analysis in the form of graph-based models instead of a vector space | 1 |
in this paper , we conducted a systematic study of the feature space for relation extraction---in this paper , we conduct a systematic study of the feature space for relation extraction | 1 |
we use the glove pre-trained word embeddings for the vectors of the content words---in our experiments , the pre-trained word embeddings for english are 100-dimensional glove vectors | 1 |
ambiguity is the task of building up multiple alternative linguistic structures for a single input ( cite-p-13-1-8 )---the knowledge representation system kl-one was the first dl | 0 |
alternatively , to avoid extracting features from an anaphora resolution system , callin et al developed a classifier based on a feed-forward neural network , which considered mainly the preceding nouns , determiners and their part-of-speech as features---for example , callin et al designed a classifier based on a feed-forward neural network , which considered as features the preceding nouns and determiners along with their part-of-speech tags | 1 |
recently , the focus has moved to mining user-generated content , such as online debates , discussions on regulations , and product reviews---recently , the focus has also moved to mining from user-generated content , such as online debates , discussions on regulations , and product reviews | 1 |
we perform pre-training using the skipgram nn architecture available in the word2vec tool---we use word2vec ) to pre-train the word embedding of 300 dimention and keep them from updating while training | 1 |
while polysemy is the immediate cause of the first problem , it indirectly contributes to the second problem as well by preventing the effective use of thesauri---we preprocessed the corpus with tokenization and true-casing tools from the moses toolkit | 0 |
following up on a translation model proposed by simard et al , galley and manning extend the phrase-based approach in that they allow for discontinuous phrase pairs---for the phrase-based models , galley and manning propose a translation model that uses discontinuous phrases and a corresponding beam search decoder | 1 |
in addition , we show that type-based features , including novel distributional features based on representative verbs , accurately predict predominant aspectual class for unseen verb types---type-based features ( lingind , dist ) may provide useful priors for some verbs and successfully predict predominant aspectual class for unseen verb types | 1 |
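Each raw row above follows a fixed layout: two sentences joined by `---`, then the integer label between pipes. A minimal parsing sketch (the `parse_row` helper and its field names are assumptions inferred from this dump, not an official loader for the dataset):

```python
def parse_row(row: str):
    """Split one raw row 'sent_a---sent_b | label |' into its three fields.

    Assumes the last occurrence of '---' separates the two sentences and
    that the label is the integer between the trailing pipes.
    """
    sent_a, _, rest = row.rpartition("---")
    # rest looks like "sent_b | 1 |": split at the first " | " delimiter
    sent_b, _, tail = rest.partition(" | ")
    label = int(tail.strip(" |"))
    return sent_a.strip(), sent_b.strip(), label


example = ("word alignment is the task of identifying corresponding words "
           "in sentence pairs---word alignment is the task of identifying "
           "word correspondences between parallel sentence pairs | 1 |")
a, b, label = parse_row(example)
```

Using `rpartition` keeps the split stable even if a sentence happened to contain an earlier dash run, at the cost of assuming `---` never appears inside the second sentence, which holds for the rows shown here.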