| text (string, lengths 82–736) | label (int64, 0 or 1) |
|---|---|
sentences are passed through the stanford dependency parser to identify the dependency relations---long sentences are removed , and the remaining sentences are pos-tagged and dependency parsed using the pre-trained stanford parser | 1 |
word sense disambiguation ( wsd ) is a key enabling-technology---word sense disambiguation ( wsd ) is a fundamental task and long-standing challenge in natural language processing ( nlp ) | 1 |
this paper presents a simple and effective method that retrieves translation pieces to guide nmt for narrow domains---in this paper , we examine user adaptation to the system ’ s lexical and syntactic choices in the context of the deployed | 0 |
we trained a 5-grams language model by the srilm toolkit---we use the sri language modeling toolkit for language modeling | 1 |
distributed word representations induced through deep neural networks have been shown to be useful in several natural language processing applications---recently , distributed representations have been widely used in a variety of natural language processing tasks | 1 |
we use pre-trained glove vector for initialization of word embeddings---we use the pre-trained glove vectors to initialize word embeddings | 1 |
sentiment classification is the fundamental task of sentiment analysis ( cite-p-15-3-11 ) , where we are to classify the sentiment of a given text---du et al have shown that segment-level topics and their dependencies can improve modeling accuracy in a monolingual setting | 0 |
linguistica and morfessor are built around an idea of optimally encoding the data , in the sense of minimal description length---for language model , we use a trigram language model trained with the srilm toolkit on the english side of the training corpus | 0 |
it is a standard phrasebased smt system built using the moses toolkit---moses is used as the baseline phrase-based smt system | 1 |
we used a 5-gram language model with modified kneser-ney smoothing implemented using the srilm toolkit---a 4-gram language model was trained on the target side of the parallel data using the srilm toolkit from stolcke | 1 |
in this paper , we have presented an fdt-based model training approach to smt---to translate new-domain text , one major challenge is the large number of out-of-vocabulary ( oov ) and new-translation-sense words | 0 |
the targetside 4-gram language model was estimated using the srilm toolkit and modified kneser-ney discounting with interpolation---a hierarchical phrase-based translation grammar was extracted for the nist mt03 chinese-english translation using a suffix array rule extractor | 0 |
mikolov et al showed that word embedding represents words with meaningful syntactic and semantic information effectively---recently , mikolov et al introduced an efficient way for inferring word embeddings that are effective in capturing syntactic and semantic relationships in natural language | 1 |
we train a linear classifier using the averaged perceptron algorithm---the weights of the linear ranker are optimized using the averaged perceptron algorithm | 1 |
coreference resolution is the task of clustering a set of mentions in the text such that all mentions in the same cluster refer to the same entity---coreference resolution is the task of determining whether two or more noun phrases refer to the same entity in a text | 1 |
twitter is a popular microblogging service , which , among other things , is used for knowledge sharing among friends and peers---twitter is a huge microblogging service with more than 500 million tweets per day from different locations of the world and in different languages ( cite-p-10-1-6 ) | 1 |
mrt is used to optimize a model globally for an arbitrary evaluation metric---minimum error rate training is widely used to optimize feature weights for a linear model | 1 |
for pos tagging and syntactic parsing , we use the stanford nlp toolkit---we use the stanford corenlp caseless tagger for part-of-speech tagging | 1 |
the system dictionary of the mix-wp identifier is comprised of the ckip lexicon and those unknown words found automatically from the udn 2001 corpus by a chinese word autoconfirmation system---the system dictionary of the bigram is comprised of ckip lexicon and those unknown words found automatically in the udn 2001 corpus by a chinese word auto-confirmation system | 1 |
this observation is used to build an initial solution that is later improved through self-learning---analysis shows that our initial solution is instrumental for making self-learning work without supervision | 1 |
we use the pku and msra data provided by the second international chinese word segmentation bakeoff to test our model---in this section , we test our joint model on pku and msra datesets provided by the second segmentation bake-off | 1 |
latent dirichlet allocation is a widely adopted generative model for topic modeling---one of the simplest topic models is latent dirichlet allocation | 1 |
language models have also been proved useful when determining the reading level of a text---language models constitute an important feature for assessing readability | 1 |
tang et al embed user in a matrix and build user-specific representation by a convolutional neural network structure---tang et al proposed a user-product neural network to incorporate both user and product information for sentiment classification | 1 |
collobert et al proposed cnn architecture that can be applied to various nlp tasks , such as pos tagging , chunking , named entity recognition and semantic role labeling---collobert et al propose a multi-task learning framework with dnn for various nlp tasks , including part-of-speech tagging , chunking , named entity recognition , and semantic role labelling | 1 |
our neural models achieve state-of-the-art results on the semeval 2010 relation classification task---our annotated dataset and trained dependency parser are available at http : / / slanglab . cs . umass . edu / twitteraae / | 0 |
our proposed method yielded better results than the previous state-of-the-art ilp system on different tac data sets---summarization data sets demonstrate this proposed method outperforms the previous ilp system | 1 |
semantic role labeling ( srl ) is defined as the task to recognize arguments for a given predicate and assign semantic role labels to them---semantic role labeling ( srl ) is the task of identifying the arguments of lexical predicates in a sentence and labeling them with semantic roles ( cite-p-13-3-3 , cite-p-13-3-11 ) | 1 |
we evaluated translation quality based on the caseinsensitive automatic evaluation score bleu-4---it has been applied to various areas such as image classification , speech recognition , image caption generation and machine translation | 0 |
we have shown that both syntactic and discourse relationships are important in antecedent selection---syntactic and discourse features are important in antecedent selection | 1 |
the output of the parser is a dod for the input utterance , which contains information both about its syntactic structure and its content---the output of the parser is a finite-state transducer that compactly packs all the ambiguities as a lattice | 1 |
however , the hand-crafted , well-structured taxonomies including wordnet , opencyc and freebase that are publicly available may not be complete for new or specialized domains---however , handcrafted , well-structured taxonomies such as wordnet , opencyc and freebase , which are publicly available , can be incomplete for new or specialized domains | 1 |
we first use the popular toolkit word2vec 1 provided by mikolov et al to train our word embeddings---to train the link embeddings , we use the speedy , skip-gram neural language model of mikolov et al via their toolkit word2vec | 1 |
semantic similarity is a measure that specifies the similarity of one text ’ s meaning to another ’ s---semantic similarity is a context dependent and dynamic phenomenon | 1 |
further , we apply a 4-gram language model trained with the srilm toolkit on the target side of the training corpus---we apply sri language modeling toolkit to train a 4-gram language model with kneser-ney smoothing | 1 |
we initialize these word embeddings with glove vectors---we initialize the embedding layer weights with glove vectors | 1 |
paraphrase identification ( pi ) may be defined as “ the task of deciding whether two given text fragments have the same meaning ” ( lintean & rus 2011 )---paraphrase identification ( pi ) is a task that recognizes whether a pair of sentences is a paraphrase | 1 |
we use the stanford parser for obtaining all syntactic information---this syntactic information is obtained from the stanford parser | 1 |
in the early part of the last decade , phrase-based machine translation emerged as the preeminent design of statistical mt systems---phrase-based statistical mt has become the predominant approach to machine translation in recent years | 1 |
firstly , we built a forward 5-gram language model using the srilm toolkit with modified kneser-ney smoothing---we further used a 5-gram language model trained using the srilm toolkit with modified kneser-ney smoothing | 1 |
we observed that by using boostrap-resampling over bleu and nist metrics as described in---we perform bootstrap resampling with bounds estimation as described in | 1 |
we use the glove vectors of 300 dimension to represent the input words---huang et al reported that the time complexity of btg decoding with m-gram language model is o ) | 0 |
the statistics for these datasets are summarized in settings we use glove vectors with 840b tokens as the pre-trained word embeddings---the weights of the word embeddings use the 300-dimensional glove embeddings pre-trained on common crawl data | 1 |
the mre is the point of the story – the most unusual event that has the greatest emotional impact on the narrator and the audience---somasundaran and wiebe used unsupervised methods to identify stances in online debates | 0 |
we consider that feedback functions are expressed overwhelmingly through short utterances or fragments or in the beginning of potentially longer contributions---we consider that the feedback function is expressed overwhelmingly through short utterances or fragments or in the beginning of potentially longer contributions | 1 |
the parser uses the cky chart parsing algorithm described in steedman---word sense disambiguation ( wsd ) is a problem of finding the relevant clues in a surrounding context | 0 |
in this paper , we propose a distribution-based cutoff method---rush et al luong et al propose a neural machine translation model with two-layer lstms for the encoder-decoder | 0 |
a back-off 2-gram model with good-turing discounting and no lexical classes was also created from the training set , using the srilm toolkit ,---the third baseline , a bigram language model , was constructed by training a 2-gram language model from the large english ukwac web corpus using the srilm toolkit with default good-turing smoothing | 1 |
while the notion of scene construction is not new , our insight is that this can be done with a simple “ knowledge graph ” representation , allowing several massive background kbs to be applied , somewhat alleviating the knowledge bottleneck---by using a simple “ knowledge graph ” representation of the question , we can leverage several large-scale linguistic resources to provide missing background knowledge , somewhat alleviating the knowledge bottleneck | 1 |
the data consists of sections of the wall street journal part of the penn treebank , with information on predicate-argument structures extracted from the propbank corpus---the data comes from the conll 2000 shared task , which consists of sentences from the penn treebank wall street journal corpus | 1 |
we used a 4-gram language model which was trained on the xinhua section of the english gigaword corpus using the srilm 4 toolkit with modified kneser-ney smoothing---soricut and echihabi propose documentlevel features to predict document-level quality for ranking purposes , having bleu as quality label | 0 |
chapman et al proposed a rule-based algorithm called negex for determining whether a finding or disease mentioned within narrative medical reports is present or absent---chapman et al developed negex , a simple regular expression-based algorithm to determine whether a finding or disease mentioned within medical reports was present or absent | 1 |
chodorow et al discuss the comparability of grammatical error detection systems and give recommendations for best practices---chodorow et al presented the evaluation scheme for mapping writer , annotator , and system output onto traditional evaluation metrics for grammatical error detection | 1 |
first , our summaries are created from extracted phrases rather than from sentences---sentences , our summaries are created from extracted phrases rather than from sentences | 1 |
the parameter weights are optimized with minimum error rate training---the minimum error rate training was used to tune the feature weights | 1 |
mihalcea et al developed several corpus-based and knowledge-based word similarity measures and applied them to a paraphrase recognition task---mihalcea et al combine pointwise mutual information , latent semantic analysis and wordnet-based measures of word semantic similarity into an arbitrary text-to-text similarity metric | 1 |
as discussed in section 4 , k̂ bonferroni is the appropriate estimator of the number of cases one algorithm outperforms another---as discussed in section 4 , k bonferroni is the appropriate estimator of the number of cases | 1 |
we apply our approach to train a semantic parser that uses 77 relations from freebase in its knowledge representation---shrestha and mckeown also study the problem of da modeling in email conversations considering the two dialogue acts of question and answer | 0 |
in our experiments we use word2vec as a representative scalable model for unsupervised embeddings---to get a dictionary of word embeddings , we use the word2vec tool 2 and train it on the chinese gigaword corpus | 1 |
the chinese word embeddings are pre-trained using skip-gram model on the raw cqa corpus---we describe a supervised and also a semi-supervised method to discriminate the senses of partial cognates between french and english | 0 |
we use srilm for training a trigram language model on the english side of the training data---we use sri language modeling toolkit to train a 5-gram language model on the english sentences of fbis corpus | 1 |
previous attention models are built using information embedded in text including users , products and text in local context for sentiment classification---in addition , user and product information are flexibly modeled for sentiment classification in the neural network methods | 1 |
semantic parsing is the problem of translating human language into computer language , and therefore is at the heart of natural language understanding---semantic parsing is the task of mapping a natural language query to a logical form ( lf ) such as prolog or lambda calculus , which can be executed directly through database query ( zettlemoyer and collins , 2005 , 2007 ; haas and riezler , 2016 ; kwiatkowksi et al. , 2010 ) | 1 |
we tune the systems using kbest batch mira---we use k-batched mira to tune the weights for all the features | 1 |
we initialize these word embeddings with glove vectors---for all models , we use fixed pre-trained glove vectors and character embeddings | 1 |
coreference resolution is the process of linking multiple mentions that refer to the same entity---automatic text summarization is a seminal problem in information retrieval and natural language processing ( luhn , 1958 ; baxendale , 1958 ; edmundson , 1969 ) | 0 |
for the actioneffect embedding model , we use pre-trained glove word embeddings as input to the lstm---for sentences , we tokenize each sentence by stanford corenlp and use the 300-d word embeddings from glove to initialize the models | 1 |
we use the 300-dimensional pre-trained word2vec 3 word embeddings and compare the performance with that of glove 4 embeddings---we use glove vectors with 200 dimensions as pre-trained word embeddings , which are tuned during training | 1 |
following the experimental settings of lang and lapata , we use the conll 2008 shared task dataset , only consider verbal predicates , and run unsupervised training on the standard training set---following common practices , we measure the overlap of induced semantic roles and their gold labels on the conll 2008 training data | 1 |
we evaluated the translation quality using the bleu-4 metric---huang et al apply split-merge training to create hmms with latent annotations for chinese pos tagging | 0 |
we use the adagrad algorithm to optimize the conditional , marginal log-likelihood of the data---to optimize model parameters , we use the adagrad algorithm of duchi et al with l2 regularization | 1 |
translation results are evaluated using the word-based bleu score---the translation quality is evaluated by case-insensitive bleu-4 metric | 1 |
for the latter baseline , we use berkeley parser collins parser---for comparison , we also include the berkeley parser | 1 |
könig et al looked also at mci and ad subjects and examined vocal features using support vector machine---könig et al looked also at mci and ad subjects and examined vocal features using support vector machines | 1 |
incometo select the most fluent path , we train a 5-gram language model with the srilm toolkit on the english gigaword corpus---in both pre-training and fine-tuning , we adopt adagrad and l2 regularizer for optimization | 0 |
by developing this dataset , we also introduce a new nlp task for the automatic classification of content types---we present initial promising results for the automatic classification of content types | 1 |
in this run , we use a sentence vector derived from word embeddings obtained from word2vec---here , for textual representation of captions , we use fisher-encoded word2vec features | 1 |
for implementation , we used the liblinear package with all of its default parameters---we use the svm implementation available in the li-blinear package | 1 |
research on automatic semantic structure extraction has been widely studied since the pioneering work of gildea and jurafsky---statistical translation models for retrieval have first been introduced by berger and lafferty | 0 |
more recently , li et al proposed the first joint model for chinese pos tagging and dependency parsing in a graph-based parsing framework , which is one of our baseline systems---li et al , li and zhou , hatori et al , and ma et al present systems that jointly model chinese pos tagging and dependency parsing | 1 |
ner is a sequence tagging task that consists in selecting the words that describe entities and recognizing their types ( e.g. , a person , location , company , etc . )---in this paper , we evaluate performance on a domain adaptation setting | 0 |
this is therefore the underlying approach for reducing the word sampling problem into graph-based active learning---for pos tagging and syntactic parsing , we use the stanford nlp toolkit | 0 |
it has been shown that incorporating sentiment analysis can improve community detection when looking for sentiment-based communities---analysis of the results can provide insight on which contextual information provide the most improvement in the task of sentiment-based community detection | 1 |
multiword expressions are problematic in machine translation due to the idiomaticity and overgeneration problems---coreference resolution is the task of clustering a sequence of textual entity mentions into a set of maximal non-overlapping clusters , such that mentions in a cluster refer to the same discourse entity | 0 |
semantic parsing is the problem of translating human language into computer language , and therefore is at the heart of natural language understanding---coreference resolution is the process of finding discourse entities ( markables ) referring to the same real-world entity or concept | 0 |
for our logistic regression classifier we use the implementation included in the scikit-learn toolkit 2---and also includes a pos tagger , which can be used alone or as part of collocation or idiom extraction | 0 |
neural networks , working on top of conventional n-gram back-off language models , have been introduced in as a potential means to improve discrete language models---neural networks , working on top of conventional n-gram back-off language models , have been introduced in as a potential means to improve discrete language models | 1 |
zeng et al use convolutional neural network for learning sentence-level features of contexts and obtain good performance even without using syntactic features---we trained a trigram language model on the chinese side , with the srilm toolkit , using the modified kneser-ney smoothing option | 0 |
neural networks , working on top of conventional n-gram back-off language models , have been introduced in as a potential means to improve discrete language models---neural networks , working on top of conventional n-gram back-off language models , have been introduced in as a potential means to improve conventional language models | 1 |
1 bunsetsu is a linguistic unit in japanese that roughly corresponds to a basic phrase in english---1 a bunsetsu is the linguistic unit in japanese that roughly corresponds to a basic phrase in english | 1 |
furthermore , we train a 5-gram language model using the sri language toolkit---in this paper , we propose a convolutional neural network ( cnn ) model for text-based multiple choice question answering | 0 |
semantic role labeling ( srl ) is the task of automatically labeling predicates and arguments in a sentence with shallow semantic labels---semantic role labeling ( srl ) is the task of identifying the semantic arguments of a predicate and labeling them with their semantic roles | 1 |
both language models use modified kneser-ney smoothing---all models utilize the modified interpolated kneser-ney smoothing technique | 1 |
the integrated dialect classifier is a maximum entropy model that we train using the liblinear toolkit---we use the multi-class logistic regression classifier from the liblinear package 2 for the prediction of edit scripts | 1 |
we used the moses toolkit to train the phrase tables and lexicalized reordering models---for phrase-based smt translation , we used the moses decoder and its support training scripts | 1 |
furthermore , we train a 5-gram language model using the sri language toolkit---srilm toolkit is used to build these language models | 1 |
in this paper , we propose an attention-based hierarchical neural network model for discourse parsing---in this paper , we propose to use a hierarchical bidirectional long short-term memory ( bi-lstm ) network | 1 |
we design a coupled bag-of-words model , which correlates words based on their similarities on sentence-level readability computed using text features---such as the readability formulae , the word-based and feature-based methods , our method develops a coupled bag-of-words model which combines the merits of word frequencies and text features | 1 |
moreover , we show that , as a decoding algorithm , the greedy method surpasses dual decomposition in second-order parsing---we use dual decomposition to show that the greedy method indeed succeeds as an inference algorithm | 1 |
in this paper , with the help of these two concepts , we propose a novel framework to solve the one-to-many non-isomorphic mapping issue---for the first issue , we propose a novel non-isomorphic translation framework to capture more non-isomorphic structure | 1 |