| text (string, length 82 to 736) | label (int64: 0 or 1) |
|---|---|
in recent years , distributional semantics models have received close attention from the linguistic community---we pre-train the word embeddings using word2vec | 0 |
mihalcea et al propose a method to learn multilingual subjective language via cross-language projections---the feature weights of the translation system are tuned with the standard minimum-error-ratetraining to maximize the systems bleu score on the development set | 0 |
we evaluated translation quality using uncased bleu and ter---we evaluated our models using bleu and ter | 1 |
dye et al introduce a system that utilizes scripts for specific situations---dye et al developed a system based on scripts of common interactions | 1 |
we train a trigram language model with the srilm toolkit---the language models in our systems are trained with srilm | 1 |
for all experiments , we used a 4-gram language model with modified kneser-ney smoothing which was trained with the srilm toolkit---even without such syntactic information , our neural models can realize comparable performance exclusively using the word sequence information of a sentence | 0 |
the phrase translation strategy significantly outperformed the sentence translation strategy---phrase translation strategy consistently outperformed the sentence translation strategy | 1 |
ng proposed a generative model for unsupervised coreference resolution that views coreference as an em clustering process---ng presented a generative model that views coreference as an em clustering process | 1 |
the corpus we used contains manual dialog act annotations as described in hu et al---the corpus we used also contains manual dialog act annotations as described in hu et al | 1 |
bilingual word embeddings has become a source of great interest in recent times---bilingual word embeddings have attracted a lot of attention in recent times | 1 |
bilingual co-training also enables us to build classifiers for two languages in tandem with the same combined amount of data as would be required for training a single classifier in isolation while achieving superior performance---bilingual co-training enables us to build classifiers for two languages in tandem with the same combined amount of data as required for training a single classifier in isolation while achieving superior performance | 1 |
mcclosky et al presented a successful instance of parsing with self-training by using a reranker---mcclosky et al use self-training in combination with a pcfg parser and reranking | 1 |
slot filling is a key component in spoken language understanding ( slu ) , which is usually treated as a sequence labeling problem and solved using methods such as conditional random fields ( crfs ) ( cite-p-15-3-8 ) or recurrent neural networks ( rnns ) ( cite-p-15-3-13 , cite-p-15-3-7 )---slot filling is a traditional task and tremendous efforts have been done , especially since the 1980s when the defense advanced research program agency ( darpa ) airline travel information system ( atis ) projects started ( cite-p-16-3-4 ) | 1 |
we further used adam to optimize the parameters , and used cross-entropy as the loss function---both barrón-cedeño et al and filice et al use lexical similarities and tree kernels on parse trees | 0 |
a pseudo-word is the concatenation of two words ( e.g . house/car )---pseudo-word is a kind of multi-word expression ( includes both unary word and multi-word ) | 1 |
we then use an efficient general-purpose parser , bitpar , to parse unseen sentences with the resulting treebank grammars and strip off our morphological features for the purpose of evaluation---kalchbrenner et al introduced a convolutional neural network for sentence modeling that uses dynamic k-max pooling to better model inputs of varying sizes | 0 |
in our experiments , we used the srilm toolkit to build 5-gram language model using the ldc arabic gigaword corpus---for all data sets , we trained a 5-gram language model using the sri language modeling toolkit | 1 |
pitler and nenkova used the penn discourse treebank to examine discourse relations---pitler and nenkova used the same features to evaluate how well a text is written | 1 |
following up on a translation model proposed by simard et al , galley and manning extend the phrase-based approach in that they allow for discontinuous phrase pairs---the glove 100-dimensional pre-trained word embeddings are used for all experiments | 0 |
finally , we have demonstrated that machine transliteration is immediately useful to endto-end smt---to allow a comparison between transliteration systems , we are able to show that adding our transliterations to a production-level smt | 1 |
the iwcb model is a variation of the word-lattice-based chinese character bigram proposed by lee et al---the model is slightly modified from the word-lattice-based character bigram model of lee et al | 1 |
bengio et al proposed neural probabilistic language model by using a distributed representation of words---bengio et al proposed to use artificial neural network to learn the probability of word sequences | 1 |
semantic role labeling ( srl ) is the task of identifying the arguments of lexical predicates in a sentence and labeling them with semantic roles ( cite-p-13-3-3 , cite-p-13-3-11 )---semantic role labeling ( srl ) is a kind of shallow semantic parsing task and its goal is to recognize some related phrases and assign a joint structure ( who did what to whom , when , where , why , how ) to each predicate of a sentence ( cite-p-24-3-4 ) | 1 |
in summarization , barzilay and mckeown present a sentence fusion technique for multidocument summarization which needs to restructure sentences to improve text coherence---we propose a data-driven approach for generating short children 's stories that does not require extensive manual involvement | 0 |
negation is a grammatical category that comprises devices used to reverse the truth value of propositions---for regularization , dropout is applied to the input and hidden layers | 0 |
we evaluated the translation quality of the system using the bleu metric---we evaluated the translation quality using the case-insensitive bleu-4 metric | 1 |
to this end , we use conditional random fields---the second decoding method is to use conditional random field | 1 |
in the following experiments , we explore which factors affect stability , as well as how this stability affects downstream tasks that word embeddings are commonly used for---in the following experiments , we explore which factors affect stability , as well as how this stability affects downstream tasks | 1 |
in this proposal , we propose a corpus-based study of doctor-patient conversations of antibiotic treatment negotiation in pediatric consultations---in this proposal , we propose a corpus based study to examine doctor-patient conversation of antibiotic treatment negotiation | 1 |
the first strategy , named asymmetry alignment , identifies nes only on the source side and then finds their corresponding nes on the target side---the first strategy identifies nes only on the source side and then finds their corresponding nes on the target side | 1 |
we use minimum error rate training to tune the feature weights of hpb for maximum bleu score on the development set with serval groups of different start weights---we utilize minimum error rate training to optimize feature weights of the paraphrasing model according to ndcg | 1 |
krisp is a semantic parser learning system which uses word subsequence kernel based svm classifiers and was shown to be robust to noise compared to other semantic parser learners---krisp is a trainable semantic parser that uses support vector machines as the machine learning method with a string subsequence kernel | 1 |
waseem et al , 2017 ) proposed a typology of abusive language sub-tasks---waseem et al , 2017 ) tried to capture similarities between different sub tasks | 1 |
the srilm toolkit was used to build the 5-gram language model---the language models were built using srilm toolkits | 1 |
we used a 5-gram language model with modified kneser-ney smoothing implemented using the srilm toolkit---we estimated 5-gram language models using the sri toolkit with modified kneser-ney smoothing | 1 |
coreference resolution is the task of identifying all mentions which refer to the same entity in a document---wiebe et al train a sentence-level probabilistic classifier on data from the wsj to identify subjectivity in these sentences | 0 |
named entity recognition ( ner ) is the task of identifying and classifying phrases that denote certain types of named entities ( nes ) , such as persons , organizations and locations in news articles , and genes , proteins and chemicals in biomedical literature---named entity recognition ( ner ) is the task of detecting named entity mentions in text and assigning them to their corresponding type | 1 |
coreference resolution is a well known clustering task in natural language processing---coreference resolution is the task of determining which mentions in a text are used to refer to the same real-world entity | 1 |
our trigram word language model was trained on the target side of the training corpus using the srilm toolkit with modified kneser-ney smoothing---we trained a 5-gram language model on the xinhua portion of gigaword corpus using the srilm toolkit | 1 |
a sentiment lexicon is a list of words and phrases , such as " excellent " , " awful " and " not bad " , each of them is assigned with a positive or negative score reflecting its sentiment polarity and strength ( cite-p-18-3-8 )---a sentiment lexicon is a list of words and phrases , such as " excellent " , " awful " and " not bad " , each is being assigned with a positive or negative score reflecting its sentiment polarity and strength | 1 |
as an illustrative example , we show " anneke gronloh " , which may occur as " mw. , gronloh " , " anneke kronloh " or " mevrouw g "---as an illustrative example , we show " anneke gronloh " , which may occur as " mw . , gronloh " , " anneke kronloh " | 1 |
nowadays a very popular topic model is latent dirichlet allocation , a generative bayesian hierarchical model---phrase-based statistical mt has become the predominant approach to machine translation in recent years | 0 |
we use this model as an additional translation table in the moses phrase-based statistical mt system along with a standard phrasebased translation table---to induce interlingual features , several resources have been used , including bilingual lexicon and parallel corpora | 0 |
we train skip-gram word embeddings with the word2vec toolkit 1 on a large amount of twitter text data---we trained a continuous bag of words model of 400 dimensions and window size 5 with word2vec on the wiki set | 1 |
even if learners have access to an incorrect example retrieval system , such as kamata and yamauchi and nishina et al , they are often unable to rewrite a composition without correct versions of the incorrect examples---even if learners have access to an incorrect example retrieval system , such as kamata and yamauchi and nishina et al , they do not know how to search for the examples because they do not know whether their query includes errors | 1 |
we use evaluation metrics similar to those in---in this paper , we explore syntactic structure features by means of bilingual tree kernels and apply them to bilingual subtree alignment | 0 |
we used trigram language models with interpolated kneser-kney discounting trained using the sri language modeling toolkit---we use srilm toolkit to train a trigram language model with modified kneser-ney smoothing on the target side of training corpus | 1 |
twitter is a popular microblogging service , which , among other things , is used for knowledge sharing among friends and peers---second , we explored the actual representation of the acm | 0 |
by contrast , our approach is based on a single unified model , requires no entity types , and for us inferring a fact amounts to not more than a few dot products---by contrast , our approach is based on a single unified model , requires no entity types , and for us inferring | 1 |
we trained an svm with rbf kernel using scikit-learn---we trained svm models with rbf kernel using scikit-learn | 1 |
the word embeddings can provide word vector representation that captures semantic and syntactic information of words---coreference resolution is the problem of identifying which mentions ( i.e. , noun phrases ) refer to which real-world entities | 0 |
relation extraction is the task of finding relations between entities in text , which is useful for several tasks such as information extraction , summarization , and question answering ( cite-p-14-3-7 )---we use bleu as the metric to evaluate the systems | 0 |
this approach relies on word embeddings for the computation of semantic relatedness with word2vec---in this approach , words are mapped into a continuous latent space using two embedding methods word2vec and glove | 1 |
we use conditional random fields , a popular approach to solve sequence labeling problems---le and mikolov extended the word embedding learning model by incorporating paragraph information | 0 |
pang and lee cast this problem as a classification task , and use machine learning method in a supervised learning framework---pang and lee cast this problem a classification task , and use machine learning method in a supervised learning framework | 1 |
crowdsourcing is a cheap and increasingly-utilized source of annotation labels---crowdsourcing is a scalable and inexpensive data collection method , but collecting high quality data efficiently requires thoughtful orchestration of crowdsourcing jobs | 1 |
reisinger and mooney and huang et al also presented methods that learn multiple embeddings per word by clustering the contexts---huang et al further extended this context clustering method and incorporated global context to learn multi-prototype representation vectors | 1 |
the baseline system is a phrase-based smt system , built almost entirely using freely available components---the state-ofthe-art baseline is a standard phrase-based smt system tuned with mert | 1 |
pennell and liu were the first to study characterbased normalization---in this way , our cache-based approach can provide useful data at the beginning of the translation process | 0 |
the re-ranking algorithms include rescoring and minimum bayes-risk decoding---multiple solutions are also used for reranking , tuning , minimum bayes risk decoding , and system combination | 1 |
the joint nature provides crucial benefits by allowing situated cues , such as the set of visible objects , to directly influence learning---during the last decade , statistical machine translation systems have evolved from the original word-based approach into phrase-based translation systems | 0 |
we compute number and gender for common nouns using the number and gender data provided by bergsma and lin---the empirical evaluation of all our systems on the two standard metrics bleu and ter is presented in table 5 | 0 |
the targetside 4-gram language model was estimated using the srilm toolkit and modified kneser-ney discounting with interpolation---in this paper , we described our opinion extraction task , which extract opinion | 0 |
we use the word2vec framework in the gensim implementation to generate the embedding spaces---we use the word2vec tool to train monolingual vectors , 6 and the cca-based tool for projecting word vectors | 1 |
we train a 4-gram language model on the xinhua portion of english gigaword corpus by srilm toolkit---our 5-gram language model is trained by the sri language modeling toolkit | 1 |
named entity recognition ( ner ) is a fundamental information extraction task that automatically detects named entities in text and classifies them into predefined entity types such as person , organization , gpe ( geopolitical entities ) , event , location , time , date , etc---named entity recognition ( ner ) is the task of detecting named entity mentions in text and assigning them to their corresponding type | 1 |
some of these are not related to discourse at all , morphosyntactic similarities and word based measures like tf-idf ,---some of the work is not related to discourse at all , morphosyntactic similarities and word-based measures like tf-idf , | 1 |
our basic algorithm is an unsupervised method presented in martinez et al---complexity , i . e . , we propose a novel method to compress the embedding and prediction subnets in neural language models | 0 |
we tokenized the english data according to the penn treebank standard with stanford corenlp---we used stanford corenlp to tokenize the english and german data according to the penn treebank standard | 1 |
for the semantic language model , we used the srilm package and trained a tri-gram language model with the default goodturing smoothing---we used srilm for training the 5-gram language model with interpolated modified kneser-ney discounting , | 1 |
metamap is developed to link the text of medical documents to the knowledge embedded in umls metathesaurus---we propose an approximate mcmc framework that facilitates efficient inference | 0 |
latent dirichlet allocation is one of the most popular topic models used to mine large text data sets---taglda is a representative latent topic model by extending latent dirichlet allocation | 1 |
relation extraction is the task of finding relations between entities in text , which is useful for several tasks such as information extraction , summarization , and question answering ( cite-p-14-3-7 )---the only previous work on english-to-arabic smt that we are aware of is by sarikaya and deng | 0 |
in the last decade , semantic frames , such as framenet and propbank , have been manually elaborated---recently , large corpora have been manually annotated with semantic roles in framenet and propbank | 1 |
twitter is a microblogging service that has 313 million monthly active users 1---recently , convolutional neural networks have yielded best performance on many text classification tasks | 0 |
we also use a 4-gram language model trained using srilm with kneser-ney smoothing---we used 5-gram models , estimated using the sri language modeling toolkit with modified kneser-ney smoothing | 1 |
developing features has been shown crucial to advancing the state-of-the-art in dependency tree parsing---developing features has been shown to be crucial to advancing the state-of-the-art in dependency parsing | 1 |
we estimated 5-gram language models using the sri toolkit with modified kneser-ney smoothing---we used a 5-gram language model with modified kneser-ney smoothing , built with the srilm toolkit | 1 |
as a learning algorithm for our classification model , we used maximum entropy---a kn-smoothed 5-gram language model is trained on the target side of the parallel data with srilm | 0 |
automatic text generation is the process of converting non-linguistic data into coherent and comprehensible text ( cite-p-21-3-11 )---script knowledge is a body of knowledge that describes a typical sequence of actions people do in a particular situation ( cite-p-7-1-6 ) | 0 |
second , we utilize word embeddings 3 to represent word semantics in dense vector space---third , we convert the stanford glove twitter model to word2vec and obtain the word embeddings | 1 |
the emotions proposed by are popular in emotion classification tasks---these emotion labels are mainly borrowed from ekman and the occ emotion model | 1 |
luong et al showed improvements on translation , captioning , and parsing in a shared multi-task setting---luong et al used a multi-task setup with a shared encoder to parse and translate the source language | 1 |
we use two standard evaluation metrics bleu and ter , for comparing translation quality of various systems---we evaluate the performance of different translation models using both bleu and ter metrics | 1 |
we present an unsupervised model of dialogue act sequences in conversation---we have presented an unsupervised model of das in conversation that separates out content | 1 |
mei et al propose an encoder-aligner-decoder architecture to generate weather forecasts---mei et al propose an encoder-aligner-decoder model to generate weather forecasts | 1 |
wordnet is a lexical database where each unique meaning of a word is represented by a synonym set---word sense disambiguation ( wsd ) is the problem of assigning a sense to an ambiguous word , using its context | 0 |
srl is a complex task , which is reflected by the algorithms used to address it---the model weights were trained using the minimum error rate training algorithm | 0 |
we use the treebanks from the conll shared tasks on dependency parsing for evaluation---we used data from the conll-x shared task on multilingual dependency parsing | 1 |
we also use a 4-gram language model trained using srilm with kneser-ney smoothing---we train a kn-smoothed 5-gram language model on the target side of the parallel training data with srilm | 1 |
to remedy this , we have proposed using maximum mutual information ( mmi ) as the objective function---we propose using maximum mutual information ( mmi ) as the objective function | 1 |
we analyze a set of linguistic features in both truthful and deceptive responses to interview questions---we conduct an empirical analysis of feature sets and report on the different characteristics of truthful and deceptive language | 1 |
table 4 presents case-insensitive evaluation results on the test set according to the automatic metrics bleu , ter , and meteor---table 3 shows the performance of these systems under three widely used evaluation metrics ter , bleu and meteor | 1 |
medlock and briscoe , morante et al , özgür and radev , szarvas et al ,---we computed the translation accuracies using two metrics , bleu score , and lexical accuracy on a test set of 30 sentences | 0 |
we use the word2vec skip-gram model to train our word embeddings---we use word2vec from as the pretrained word embeddings | 1 |
neural network-based encoder-decoder models are among recent attractive methodologies for tackling natural language generation tasks---neural network-based encoder-decoder models are cutting-edge methodologies for tackling natural language generation ( nlg ) tasks | 1 |
the brat annotation tool was used for manual revision and annotation of the oov words---the feature weights of the translation system are tuned with the standard minimum-error-ratetraining to maximize the systems bleu score on the development set | 0 |
we compare against state-of-the-art hierarchical translation baselines , based on the joshua and moses translation systems with default decoding settings---that attempts to learn distinct feature representations for anaphoricity detection and antecedent ranking , which we encourage by pre-training on a pair of corresponding subtasks | 0 |
the evaluation shows promising expert search results---expert search shows promising improvement | 1 |
the results help to understand how an environment of a slavonic language affects the performance of methods created for english---questions and knowledge base have been performed to evaluate their performance in the environment of the slavonic language | 1 |