| text (string, length 82–736) | label (int64: 0 or 1) |
|---|---|
in a first stage , it generates candidate compressions by removing branches from the source sentence's dependency tree using a maximum entropy classifier---in a first stage , the method generates candidate compressions by removing branches from the source sentence ' s dependency tree using a maximum entropy classifier | 1 |
similarly , concurrent learning could be used in an online fashion via live interaction with human users---rl can also have important implications for learning via live interaction with human users | 1 |
the method proposed by huang et al incorporates the sinica word segmentation system to detect typos---huang et al have proposed a learning model based on chinese phonemic alphabet for spelling check | 1 |
segmentation is a nontrivial task in japanese because it does not delimit words by whitespace---since segmentation is the first stage of discourse parsing , quality discourse segments are critical to building quality discourse representations ( cite-p-12-1-10 ) | 1 |
curran and moens found that dramatically increasing the volume of raw input data for distributional similarity tasks increases the accuracy of synonyms extracted---in this paper , we present discrex , the first approach for distant supervision to relation extraction | 0 |
similar to the issue-response relationship , shrestha et al proposed methods to identify the question-answer pairs from an email thread---shrestha and mckeown propose a supervised learning method to detect question-answer pairs in email conversations | 1 |
moreover , the large standardised development and test sets in simverb-3500 allow for principled tuning of hyperparameters , a critical aspect of achieving strong performance with the latest representation learning architectures---universal dependencies and morphology ( ud ) has recently been initiated within the nlp community ( cite-p-21-1-15 ) | 0 |
we trained a 3-gram language model on the spanish side using srilm---this personal belief is called argument | 0 |
for example , burchardt , erk , and frank apply a word sense disambiguation system to annotate predicates with a wordnet sense and hyponyms of these predicates are then assumed to evoke the same frame---coreference resolution is the task of determining whether two or more noun phrases refer to the same entity in a text | 0 |
for the syntactic analogy and text classification tasks , lmms also surpass all the baselines---for the syntactic analogy and text classification tasks , our models also surpass all the baselines | 1 |
blitzer et al used structural correspondence learning to train a classifier on source data with new features induced from target unlabeled data---kann et al achieve the current state-of-the-art for canonical segmentation by re-ranking the output of the encoder-decoder system | 0 |
act is a theory of affective reasoning that uses empirically derived equations to predict the sentiments and emotions that arise from events---act is a social psychological theory of human social interaction ( cite-p-14-3-6 ) | 1 |
we presented a maximum entropy model to extend the sentence compression methods described by knight and marcu---as an example of model complexity , consider the popular hierarchical phrase-based model of chiang , which can translate discontiguous phrases | 0 |
as discussed in the introduction , we use conditional random fields , since they are particularly suitable for sequence labelling---we use a conditional random field since it represents the state of the art in sequence modeling and has also been very effective at named entity recognition | 1 |
we perform minimum error rate training to tune various feature weights---we used minimum error rate training to optimize the feature weights | 1 |
in section 2 , we discuss previous work , followed by an explanation of our model and its implementation in sections 3 and 4---ixa pipeline provides ready to use modules to perform efficient and accurate | 0 |
table 2 shows the translation quality measured in terms of bleu metric with the original and universal tagset---table 2 displays the quality , of the automatic translations generated for the test partitions | 1 |
we used a phrase-based smt model as implemented in the moses toolkit---we trained the statistical phrase-based systems using the moses toolkit with mert tuning | 1 |
we used the srilm toolkit to generate the scores with no smoothing---we use pre-trained glove vector for initialization of word embeddings | 0 |
several studies directly compare different word embedding models---context have not been systematically compared for different word embeddings | 1 |
finally , we used kenlm to create a trigram language model with kneser-ney smoothing on that data---unsupervised parsing has attracted researchers for decades for recent reviews ) | 0 |
we evaluate our model on three different tasks : multimodal sentiment analysis , speaker trait analysis , and emotion recognition---efficiency and performance of our approach are evaluated on different downstream tasks , namely sentiment analysis , speaker-trait recognition and emotion recognition | 1 |
the standard phrase-based machine translation system focuses on finding the most probable target sentence given the source sentence---a phrase-based smt system takes a source sentence and produces a translation by segmenting the sentence into phrases and translating those phrases separately | 1 |
in all submitted systems , we use the phrase-based moses decoder---we introduce a supervised fsc model to teach the compression model to generate stable sequences | 0 |
semantic parsing is the task of mapping natural language to machine interpretable meaning representations---semantic parsing is the problem of translating human language into computer language , and therefore is at the heart of natural language understanding | 1 |
we implemented this model using the srilm toolkit with the modified kneser-ney discounting and interpolation options---we used trigram language models with interpolated kneser-ney discounting trained using the sri language modeling toolkit | 1 |
duh et al employed the method of and further explored neural language model for data selection rather than the conventional n-gram language model---duh et al used a recurrent neural language model instead of an ngram-based language model to do the same | 1 |
the various models developed are evaluated using bleu and nist---the penn discourse treebank corpus is the best-known resource for obtaining english connectives | 0 |
cussens and pulman used a symbolic approach employing inductive logic programming , while erbach , barg and walther and fouvry followed a unification-based approach---cussens and pulman describe a symbolic approach which employs inductive logic programming and barg and walther and fouvry follow a unification-based approach | 1 |
word sense disambiguation ( wsd ) is formally defined as the task of computationally identifying senses of a word in a context---word sense disambiguation ( wsd ) is the task of determining the meaning of an ambiguous word in its context | 0 |
figure 1 : our attention-based bilstm model for machine comprehension---in this paper , we present an ensemble of attention-based bilstm models for machine comprehension | 1 |
we use word2vec from as the pretrained word embeddings---in our word embedding training , we use the word2vec implementation of skip-gram | 1 |
both and filice et al use lexical similarities and tree kernels on parse trees---both barrón-cedeño et al and filice et al use lexical similarities and tree kernels on parse trees | 1 |
sentence compression is a task of creating a short grammatical sentence by removing extraneous words or phrases from an original sentence while preserving its meaning---sentence compression is the task of shortening a sentence while preserving its important information and grammaticality | 1 |
topic-dependent modeling has proven to be an effective way to improve the quality of models in speech recognition---topic-dependent modeling was effectively applied in speech recognition to improve the quality of models | 1 |
we train a linear support vector machine classifier using the efficient liblinear package---there are several studies about grammatical error correction using phrase-based statistical machine translation | 0 |
xue et al proposed a word-based translation language model for question retrieval---xue et al proposed a translation-based language model for question retrieval | 1 |
for the english sts subtask , we used regression models combining a wide array of features including semantic similarity scores obtained from various methods---for this subtask combined a wide array of features including similarity scores calculated using knowledge based and corpus based methods in a regression model | 1 |
lin et al , 2012 ) proposed joint model of sentiment and topic which extends the state-of-the-art topic model by adding a sentiment layer , this model is fully unsupervised and it can detect sentiment and topic simultaneously---lin and he proposed joint model of sentiment and topic which extends the state-of-the-art topic model by adding a sentiment layer , this model is fully unsupervised and it can detect sentiment and topic simultaneously | 1 |
we propose a joint learning method for pivot language-based paraphrase generation---in this paper , we propose a joint learning method of two smt systems for paraphrase generation | 1 |
based on uima , it allows for efficient parallel processing of large volumes of text---empirical results show that our model can generate either general or specific responses , and significantly outperform existing methods | 0 |
named entity recognition ( ner ) is a frequently needed technology in nlp applications---named entity recognition ( ner ) is the first step for many tasks in the fields of natural language processing and information retrieval | 1 |
we also used word2vec to generate dense word vectors for all word types in our learning corpus---we use pre-trained word2vec word vectors and vector representations by tilk et al to obtain word-level similarity information | 1 |
word sense disambiguation ( wsd ) is a key enabling-technology that automatically chooses the intended sense of a word in context---in natural language , a word often assumes different meanings , and the task of determining the correct meaning , or sense , of a word in different contexts is known as word sense disambiguation ( wsd ) | 1 |
twitter is a popular microblogging service which provides real-time information on events happening across the world---twitter is a famous social media platform capable of spreading breaking news , thus most of rumour related research uses twitter feed as a basis for research | 1 |
for all our classification experiments , we used the weka toolkit---for all the experiments we used the weka toolkit | 1 |
sentiment analysis is the study of the subjectivity and polarity ( positive vs. negative ) of a text ( cite-p-7-1-10 )---computational detection of sarcasm has become a popular area of natural language processing research in recent years | 0 |
this extra layer seems to be crucial for improving performance on this task---optimizing for both tasks is crucial for high performance | 1 |
semantic parsing is the task of translating natural language utterances into a machine-interpretable meaning representation---these language models were built up to an order of 5 with kneser-ney smoothing using the srilm toolkit | 0 |
we first use bleu score to perform automatic evaluation---we substitute our language model and use mert to optimize the bleu score | 1 |
we use the skipgram model with negative sampling implemented in the open-source word2vec toolkit to learn word representations---to represent the semantics of the nouns , we use the word2vec method which has proven to produce accurate approximations of word meaning in different nlp tasks | 1 |
we have shown that ccg-gtrc as formulated above is weakly equivalent to ccg-std---sun and xu enhanced a cws model by interpolating statistical features of unlabeled data into the crfs model | 0 |
we use bleu scores as the performance measure in our evaluation---we substitute our language model and use mert to optimize the bleu score | 1 |
stance detection is the task of automatically determining from the text whether the author of the text is in favor of , against , or neutral towards a proposition or target---brown clustering is an agglomerative algorithm that induces a hierarchical clustering of words | 0 |
relation extraction is the task of finding relationships between two entities from text---relation extraction is the task of finding relations between entities in text , which is useful for several tasks such as information extraction , summarization , and question answering ( cite-p-14-3-7 ) | 1 |
luo et al , 2017 ) propose an attention based neural network model for predicting charges based on the fact description alone---we use the lstm cell as described in , figure 3 , configured in a bi-directional structure , called bdlstm , shown in figure 4 as the core network in our system | 0 |
the 50-dimensional pre-trained word embeddings are provided by glove , which are fixed during our model training---we use 300 dimensional glove embeddings trained on the common crawl 840b tokens dataset , which remain fixed during training | 1 |
with the algorithms presented in this paper , decoding with pdas is possible for any translation grammar as long as an entropy pruned lm is used---with the algorithms presented in this paper , decoding with pdas is possible for any translation grammar | 1 |
the sentiment analysis is a field of study that investigates feelings present in texts---sentiment analysis is the task of automatically identifying the valence or polarity of a piece of text | 1 |
to overcome these drawbacks , we abolish the syntactic information for the source side and develop a string-to-tree variant of mbots---in this paper , we present an unsupervised methodology for propagating lexical co-occurrence vectors into an ontology | 0 |
quickly , he crawled under the car and unscrewed the drain bolt---based on the distributional hypothesis , we train a skip-gram model to learn the distributional representations of words in a large corpus | 0 |
we report experimental results supporting our intuitions---and the results of experiments support our intuitions | 1 |
our second model is a convolutional neural network with max-over time pooling---the first stage of our classifier is represented by a convolutional neural network | 1 |
notice that the fan-out of a position set math-w-7-15-0-27 does not necessarily coincide with the fan-out of the non-terminal math-w-7-15-0-41 in the underlying lcfrs---given that math-w-5-1-0-300 , a derivation will associate math-w-5-1-0-311 with a set of one-component tuples of strings | 1 |
a problem text is split into fragments where each fragment corresponds to an observation or an update of the quantity of an entity in one or two containers---a problem text is split into fragments where each fragment is represented as a transition between two world states in which the quantities of entities are updated or observed | 1 |
relation extraction is the task of detecting and characterizing semantic relations between entities from free text---relation extraction ( re ) is the task of identifying instances of relations , such as nationality ( person , country ) or place of birth ( person , location ) , in passages of natural text | 1 |
for example , knight and graehl address the problem through cascaded finite state transducers , with explicit representations of the phonetics---for example , knight and graehl employ cascaded probabilistic finite-state transducers , one of the stages modeling the orthographic-to-phonetic mapping | 1 |
for this language model , we built a trigram language model with kneser-ney smoothing using srilm from the same automatically segmented corpus---yan et al presented a variant of lda , dubbed biterm topic model , especially for short text modeling to alleviate the problem of sparsity | 0 |
semantic parsing aims to predict the logic forms of the question given the distant supervision of direct answers---deep semantic parsing aims to map a sentence in natural language into its corresponding formal meaning representation | 1 |
we present a joint model for the important qa tasks of answer sentence ranking and answer extraction---we propose a joint model for answer sentence ranking and answer extraction | 1 |
additionally we outperform the hand coded system on ner in spanish---we apply directly the ner system for spanish | 1 |
nowadays a very popular topic model is latent dirichlet allocation , a generative bayesian hierarchical model---in order to deal with this problem , we perform word alignment in two directions as described in | 0 |
the word embeddings are initialized with 100-dimensions vectors pre-trained by the cbow model---entities shows that our method significantly outperforms state-of-the-art sentence retrieval models | 0 |
we initialize the word embedding matrix with pre-trained glove embeddings---derived from the old-domain parallel corpus , our method recovers a new joint distribution that matches the marginal distributions of the new-domain comparable | 0 |
we use different pretrained word embeddings such as glove 1 and fasttext 2 as the initial word embeddings---we use glove vectors with 100 dimensions trained on wikipedia and gigaword as word embeddings , which we do not optimize during training | 1 |
a prefix verb is a derived word with a bound morpheme as prefix---we have presented a computationally efficient scheme for selecting a subset of data from an unclean generic corpus such as data acquired from the web | 0 |
in this paper , we proposed a probabilistic model for associative anaphora resolution---al-onaizan and knight find that a model mapping directly from english to arabic letters outperforms the phoneme-toletter model | 0 |
in this study , we propose a bilingual document representation learning method for cross-lingual sentiment classification---in this study , we propose a representation learning approach which simultaneously learns vector representations for the texts | 1 |
in this paper , we analyze various training criteria which directly optimize translation quality---in this paper , we investigate methods to efficiently optimize model parameters with respect to machine translation quality | 1 |
hochreiter and schmidhuber , 1997 ) proposed a long short-term memory network , which can be used for sequence processing tasks---to solve the traditional recurrent neural networks , hochreiter and schmidhuber proposed the lstm architecture | 1 |
examples of these are freebase , yago , dbpedia , and google knowledge vault---widely used kbs are dbpedia , freebase , yago , wikidata and the google knowledge vault | 1 |
some prominent systems to map free text to umls include saphire , metamap , indexfinder , and nip---some known systems for mapping free text to umls are saphire , metamap , indexfinder , and nip | 1 |
birke and sarkar introduced trofi , which is considered the first statistical system to identify the metaphorical senses of verbs in a semi-supervised way---birke and sarkar proposed the trope finder system to recognize verbs with non-literal meaning using word sense disambiguation and clustering | 1 |
vector-space models of lexical semantics have been a popular and effective approach to learning representations of word meaning---vector space models of words have been very successful in capturing the semantic and syntactic characteristics of individual lexical items | 1 |
we evaluate the translation quality using the case-insensitive bleu-4 metric---we evaluate our models using the standard bleu metric 2 on the detokenized translations of the test set | 1 |
the training material consists of the minutes edited by the european parliament in several languages , also known as the final text editions---huang et al described and evaluated a bi-gram hmm tagger that utilizes latent annotations | 0 |
the same group has subsequently applied smart to extract entities for a qa system ( cite-p-17-5-1 )---qa systems usually rely on off-the-shelf el systems to extract entities from the question ( cite-p-17-5-1 ) | 1 |
farra et al propose a model of sentence classification in arabic documents---hara et al proposed to use a contextfree grammar to find a properly nested coordination structure | 0 |
we use the popular moses toolkit to build the smt system---our system is built using the open-source moses toolkit with default settings | 1 |
as a baseline for this comparison , we use morfessor categories-map---in our implementation , we use the binary svm light developed by joachims | 0 |
as sentiment analysis in twitter is a very recent subject , it is certain that more research and improvements are needed---sentiment analysis in twitter is a particularly challenging task , because of the informal and “ creative ” writing style , with improper use of grammar , figurative language , misspellings and slang | 1 |
word alignment is the task of identifying corresponding words in sentence pairs---word alignment is the problem of annotating parallel text with translational correspondence | 1 |
we use the stanford corenlp shift-reduce parsers for english , german , and french---the task of automatically assigning the correct meaning to a given word or entity mention in a document is called word sense disambiguation or entity linking , respectively | 0 |
the srilm toolkit was used to build the 5-gram language model---the trigram language model is implemented in the srilm toolkit | 1 |
semantic inference is a key component for advanced natural language understanding---semantic inference is the process by which machines perform reasoning over natural language texts | 1 |
the language model used was a 5-gram with modified kneserney smoothing , built with srilm toolkit---the language model is a 5-gram lm with modified kneser-ney smoothing | 1 |
we train a kn-smoothed 5-gram language model on the target side of the parallel training data with srilm---a kn-smoothed 5-gram language model is trained on the target side of the parallel data with srilm | 1 |
we first extend the study on chinese chunking presented in by raising a set of additional features---a 4-gram language model is trained on the xinhua portion of the gigaword corpus with the srilm toolkit | 0 |
these are open research questions for future work---on the issue , these are still open questions | 1 |
this means in practice that the language model was trained using the srilm toolkit---a 4-gram language model was trained on the target side of the parallel data using the srilm toolkit | 1 |
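Each row above packs two sentences into the `text` field, joined by `---`, with the binary `label` (1 for a matching pair, 0 otherwise) in the trailing column. A minimal sketch of parsing a raw row in this format (the `parse_row` helper and its output field names are illustrative, not part of the dataset):

```python
# Parse one "sentence_a---sentence_b | label |" row of the table above.
def parse_row(row: str) -> dict:
    """Split a raw table row into its two sentences and integer label."""
    # Peel the label and trailing empty segment off the right-hand pipes.
    text, label, _ = (part.strip() for part in row.rsplit("|", 2))
    # The two sentences are joined by the first "---" separator.
    sent_a, sent_b = (s.strip() for s in text.split("---", 1))
    return {"sentence_a": sent_a, "sentence_b": sent_b, "label": int(label)}

row = ("we perform minimum error rate training to tune various feature weights"
       "---we used minimum error rate training to optimize the feature weights | 1 |")
parsed = parse_row(row)
print(parsed["label"])  # 1
```

Splitting on the first `---` only (`split("---", 1)`) guards against a sentence that happens to contain additional dashes.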