| text (string, length 82–736) | label (int64: 0 or 1) |
|---|---|
therefore , we can try to find the transformation that minimizes the earth mover’s distance---as distributions , we propose to minimize their earth mover ’ s distance | 1 |
to train our models , we use svm-light-tk 15 , which enables the use of structural kernels in svm-light---we use svm-light-tk to train our reranking models , 9 which enables the use of tree kernels in svm-light | 1 |
such an analysis reveals that there are two distinct needs for adaptation , corresponding to the different distributions of instances and the different classification functions in the source and the target domains---the emergence of phrase-based statistical machine translation has been one of the major developments in statistical approaches to translation | 0 |
we use long shortterm memory networks to build another semanticsbased sentence representation---in this task , we present m awps ( math word problems , pronounced mops ) , a framework for building an online repository of math word problems | 0 |
decoding is based on a beam search algorithm similar to that of the phrase-based mt decoder---the decoder performs a stack-based search using a beam-search algorithm similar to the one used in pharoah | 1 |
a similar alignment method has been proposed for evaluating machine translation systems---previous research by lavie and denkowski proposed a similar alignment strategy for machine translation evaluation | 1 |
we use skipgram model to train the embeddings on review texts for k-means clustering---in addition to that we use pre-trained embeddings , by training word2vec skip-gram model on wikipedia texts | 1 |
we adapted the moses phrase-based decoder to translate word lattices---summarization can effectively capture the sub-events that have otherwise been shadowed by the long-tail of other dominant sub-events , yielding summaries with considerably better coverage | 0 |
this paper addresses the development and evaluation of pronunciation features for an automated system for scoring spontaneous speech---this paper presents a method for computing features for assessing the pronunciation quality of non-native spontaneous speech , guided by construct | 1 |
however , there are cases in which this can only be done by obscuring the underlying linguistic theory with the tricks needed for implementation---in the remainder of this paper , sec . 2 illustrates the related work , sec . 3 introduces the complexity of learning entailments from examples , sec . 4 describes our models , sec . 6 shows the experimental results | 0 |
in this paper , we will explore the relationship among translation rules---in this short paper , we propose a novel method to model rules as observed generation | 1 |
furthermore , we investigated the role of context-sensitive information such as language model scores in retrieval---we find that the use of context-sensitive translation information such as language models or reordering information , greatly improves retrieval | 1 |
bar and dershowitz addresses the challenge for spanish-english lcs---bar and dershowitz addresses the challenge for spanish-english lcspd | 1 |
motivated by this observation , this paper presents a new web mining scheme for parallel data acquisition---coverage and speed , this paper proposes a new web parallel data mining scheme | 1 |
the berkeley parser was used to obtain syntactic annotations---we use srilm toolkit to build a 5-gram language model with modified kneser-ney smoothing | 0 |
one is described in and uses a margin based criterion for probabilities estimation---one is described in and uses a margin based training criteria for probabilities estimation | 1 |
the message-level embeddings are generated using doc2vec---this model is based on document vectorization using doc2vec | 1 |
we validate the compilation technique by applying the resulting wfst on a call-routing application---we describe the compilation of the boosting model into an wfst and validate the result of this compilation using a call-routing task | 1 |
we initialize word embeddings using the 300-dimension glove vectors supplied by pennington et al and we use the dependency parser from spacy 3 to obtain dependency paths of review sentences---for the sentence matching tasks , we initialized the word embeddings with 50-dimensional glove word vectors pretrained from wikipedia 2014 and gigaword 5 for all model variants | 1 |
participation in this task was used as the vehicle for efforts to integrate and exploit framenet in a comprehensive text processing system---in participating in this task , we integrated the use of framenet in the text parser component of the cl research | 1 |
we perform a number of analyses on how information about individual phonemes is encoded in the mfcc features extracted from the speech signal , and the activations of the layers of the model---in a series of experiments , we show that the lower layers of the model encode accurate representations of the phonemes which can be used in phoneme | 1 |
nallapati et al also employed the typical attention modeling based seq2seq framework , but utilized a trick to control the vocabulary size to improve the training efficiency---rush et al and nallapati et al employed attention-based sequenceto-sequence framework only for sentence summarization | 1 |
twitter consists of a massive number of posts on a wide range of subjects , making it very interesting to extract information and sentiments from them---among them , twitter is the most popular service by far due to its ease for real-time sharing of information | 1 |
for this reason , we propose using entity representations as context for generation---in this work , we provide evidence for the value of entity representations | 1 |
continuous scales are commonly used in psychology and related fields , but are virtually unknown in nlp---continuous scales are viable for use in language evaluation , and offer distinct advantages over discrete scales | 1 |
lengthy real-world texts are often hierarchically organized into chapters , sections , and paragraphs---documents are commonly organized hierarchically into sections and paragraphs | 1 |
translation performance is measured using the automatic bleu metric , on one reference translation---we measure translation performance by the bleu and meteor scores with multiple translation references | 1 |
we compare against state-of-the-art hierarchical translation baselines , based on the joshua and moses translation systems with default decoding settings---etzioni et al presented the knowitall system that also utilizes hyponym patterns to extract class instances from the web | 0 |
a discriminative preference ranking model with a preference for appropriate answers is trained and applied to unseen questions---questions show that a discriminatively trained preference rank model is able to outperform alternative approaches designed for the same task | 1 |
since verbnet uses argument labels that are more consistent across verbs , we are able to demonstrate that these new labels are easier to learn---by taking advantage of verbnet ’ s more consistent set of labels , we can generate more useful role label annotations | 1 |
a 4-gram language model was trained on the monolingual data by the srilm toolkit---a 4-gram language model was trained on the target side of the parallel data using the srilm toolkit from stolcke | 1 |
detection of these unknown words could be accomplished mainly by using a word-segmentation algorithm with a morphological analysis---without using any explicit delimiting character , detection of unknown words could be accomplished mainly by using a word-segmentation algorithm with a morphological analysis | 1 |
the targetside 4-gram language model was estimated using the srilm toolkit and modified kneser-ney discounting with interpolation---semantic role labeling ( srl ) consists of finding the arguments of a predicate and labeling them with semantic roles ( cite-p-9-1-5 , cite-p-9-3-0 ) | 0 |
we use glove vectors for word embeddings and one-hot vectors for pos-tag and dependency relations in each individual model---we use the glove algorithm to obtain 300-dimensional word embeddings from a union of these corpora | 1 |
with the best embeddings , our system was ranked third in the scenario 1 with the micro f1 score of 0.38---with the best embeddings , our system was ranked third in the scenario | 1 |
for the tree-based system , we applied a 4-gram language model with kneserney smoothing using srilm toolkit trained on the whole monolingual corpus---we built a 5-gram language model on the english side of europarl and used the kneser-ney smoothing method and srilm as the language model toolkit | 1 |
our translation model is implemented as an n-gram model of operations using the srilm toolkit with kneser-ney smoothing---we also use a 4-gram language model trained using srilm with kneser-ney smoothing | 1 |
we derive 100-dimensional word vectors using word2vec skip-gram model trained over the domain corpus---for the out-of-domain testsets , we obtained statistically significant overall improvements , but we were hampered by the small sizes of the testsets | 0 |
a subjective user evaluation shows that the consistency between the semantic representations and the learned realizations is high and that the naturalness of the realizations is higher than a hand-crafted baseline---user evaluation indicates that the consistency between the semantic representations and the learned realizations is high and that the naturalness of the realizations is significantly higher than a hand-crafted baseline | 1 |
zens and ney show that itg constraints yield significantly better alignment coverage than the constraints used in ibm statistical machine translation models on both german-english and french-english---word segmentation is a fundamental task for processing most east asian languages , typically chinese | 0 |
the srilm toolkit was used to build the trigram mkn smoothed language model---the language model is trained and applied with the srilm toolkit | 1 |
this approach was competitive with classification with svm using raw text and topic vectors---topic based classifier system is shown to be competitive with existing text classification techniques | 1 |
coreference resolution is the task of determining which mentions in a text refer to the same entity---coreference resolution is the task of partitioning a set of entity mentions in a text , where each partition corresponds to some entity in an underlying discourse model | 1 |
some of the recent works that have employed pre-trained language models include ulmfit , elmo , glomo , bert and openai transformer---one of the recent feature-based approaches is elmo which is based on the use of bidirectional lstm models | 1 |
we also used word2vec to generate dense word vectors for all word types in our learning corpus---word representations to learn word embeddings from our unlabeled corpus , we use the gensim im-plementation of the word2vec algorithm | 1 |
the model was built using the srilm toolkit with backoff and good-turing smoothing---a 4-grams language model is trained by the srilm toolkit | 1 |
coreference resolution is the task of determining whether two or more noun phrases refer to the same entity in a text---the way the dataset used in the trac 2018 shared task was built is described in | 0 |
garg and henderson extended this model to use a restricted boltzmann machine representation---garg and henderson used rbm in a similar approach to dependency parsing | 1 |
in addition , we find that for the lda based adaptation scheme , adding more content words and increasing the number of topics can further improve the performance significantly---we used the scikit-learn library the svm model | 0 |
in this paper , we explore an alternate semi-supervised approach which does not require additional labeled data---in this paper , we demonstrate a general semi-supervised approach for adding pre-trained context | 1 |
we present an algorithm that improves user characterization by collecting and exploiting such commonsense knowledge---while we are the first to exploit commonsense knowledge in user characterization | 1 |
the joint nature provides crucial benefits by allowing situated cues , such as the set of visible objects , to directly influence learning---for the classifiers we use the scikit-learn machine learning toolkit | 0 |
parameters are initialized with glorot initialization---for the tree-based system , we applied a 4-gram language model with kneserney smoothing using srilm toolkit trained on the whole monolingual corpus | 0 |
automatic and interactive statistical machine translation ( smt )---hot toolkit for statistical machine translation ( smt ) | 1 |
we implement the pbsmt system with the moses toolkit---we use the moses software package 5 to train a pbmt model | 1 |
word sense disambiguation ( wsd ) is the task of determining the correct meaning ( “ sense ” ) of a word in context , and several efforts have been made to develop automatic wsd systems---word sense disambiguation ( wsd ) is the task of determining the meaning of a word in a given context | 1 |
coreference resolution is a challenging task , that involves identification and clustering of noun phrases mentions that refer to the same real-world entity---coreference resolution is the task of determining whether two or more noun phrases refer to the same entity in a text | 1 |
semantic role labeling ( srl ) is the task of identifying the semantic arguments of a predicate and labeling them with their semantic roles---we use glove vectors with 200 dimensions as pre-trained word embeddings , which are tuned during training | 0 |
pun is a figure of speech that consists of a deliberate confusion of similar words or phrases for rhetorical effect , whether humorous or serious---a pun is a form of wordplay , which is often profiled by exploiting polysemy of a word or by replacing a phonetically similar sounding word for an intended humorous effect | 1 |
this is then used to create a word-context matrix from which row vectors can be used to measure word similarity---to construct a novel word-context matrix , which is further weighted and factorized using truncated svd to generate low-dimension word embedding vectors | 1 |
the word embeddings are initialized with 100-dimensions vectors pre-trained by the cbow model---the feature extractor φ is a multi-layer perceptron over token embeddings , initialized by pre-trained word2vec vectors | 1 |
our word embeddings is initialized with 100-dimensional glove word embeddings---we use glove vectors with 100 dimensions trained on wikipedia and gigaword as word embeddings | 1 |
in this paper , we proposed an approach to represent rare words by sparse linear combinations of common ones---we propose an approach to represent uncommon words ’ embeddings by a sparse linear combination of common ones | 1 |
in this paper , we formulate phrase chunking as a joint segmentation and labeling task---we have presented a novel approach to phrase chunking by formulating it as a joint segmentation and labeling problem | 1 |
using our proposed method , we acquired 217.8 million japanese entailment pairs with 80 % precision and 138.1 million non-trivial pairs with 70 % precision---we acquired 138 . 1 million pattern pairs with 70 % precision with such non-trivial lexical substitution as “ use y to distribute | 1 |
negation is a complex phenomenon present in all human languages , allowing for the uniquely human capacities of denial , contradiction , misrepresentation , lying , and irony ( cite-p-18-3-7 )---negation is a complex phenomenon present in all human languages , allowing for the uniquely human capacities of denial , contradiction , misrepresentation , lying , and irony ( horn and wansing , 2015 ) | 1 |
over the last few years , distributed representation models based on neural networks such as word2vec and glove have been of much importance in speech and natural language processing---vector based models such as word2vec , glove and skip-thought have shown promising results on textual data to learn semantic representations | 1 |
keyphrase extraction is the task of extracting a selection of phrases from a text document to concisely summarize its contents---täckström et al use cross-lingual word clusters to show transfer of linguistic structure | 0 |
we used the srilm toolkit to generate the scores with no smoothing---we used the srilm toolkit to simulate the behavior of flexgram models by using count files as input | 1 |
coreference resolution is the task of identifying all mentions which refer to the same entity in a document---coreference resolution is the task of partitioning the set of mentions of discourse referents in a text into classes ( or ‘ chains ’ ) corresponding to those referents ( cite-p-12-3-14 ) | 1 |
pereira and lin use syntactic features in the vector definition---curran and lin use syntactic features in the vector definition | 1 |
semi-supervised learning is a machine learning approach that utilizes large amounts of unlabeled data , combined with a smaller amount of labeled data , to learn a target function---in this work , we present a method to identify the attitude of participants in an online discussion | 0 |
similarly , our participation , which achieved the third-best postition , used features that try to describe a comment in the context of the entire comment thread , focusing on user interaction---scmil presently deals with spelling corrections | 0 |
our ee framework is accompanied by a web-based user interface for the rapid development of event grammars and visualization of matches---text classification is a fundamental problem in natural language processing ( nlp ) | 0 |
additionally , a back-off 2-gram model with goodturing discounting and no lexical classes was built from the same training data , using the srilm toolkit---a 4-gram language model was trained on the target side of the parallel data using the srilm toolkit from stolcke | 1 |
to reduce error propagation , we use beam-search and scheduled sampling , respectively---to achieve efficient parsing , we use a beam search strategy like the previous methods | 1 |
furthermore , we train a 5-gram language model using the sri language toolkit---we further used a 5-gram language model trained using the srilm toolkit with modified kneser-ney smoothing | 1 |
cite-p-22-1-6 demonstrated that event schemas can be automatically induced from text corpora---cite-p-22-1-6 showed that event schemas can also be induced automatically from text corpora | 1 |
we evaluate the translation quality using the case-insensitive bleu-4 metric---we compute the interannotator agreement in terms of the bleu score | 1 |
coreference resolution is the process of linking together multiple expressions of a given entity---coreference resolution is the problem of identifying which mentions ( i.e. , noun phrases ) refer to which real-world entities | 1 |
zhou and xu use a bidirectional wordlevel lstm combined with a conditional random field for semantic role labeling---the feature weights are tuned to optimize bleu using the minimum error rate training algorithm | 0 |
experiments show that our system can outperform the state-of-art systems---experiments show that our system outperforms the state-of-art systems | 1 |
we employed the glove as the word embedding for the esim---we used the glove embeddings for these features | 1 |
the feature weights λ m are tuned with minimum error rate training---the system is based on the transformer implementation in opennmt-py | 0 |
we link each transliteration hypothesis to an english kb using a languageindependent entity linker---we apply a state-of-the-art language-independent entity linker to link each transliteration hypothesis to an english kb | 1 |
the word-embeddings were initialized using the glove 300-dimensions pre-trained embeddings and were kept fixed during training---the word vectors were initialized with the 300-dimensional glove embeddings , and were also updated during training | 1 |
the learning algorithm used is a variation of the winnow update rule incorporated in snow , a multi-class classifier that is specifically tailored for large scale learning tasks---the learning algorithm used is a variation of the winnow update rule incorporated in snow , a multi-class classifier that is tailored for large scale learning tasks | 1 |
we work with the phrase-based smt framework as the baseline system---we used a phrase-based smt model as implemented in the moses toolkit | 1 |
kalchbrenner et al proposed a dynamic convolution neural network with multiple layers of convolution and k-max pooling to model a sentence---kalchbrenner et al showed that their dcnn for modeling sentences can achieve competitive results in this field | 1 |
an in-house language modeling toolkit was used to train the 4-gram language models with modified kneser-ney smoothing over the web-crawled data---the previous review-mining systems most relevant to our work are and | 0 |
the pre-processed monolingual sentences will be used by srilm or berkeleylm to train a n-gram language model---ontology learning from texts aims to automatically build or enriching a set of logical statements out of linguistic evidence , and is closely related to the field of information extraction | 0 |
v ( math-w-1-3-0-24 ) achieves 90.1~ average precision/recall for sen----the penn discourse treebank is another annotated discourse corpus | 0 |
lexflow is a web-based application that enables the cooperative and distributed management of computational lexicons---study is among the first ones to perform chinese word segmentation and pos tagging by deep learning | 0 |
blitzer et al investigate domain adaptation for sentiment classifiers , focusing on online reviews for different types of products---drezde et al applied structural correspondence learning to the task of domain adaptation for sentiment classification of product reviews | 1 |
in section 4 , through experiments on multiple real-world datasets , we observe that sictf is not only more accurate than kb-lda but also significantly faster with a speedup of 14x---in this paper , we propose adversarial multi-criteria learning for cws by fully exploiting the underlying shared knowledge | 0 |
birke and sarkar clustered literal and figurative contexts using a wordsense-disambiguation approach---a number of convolutional neural network , recurrent neural network , and other neural architectures have been proposed for relation classification | 0 |
sentiment analysis is the natural language processing ( nlp ) task dealing with the detection and classification of sentiments in texts---sentiment analysis is a nlp task that deals with extraction of opinion from a piece of text on a topic | 1 |
in the official evaluation , our system achieves an f1 score of 26.90 % in overall performance on the blind test set---in this paper , we present the lth coreference solver used in the closed track of the conll 2012 shared task | 0 |
dependency parsing is a basic technology for processing japanese and has been the subject of much research---zarrieß and kuhn argue that multiword expressions can be reliably detected in parallel corpora by using dependency-parsed , word-aligned sentences | 0 |
we present epireader , a novel model for machine comprehension of text---we presented the novel epireader framework for machine comprehension | 1 |
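Each row above packs two sentences joined by `---` and a trailing `| <label> |` cell, where label 1 marks a matching (paraphrase/same-citation) pair and 0 a mismatched one. A minimal parsing sketch, assuming exactly that row layout (the `parse_row` helper and the sample row are illustrative, not part of the dataset's official loader):

```python
import re

def parse_row(row: str):
    """Split one table row into (sentence_a, sentence_b, label).

    Assumes the format shown above: two sentences joined by '---',
    followed by a final '| <label> |' cell where label is 0 or 1.
    """
    # Lazy quantifiers: stop at the first '---' and at the label cell.
    m = re.match(r"^(.*?)---(.*?)\|\s*([01])\s*\|\s*$", row)
    if m is None:
        raise ValueError(f"unrecognized row format: {row!r}")
    return m.group(1).strip(), m.group(2).strip(), int(m.group(3))

# Example usage on a row from the table above:
row = ("the berkeley parser was used to obtain syntactic annotations---"
       "we use srilm toolkit to build a 5-gram language model "
       "with modified kneser-ney smoothing | 0 |")
sent_a, sent_b, label = parse_row(row)
```

Rows containing extra `|` characters inside the sentences would need a more careful split; none of the rows shown above do.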