| text (string, length 82–736) | label (int64, 0 or 1) |
|---|---|
the state of the art suggests that the use of heterogeneous measures can improve the evaluation reliability---that suggest the convenience of using heterogeneous measures to corroborate evaluation | 1 |
we have shown that a general conversation summarization approach can achieve results on par with state-of-the-art systems that rely on features specific to more focused domains---in common , we can achieve competitive results with state-of-the-art systems that rely on more domain-specific features | 1 |
we created a 10 billion word topic-diverse web corpus by spidering websites from a set of seed urls---on the remaining tweets , we trained a 10-gram word length model , and a 5-gram language model , using srilm with kneyser-ney smoothing | 0 |
for building our ap e b2 system , we set a maximum phrase length of 7 for the translation model , and a 5-gram language model was trained using kenlm---in addition , we use an english corpus of roughly 227 million words to build a target-side 5-gram language model with srilm in combination with kenlm | 1 |
this paper presents a simple and effective method that retrieves translation pieces to guide nmt for narrow domains---in this paper , we propose a simple , fast , and effective method for recalling previously seen translation examples and incorporating them into the nmt | 1 |
our trigram word language model was trained on the target side of the training corpus using the srilm toolkit with modified kneser-ney smoothing---system tuning was carried out using minimum error rate training optimised with k-best mira on a held out development set | 0 |
gimpel et al and foster et al annotated english microblog posts with pos tags---gimpel et al and foster et al annotate english microblog posts with pos tags | 1 |
therefore , word segmentation is a crucial first step for many chinese language processing tasks such as syntactic parsing , information retrieval and machine translation---word segmentation is a fundamental task for chinese language processing | 1 |
wordnet is a byproduct of such an analysis---although wordnet is a fine resources , we believe that ignoring other thesauri is a serious oversight | 1 |
we tune model weights using minimum error rate training on the wmt 2008 test data---identification of user intent also has important implications in building intelligent conversational qa systems | 0 |
previous approaches have used a hand-crafted finite set of features to represent the unbounded parse history---previous approaches have used a hand-crafted finite set of features to represent the parse history | 1 |
as is the case with the multi-task system , we apply the cross entropy loss function and the adam optimizer to train the energybased network---for all tasks , we use the adam optimizer to train models , and the relu activation function for fast calculation | 1 |
probabilistic word segmentation can handle this kind of ambiguity successfully---word segmentation can handle this kind of ambiguity successfully | 1 |
the skip-gram model implemented by word2vec learns vectors by predicting context words from targets---for each of these productions , a supportvector machine classifier is trained using string similarity as the kernel | 0 |
we use sri language model toolkit to train a 5-gram model with modified kneser-ney smoothing on the target-side training corpus---for all data sets , we trained a 5-gram language model using the sri language modeling toolkit | 1 |
for the second issue , we propose a technology to model the combination task by considering both sides' syntactic structure information---for the first issue , we propose a novel non-isomorphic translation framework to capture more non-isomorphic structure | 1 |
we train a 4-gram language model on the xinhua portion of the english gigaword corpus using the srilm toolkits with modified kneser-ney smoothing---we used 5-gram models , estimated using the sri language modeling toolkit with modified kneser-ney smoothing | 1 |
we rely on conditional random fields 1 for predicting one label per reference---we use conditional random fields for sequence labelling | 1 |
peters et al proposed the embeddings from language models , which obtains contextualized word representations---we use k-batched mira to tune the weights for all the features | 0 |
in recent years , with the availability of human aligned training data , supervised methods ( e.g . the itg aligner ( cite-p-11-1-8 ) ) have become increasingly popular---formally , negation focus is defined as the special part in the sentence , which is most prominently or explicitly negated by a negative expression | 0 |
li et al , 2004 , hybrid , or based on phonetic , eg---in this section , we generalize the ideas regarding network-based dsms presented in , for the case of more complex structures | 0 |
all other parameters are initialized with glorot normal initialization---all parameters are initialized using glorot initialization | 1 |
these language models were built up to an order of 5 with kneser-ney smoothing using the srilm toolkit---we trained a 4-gram language model with kneser-ney smoothing and unigram caching using the sri-lm toolkit | 1 |
we show a relative reduction of alignment error rate of about 38 %---relation extraction is the task of automatically detecting occurrences of expressed relations between entities in a text and structuring the detected information in a tabularized form | 0 |
the release of the penn discourse treebank has advanced the development of english discourse relation recognition---recent discourse research often make use of the large-scaled penn discourse treebank | 1 |
coreference resolution is a well known clustering task in natural language processing---coreference resolution is the process of determining whether two expressions in natural language refer to the same entity in the world | 1 |
for language model , we use a trigram language model trained with the srilm toolkit on the english side of the training corpus---we train a 4-gram language model on the xinhua portion of english gigaword corpus by srilm toolkit | 1 |
phrase-based models are a widely-used approach for statistical machine translation---phrase-based translation models are widely used in statistical machine translation | 1 |
the language model pis implemented as an n-gram model using the srilm-toolkit with kneser-ney smoothing---we apply sri language modeling toolkit to train a 4-gram language model with kneser-ney smoothing | 1 |
see for an overview of estimation techniques for n-gram models---see chen and goodman for a detailed presentation of these smoothing methods | 1 |
to this end , we propose an unsupervised approach to clean the bilingual data---we propose an unsupervised method to clean bilingual data | 1 |
further , we apply a 4-gram language model trained with the srilm toolkit on the target side of the training corpus---we further used a 5-gram language model trained using the srilm toolkit with modified kneser-ney smoothing | 1 |
we also propose a new method to distill an ensemble of 20 greedy parsers into a single one to overcome annotation noise without sacrificing efficiency---without sacrificing computational efficiency , we propose a new method to distill an ensemble of 20 transition-based parsers into a single one | 1 |
we used l2-regularized logistic regression classifier as implemented in liblinear---semantic role labeling ( srl ) is a form of shallow semantic parsing whose goal is to discover the predicate-argument structure of each predicate in a given input sentence | 0 |
our research aims to learn the prototypical goal-acts for locations using a text corpus---locations coupled with predefined goal-acts , we want to learn the goal-acts for new locations | 1 |
another assistant for an authoring environment was developed in the a-propos project---another authoring assistant was developed in the a-propos project | 1 |
using multi-layered neural networks to learn word embeddings has become standard in nlp---distributed word representations have been shown to improve the accuracy of ner systems | 1 |
it has been empirically shown that word embeddings can capture semantic and syntactic similarities between words---it has been observed that many lexical relationships can be modelled as vector translations in a word embedding space | 1 |
each system is optimized using mert with bleu as an evaluation measure---the decoding weights are optimized with minimum error rate training to maximize bleu scores | 1 |
we trained a 4-gram language model on this data with kneser-ney discounting using srilm---in our experiments , we used the srilm toolkit to build 5-gram language model using the ldc arabic gigaword corpus | 1 |
we propose a framework to model human comprehension of discourse connectives---we propose a new framework to model the interpretation of discourse relations | 1 |
we use distributed word vectors trained on the wikipedia corpus using the word2vec algorithm---we preinitialize the word embeddings by running the word2vec tool on the english wikipedia dump | 1 |
we have not yet been able to combine the benefits of both an hbm and prosody information---and while we have not been able to usefully employ both prosody and the hbm technique | 1 |
our mt decoder is a proprietary engine similar to moses---our system is built using the open-source moses toolkit with default settings | 1 |
we use the mallet implementation of conditional random fields---as a classifier , we choose a first-order conditional random field model | 1 |
in each plot , the green solid line indicates the best accuracy found so far , while the dotted orange line shows accuracy at each trial---coreference resolution is the task of determining when two textual mentions name the same individual | 0 |
question answering ( qa ) is a challenging task that draws upon many aspects of nlp---conceptually , their model implements a co-clustering assumption closely related to singular value decomposition for more on this perspective ) | 0 |
in this paper , we introduce a uniform framework for chunking task based on support vector machines ( svms )---named entity recognition is a challenging task that has traditionally required large amounts of knowledge in the form of feature engineering and lexicons to achieve high performance | 0 |
tai et al , and le and zuidema extended sequential lstms to tree-structured lstms by adding branching factors---tai et al propose a tree-lstm model which captures syntactic properties in text | 1 |
we use the adam optimizer for the gradient-based optimization---we use a binary cross-entropy loss function , and the adam optimizer | 1 |
hammarström and borin presented a literature survey on unsupervised learning of morphology , including methods for learning morphological segmentation---hammarström and borin give an extensive overview of stateof-the-art unsupervised learning of morphology | 1 |
what's more , it is generally difficult to understand a topic only from the multinomial distribution ( cite-p-21-1-16 )---especially , the character-based tagging method which was proposed by nianwen xue achieves great success in the second international chinese word segmentation bakeoff in 2005 | 0 |
feature weights were set with minimum error rate training on a tuning set using bleu as the objective function---system tuning was carried out using minimum error rate training optimised with k-best mira on a held out development set | 1 |
in the following , we will call these the itg constraints---these models are combined in a log-linear framework with different weights | 0 |
this strategy makes an additional copy of the attention mechanism and finetunes only this small set of parameters---finetuning strategy requires the model to have an additional set of parameters relevant to the attention mechanism | 1 |
the thesaurus 4 used in this work was automatically constructed by lin---dependency parse correction , attachments in an input parse tree are revised by selecting , for a given dependent , the best governor from within a small set of candidates | 0 |
based on the derived hierarchy , we can generate a hierarchical organization of consumer reviews as well as consumer opinions on the aspects---based on the derived hierarchy , we generate a hierarchical organization of consumer reviews on various product aspects | 1 |
we participated in the english sts and interpretable similarity subtasks---we described our submissions to the semantic text similarity task | 1 |
currently , recurrent neural network based models are widely used on natural language processing tasks for excellent performance---with the advent of recurrent neural network based language models , some rnn based nlg systems have been proposed | 1 |
from a raw corpus , a small set of cue-phrase-based patterns were used to collect discourse instances---cue-phrase-based patterns were utilized to collect a large number of discourse instances | 1 |
this cnn-based architecture accepts multiple word embeddings as inputs---we derive 100-dimensional word vectors using word2vec skip-gram model trained over the domain corpus | 0 |
for the feature-based system we used logistic regression classifier from the scikit-learn library---we use the logistic regression classifier as implemented in the skll package , which is based on scikitlearn , with f1 optimization | 1 |
it is common for topic models to treat documents as bags-of-words , ignoring any internal structure---topic models make the bag-of-words assumption that words are generated independently , and so ignore potentially useful information about word order | 1 |
we initialize our word vectors with 300-dimensional word2vec word embeddings---we embed all words and characters into low-dimensional real-value vectors which can be learned by language model | 1 |
our experimental results on the 20 debates for the republican primary election show that when combined with word deviations and mention percentages , most persuasive argumentation features give superior performance compared to the baselines---and we assess the full potential of the joint segmentation and dependency parsing model | 0 |
relation extraction ( re ) is the process of generating structured relation knowledge from unstructured natural language texts---relation extraction is a crucial task in the field of natural language processing ( nlp ) | 1 |
deep learning with knowledge transfer has been previously applied to sentiment analysis in the context of domain adaptation and cross-lingual applications---deep learning has been considered as a generic solution to domain adaptation , and transfer learning problems | 1 |
information extraction ( ie ) is a fundamental technology for nlp---information extraction ( ie ) is the task of extracting information from natural language texts to fill a database record following a structure called a template | 1 |
to find the referent entity of a name mention , our method combines the evidences from all the three distributions p ( e ) , p ( s|e ) and p ( c|e )---we developed a similar approach using dependency structures rather than phrase structure trees , which , moreover , extends bare pattern matching with machine learning techniques | 0 |
our system for the english sts subtask used regression models that combined a wide array of features including semantic similarity scores obtained with various methods---for this subtask combined a wide array of features including similarity scores calculated using knowledge based and corpus based methods in a regression model | 1 |
a 5-gram lm was trained using the srilm toolkit 5 , exploiting improved modified kneser-ney smoothing , and quantizing both probabilities and back-off weights---the targetside 4-gram language model was estimated using the srilm toolkit and modified kneser-ney discounting with interpolation | 1 |
we first removed all sgml mark-up , and performed sentence-breaking and tokenization using the stanford corenlp toolkit---we used the stanford corenlp toolkit for word segmentation , part-of-speech tagging , and syntactic parsing | 1 |
gaussian processes are a bayesian non-parametric machine learning framework considered the stateof-the-art for regression---gaussian processes is a bayesian non-parametric machine learning framework based on kernels for regression and classification | 1 |
we used minimum error rate training to optimize the feature weights---we used the pharaoh decoder for both the minimum error rate training and test dataset decoding | 1 |
our joint model is novel in its choice of tasks and its features for capturing cross-task interactions---that is novel in terms of the choice of tasks and the features used to capture cross-task interactions | 1 |
egges et al provided virtual characters with conversational emotional responsiveness---egges et al have provided virtual characters with conversational emotional responsiveness | 1 |
language models were built using the sri language modeling toolkit with modified kneser-ney smoothing---the language models were created using the srilm toolkit on the standard training sections of the ccgbank , with sentenceinitial words uncapitalized | 1 |
word sense disambiguation ( wsd ) is a particular problem of computational linguistics which consists in determining the correct sense for a given ambiguous word---word sense disambiguation ( wsd ) is a fundamental task and long-standing challenge in natural language processing ( nlp ) | 1 |
in addition , we freely provide an annotated corpus for studying these dimensions---in addition , we provide a corpus with 320 arguments , annotated for all 15 dimensions | 1 |
we propose a minimalistic model architecture based on gated recurrent unit combined with an attention mechanism---a 4-gram language model was trained on the target side of the parallel data using the srilm toolkit from stolcke | 0 |
for both languages , we used the srilm toolkit to train a 5-gram language model using all monolingual data provided---we further used a 5-gram language model trained using the srilm toolkit with modified kneser-ney smoothing | 1 |
semantic parsing is the task of mapping natural language to machine interpretable meaning representations---semantic parsing is the task of translating natural language utterances into a machine-interpretable meaning representation ( mr ) | 1 |
the trigram language model is implemented in the srilm toolkit---the models are built using the sri language modeling toolkit | 1 |
sentiment analysis is the task of automatically identifying the valence or polarity of a piece of text---sentiment analysis is a nlp task that deals with extraction of opinion from a piece of text on a topic | 1 |
our own implementation will be made available to other researchers as open source---for other researchers who wish to use our indexing machinery , it has been made available as free software | 1 |
most famously , the paradise framework learns from data a linear regression model that predicts dialogue-level user satisfaction from various objective characteristics of a dialogue that concern task success and dialogue costs---a well-known approach to dialogue system evaluation , paradise , predicts user satisfaction from task completion success and from a number of computable parameters related to dialogue cost | 1 |
to that end , we assume certain definitions that extend the textual entailment paradigm to the lexical level---we proposed definitions for entailment at sub-sentential levels , addressing a gap in the textual entailment framework | 1 |
in query-focused summarization , the task is to produce a summary as an answer to a given query---in query-focused summarization , the task is to produce a summary | 1 |
a 5-gram language model with kneser-ney smoothing was trained with srilm on monolingual english data---we also use a 4-gram language model trained using srilm with kneser-ney smoothing | 1 |
we started with the feature set described in vajjala and lõo and added a few additional features , primarily lexical richness features from lu---we started with the feature set described in vajjala and lõo and added more features to the list | 1 |
word alignment is a central problem in statistical machine translation ( smt )---community question answering ( cqa ) is an evolution of a typical qa setting | 0 |
we used the moses toolkit for performing statistical machine translation---we used the moses toolkit to build mt systems using various alignments | 1 |
sri language modeling toolkit was employed to train 5-gram english and japanese lms on the training set---a 3-gram language model was trained from the target side of the training data for chinese and arabic , using the srilm toolkit | 1 |
this feature , usually called lexical smoothing , has been used in phrase-based systems---in this paper we present a novel graph-based wsd algorithm which uses the full graph of wordnet efficiently , performing significantly better that previously published approaches in english | 0 |
in this work , we present our approach for sentiment classification which uses a combination of esa and naive bayes classifier---in this paper , we model the sentiment classification using dsms based on explicit topic models ( cite-p-9-1-2 ) , which incorporate correlation information from a corpus | 1 |
for this task , we use the widely-used bleu metric---conditional random fields constitute a widely-used and effective approach for supervised structure learning tasks involving the mapping between complex objects such as strings and trees | 0 |
coreference resolution is the task of determining whether two or more noun phrases refer to the same entity in a text---coreference resolution is the task of determining which mentions in a text refer to the same entity | 1 |
finally , based on recent results in text classification , we also experiment with a neural network approach which uses a long-short term memory network---relation extraction ( re ) is the task of recognizing the assertion of a particular relationship between two or more entities in text | 0 |
nonetheless , compressive methods are unable to merge the related facts from different sentences---compressive summarization models can not merge facts from different source sentences , because | 1 |
ne recognition is essential for finding possible answers from documents---ne recognition plays an essential role in information | 1 |