text (string, lengths 82–736) | label (int64, 0 or 1) |
|---|---|
for the cluster- based method , we use word2vec 2 which provides the word vectors trained on the google news corpus---the final smt system performance is evaluated on a uncased test set of 3071 sentences using the bleu , nist and meteor | 0 |
to this end , we use first-and second-order conditional random fields---we use the mallet implementation of conditional random fields | 1 |
we evaluated the translation quality using the bleu-4 metric---we measure the translation quality with automatic metrics including bleu and ter | 1 |
moreover , similar documents described in neighboring time periods are assumed to share similar storyline distributions---similar documents described in neighboring time periods should share similar storyline distributions | 1 |
niu et al proposed to convert a dependency treebank to a constituency one by using a parser trained on a constituency treebank to generate k-best lists for sentences in the dependency treebank---instead of using conversion rules , niu et al proposed to convert a dependency treebank to a constituency one by using a parser trained on a constituency treebank to generate kbest lists for sentences in the dependency treebank | 1 |
word embedding we use the word2vec toolkit to pre-train word embeddings on the whole english wikipedia dump---the dataset was parsed using the stanford parser | 0 |
log management are two integral components of interactive dialog systems---and dialog management are two integral components of a dialog system | 1 |
in this paper we will consider sentence-level approximations of the popular bleu score---recent studies focuses on learning word embeddings for specific tasks , such as sentiment analysis and dependency parsing | 0 |
the weights in the log-linear model are tuned by minimizing bleu loss through mert on the dev set for each language pair---in this paper we presented a supervised , knowledge-intensive interpretation model which takes advantage of new linguistic information from english | 0 |
rhetorical structure theory has contributed a great deal to the understanding of the discourse of written documents---rhetorical structure theory is one of the most influential approaches for document-level discourse analysis | 1 |
in this paper , we show that the a ∗ search based msa algorithm performs better than existing algorithms for combining multiple captions---in this paper , we describe an improved method for combining partial captions into a final output | 1 |
the release of large corpora with semantic annotations like the framenet and propbank have enabled the training and testing of classifiers for automated annotation models---the availability of large scale semantic lexicons , such as framenet , allowed the adoption of a wide family of learning paradigms in the automation of semantic parsing | 1 |
sentiment classification is the fundamental task of sentiment analysis ( cite-p-15-3-11 ) , where we are to classify the sentiment of a given text---sentiment classification is the task of labeling a review document according to the polarity of its prevailing opinion ( favorable or unfavorable ) | 1 |
we train the models for 20 epochs using categorical cross-entropy loss and the adam optimization method---we used a categorical cross entropy loss function and adam optimizer and trained the model for 10 epochs | 1 |
we used a 5-gram language model with modified kneser-ney smoothing implemented using the srilm toolkit---we built a 5-gram language model on the english side of europarl and used the kneser-ney smoothing method and srilm as the language model toolkit | 1 |
bilingual lexicons are fundamental resources in multilingual natural language processing tasks such as machine translation , cross-language information retrieval or computerassisted translation---bilingual lexicons are an important resource in multilingual natural language processing tasks such as statistical machine translation and cross-language information retrieval | 1 |
we used the svm implementation of scikit learn---cass-swe operates on part-of-speech annotated texts using coarse-grained semantic information , and produces output that reflects this information | 0 |
burkett and klein propose a reranking based method for joint constituent parsing of bitext , which can make use of structural correspondence features in both languages---burkett and klein induce node-alignments of syntactic trees with a log-linear model , in order to guide bilingual parsing | 1 |
we use the open source moses phrase-based mt system to test the impact of the preprocessing technique on translation quality---we use the moses smt framework and the standard phrase-based mt feature set , including phrase and lexical translation probabilities and a lexicalized reordering model | 1 |
we propose a co-training approach to making use of unlabeled chinese data---relation extraction ( re ) is the task of recognizing the assertion of a particular relationship between two or more entities in text | 0 |
kilicoglu and bergler apply a combination of lexical and syntactic methods , improving on previous results and showing that quantifying the strength of a hedge can be beneficial for classification of speculative sentences---kilicoglu and bergler proposed a linguistically motivated approach based on syntactic information to semi-automatically refine a list of hedge cues | 1 |
in this paper we propose a neural network based insertion position selection model to reduce the computational cost by selecting the appropriate insertion positions---we have proposed a neural network based insertion position selection model to reduce the computational cost of the decoding | 1 |
we created 5-gram language models for every domain using srilm with improved kneserney smoothing on the target side of the training parallel corpora---we use srilm toolkit to train a trigram language model with modified kneser-ney smoothing on the target side of training corpus | 1 |
work by koppel et al , tsur and rappoport wong and dras , and tetreault et al set the stage for much of the recent research efforts---the work of koppel et al set the stage for much of the nli research in the past few years | 1 |
we used the phrase-based model moses for the experiments with all the standard settings , including a lexicalized reordering model , and a 5-gram language model---we used the moses mt toolkit with default settings and features for both phrase-based and hierarchical systems | 1 |
finally , we explore the potential of different sr-based indicators of document relevance---we explore the potential of different indicators of document relevance that are based on semantic relatedness | 1 |
we treat the text summarization problem as maximizing a submodular function under a budget constraint---wmfvec is the first sense similarity measure based on latent semantics of sense definitions | 0 |
the trigram language model is implemented in the srilm toolkit---the srilm toolkit was used to build the trigram mkn smoothed language model | 1 |
the training material consists of the minutes edited by the european parliament in several languages , also known as the final text editions---the training material consists of the summary edited by the european parliament in several languages , which is also known as the final text editions | 1 |
our interactive user interface helps researchers to better understand the capabilities of the different approaches and can aid qualitative analyses---enabling the exploration of individual models , our user interface also allows researchers to compare different attention | 1 |
we present an event extraction framework to detect event mentions and extract events from the document-level financial news---we present a framework named dcfee which can extract document-level events from announcements | 1 |
we used adam for optimization of the neural models---we used the adam optimization function with default parameters | 1 |
for this class of features , we used the hypernym taxonomy of wordnet---we created a highly related test set using the synonyms in wordnet | 1 |
mikolov et al uses a continuous skip-gram model to learn a distributed vector representation that captures both syntactic and semantic word relationships---a skip-gram model from mikolov et al was used to generate a 128-dimensional vector of a particular word | 1 |
rapp and fung discussed semantic similarity estimation using cross-lingual context vector alignment---rapp and fung proposed a bilingual context vector mapping strategy to explore word co-occurrence information | 1 |
recent works on word embedding show improvements in capturing semantic features of the words---extensive experiments have leveraged word embeddings to find general semantic relations | 1 |
to this end , we design novel features for keyphrase extraction based on citation context information and use them in conjunction with traditional features in a supervised probabilistic framework---lkb system is a parser generation tool , proposed by | 0 |
we used the moses toolkit to build an english-hindi statistical machine translation system---we experimented using the standard phrase-based statistical machine translation system as implemented in the moses toolkit | 1 |
the language model was trained using srilm toolkit---srilm toolkit is used to build these language models | 1 |
relation extraction ( re ) is the task of determining semantic relations between entities mentioned in text---relation extraction is a key step towards question answering systems by which vital structured data is acquired from underlying free text resources | 1 |
for the sick and msrvid experiments , we used 300-dimension glove word embeddings---we used glove 10 to learn 300-dimensional word embeddings | 1 |
kerremans presents the issue of terminological variation in the context of specialised translation on a parallel corpus of biodiversity texts---kerremans discusses in detail the issue of terminological variation in the context of specialised translation on a parallel corpus of biodiversity texts | 1 |
we used moses with the default configuration for phrase-based translation---cite-p-10-1-3 and cite-p-10-1-6 built systems to predict hierarchical power relations between people in the enron email corpus using lexical features from all the messages exchanged between them | 0 |
we use word2vec to train the word embeddings---we use the word2vec tool to pre-train the word embeddings | 1 |
similar pairs of verbs and nouns are identified on the basis of the wu-palmer word-to-word similarity measure---the similarity between words is measured using the wu-palmer method of wordnetbased lexical semantic similarity | 1 |
this is motivated by the fact that multi-task learning has shown to be beneficial in several nlp tasks---embeddings , have recently shown to be effective in a wide range of tasks | 1 |
for both languages , we used the srilm toolkit to train a 5-gram language model using all monolingual data provided---we train a 4-gram language model on the xinhua portion of english gigaword corpus by srilm toolkit | 1 |
it has been empirically shown that word embeddings can capture semantic and syntactic similarities between words---it has been empirically shown that word embeddings could capture semantic and syntactic similarities between words | 1 |
we present an evaluation metric for whole-sentence semantic analysis , and show that it can be computed efficiently---in this work , we provide an evaluation metric that uses the degree of overlap between two whole-sentence semantic structures | 1 |
to our knowledge , this constitutes the first large-scale quantitative lexical semantic typology that is completely unsupervised , bottom-up , and data-driven---to the best of our knowledge , a large-scale quantitative typological analysis of lexical semantics is lacking thus far | 1 |
blitzer et al proposed structural correspondence learning to identify the correspondences among features between different domains via the concept of pivot features---blitzer et al apply structural correspondence learning for learning pivot features to increase accuracy in the target domain | 1 |
we present a novel approach based on queueing theory and psychology of learning to identify spurious instances in datasets---in this paper , we present an effective approach inspired by queueing theory and psychology of learning to automatically identify spurious instances | 1 |
for all submissions , we used the phrase-based variant of the moses decoder---we used the moses decoder , with default settings , to obtain the translations | 1 |
in conclusion , we will discuss the current state of the project and where it is going---in conclusion , we will discuss the current state of the project | 1 |
semantic parsing is the problem of mapping natural language strings into meaning representations---semantic parsing is the task of mapping a natural language ( nl ) sentence into a complete , formal meaning representation ( mr ) which a computer program can execute to perform some task , like answering database queries or controlling a robot | 1 |
since their introduction at the beginning of the twenty-first century , phrase-based translation models have become the state-of-the-art for statistical machine translation---in recent years , phrase-based systems for statistical machine translation have delivered state-of-the-art performance on standard translation tasks | 1 |
our evaluation metric is case-insensitive bleu-4---the evaluation metric is case-sensitive bleu-4 | 1 |
blitzer et al investigate domain adaptation for sentiment classifiers , focusing on online reviews for different types of products---blitzer et al employ the structural correspondence learning algorithm for sentiment domain adaptation | 1 |
svms are frequently used for text classification and have been applied successfully to nli---svms have been applied to many text classification problems | 1 |
sennrich et al introduced a subword-level nmt model using subword-level segmentation based on the byte pair encoding algorithm---sennrich et al introduced a simpler and more effective approach to encode rare and unknown words as sequences of subword units by byte pair encoding | 1 |
inspired by recent work on neural language models , we proposed a neural network model that learns to discriminate between felicitous and infelicitous arguments for a particular predicate---applications , we propose a neural network model that learns to discriminate between felicitous and infelicitous arguments for a particular predicate | 1 |
traditionally , a language model is a probabilistic model which assigns a probability value to a sentence or a sequence of words---a language model is a statistical model that gives a probability distribution over possible sequences of words | 1 |
our implementation of the segment-based imt protocol is based on the moses toolkit---we used moses as the implementation of the baseline smt systems | 1 |
callison-burch et al extract phrase-level paraphrases by mapping input phrases into a phrase table and then mapping back to the source language---relation extraction ( re ) is the task of recognizing relationships between entities mentioned in text | 0 |
wordembeddings have been shown to help with a variety of nlp tasks---word embeddings have been used to help to achieve better performance in several nlp tasks | 1 |
we used the phrasebased smt system moses to calculate the smt score and to produce hfe sentences---distributed sense embeddings are taken as the knowledge representations which are trained discriminatively , and usually have better performance than traditional count-based distributional models ( cite-p-12-1-0 ) , and ( 2 ) a general model for the whole vocabulary is jointly trained to induce sense centroids under the mutli-task learning framework | 0 |
both files are concatenated and learned by word2vec---the embeddings were trained over the english wikipedia using word2vec | 1 |
silberer and frank cast ni resolution as a coreference resolution task , and employ an entity-mention model---silberer and frank use an entity-based coreference resolution model to automatically extended the training set | 1 |
arguably the most influential approach to the topic modeling domain is latent dirichlet allocation---a widely used topic modeling method is the latent dirichlet allocation model , which is proposed by blei | 1 |
recognizing textual entailment between two sentences is also addressed by rocktäschel et al , using lstms and word-by-word neural attention mechanisms on the snli data set---recognising textual entailment between two sentences was also addressed in which used lstms and a word-by-word neural attention mechanism on the snli corpus | 1 |
features are combined using a log-linear model optimized for bleu , using the n-best batch mira algorithm---the parameters of the log-linear model were tuned by optimizing bleu on the development set using the batch variant of mira | 1 |
we use the glove vector representations to compute cosine similarity between two words---we use glove vectors for word embeddings and one-hot vectors for pos-tag and dependency relations in each individual model | 1 |
also , the head words of the constituents are constrained to occur in the distributional resources used---in those models , the contexts are defined by using the syntactic relations between words | 1 |
a major challenge in document clustering research arises from the growing amount of text data written in different languages---the feature definitions are inspired by the set which yielded the best results when combined in a naive bayes model on several senseval-2 lexical sample tasks | 0 |
word embedding approaches like word2vec or glove are powerful tools for the semantic analysis of natural language---we ran these ml methods by the weka platform using the default parameters | 0 |
in section 2 , we present the relevant facts about morphology in the arabic language family---in section 2 , we present the relevant facts about morphology | 1 |
to assess the pronunciation of spontaneous speech , we proposed a method for extracting a set of pronunciation features---in this paper , we will describe a method for extracting pronunciation features based on spontaneous speech | 1 |
we use case-sensitive bleu-4 to measure the quality of translation result---we use the machine translation toolkit jane and evaluate with case-insensitive bleu in all experiments | 1 |
in this framework , graph walks can be applied to draw a measure of similarity between the graph nodes---graph walks , combined with existing techniques of supervised learning , can be used to derive a task-specific word similarity measure in this graph | 1 |
all language models were trained using the srilm toolkit---by cite-p-8-1-4 , recent attempts that apply either complex linguistic reasoning or attention-based complex neural network architectures achieve up to 76 % accuracy on benchmark sets | 0 |
we have used the srilm with kneser-ney smoothing for training a language model for the first stage of decoding---the language model is a large interpolated 5-gram lm with modified kneser-ney smoothing | 1 |
in fact , it has been shown that the decoding problem for the presented machine translation models is np-complete---the decoding problem has been proved to be np-complete even when the translation model is ibm model 1 and the language model is bi-gram | 1 |
we then introduce a new algorithm for quasi-second-order parsing---in this paper , we introduce an alternative maximum subgraph algorithm for first-order parsing | 1 |
we tune phrase-based smt models using minimum error rate training and the development data for each language pair---question answering ( qa ) is a challenging task that draws upon many aspects of nlp | 0 |
our 5-gram language model is trained by the sri language modeling toolkit---our models improve crf , especially when small data sets are used | 0 |
historically , unsupervised learning techniques have lacked a principled technique for selecting the number of unseen components---unsupervised learning techniques have historically lacked good methods for choosing the number of unseen components | 1 |
to train monolingual word embeddings we used fasttext which employs subword information for better quality representations---to train monolingual word embeddings we used fasttext with default parameters except the dimension of the vectors which is 300 | 1 |
our research aims to learn the prototypical goal-acts for locations using a text corpus---for example , minimum bayes risk decoding over n-best list finds a translation that has lowest expected loss with all the other hypotheses , and it shows that improvement over the maximum a posteriori decoding | 0 |
the distance between two languages is a function of the number or fraction of these forms which are cognate between the two languages 1---the distance between two languages is the divergence their lexical metrics | 1 |
we have presented efficient algorithms for maximum expected f-score decoding---semantic role labeling ( srl ) is the task of identifying the arguments of lexical predicates in a sentence and labeling them with semantic roles ( cite-p-13-3-3 , cite-p-13-3-11 ) | 0 |
semantic parsing is the task of converting a sentence into a representation of its meaning , usually in a logical form grounded in the symbols of some fixed ontology or relational database ( cite-p-21-3-3 , cite-p-21-3-4 , cite-p-21-1-11 )---semantic parsing is the problem of deriving a structured meaning representation from a natural language utterance | 1 |
we substitute our language model and use mert to optimize the bleu score---we use bleu and meteor for our automatic metric-based evaluation | 1 |
we describe the semeval-2010 shared task on "linking events and their participants in discourse"---we described the semeval-2010 shared task on " linking events and their participants in discourse " | 1 |
we obtained our best results when we combined a variety of features---our results indicate that using a variety of information | 1 |
finally , following bousmalis et al , we further encourage the domain-specific features to be mutually exclusive with the shared features by imposing soft orthogonality constraints---the exponential log-linear model weights of both the smt and re-scoring stages of our system were set by tuning the system on development data using the mert procedure by means of the publicly available zmert toolkit 1 | 0 |
for language model scoring , we use the srilm toolkit training a 5-gram language model for english---a 4-gram language model is trained on the monolingual data by srilm toolkit | 1 |
in our implementation , we train a tri-gram language model on each phone set using the srilm toolkit---our 5-gram language model is trained by the sri language modeling toolkit | 1 |
moreover , in order to tackle this machine comprehension task , we used a deep learning architecture with new attention mechanisms---in this work , we presented a deep learning architecture with new attention mechanisms in order to learn more complex representations and similarities among input elements | 1 |
in conversational systems , understanding user intent is the key to the success of the interaction---identification of user intent also has important implications in building intelligent conversational qa systems | 1 |
this paper presents a dialogue system , called n umbers , in which all components operate incrementally---this paper describes a fully incremental dialogue system that can engage in dialogues | 1 |
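Each row above packs a sentence pair (joined by `---`) and a binary label into a single text column. A minimal Python sketch of splitting one such row into its fields; the `parse_row` helper is a hypothetical illustration, not part of any published loader for this dataset:

```python
def parse_row(row: str) -> tuple[str, str, int]:
    """Split a 'sentA---sentB | label |' row into (sent_a, sent_b, label)."""
    # Peel off the trailing '| label |' cell, then split the pair on '---'.
    text, label_field, _ = row.rsplit("|", 2)
    sent_a, sent_b = text.strip().split("---", 1)
    return sent_a.strip(), sent_b.strip(), int(label_field.strip())

row = ("we use word2vec to train the word embeddings---"
       "we use the word2vec tool to pre-train the word embeddings | 1 |")
print(parse_row(row))
```

Splitting on the first `---` occurrence (`maxsplit=1`) keeps the parse stable even if a sentence were ever to contain further hyphens.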