| text (string, lengths 82–736) | label (int64, 0 or 1) |
|---|---|
ramshaw and marcus , 1995 ) used transformation based learning using a large annotated corpus for english---neural machine translation has become the primary paradigm in machine translation literature | 0 |
thus , we can efficiently solve the algorithm by using the hungarian method---this combinatorial optimisation problem can be solved in polynomial time through the hungarian algorithm | 1 |
collobert et al used word embeddings as input to a deep neural network for multi-task learning---collobert et al first applies a convolutional neural network to extract features from a window of words | 1 |
finally , we combine all the above features using a support vector regression model which is implemented in scikit-learn---we train and evaluate an l2-regularized logistic regression with liblinear as implemented in scikit-learn , using scaled and normalized features to the interval | 1 |
in this work , we employ the toolkit word2vec to pre-train the word embedding for the source and target languages---with hyperedge replacement grammars , our implementations outperform the best previous system by several orders of magnitude | 0 |
text categorization is the problem of automatically assigning predefined categories to free text documents---text categorization is the task of assigning a text document to one of several predefined categories | 1 |
in this paper , we show how to overcome both limitations---methods make use of the information from only one language side | 0 |
semantic role labeling ( srl ) is a form of shallow semantic parsing whose goal is to discover the predicate-argument structure of each predicate in a given input sentence---the lstm were introduced by hochreiter and schmidhuber and were explicitly designed to avoid the longterm dependency problem | 0 |
coreference resolution is the process of linking together multiple referring expressions of a given entity in the world---coreference resolution is the process of determining whether two expressions in natural language refer to the same entity in the world | 1 |
an early attempt can be found in nepveu et al , where dynamic adaptation of an imt system via cache-based model extensions to language and translation models is proposed---one example is the work by nepveu et al , where dynamic adaptation of an imt system via cache-based model extensions to language and translation models is proposed | 1 |
in related work on modeling arabic syntax and morphology , habash et al demonstrated that given good syntactic representations , case prediction can be done with a high degree of accuracy---in related work on modeling arabic case and syntax , habash et al compared rule-based and machine learning approaches to capture the complexity of arabic case assignment and agreement | 1 |
by designing a two player game , we can both collect and verify referring expressions directly within the game---efficiently , we design a new two player referring expression game ( referitgame ) | 1 |
nevertheless , examination of parser output shows the parse features can be extracted reliably from esl data---analysis of the parser output indicates that it is robust enough in the face of noisy non-native writing | 1 |
in particular , haussler proposed the well-known convolution kernels for a discrete structure---previous research by lavie and denkowski proposed a similar alignment strategy for machine translation evaluation | 0 |
we used the srilm toolkit to create 5-gram language models with interpolated modified kneser-ney discounting---we used the srilm toolkit to simulate the behavior of flexgram models by using count files as input | 1 |
unsupervised word embeddings trained from large amounts of unlabeled data have been shown to improve many nlp tasks---word embeddings have been used to help to achieve better performance in several nlp tasks | 1 |
we implemented a modified version of the tnt algorithm to train a pos tagger---to assign pos tags for the unlabeled data , we used the package tnt to train a pos tagger on training data | 1 |
for example , bannard and callison-burch propose the pivot approach to extract phrasal paraphrases from an english-german parallel corpus---for example , bannard and callison-burch propose the pivot approach to generate phrasal paraphrases from an english-german parallel corpus | 1 |
we presented kl cpos 3 , an efficient language similarity measure designed for delexicalized dependency parser transfer---sentiment classification is the task of identifying the sentiment polarity of a given text | 0 |
we used the logistic regression implemented in the scikit-learn library with the default settings---we used the logistic regression implementation in scikit-learn for the maximum entropy models in our experiments | 1 |
we obtained distributed word representations using word2vec 4 with skip-gram---we used word2vec to convert each word in the world state , query to its vector representation | 1 |
in particular , we use the liblinear 3 package which has been shown to be efficient for text classification problems such as this---in particular , we use the liblinear 4 package which has been shown to be efficient for text classification problems such as this | 1 |
the positional independence assumption is too strong---that exploits a positional independence assumption | 1 |
the bleu metric has been used to evaluate the performance of the systems---in this paper , we present a new method to collect large-scale sentential paraphrases from twitter | 0 |
relation extraction ( re ) has been defined as the task of identifying a given set of semantic binary relations in text---relation extraction is the task of automatically detecting occurrences of expressed relations between entities in a text and structuring the detected information in a tabularized form | 1 |
we use distributed word vectors trained on the wikipedia corpus using the word2vec algorithm---the penn discourse treebank is the largest available annotated corpora of discourse relations over 2,312 wall street journal articles | 0 |
however , for relational network classification , deepwalk can be suboptimal as it lacks a mechanism to optimize the objective of the target task---when dealing with a network classification task , it lacks a mechanism to optimize the objective of the target task | 1 |
to calculate language model features , we train traditional n-gram language models with ngram lengths of four and five using the srilm toolkit---for this language model , we built a trigram language model with kneser-ney smoothing using srilm from the same automatically segmented corpus | 1 |
conditional random fields are a popular family of models that have been proven to work well in a variety of sequence tagging nlp applications---conditional random fields are arguably one of the best performing sequence prediction models for many natural language processing tasks | 1 |
we used srilm to build a 4-gram language model with interpolated kneser-ney discounting---we use sri language model toolkit to train a 5-gram model with modified kneser-ney smoothing on the target-side training corpus | 1 |
for the n-gram lm , we use srilm toolkits to train a 4-gram lm on the xinhua portion of the gigaword corpus---the syntactic relations are obtained using the constituency and dependency parses from the stanford parser | 0 |
uchiyama et al also propose a statistical token classification method for jcvs---uchiyama , baldwin , and ishizaki also propose a statistical token classification method for jcvs | 1 |
we train a kn-smoothed 5-gram language model on the target side of the parallel training data with srilm---our trigram word language model was trained on the target side of the training corpus using the srilm toolkit with modified kneser-ney smoothing | 1 |
the srilm toolkit was used for training the language models using kneser-ney smoothing---the language models were trained with kneser-ney backoff smoothing using the sri language modeling toolkit , | 1 |
we use the linear svm classifier from scikit-learn---the standard classifiers are implemented with scikit-learn | 1 |
our baseline system is re-implementation of hiero , a hierarchical phrase-based system---our baseline is the re-implementation of the hiero system | 1 |
in such work on question answering , question generation models are typically not evaluated for their intrinsic quality , but rather with respect to their utility as an intermediate step in the question answering process---mihalcea et al defines a measure of text semantic similarity and evaluates it in an unsupervised paraphrase detector on this data set | 0 |
part-of-speech ( pos ) tagging is a fundamental natural-language-processing problem , and pos tags are used as input to many important applications---part-of-speech ( pos ) tagging is a critical task for natural language processing ( nlp ) applications , providing lexical syntactic information | 1 |
in the first step , we generate a few high-confidence sentiment and topic seeds in the target domain---in the first step , we build a bridge between the source and target domains | 1 |
experiments on five languages showed that the approach can yield significant improvement in tagging accuracy given sufficiently fine-grained label sets---experiments on five languages show that the approach can yield significant improvement in tagging accuracy | 1 |
we used the moses toolkit for performing statistical machine translation---we train a 5-gram language model with the xinhua portion of english gigaword corpus and the english side of the training set using the srilm toolkit | 0 |
our word embeddings is initialized with 100-dimensional glove word embeddings---a dependency tree is a rooted , directed spanning tree that represents a set of dependencies between words in a sentence | 0 |
for both languages , we used the srilm toolkit to train a 5-gram language model using all monolingual data provided---we trained a trigram language model on the chinese side , with the srilm toolkit , using the modified kneser-ney smoothing option | 1 |
the deep web is the collection of information repositories that are not indexed by search engines---deep web is the information that is in proprietory databases | 1 |
in addition , we showed that our approach is more robust to adversarial inputs---results and analyses show that our approach is more robust to adversarial inputs | 1 |
we implemented the different aes models using scikit-learn---shen et al , 2008 ) exploits target dependency structures as dependency language models to ensure the grammaticality of the target string | 0 |
du et al have shown that this topic structure can significantly improve the modelling accuracy , which should contribute to more accurate segmentation---du et al have shown that segment-level topics and their dependencies can improve modeling accuracy in a monolingual setting | 1 |
the algorithm is similar to those for context-free parsing such as chart parsing and the cky algorithm---as a baseline model we develop a phrase-based smt model using moses | 0 |
word alignment is the task of identifying translational relations between words in parallel corpora , in which a word at one language is usually translated into several words at the other language ( fertility model ) ( cite-p-18-1-0 )---word alignment is the task of identifying corresponding words in sentence pairs | 1 |
using these estimates , our parser is capable of finding the viterbi parse of an average-length penn treebank sentence in a few seconds , processing less than 3 % of the edges which would be constructed by an exhaustive parser---on average-length penn treebank sentences , our most detailed estimate reduces the total number of edges processed to less than 3 % of that required by exhaustive parsing | 1 |
we introduce a transition-based ( cite-p-25-3-15 ) method for joint deep input surface realisation integrating linearization , function word prediction and morphological generation---we construct a transition-based model to jointly perform linearization , function word prediction and morphological generation , which considerably improves upon the accuracy | 1 |
we use pre-trained glove vector for initialization of word embeddings---we represent input words using pre-trained glove wikipedia 6b word embeddings | 1 |
the parallel corpus used in our experiments is the english-french part of the europarl corpus ,---we used the french-english europarl corpus of parliamentary debates as a source of the parallel corpus | 1 |
coreference resolution is the task of determining when two textual mentions name the same individual---coreference resolution is the task of grouping mentions to entities | 1 |
the sentiment analysis is a field of study that investigates feelings present in texts---sentiment analysis is a natural language processing ( nlp ) task ( cite-p-10-3-0 ) which aims at classifying documents according to the opinion expressed about a given subject ( federici and dragoni , 2016a , b ) | 1 |
our word embeddings is initialized with 100-dimensional glove word embeddings---we used smoothed bleu for benchmarking purposes | 0 |
we used latent dirichlet allocation to perform the classification---natural language text usually consists of topically structured and coherent components , such as groups of sentences that form paragraphs and groups of paragraphs that form sections | 0 |
therefore , we propose a novel framework that differentiates two semantically similar words with the attribute word by using their word and context embeddings---using both context and word embeddings can better model the co-occurrence between the two similar words and their discriminative attribute word | 1 |
it is a generative probabilistic model that approximates the underlying hidden topical structure of a collection of texts based on the distribution of words in the documents---we use the skipgram model with negative sampling to learn word embeddings on the twitter reference corpus | 0 |
agrawal and an , 2012 ) proposed an unsupervised context-based approach to detect emotions from text at the sentence level---agrawal and an proposed a context-based approach to detect emotions from text at sentence level | 1 |
sarcasm is defined as ‘ the use of irony to mock or convey contempt ’ 1---sarcasm is a pervasive phenomenon in social media , permitting the concise communication of meaning , affect and attitude | 1 |
tsvetkov , mukomel , and gershman and tsvetkov et al used coarse semantic features , such as concreteness , animateness , named-entity types , and wordnet supersenses---finally , we represent subtree-based features on training data | 0 |
semantic role labeling ( srl ) is the process of producing such a markup---semantic role labeling ( srl ) is defined as the task to recognize arguments for a given predicate and assign semantic role labels to them | 1 |
specifically , we used wordsim353 , a benchmark dataset , consisting of relatedness judgments for 353 word pairs---in particular , we used the wordsim353 dataset containing pairs of similar words that reflect either relatedness or similarity relations | 1 |
we use the opensource moses toolkit to build a phrase-based smt system---for our baseline we use the moses software to train a phrase based machine translation model | 1 |
in this paper we introduce the notion of ¡°frame relatedness¡± , i.e . relatedness among prototypical situations as represented in the framenet database---we introduce the notion of ¡° frame relatedness ¡± , i . e . relatedness among prototypical situations | 1 |
the matrix is weighted using positive pointwise mutual information---we measure this association using pointwise mutual information | 1 |
sequence labeling is a widely used method for named entity recognition and information extraction from unstructured natural language data---this produces multiple paths between nodes , allowing the sash to shape itself to the data set | 0 |
these are also useful in situations when the text suffers from errors such as misspellings---given the basic nature of the semantic classes and wsd algorithms , we think there is room for future improvements | 0 |
we used glove word embeddings with 300 dimensions pre-trained using commoncrawl to get a vector representation of the evidence sentence---in a second baseline model , we also incorporate 300-dimensional glove word embeddings trained on wikipedia and the gigaword corpus | 1 |
morfessor is a family of probabilistic machine learning methods for finding the morphological segmentation from raw text data---morfessor 2.0 is a rewrite of the original , widely-used morfessor 1.0 software , with well documented command-line tools and library interface | 1 |
in addition , verbree et al created an argumentation scheme intended to support automatic production of argument structure diagrams from decision-oriented meeting transcripts---from a perhaps more formal perspective , verbree et al have created an argumentation scheme intended to support automatic production of argument structure diagrams from decision-oriented meeting transcripts | 1 |
we tune weights by minimizing bleu loss on the dev set through mert and report bleu scores on the test set---we set the feature weights by optimizing the bleu score directly using minimum error rate training on the development set | 1 |
question answering ( qa ) is a long-standing challenge in nlp , and the community has introduced several paradigms and datasets for the task over the past few years---question answering ( qa ) is the task of retrieving answers to a question given one or more contexts | 1 |
we applied a topic modelling approach to this task , and contrasted it with baseline and benchmark methods---frame-semantic parsing is the task of automatically finding semantically salient targets in text , disambiguating the targets by assigning a sense ( frame ) to them , identifying their arguments , and labeling these arguments with appropriate roles | 0 |
these features are computed and presented for each sentence in a data file format used by the weka tool---these features are extracted using the filters provided in the affective tweets package available for weka | 1 |
word sense disambiguation ( wsd ) is a task to identify the intended sense of a word based on its context---word sense disambiguation ( wsd ) is the task of determining the meaning of a word in a given context | 1 |
the fw feature set consists of 318 english fws from the scikit-learn package---the function word feature set consists of 318 english function words from the scikit-learn package | 1 |
firstly , we built a forward 5-gram language model using the srilm toolkit with modified kneser-ney smoothing---we used the srilm toolkit to build unpruned 5-gram models using interpolated modified kneser-ney smoothing | 1 |
semantic parsing is the task of translating text to a formal meaning representation such as logical forms or structured queries---semantic parsing is the task of translating natural language utterances into a machine-interpretable meaning representation | 1 |
the target-side language models were estimated using the srilm toolkit---we use word2vec tool for learning distributed word embeddings | 0 |
we learn the noise model parameters using an expectation-maximization approach---we estimate the parameters by maximizingp using the expectation maximization algorithm | 1 |
frermann et al present a bayesian generative model for joint learning of event types and ordering constraints---frermann et al models the joint task of inducing event paraphrases and their order using a bayesian framework | 1 |
we use the glove vector representations to compute cosine similarity between two words---for this reason , we used glove vectors to extract the vector representation of words | 1 |
translation quality can be measured in terms of the bleu metric---to compare translations , the bleu measure is used | 1 |
in this paper , we introduce gate mechanism in multi-task cnn to reduce the interference---in section 4 , we describe tools allowing to efficiently access wikipedia ¡¯ s edit history | 0 |
we trained a 4-gram language model on this data with kneser-ney discounting using srilm---we trained a 5-gram language model on the xinhua portion of gigaword corpus using the srilm toolkit | 1 |
we use a set of 318 english function words from the scikit-learn package---word representations are widely used in nlp tasks such as tagging , named entity recognition , and parsing | 0 |
we show that this process results in improved accuracy compared to raw word embeddings---results show that such representations consistently improve the accuracy of the selected supervised wsd system | 1 |
relation extraction ( re ) is the process of generating structured relation knowledge from unstructured natural language texts---table 1 shows the performance for the test data measured by case sensitive bleu | 0 |
we use word embedding pre-trained on newswire with 300 dimensions from word2vec---we used the 300 dimensional model trained on google news | 1 |
we follow demsar in computing significance across datasets using a wilcoxon signed rank test---to test for statistical significance , we use non-parametric tests proposed by dem拧ar for comparing classifiers across multiple data sets | 1 |
relation extraction is the task of finding relational facts in unstructured text and putting them into a structured ( tabularized ) knowledge base---relation extraction ( re ) is the task of recognizing relationships between entities mentioned in text | 1 |
a distinction between recognizing chinese and foreign person names is made by chen and lee---the system by chen and lee distinguishes the recognition of the foreign person names from the chinese names | 1 |
the translation quality is evaluated by case-insensitive bleu-4---probabilistic soft logic is a recently proposed alternative framework for probabilistic logic | 0 |
dredze et al combine classifier weights using confidence-weighted learning , which represents the covariance of the weight vectors---dredze et al combined classifier weights using confidence-weighted learning , which represented the covariance of the weight vectors | 1 |
we also present the first benchmarking results on translating to and from arabic for 22 european languages---we also presented first benchmarking results on translating to and from arabic for 22 european languages | 1 |
the translation quality is evaluated by case-insensitive bleu and ter metric---our parser is based on synchronous tree-substitution grammar with sisteradjunction | 0 |
krahmer and theune extend the incremental algorithm so it can mark attributes as contrastive---during training , we fix the number of reasoning steps , but perform stochastic dropout | 0 |
the corpus consists of introductory sections from approximately 1,000 wikipedia articles in which single and plural references to all people mentioned in the text have been annotated---the corpus consists of introductory sections from approximately 2,000 wikipedia articles in which references to the main subject have been annotated | 1 |