| text (string, length 82-736) | label (int64, 0 or 1) |
|---|---|
we offer a systematic and fully replicable method of an automatic extraction of these news values from headlines---we utilise state-of-the-art techniques to develop a method for automatic extraction of news values from headline text | 1 |
lemmatization is the process of determining the dictionary form of a word ( e.g . swim ) given one of its inflected variants ( e.g . swims , swimming , swam , swum )---lemmatization is the process of reducing a word to its base form , normally the dictionary lookup form ( lemma ) of the word | 1 |
we use a qkv-style attention to summarize the post context into a single vector---we apply a transformer-style attention on top of branch-level lstm | 1 |
this paper reported on an implementation of a multimodal grammar combining spoken and gestural input---this paper reports on an implementation of a multimodal grammar of speech and co-speech gesture | 1 |
ramshaw and marcus , 1995 ) used transformation based learning using a large annotated corpus for english---this paper presents a new grapheme-to-phoneme conversion method using phoneme connectivity and ccv conversion rules | 0 |
for representing proper chunks , we employ iob2 representation , one of those which have been studied well in various chunking tasks of nlp---for representing proper chunks , we employ iob2 representation , one of those which have been studied well in various chunking tasks of natural language processing | 1 |
we used the phrasebased translation system in moses 5 as a baseline smt system---we used the phrase-based smt in moses 5 for the translation experiments | 1 |
a 3-gram language model was trained from the target side of the training data for chinese and arabic , using the srilm toolkit---a 4-gram language model was trained on the target side of the parallel data using the srilm toolkit from stolcke | 1 |
ganchev et al , 2010 ) describes a method based on posterior regularization that incorporates additional constraints within the em algorithm for estimation of ibm models---for the diverse-requirement scenario , the conditional valueat-risk ( cvar ) is used as the objective function | 0 |
the word embeddings are word2vec of dimension 300 pre-trained on google news---the word embeddings were trained using word2vec on several billion words of newswire and discussion forum data | 1 |
semantic parsing is the task of transducing natural language ( nl ) utterances into formal meaning representations ( mrs ) , commonly represented as tree structures---semantic parsing is the task of mapping a natural language ( nl ) sentence into a complete , formal meaning representation ( mr ) which a computer program can execute to perform some task , like answering database queries or controlling a robot | 1 |
we use the stanford pos-tagger and name entity recognizer---we used pos tags predicted by the stanford pos tagger | 1 |
we use the uplug 5 collection of tools for alignment to extract translations from our specialized parallel corpus---for this purpose , we use the uplug toolkit which is a collection of tools for processing corpus data , created by jörg tiedemann | 1 |
we use state-of-the-art word embedding methods , namely continuous bag of words and global vectors---we complement the neural approaches with a simple neural network that uses word representations , namely a continuous bag-of-words model | 1 |
we automatically extract more translation pairs using the europarl parallel corpus and select pairs based on the word frequency in the target language---in this treebank , we followed the format of the conll tab-separated format for dependency parsing | 0 |
hatzivassiloglou and mckeown proposed a method for identifying the word polarity of adjectives---hatzivassiloglou and mckeown proposed the first method for determining adjective polarities or orientations | 1 |
especially , character-based tagging method which was proposed by nianwen xue achieves great success in the second international chinese word segmentation bakeoff in 2005---for instance , character-based tagging method achieves great success in the second international chinese word segmentation bakeoff in 2005 | 1 |
using a large set of color–name pairs obtained from a color design forum , we evaluate our model on a “ color turing test ” and find that , given a name , the colors predicted by our model are preferred by annotators to color names created by humans---color – name pairs obtained from an online color design forum , we evaluate our model on a “ color turing test ” and find that , given a name , the colors predicted by our model are preferred by annotators to color names created by humans | 1 |
word sense disambiguation ( wsd ) is a widely studied task in natural language processing : given a word and its context , assign the correct sense of the word based on a predefined sense inventory ( cite-p-15-3-4 )---word sense disambiguation ( wsd ) is the nlp task that consists in selecting the correct sense of a polysemous word in a given context | 1 |
here , we have shown ways to improve shrg-based stringto-semantic-graph parsing---in this paper , we have evaluated different strategies for parsing code-mixed data | 0 |
the ptb parser we use for comparison is the publicly available berkeley parser---for the experiments in this paper , we will use the berkeley parser and the related maryland parser | 1 |
coreference resolution is a well known clustering task in natural language processing---although coreference resolution is a subproblem of natural language understanding , coreference resolution evaluation metrics have predominately been discussed in terms of abstract entities and hypothetical system errors | 1 |
in this paper , we present a comprehensive study of the relationship between an individual's personal traits and his/her brand preferences---in previous research , in this study , we want to systematically investigate the relationship between a comprehensive set of personal traits and brand preferences | 1 |
h i¡í and math-w-2-7-0-62 are projected vectors of entities---shen et al proposed to use linguistic knowledge expressed in terms of a dependency grammar , instead of a syntactic constituency grammar | 0 |
and in low-resource setting , the system achieved only 1.58 %---in low-resource settings , however , the performance was only 1 . 58 % | 1 |
word embeddings have proven to be effective models of semantic representation of words in various nlp tasks---that handles both problems and operates linearly in the number of tokens and the number of possible output labels at any token | 0 |
this model has been used for translation , image caption generation , and speech recognition---it has been applied to various areas such as image classification , speech recognition , image caption generation and machine translation | 1 |
semantic parsing is the task of mapping natural language to a formal meaning representation---semantic parsing is the task of translating natural language utterances into a machine-interpretable meaning representation ( mr ) | 1 |
sentiment analysis is a research area in the field of natural language processing---sentiment analysis is the task of automatically identifying the valence or polarity of a piece of text | 1 |
for both languages , we used the srilm toolkit to train a 5-gram language model using all monolingual data provided---for all experiments , we used a 4-gram language model with modified kneser-ney smoothing which was trained with the srilm toolkit | 1 |
some examples are the colemanliau index , which was specifically designed for automated assessment of readability , the smog formula and the fry readability formula---some of the well-known readability formulas include the smog formula , the fk formula , and the dalechall formula | 1 |
the incorrectly predicted alignment types are shown with the ? symbol---type and the horizontal axis represents the predicted alignment type | 1 |
to get word vectors , we used glove and the mean of these word vectors are used as the sentence embedding---for this reason , we used glove vectors to extract the vector representation of words | 1 |
discourse parsing is the process of discovering the latent relational structure of a long form piece of text and remains a significant open challenge---discourse parsing is a fundamental task in natural language processing that entails the discovery of the latent relational structure in a multi-sentence piece of text | 1 |
then , one weakness of previous work lies in the demand of manually recognizing a large amount of ground truth review spam data for model training---then , one real challenge would be to manually recognize plentiful ground truth spam review data for model | 1 |
we use the 300-dimensional pre-trained word2vec 3 word embeddings and compare the performance with that of glove 4 embeddings---relation extraction ( re ) is the task of determining semantic relations between entities mentioned in text | 0 |
in this paper , we investigated the problem of automated essay scoring in the presence of biased ratings---in this paper , we investigated the problem of automated essay scoring | 1 |
we use the well-known long short-term memory as our bi-rnn cell---here we use the most widely used long short term memory network as our composition model | 1 |
the algorithm is essentially a dependency version of the data-driven constituent parsing algorithm for probabilistic glr-like parsing described by sagae and lavie---the algorithm is essentially a dependency version of the constituent parsing algorithm for probabilistic parsing with lr-like data-driven models described by sagae and lavie | 1 |
we demonstrate that concept drift is an important consideration---we employed the uima tokenizer 2 to generate tokens and sentences , and the treetagger for part-of-speech tagging and chunking | 0 |
we apply sri language modeling toolkit to train a 4-gram language model with kneser-ney smoothing---for the language model , we used srilm with modified kneser-ney smoothing | 1 |
kindred is a python package that builds upon the stanford corenlp framework and the scikit-learn machine learning library---this is a gui-enabled convenience tool that manages datasets and uses the python-based scikitlearn machine learning toolkit | 1 |
we trim the parse tree of a relation instance so that it contains only the most essential tree components based on constituent dependencies---based on the findings of qian et al , we trim the parse tree of a relation instance so that it contains only the most essential components | 1 |
for this paper , we propose a rule hierarchy for this purpose , that can be used as a preprocessing tool to context checking---in this paper , we have developed an efficient algorithm for the assignment of definiteness attributes to japanese | 1 |
bleu is a precision metric that computes the geometric mean of the n-gram precisions between generated text and reference texts and adds a brevity penalty for shorter sentences---however , in practice , there are many domains , such as the biomedical domain , which involve nested , overlapping , discontinuous ne mentions | 0 |
we trained a trigram language model on the chinese side , with the srilm toolkit , using the modified kneser-ney smoothing option---we use srilm toolkit to build a 5-gram language model with modified kneser-ney smoothing | 1 |
semantic role labeling ( srl ) is the process of extracting simple event structures , i.e. , “ who ” did “ what ” to “ whom ” , “ when ” and “ where ”---semantic role labeling ( srl ) is a form of shallow semantic parsing whose goal is to discover the predicate-argument structure of each predicate in a given input sentence | 1 |
we use glove vectors with 200 dimensions as pre-trained word embeddings , which are tuned during training---we use 300-dimensional glove vectors trained on 6b common crawl corpus as word embeddings , setting the embeddings of outof-vocabulary words to zero | 1 |
the rule-based classifier of uchiyama , baldwin , and ishizaki incorporates syntactic information about japanese compound verbs , a type of mwe composed of two verbs---the rule-based classifier of uchiyama et al incorporates syntactic information about japanese compound verbs , a type of mwe composed of two verbs | 1 |
a paradigm is a grid of all the inflected forms of some lexeme , as illustrated in table 1---the paradigm actually consists of > 40 word forms ; only the present tense portion is shown here | 1 |
in order to deal with this problem , we perform translation in two directions as described in---in order to tackle this problem , we perform word alignment in two directions as described in | 1 |
our models are quite similar to this model but we used different variety of rnn in place of window based neural network---our results also indicate that rnn based models perform better than window based neural network model | 1 |
aggregating evaluation methods like bleu give a useful overview of the quality of a translation , but they do not afford specific information and leave too many details to chance---while automatic evaluation methods like bleu can be useful for estimating translation quality , a higher score is no guarantee of quality improvement | 1 |
we used the stanford parser to extract dependency features for each quote and response---several user simulation models have been proposed for dialogue management policy learning | 0 |
this loss function allows us to integrate syntactic structure into the statistical mt framework without building detailed models of syntactic features and retraining models from scratch---under this loss function allows us to integrate syntactic knowledge into a statistical mt system without building detailed models of linguistic features , and retraining the system from scratch | 1 |
we implement classification models using keras and scikit-learn---we use the linear svm classifier from scikit-learn | 1 |
twitter 1 is a microblogging service , which according to latest statistics , has 284 million active users , 77 % outside the us that generate 500 million tweets a day in 35 different languages---twitter is a huge microblogging service with more than 500 million tweets per day from different locations of the world and in different languages ( cite-p-8-1-9 ) | 1 |
user and product information can help by introducing a frequent user/product with similar attributes to the cold-start user/product---user and product information can be used to effectively mitigate the problem caused by cold-start users and products | 1 |
this paper describes limsi ’ s submission to the conll 2017 ud shared task ( cite-p-20-3-5 ) , dedicated to parsing universal dependencies ( cite-p-20-1-10 ) on a wide array of languages---we applied the approach to translation from german to english , using the europarl corpus for our training data | 0 |
in this nested tree case , we can prove that the number of zdd nodes is math-w-8-1-0-194---we obtained both phrase structures and dependency relations for every sentence using the stanford parser | 0 |
however , the information states as well as the possible worlds are never directly accessible from the object language---but the notion of a information state ( a set of possibilities -- namely first-order models ) is not available from the object language | 1 |
abbreviation is defined as a shortened description of the original fully expanded form---an abbreviation is a letter or sequence of letters , which is a shortened form of a word or a sequence of words , which is called the sense of the abbreviation | 1 |
in this paper , we introduce a uniform framework for chunking task based on support vector machines ( svms )---we introduce a new type of weighting strategy which are derived from the theoretical basis of the svms | 1 |
we used the srilm toolkit to create 5-gram language models with interpolated modified kneser-ney discounting---we used a 5-gram language model trained on 126 million words of the xinhua section of the english gigaword corpus , estimated with srilm | 1 |
it is however far from being a suitable solution for solving clir problems ,---in this paper , a generalized probabilistic semantic model ( gpsm ) is proposed | 0 |
many words have multiple meanings , and the process of identifying the correct meaning , or sense of a word in context , is known as word sense disambiguation ( wsd )---word sense disambiguation ( wsd ) is a particular problem of computational linguistics which consists in determining the correct sense for a given ambiguous word | 1 |
wikipedia is a web based , freely available multilingual encyclopedia , constructed in a collaborative effort by thousands of contributors---our nnape model is inspired by the mt work of bahdanau et al which is based on bidirectional recurrent neural networks | 0 |
we used trigram language models with interpolated kneser-kney discounting trained using the sri language modeling toolkit---we train trigram language models on the training set using the sri language modeling tookit | 1 |
we identify grammatical roles using rasp---we identify grammatical roles with rasp | 1 |
for the automatic evaluation , we used the bleu metric from ibm---chang et al stated that one reason is that the objective function of topic models does not always correlate well with human judgments | 0 |
crf is a probabilistic framework that suitable for labeling input sequence data---the crf is a sequence modeling framework that can solve the label bias problem in a principled way | 1 |
the language models used were 7-gram srilm with kneser-ney smoothing and linear interpolation---the earliest attempts at aspect detection were based on the classic information extraction approach of using frequently occurring noun phrases | 0 |
the fast align toolkit is used for word alignment---fast align was used to generate word alignment files | 1 |
we use 300d glove vectors trained on 840b tokens as the word embedding input to the lstm---for the classification task , we use pre-trained glove embedding vectors as lexical features | 1 |
blitzer et al used structural correspondence learning to train a classifier on source data with new features induced from target unlabeled data---drezde et al applied structural correspondence learning to the task of domain adaptation for sentiment classification of product reviews | 1 |
as our baseline , we apply a high-performing chinese-english mt system based on hierarchical phrase-based translation framework---we implemented the algorithms in python using the stochastic gradient descent method for nmf from the scikit-learn package | 0 |
figure 2 arc-eager transition system for dependency parsing---in the riedel dataset , we used the same features as riedel et al and hoffmann et al for the mention classifier | 0 |
ji and grishman employ a rule-based approach to propagate consistent triggers and arguments across topic-related documents---ji and grishman extended the one sense per discourse idea to multiple topically related documents and propagate consistent event arguments across sentences and documents | 1 |
we further used a 5-gram language model trained using the srilm toolkit with modified kneser-ney smoothing---we apply dropout on the lstm layer to prevent network parameters from overfitting and control the co-adaptation of features | 0 |
in this paper , we propose a probabilistic model to explain speakers ’ choices of referring expressions based on discourse salience---in this domain can be derived from our speaker model , providing an explanation from first principles for the relation between discourse salience and speakers ’ choices of referring expressions | 1 |
coreference resolution is a field in which major progress has been made in the last decade---since coreference resolution is a pervasive discourse phenomenon causing performance impediments in current ie systems , we considered a corpus of aligned english and romanian texts to identify coreferring expressions | 1 |
we use the stanford parser for english language data---for parsing , we use the stanford parser | 1 |
we use the glove word vector representations of dimension 300---we select the glove algorithm as a representative example | 1 |
relation extraction ( re ) has been defined as the task of identifying a given set of semantic binary relations in text---we further propose a graph-based triple encoder to optimize the amount of information preserved in the input of the framework | 0 |
dakka and cucerzan trained an svm classifier by using features related to the structure of wikipedia articles---dakka and cucerzan presented a work on tagging the wikipedia data with coarse named entity tags | 1 |
twitter is a subject of interest among researchers in behavioral studies investigating how people react to different events , topics , etc. , as well as among users hoping to forge stronger and more meaningful connections with their audience through social media---waseem et al , 2017 ) tried to capture similarities between different sub tasks | 0 |
it is not ideally suited for computational use but work currently in progress is aimed at addressing this problem---questions concerning people , dates , etc , which can generally be answered by a short sentence or phrase | 0 |
in the experiments reported here we use support vector machines through the svm light package---we employ support vector machine as the machine learning approach | 1 |
we have applied the parser to dependency parsing of turkish---in order to do so , we perform traversals of the platforms and use already available tools to filter the urls | 0 |
lda is a probabilistic model of text data which provides a generative analog of plsa , and is primarily meant to reveal hidden topics in text documents---in arabic , there is a reasonable number of sentiment lexicons but with major deficiencies | 0 |
in this paper we assume that the phrase pairs are given ( without any scores ) , and we induce every other parameter of the phrase-based model from monolingual data---a priori , we induce every other parameter of a full phrase-based translation system from monolingual data alone | 1 |
in japanese morphological analysis , the dictionary-based approach has been widely used to generate the word lattice , kurohashi et al ,---we estimated 5-gram language models using the sri toolkit with modified kneser-ney smoothing | 0 |
we utilize minimum error rate training to optimize feature weights of the paraphrasing model according to ndcg---finally we use minimum error training to train log-linear scaling factors that are applied to the wfsts in equation 1 | 1 |
we used the support vector machine implementation from the liblinear library on the test sets and report the results in table 4---we use the logistic regression implementation of liblinear wrapped by the scikit-learn library | 1 |
translation quality is measured in truecase with bleu on the mt08 test sets---the translation quality is evaluated by case-insensitive bleu-4 metric | 1 |
the query expansion model of cui et al is based on the principle that if queries containing one term often lead to the selection of documents containing another term , then a strong relationship between the two terms is assumed---the query expansion model of cui et al is based on the principle that if queries containing one term often lead to the selection of documents containing another term , then a strong relationship between the two terms can be assumed | 1 |
cui et al developed a dependency-tree based information discrepancy measure---among these techniques , latent semantic indexing is a wellknown approach | 0 |
we proposed to allow data generators to be “weakly” specified , leaving the undetermined coefficients to be learned from data---while defining generic data generators is difficult , we propose to allow generators to be “ weakly ” specified | 1 |
the word embeddings are initialized from glove pretrained word embeddings on common crawl , and are not updated during training---we use the berkeley parser word signatures | 0 |
dependency parsing is the task of predicting the most probable dependency structure for a given sentence---dependency parsing is a simpler task than constituent parsing , since dependency trees do not have extra non-terminal nodes and there is no need for a grammar to generate them | 1 |
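
Each row stores two sentences joined by `---` in the `text` column and a binary `label` (1 appears to mark pairs that express the same claim or method, 0 mismatched pairs). Below is a minimal sketch of how one might split the joined field back into sentence pairs, assuming the table has been exported to a local CSV file; the file name `pairs.csv` and the derived column names are hypothetical.

```python
# Minimal sketch, assuming a local CSV export of this table with columns
# `text` and `label`. The file name "pairs.csv" is hypothetical.
import pandas as pd

df = pd.read_csv("pairs.csv")

# Each `text` value holds two sentences joined by "---"; split it back
# into the two sides of the pair (split only on the first occurrence).
df[["sentence_a", "sentence_b"]] = df["text"].str.split("---", n=1, expand=True)

print(df[["sentence_a", "sentence_b", "label"]].head())
```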