| text (string, lengths 82-736) | label (int64: 0 or 1) |
|---|---|
li and li proposed a bilingual bootstrapping approach for the more specific task of word translation disambiguation as opposed to the more general task of wsd---li and li have shown that word translation and bilingual bootstrapping is a good combination for disambiguation | 1 |
named entity recognition ( ner ) is the task of identifying named entities in free text—typically personal names , organizations , gene-protein entities , and so on---the formally syntax-based models use synchronous context-free grammar but induce a grammar from a parallel text without relying on any linguistic annotations or assumptions | 0 |
relation extraction is a core task in information extraction and natural language understanding---relation extraction is the task of finding semantic relations between entities from text | 1 |
in this paper , we propose a multi-step stacked learning model for disfluency detection---in this paper , we proposed multi-step stacked learning to extract n-gram features | 1 |
then we use the standard minimum error-rate training to tune the feature weights to maximize the system's bleu score---we use word2vec tool for learning distributed word embeddings | 0 |
for this purpose , we used phrase tables learned by the standard statistical mt toolkit moses---in the translation tasks , we used the moses phrase-based smt systems | 1 |
the evaluation metric is the case-insensitive bleu4---the penn discourse tree bank is the largest resource to date that provides a discourse annotated corpus in english | 0 |
continuous representations have been shown to be helpful in a wide range of tasks in natural language processing---the translation model has an rnn encoderdecoder architecture with word embeddings and a global attention | 0 |
bahdanau et al introduce attention mechanism to the sequence-to-sequence model and it greatly improves the model performance on the task of machine translation---as a remedy to this problem , we instead use an adaptation of the update strategy in björkelund and kuhn | 0 |
we also explore the use of a gaussian prior and a simple cutoff for smoothing---we explore the combination of gaussian smoothing and a simple cutoff | 1 |
we initialize our model with 300-dimensional word2vec toolkit vectors generated by a continuous skip-gram model trained on around 100 billion words from the google news corpus---we pretrain 200-dimensional word embeddings using word2vec on the english wikipedia corpus , and randomly initialize other hyperparameters | 1 |
we perform minimum-error-rate training to tune the feature weights of the translation model to maximize the bleu score on development set---we set all feature weights using minimum error rate training , and we optimize their number on the development dataset | 1 |
all four algorithms were compared on two domains taken from the penn treebank annotated corpus---in this section we concentrate on some unsupervised methods | 0 |
in our wok , we have used the stanford log-linear part-of-speech to do pos tagging---we use the stanford part of speech tagger to annotate each word with its pos tag | 1 |
these studies report argument math-w-3-1-3-83 scores of 0.6914 and 0.7283 , respectively---in the acoustic model , in this paper , we investigate the problem of word fragment identification | 0 |
in this paper , we propose a new universal machine translation approach focusing on languages with a limited amount of parallel data---in this paper , we propose a novel universal multilingual nmt approach focusing mainly on low resource languages | 1 |
we used nwjc2vec 10 , which is a 200 dimensional word2vec model---component can be used to increase the responsivity and naturalness of spoken interactive systems | 0 |
irony is a profoundly pragmatic and versatile linguistic phenomenon---malandrakis et al used a kernel function to combine the similarity between seeds and unseen words into a linear regression model | 0 |
the recognition and appropriate generation of cue phrases is of particular interest to research in discourse structure---although distinguishing discourse and sentential uses of cue phrases is critical to the interpretation and generation of discourse | 1 |
renoun ’ s goal is to extract facts for attributes expressed as noun phrases---renoun ’ s approach is based on leveraging a large ontology of noun attributes | 1 |
the second and third authors were partially supported by nsf grant sbr8920230 and aro grant daah0404-94-g-0426---semantic parsing is the task of converting natural language utterances into their complete formal meaning representations which are executable for some application | 0 |
zeng et al proposed the first neural relation extraction with distant supervision---zeng et al proposed piecewise convolution neural networks | 1 |
future work will include a further investigation of parser-derived features---future work may consider features of the acoustic sequence | 1 |
in this paper we propose two novel inference mechanisms to chinese trigger identification---in this paper , we propose two novel inference mechanisms to chinese trigger identification | 1 |
coreference resolution is the process of linking together multiple referring expressions of a given entity in the world---coreference resolution is the problem of identifying which mentions ( i.e. , noun phrases ) refer to which real-world entities | 1 |
gram language model with modified kneser-ney smoothing is trained with the srilm toolkit on the epps , ted , newscommentary , and the gigaword corpora---darwish and voss et al deal with exactly the problem of classifying tokens in arabizi as arabic or not | 0 |
we further used a 5-gram language model trained using the srilm toolkit with modified kneser-ney smoothing---we trained a 4-gram language model on this data with kneser-ney discounting using srilm | 0 |
we use bleu to evaluate translation quality---we measure translation quality via the bleu score | 1 |
another related approach is the unification space model of kempen & cite-p-5-1-1 , which unifies through a process of simulated annealing , and also uses a notion of unification strength---a related approach is the query-by-example work seen in the past in interfaces to database systems ( cite-p-6-1-0 ) | 1 |
this paper reports on work in progress on an exemplar activation model as an alternative to one-vector-per-word approaches to word meaning in context---we build an open-vocabulary language model with kneser-ney smoothing using the srilm toolkit | 0 |
for building our statistical ape system , we used maximum phrase length of 7 and a 5-gram language model trained using kenlm---in all our experiments , we used a 5-gram language model trained on the one billion word benchmark dataset with kenlm | 1 |
bollmann and søgaard reported that a deep neural network architecture improves the normalization of historical texts , compared to both baseline using conditional random fields and norma tool---bollmann and søgaard and bollmann et al recently showed that we can obtain more robust historical text normalization models by exploiting synergies across historical text normalization datasets and with related tasks | 1 |
our code and trained models are publicly available for further academic research---we make our code and trained models publicly available for future research | 1 |
in this paper , we propose entity linking using densified knowledge graphs ( elden )---the various models developed are evaluated using bleu and nist | 0 |
by explicitly modeling the graph segmentation , our system obtains further improvement , especially on german–english---chinese – english and german – english show our model to be significantly better than the phrase-based model | 1 |
sentiment analysis is the study of the subjectivity and polarity ( positive vs. negative ) of a text ( cite-p-7-1-10 )---sentiment analysis is a natural language processing task whose aim is to classify documents according to the opinion ( polarity ) they express on a given subject ( cite-p-13-8-14 ) | 1 |
to this end , we use conditional random fields---as a sequence labeler we use conditional random fields | 1 |
a number of sentiment word/sense dictionaries have been manually or ( semi ) automatically constructed---sentiment dictionaries have been manually or ( semi ) -automatically created | 1 |
to model this kind of coherence of sentences , le and mikolov extend word embedding learning network to learn the paragraph embedding as a fixed-length vector representation for paragraph or sentence---on this basis , le and mikolov extend word-level representation to sentence and document level , which allows them to compute the similarity between two sequence of words | 1 |
notice that the 2 } 3 } fan-out of the non-terminal math-w-7-15-1-87 is 2---with reference to this system , we implement a data-driven parser with a neural classifier based on long short-term memory | 0 |
textual entailment ( te ) is a directional relationship between pairs of text expressions , text ( t ) and hypothesis ( h )---textual entailment ( te ) is a directional relationship between an entailing text fragment t and an entailed hypothesis , h , saying that the meaning of t entails ( or implies ) the meaning of h | 1 |
for non-structured classification , the maximum entropy model is widely used---the maximum entropy approach is known to be well suited to solve the classification problem | 1 |
we trained a tri-gram hindi word language model with the srilm tool---we used srilm to build a 4-gram language model with kneser-ney discounting | 1 |
we use most of the local features utilized by stoyanov et al , with the exception of the ones that duplicate our cluster features---we used the same set of preprocessing components as stoyanov et al and took a subset of their features for our local features | 1 |
we adopted the case-insensitive bleu-4 as the evaluation metric---all systems are evaluated using case-insensitive bleu | 1 |
we propose using a principled way of incorporating both rater-comment and rater-author interactions simultaneously---to get a dictionary of word embeddings , we use the word2vec tool 2 and train it on the chinese gigaword corpus | 0 |
owing to this complication , ouchi et al and shibata et al focused exclusively on intra-sentential argument analysis---thus , ouchi et al and iida et al focused on only intra-sentential zero anaphora | 1 |
finally , we extract the semantic phrase table from the augmented aligned corpora using the moses toolkit---discourse segmentation is the process of decomposing discourse into elementary discourse units ( edus ) , which may be simple sentences or clauses in a complex sentence , and from which discourse trees are constructed | 0 |
in particular , we focus on exploiting the output structure at the thread level in order to make more consistent global decisions---in this paper goes in the same direction : we are interested in exploiting the output structure at the thread level to make more consistent global assignments | 1 |
we show for the first time that self-training is able to significantly improve the performance of the pcfg-la parser , a single generative parser , on both small and large amounts of labeled training data---in this paper , for the first time , that self-training is able to significantly improve the performance of the pcfg-la parser , a single generative parser , on both small and large amounts of labeled training data | 1 |
twitter is a widely used microblogging platform , where users post and interact with messages , “ tweets ”---twitter is a communication platform which combines sms , instant messages and social networks | 1 |
datasets we evaluate our model on standard benchmark corpora -conll 2006 and conll 2008 -which include dependency treebanks for 14 different languages---dataset and evaluation measures we evaluate our model on conll dependency treebanks for 14 different languages , using standard training and testing splits | 1 |
in our work , we use lda to identify the subtopics in the given body of texts---in our work , we use latent dirichlet allocation to identify the sub-topics in the given body of texts | 1 |
in this work , we investigate large-scale , discriminative itg word alignment---work presented the first large-scale application of itg to discriminative word alignment | 1 |
here we seek to automatically identify hungarian patients suffering from mild cognitive impairment based on their speech transcripts---we seek to automatically identify hungarian patients suffering from mild cognitive impairment | 1 |
all back-off lms were built using modified kneserney smoothing and the sri lm-toolkit---models were built and interpolated using srilm with modified kneser-ney smoothing and the default pruning settings | 1 |
the data consist of four-tuples of words , extracted from the wall street journal treebank by a group at ibm---bannard and callison-burch , for instance , used a bilingual parallel corpus and obtained english paraphrases by pivoting through foreign language phrases | 0 |
in the first step , we pose a variant of sequential pattern mining problem to identify sequential word patterns that are more common among student answers---in the first step of the proposed two-step fluctuation smoothing approach , we apply a variant of the sequential pattern mining algorithm to identify frequent common n-grams in student answers | 1 |
ji and grishman extended the one sense per discourse idea to multiple topically related documents and propagate consistent event arguments across sentences and documents---inspired by the hypothesis of one sense per discourse , ji and grishman combined global evidence from related documents with local decisions for the event extraction | 1 |
recurrent neural network architectures have proven to be well suited for many natural language generation tasks---with the advent of recurrent neural network based language models , some rnn based nlg systems have been proposed | 1 |
we use the glove word vector representations of dimension 300---for input representation , we used glove word embeddings | 1 |
extensive experimental results are provided in section 5 to illustrate the performance comparison , and section 6 concludes this study---in section 5 to illustrate the performance comparison , and section 6 concludes this study | 1 |
furthermore , we plan to integrate the proposed interface within an computer-based interactive platform for speech therapy---a zero pronoun ( zp ) is a gap in a sentence that is found when a phonetically null form is used to refer to a real-world entity | 0 |
we measured translation performance with bleu---we used smoothed bleu for benchmarking purposes | 1 |
our experimental results on multiple tac data sets show the competitiveness of our proposed methods---our experiments in the tac data sets demonstrate that our proposed methods | 1 |
preliminary tests show substantial improvement of the semantic score measure over syntactic score measure---semantic score measure shows substantial improvement in structural disambiguation over a syntax-based approach | 1 |
we have presented the inesc-id system for the semeval 2015 message classification task---we present the inesc-id system for the 2015 semeval message polarity classification task | 1 |
we use glove vectors with 200 dimensions as pre-trained word embeddings , which are tuned during training---to tackle this issue , we leverage pretrained word embeddings , specifically the 300 dimension glove embeddings trained on 42b tokens of external text corpora | 1 |
various models for learning word embeddings have been proposed , including neural net language models and spectral models---distributed word representations induced through deep neural networks have been shown to be useful in several natural language processing applications | 1 |
we apply online training , where model parameters are optimized by using adagrad---we use a minibatch stochastic gradient descent algorithm together with an adagrad optimizer | 1 |
the other is described in and has been implemented in the software wapiti 1---the other is described in and has been implemented in the software wapiti 3 | 1 |
first we follow cite-p-31-3-9 , use freebase as source of distant supervision , and employ wikipedia as source of unlabelled text -- we will call this an in-domain setting---we follow cite-p-31-3-9 , use freebase as source of distant supervision , and employ wikipedia as source of unlabelled text -- | 1 |
the translation quality is evaluated by case-insensitive bleu and ter metric---the translation quality is evaluated by case-insensitive bleu and ter metrics using multeval | 1 |
we used moses , a state-of-the-art phrase-based smt model , in decoding---we used moses , a phrase-based smt toolkit , for training the translation model | 1 |
in this section , we describe the observed data , latent variables , and auxiliary variables of the problem and show an example in fig . 1---an in-house language modeling toolkit was used to train the 4-gram language models with modified kneser-ney smoothing over the web-crawled data | 0 |
we apply the evaluation method used to evaluate vector representation of text sequences by le and mikolov---we experimentally evaluate the paragraph vector model proposed by le and mikolov | 1 |
we use the stanford ner system with a standard set of language-independent features---we use the stanford pos-tagger and name entity recognizer | 1 |
in order to have an idea of the quality of the smt model beforehand , we evaluated the machine translations in terms of bleu scores using a single reference from europarl---to compare the performance of system , we recorded the total training time and the bleu score , which is a standard automatic measurement of the translation quality | 1 |
one idea is to use multiple source languages to increase the statistical ground for the learning process , a strategy that can also be used in the case of annotation projection---at a summit conference , the prime minister will adopt a policy of requesting the french government to halt nuclear testing | 0 |
koehn et al propose certain heuristics to extract phrases that are consistent with bidirectional word-alignments generated by the ibm models---koehn et al proposed a distortion model for phrase-based smt based on jump distances between the newly translated phrases and to-be-translated phrases which does not consider specific lexical information | 1 |
we used svm classifier that implements linearsvc from the scikit-learn library---we used the implementation of the scikit-learn 2 module | 1 |
we use adamax for optimization as described in---we use the adam optimizer for the gradient-based optimization | 1 |
ding and palmer introduce the notion of a synchronous dependency insertion grammar as a tree substitution grammar defined on dependency trees---for a task of interest , such as named entity recognition ( ner ) , it is crucial for prac-tioneers and researchers | 0 |
the output of these systems has been used to support many nlp tasks such as learning selectional preference , acquiring sense knowledge , and recognizing entailment---mihalcea et al used various text based similarity measures , including wordnet and corpus based similarity methods , to determine if two phrases are paraphrases | 0 |
for chinese-english , we train a standard phrase-based smt system over the available 21,863 sentences---we use standard phrase-based smt techniques to build separate phrase tables for the indonesian-english and the malay-english bitexts | 1 |
relation extraction is a fundamental task in information extraction---relation extraction ( re ) is the task of identifying instances of relations , such as nationality ( person , country ) or place of birth ( person , location ) , in passages of natural text | 1 |
it is widely recognized that word embeddings are useful because both syntactic and semantic information of words are well encoded---it has been empirically shown that word embeddings could capture semantic and syntactic similarities between words | 1 |
cohn and lapata present a supervised tree-to-tree transduction method for sentence compression---cohn and lapata evaluate their applicability in the text-to-text generation task of sentence compression | 1 |
feature weights were set with minimum error rate training on a development set using bleu as the objective function---for example , riaz and girju and do et al have proposed unsupervised metrics for learning causal dependencies between two events | 0 |
for lm training and interpolation , the srilm toolkit was used---a 4-grams language model is trained by the srilm toolkit | 1 |
we initialize the word embeddings for our deep learning architecture with the 100-dimensional glove vectors---we test the statistical significance of differences between various mt systems using the bootstrap resampling method | 0 |
all the language models used in our experiments are 5-grams modified kneser-ney smoothed lms trained using kenlm---in our experiments we used 5-gram language models trained with modified kneser-ney smoothing using kenlm toolkit | 1 |
in this article , we propose pseudofit , a new method for specializing word embeddings according to semantic similarity without any external knowledge---in this article , we presented pseudofit , a method that specializes word embeddings towards semantic similarity without external knowledge | 1 |
we used the openfst toolkit for finite-state machine implementation and operations---we trained a 4-gram language model with kneser-ney smoothing and unigram caching using the sri-lm toolkit | 0 |
word segmentation is a fundamental task for processing most east asian languages , typically chinese---word segmentation is the first step prior to word alignment for building statistical machine translations ( smt ) on language pairs without explicit word boundaries such as chinese-english | 1 |
bandyopadhyay et al , 2011 , sentiment analysis , and many other applications---bandyopadhyay et al , 2011 , and sentiment analysis | 1 |
the probabilistic language model is constructed on google web 1t 5-gram corpus by using the srilm toolkit---we build an open-vocabulary language model with kneser-ney smoothing using the srilm toolkit | 1 |
sentiment analysis is a fundamental problem aiming to give a machine the ability to understand the emotions and opinions expressed in a written text---sentiment analysis is a natural language processing ( nlp ) task ( cite-p-10-1-14 ) which aims at classifying documents according to the opinion expressed about a given subject ( federici and dragoni , 2016a , b ) | 1 |
besides , chinese is a topic-prominent language , the subject is usually covert and the usage of words is relatively flexible---it is well-known that chinese is a pro-drop language , meaning pronouns can be dropped from a sentence without causing the sentence to become ungrammatical or incomprehensible when the identity of the pronoun can be inferred from the context | 1 |
we use the multi-class logistic regression classifier from the liblinear package 2 for the prediction of edit scripts---we use the wrapper of the scikit learn python library over the liblinear logistic regression implementation | 1 |
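Each row above packs two sentences joined by `---`, followed by an integer label between pipe delimiters. A minimal parsing sketch (the function name `parse_row` is our own, not part of the dataset):

```python
def parse_row(row: str):
    """Split a raw row like 'sent1---sent2 | 1 |' into (sent1, sent2, label).

    The two sentences are joined by '---'; the trailing integer label
    sits between the last two pipe delimiters.
    """
    text, label_part, _ = row.rsplit("|", 2)
    sent1, sent2 = text.strip().split("---", 1)
    return sent1.strip(), sent2.strip(), int(label_part.strip())

example = ("we use bleu to evaluate translation quality---"
           "we measure translation quality via the bleu score | 1 |")
print(parse_row(example))
```

Splitting on the first `---` only (`split("---", 1)`) keeps any later hyphen runs inside the second sentence intact.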