| text (string, 82–736 chars) | label (int64: 0 or 1) |
|---|---|
we use pre-trained word vectors of glove for twitter as our word embedding---in this paper , we propose a spectral learning algorithm where latent states are not restricted to hmm-like distributions of modifier sequences | 0 |
for the mix one , we also train word embeddings of dimension 50 using glove---also , we initialized all of the word embeddings using the 300 dimensional pre-trained vectors from glove | 1 |
our proposed attention-based amr encoder-decoder model improves headline generation benchmarks compared with the baseline neural attention-based model---benchmark data showed that our attention-based amr encoder-decoder model successfully improved standard automatic evaluation measures of headline generation | 1 |
therefore , combines at-least-one multi-instance learning with neural network model to extract relations on distant supervision data---therefore , incorporates multi-instance learning with neural network model , which can build relation extractor based on distant supervision data | 1 |
subsequently , the knowledge base is adjusted to suit the text at hand---once the knowledge base is adjusted to suit the text at hand , it is then applied to the text | 1 |
a 4-gram language model is trained on the monolingual data by srilm toolkit---we use sri language modeling toolkit to train a 5-gram language model on the english sentences of fbis corpus | 1 |
we gave an efficient polynomial time algorithm for the simplest variant , namely deciding on a unigram bleu score for a cn---we give an efficient polynomial-time algorithm to calculate unigram bleu on confusion networks , but show that even small generalizations of this data | 1 |
twitter is a widely used social networking service---twitter is a microblogging site where people express themselves and react to content in real-time | 1 |
in a six-month trial , the platform was used by 50 people to access 6400 newspaper articles---in which 50 people , linked to a company intranet , used the platform to access newspaper articles | 1 |
in contrast with such work , we are addressing subject-object ambiguity in german---translation quality is evaluated by case-insensitive bleu-4 metric | 0 |
we release a dataset of 1,938 annotated posts from across the four forums---we used a phrase-based smt model as implemented in the moses toolkit | 0 |
feature weights are tuned using minimum error rate training on the 455 provided references---in section 6 , the proposed word embeddings show evident improvements on sentiment classification , as compared to the base model | 0 |
liu et al studied learning-dependency between knowledge units using classification where a knowledge unit is a special text fragment containing concepts---the letters s and t always denote variables or atoms | 0 |
chinese word segmentation is the initial stage of many chinese language processing tasks , and has received a lot of attention in the literature ( cite-p-17-1-13 , cite-p-17-1-15 , cite-p-17-1-17 , cite-p-17-1-10 )---chinese word segmentation is the initial step of many chinese language processing tasks , and has attracted a lot of attention in the research community | 1 |
for both languages , we used the srilm toolkit to train a 5-gram language model using all monolingual data provided---in the realm of error correction , smt has been applied to identify and correct spelling errors | 0 |
we evaluate the system generated summaries using the automatic evaluation toolkit rouge---our experimental evaluation shows that our new framework significantly outperforms strong baselines | 0 |
the weights associated to feature functions are optimally combined using the minimum error rate training---the optimisation of the feature weights of the model is done with minimum error rate training against the bleu evaluation metric | 1 |
we used the dataset made available by the workshop on statistical machine translation to train a german-english phrase-based system using the moses toolkit in a standard setup---we used the open source moses phrase-based mt system to test the impact of the preprocessing technique on translation results | 1 |
we followed the approach of schwenk and koehn by training language models from each sub-corpus separately and then linearly interpolated them using srilm with weights optimized on the held-out dev-set---we follow the approach of schwenk and koehn by training domain-specific language models separately and then linearly interpolate them using srilm with weights optimized on the held-out development set | 1 |
with english gigaword corpus , we use the skip-gram model as implemented in word2vec 3 to induce embeddings---we train a kn-smoothed 5-gram language model on the target side of the parallel training data with srilm | 0 |
our work may also have significant implications for the cognitive foundations of the representation and acquisition of linguistic knowledge---and the approach has potential implications for the representation and the acquisition of linguistic knowledge | 1 |
this paper has presented a pilot approach for the detection of partial cognates in multilingual word lists---this paper presents a new algorithm for cognate detection which does not identify cognate words | 1 |
on the remaining tweets , we trained a 10-gram word length model , and a 5-gram language model , using srilm with kneyser-ney smoothing---for all data sets , we trained a 5-gram language model using the sri language modeling toolkit | 1 |
one of the very few available discourse annotated corpora is the penn discourse treebank in english---major discourse annotated resources in english include the rst treebank and the penn discourse treebank | 1 |
the target-normalized hierarchical phrase-based model is based on a more general hierarchical phrase-based model---the hierarchical phrase-based model used hierarchical phrase pairs to strengthen the generalization ability of phrases and allow long distance reorderings | 1 |
we use the crf learning algorithm , which consists in a framework for building probabilistic models to label sequential data---we use a conditional random field sequence model , which allows for globally optimal training and decoding | 1 |
more importantly , semi-supervised learning on large amount of unlabeled data effectively increases the classification accuracy---we focus on training classifiers with weakly and strongly labeled data , as well as semi-supervised learning | 1 |
we use the openly available parseme corpus3 manually annotated for vmwes in 18 languages---we use the french part of the parseme corpus 6 manually annotated for vmwes in 18 languages | 1 |
the correlated topic model induces a correlation structure between topics by using the logistic normal distribution instead of the dirichlet---this is because chinese is a pro-drop language ( cite-p-21-3-1 ) that allows the subject to be dropped in more contexts than english does | 0 |
we use 300-dimensional word embeddings from glove to initialize the model---we use pre-trained 50-dimensional word embeddings vector from glove | 0 |
it starts with identifying transferable knowledge from across multiple domains that can be useful for learning the target domain task---which starts with identifying transferable knowledge from across multiple source domains useful for learning the target domain task | 1 |
we first induce unlabeled bracketing trees using the algorithm given in 1---hence we use the expectation maximization algorithm for parameter learning | 0 |
in statistical machine translation , since translation knowledge is acquired from parallel data , the quality and quantity of parallel data are crucial---we use 4-gram language models in both tasks , and conduct minimumerror-rate training to optimize feature weights on the dev set | 0 |
language models were built using the srilm toolkit 16---sentence hypothesis is selected as the final output of our system | 0 |
barzilay and lapata recently proposed an entity-based coherence model that aims to learn abstract coherence properties , similar to those stipulated by centering theory---the entity-based coherence model , proposed by barzilay and lapata , is one of the most popular statistical models of inter-sentential coherence , and learns coherence properties similar to those employed by centering theory | 1 |
bleu is one of the most popular metrics for automatic evaluation of machine translation , where the score is calculated based on the modified n-gram precision---bleu is a common metric to automatically measure the quality of smt output by comparing n-gram matches of the smt output with a human reference translation | 1 |
our second approach is fully bayesian and derived from the more general model , hierarchical lda---our first model extends this approach to the hierarchical setting | 1 |
i plan to improve the performance of my current system by incorporating semantic information---i am working on incorporating semantic resources to improve the performance of my preliminary system | 1 |
recently , researchers in computational linguistics started to investigate how the principle of compositionality could be applied to distributional models of semantics---recently , a number of researchers have tried to reconcile the framework of distributional semantics with the principle of compositionality | 1 |
a system pbmt is built using the phrase-based model in moses---moses is a phrase-based system with lexicalized reordering | 1 |
the language models were trained using srilm toolkit---the language model was trained using srilm toolkit | 1 |
we used minimum error rate training to optimize the feature weights---we used minimum error rate training for tuning on the development set | 1 |
we initialize the word embeddings for our deep learning architecture with the 100-dimensional glove vectors---third , we convert the stanford glove twitter model to word2vec and obtain the word embeddings | 1 |
however , such a mapping might not be available for resource-poor languages---mappings are not available for many resource-poor languages | 1 |
we pretrain word vectors with the word2vec tool on the news dataset released by ding et al , which are fine-tuned during training---we use a popular word2vec neural language model to learn the word embeddings on an unsupervised tweet corpus | 1 |
for natural language problems in general , of course , it is widely recognized that significant accuracy gains can often be achieved by generalizing over relevant feature combinations ,---for wsd and indeed many natural language tasks , significant accuracy gains can often be achieved by generalizing over relevant feature combinations , | 1 |
relation extraction is a subtask of information extraction that finds various predefined semantic relations , such as location , affiliation , rival , etc. , between pairs of entities in text---relation extraction is a key step towards question answering systems by which vital structured data is acquired from underlying free text resources | 1 |
we use 300 dimension word2vec word embeddings for the experiments---for the token-level sequence labeling tasks we use hidden markov models and conditional random fields appear sentences | 0 |
the unsupervised component gathers lexical statistics from an unannotated corpus of newswire text---we also use editor score as an outcome variable for a linear regression classifier , which we evaluate using 10-fold cross-validation in scikit-learn | 0 |
introduced by bengio et al , the authors proposed a statistical language model based on shallow neural networks---they propose an attentive cnn encoder and a neural network language model decoder | 1 |
this finite-state tagger will also be found useful when combined with other language components , since it can be naturally extended by composing it with finite-state transducers that could encode other aspects of natural language syntax---lstms have become more popular after being successfully applied in statistical machine translation | 0 |
crfs are a class of undirected graphical models with exponent distribution---in this example , each cnn component covers 6 words , while in practice | 0 |
machine learning consists of a hypothesis function which learns this mapping based on latent or explicit features extracted from the input data---in machine learning , there is a class of semi-supervised learning algorithms that learns from positive and unlabeled examples ( pu learning for short ) | 1 |
we apply standard tuning with mert on the bleu score---for the evaluation of the results we use the bleu score | 1 |
the birnn is implemented with lstms for better long-term dependencies handling---the elmo embedding is dynamically computed by a l-layer bi-lstm language model | 1 |
in a semantic role labeling task , the syntax and semantics are correlated with each other , that is , the global structure of the sentence is useful for identifying ambiguous semantic roles---despite the fact that the intersection kernel is very popular in computer vision , it has never been used before in text mining | 0 |
we also verify that through the use of rich features , we can further improve the accuracy of our query spelling correction system---with the proposed discriminative model , we can directly optimize the search phase of query spelling correction | 1 |
zens and ney show that itg constraints yield significantly better alignment coverage than the constraints used in ibm statistical machine translation models on both german-english and french-english---although the itg constraint allows more flexible reordering during decoding , zens and ney showed that the ibm constraint results in higher bleu scores | 1 |
centering theory , as employed by strube and hahn or okumura and tamura , uses this type of approach---okumura and tamura developed a rule-based method based on the idea of centering theory | 1 |
we utilize the nematus implementation to build encoder-decoder nmt systems with attention and gated recurrent units---we conduct an empirical evaluation using encoder-decoder nmt with attention and gated recurrent units as implemented in nematus | 1 |
mintz et al , 2009 ) proposes distant supervision to automatically generate training data via aligning kbs and texts---mintz et al proposes distant supervision , which exploits relational facts in knowledge bases | 1 |
the feature weights for each system were tuned on development sets using the moses implementation of minimum error rate training---the feature weights are tuned to optimize bleu using the minimum error rate training algorithm | 1 |
for language model , we use a trigram language model trained with the srilm toolkit on the english side of the training corpus---we use srilm toolkit to build a 5-gram language model with modified kneser-ney smoothing | 1 |
for the srl module , we use a rich syntactic feature-based learning method---then we review the path ranking algorithm introduced by lao and cohen | 0 |
mikolov et al found that the learned word representations capture meaningful syntactic and semantic regularities referred to as linguistic regularities---mikolov et al showed that meaningful syntactic and semantic regularities can be captured in pre-trained word embedding | 1 |
to select the most fluent path , we train a 5-gram language model with the srilm toolkit on the english gigaword corpus---we train a 4-gram language model on the xinhua portion of the english gigaword corpus using the srilm toolkits with modified kneser-ney smoothing | 1 |
in run3 , we averaged run1 with a previously proposed surface-based approach as a kind of integration---in run3 , we averaged run1 with a previously proposed surface-based approach | 1 |
for studies on languages other than english see on chinese and on slovene---for studies on languages other than english see work by su et al on chinese and fišer et al on slovene | 1 |
in this example , the target word statements belongs to ( “ evokes ” ) the frame statement---task : given a sentence with an entity mention , the goal is to predict a set of free-form phrases ( e . g . skyscraper , songwriter , or criminal ) that describe appropriate types for the target entity | 0 |
we use a 5-gram language model with modified kneser-ney smoothing , trained on the english side of set1 , as our baseline lm---we estimated 5-gram language models using the sri toolkit with modified kneser-ney smoothing | 1 |
we trained a 5-gram sri language model using the corpus supplied for this purpose by the shared task organizers---we used the srilm toolkit to train a 4-gram language model on the xinhua portion of the gigaword corpus , which contains 238m english words | 1 |
in this paper , we use an svms-based chunking tool yamcha 8---for both syntactic and semantic chunking , we used tinysvm along with yamcha 7 | 1 |
metonymy is a figure of speech , in which one expression is used to refer to the standard referent of a related one ( cite-p-18-1-13 )---metonymy is a figure of speech that uses “ one entity to refer to another that is related to it ” ( lakoff and johnson , 1980 , p.35 ) | 1 |
experiments were performed using the publicly available europarl corpora for the english-french language pair---all experiments used the europarl parallel corpus as sources of text in the languages of interest | 1 |
semantic parsing is the task of mapping natural language sentences to complete formal meaning representations---semantic parsing is the task of converting natural language utterances into formal representations of their meaning | 1 |
the theory formalizes this intuition by introducing constraints on the distribution of discourse entities in coherent text---representation of discourse allows the system to learn the properties of locally coherent texts | 1 |
bilingual lexicons play a vital role in many natural language processing applications such as machine translation or crosslanguage information retrieval---bilingual dictionaries of technical terms are important resources for many natural language processing tasks including statistical machine translation and cross-language information retrieval | 1 |
for ptb pos tags , we tagged the text with the stanford parser---we parsed the corpus with rasp and with the stanford pcfg parser | 1 |
two common issues with training deep nns on large data-sets are the vanishing and the exploding gradients problems---however , training simple rnns is difficult because of the vanishing and exploding gradient problems | 1 |
at the same time , it allows us to measure the similarity between two documents by comparing their graph representations using kernel functions---and we used a graph kernel instead of a sequence kernel to measure the similarity between pairs of documents | 1 |
alshawi et al , 2000 ) represents each production in parallel dependency trees as a finite-state transducer---to tackle this problem , hochreiter et al introduced an architecture , called long short-term memory that allows to preserve temporal information , even if the correlated events are separated by a longer time | 0 |
additionally , coreference resolution is a pervasive problem in nlp and many nlp applications could benefit from an effective coreference resolver that can be easily configured and customized---coreference resolution is a multi-faceted task : humans resolve references by exploiting contextual and grammatical clues , as well as semantic information and world knowledge , so capturing each of these will be necessary for an automatic system to fully solve the problem | 1 |
a lattice is a directed acyclic graph that is used to compactly represent the search space for a speech recognition system---to set the weights , λ m , we performed minimum error rate training on the development set using bleu as the objective function | 0 |
the recovery of shallow meaning , and semantic role labels in particular , has a long history in linguistics---shallow representations of meaning , and semantic role labels in particular , have a long history in linguistics | 1 |
henderson presented the first neural network for broad coverage parsing---henderson was the first to show that neural networks can be successfully used for large scale parsing | 1 |
word embeddings have recently gained popularity among natural language processing community---transliteration is a process of rewriting a word from a source language to a target language in a different writing system using the word ’ s phonological equivalent | 0 |
the n-gram language models are trained using the srilm toolkit or similar software developed at hut---a 4-gram language model was trained on the monolingual data by the srilm toolkit | 1 |
we have shown that the computation of joint prefix probabilities for pscfgs can be reduced to the computation of inside probabilities for the same model---in our previous work , we established the predictiveness of several interaction parameters derived from discourse structure | 0 |
however , huang et al reported that the computational complexity for decoding amounted to o ) with n-gram even using a hook technique---also , we initialized all of the word embeddings using the 300 dimensional pre-trained vectors from glove | 0 |
with appropriate adaptor grammars and inference procedures we achieve an 87 % word token f-score on the standard brent version of the bernstein-ratner corpus , which is an error reduction of over 35 % over the best previously reported results for this corpus---sentiment analysis in twitter , which is a task of semeval , was firstly proposed in 2013 and not replaced until 2018 | 0 |
bahdanau et al propose integrating an attention mechanism in the decoder , which is trained to determine on which portions of the source sentence to focus---bahdanau et al extend the vanilla encoder-decoder nmt framework by adding a small feed-forward neural network which learns which word in the source sentence is relevant for predicting the next word in the target sequence | 1 |
with respect to the model optimization , we adopt the contrastive objective function used in previous works---we adopted the contrastive max-margin objective function used in previous work | 1 |
the “ charniak parser ” has a labeled precision-recall f-measure of 89.7 % on wsj but a lowly 82.9 % on the test set from the brown corpus treebank---the translation quality is evaluated by case-insensitive bleu-4 metric | 0 |
teufel et al worked on a 2829 sentence citation corpus using a 12-class classification scheme---teufel et al worked on a 2,829 sentence citation corpus using a 12-class classification scheme | 1 |
the idea of extracting features for nlp using convolutional dnn was previously explored by collobert et al , in the context of pos tagging , chunking , named entity recognition and semantic role labeling---for example , collobert et al effectively used a multilayer neural network for chunking , part-ofspeech tagging , ner and semantic role labelling | 1 |
we first use bleu score to perform automatic evaluation---we report bleu scores computed using sacrebleu | 1 |
we trained a 5-gram language model on the xinhua portion of gigaword corpus using the srilm toolkit---we used a 5-gram language model trained on 126 million words of the xinhua section of the english gigaword corpus , estimated with srilm | 1 |
in this paper , however , we choose to focus on the basic framework and algorithms of lvegs and leave the incorporation of contextual information for future work---in this paper we choose to focus on the basic framework and algorithms of lvegs , and therefore we leave a few important extensions for future work | 1 |
the weights λ m are usually optimized for system performance as measured by bleu---the log linear weights for the baseline systems are optimized using mert provided in the moses toolkit | 1 |
researchers have previously tried to align summary sentences with sentences in a document , mostly by manual effort---some experiments have evaluated the suitability of taking extracted paragraphs or sentences as a document summary | 1 |
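
The rows above suggest each `text` value is a pair of sentences joined by `---`, with `label` set to 1 when the two sides make essentially the same statement and 0 otherwise. Below is a minimal sketch of how such data could be loaded and inspected; the file name `train.csv` and the use of the Hugging Face `datasets` library are assumptions for illustration, not a documented part of this card.

```python
# Minimal sketch: load a CSV export of this dataset and inspect a few rows.
# "train.csv" and the column names "text"/"label" are assumptions based on
# the preview above, not a documented part of this dataset.
from datasets import load_dataset

ds = load_dataset("csv", data_files={"train": "train.csv"})["train"]

for example in ds.select(range(3)):
    # Each "text" field appears to hold two sentences joined by "---";
    # "label" seems to mark whether they express the same claim (1) or not (0).
    left, right = example["text"].split("---", 1)
    print(f"label={example['label']}")
    print("  A:", left.strip())
    print("  B:", right.strip())
```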