Columns: text (string, 82–736 characters) and label (int64, 0 or 1).
Each row below is one text value (two sentences joined by "---") followed by its label on the next line.
recurrent neural network architectures have proven to be well suited for many natural language generation tasks---recurrent neural networks are remarkably powerful models for sequential data
1
the baselines apply 4-gram lms trained by the srilm toolkit with interpolated modified kneser-ney smoothing---we used the same set of preprocessing components as stoyanov et al and took a subset of their features for our local features
0
for this , we utilize the publicly available glove word embeddings , specifically ones trained on the common crawl dataset---we used the scikit-learn library for the svm model
0
we use the stanford parser to extract a set of dependencies from each comment---we apply the rules to each sentence with its dependency tree structure acquired from the stanford parser
1
we use the selectfrommodel feature selection method as implemented in scikit-learn---we compute statistical significance using the approximate randomization test
0
the log-linear feature weights are tuned with minimum error rate training on bleu---the feature weights λ i are trained in concert with the lm weight via minimum error rate training
1
the complexity is dominated by the word confusion network construction and parsing---with word confusion networks further improves performance
1
however , cognitive evidence suggests that humans are likely to perform these two tasks simultaneously , as part of a holistic metaphor comprehension process---however , cognitive evidence suggests that humans are likely to perform identification and interpretation simultaneously , as part of a holistic metaphor comprehension process
1
we use moses , an open source toolkit for training different systems---we use the moses toolkit to train our phrase-based smt models
1
we use srilm to train a 5-gram language model on the target side of our training corpus with modified kneser-ney discounting---for all data sets , we trained a 5-gram language model using the sri language modeling toolkit
1
table 7 : comparison of different parsers on the wsj test data measured by average number of errors per sentence ; the numbers in bold indicate the least errors in each error type---on the wsj test data measured by average number of errors per sentence ; the numbers in bold indicate the least errors in each error type
1
named entity recognition ( ner ) is a fundamental information extraction task that automatically detects named entities in text and classifies them into predefined entity types such as person , organization , gpe ( geopolitical entities ) , event , location , time , date , etc---word sense disambiguation ( wsd ) is a problem long recognised in computational linguistics ( yngve 1955 ) and there has been a recent resurgence of interest , including a special issue of this journal devoted to the topic ( cite-p-27-8-11 )
0
all models used interpolated modified kneser-ney smoothing---we evaluated translation output using case-insensitive ibm bleu
0
parsing is the process of mapping sentences to their syntactic representations---we have presented brainsup , a novel system for creative sentence generation that allows users to control many aspects of the creativity process , from the presence of specific target words in the output
0
manual evaluation of translation quality is generally thought to be excessively time consuming and expensive---manual evaluation of machine translation is too time-consuming and expensive to conduct
1
automatic semantic role labeling was first introduced by gildea and jurafsky---rte is a binary classification task , whose goal is to determine , whether for a pair of texts t and h the meaning of h is contained in t ( cite-p-9-1-3 )
0
we use the l2-regularized logistic regression of liblinear as our term candidate classifier---we train and evaluate a l2-regularized logistic regression classifier with the liblinear solver as implemented in scikit-learn
1
since katakana words are basically transliterations from english , back-transliterating katakana noun compounds is also useful for splitting---katakana words ( i . e . , transliterated foreign words ) are particularly difficult to split , because katakana words are highly productive and are often out-of-vocabulary
1
we divided the sentences into three types according to triplet overlap degree , including normal , entitypairoverlap and singleentityoverlap---we divide the sentences into three types according to triplet overlap degree , including normal , entitypairoverlap
1
lda is a generative model that learns a set of latent topics for a document collection---lda is a probabilistic model that can be used to model and discover underlying topic structures of documents
1
we chose the skip-gram model provided by word2vec tool developed by for training word embeddings---we used word2vec , a powerful continuous bag-of-words model to train word similarity
1
le and mikolov extended the word embedding learning model by incorporating paragraph information---le and mikolov presented the paragraph vector algorithm to learn a fixed-size feature representation for documents
1
pun is a way of using the characteristics of the language to cause a word , a sentence or a discourse to involve two or more different meanings---a pun is a form of wordplay , which is often profiled by exploiting polysemy of a word or by replacing a phonetically similar sounding word for an intended humorous effect
1
thus , event extraction is a difficult task and requires substantial training data---event extraction is the task of detecting certain specified types of events that are mentioned in the source language data
1
the statistical phrase-based systems were trained using the moses toolkit with mert tuning---we trained the statistical phrase-based systems using the moses toolkit with mert tuning
1
word sense disambiguation ( wsd ) is the task of automatically determining the correct sense for a target word given the context in which it occurs---in natural language , a word often assumes different meanings , and the task of determining the correct meaning , or sense , of a word in different contexts is known as word sense disambiguation ( wsd )
1
our trigram word language model was trained on the target side of the training corpus using the srilm toolkit with modified kneser-ney smoothing---we used the sri language modeling toolkit to train a five-gram model with modified kneser-ney smoothing
1
sentence compression is the task of compressing long sentences into short and concise ones by deleting words---this task is called sentence compression
1
regneri et al induce script knowledge from explicit esds using a graph-based method---clark and curran describe a log-linear glm for ccg parsing , trained on the penn treebank
0
recently , vaswani et al propose a novel sequence-to-sequence generation network , the transformer , which is entirely based on attention---experiments on two chinese treebanks showed that our approach outperformed the baseline
0
we use the pre-trained glove 50-dimensional word embeddings to represent words found in the glove dataset---their method is based on sentence clustering , originating from a similarity-based word sense disambiguation method developed by karov and edelman
0
we rely on conditional random fields for predicting one label per reference---for simplicity , we use the well-known conditional random fields for sequential labeling
1
ftd is typically diagnosed on the basis of the clinical observation of disorganized speech---to our knowledge , read-x is the first system that performs in real time a ) keyword search , b ) thematic classification and c ) analysis of reading difficulty
0
this resource can be used in machine translation and cross-lingual ir systems---although coreference resolution is a subproblem of natural language understanding , coreference resolution evaluation metrics have predominately been discussed in terms of abstract entities and hypothetical system errors
0
many recent studies in natural language processing have paid attention to rhetorical structure theory , a method of structured description of text---much recent work on language generation has made use of discourse representations based on rhetorical structure theory
1
first , we propose and evaluate three extra-linguistic modifications to the machine learning framework , which together provide substantial and statistically significant gains in coreference resolution precision---first , we propose three extra-linguistic modifications to the machine learning framework , which together consistently produce statistically significant gains in precision
1
we train a trigram language model with the srilm toolkit---we use the sri language modeling toolkit for language modeling
1
gong et al and xiao et al introduce topic-based similarity models to improve smt system---gong et al introduce topic model for filtering topic-mismatched phrase pairs
1
we use 300 dimension word2vec word embeddings for the experiments---for all three classifiers , we used the word2vec 300d pre-trained embeddings as features
1
we used the sri language modeling toolkit for this purpose---we train a trigram language model with the srilm toolkit
1
coreference resolution is a key task in natural language processing ( cite-p-13-1-8 ) aiming to detect the referential expressions ( mentions ) in a text that point to the same entity---coreference resolution is the task of determining which mentions in a text refer to the same entity
1
the standard polynomial-time solution to the assignment problem is the kuhn-munkres algorithm---an algorithm , the kuhn-munkres method , has been proposed that can find a solution to the optimum assignment problem in polynomial time
1
the composite kernel consists of an entity kernel and a convolution parse tree kernel---our trigram word language model was trained on the target side of the training corpus using the srilm toolkit with modified kneser-ney smoothing
0
these methods can not utilize the long distance information which is also crucial for word segmentation---these methods are extracted from a local context and neglect the long distance information
1
we train a 4-gram language model on the xinhua portion of english gigaword corpus by srilm toolkit---we use sri language modeling toolkit to train a 5-gram language model on the english sentences of fbis corpus
1
in this paper , we develop greedy algorithms for the task that are effective in practice---textual entailment is a directional relation between text fragments ( cite-p-18-1-6 ) which holds true when the truth of one text fragment , referred to as ‘ hypothesis ’ , follows from another , referred to as ‘ text ’
0
in this paper , we have considered new input sources for imt---target language models were trained on the english side of the training corpus using the srilm toolkit
0
( jiang et al , 2007 ) put forward a ptc framework based on the svm model---word sense disambiguation ( wsd ) is formally defined as the task of computationally identifying senses of a word in a context
0
the input to net are the pre-trained glove word embeddings of 300d trained on 840b tokens---the weights of the word embeddings use the 300-dimensional glove embeddings pre-trained on common crawl data
1
however , the classical algorithm by dale and haddock was shown to be unable to generate satisfying res in practice---word sense disambiguation ( wsd ) is a fundamental task and long-standing challenge in natural language processing ( nlp )
0
unpruned language models were trained using lmplz which employs modified kneser-ney smoothing---we further used a 5-gram language model trained using the srilm toolkit with modified kneser-ney smoothing
1
the parser uses the cky chart parsing algorithm described in steedman---pang et al for the first time applied machine learning techniques for sentiment classification
0
we optimise the feature weights of the model with minimum error rate training against the bleu evaluation metric---we perform minimum-error-rate training to tune the feature weights of the translation model to maximize the bleu score on development set
1
maximum entropy models implement the intuition that the best model is the one that is consistent with the set of constraints imposed by the evidence but otherwise is as uniform as possible---me models implement the intuition that the best model will be the one that is consistent with the set of constraints imposed by the evidence , but otherwise is as uniform as possible
1
we then perform mert process which optimizes the bleu metric , while a 5-gram language model is derived with kneser-ney smoothing trained with srilm---thus , we train a 4-gram language model based on kneser-ney smoothing method using sri toolkit and interpolate it with the best rnnlms by different weights
1
in particular , abstract meaning representation , is a novel representation of semantics---twitter is a rich resource for information about everyday events – people post their tweets to twitter publicly in real-time as they conduct their activities throughout the day , resulting in a significant amount of mundane information about common events
0
we represent each citation as a feature set in a support vector machine framework which has been shown to produce good results for sentiment classification---we represent each citation as a feature set in a support vector machine framework and use n-grams of length 1 to 3 as well as dependency triplets as features
1
sentiment analysis is a much-researched area that deals with identification of positive , negative and neutral opinions in text---sentiment analysis is a natural language processing ( nlp ) task ( cite-p-10-1-14 ) which aims at classifying documents according to the opinion expressed about a given subject ( federici and dragoni , 2016a , b )
1
relation extraction ( re ) is the process of generating structured relation knowledge from unstructured natural language texts---the system was trained using moses with default settings , using a 5-gram language model created from the english side of the training corpus using srilm
0
word alignment is the task of identifying corresponding words in sentence pairs---word alignment is a well-studied problem in natural language computing
1
yet smt translation quality still obviously suffers from inaccurate lexical choice---smt systems still suffer from inaccurate lexical choice
1
the language model was a kneser-ney interpolated trigram model generated using the srilm toolkit---a 4-gram language model was trained on the target side of the parallel data using the srilm toolkit from stolcke
1
the tlemma and formeme tms are an interpolation of maximum entropy discriminative models and simple conditional probability models---the t-lemma and formeme translation models are an interpolation of maximum entropy discriminative models of mareček et al and simple conditional probability models
1
transliteration is the task of converting a word from one alphabetic script to another---transliteration is the process of converting terms written in one language into their approximate spelling or phonetic equivalents in another language
1
these two steps will meet my goal of building a system that will extract social networks from news articles---coreference resolution is the task of grouping all the mentions of entities 1 in a document into equivalence classes so that all the mentions in a given class refer to the same discourse entity
0
faruqui and dyer introduced canonical correlation analysis to project the embeddings in both languages to a shared vector space---more concretely , faruqui and dyer use canonical correlation analysis to project the word embeddings in both languages to a shared vector space
1
in this paper , we described a phrase-based unigram model for statistical machine translation---in this paper , we describe a phrase-based unigram model for statistical machine translation
1
we adapted the moses phrase-based decoder to translate word lattices---cui et al developed an information theoretic measure based on dependency trees
0
the selection is made based on the scores of translation , language , and other models---typically , this selection is made based on translation scores , confidence estimations , language and other models
1
for the tree-based system , we applied a 4-gram language model with kneser-ney smoothing using srilm toolkit trained on the whole monolingual corpus---we used trigram language models with interpolated kneser-ney discounting trained using the sri language modeling toolkit
1
we also use a 4-gram language model trained using srilm with kneser-ney smoothing---a 4-gram language model was trained on the monolingual data by the srilm toolkit
1
tsvetkov , mukomel , and gershman and tsvetkov et al used coarse semantic features , such as concreteness , animateness , named-entity types , and wordnet supersenses---tsvetkov , mukomel , and gershman presented a supervised learning approach that makes use of coarse semantic features
1
word sense disambiguation is the task of assigning a sense to a word based on the context in which it occurs---twitter is the medium where people post real time messages to discuss on the different topics , and express their sentiments
0
we used the scikit-learn implementation of svrs and the skll toolkit---we used the scikit-learn toolkit , and we developed a framework to define functional classification models
1
we use the feature set that we described in pilán et al and for modeling linguistic complexity in l2 swedish texts---text categorization is the problem of automatically assigning predefined categories to free text documents
0
we use moses , a statistical machine translation system that allows training of translation models---tsl languages are also similarly learnable , given the stipulation that both the tier and math-w-3-6-0-134
0
we use the rmsprop optimization algorithm to minimize a loss function over the training data---we use the rmsprop optimization algorithm to minimize the mean squared error loss function over the training data
1
for the word-embedding based classifier , we use the glove pre-trained word embeddings---such frameworks include recursive auto-encoders , denoising autoencoders , and others
0
this score measures the precision of unigrams , bigrams , trigrams and fourgrams with respect to a reference translation with a penalty for too short sentences---the bleu score measures the precision of n-grams with respect to a reference translation with a penalty for too short sentences
1
feature weights were set with minimum error rate training on a tuning set using bleu as the objective function---the model weights were trained using the minimum error rate training algorithm
1
an eojeol is a surface level form consisting of more than one combined morpheme---for our baseline , we used a small parallel corpus of 30k english-spanish sentences from the europarl corpus
0
we suggest a compositional vector representation of parse trees that relies on a recursive combination of recurrent-neural network encoders---we suggest a compositional vector representation of parse trees that relies on a recursive combination of recurrent-neural network encoders , and demonstrate its effectiveness
1
prettenhofer and stein use the structural correspondence learning algorithm to learn a map between the source language and the target language---prettenhofer and stein use correspondence learning algorithm to learn a map between the source language and the target language
1
significant differences were found in readability judgments for sentences with and without their surrounding context---judgments on sentence difficulty , small but significant differences were found in how sentences are ranked with and without the surrounding passages
1
rather than associating each sentence in the training set to a single reference , we propose to consider a set of references encoding alternative syntactic representations---with a single gold reference , we propose to consider a set of references encoding alternative syntactic representations
1
after standard preprocessing of the data , we train a 3-gram language model using kenlm---crf training is usually performed through the l-bfgs algorithm and decoding is performed by the viterbi algorithm
0
more recently , a more efficient representation of multiple alignments was proposed in named weighted alignment matrices , which represents the alignment probability distribution over the words of each parallel sentence---more recently , the method described in produces improvements over the methods above , while reducing the computational cost by using weighted alignment matrices to represent the alignment distribution over each parallel sentence
1
the work described in this paper makes use of the hiero statistical mt framework---the work described in this paper is based on the smt framework of hierarchical phrase-based translation
1
we first trained a trigram bnlm as the baseline with interpolated kneser-ney smoothing , using srilm toolkit---we used the srilm toolkit to build unpruned 5-gram models using interpolated modified kneser-ney smoothing
1
for preprocessing the corpus , we use the stanford pos-tagger and parser included in the dkpro framework---shift-reduce parsing for cfg and dependency parsing have recently been studied , through approaches based essentially on deterministic parsing
0
in this paper , we propose bilingual tree kernels ( btks ) to model the bilingual translational equivalences , in our case , to conduct subtree alignment---in this paper , we explore syntactic structure features by means of bilingual tree kernels and apply them to bilingual subtree alignment
1
we optimise the feature weights of the model with minimum error rate training against the bleu evaluation metric---we use our reordering model for n-best re-ranking and optimize bleu using minimum error rate training
1
we propose a novel hierarchical entity-based approach to structuralize ugc in social media---we propose a hierarchical entity-based approach for structuralizing ugc in social media
1
sentiment analysis is a fundamental task and has attracted a huge amount of research in recent years---sentiment classification on these data has become a popular research topic over the past few years
1
the rule base utility was evaluated within two lexical expansion applications , yielding better results than other automatically constructed baselines and comparable results to wordnet---rule-base is shown to perform better than other automatically constructed baselines in a couple of lexical expansion and matching tasks
1
in our example , one should treat " page " , " plant " and " gibson " also as named-entity mentions and aim to disambiguate them together with " kashmir "---twitter is a rich resource for information about everyday events – people post their tweets to twitter publicly in real-time as they conduct their activities throughout the day , resulting in a significant amount of mundane information about common events
0
as a representative in chinese zero anaphora resolution , zhao and ng focused on anaphoricity determination and antecedent identification using feature-based methods---we use a pbsmt model built with the moses smt toolkit
0
neural networks , working on top of conventional n-gram back-off language models , have been introduced in as a potential means to improve conventional language models---neural networks , working on top of conventional n-gram back-off language models , have been introduced in as a potential means to improve discrete language models
1
for the word-embedding based classifier , we use the glove pre-trained word embeddings---we initialize the embedding weights by the pre-trained word embeddings with 200 dimensional vectors
0
we evaluated our model on the semeval-2010 task 8 dataset , which is an established benchmark for relation classification---to evaluate the performance of our proposed method , we use the semeval-2010 task 8 dataset
1
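A minimal sketch of how one might load and inspect rows in this format, assuming the table is saved as a CSV with the two columns described above. The file name pairs.csv, the pandas-based loading, and the reading of label 1/0 as matched/unmatched sentence pairs are illustrative assumptions, not part of the data.

```python
# Minimal sketch (assumptions: rows saved as pairs.csv with columns
# "text" and "label"; label semantics are inferred, not documented).
import pandas as pd

df = pd.read_csv("pairs.csv")  # columns: text (str), label (int 0/1)

# Each text value holds two sentences joined by "---"; split them apart.
df[["sentence_a", "sentence_b"]] = df["text"].str.split("---", n=1, expand=True)

# Quick sanity checks: class balance and a few example pairs.
print(df["label"].value_counts())
print(df[["sentence_a", "sentence_b", "label"]].head())
```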