text: string, lengths 82 to 736
label: int64, values 0 or 1
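Each row below stores two tokenized citation sentences joined by "---" in the text column, with a binary label; judging from the examples, 1 appears to mark pairs that express the same statement and 0 pairs that do not. A minimal sketch of loading and splitting such a dump, assuming the rows have been exported to a CSV file (the path citation_pairs.csv is hypothetical):

import pandas as pd

# Hypothetical export of the rows below; the path is an assumption of this sketch.
df = pd.read_csv("citation_pairs.csv")  # columns: text (string), label (int64)

# Each text value joins two citation sentences with "---";
# split once so a stray "---" inside a sentence cannot create extra columns.
pairs = df["text"].str.split("---", n=1, expand=True)
df["sentence_a"] = pairs[0].str.strip()
df["sentence_b"] = pairs[1].str.strip()

# Sanity-check against the schema above: lengths 82-736, labels in {0, 1}.
assert df["text"].str.len().between(82, 736).all()
assert df["label"].isin([0, 1]).all()

print(df[["sentence_a", "sentence_b", "label"]].head())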
in this paper , we present a method for decoding complete documents in phrase-based smt---in this paper , we have presented a decoding procedure for phrase-based smt
1
when training a classifier for one label , predictions-as-features methods can model dependencies between former labels and the current label , but they can't model dependencies between the current label and the latter labels---for one label , the predictions-as-features methods can model dependencies between former labels and the current label , but they can ' t model dependencies between the current label and the latter labels
1
to perform word alignment between languages l1 and l2 , we introduce a third language l3---the idea of distant supervision has widely used in the task of relation extraction
0
we use minimal error rate training to maximize bleu on the complete development data---we used a phrase-based smt model as implemented in the moses toolkit
0
we use gibbs sampling for parameter estimation , which is more principled than the neighborhood method used in ibm model 4---method used in our model is more principled than the heuristic-based neighborhood method in ibm model
1
the first algebra to compose semantic primitives was proposed by huhns and stephens---soricut and marcu use a standard bottomup chart parsing algorithm to determine the discourse structure of sentences
0
wikipedia is a free multilingual online encyclopedia and a rapidly growing resource---we evaluate kale with link prediction and triple classification tasks
0
diaz used score regularisation to adjust document retrieval rankings from an initial retrieval by a semisupervised learning method---diaz used score regularization to adjust document retrieval rankings from an initial retrieval by a semisupervised learning method
1
we have developed the textevaluator system for providing text complexity and common core-aligned readability information---we have presented textevaluator , a tool capable of analyzing almost any written text , for which it provides in-depth information into the text ’ s readability and complexity
1
spectral analysis is the backbone of several techniques , such as multidimensional scaling , principle component analysis and latent semantic analysis , that are commonly used in nlp---thus , we believe that spectral analysis is a promising approach that is well suited to the discovery of linguistic principles underlying a set of observations represented as a network of entities
1
the data consists of sections of the wall street journal part of the penn treebank , with information on predicate-argument structures extracted from the propbank corpus---the data comes from the conll 2000 shared task , which consists of sentences from the penn treebank wall street journal corpus
1
stance detection is a difficult task since it often requires reasoning in order to determine whether an utterance is in favor of or against a specific issue---stance detection is the task of automatically determining whether the authors of a text are against or in favour of a given target
1
for the sentence matching tasks , we initialized the word embeddings with 50-dimensional glove word vectors pretrained from wikipedia 2014 and gigaword 5 for all model variants---since the similarity calculations in our framework involves vectorial representations for each word , we trained 300 dimensional glove vectors on the chinese gigaword corpus
1
we used the svd implementation provided in the scikit-learn toolkit---we trained a 3-gram language model on all the correct-side sentences using kenlm
0
for systems evaluation , we also use bleu score through the scripts at moses---su et al presented a clustering method that utilizes the mutual reinforcement associations between features and opinion words
0
we trained two 5-gram language models on the entire target side of the parallel data , with srilm---we trained a 5-gram sri language model using the corpus supplied for this purpose by the shared task organizers
1
mcclosky et al presented a successful instance of parsing with self-training by using a reranker---mcclosky et al presented a self-training method combined with a reranking algorithm for constituency parsing
1
the feature weights λm are tuned with minimum error rate training---the corresponding weight is trained through minimum error rate method
1
however , the classical algorithm by dale and haddock was shown to be unable to generate satisfying res in practice---however , the classical algorithm by dale and haddock was recently shown to be unable to generate satisfying res in practice
1
rothe and schütze proposed a method that learns sense embedding using word embeddings and the sense inventory of wordnet---riedel et al present an approach for extracting bio-molecular events and their arguments using markov logic
0
sarcasm , commonly defined as ‘ an ironical taunt used to express contempt ’ , is a challenging nlp problem due to its highly figurative nature---since sarcasm is a refined and indirect form of speech , its interpretation may be challenging for certain populations
1
for evaluation , caseinsensitive nist bleu is used to measure translation performance---we use case-sensitive bleu to assess translation quality
1
our work is inspired by the successful application of word clustering in supervised nlp models---our model is inspired by recent work in learning distributed representations of words
1
we use moses , a statistical machine translation system that allows training of translation models---we develop translation models using the phrase-based moses smt system
1
for training the language identification component , we used the european parliament proceedings parallel corpus which covers the proceedings of the european parliament from 1996 to 2006---we used the publicly available europarl corpus that contains proceedings of the european parliament in the different official languages
1
the human-annotated labels that accompany media on flickr enable us to acquire predicate-argument co-occurrence information---human-annotated image and video descriptions allow us to investigate what types of verb – noun relations are in principle present in the visual data
1
we trained a 5-gram language model on the xinhua portion of gigaword corpus using the srilm toolkit---we train a 4-gram language model on the xinhua portion of the gigaword corpus using the sri language toolkit with modified kneser-ney smoothing
1
coreference resolution is the task of automatically grouping references to the same real-world entity in a document into a set---coreference resolution is a field in which major progress has been made in the last decade
1
for the semantic language model , we used the srilm package and trained a tri-gram language model with the default goodturing smoothing---the proposed system is based on the phrase-based log-linear translation model
0
the brown algorithm is a hierarchical clustering algorithm which clusters words to maximize the mutual information of bigrams---brown clustering is a hierarchical clustering method that groups words into a binary tree of classes
1
twitter is a huge microbloging service with more than 500 million tweets per day 1 from different locations in the world and in different languages---twitter is a social platform which contains rich textual content
1
bilingual lexicons play an important role in many natural language processing tasks , such as machine translation and cross-language information retrieval---bilingual lexicons are an important resource in multilingual natural language processing tasks such as statistical machine translation and cross-language information retrieval
1
in this work , we propose a novel nonparametric estimator of vocabulary size---we see that our estimator compares favorably with the best estimator of vocabulary size
1
language models ( lms ) are statistical models that calculate probabilities over sequences of words or other discrete symbols---language models ( lms ) are statistical models that , given a sentence math-w-2-1-0-13 , calculate its probability
1
our model is a structured conditional random field---our system is based on the conditional random field
1
we initialize these word embeddings with glove vectors---we used moses as the implementation of the baseline smt systems
0
the word embeddings are identified using the standard glove representations---the model parameters in word embedding are pretrained using glove
1
relation extraction is the task of detecting and characterizing semantic relations between entities from free text---and then run a dp algorithm on the zdd to obtain the optimal solution that satisfies the length limit
0
in such cases , the proposed tool is effective to identify essential tree structure patterns---by using the proposed tool , users develop tree structure patterns
1
we use the moses toolkit to train our phrase-based smt models---we use the popular moses toolkit to build the smt system
1
janus is a natural language understanding and generation system that allows the user to interface with several knowledge bases maintained by the u.s. navy---for the evaluation of the results we use the bleu score
0
many words have multiple meanings , and the process of identifying the correct meaning , or sense of a word in context , is known as word sense disambiguation ( wsd )---word sense disambiguation ( wsd ) is a difficult natural language processing task which requires that for every content word ( noun , adjective , verb or adverb ) the appropriate meaning is automatically selected from the available sense inventory 1
1
thirdly , we design a new kernel for relation detection by integrating the relation topics into the relation detector construction---thirdly , we design a new kernel for relation detection by integrating the relation topics
1
the pioneering work of ramshaw and marcus introduced np chunking as a machine-learning problem , with standard datasets and evaluation metrics---ramshaw and marcus first represented base noun phrase recognition as a machine learning problem
1
the proposed approach modifies the supervised boosting algorithm to a semi-supervised learning algorithm by incorporating the unlabeled data---proposed approach modifies the supervised adaboost algorithm to a semi-supervised learning algorithm by incorporating the unlabeled data
1
we use word2vec as the vector representation of the words in tweets---in order to cluster lexical items , we use the algorithm proposed by brown et al , as implemented in the srilm toolkit
0
many researchers have attempted to make use of cue phrases , especially for segmentation both in prose and conversation---in addition , we utilize the pre-trained word embeddings with 300 dimensions from for initialization
0
the parameters are optimized with adagrad under a cosine proximity objective function---weights are optimized by the gradient-based adagrad algorithm with a mini-batch
1
a quite detailed analysis of the most commonly used inter-annotator agreement coefficients is provided by artstein and poesio---artstein and poesio provides a comprehensive survey of the iaa metrics and their usage in nlp
1
it has already proven successful for several tasks in computer vision ( cite-p-9-1-0 , cite-p-9-1-1 ) and natural language processing---that has already proven successful in solving a number of relational tasks in natural language processing
1
with this data , we can investigate whether the relationship between personal traits and brand preferences varies across multiple product categories---in this study , we focus on investigating the feasibility of using automatically inferred personal traits in large-scale brand preference
1
social media is a popular public platform for communicating , sharing information and expressing opinions---social media is a valuable source for studying health-related behaviors ( cite-p-11-1-8 )
1
the results are reported in bleu and ter scores---the translations are evaluated in terms of bleu score
1
shen et al proposed a target dependency language model for smt to employ target-side structured information---shen et al proposed to use linguistic knowledge expressed in terms of a dependency grammar , instead of a syntactic constituency grammar
1
we present our uwb system for the task of capturing discriminative attributes at semeval 2018---we described our uwb system participating in semeval 2018 shared task for capturing discriminative attributes
1
dave et al , riloff and wiebe , bethard et al , pang and lee , wilson et al , yu and hatzivassiloglou ,---dave et al , riloff and wiebe , bethard et al , wilson et al , yu and hatzivassiloglou , choi et al , kim and hovy , wiebe and riloff ,
1
the bleu metric has been widely accepted as an effective means to automatically evaluate the quality of machine translation outputs---the bleu metric has deeply rooted in the machine translation community and is used in virtually every paper on machine translation methods
1
we use the 200-dimensional global vectors , pre-trained on 2 billion tweets , covering over 27-billion tokens---for the neural models , we use 100-dimensional glove embeddings , pre-trained on wikipedia and gigaword
1
disambiguation of acronyms is a special case of the more general problem of word sense disambiguation---word sense disambiguation is a popular way to evaluate polysemous word representations
1
dependency parses are obtained from the stanford parser---sentences are tagged and parsed using the stanford dependency parser
1
relation extraction ( re ) is the process of generating structured relation knowledge from unstructured natural language texts---the target-side language models were estimated using the srilm toolkit
0
in this paper , we obtain syntactic clusters from the berkeley parser---we use the latent variable grammar implementation of huang and harper in this work
1
for both languages , we used the srilm toolkit to train a 5-gram language model using all monolingual data provided---we used a 5-gram language model with modified kneser-ney smoothing , built with the srilm toolkit
1
in the penn treebank , null elements , or empty categories , are used to indicate non-local dependencies , discontinuous constituents , and certain missing elements---in treebanks , empty categories have been used to indicate long-distance dependencies , discontinuous constituents , and certain dropped elements
1
we use the 300-dimensional pre-trained word2vec 3 word embeddings and compare the performance with that of glove 4 embeddings---we use the 200-dimensional global vectors , pre-trained on 2 billion tweets , covering over 27-billion tokens
1
speech repair is a phenomenon in spontaneous spoken language in which a speaker decides to interrupt the flow of speech , replace some of the utterance ( the “ reparandum ” ) , and continues on ( with the “ alteration ” ) in a way that makes the whole sentence as transcribed grammatical only if the reparandum is ignored---text categorization is the problem of automatically assigning predefined categories to free text documents
0
semantic role labeling ( srl ) is the process of assigning semantic roles to strings of words in a sentence according to their relationship to the semantic predicates expressed in the sentence---semantic role labeling ( srl ) is the task of labeling predicate-argument structure in sentences with shallow semantic information
1
we used moses as the phrase-based machine translation system---we applied the ems in moses to build up the phrase-based translation system
1
barzilay and mckeown used a monolingual parallel corpus to obtain paraphrases---barzilay and mckeown extract both singleand multiple-word paraphrases from a monolingual parallel corpus
1
in general if a query term has a low-frequency in the corpus , then its context vector is sparse---the nist mt03 test set was used as development set for optimizing the interpolation weights using mer training
0
given ( g , w ) as input , m ( ~ ) tests whether g ∈ 2-lcfrs ( k ) ( which is trivial ) ; if the test fails , m ( t ) rejects , otherwise it simulates m on input ( g , w )---as input , m ( ~ ) tests whether g ∈ 2-lcfrs ( k ) ( which is trivial ) ; if the test fails , m ( t ) rejects , otherwise
1
text categorization is the task of assigning a text document to one of several predefined categories---text categorization is the task of automatically assigning predefined categories to documents written in natural languages
1
word sense disambiguation ( wsd ) is a key task in computational lexical semantics , inasmuch as it addresses the lexical ambiguity of text by making explicit the meaning of words occurring in a given context ( cite-p-18-3-10 )---word sense disambiguation ( wsd ) is the problem of assigning a sense to an ambiguous word , using its context
1
sentiment analysis is the study of the subjectivity and polarity ( positive vs. negative ) of a text ( cite-p-7-1-10 )---sentiment analysis is a natural language processing ( nlp ) task ( cite-p-10-1-14 ) which aims at classifying documents according to the opinion expressed about a given subject ( federici and dragoni , 2016a , b )
1
we use moses toolkit for pbsmt training and sockeye toolkit for nmt training---we use the moses software package 5 to train a pbmt model
1
topic models , such as plsa and lda , have shown great success in discovering latent topics in text collections---topic models have recently been applied to information retrieval , text classification , and dialogue segmentation
1
we used an l2-regularized l2-loss linear svm to learn the attribute predictions---this dictionary is not easy to employ for nlp use but work in progress is aimed at addressing this problem
0
relation extraction is a challenging task in natural language processing---relation extraction ( re ) is the task of identifying instances of relations , such as nationality ( person , country ) or place of birth ( person , location ) , in passages of natural text
1
on the other hand , several cross-linguistic experiments have indicated that mental representation and processing of polymorphemic words are not language independent---on the other hand , experiments indicate that mental representation and processing of morphologically complex words are not quite language independent
1
we also demonstrate an accompanying plug-in for the protégé ontology editor , which can be used to create the ontology ’ s annotations and generate previews of the resulting texts by invoking the generation engine---the language model is a 5-gram lm with modified kneser-ney smoothing
0
learning from errors is a crucial aspect of improving expertise---if learning from errors is a crucial aspect of improving expertise
1
more generally , collocations are a frequent type of multiword expression , a sequence of words that presents some lexical , syntactic , semantic , pragmatic or statistical idiosyncrasies---a multiword expression is any combination of words with lexical , syntactic or semantic idiosyncrasy , in that the properties of the mwe are not predictable from the component words
1
coreference resolution is a multi-faceted task : humans resolve references by exploiting contextual and grammatical clues , as well as semantic information and world knowledge , so capturing each of these will be necessary for an automatic system to fully solve the problem---coreference resolution is a key problem in natural language understanding that still escapes reliable solutions
1
we then obtain dependency parses by converting these constituency parses using the stanford converter---we trained a standard 5-gram language model with modified kneser-ney smoothing using the kenlm toolkit on 4 billion running words
0
feature weights were set with minimum error rate training on a tuning set using bleu as the objective function---feature weights are tuned with mert on the development set and output is evaluated using case-sensitive bleu
1
semantic role labeling ( srl ) is the process of extracting simple event structures , i.e. , “ who ” did “ what ” to “ whom ” , “ when ” and “ where ”---semantic role labeling ( srl ) is the process of producing such a markup
1
relation extraction is the problem of populating a target relation ( representing an entity-level relationship or attribute ) with facts extracted from natural-language text---relation extraction is the key component for building relation knowledge graphs , and it is of crucial significance to natural language processing applications such as structured search , sentiment analysis , question answering , and summarization
1
we use a set of 318 english function words from the scikit-learn package---we implement classification models using keras and scikit-learn
1
we train a 4-gram language model on the xinhua portion of english gigaword corpus by srilm toolkit---for language model , we used sri language modeling toolkit to train a 4-gram model with modified kneser-ney smoothing
1
the state-of-the-art unsupervised berkeley aligner with default setting is used to construct word alignments---the state-of-the-art unsupervised berkeley aligner 3 lexicalized reordering gives better performance than simple distance-based reordering
1
the approach of marking ambiguities resembles that proposed by kameyama---the resolveipa approach of indicating possible reference ambiguities resembles that proposed by kameyama
1
for our experiments we used the moses phrasebased smt toolkit with default settings and features , including the five features from the translation table , and kb-mira tuning---parsing is the task of reconstructing the syntactic structure from surface text
0
we use the moses toolkit to create a statistical phrase-based machine translation model built on the best pre-processed data , as described above---we then evaluate the effect of word alignment on machine translation quality using the phrase-based translation system moses
1
for feature extraction , we parse the french part of our training data using the berkeley parser and lemmatize and pos tag it using morfette---we experimentally evaluate the paragraph vector model proposed by le and mikolov
0
we used moses tokenizer 5 and truecaser for both languages---knowledge graphs such as wordnet , freebase and yago have been playing a pivotal role in many ai applications , such as relation extraction , question answering , etc
0
we use skip-gram with negative sampling for obtaining the word embeddings---to get a dictionary of word embeddings , we use the word2vec tool 2 and train it on the chinese gigaword corpus
0
on the other hand , agarwal , shah , and mannem considered the question generation problem beyond the sentence level and designed an approach that uses discourse connectives to generate questions from a given text---on the other hand , agarwal et al considered the question generation problem beyond sentence level and proposed an approach that uses discourse connectives to generate questions from a given text
1
we analyze the accuracy of sentence selection at each step---for the second step , sentence selection adopts a particular strategy to choose content
1
we attempt to both reproduce the results of said technique , as well as extend the previous work with application to a newly-created domain of biographical data---we substitute our language model and use mert to optimize the bleu score
0
we employed the glove as the word embedding for the esim---we use theano and pretrained glove word embeddings
1