| text (string, length 82-736) | label (int64, 0 or 1) |
|---|---|
coreference resolution is the process of linking multiple mentions that refer to the same entity---in this work , we use a class-factored output layer consisting of a class layer and a word layer | 0 |
wordnet is a general english thesaurus which additionally covers biological terms---marciniak and strube propose an ilp model for global optimization in a generation task that is decomposed into a set of classifiers | 0 |
semantic parsing is the task of translating natural language utterances into a machine-interpretable meaning representation ( mr )---semantic parsing is the task of mapping natural language sentences to a formal representation of meaning | 1 |
we employ normalised pointwise mutual information which outperforms other metrics in measuring topic coherence---we use the pmi score to evaluate the quality of topics learnt by topic models | 1 |
the statistical-machine translation approaches were implemented using the moses toolkit---all smt models were developed using the moses phrase-based mt toolkit and the experiment management system | 1 |
1 we evaluate the method using the data from the english lexical substitution task for semeval-2007---brown clustering is an agglomerative algorithm that induces a hierarchical clustering of words | 0 |
we use byte-pair encoding with 30k operations to bpe the en side---we use a joint source and target byte-pair encoding with 10k merge operations | 1 |
semantic role labeling ( srl ) has been defined as a sentence-level natural-language processing task in which semantic roles are assigned to the syntactic arguments of a predicate ( cite-p-14-1-7 )---markov models were trained with modified kneser-ney smoothing as implemented in srilm | 0 |
in this article we present a method that tackles sentence boundaries , capitalized words , and abbreviations in a uniform way through a document-centered approach---we presented an approach that tackles three important aspects of text normalization : sentence boundary disambiguation , disambiguation of capitalized words | 1 |
in this paper , we explore two approaches which require no or only a very small amount of manually labelled training data---in this study , we experimented with two machine learning techniques that do not require such annotated training data , but can be trained on | 1 |
a 3-gram language model was trained from the target side of the training data for chinese and arabic , using the srilm toolkit---the language models were created using the srilm toolkit on the standard training sections of the ccgbank , with sentenceinitial words uncapitalized | 1 |
we obtained distributed word representations using word2vec 4 with skip-gram---we compared sn models with two different pre-trained word embeddings , using either word2vec or fasttext | 1 |
keyphrases are useful in many tasks such as information retrieval , document summarization or document clustering---keyphrases are useful for a variety of tasks such as summarization , information retrieval and document clustering | 1 |
we used the moses toolkit to train the phrase tables and lexicalized reordering models---we use the moses toolkit to train our phrase-based smt models | 1 |
we then perform mert which optimizes parameter settings using the bleu metric , while a 5-gram language model is derived with kneser-ney smoothing trained using srilm---thus , we train a 4-gram language model based on kneser-ney smoothing method using sri toolkit and interpolate it with the best rnnlms by different weights | 1 |
the lingo grammar matrix is situated theoretically within head-driven phrase structure grammar , a lexicalist , constraint-based framework---the grammatical framework for the krg is head-driven phrase structure grammar , a non-derivational , constraintbased , and surface-oriented grammatical architecture | 1 |
we used the implementation of random forest in scikitlearn as the classifier---for nb and svm , we used their implementation available in scikit-learn | 1 |
bunescu and mooney propose a shortest path dependency kernel for relation extraction---bunescu and mooney proposed a shortest path dependency kernel | 1 |
knowledge bases such as freebase and yago play a pivotal role in many nlp related applications---knowledge graphs like wordnet , freebase , and dbpedia have become extremely useful resources for many nlp-related applications | 1 |
language models were estimated using the sri language modeling toolkit with modified kneser-ney smoothing---we use large 300-dim skip gram vectors with bag-of-words contexts and negative sampling , pre-trained on the 100b google news corpus | 0 |
wang et al presented a syntactic tree matching method for finding similar questions---wang et al computed their similarity function on the syntactic-tree representations of the questions | 1 |
in smt the most extended language models are n-grams---twitter is a huge microblogging service with more than 500 million tweets per day from different locations of the world and in different languages ( cite-p-10-1-6 ) | 0 |
components of w mt -13 and w mt -14 quality estimation shared tasks are replicated to reveal substantially increased conclusivity in system rankings , including identification of outright winners of tasks---through replication of components of w mt-13 and w mt-14 quality estimation shared tasks , revealing substantially increased conclusivity of system rankings | 1 |
the automatic classification results were compared with a simple baseline method , against human judgement as the gold standard---automatic classification results were compared with a baseline method and with the manual judgement of several linguistics students | 1 |
bengio et al presented a neural network language model where word embeddings are simultaneously learned along with a language model---word sense disambiguation ( wsd ) is a task to identify the intended sense of a word based on its context | 0 |
h r on a synonym choice task , where it outperforms the standard bag-of-word model for nouns and verbs---h r on a synonym choice task , where math-w-7-1-0-70 outperformed the bag-of-word model | 1 |
it has been shown that user opinions about products , companies and politics can be influenced by opinions posted by other online users---it has been shown that user opinions about products , companies and politics can be influenced by posts by other users in online forums and social networks | 1 |
the disambiguated text is processed with the word2vec toolkit 5---our language model , p , is a kneser-ney smoothed character n-gram model | 0 |
to this end , we use conditional random fields---we selected conditional random fields as the baseline model | 1 |
5 the number of unrestricted dependency trees on n nodes is given by sequence a000169 , the number of well-nested dependency trees is given by sequence a113882 in the online encyclopedia of integer sequences ( cite-p-17-4-16 )---5 the number of unrestricted dependency trees on n nodes is given by sequence a000169 , the number of well-nested dependency trees is given by sequence a113882 | 1 |
aspect extraction is a task to abstract the common properties of objects from corpora discussing them , such as reviews of products---aspect extraction is a central problem in sentiment analysis | 1 |
we use minimal error rate training to maximize bleu on the complete development data---we use mt02 as the development set 4 for minimum error rate training | 1 |
this maximum matching problem can be solved using the hungarian algorithm---this combinatorial optimisation problem can be solved in polynomial time through the hungarian algorithm | 1 |
we used a phrase-based smt model as implemented in the moses toolkit---the most common type of deterministic connectionist network is a back propagation network | 0 |
our model focuses on entity-pair level noise while previous models only dealt with sentence level noise---to the best of our knowledge , we first propose an entity-pair level noise-tolerant method while previous works only focused on sentence level noise | 1 |
the weights of the word embeddings use the 300-dimensional glove embeddings pre-trained on common crawl data---for the character-based model we use publicly available pre-trained character embeddings 3 de- rived from glove vectors trained on common crawl | 1 |
we propose a framework to select and rank mandatory matching phrases ( mmp ) for question answering---firstly , we propose a framework to select and rank important question phrases ( mmps ) for question answering | 1 |
translation results are evaluated using the word-based bleu score---the language model is trained on the target side of the parallel training corpus using srilm | 0 |
sentence compression is the task of compressing long , verbose sentences into short , concise ones---statistical machine translation systems employ a word-based alignment model | 0 |
for hindi , the ma by bharati et al is most widely used among the nlp researchers in the indian community---the paradigm based analyzer by bharati et al is one of the most widely used applications among researchers in the indian nlp community | 1 |
word embeddings have recently gained popularity among natural language processing community---sri language modeling toolkit was employed to train 5-gram english and japanese lms on the training set | 0 |
propbank proposes a general purpose annotation schema , based on annotating predicates as the main semantic constituents of a sentence---propbank is a corpus in which the arguments of each verb predicate are annotated with their semantic roles | 1 |
we apply reinforce to directly optimize the task reward of this structured prediction problem---which is composed of three cascaded components : the tagging of sr phrase , the identification of semantic-role-phrase and semantic dependency parsing | 0 |
we used kenlm with srilm to train a 5-gram language model based on all available target language training data---in our experiments , we used the srilm toolkit to build 5-gram language model using the ldc arabic gigaword corpus | 1 |
twitter is a popular microblogging service , which , among other things , is used for knowledge sharing among friends and peers---twitter is a huge microblogging service with more than 500 million tweets per day from different locations of the world and in different languages ( cite-p-8-1-9 ) | 1 |
the keyphrases are semantically relevant with the document theme---the extracted keyphrases have a good coverage of the document | 1 |
in this paper , we propose a joint inference framework that utilizes these global clues to resolve disagreements among local predictions---in this paper , we make use of the global clues derived from kb to help resolve the disagreements among local relation predictions , thus reduce the incorrect predictions | 1 |
we conducted baseline experiments for phrasebased machine translation using the moses toolkit---locations coupled with predefined goal-acts , we want to learn the goal-acts for new locations | 0 |
semantic parsing is the task of translating natural language utterances into a machine-interpretable meaning representation ( mr )---in phrase-based smt models , phrases are used as atomic units for translation | 0 |
coreference resolution is the task of partitioning a set of mentions ( i.e . person , organization and location ) into entities---coreference resolution is a key problem in natural language understanding that still escapes reliable solutions | 1 |
further , we apply a 4-gram language model trained with the srilm toolkit on the target side of the training corpus---we then lowercase all data and use all unique headlines in the training data to train a language model with the srilm toolkit | 1 |
we use the penn treebank as the linguistic data source---we assume the part-of-speech tagset of the penn treebank | 1 |
row 1 and row 2 are the baseline systems , which model the relevance ranking using bm25 and language model in the term space---row 1 and row 2 are two baseline systems , which model the relevance score using vsm and language model in the term space | 1 |
in nlp , such methods are primarily based on learning a distributed representation for each word , which is also called a word embeddings---in nlp , such methods are primarily based on learning a distributed representation for each word , which is also called a word embedding | 1 |
coreference resolution is a fundamental component of natural language processing ( nlp ) and has been widely applied in other nlp tasks ( cite-p-15-3-9 )---coreference resolution is the process of linking multiple mentions that refer to the same entity | 1 |
chinese is a meaning-combined language with very flexible syntax , and semantics are more stable than syntax---it is well-known that chinese is a pro-drop language , meaning pronouns can be dropped from a sentence without causing the sentence to become ungrammatical or incomprehensible when the identity of the pronoun can be inferred from the context | 1 |
we obtain these dependency constructions by implementing a distantly supervised pattern extraction approach---coreference resolution is the task of grouping mentions to entities | 0 |
as our machine learning component we use liblinear with a l2-regularised l2-loss svm model---we rely on a support vector machine , in particular on a liblinear implementation with l2-regularization , to train our supervised model | 1 |
sentiwordnet describes itself as a lexical resource for opinion mining---sentiwordnet is a large lexicon for sentiment analysis and opinion mining applications | 1 |
chen et al extracted different types of subtrees from the auto-parsed data and used them as new features in standard learning methods---hierarchical phrase-based translation was proposed by chiang | 0 |
experiments show that our proposed model significantly improves the translation performance over the state-of-the-art nmt model---experiments show that our proposed model obtains considerable bleu score improvements upon an attention-based nmt baseline | 1 |
however , ahmed et al proposed a framework to group temporally and tocipally related news articles into same story clusters in order to reveal the temporal evolution of stories---ahmed et al proposed a unified framework to group temporally and topically related news articles into same storylines in order to reveal the temporal evolution of events | 1 |
we used the nematus nmt system 5 to train an attentional encoderdecoder network---we used the sub-word neural machine translation toolkit nematus for training the nmt system | 1 |
lda is a generative probabilistic model where documents are viewed as mixtures over underlying topics , and each topic is a distribution over words---lda is a widely used topic model , which views the underlying document distribution as having a dirichlet prior | 1 |
semantic role labeling ( srl ) has been defined as a sentence-level natural-language processing task in which semantic roles are assigned to the syntactic arguments of a predicate ( cite-p-14-1-7 )---semantic role labeling ( srl ) is the process of extracting simple event structures , i.e. , " who " did " what " to " whom " , " when " and " where " | 1 |
our training data is the switchboard portion of the english penn treebank corpus , which consists of telephone conversations about assigned topics---our labeled data comes from the penn treebank and consists of about 40,000 sentences from wall street journal articles annotated with syntactic information | 1 |
finally , we extract the semantic phrase table from the augmented aligned corpora using the moses toolkit---we preprocessed the corpus with tokenization and true-casing tools from the moses toolkit | 1 |
the log-linear feature weights are tuned with minimum error rate training on bleu---cui et al proposed a joint model to select hierarchical rules for both source and target sides | 0 |
word sense disambiguation ( wsd ) is the task of determining the correct meaning ( " sense " ) of a word in context , and several efforts have been made to develop automatic wsd systems---word sense disambiguation ( wsd ) is a key task in computational lexical semantics , inasmuch as it addresses the lexical ambiguity of text by making explicit the meaning of words occurring in a given context ( cite-p-18-3-10 ) | 1 |
for all experiments , we used a 4-gram language model with modified kneser-ney smoothing which was trained with the srilm toolkit---we used a 5-gram language model with modified kneser-ney smoothing , built with the srilm toolkit | 1 |
semantic parsing is the task of converting a sentence into a representation of its meaning , usually in a logical form grounded in the symbols of some fixed ontology or relational database ( cite-p-21-3-3 , cite-p-21-3-4 , cite-p-21-1-11 )---for this language model , we built a trigram language model with kneser-ney smoothing using srilm from the same automatically segmented corpus | 0 |
reranking has become a popular technique for solving various structured prediction tasks , such as phrase-structure and dependency parsing , semantic role labeling and machine translation---while reranking has benefited many tagging and parsing tasks including semantic role labeling , it has not yet been applied to semantic parsing | 1 |
our baseline decoder is an in-house implementation of bracketing transduction grammar in cky-style decoding with a lexical reordering model trained with maximum entropy---we use an in-house implementation of the bracketing transduction grammar model as the phrase-based model that our method relies on for translation | 1 |
continuous representation of words and phrases are proven effective in many nlp tasks---importantly , word embeddings have been effectively used for several nlp tasks | 1 |
nli is a fundamentally important problem that has applications in many tasks including question answering , semantic search and automatic text summarization---nli is a task choosing one of relationships ( entailment , contradiction , neutral ) between two sentences | 1 |
we train a linear support vector machine classifier using the efficient liblinear package---we experiment with linear kernel svm classifiers using liblinear | 1 |
blitzer et al proposed structural correspondence learning to identify the correspondences among features between different domains via the concept of pivot features---for the language model , we used srilm with modified kneser-ney smoothing | 0 |
to estimate the weights δ½ i in formula , we use the minimum error rate training algorithm , which is widely used for phrasebased smt model training---for the language model , we used srilm with modified kneser-ney smoothing | 0 |
the language model is a trigram-based backoff language model with kneser-ney smoothing , computed using srilm and trained on the same training data as the translation model---although wordnet is a fine resources , we believe that ignoring other thesauri is a serious oversight | 0 |
in recent years there has been increasing interest in improving the quality of smt systems over a wide range of linguistic phenomena , including coreference resolution and modality---in recent years , there has been increasing interest in improving the quality of smt systems over a wide range of linguistic phenomena , including coreference resolution and modality | 1 |
santos et al proposed a ranking cnn model , which is trained by a pairwise ranking loss function---zeng et al and dos santos et al respectively proposed a standard and a ranking-based cnn model based on the raw word sequences | 1 |
in this paper , we address the problem for predicting cqa answer quality as a classification task---in this paper , we have provided a new perspective to predict the cqa answer quality | 1 |
we used the icsi meeting corpus , which contains naturally occurring meetings , each about an hour long---hypernym discovery is the task of identifying potential hypernyms for a given term | 0 |
we train distributional similarity models with word2vec for the source and target side separately---then we train word2vec to represent each entity with a 100-dimensional embedding vector | 1 |
however , the authors assume that the input texts to parse are transcribed by human annotators , which , in practice , is unrealistic---most authors make the unrealistic assumption that input texts are transcribed by human annotators | 1 |
our word embeddings is initialized with 100-dimensional glove word embeddings---for word-level embedding e w , we utilize pre-trained , 300-dimensional embedding vectors from glove 6b | 1 |
it has been trained with the srilm toolkit on the target side of all the training data---the language model is trained and applied with the srilm toolkit | 1 |
the top-down method had better bleu scores for 7 language pairs without relying on supervised syntactic parsers compared to other preordering methods---with the top-down method had statistically significantly higher bleu scores for 7 language pairs without relying on supervised syntactic parsers , compared to baseline systems using existing preordering methods | 1 |
there are several corpora of reasonable size which include semantic annotation on some level , such as propbank , framenet , and the penn discourse treebank---examples of well-known srl schemes motivated by different linguistic theories are framenet , propbank , and verbnet | 1 |
abstract meaning representation is a popular framework for annotating whole sentence meaning---in recent years , error mining approaches were developed to help identify the most likely sources of parsing failures | 0 |
as with many previous statistical parsers , we use a history-based model of parsing---as with several previous statistical parsers , we use a generative history-based probability model of parsing | 1 |
ittycheriah and roukos used a maximum entropy classifier to train an alignment model using hand-labeled data---ittycheriah and roukos proposed to use only manual alignment links in a maximum entropy model | 1 |
translation performance is measured using the automatic bleu metric , on one reference translation---we measure the translation quality with automatic metrics including bleu and ter | 1 |
a popular statistical machine translation paradigms is the phrase-based model---phrase-based translation models are widely used in statistical machine translation | 1 |
our machine translation system is a phrase-based system using the moses toolkit---we use the moses toolkit to train our phrase-based smt models | 1 |
we trained two 5-gram language models on the entire target side of the parallel data , with srilm---a 4-gram language model was trained on the target side of the parallel data using the srilm toolkit from stolcke | 1 |
semantic role labeling ( srl ) consists of finding the arguments of a predicate and labeling them with semantic roles ( cite-p-9-1-5 , cite-p-9-3-0 )---first show that the standard dependency grammar does not account for the full range of syntactic structures manifested by queries with question intent | 0 |
this is done by training a multiclass support vector machine classifier implemented in the svmmulticlass package by joachims---for pcfg parsing , we select the berkeley parser | 0 |
this paper describes the semeval 2018 shared task on semantic extraction from cybersecurity reports , which is introduced for the first time as a shared task on semeval---in this work , we have presented the results of semeval 2018 shared task on semantic extraction from cybersecurity reports | 1 |
from combination point of view , the newly proposed model can be considered as a novel method going beyond the conventional post-decoding style combination methods---an important aspect of simplification is syntactic transformation in which sentences deemed difficult are re-written as multiple sentences | 0 |