Column statistics (from the dataset viewer): `id`: string, 7-12 chars; `sentence1`: string, 6-1.27k chars; `sentence2`: string, 6-926 chars; `label`: string, 4 classes.

| id | sentence1 | sentence2 | label |
|---|---|---|---|
| train_12200 | Embedding models that consider words as atomic units in the corpus, e.g., SKIP and SSKIP, are word-level. | embedding models that represent words with their character ngrams, e.g., fasttext (Bojanowski et al., 2016), are subword-level. | contrasting |
| train_12201 | In contrast, the ELR embedding is based on an entity's contexts, which are often informative for each entity and can distinguish politicians from athletes. | not all entities have sufficiently many informative contexts in the corpus. | contrasting |
| train_12202 | Some aspects of the proposed approachnamely, the propagation of the nodes' weights through the graph, which we metaphorically represent as the flow of a contrast medium across nodes (Section 3.3) -are somewhat similar in spirit to spreading activation (Collins and Loftus, 1975) and random walks on graphs (Lovász, 1993) approaches. | in contrast to spreading activation approaches we leverage the graph directionality in order to reach all the possible nodes within the same connected components. | contrasting |
| train_12203 | The key ideas behind ContrastMedium are: • Identification of important topological clues from the companion knowledge base KB in order to hierarchically sort the concepts in G. For our purposes, KB is expected to be able to provide ground-truth taxonomic relations that can be safely projected onto G to guide the cleaning process: that is, we assume it to be reasonably clean. | we do not make any assumption on how KB has been created: our approach can be used with either manually created taxonomies like WordNet or (semi-)automatically induced ones, provided they are of sufficient quality. | contrasting |
| train_12204 | Concurrent to our work, Weissenborn (2016) proposes a row-less method for relation extraction considering both a uniform and weighted average aggregation function over columns. | weissenborn (2016) did not experiment with max and max-pool aggregation functions or evaluate on entity-type prediction. | contrasting |
| train_12205 | (2016) treat database records and output texts as sequences, and use recurrent neural networks to encode and decode them. | our input is a set of discrete attributes instead of database records or sequences. | contrasting |
| train_12206 | (2016) use variational auto-encoders to generate face images conditioned on visual attributes. | our goal is to generate texts instead of images. | contrasting |
| train_12207 | In our case, the input is a shallow graph instead of a syntactic tree, and hence the search space is larger. | the same set of actions can still be applied, with additional constraints on valid actions given each configuration (Section 3.2.3). | contrasting |
| train_12208 | The content selection discussed here is analogous to the selection of semantic attributes (type, color, size, etc) when generating a description of an entity (Dale and Haddock, 1991;Dale and Reiter, 1995). | instead of attributes, the content selection step in our model aims to choose the form of a proper name reference (which kind(s) of name and modifier(s) are part of the proper name reference). | contrasting |
| train_12209 | Our model as presented does not make this assumption (it does not always produce a proper name reference that distinguishes the target from the distractors). | this could be incorporated into our model as well. | contrasting |
| train_12210 | Therefore, a large number of dynamic oracles have been developed in recent years (Goldberg and Nivre, 2012; Goldberg and Nivre, 2013;Goldberg et al., 2014;Gomez-Rodriguez et al., 2014;Björkelund and Nivre, 2015). | the reinforcement learning approach proposed in this paper is more general and can be applied to a variety of systems. | contrasting |
| train_12211 | We have shown that noisy-context surprisal derives information locality, and argued that dependency locality can be seen as a special case of information locality. | deriving dependency locality requires a crucial assumption that words linked in a dependency have higher mutual information than those words that are not. | contrasting |
| train_12212 | (2015) study the characteristics of linguistic relations that may signal entailment or contradiction at subsentential level. | they ignore the logical context in which these linguistic relations occur in the entailment problem. | contrasting |
| train_12213 | If all sub-goals are proved, then the theorem is proved and the entailment judgement can be produced. | there are theorems for which not all subgoals can be proved. | contrasting |
| train_12214 | Henderson and Popa (2016) learn a mapping from an existing distributional vector representation to an entailment-based vector representation that expresses whether information is known or unknown. | they only evaluate on lexical semantic tasks such as hyponymy detection. | contrasting |
| train_12215 | Both papers evaluate their models only on lexical relationships. | to traditional distributional similarities, Young et al. | contrasting |
| train_12216 | The participants were requested to answer the PVQ questionnaire and to provide their Twitter IDs, so that their tweets could be crawled. | several challenges have to be addressed when working with Twitter, and a number of iterations, human interventions and personal communications were necessary. | contrasting |
| train_12217 | (2013), speech act features were applied in order to classify personalities/values. | for this experiment the speech act classes were restricted to 11 major categories: Statement Non-Opinion (SNO), Wh Table 5: Speech act class distributions in the corpus (in %) and speech act classifier performance Question (Wh), Yes-No Question (YN), Statement Opinion (SO), Action Directive (AD), Yes Answers (YA), Thanking (T), Appreciation (AP), Response Acknowledgement (RA), Apology (A) and others (O), hence avoiding having 43 fine-grained speech act classes. | contrasting |
| train_12218 | Our results so far show that our arguments changed people's beliefs as a function of their prior beliefs and argument type. | we aim to automatically predict belief change, and hypothesize that knowing a person's personality in combination with their prior beliefs will allow us to select social-media arguments that are more persuasive for a particular individual. | contrasting |
| train_12219 | This supports the results of our prior ANOVA testing over all subjects for belief change, and shows that the argument itself partially predicts belief change. | more interestingly, Table 7 also shows that providing the learner with information about personality consistently improves the ability of the learner to predict belief change. | contrasting |
| train_12220 | While our results for balanced monologs suggest that summaries increase belief change, summary tools for such arguments are still under development . | perhaps high quality summaries may not be needed if compelling argument fragments can be automatically extracted (Misra et al., 2016b;Subba and Di Eugenio, 2007;Nguyen and Litman, 2015;Swanson et al., 2015). | contrasting |
| train_12221 | In this model, a Multilayer Perceptron (MLP) takes as input a number of carefully hand-crafted syntactic and social behavioural features from each user and attempts to predict a label for each of the 5 personality traits. | the authors reported neither evaluation of this work, nor details of the dataset. | contrasting |
| train_12222 | Also, the focus is on the prediction of trait scores on the author level based on modelling all available text from a user. | not only does our approach infer the personality of a user given a collection of short texts, it is also flexible enough to predict trait scores from a single short text, arguably a more challenging task considering the limited amount of information. | contrasting |
| train_12223 | This is perhaps not the venue to consider the implications of this further, although one explanation might be that the model has uncovered a flexibility often associated with Ambiverts (Grant, 2013). | it is worth noting that the model is capable of capturing, without feature engineering, well-understood dimensions of language. | contrasting |
| train_12224 | Our results suggest that apart from faster algorithms and more compact representations, recent cross-lingual word embedding algorithms are still unable to outperform the traditional methods by a significant margin. | introducing our new multi-lingual signal considerably improves performance. | contrasting |
| train_12225 | If linguistic variability and noise only occur on the token level, then a tokenization-free approach has fewer advantages. | the foregoing discussion of cross-token regularities and disambiguation applies to well-edited English text as much as it does to other languages and other text-types as the example of "exchange" shows (which is dis-ambiguated by prior context and provides disambiguating context to following words) and as is also exemplified by lines 5-9 in Table 2 (right). | contrasting |
| train_12226 | It is not clear to what extent blurring on the byte level is useful; e.g., if we blur the bytes of the word "university" individually, then it is unlikely that the noise generated is helpful in, say, providing good training examples in parts of the space that would otherwise be unexplored. | the text representation we have introduced in this paper can be blurred in a way that is analogous to images and speech. | contrasting |
| train_12227 | We have focused on whitespace tokenization and proposed a whitespacetokenization-free method that computes embeddings of higher quality than tokenization-based methods. | there are many properties of edited text beyond whitespace tokenization that a complex rule-based tokenizer exploits. | contrasting |
| train_12228 | This is true for the training of the model as well as for applying it when computing the representation of a new text. | to prior work that has assumed that the sequence-of-character information captured by character ngrams is sufficient, position embeddings also capture sequence-of-ngram information. | contrasting |
| train_12229 | All three sentences S1, S2, and S3 mention same entity Barack Obama. | looking at the context, we can infer that S1 mentions Obama as a person/author, S2 mentions Obama only as a person, and S3 mentions Obama as a person/politician. | contrasting |
| train_12230 | tion mechanism to allow the model to focus on relevant expressions in the entity mention's context. | the model assumed that all labels obtained via distant supervision are correct. | contrasting |
| train_12231 | For noisy entity mentions, we propose a variant of a hinge loss where, like L c , score for all negative labels should go below −1. | for positive labels, as we don't know which labels are relevant to entity mention's local context, we propose that the maximum score from the set of given positive labels should be greater than one. | contrasting |
| train_12232 | This indicates usefulness of the learnt feature representations. | if we repeat the same process with OntoNotes dataset, there is only a subtle change in performance. | contrasting |
| train_12233 | Performance of AFET significantly drops (AFET-NoCo) when data-driven label-label correlation is ignored, which indicates that modeling data-driven correlation helps. | as shown in Figure 2a, the use of label-label correlation depends on appropriate values of parameters which vary from one dataset to another. | contrasting |
| train_12234 | The strength of association between each named entity y and date d is measured based on the number of co-occurring tweets in order to form a binary tuple y, d to represent an event. | twiCal relies on a supervised sequence labeler trained on tweets annotated with event mentions for the identification of eventrelated phrases. | contrasting |
| train_12235 | On Dataset II, both DPEMM and DPEMM-WE achieve better clustering results compared to LEM. | the purity of clusters generated by DPEMM is slightly higher than that generated by DPEMM-WE. | contrasting |
| train_12236 | It can be observed that the purity of the cluster generated by DPEMM is 91% which is better than DPEMM-WE's 63%. | the size of the cluster returned by DPEMM is smaller and it failed to extract the location information. | contrasting |
| train_12237 | boundaries as well as types) in it are known. | end-to-end relation extraction deals with plain sentences without assuming any knowledge of entity mentions in them. | contrasting |
| train_12238 | As the same parameters are shared for all entity as well as relation type predictions, we expect the model to learn dependencies among relation and entity types. | as it makes separate predictions for each word pair, there might be some inconsistencies among the labels as described above. | contrasting |
| train_12239 | Current global inference approaches optimize a single coherence measure, most commonly a measure of general semantic relatedness such as the Milne-Witten distance (Milne and Witten, 2008), or keyphrase overlap relatedness (KORE) (Hoffart et al., 2012). | verification allows employing many global coherence features, which we categorize according to four aspects of coherence: geographical coherence and temporal coherence, which to our knowledge have not been used before in EL, as well as entity type coherence and the general semantic relatedness mentioned above. | contrasting |
| train_12240 | We have an odds ratio probability term "OR = 1.41" and two variables. | determining the boundaries of the variables is not straightforward. | contrasting |
| train_12241 | We start with a linear-chain CRF: Algorithm 1 Dependence Extraction Constraints if influence term > 0 then ensure at least one A and one B label else if A > 0 then ensure at least one B label else if B > 0 then ensure at least one A label end if where T is the number of observations indexed by t, k indexes the feature function f k and weight λ k , and Z x normalizes over the entire input sequence, The CRF performs satisfactorily leading to higher precision classification of the labels than recall. | it is not able to capture some information that we know to be true. | contrasting |
| train_12242 | The constraints applied for the Default constraint experiment assign the (correct) variable B and (incorrect) variable A. | the classifier in the Par-allel experiment mistakenly leads us not to label any entities in this sentence, thereby missing the relation. | contrasting |
| train_12243 | However, as would be expected, the precision goes up as p 1 and p 2 increase, i.e., as we become more conservative in constraining the CRF output. | recall drops with higher values of the p threshold as the CRF is not pushed to find previously missed variables A or B. | contrasting |
| train_12244 | However, achieving similar levels of performance on small real world datasets has proved difficult; major challenges stem from the large vocabulary size, complex grammar, and the frequent ambiguities in linguistic structure. | the requirement of human generated annotations for training, in order to ensure a sufficiently diverse set of questions is prohibitively expensive. | contrasting |
| train_12245 | With the advent of deep learning, the state of art performance for various semantic NLP tasks has seen a significant boost (Collobert and Weston, 2008). | most of these techniques are data-hungry, and require a large number of sufficiently diverse labeled training samples, e.g., for QA, training samples should not only encompass an entire range of possible questions but also have them in sufficient quantity (Bordes et al., 2015). | contrasting |
| train_12246 | An inspiration from the ripples created by the success of pre-training and as well as word2vec, this paper explores pre-training to utilize data from a related domain and also pre-trained vectors from word2vec tool (Mikolov et al., 2013). | finding an optimal dimension for these pre-trained vectors and other involved hyper-parameters requires computationally extensive experiments. | contrasting |
| train_12247 | Such a training procedure allows the learner to waste less time with noisy or hard to predict data when the model is not ready to incorporate such samples. | what remains unanswered and is left as a matter of further exploration is how to devise an effective strategy for a given task? | contrasting |
| train_12248 | Unlike SemEval-2007, the SemEval-2012 task is concerned exclusively with ranking substitutes; all the original participating systems were given the gold-standard substitutes and simply asked to put them in the correct order. | to score our own systems we use their own substitute lists, removing only those substitutes that do not also appear in the goldstandard list. | contrasting |
| train_12249 | (2015) adapted for multiple languages also using bilingual corpora. | parallel data is an expensive resource and using parallel data seems to under-perform on the bilingual lexicon induction task (Vulić and Moens, 2015). | contrasting |
| train_12250 | We also adopted this approach for multilingual environment by applying multi-view CCA to map the English part of each pre-trained bilingual word embedding to the same space. | we only observe minor improvements. | contrasting |
| train_12251 | As we keep adding more languages to the model, the hidden layer in our model -shared between all languages -might not be enough to accommodate all languages. | we can combine the strength of the linear transformation proposed in §3 to our joint model as described in Equation (3). | contrasting |
| train_12252 | Most of the popular methods are based on a distributional approach: the meaning of a word is defined by the context of its use, i.e., the neighboring words. | distributional representations carry no explicit linguistic information and cannot easily represent some important semantic relationships, such as synonymy and antonymy (Nguyen et al., 2016). | contrasting |
| train_12253 | (2008a), which is also used for building a semantic representation (Zesch et al., 2008b). | the level of detail and structure format obtained by such method was not deemed adequate for this work and an alternative extraction method was developed (Sections 3.2 and 3.3). | contrasting |
| train_12254 | If the book is long, as the reading progresses, the guesses tend to become more accurate, as a human will try to piece together the information patterns surrounding the new words. | the definitional approach would be equivalent to reading the entire contents of a dictionary before reading the book. | contrasting |
| train_12255 | It also outperforms the most popular distributional representations. | they are clearly outclassed in the semantic relatedness test, for which the distributional approaches show superior performance. | contrasting |
| train_12256 | Most languages have no established writing system and minimal written records. | textual data is essential for natural language processing, and particularly important for training language models to support speech recognition. | contrasting |
| train_12257 | Neural network approaches to a range of NLP problems have also been aided by initialization with word embeddings trained on large amounts of unannotated text (Frome et al., 2013;Zhang et al., 2014;Lau and Baldwin, 2016). | in the case of extremely low-resource languages we do not have the luxury of this unannotated text. | contrasting |
| train_12258 | This resilience of the target English word embeddings suggests that CLWEs can serve as a method of transferring semantic information from resource-rich languages to the resource-poor, even when the languages are quite different. | the WordSim353 task is a constrained environment, so in the next section we turn to language modeling, a natural language processing task of much practical importance for resource-poor languages. | contrasting |
| train_12259 | In every case where pre-trained embeddings were used, the embedding layer was held fixed during training. | we observed similar results when allowing them to deviate from their initial state. | contrasting |
| train_12260 | The language models initialized with pretrained CLWEs are significantly better than their un-pre-trained counterpart on small amounts of data, reaching par performance with MKN at somewhere just past 4k sentences of training data. | it takes more than 16k sentences of training data before the plain LSTM language model began to outperform MKN. | contrasting |
| train_12261 | For the Na-English lexicon, this was only 18% and 20% when lemmatized and unlemmatized, respectively. | it was 67% for the PanLex lexicon. | contrasting |
| train_12262 | Post-editing, the easiest one to implement, has little consideration for the words surrounding the nouns, while re-ranking works on MT hypotheses and thus ensures that a better global translation is found that is also consistent. | in some cases, no hypothesis conforms to the consistency decision, and in this case post-editing the best hypothesis appears to be beneficial. | contrasting |
| train_12263 | As in Chinese semantic features are less helpful, given also the limited amount of data, combining them with syntactic ones actually decreases the performance of the syntactic ones used independently. | semantic features are more helpful on German dataset, and also improve results when we considered along with the syntactic ones together. | contrasting |
| train_12264 | The analysis shows that lexical features are significantly more important than purely syntactic ones, for both languages. | the syntactic ones are not negligible. | contrasting |
| train_12265 | Much of psycholinguistics has focused for many years on processing measures that provide difficulty estimates on a word-byword basis. | these psycholinguistic measures have not yet been tested on sentence readability ranking tasks. | contrasting |
| train_12266 | Previous work on readability has classified or ranked texts based on document-level measures such as word length, sentence length, number of different phrasal categories & parse tree depth (Petersen, 2007), and discourse coherence (Graesser et al., 2004), inter alia. | not all applications that need readability ratings deal with long documents. | contrasting |
| train_12267 | The cataloging of product listings through taxonomy categorization is a fundamental problem for any e-commerce marketplace, with applications ranging from personalized search recommendations to query understanding. | manual and rule based approaches to categorization are not scalable. | contrasting |
| train_12268 | Then, if the specific items cannot be found in the inventory, other relevant items in the "Jeans" category are returned in the search results to encourage the user to browse further. | achieving good product categorization for e-commerce market-places is challenging. | contrasting |
| train_12269 | ), but with more pronounced data quality issues. | the existing methods for noisy product classification have only been applied to English. | contrasting |
| train_12270 | For instance, existing approaches classify text units as argumentative or non-argumentative (Moens et al., 2007), recognize argument components such as claims or premises at the sentence-level (Mochales-Palau and Moens, 2009;Kwon et al., 2007;Eckle-Kohler et al., 2015) or clause-level Sardianos et al., 2015), or identify argument structures by classifying pairs of argument components . | these approaches are of limited use for argumentative writing support systems since they do not recognize the weak points of arguments. | contrasting |
| train_12271 | The argument is a generalization from one sample to the general case. | a single sample is not enough to support the general case. | contrasting |
| train_12272 | Approaches for identifying the structure of arguments recognize argumentative relations between argument components using context-free grammars (Mochales-Palau and Moens, 2009), pair classification , or maximum spanning trees (Peldszus and Stede, 2015). | none of these approaches consider the quality of arguments. | contrasting |
| train_12273 | Cano-Basave and He (2016) ranked speakers in political debates by using semantic frames which indicate persuasive argumentation features, and Habernal and Gurevych (2016b) compared the convincingness of argument pairs using feature-rich SVMs and bidirectional LSTMs. | the persuasiveness score of an argument is only of limited use for argumentative writing support, since it summarizes various quality criteria and does not explain why an argument is weak. | contrasting |
| train_12274 | Thus, the model is less suitable for exhaustively finding all insufficiently supported arguments. | the CNN model is more balanced with respect to precision and recall and considerably outperforms the recall of the SVM model. | contrasting |
| train_12275 | Using the length features individually yields only a slight improvement of the macro F1 score over the majority baseline. | removing the length from the entire feature set causes a slight decrease of .002 in the macro F1 score compared to the system which uses all features. | contrasting |
| train_12276 | Most of these arguments either refer to an example in addition to other premises which are already sufficient to support the claim or include an example for specifying another premise. | we also found several false negatives which include examples as evidence. | contrasting |
| train_12277 | Using the augmented training set, consisting of both the original documents and the additional pseudodocuments, we can then employ any existing PV model (Le and Mikolov, 2014;Dai et al., 2015) to learn the document-phrase co-embeddings. | due to the significant differences in the sizes and numbers of documents and pseudodocuments, there is a danger that the addition of the pseudo-documents can have a detrimental effect on the performance of the model. | contrasting |
| train_12278 | Most research efforts in this task have focused on English texts, much less attention has been given to other languages (Abdul-Mageed et al., 2011;Kapukaranov and Nakov, 2015). | spanish is the third language most used on the internet, with a total of 7.7% (more than 277 million of users) and a huge internet growth of more than 1,400%. | contrasting |
| train_12279 | In addition, we introduce a novel type of LM using a modified version of the standard bidirectional LSTM called contextual bidirectional LSTM (cBLSTM). | to the unidirectional model, this model is trained to predict a word depending on its full left and right contexts. | contrasting |
| train_12280 | More importantly, in (Peris and Casacuberta, 2015), bidirectional RNN LMs are used for a statistical machine translation task. | only standard RNNs but not LSTMs are utilized. | contrasting |
| train_12281 | The pattern based methods (both lexical patterns and syntactic patterns) are simple, fast, and scalable on large-scale datasets. | the robustness of patterns is usually questionable in practice. | contrasting |
| train_12282 | Lexical patterns either have limited coverage (e.g., fixed set of patterns (Riloff and Wiebe, 2003)), or hard-tocontrol noise (e.g., bootstrapping approaches (Qiu et al., 2011)). | supervised models can achieve better performances than patterns on manually labeled datasets, but it is often difficult to obtain large number of annotations for the relation extraction task, and the trained models are also limited to specified domains. | contrasting |
| train_12283 | Because this update potentially involves concurrent reads and writes at the same memory location, we use an atomic max operation (defined as atomicMax on the NVIDIA toolkit). | atomicMax is not defined for floating-point values. | contrasting |
| train_12284 | Very recently, there have been a few efforts to apply NMT to simultaneous translation either through heuristic modifications to the decoding process (Cho and Esipova, 2016), or through the training of an independent segmentation network that chooses when to perform output using a standard NMT model (Satija and Pineau, 2016). | the former model lacks a capability to learn the appropriate timing with which to perform translation, and the latter model uses a standard NMT model as-is, lacking a holistic design of the modeling and learning within the simultaneous MT context. | contrasting |
| train_12285 | are trained independently and combined in a loglinear scheme in which each model is assigned a different weight by a tuning algorithm. | in NMT all the components are jointly trained to maximise translation quality. | contrasting |
| train_12286 | They showed that gender differences captured by shallow syntactic features were preserved across languages, when examined by linguistic categories. | they did not study the distribution of individual gender markers across domains and languages. | contrasting |
| train_12287 | Data One of the main advantages of automatic BLI systems is their portability to different languages and domains. | current standard BLI evaluation protocols still rely on generaldomain data and test sets (Mikolov et al., 2013a;Gouws et al., 2015;Lazaridou et al., 2015;Vulić and Moens, 2016, inter alia). | contrasting |
| train_12288 | Sophisticated deep learning algorithms can also be applied to text clustering (Xu et al., 2015), but to date they require labeled training data, while the method proposed in this paper is unsupervised. | to bag-of-words (BOW) schemes, named entities (NEs) can be used as features (Montalvo et al., 2012). | contrasting |
| train_12289 | The dominant approach for many NLP tasks are recurrent neural networks, in particular LSTMs, and convolutional neural networks. | these architectures are rather shallow in comparison to the deep convolutional networks which have pushed the state-of-the-art in computer vision. | contrasting |
| train_12290 | These word embeddings are now the state-of-the-art in NLP. | it is less clear how we should best represent a sequence of words, e.g. | contrasting |
| train_12291 | We obtain state-of-the-art results for all data sets, except AG's news and Sogou news which are the smallest ones. | with our very deep architecture, we get closer to the stateof-the-art which are ngrams TF-IDF for these data sets and significantly surpass convolutional models presented in . | contrasting |
| train_12292 | An explanation may be that similarity rewards redundancy, which is why, e.g., all four similarity approaches falsely ranked the redundancy-free argument a 1 in Table 1 lowest. | this requires further investigation, including an analysis of more sophisticated similarity measures. | contrasting |
| train_12293 | Note that in addition to the experiments reported in this table, we attempted to combine different features sets. | we did not observe Aiming to identify potential predictors for the remaining MI codes, we conduct a set of experiments where we use our linguistic feature sets to build multiclass classifiers. | contrasting |
| train_12294 | It should be mentioned that the baseline character n-gram models in most cases are more effective than baseline token n-gram models. | in average, they are worse than token n-grams due to their poor performance when the Society texts are used for training. | contrasting |
| train_12295 | Note that the results for CCAT-10 are directly comparable to Table 3. | for the Guardian corpus we present the average performance of all possible 12 combinations (using one thematic category as training corpus and another thematic category as test corpus). | contrasting |
| train_12296 | It is also notable that, in both corpora, the differences between DV-MA and DV-SA are not significant. | dV-MA is slightly better than dV-SA in most of the cases. | contrasting |
| train_12297 | The baseline model is better only in the case of the most challenging cross-genre PAN15-DU corpus. | its performance essentially resembles random guessing (0.5). | contrasting |
| train_12298 | Experimental results demonstrated a considerable gain in effectiveness when using the proposed models under the realistic cross-topic conditions in both closed-set attribution and author verification tasks. | when the corpora are too topicspecific where the texts by a given author are consistently on certain subjects different than the ones of the other candidate authors, the distortion methods seem not to be helpful. | contrasting |
| train_12299 | In the I2B2 Temporal Challenge eight types of relations were initially annotated. | due to low inter-annotator agreement these were merged to three types of temporal relations, OVERLAP, BEFORE, and AFTER. | contrasting |
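For reference, each record in the table above can be handled as a plain mapping with `id`, `sentence1`, `sentence2`, and `label` fields. A minimal sketch follows; the two sample rows are copied verbatim from the table, and no particular loading library or dataset name is assumed:

```python
# Each table row maps to a dict with four string fields.
rows = [
    {
        "id": "train_12200",
        "sentence1": "Embedding models that consider words as atomic units "
                     "in the corpus, e.g., SKIP and SSKIP, are word-level.",
        "sentence2": "embedding models that represent words with their "
                     "character ngrams, e.g., fasttext (Bojanowski et al., "
                     "2016), are subword-level.",
        "label": "contrasting",
    },
    {
        "id": "train_12299",
        "sentence1": "In the I2B2 Temporal Challenge eight types of "
                     "relations were initially annotated.",
        "sentence2": "due to low inter-annotator agreement these were merged "
                     "to three types of temporal relations, OVERLAP, BEFORE, "
                     "and AFTER.",
        "label": "contrasting",
    },
]

# Group record ids by label; every row shown in this excerpt carries the
# "contrasting" class (the column stats report 4 label classes overall).
by_label = {}
for row in rows:
    by_label.setdefault(row["label"], []).append(row["id"])

print(by_label)  # → {'contrasting': ['train_12200', 'train_12299']}
```

Grouping by `label` is a convenient first step for inspecting class balance before training on pairs like these.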