## **Frustratingly Easy Domain Adaptation**
### **Hal Daumé III** School of Computing, University of Utah, Salt Lake City, Utah 84112 `me@hal3.name`
{
  "id": "0907.1815",
  "title": "Frustratingly Easy Domain Adaptation",
  "categories": ["cs.LG", "cs.CL"]
}
### **Abstract**
We describe an approach to domain adaptation that is appropriate exactly in the case when one has enough “target” data to do slightly better than just using only “source” data. Our approach is incredibly simple, easy to implement as a preprocessing step (10 lines of Perl!) and outperforms state-of-the-art approaches on a range of datasets. Moreover, it is trivially extended to a multi-domain adaptation problem, where one has data from a variety of different domains.
### **1 Introduction**
The task of domain adaptation is to develop learning algorithms that can be easily ported from one
domain to another—say, from newswire to biomedical documents. This problem is particularly interesting in NLP because we are often in the situation
that we have a large collection of labeled data in one
“source” domain (say, newswire) but truly desire a
model that performs well in a second “target” domain. The approach we present in this paper is based
on the idea of transforming the domain adaptation
learning problem into a standard supervised learning problem to which any standard algorithm may
be applied (eg., maxent, SVMs, etc.). Our transformation is incredibly simple: we augment the feature
space of both the source and target data and use the
result as input to a standard learning algorithm.
There are roughly two varieties of the domain
adaptation problem that have been addressed in the
literature: the fully supervised case and the semi-supervised case. The fully supervised case models the following scenario. We have access to a
large, annotated corpus of data from a source domain. In addition, we spend a little money to annotate a small corpus in the target domain. We want to
leverage both annotated datasets to obtain a model
that performs well on the target domain. The semisupervised case is similar, but instead of having a
small annotated target corpus, we have a large but
*unannotated* target corpus. In this paper, we focus
exclusively on the fully supervised case.
One particularly nice property of our approach
is that it is incredibly easy to implement: the Appendix provides a 10 line, 194 character Perl script
for performing the complete transformation (available at `http://hal3.name/easyadapt.pl.gz` ). In
addition to this simplicity, our algorithm performs as
well as (or, in some cases, better than) current state-of-the-art techniques.
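The augmentation can be implemented as a purely textual preprocessing step over named features. The following is a rough Python sketch of that idea, not the author’s Perl script; the `src`/`tgt` prefixes and the function name are our own choices:

```python
def easy_adapt(features, domain):
    """Augment a list of named features: keep every feature as a
    shared "general" copy and add a copy prefixed with the domain
    name, so it can only fire within that domain."""
    return list(features) + [f"{domain}:{f}" for f in features]

# One source example and one target example sharing "word=the":
src_ex = easy_adapt(["word=the", "word=monitor"], "src")
tgt_ex = easy_adapt(["word=the"], "tgt")
```

A downstream learner then ties the two domains together through the shared copies while the prefixed copies let it learn domain-specific behavior.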
### **2 Problem Formalization and Prior Work**
To facilitate discussion, we first introduce some notation. Denote by X the input space (typically either a real vector or a binary vector), and by Y the output space. We will write 𝒟^s to denote the distribution over source examples and 𝒟^t to denote the distribution over target examples. We assume access to a sample D^s ∼ 𝒟^s of source examples from the source domain, and a sample D^t ∼ 𝒟^t of target examples from the target domain. We will assume that D^s is a collection of N examples and D^t is a collection of M examples (where, typically, N ≫ M). Our goal is to learn a function h : X → Y with low expected loss with respect to the target domain. For the purposes of discussion, we will suppose that X = R^F and that Y = {−1, +1}. However, most of the techniques described in this section (as well as our own technique) are more general.
There are several “obvious” ways to attack the domain adaptation problem without developing new algorithms. Many of these are presented and evaluated by Daumé III and Marcu (2006).

The **SrcOnly** baseline ignores the target data and trains a single model, only on the source data.

The **TgtOnly** baseline trains a single model only on the target data.

The **All** baseline simply trains a standard learning algorithm on the union of the two datasets.

A potential problem with the **All** baseline is that if N ≫ M, then D^s may “wash out” any effect D^t might have. We will discuss this problem in more detail later, but one potential solution is to re-weight examples from D^s. For instance, if N = 10 × M, we may weight each example from the source domain by 0.1. The next baseline, **Weighted**, is exactly this approach, with the weight chosen by cross-validation.

The **Pred** baseline is based on the idea of using the output of the source classifier as a feature in the target classifier. Specifically, we first train a SrcOnly model. Then we run the SrcOnly model on the target data (training, development and test). We use the predictions made by the SrcOnly model as additional features and train a second model on the target data, augmented with this new feature.

In the **LinInt** baseline, we linearly interpolate the predictions of the SrcOnly and the TgtOnly models. The interpolation parameter is adjusted based on target development data.
These baselines are actually surprisingly difficult to beat. To date, there are two models that have successfully defeated them on a handful of datasets. The first model, which we shall refer to as the Prior model, was first introduced by Chelba and Acero (2004). The idea of this model is to use the SrcOnly model as a *prior* on the weights for a second model, trained on the target data. Chelba and Acero (2004) describe this approach within the context of a maximum entropy classifier, but the idea is more general. In particular, for many learning algorithms (maxent, SVMs, averaged perceptron, naive Bayes, etc.), one *regularizes* the weight vector toward zero. In other words, all of these algorithms contain a regularization term on the weights w of the form λ‖w‖². In the generalized Prior model, we simply replace this regularization term with λ‖w − w^s‖², where w^s is the weight vector learned in the SrcOnly model.¹ In this way, the model trained on the target data “prefers” to have weights that are similar to the weights from the SrcOnly model, unless the data demands otherwise. Daumé III and Marcu (2006) provide empirical evidence on four datasets that the Prior model outperforms the baseline approaches.

More recently, Daumé III and Marcu (2006) presented an algorithm for domain adaptation for maximum entropy classifiers. The key idea of their approach is to learn *three* separate models. One model captures “source specific” information, one captures “target specific” information and one captures “general” information. The distinction between these three sorts of information is made on a *per-example* basis. In this way, each source example is considered either source specific or general, while each target example is considered either target specific or general. Daumé III and Marcu (2006) present an EM algorithm for training their model. This model consistently outperformed all the baseline approaches as well as the Prior model. Unfortunately, despite the empirical success of this algorithm, it is quite complex to implement and is roughly 10 to 15 times slower than training the Prior model.
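As a minimal NumPy sketch of the generalized Prior regularizer (our own illustration, not Chelba and Acero’s maxent implementation), the only change from standard ℓ2 regularization is that the penalty gradient pulls the weights toward w^s instead of toward zero:

```python
import numpy as np

def prior_penalty_grad(w, w_src, lam):
    """Gradient of lam * ||w - w_src||^2: during target training this
    pulls weights toward the SrcOnly vector w_src rather than zero."""
    return 2.0 * lam * (w - w_src)

w_src = np.array([0.5, -1.0, 2.0])
# At w == w_src the penalty exerts no force; away from w_src it pulls back.
g_at_src = prior_penalty_grad(w_src, w_src, lam=1.0)
g_away = prior_penalty_grad(w_src + 1.0, w_src, lam=1.0)
```

Setting `w_src` to the zero vector recovers the usual λ‖w‖² penalty, which is exactly the sense in which the Prior model generalizes standard regularization.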
### **3 Adaptation by Feature Augmentation**
In this section, we describe our approach to the domain adaptation problem. Essentially, all we are going to do is take each feature in the original problem
and make three versions of it: a general version, a
source-specific version and a target-specific version.
The augmented source data will contain only general and source-specific versions. The augmented target data contains general and target-specific versions.

¹For the maximum entropy, SVM and naive Bayes learning algorithms, modifying the regularization term is simple because it appears explicitly. For the perceptron algorithm, one can obtain an equivalent regularization by performing standard perceptron updates, but using (w + w^s)ᵀx for making predictions rather than simply wᵀx.
To state this more formally, first recall the notation from Section 2: X and Y are the input and output spaces, respectively; D^s is the source domain data set and D^t is the target domain data set. Suppose for simplicity that X = R^F for some F > 0. We define our augmented input space as X̆ = R^{3F}, and mappings Φ^s, Φ^t : X → X̆ for the source and target data respectively. These are defined by Eq (1), where 0 = ⟨0, 0, . . ., 0⟩ ∈ R^F is the zero vector.

Φ^s(x) = ⟨x, x, 0⟩,   Φ^t(x) = ⟨x, 0, x⟩   (1)
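Eq (1) is a couple of lines of array code. A minimal NumPy sketch (the function names are ours):

```python
import numpy as np

def phi_s(x):
    """Source mapping of Eq (1): <x, x, 0>."""
    return np.concatenate([x, x, np.zeros_like(x)])

def phi_t(x):
    """Target mapping of Eq (1): <x, 0, x>."""
    return np.concatenate([x, np.zeros_like(x), x])

# A two-feature input where only the first indicator fires:
x = np.array([1.0, 0.0])
aug_src = phi_s(x)  # general copy + source-specific copy active
aug_tgt = phi_t(x)  # general copy + target-specific copy active
```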
Before we proceed with a formal analysis of this
transformation, let us consider why it might be expected to work. Suppose our task is part of speech
tagging, our source domain is the Wall Street Journal
and our target domain is a collection of reviews of
computer hardware. Here, a word like “the” should
be tagged as a determiner in both cases. However,
a word like “monitor” is more likely to be a verb
in the WSJ and more likely to be a noun in the hardware corpus. Consider a simple case where X = R², where x₁ indicates if the word is “the” and x₂ indicates if the word is “monitor.” Then, in X̆, x̆₁ and x̆₂ will be “general” versions of the two indicator functions, x̆₃ and x̆₄ will be source-specific versions, and x̆₅ and x̆₆ will be target-specific versions.
Now, consider what a learning algorithm could do
to capture the fact that the appropriate tag for “the”
remains constant across the domains, and the tag
for “monitor” changes. In this case, the model can
set the “determiner” weight vector to something like
⟨1, 0, 0, 0, 0, 0⟩. This places high weight on the common version of “the” and indicates that “the” is most
likely a determiner, regardless of the domain. On
the other hand, the weight vector for “noun” might
look something like ⟨0, 0, 0, 0, 0, 1⟩, indicating that
the word “monitor” is a noun *only* in the target domain. Similarly, the weight vector for “verb” might look like ⟨0, 0, 0, 1, 0, 0⟩, indicating that “monitor” is a verb *only* in the source domain.

Note that this expansion is actually redundant. We could equally well use Φ^s(x) = ⟨x, x⟩ and Φ^t(x) = ⟨x, 0⟩. However, it turns out that it is easier to analyze the first case, so we will stick with that. Moreover, the first case has the nice property that it is straightforward to generalize to the multi-domain adaptation problem, where there are more than two domains. In general, for K domains, the augmented feature space will consist of K + 1 copies of the original feature space.
**3.1** **A Kernelized Version**
It is straightforward to derive a kernelized version of
the above approach. We do not exploit this property
in our experiments—all are conducted with a simple
linear kernel. However, by deriving the kernelized
version, we gain some insight into the method. For
this reason, we sketch the derivation here.
Suppose that the data points x are drawn from a reproducing kernel Hilbert space X with kernel K : X × X → R, with K positive semi-definite. Then, K can be written as the dot product (in X) of two (perhaps infinite-dimensional) vectors: K(x, x′) = ⟨Φ(x), Φ(x′)⟩_X. Define Φ^s and Φ^t in terms of Φ, as:

Φ^s(x) = ⟨Φ(x), Φ(x), 0⟩   (2)
Φ^t(x) = ⟨Φ(x), 0, Φ(x)⟩
Now, we can compute the kernel product between augmented points in the expanded RKHS by making use of the original kernel K. We denote the expanded kernel by K̆(x, x′). It is simplest to first describe K̆(x, x′) when x and x′ are from the same domain, then analyze the case when the domains differ. When the domain is the same, we get K̆(x, x′) = ⟨Φ(x), Φ(x′)⟩_X + ⟨Φ(x), Φ(x′)⟩_X = 2K(x, x′). When they are from different domains, we get K̆(x, x′) = ⟨Φ(x), Φ(x′)⟩_X = K(x, x′). Putting this together, we have:

K̆(x, x′) = 2K(x, x′) if x and x′ are from the same domain, and K̆(x, x′) = K(x, x′) if they are from different domains.   (3)
This is an intuitively pleasing result. What it
says is that—considering the kernel as a measure
of similarity—data points from the same domain are
“by default” twice as similar as those from different domains. Loosely speaking, this means that data
points from the target domain have twice as much
influence as source points when making predictions
about test target data.
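Eq (3) can be checked numerically for a linear kernel, i.e. taking Φ to be the identity in Eq (2) so that K(x, x′) = x · x′. A small sketch:

```python
import numpy as np

def phi_s(x):
    """Source mapping of Eq (2) with Phi = identity: <x, x, 0>."""
    return np.concatenate([x, x, np.zeros_like(x)])

def phi_t(x):
    """Target mapping of Eq (2) with Phi = identity: <x, 0, x>."""
    return np.concatenate([x, np.zeros_like(x), x])

rng = np.random.default_rng(0)
x, xp = rng.normal(size=4), rng.normal(size=4)

k = x @ xp                     # original linear kernel K(x, x')
k_same = phi_s(x) @ phi_s(xp)  # same domain: 2 K(x, x')
k_diff = phi_s(x) @ phi_t(xp)  # different domains: K(x, x')
```

The same-domain product doubles because both the general and the domain-specific blocks overlap, while across domains only the general block does.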
**3.2** **Analysis**
We first note an obvious property of the feature-augmentation approach. Namely, it does not make learning harder, in a minimum Bayes error sense. A more interesting statement would be that it makes learning *easier*, along the lines of the result of (Ben-David et al., 2006); note, however, that their results are for the “semi-supervised” domain adaptation problem and so do not apply directly. As yet, we do not know a proper formalism in which to analyze the fully supervised case.

It turns out that the feature-augmentation method is remarkably similar to the Prior model.² Suppose we learn feature-augmented weights in a classifier regularized by an ℓ₂ norm (e.g., SVMs, maximum entropy). We can denote by w_s the sum of the “source” and “general” components of the learned weight vector, and by w_t the sum of the “target” and “general” components, so that w_s and w_t are the predictive weights for each task. Then, the regularization condition on the entire weight vector is approximately ‖w_g‖² + ‖w_s − w_g‖² + ‖w_t − w_g‖², with free parameter w_g which can be chosen to minimize this sum. This leads to a regularizer proportional to ‖w_s − w_t‖², akin to the Prior model.
Given this similarity between the feature-augmentation method and the Prior model, one might wonder why we expect our approach to do better. Our belief is that this occurs because we optimize w_s and w_t *jointly*, not sequentially. First, this means that we do not need to cross-validate to estimate good hyperparameters for each task (though in our experiments, we do not use any hyperparameters). Second, and more importantly, this means that the single supervised learning algorithm that is run is allowed to regulate the trade-off between source/target and general weights. In the Prior model, we are forced to use the prior variance in the target learning scenario to do this ourselves.
**3.3** **Multi-domain adaptation**

Our formulation is agnostic to the number of “source” domains. In particular, it may be the case that the source data actually falls into a variety of more specific domains. This is simple to account for in our model. In the two-domain case, we expanded the feature space from R^F to R^{3F}. For a K-domain problem, we simply expand the feature space to R^{(K+1)F} in the obvious way (the “+1” corresponds to the “general domain” while each of the other 1 . . . K corresponds to a single task).

²Thanks to an anonymous reviewer for pointing this out!
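A sketch of the K-domain expansion, under our own indexing convention (domains numbered 0 to K−1, with block 0 of the output reserved for the shared copy):

```python
import numpy as np

def phi_multi(x, domain, num_domains):
    """Map x in R^F to R^((K+1)F): one shared "general" copy plus
    K domain blocks, of which only the given domain's block is x."""
    F = x.shape[0]
    out = np.zeros((num_domains + 1) * F)
    out[:F] = x                       # general copy
    start = (domain + 1) * F
    out[start:start + F] = x          # this domain's copy
    return out

x = np.array([1.0, 2.0])
two_dom_src = phi_multi(x, 0, 2)  # recovers Phi^s of Eq (1)
two_dom_tgt = phi_multi(x, 1, 2)  # recovers Phi^t of Eq (1)
```

With `num_domains=2` this reduces exactly to the source/target mappings of Eq (1).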
### **4 Results**
In this section we describe experimental results on a
wide variety of domains. First we describe the tasks,
then we present experimental results, and finally we
look more closely at a few of the experiments.
**4.1** **Tasks**
All tasks we consider are sequence labeling tasks
(either named-entity recognition, shallow parsing or
part-of-speech tagging) on the following datasets:
ACE-NER. We use data from the 2005 Automatic Content Extraction task, restricting ourselves to the named-entity recognition task. The 2005 ACE data comes from six domains: Broadcast News (bn), Broadcast Conversations (bc), Newswire (nw), Weblog (wl), Usenet (un) and Conversational Telephone Speech (cts).
CoNLL-NE. Similar to ACE-NER, a named-entity
recognition task. The difference is: we use the
2006 ACE data as the source domain and the
CoNLL 2003 NER data as the target domain.
PubMed-POS. A part-of-speech tagging problem on PubMed abstracts introduced by
Blitzer et al. (2006). There are two domains:
the source domain is the WSJ portion of
the Penn Treebank and the target domain is
PubMed.
CNN-Recap. This is a recapitalization task introduced by Chelba and Acero (2004) and also used by Daumé III and Marcu (2006). The source domain is newswire and the target domain is the output of an ASR system.
Treebank-Chunk. This is a shallow parsing task
based on the data from the Penn Treebank. This
data comes from a variety of domains: the standard WSJ domain (we use the same data as for
CoNLL 2000), the ATIS switchboard domain,
and the Brown corpus (which is, itself, assembled from six subdomains).
| Task | Dom | # Tr | # De | # Te | # Ft |
|---|---|---|---|---|---|
| ACE-NER | bn | 52,998 | 6,625 | 6,626 | 80k |
| | bc | 38,073 | 4,759 | 4,761 | 109k |
| | nw | 44,364 | 5,546 | 5,547 | 113k |
| | wl | 35,883 | 4,485 | 4,487 | 109k |
| | un | 35,083 | 4,385 | 4,387 | 96k |
| | cts | 39,677 | 4,960 | 4,961 | 54k |
| CoNLL-NER | src | 256,145 | - | - | 368k |
| | tgt | 29,791 | 5,258 | 8,806 | 88k |
| PubMed-POS | src | 950,028 | - | - | 571k |
| | tgt | 11,264 | 1,987 | 14,554 | 39k |
| CNN-Recap | src | 2,000,000 | - | - | 368k |
| | tgt | 39,684 | 7,003 | 8,075 | 88k |
| Treebank-Chunk | wsj | 191,209 | 29,455 | 38,440 | 94k |
| | swbd3 | 45,282 | 5,596 | 41,840 | 55k |
| | br-cf | 58,201 | 8,307 | 7,607 | 144k |
| | br-cg | 67,429 | 9,444 | 6,897 | 149k |
| | br-ck | 51,379 | 6,061 | 9,451 | 121k |
| | br-cl | 47,382 | 5,101 | 5,880 | 95k |
| | br-cm | 11,696 | 1,324 | 1,594 | 51k |
| | br-cn | 56,057 | 6,751 | 7,847 | 115k |
| | br-cp | 55,318 | 7,477 | 5,977 | 112k |
| | br-cr | 16,742 | 2,522 | 2,712 | 65k |
Table 1: Task statistics; columns are task, domain,
size of the training, development and test sets, and
the number of unique features in the training set.
Treebank-Brown. This is identical to the Treebank-Chunk task, except that we consider all of the Brown corpus to be a single domain.
In all cases (except for CNN-Recap), we use roughly the same feature set, which has become somewhat standardized: lexical information (words, stems, capitalization, prefixes and suffixes), membership on gazetteers, etc. For the CNN-Recap task, we use identical features to those used by both Chelba and Acero (2004) and Daumé III and Marcu (2006): the current, previous and next word, and 1-3 letter prefixes and suffixes. Statistics on the tasks and datasets are in Table 1.

In all cases, we use the Searn algorithm for solving the sequence labeling problem (Daumé III et al., 2007) with an underlying averaged perceptron classifier; the implementation is due to Daumé III (2004). For structural features, we make a second-order Markov assumption and only place a bias feature on the transitions. For simplicity, we optimize and report only on label accuracy (but require that our outputs be parsimonious: we do not allow “I-NP” to follow “B-PP,” for instance). We do this for three reasons. First, our focus in this work is on building better learning algorithms, and introducing a more complicated measure only serves to mask these effects. Second, it is arguable that a measure like F₁ is inappropriate for chunking tasks (Manning, 2006). Third, we can easily compute statistical significance over accuracies using McNemar’s test.
**4.2** **Experimental Results**
The full (somewhat daunting) table of results is presented in Table 2. The first two columns specify the task and domain. For the tasks with only a single source and target, we simply report results on the target. For the multi-domain adaptation tasks, we report results for each setting of the target (where all other datasets are used as different “source” domains). The next set of eight columns are the *error rates* for the task, using one of the different techniques (“Augment” is our proposed technique).
For each row, the error rate of the best performing technique is bolded (as are all techniques whose performance is not statistically significantly different at the 95% level). The “T<S” column contains a “+” whenever TgtOnly outperforms SrcOnly (this will become important shortly). The final column indicates when Augment comes in first.³
There are several trends to note in the results. Excluding for a moment the “br-*” domains on the Treebank-Chunk task, our technique always performs best. Still excluding “br-*”, the clear second-place contestant is the Prior model, a finding consistent with prior research. When we repeat the Treebank-Chunk task, but lump all of the “br-*” data together into a single “brown” domain, the story reverts to what we expected before: our algorithm performs best, followed by the Prior method.

Importantly, this simple story breaks down on the Treebank-Chunk task for the eight sections of the Brown corpus. For these, our Augment technique performs rather poorly. Moreover, there is no clear winning approach on this task. Our hypothesis is that the common feature of these examples is that these are exactly the tasks for which SrcOnly outperforms TgtOnly (with one exception: CoNLL). This seems like a plausible explanation, since it implies that the source and target domains may not be that different. If the domains are so similar that a large amount of source data outperforms a small amount of target data, then it is unlikely that blowing up the feature space will help.

³One advantage of using the averaged perceptron for all experiments is that the only tunable hyperparameter is the number of iterations. In all cases, we run 20 iterations and choose the one with the lowest error on development data.

| Task | Dom | SrcOnly | TgtOnly | All | Weight | Pred | LinInt | Prior | Augment | T<S | Win |
|---|---|---|---|---|---|---|---|---|---|---|---|
| ACE-NER | bn | 4.98 | 2.37 | 2.29 | 2.23 | 2.11 | 2.21 | 2.06 | **1.98** | + | + |
| | bc | 4.54 | 4.07 | 3.55 | 3.53 | 3.89 | 4.01 | **3.47** | **3.47** | + | + |
| | nw | 4.78 | 3.71 | 3.86 | 3.65 | 3.56 | 3.79 | 3.68 | **3.39** | + | + |
| | wl | 2.45 | 2.45 | **2.12** | **2.12** | 2.45 | 2.33 | 2.41 | **2.12** | = | + |
| | un | 3.67 | 2.46 | 2.48 | 2.40 | 2.18 | 2.10 | 2.03 | **1.91** | + | + |
| | cts | 2.08 | 0.46 | 0.40 | 0.40 | 0.46 | 0.44 | **0.34** | **0.32** | + | + |
| CoNLL | tgt | 2.49 | 2.95 | 1.80 | **1.75** | 2.13 | **1.77** | 1.89 | **1.76** | | + |
| PubMed | tgt | 12.02 | 4.15 | 5.43 | 4.15 | 4.14 | 3.95 | 3.99 | **3.61** | + | + |
| CNN | tgt | 10.29 | 3.82 | 3.67 | 3.45 | 3.46 | 3.44 | **3.35** | **3.37** | + | + |
| Treebank-Chunk | wsj | 6.63 | 4.35 | 4.33 | 4.30 | 4.32 | 4.32 | 4.27 | **4.11** | + | + |
| | swbd3 | 15.90 | 4.15 | 4.50 | 4.10 | 4.13 | 4.09 | 3.60 | **3.51** | + | + |
| | br-cf | 5.16 | 6.27 | 4.85 | 4.80 | 4.78 | **4.72** | 5.22 | 5.15 | | |
| | br-cg | 4.32 | 5.36 | **4.16** | **4.15** | 4.27 | 4.30 | 4.25 | 4.90 | | |
| | br-ck | 5.05 | 6.32 | 5.05 | 4.98 | **5.01** | **5.05** | 5.27 | 5.41 | | |
| | br-cl | 5.66 | 6.60 | 5.42 | **5.39** | **5.39** | 5.53 | 5.99 | 5.73 | | |
| | br-cm | 3.57 | 6.59 | **3.14** | **3.11** | 3.15 | 3.31 | 4.08 | 4.89 | | |
| | br-cn | 4.60 | 5.56 | 4.27 | 4.22 | **4.20** | **4.19** | 4.48 | 4.42 | | |
| | br-cp | 4.82 | 5.62 | 4.63 | **4.57** | **4.55** | **4.55** | 4.87 | 4.78 | | |
| | br-cr | 5.78 | 9.13 | 5.71 | 5.19 | 5.20 | **5.15** | 6.71 | 6.30 | | |
| Treebank-brown | | 6.35 | 5.75 | 4.80 | 4.75 | 4.81 | 4.72 | 4.72 | **4.65** | + | + |

Table 2: Task results.
We additionally ran the MegaM model (Daumé III and Marcu, 2006) on these data (though not in the multi-conditional case; for this, we considered the single source as the union of all sources). The results are not displayed in Table 2 to save space. For the majority of results, MegaM performed roughly comparably to the best of the systems in the table. In particular, it was not statistically significantly different from Augment on: ACE-NER, CoNLL, PubMed, Treebank-chunk-wsj, Treebank-chunk-swbd3, CNN and Treebank-brown. It did outperform Augment on the Treebank-chunk-br-* data sets, but only outperformed the best other model on these data sets for br-cg, br-cm and br-cp. However, despite its advantages on these data sets, it was quite significantly slower to train: a single run required about ten times longer than any of the other models (including Augment), and also required five-to-ten iterations of cross-validation to tune its hyperparameters so as to achieve these results.
Figure 1: Hinton diagram for feature /Aa+/ at current position. (Rows: PER, GPE, ORG, LOC; columns: *, bn, bc, nw, wl, un, cts.)
**4.3** **Model Introspection**
One explanation of our model’s improved performance is simply that by augmenting the feature
space, we are creating a more powerful model.
While this may be a partial explanation, here we
show that what the model learns about the various
domains actually makes some plausible sense.
We perform this analysis only on the ACE-NER
data by looking specifically at the learned weights.
That is, for any given feature f, there will be seven versions of f: one corresponding to the “cross-domain” f and six corresponding to the individual domains. We visualize these weights, using Hinton diagrams, to see how the weights vary across domains.
For example, consider the feature “current word has an initial capital letter and is then followed by one or more lower-case letters.” This feature is presumably useless for data that lacks capitalization information, but potentially quite useful for other domains. In Figure 1 we show a Hinton diagram for this feature. Each column corresponds to a domain (the first column, “*”, is the “general domain”). Each row corresponds to a class.⁴ Black boxes correspond to negative weights and white boxes correspond to positive weights. The size of the box depicts the absolute value of the weight.

Figure 2: Hinton diagram for feature /bush/ at current position. (Rows: PER, GPE, ORG, LOC; columns: *, bn, bc, nw, wl, un, cts.)
As we can see from Figure 1, the /Aa+/ feature is a very good indicator of entity-hood (its value is strongly positive for all four entity classes), regardless of domain (i.e., for the “*” domain). The lack of boxes in the “bn” column means that, beyond the settings in “*”, the broadcast news domain is agnostic with respect to this feature. This makes sense: there is no capitalization in the broadcast news domain, so there would be no sense in setting these weights to anything but zero. The usenet column is filled with negative weights. While this may seem strange, it is due to the fact that many email addresses and URLs match this pattern, but are not entities.
Figure 2 depicts a similar figure for the feature
“word is ’bush’ at the current position” (this figure is
case sensitive). [5] These weights are somewhat harder
to interpret. What is happening is that “by default”
the word “bush” is going to be a person—this is because it rarely appears referring to a plant and so
even in the capitalized domains like broadcast con
4 Technically there are many more classes than are shown
|
{
"id": "0907.1815",
"title": "Frustratingly Easy Domain Adaptation",
"categories": [
"cs.LG",
"cs.CL"
]
}
|
{
"Header 1": null,
"Header 2": "**Frustratingly Easy Domain Adaptation**",
"Header 3": "**4 Results**",
"Header 4": null,
"Header 5": null
}
|
{
"chunk_type": "body"
}
|
the word “bush” is going to be a person—this is because it rarely appears referring to a plant and so
even in the capitalized domains like broadcast con
4 Technically there are many more classes than are shown
here. We do not depict the smallest classes, and have merged
the “Begin-*” and “In-*” weights for each entity type.
5
The scale of weights across features is *not* comparable, so
do not try to compare Figure 1 with Figure 2.

Figure 3: Hinton diagram for the feature /the/ at the current position (columns: *, bn, bc, nw, wl, un, cts; rows: PER, GPE, ORG, LOC).

Figure 4: Hinton diagram for the feature /the/ at the previous position (columns: *, bn, bc, nw, wl, un, cts; rows: PER, GPE, ORG, LOC).
versations, if it appears at all, it is a person. The
exception is that in the conversations data, people
*do* actually talk about bushes as plants, and so the
weights are set accordingly. The weights are high in
the usenet domain because people tend to talk about
the president without capitalizing his name.
Figure 3 presents the Hinton diagram for the feature “word at the current position is ’the”’ (again,
case-sensitive). In general, it appears that “the” is a
common word in entities in all domains. The exceptions
are broadcast news and conversations. These exceptions crop up because of the capitalization issue.
In Figure 4, we show the diagram for the feature
“previous word is ’the’.” The only domain for which
this is a good feature of entity-hood is broadcast
conversations (to a much lesser extent, newswire).
This occurs because of four phrases very common in
the broadcast conversations and rare elsewhere: “the
Iraqi people” (“Iraqi” is a GPE), “the Pentagon” (an
ORG), “the Bush (cabinet|advisors|. . . )” (PER), and
“the South” (LOC).
-----
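In the augmented representation these diagrams visualize, the effective weight a feature carries in a given domain is simply the shared “*” weight plus that domain’s own correction. A minimal Python sketch of this read-off (the tuple-keyed `weights` dict and the made-up weight values are illustrative assumptions, not the trained model’s actual storage format):

```python
def effective_weight(weights, feature, domain):
    """Total score contribution of `feature` for an example from `domain`:
    the shared ("*") weight plus the domain-specific correction, each
    defaulting to zero when the learner never set it."""
    return weights.get(("*", feature), 0.0) + weights.get((domain, feature), 0.0)

# e.g. the /Aa+/ capitalization feature: strongly positive in the shared
# column, pushed back down for usenet, where emails and URLs match the
# pattern without being entities (weights here are invented for illustration).
weights = {("*", "Aa+"): 1.2, ("un", "Aa+"): -0.9}
assert effective_weight(weights, "Aa+", "nw") == 1.2           # shared weight only
assert abs(effective_weight(weights, "Aa+", "un") - 0.3) < 1e-9
```

A white box in a domain-specific column of the diagrams corresponds to a positive correction term here; an empty column means the correction is zero and the shared weight carries over unchanged.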

Figure 5: Hinton diagram for membership on a list of names at the current position (columns: *, bn, bc, nw, wl, un, cts; rows: PER, GPE, ORG, LOC).
Finally, Figure 5 shows the Hinton diagram for
the feature “the current word is on a list of common names” (this feature is case-*in*sensitive). All
around, this is a good feature for picking out people
and nothing else. The two exceptions are: it is also
|
{
"id": "0907.1815",
"title": "Frustratingly Easy Domain Adaptation",
"categories": [
"cs.LG",
"cs.CL"
]
}
|
{
"Header 1": null,
"Header 2": "**Frustratingly Easy Domain Adaptation**",
"Header 3": "**4 Results**",
"Header 4": null,
"Header 5": null
}
|
{
"chunk_type": "body"
}
|
a good feature for other entity types for broadcast
news and it is not quite so good for people in usenet.
The first is easily explained: in broadcast news, it
is very common to refer to countries and organizations by the name of their respective leaders. This is
essentially a metonymy issue, but as the data is annotated, these are marked by their true referent. For
usenet, it is because the list of names comes from
news data, but usenet names are more diverse.
In general, the weights depicted for these features
make some intuitive sense (inasmuch as weights
for any learned algorithm make intuitive sense). It
is particularly interesting to note that while there are
some regularities to the patterns in the five diagrams,
it is definitely *not* the case that there are, e.g., two
domains that behave identically across all features.
This supports the hypothesis that the reason our algorithm works so well on this data is because the
domains are actually quite well separated.
|
{
"id": "0907.1815",
"title": "Frustratingly Easy Domain Adaptation",
"categories": [
"cs.LG",
"cs.CL"
]
}
|
{
"Header 1": null,
"Header 2": "**Frustratingly Easy Domain Adaptation**",
"Header 3": "**4 Results**",
"Header 4": null,
"Header 5": null
}
|
{
"chunk_type": "body"
}
|
### **5 Discussion**
In this paper we have described an *incredibly* simple approach to domain adaptation that—under a
common and easy-to-verify condition—outperforms
previous approaches. While it is somewhat frustrating that something so simple does so well, it
is perhaps not surprising. By augmenting the feature space, we are essentially forcing the learning
algorithm to do the adaptation for us. Good supervised learning algorithms have been developed over
decades, and so we are essentially just leveraging all
that previous work. Our hope is that this approach
is so simple that it can be used for many more real-world tasks than we have presented here with little
effort. Finally, it is very interesting to note that using our method, shallow parsing error rate on the
CoNLL section of the treebank improves from 5.35
to 5.11. While this improvement is small, it is real,
and may carry over to full parsing. The most important avenue of future work is to develop a formal
framework under which we can analyze this (and
other supervised domain adaptation models) theoretically. Currently our results only state that this
augmentation procedure doesn’t make the learning
harder — we would like to know that it actually
makes it easier. An additional future direction is
to explore the kernelization interpretation further:
why should we use 2 as the “similarity” between
domains—we could introduce a hyperparameter α
that indicates the similarity between domains and
could be tuned via cross-validation.
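The augmentation itself really is a trivial preprocessing step. Below is a minimal sketch in Python rather than the paper’s ~10 lines of Perl; the dict-based feature representation and the `general:`/domain key prefixes are assumptions of this sketch, not the paper’s actual format:

```python
def augment(features, domain):
    """EasyAdapt preprocessing: duplicate every input feature into a
    shared "general" copy and a domain-specific copy, so a source example
    becomes <x, x, 0> and a target example <x, 0, x> in the enlarged space."""
    out = {}
    for name, value in features.items():
        out["general:" + name] = value    # weight shared by every domain
        out[domain + ":" + name] = value  # correction for this domain only
    return out

# Same-domain augmented vectors overlap in BOTH copies, cross-domain
# vectors only in the "general" copy -- hence the kernel view in which
# same-domain similarity is 2<x, x'> while cross-domain similarity is <x, x'>.
src = augment({"word=the": 1.0}, "source")
tgt = augment({"word=the": 1.0}, "target")
assert set(src) & set(tgt) == {"general:word=the"}
```

Any off-the-shelf supervised learner is then trained on the augmented examples; nothing else about the pipeline changes.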
**Acknowledgments.** We thank the three anonymous reviewers, as well as Ryan McDonald and
John Blitzer for very helpful comments and insights.
|
{
"id": "0907.1815",
"title": "Frustratingly Easy Domain Adaptation",
"categories": [
"cs.LG",
"cs.CL"
]
}
|
{
"Header 1": null,
"Header 2": "**Frustratingly Easy Domain Adaptation**",
"Header 3": "**5 Discussion**",
"Header 4": null,
"Header 5": null
}
|
{
"chunk_type": "body"
}
|
### **References**
[Ben-David et al.2006] Shai Ben-David, John Blitzer, Koby
Crammer, and Fernando Pereira. 2006. Analysis of representations for domain adaptation. In *Advances in Neural Information Processing Systems (NIPS)*.
[Blitzer et al.2006] John Blitzer, Ryan McDonald, and Fernando
Pereira. 2006. Domain adaptation with structural correspondence learning. In *Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP)*.
[Chelba and Acero2004] Ciprian Chelba and Alex Acero. 2004.
Adaptation of maximum entropy classifier: Little data can
help a lot. In *Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP)*,
Barcelona, Spain.
[Daumé III and Marcu2006] Hal Daumé III and Daniel Marcu.
2006. Domain adaptation for statistical classifiers. *Journal of Artificial Intelligence Research*, 26.
[Daumé III et al.2007] Hal Daumé III, John Langford, and
Daniel Marcu. 2007. Search-based structured prediction.
*Machine Learning Journal (submitted)*.
[Daumé III2004] Hal Daumé III. 2004. Notes on CG and LM-BFGS optimization of logistic regression. Paper available at
`http://pub.hal3.name/#daume04cg-bfgs`,
implementation available at
`http://hal3.name/megam/`, August.
-----
[Manning2006] Christopher Manning. 2006. Doing named entity recognition? Don’t optimize
for F1. Post on the NLPers Blog, 25 August.
`http://nlpers.blogspot.com/2006/08/doing-named-entity-recognition-dont.html`.
-----
|
{
"id": "0907.1815",
"title": "Frustratingly Easy Domain Adaptation",
"categories": [
"cs.LG",
"cs.CL"
]
}
|
{
"Header 1": null,
"Header 2": "**Frustratingly Easy Domain Adaptation**",
"Header 3": "**References**",
"Header 4": null,
"Header 5": null
}
|
{
"chunk_type": "references"
}
|
# Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable Information Criterion in Singular Learning Theory
#### Sumio Watanabe P&I Lab., Tokyo Institute of Technology 4259 Nagatsuta, Midoriku, Yokohama, 226-8503 Japan E-mail: swatanab(at)pi.titech.ac.jp July 22, 2013
Abstract
In regular statistical models, it is well known that leave-one-out cross validation
is asymptotically equivalent to the Akaike information criterion.
However, many learning machines are singular statistical models, and as a result
the asymptotic behavior of cross validation has been left unknown.
In previous papers, we established singular learning theory and proposed a
widely applicable information criterion whose expectation value is asymptotically equal to the average Bayes generalization loss. In this paper, we theoretically compare the Bayes cross validation loss and the widely applicable
information criterion and prove two theorems. Firstly, the Bayes cross validation loss is asymptotically equivalent to the widely applicable information
criterion. Therefore, model selection and hyperparameter optimization using
these two values are asymptotically equivalent to each other. Secondly, the
sum of the Bayes generalization error and the Bayes cross validation error is
asymptotically equal to 2λ/n, where λ is the log canonical threshold and n
is the number of training samples. This fact shows that the relation between
the cross validation error and the generalization error is determined by the
algebraic geometrical structure of a learning machine.
|
{
"id": "1004.2316",
"title": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable\n Information Criterion in Singular Learning Theory",
"categories": [
"cs.LG"
]
}
|
{
"Header 1": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable Information Criterion in Singular Learning Theory",
"Header 2": null,
"Header 3": null,
"Header 4": "Sumio Watanabe P&I Lab., Tokyo Institute of Technology 4259 Nagatsuta, Midoriku, Yokohama, 226-8503 Japan E-mail: swatanab(at)pi.titech.ac.jp P&I Lab., Tokyo Institute of Technology 4259 Nagatsuta, Midoriku, Yokohama, 226-8503 Japan July 22, 2013",
"Header 5": null
}
|
{
"chunk_type": "title"
}
|
##### Keywords: Cross Validation, Information Criterion, Singular Learning Machines
-----
|
{
"id": "1004.2316",
"title": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable\n Information Criterion in Singular Learning Theory",
"categories": [
"cs.LG"
]
}
|
{
"Header 1": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable Information Criterion in Singular Learning Theory",
"Header 2": null,
"Header 3": null,
"Header 4": "Sumio Watanabe P&I Lab., Tokyo Institute of Technology 4259 Nagatsuta, Midoriku, Yokohama, 226-8503 Japan E-mail: swatanab(at)pi.titech.ac.jp P&I Lab., Tokyo Institute of Technology 4259 Nagatsuta, Midoriku, Yokohama, 226-8503 Japan July 22, 2013",
"Header 5": "Keywords: Cross Validation, Information Criterion, Singular Learning Ma- chines 1"
}
|
{
"chunk_type": "body"
}
|
### 1 Introduction
|
{
"id": "1004.2316",
"title": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable\n Information Criterion in Singular Learning Theory",
"categories": [
"cs.LG"
]
}
|
{
"Header 1": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable Information Criterion in Singular Learning Theory",
"Header 2": null,
"Header 3": "1 Introduction",
"Header 4": null,
"Header 5": "A statistical model or a learning machine is called regular if the map from a param- eter to a probability distribution is one-to-one and if its Fisher information matrix is positive definite. If a model is not regular, then it is called singular. Although asymptotic statistical theory of regular statistical models was made in statistics, that of singular models is not yet. Many learning machines are not regular but singular [Watanabe 07], for example, artificial neural networks [Watanabe 01b], reduced rank regressions [Aoyagi & Watanabe 05], normal mixtures[Yamazaki & Watanabe 03], Bayes networks [Rusakov & Geiger 05, Zwiernik 10], binomial mixtures, Boltzmann machines, and hidden Markov models. If a statistical model or a learning machine contains hierarchical structure, hidden variables, or a grammatical rule, then it is singular in general. The statistical properties of singular models have been left unknown in statis- tics and information science, because it was difficult to analyze a singular likelihood function [Hartigan 85, Watanabe 95]. In fact, in singular statistical models, the maximum likelihood estimator does not satisfy asymptotic normality, resulting that AIC is not equal to the average generalization error [Hagiwara 02], and that the Bayes information criterion (BIC) is not equal to the Bayes marginal likelihood [Watanabe 01a]. In singular models, the maximum likelihood estimator often di- verges or even if it does not diverge, it makes the generalization error very large, hence the maximum likelihood method is not appropriate for singular models. Recently, new statistical learning theory has been established based on algebraic geometrical method [Watanabe 01a, Drton et al. 09, Watanabe 09, Watanabe 10a, Watanabe 10c, Lin 10]. In singular learning theory, the log likelihood function can be made to be a common standard form even if it contains singularities. 
As a result, it was proved that the concepts of AIC and BIC are generalized onto singular statistical models. It was proved that the asymptotic Bayes marginal likelihood is determined by the log canonical threshold [Watanabe 01a, Drton et al. 09, Lin 10] and that the widely applicable information criterion is equal to the average Bayes generalization error [Watanabe 09, Watanabe 10a, Watanabe 10c]. The cross validation is the alternative method to estimate the generalization error [Mosier 51, Stone 77, Geisser 75]. By the definition, the average of the cross validation is equal to the average generalization error in both regular and singular models. In regular statistical models, it is well known that the cross validation leaving one out is asymptotically equivalent to AIC [Akaike 74] in the maximum likelihood method [Stone 77, Linhart 86, Browne 00]. However, the asymptotic be- havior of the cross validation in singular models has not been clarified. In this paper, in singular statistical models, we theoretically compare the Bayes cross validation, the widely applicable information criterion, and the Bayes general- ization error, and prove two theorems. Firstly, we show that Bayes cross validation loss is asymptotically equivalent to the widely applicable information criterion as random variables. Therefore model selection and hyperparameter optimization using them are equivalent to each other. Secondly, we also show that the sum of the Bayes 2"
}
|
{
"chunk_type": "body"
}
|
##### A statistical model or a learning machine is called regular if the map from a parameter to a probability distribution is one-to-one and if its Fisher information matrix is positive definite. If a model is not regular, then it is called singular. Although the asymptotic statistical theory of regular statistical models is well established in statistics, that of singular models has not yet been established. Many learning machines are not regular but singular [Watanabe 07], for example, artificial neural networks [Watanabe 01b], reduced rank regressions [Aoyagi & Watanabe 05], normal mixtures [Yamazaki & Watanabe 03], Bayes networks [Rusakov & Geiger 05, Zwiernik 10], binomial mixtures, Boltzmann machines, and hidden Markov models. If a statistical model or a learning machine contains hierarchical structure, hidden variables, or a grammatical rule, then it is singular in general. The statistical properties of singular models have been left unknown in statistics and information science, because it was difficult to analyze a singular likelihood function [Hartigan 85, Watanabe 95]. In fact, in singular statistical models, the maximum likelihood estimator does not satisfy asymptotic normality, with the result that AIC is not equal to the average generalization error [Hagiwara 02], and that the Bayes information criterion (BIC) is not equal to the Bayes marginal likelihood [Watanabe 01a]. In singular models, the maximum likelihood estimator often diverges, and even when it does not diverge, it makes the generalization error very large; hence the maximum likelihood method is not appropriate for singular models. Recently, a new statistical learning theory has been established based on algebraic geometrical methods [Watanabe 01a, Drton et al. 09, Watanabe 09, Watanabe 10a, Watanabe 10c, Lin 10]. In singular learning theory, the log likelihood function can be made to take a common standard form even if it contains singularities. 
As a result, it was proved that the concepts of AIC and BIC can be generalized to singular statistical models: the asymptotic Bayes marginal likelihood is determined by the log canonical threshold [Watanabe 01a, Drton et al. 09, Lin 10], and the expectation of the widely applicable information criterion is asymptotically equal to the average Bayes generalization error [Watanabe 09, Watanabe 10a, Watanabe 10c]. Cross validation is an alternative method to estimate the generalization error [Mosier 51, Stone 77, Geisser 75]. By definition, the average of the cross validation is equal
|
{
"id": "1004.2316",
"title": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable\n Information Criterion in Singular Learning Theory",
"categories": [
"cs.LG"
]
}
|
{
"Header 1": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable Information Criterion in Singular Learning Theory",
"Header 2": null,
"Header 3": "1 Introduction",
"Header 4": null,
"Header 5": "A statistical model or a learning machine is called regular if the map from a param- eter to a probability distribution is one-to-one and if its Fisher information matrix is positive definite. If a model is not regular, then it is called singular. Although asymptotic statistical theory of regular statistical models was made in statistics, that of singular models is not yet. Many learning machines are not regular but singular [Watanabe 07], for example, artificial neural networks [Watanabe 01b], reduced rank regressions [Aoyagi & Watanabe 05], normal mixtures[Yamazaki & Watanabe 03], Bayes networks [Rusakov & Geiger 05, Zwiernik 10], binomial mixtures, Boltzmann machines, and hidden Markov models. If a statistical model or a learning machine contains hierarchical structure, hidden variables, or a grammatical rule, then it is singular in general. The statistical properties of singular models have been left unknown in statis- tics and information science, because it was difficult to analyze a singular likelihood function [Hartigan 85, Watanabe 95]. In fact, in singular statistical models, the maximum likelihood estimator does not satisfy asymptotic normality, resulting that AIC is not equal to the average generalization error [Hagiwara 02], and that the Bayes information criterion (BIC) is not equal to the Bayes marginal likelihood [Watanabe 01a]. In singular models, the maximum likelihood estimator often di- verges or even if it does not diverge, it makes the generalization error very large, hence the maximum likelihood method is not appropriate for singular models. Recently, new statistical learning theory has been established based on algebraic geometrical method [Watanabe 01a, Drton et al. 09, Watanabe 09, Watanabe 10a, Watanabe 10c, Lin 10]. In singular learning theory, the log likelihood function can be made to be a common standard form even if it contains singularities. 
As a result, it was proved that the concepts of AIC and BIC are generalized onto singular statistical models. It was proved that the asymptotic Bayes marginal likelihood is determined by the log canonical threshold [Watanabe 01a, Drton et al. 09, Lin 10] and that the widely applicable information criterion is equal to the average Bayes generalization error [Watanabe 09, Watanabe 10a, Watanabe 10c]. The cross validation is the alternative method to estimate the generalization error [Mosier 51, Stone 77, Geisser 75]. By the definition, the average of the cross validation is equal to the average generalization error in both regular and singular models. In regular statistical models, it is well known that the cross validation leaving one out is asymptotically equivalent to AIC [Akaike 74] in the maximum likelihood method [Stone 77, Linhart 86, Browne 00]. However, the asymptotic be- havior of the cross validation in singular models has not been clarified. In this paper, in singular statistical models, we theoretically compare the Bayes cross validation, the widely applicable information criterion, and the Bayes general- ization error, and prove two theorems. Firstly, we show that Bayes cross validation loss is asymptotically equivalent to the widely applicable information criterion as random variables. Therefore model selection and hyperparameter optimization using them are equivalent to each other. Secondly, we also show that the sum of the Bayes 2"
}
|
{
"chunk_type": "body"
}
|
to the average generalization error in both regular and singular models. In regular statistical models, it is well known that leave-one-out cross validation is asymptotically equivalent to AIC [Akaike 74] in the maximum likelihood method [Stone 77, Linhart 86, Browne 00]. However, the asymptotic behavior of the cross validation in singular models has not been clarified. In this paper, for singular statistical models, we theoretically compare the Bayes cross validation, the widely applicable information criterion, and the Bayes generalization error, and prove two theorems. Firstly, we show that the Bayes cross validation loss is asymptotically equivalent to the widely applicable information criterion as random variables. Therefore, model selection and hyperparameter optimization using them are equivalent to each other. Secondly, we also show that the sum of the Bayes
|
{
"id": "1004.2316",
"title": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable\n Information Criterion in Singular Learning Theory",
"categories": [
"cs.LG"
]
}
|
{
"Header 1": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable Information Criterion in Singular Learning Theory",
"Header 2": null,
"Header 3": "1 Introduction",
"Header 4": null,
"Header 5": "A statistical model or a learning machine is called regular if the map from a param- eter to a probability distribution is one-to-one and if its Fisher information matrix is positive definite. If a model is not regular, then it is called singular. Although asymptotic statistical theory of regular statistical models was made in statistics, that of singular models is not yet. Many learning machines are not regular but singular [Watanabe 07], for example, artificial neural networks [Watanabe 01b], reduced rank regressions [Aoyagi & Watanabe 05], normal mixtures[Yamazaki & Watanabe 03], Bayes networks [Rusakov & Geiger 05, Zwiernik 10], binomial mixtures, Boltzmann machines, and hidden Markov models. If a statistical model or a learning machine contains hierarchical structure, hidden variables, or a grammatical rule, then it is singular in general. The statistical properties of singular models have been left unknown in statis- tics and information science, because it was difficult to analyze a singular likelihood function [Hartigan 85, Watanabe 95]. In fact, in singular statistical models, the maximum likelihood estimator does not satisfy asymptotic normality, resulting that AIC is not equal to the average generalization error [Hagiwara 02], and that the Bayes information criterion (BIC) is not equal to the Bayes marginal likelihood [Watanabe 01a]. In singular models, the maximum likelihood estimator often di- verges or even if it does not diverge, it makes the generalization error very large, hence the maximum likelihood method is not appropriate for singular models. Recently, new statistical learning theory has been established based on algebraic geometrical method [Watanabe 01a, Drton et al. 09, Watanabe 09, Watanabe 10a, Watanabe 10c, Lin 10]. In singular learning theory, the log likelihood function can be made to be a common standard form even if it contains singularities. 
As a result, it was proved that the concepts of AIC and BIC are generalized onto singular statistical models. It was proved that the asymptotic Bayes marginal likelihood is determined by the log canonical threshold [Watanabe 01a, Drton et al. 09, Lin 10] and that the widely applicable information criterion is equal to the average Bayes generalization error [Watanabe 09, Watanabe 10a, Watanabe 10c]. The cross validation is the alternative method to estimate the generalization error [Mosier 51, Stone 77, Geisser 75]. By the definition, the average of the cross validation is equal to the average generalization error in both regular and singular models. In regular statistical models, it is well known that the cross validation leaving one out is asymptotically equivalent to AIC [Akaike 74] in the maximum likelihood method [Stone 77, Linhart 86, Browne 00]. However, the asymptotic be- havior of the cross validation in singular models has not been clarified. In this paper, in singular statistical models, we theoretically compare the Bayes cross validation, the widely applicable information criterion, and the Bayes general- ization error, and prove two theorems. Firstly, we show that Bayes cross validation loss is asymptotically equivalent to the widely applicable information criterion as random variables. Therefore model selection and hyperparameter optimization using them are equivalent to each other. Secondly, we also show that the sum of the Bayes 2"
}
|
{
"chunk_type": "body"
}
|
-----
|
{
"id": "1004.2316",
"title": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable\n Information Criterion in Singular Learning Theory",
"categories": [
"cs.LG"
]
}
|
{
"Header 1": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable Information Criterion in Singular Learning Theory",
"Header 2": null,
"Header 3": "1 Introduction",
"Header 4": null,
"Header 5": "A statistical model or a learning machine is called regular if the map from a param- eter to a probability distribution is one-to-one and if its Fisher information matrix is positive definite. If a model is not regular, then it is called singular. Although asymptotic statistical theory of regular statistical models was made in statistics, that of singular models is not yet. Many learning machines are not regular but singular [Watanabe 07], for example, artificial neural networks [Watanabe 01b], reduced rank regressions [Aoyagi & Watanabe 05], normal mixtures[Yamazaki & Watanabe 03], Bayes networks [Rusakov & Geiger 05, Zwiernik 10], binomial mixtures, Boltzmann machines, and hidden Markov models. If a statistical model or a learning machine contains hierarchical structure, hidden variables, or a grammatical rule, then it is singular in general. The statistical properties of singular models have been left unknown in statis- tics and information science, because it was difficult to analyze a singular likelihood function [Hartigan 85, Watanabe 95]. In fact, in singular statistical models, the maximum likelihood estimator does not satisfy asymptotic normality, resulting that AIC is not equal to the average generalization error [Hagiwara 02], and that the Bayes information criterion (BIC) is not equal to the Bayes marginal likelihood [Watanabe 01a]. In singular models, the maximum likelihood estimator often di- verges or even if it does not diverge, it makes the generalization error very large, hence the maximum likelihood method is not appropriate for singular models. Recently, new statistical learning theory has been established based on algebraic geometrical method [Watanabe 01a, Drton et al. 09, Watanabe 09, Watanabe 10a, Watanabe 10c, Lin 10]. In singular learning theory, the log likelihood function can be made to be a common standard form even if it contains singularities. 
As a result, it was proved that the concepts of AIC and BIC are generalized onto singular statistical models. It was proved that the asymptotic Bayes marginal likelihood is determined by the log canonical threshold [Watanabe 01a, Drton et al. 09, Lin 10] and that the widely applicable information criterion is equal to the average Bayes generalization error [Watanabe 09, Watanabe 10a, Watanabe 10c]. The cross validation is the alternative method to estimate the generalization error [Mosier 51, Stone 77, Geisser 75]. By the definition, the average of the cross validation is equal to the average generalization error in both regular and singular models. In regular statistical models, it is well known that the cross validation leaving one out is asymptotically equivalent to AIC [Akaike 74] in the maximum likelihood method [Stone 77, Linhart 86, Browne 00]. However, the asymptotic be- havior of the cross validation in singular models has not been clarified. In this paper, in singular statistical models, we theoretically compare the Bayes cross validation, the widely applicable information criterion, and the Bayes general- ization error, and prove two theorems. Firstly, we show that Bayes cross validation loss is asymptotically equivalent to the widely applicable information criterion as random variables. Therefore model selection and hyperparameter optimization using them are equivalent to each other. Secondly, we also show that the sum of the Bayes 2"
}
|
{
"chunk_type": "body"
}
|
| Variable | Name | eq. number |
|---|---|---|
| E_w[·] | posterior average | eq.(1) |
| E_w^{(i)}[·] | posterior average without X_i | eq.(19) |
| L(w) | log loss function | eq.(6) |
| L_0 | minimum loss | eq.(8) |
| L_n | empirical loss | eq.(9) |
| B_gL(n) | Bayes generalization loss | eq.(2) |
| B_tL(n) | Bayes training loss | eq.(3) |
| C_vL(n) | cross validation loss | eq.(20) |
| B_g(n) | Bayes generalization error | eq.(10) |
| B_t(n) | Bayes training error | eq.(11) |
| C_v(n) | cross validation error | eq.(35) |
| V(n) | functional variance | eq.(4) |
| Y_k(n) | kth functional cumulant | eq.(22) |
| WAIC(n) | WAIC | eq.(5) |
| λ | log canonical threshold | eq.(36) |
| ν | singular fluctuation | eq.(37) |

Table 1: Variable, Name, and Equation Number

cross validation error and the Bayes generalization error is asymptotically equal to 2λ/n, where λ is the log canonical threshold and n is the number of training samples. Because the log canonical threshold is a birational invariant of the statistical model, the relation between the Bayes cross validation and the Bayes generalization error is determined by the algebraic geometrical structure of the statistical model.
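Given draws from the posterior, the quantities in Table 1 that enter WAIC can be estimated directly. Below is a NumPy sketch; the `(S, n)` array convention for `logp`, holding log p(X_i | w_s) for S posterior draws w_s, is an assumption of this sketch, and the combination B_tL(n) + V(n)/n follows the usual inverse-temperature-one definition of WAIC:

```python
import numpy as np

def waic(logp):
    """WAIC from logp of shape (S, n), where logp[s, i] = log p(X_i | w_s).

    Bayes training loss:  B_tL(n) = -(1/n) * sum_i log E_w[p(X_i|w)]
    Functional variance:  V(n)    = sum_i Var_w[log p(X_i|w)]
    WAIC(n) = B_tL(n) + V(n)/n
    """
    S, n = logp.shape
    m = logp.max(axis=0)                                    # log-sum-exp trick
    log_mean_p = m + np.log(np.exp(logp - m).mean(axis=0))  # log E_w[p(X_i|w)]
    training_loss = -log_mean_p.mean()
    functional_variance = logp.var(axis=0).sum()
    return training_loss + functional_variance / n
```

With a degenerate posterior (all draws identical) the functional variance vanishes and WAIC reduces to the Bayes training loss, as the definition requires.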
|
{
"id": "1004.2316",
"title": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable\n Information Criterion in Singular Learning Theory",
"categories": [
"cs.LG"
]
}
|
{
"Header 1": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable Information Criterion in Singular Learning Theory",
"Header 2": null,
"Header 3": "1 Introduction",
"Header 4": null,
"Header 5": "Variable Name eq. number E w [ ] posterior average eq.(1) E [(] w [i][)] [[ ]] posterior average without X i eq.(19) L(w) log loss function eq.(6) L 0 minimum loss eq.(8) L n emiprical loss eq.(9) B g L(n) Bayes generalization loss eq.(2) B t L(n) Bayes training loss eq.(3) C v L(n) cross validation loss eq.(20) B g (n) Bayes generalization error eq.(10) B t (n) Bayes training error eq.(11) C v (n) cross validation error eq.(35) V (n) functional variance eq.(4) Y k (n) kth functional cumulant eq.(22) WAIC(n) WAIC eq.(5) λ log canonical threshold eq.(36) ν singular fluctuation eq.(37) Table 1: Variable, Name, and Equation Number cross validation error and the Bayes generalization error is asymptotically equal to 2λ/n, where λ is the log canonical threshold and n is the number of training sam- ples. Because the log canonical threshold is a birational invariant of the statistical model, the relation between the Bayes cross validation and the Bayes generalization error is determined by the algebraic geometrical structure of the statistical model."
}
|
{
"chunk_type": "body"
}
|
### 2 Bayes Learning Theory
##### In this section, we summarize Bayes learning theory for singular learning machines. The results summarized in this section are well known and form the fundamental basis of this paper. Table 1 lists the variables, their names, and the corresponding equation numbers used in this paper.
|
{
"id": "1004.2316",
"title": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable\n Information Criterion in Singular Learning Theory",
"categories": [
"cs.LG"
]
}
|
{
"Header 1": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable Information Criterion in Singular Learning Theory",
"Header 2": null,
"Header 3": "2 Bayes Learning Theory",
"Header 4": null,
"Header 5": "In this section, we summarize Bayes learning theory for singular learning machines. The results written in this section are well known and the fundamental basis of this paper. Table 1 shows variables, names, and equation numbers in this paper."
}
|
{
"chunk_type": "body"
}
|
#### 2.1 Framework of Bayes Learning
##### Firstly, we explain the framework of Bayes learning. Let q(x) be a probability density function on the N-dimensional real Euclidean space R^N. The training samples and the testing sample are respectively denoted by random variables X_1, X_2, ..., X_n and X, which are independently subject to the same probability distribution q(x)dx. The probability distribution q(x)dx is sometimes called the true distribution. A statistical model or a learning machine is defined by a probability density function p(x|w) of x ∈ R^N for a given parameter w ∈ W ⊂ R^d, where W is a set of
-----
|
{
"id": "1004.2316",
"title": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable\n Information Criterion in Singular Learning Theory",
"categories": [
"cs.LG"
]
}
|
{
"Header 1": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable Information Criterion in Singular Learning Theory",
"Header 2": null,
"Header 3": "2 Bayes Learning Theory",
"Header 4": "2.1 Framework of Bayes Learning",
"Header 5": "Firstly, we explain the framework of Bayes learning. Let q(x) be a probability density function on N dimensional real Euclidean space R [N] . The training samples and the testing sample are respectively denoted by random variables X 1, X 2, ..., X n and X, which are independently subject to the same probability distribution as q(x)dx. The probability distribution q(x)dx is sometimes called the true distribution. A statistical model or a learning machine is defined by a probability density function p(x|w) of x ∈ R [N] for a given parameter w ∈ W ⊂ R [d], where W is a set of 3"
}
|
{
"chunk_type": "body"
}
|
##### all parameters. In Bayes estimation, we prepare a probability density function ϕ(w) on W. Although ϕ(w) is called a prior distribution, it does not necessarily represent a priori knowledge of the parameter in general. For a given function f(w) on W, its expectation value with respect to the posterior distribution is defined by
|
{
"id": "1004.2316",
"title": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable\n Information Criterion in Singular Learning Theory",
"categories": [
"cs.LG"
]
}
|
{
"Header 1": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable Information Criterion in Singular Learning Theory",
"Header 2": null,
"Header 3": "2 Bayes Learning Theory",
"Header 4": "2.1 Framework of Bayes Learning",
"Header 5": "all parameters. In Bayes estimation, we prepare a probability density function ϕ(w) on W . Although ϕ(w) is called a prior distribution, it does not necessary represent an a priori knowledge of the parameter, in general. For a given function f (w) on W, its expectation value with respect to the pos- terior distribution is defined by"
}
|
{
"chunk_type": "body"
}
|
##### E_w[f(w)] = [ ∫ f(w) ∏_{i=1}^{n} p(X_i|w)^β ϕ(w) dw ] / [ ∫ ∏_{i=1}^{n} p(X_i|w)^β ϕ(w) dw ],  (1)
|
{
"id": "1004.2316",
"title": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable\n Information Criterion in Singular Learning Theory",
"categories": [
"cs.LG"
]
}
|
{
"Header 1": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable Information Criterion in Singular Learning Theory",
"Header 2": null,
"Header 3": "2 Bayes Learning Theory",
"Header 4": "2.1 Framework of Bayes Learning",
"Header 5": "E_w[f(w)] = [ ∫ f(w) ∏_{i=1}^{n} p(X_i|w)^β ϕ(w) dw ] / [ ∫ ∏_{i=1}^{n} p(X_i|w)^β ϕ(w) dw ],  (1)"
}
|
{
"chunk_type": "body"
}
|
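As an illustration, the tempered posterior expectation of eq.(1) can be approximated numerically. The sketch below is our own toy example, not from the paper: it assumes a one-dimensional normal model p(x|w) = N(x; w, 1) with a standard normal prior ϕ(w), and evaluates the ratio of integrals in eq.(1) by quadrature on a parameter grid.

```python
import math

def posterior_expectation(f, data, beta=1.0):
    """Approximate E_w[f(w)] of eq.(1) by quadrature on a 1-D grid.

    Toy assumptions (ours, not the paper's): model p(x|w) = N(x; w, 1),
    prior phi(w) = N(w; 0, 1), parameter grid w in [-10, 10].
    """
    grid = [i * 0.01 - 10.0 for i in range(2001)]
    def log_p(x, w):
        return -0.5 * (x - w) ** 2 - 0.5 * math.log(2 * math.pi)
    # unnormalized log posterior: beta * sum_i log p(X_i|w) + log phi(w)
    # (additive constants cancel in the ratio of eq.(1))
    log_post = [beta * sum(log_p(x, w) for x in data) - 0.5 * w * w
                for w in grid]
    m = max(log_post)           # log-sum-exp shift for numerical stability
    wts = [math.exp(lp - m) for lp in log_post]
    z = sum(wts)
    return sum(f(w) * wt for w, wt in zip(grid, wts)) / z

# Sanity check: for this conjugate case with beta = 1, the posterior
# mean of w is (sum of data) / (n + 1).
data = [1.0, 2.0, 0.5]
est = posterior_expectation(lambda w: w, data)
```

For this conjugate toy model the quadrature can be checked against the closed-form posterior mean; in general (and in the singular cases the paper studies) eq.(1) has no closed form and must be approximated, e.g. by MCMC.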
##### where 0 < β < ∞ is the inverse temperature. The case β = 1 is the most important because it corresponds to conventional Bayes estimation. The Bayes predictive distribution is defined by p*(x) ≡ E_w[p(x|w)]. In Bayes learning theory, the following random variables are important. The Bayes generalization loss B_gL(n) and the Bayes training loss B_tL(n) are respectively defined by B_gL(n) = −E_X[log p*(X)],  (2)
|
{
"id": "1004.2316",
"title": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable\n Information Criterion in Singular Learning Theory",
"categories": [
"cs.LG"
]
}
|
{
"Header 1": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable Information Criterion in Singular Learning Theory",
"Header 2": null,
"Header 3": "2 Bayes Learning Theory",
"Header 4": "2.1 Framework of Bayes Learning",
"Header 5": "where 0 < β < ∞ is the inverse temperature. The case β = 1 is most important because it corresponds to the conventional Bayes estimation. The Bayes predictive distribution is defined by p [∗] (x) ≡ E w [p(x|w)]. In Bayes learning theory, the following random variables are important. The Bayes generalization loss B g L(n) and the Bayes training loss B t L(n) are respectively de- fined by B g L(n) = −E X [log p [∗] (X)], (2)"
}
|
{
"chunk_type": "body"
}
|
##### B_tL(n) = −(1/n) ∑_{i=1}^{n} log p*(X_i),  (3)
|
{
"id": "1004.2316",
"title": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable\n Information Criterion in Singular Learning Theory",
"categories": [
"cs.LG"
]
}
|
{
"Header 1": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable Information Criterion in Singular Learning Theory",
"Header 2": null,
"Header 3": "2 Bayes Learning Theory",
"Header 4": "2.1 Framework of Bayes Learning",
"Header 5": "B t L(n) = −n [1] � log p [∗] (X i ), (3)"
}
|
{
"chunk_type": "body"
}
|
##### where E_X[ · ] shows the expectation value over X. The functional variance is defined by
|
{
"id": "1004.2316",
"title": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable\n Information Criterion in Singular Learning Theory",
"categories": [
"cs.LG"
]
}
|
{
"Header 1": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable Information Criterion in Singular Learning Theory",
"Header 2": null,
"Header 3": "2 Bayes Learning Theory",
"Header 4": "2.1 Framework of Bayes Learning",
"Header 5": "where E X [ ] shows the expectation value over X. The functional variance is defined by"
}
|
{
"chunk_type": "body"
}
|
##### V(n) = ∑_{i=1}^{n} { E_w[(log p(X_i|w))^2] − E_w[log p(X_i|w)]^2 },  (4)
|
{
"id": "1004.2316",
"title": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable\n Information Criterion in Singular Learning Theory",
"categories": [
"cs.LG"
]
}
|
{
"Header 1": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable Information Criterion in Singular Learning Theory",
"Header 2": null,
"Header 3": "2 Bayes Learning Theory",
"Header 4": "2.1 Framework of Bayes Learning",
"Header 5": "V (n) = ��E w [(log p(X i |w)) [2] ] − E w [log p(X i |w)] [2] [�], (4)"
}
|
{
"chunk_type": "body"
}
|
##### which shows the fluctuation of the posterior distribution. In previous papers [Watanabe 09, Watanabe 10a, Watanabe 10b], we defined the widely applicable information criterion WAIC(n) ≡ B_tL(n) + (β/n) V(n),  (5) and proved that E[B_gL(n)] = E[WAIC(n)] + o(1/n), where E[ · ] shows the expectation value over the sets of training samples.
|
{
"id": "1004.2316",
"title": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable\n Information Criterion in Singular Learning Theory",
"categories": [
"cs.LG"
]
}
|
{
"Header 1": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable Information Criterion in Singular Learning Theory",
"Header 2": null,
"Header 3": "2 Bayes Learning Theory",
"Header 4": "2.1 Framework of Bayes Learning",
"Header 5": "which shows fluctuation of the posterior distribution. In previous papers [Watanabe 09, Watanabe 10a, Watanabe 10b], we defined the widely applicable information crite- rion WAIC(n) ≡ B t L(n) + [β] (5) n [V][ (][n][)][,] and proved that E[B g L(n)] = E[WAIC(n)] + o( [1] n [)][,] where E[ ] shows the expectation value over the sets of training samples."
}
|
{
"chunk_type": "body"
}
|
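Given draws w_1, ..., w_S from the posterior distribution, eq.(5) can be evaluated from the matrix of pointwise log-likelihood values. The following is a minimal sketch of our own, assuming posterior samples are already available; sample averages stand in for the exact posterior expectations E_w[·] of eq.(1), so this is a Monte Carlo approximation rather than the exact integral.

```python
import math

def waic(logp, beta=1.0):
    """Evaluate WAIC(n) of eq.(5) from logp[s][i] = log p(X_i | w_s),
    where w_1, ..., w_S are draws from the posterior distribution.
    """
    S, n = len(logp), len(logp[0])
    bt = 0.0   # Bayes training loss B_tL(n), eq.(3)
    v = 0.0    # functional variance V(n), eq.(4)
    for i in range(n):
        col = [logp[s][i] for s in range(S)]
        m = max(col)
        # E_w[p(X_i|w)] via a log-sum-exp shift for numerical stability
        mean_p = sum(math.exp(c - m) for c in col) / S
        bt -= (m + math.log(mean_p)) / n
        mean_l = sum(col) / S
        mean_l2 = sum(c * c for c in col) / S
        v += mean_l2 - mean_l ** 2
    return bt + beta * v / n

# Degenerate sanity check: a constant log-likelihood gives V(n) = 0,
# so WAIC(n) reduces to the Bayes training loss B_tL(n).
demo = [[-1.0] * 3 for _ in range(4)]
```

Note that only training samples enter this computation, which is the practical appeal of WAIC discussed later in the paper.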
#### 2.2 Notations
##### Secondly, we explain several notations.
-----
|
{
"id": "1004.2316",
"title": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable\n Information Criterion in Singular Learning Theory",
"categories": [
"cs.LG"
]
}
|
{
"Header 1": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable Information Criterion in Singular Learning Theory",
"Header 2": null,
"Header 3": "2 Bayes Learning Theory",
"Header 4": "2.2 Notations",
"Header 5": "Secondly, we explain several notations. 4"
}
|
{
"chunk_type": "body"
}
|
##### The log loss function L(w) and the entropy S of the true distribution are respectively defined by L(w) = −E_X[log p(X|w)],  (6) S = −E_X[log q(X)].  (7) Then L(w) = S + D(q||p_w), where D(q||p_w) is the Kullback-Leibler distance defined by D(q||p_w) = ∫ q(x) log( q(x) / p(x|w) ) dx. Then D(q||p_w) ≥ 0, hence L(w) ≥ S. Moreover, L(w) = S if and only if p(x|w) = q(x). In this paper, we assume that there exists a parameter w_0 ∈ W which minimizes L(w), L(w_0) = min_{w∈W} L(w). Note that such a w_0 is not unique in general, because the map w ↦ p(x|w) is not one-to-one in general in singular learning machines. We also assume that, for an arbitrary w that satisfies L(w) = L(w_0), p(x|w) is the same probability density function. Let p_0(x) be this unique probability density function. In general, the set W_0 = {w ∈ W ; p(x|w) = p_0(x)} is not a set consisting of a single element but an analytic set or an algebraic set with singularities. For notational simplicity, the minimum log loss L_0 and the empirical log loss L_n are respectively defined by L_0 = −E_X[log p_0(X)],  (8)
|
{
"id": "1004.2316",
"title": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable\n Information Criterion in Singular Learning Theory",
"categories": [
"cs.LG"
]
}
|
{
"Header 1": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable Information Criterion in Singular Learning Theory",
"Header 2": null,
"Header 3": "2 Bayes Learning Theory",
"Header 4": "2.2 Notations",
"Header 5": "The log loss function L(w) and the entropy S of the true distribution are respec- tively defined by L(w) = −E X [log p(X|w)], (6) S = −E X [log q(X)]. (7) Then L(w) = S + D(q||p w ), where D(q||p w ) is Kullback-Leibler distance defined by D(q||p w ) = q(x) log [q][(][x][)] � p(x|w) [dx.] Then D(q||p w ) ≥ 0, hence L(w) ≥ S. Moreover, L(w) = S if and only if p(x|w) = q(x). In this paper, we assume that there exists a parameter w 0 ∈ W which minimizes L(w), L(w 0 ) = min w∈W [L][(][w][)][.] Note that such w 0 is not unique in general, because the map w �→ p(x|w) is not one-to-one in general in singular learning machines. We also assume that, for an arbitrary w that satisfies L(w) = L(w 0 ), p(x|w) is the same probability density function. Let p 0 (x) be such a unique probability density function. In general, the set W 0 = {w ∈ W ; p(x|w) = p 0 (x)} is not a set of single element but an analytic set or an algebraic set with singularities. For simple notations, the minimum log loss L 0 and the empirical log loss L n are respectively defined by L 0 = −E X [log p 0 (X)], (8)"
}
|
{
"chunk_type": "body"
}
|
##### L_n = −(1/n) ∑_{i=1}^{n} log p_0(X_i).  (9)
|
{
"id": "1004.2316",
"title": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable\n Information Criterion in Singular Learning Theory",
"categories": [
"cs.LG"
]
}
|
{
"Header 1": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable Information Criterion in Singular Learning Theory",
"Header 2": null,
"Header 3": "2 Bayes Learning Theory",
"Header 4": "2.2 Notations",
"Header 5": "L n = −n [1] � log p 0 (X i ). (9)"
}
|
{
"chunk_type": "body"
}
|
##### By using these values, the Bayes generalization error B_g(n) and the Bayes training error B_t(n) are respectively defined by B_g(n) = B_gL(n) − L_0,  (10) B_t(n) = B_tL(n) − L_n.  (11) In general, both B_g(n) and B_t(n) converge to zero in probability when n → ∞. Let us define the log density ratio function f(x, w) = log( p_0(x) / p(x|w) ), which is equivalent to p(x|w) = p_0(x) exp(−f(x, w)).
-----
|
{
"id": "1004.2316",
"title": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable\n Information Criterion in Singular Learning Theory",
"categories": [
"cs.LG"
]
}
|
{
"Header 1": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable Information Criterion in Singular Learning Theory",
"Header 2": null,
"Header 3": "2 Bayes Learning Theory",
"Header 4": "2.2 Notations",
"Header 5": "By using these values, Bayes generalization error B g (n) and Bayes training error B t (n) are respectively defined by B g (n) = B g L(n) − L 0, (10) B t (n) = B t L(n) − L n . (11) In general, both B g (n) and B t (n) converge to zero in probability, when n →∞. Let us define a log density ratio function, f (x, w) = log [p] [0] [(][x][)] p(x|w) [,] which is equivalent to p(x|w) = p 0 (x) exp(−f (x, w)). 5"
}
|
{
"chunk_type": "body"
}
|
##### Then, it is immediately derived that B_g(n) = −E_X[log E_w[exp(−f(X, w))]],  (12)
|
{
"id": "1004.2316",
"title": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable\n Information Criterion in Singular Learning Theory",
"categories": [
"cs.LG"
]
}
|
{
"Header 1": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable Information Criterion in Singular Learning Theory",
"Header 2": null,
"Header 3": "2 Bayes Learning Theory",
"Header 4": "2.2 Notations",
"Header 5": "Then, it is immediately derived that B g (n) = −E X [log E w [exp(−f (X, w))]], (12)"
}
|
{
"chunk_type": "body"
}
|
##### B_t(n) = −(1/n) ∑_{i=1}^{n} log E_w[exp(−f(X_i, w))],  (13)
|
{
"id": "1004.2316",
"title": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable\n Information Criterion in Singular Learning Theory",
"categories": [
"cs.LG"
]
}
|
{
"Header 1": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable Information Criterion in Singular Learning Theory",
"Header 2": null,
"Header 3": "2 Bayes Learning Theory",
"Header 4": "2.2 Notations",
"Header 5": "B t (n) = −n [1] � log E w [exp(−f (X i, w))], (13)"
}
|
{
"chunk_type": "body"
}
|
##### V(n) = ∑_{i=1}^{n} { E_w[f(X_i, w)^2] − E_w[f(X_i, w)]^2 }.  (14)
|
{
"id": "1004.2316",
"title": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable\n Information Criterion in Singular Learning Theory",
"categories": [
"cs.LG"
]
}
|
{
"Header 1": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable Information Criterion in Singular Learning Theory",
"Header 2": null,
"Header 3": "2 Bayes Learning Theory",
"Header 4": "2.2 Notations",
"Header 5": "V (n) = ��E w [f (X i, w) [2] ] − E w [f (X i, w) [2] ]�. (14)"
}
|
{
"chunk_type": "body"
}
|
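Writing V(n) in terms of f(X_i, w), as in eq.(14), changes nothing relative to eq.(4): f(X_i, w) = log p_0(X_i) − log p(X_i|w) differs from −log p(X_i|w) only by a constant in w, which cancels inside a variance. A quick numerical check of our own, using synthetic stand-in numbers rather than a fitted model:

```python
import random

random.seed(0)
S, n = 200, 5
# logp[s][i] plays the role of log p(X_i | w_s) for posterior draws w_s;
# logp0[i] plays the role of log p_0(X_i). Both are synthetic here.
logp = [[random.gauss(-1.0, 0.3) for _ in range(n)] for _ in range(S)]
logp0 = [random.gauss(-1.0, 0.3) for _ in range(n)]

def variance_term(vals):
    m1 = sum(vals) / len(vals)
    m2 = sum(x * x for x in vals) / len(vals)
    return m2 - m1 * m1

# eq.(4): posterior variance of log p(X_i|w), summed over i
v_eq4 = sum(variance_term([logp[s][i] for s in range(S)]) for i in range(n))
# eq.(14): posterior variance of f(X_i, w), summed over i
v_eq14 = sum(variance_term([logp0[i] - logp[s][i] for s in range(S)])
             for i in range(n))
assert abs(v_eq4 - v_eq14) < 1e-9
```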
##### Therefore, the problem of statistical learning is characterized by the function f(x, w). Definition. (1) If q(x) = p_0(x), then q(x) is said to be realizable by p(x|w). Otherwise, it is said to be unrealizable. (2) If the set W_0 consists of a single point w_0 and the Hessian matrix ∇∇L(w_0) is strictly positive definite, then q(x) is said to be regular for p(x|w). Otherwise, it is said to be singular for p(x|w). Bayes learning theory was studied in the realizable and regular case [Schwarz 78, Levin et al. 90, Aamari 93]. The concept of WAIC was found in the realizable and singular case [Watanabe 01a, Watanabe 09, Watanabe 10a] and in the unrealizable and regular case [Watanabe 10b]. It was generalized to the unrealizable and singular case [Watanabe 10d].
|
{
"id": "1004.2316",
"title": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable\n Information Criterion in Singular Learning Theory",
"categories": [
"cs.LG"
]
}
|
{
"Header 1": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable Information Criterion in Singular Learning Theory",
"Header 2": null,
"Header 3": "2 Bayes Learning Theory",
"Header 4": "2.2 Notations",
"Header 5": "Therefore, the problem of statistical learning is characterized by the function f (x, w). Definition. (1) If q(x) = p 0 (x), then q(x) is said to be realizable by p(x|w). If otherwise, then it is said to be unrealizable. (2) If the set W 0 consists of a single point w 0 and if the Hessian matrix ∇∇L(w 0 ) is strictly positive definite, then q(x) is said to be regular for p(x|w). If otherwise, then it is said to be singular for p(x|w). Bayes learning theory was studied in a realizable and regular case [Schwarz 78, Levin et al. 90, Aamari 93]. The concept WAIC was found in a realizable and sin- gular case [Watanabe 01a, Watanabe 09, Watanabe 10a] and an unrealizable and regular case [Watanabe 10b]. It was generalized for an unrealizable and singular case [Watanabe 10d]."
}
|
{
"chunk_type": "body"
}
|
#### 2.3 Singular Learning Theory
##### Thirdly, we summarize singular learning theory. In this paper, we assume the following conditions. Assumptions. (1) The set of parameters W is a compact set in R^d whose open kernel[1] is not the empty set. Its boundary is defined by several analytic functions; in other words, W = {w ∈ R^d ; π_1(w) ≥ 0, π_2(w) ≥ 0, ..., π_k(w) ≥ 0}. (2) The prior distribution satisfies ϕ(w) = ϕ_1(w)ϕ_2(w), where ϕ_1(w) ≥ 0 is an analytic function and ϕ_2(w) > 0 is a C^∞-class function. (3) Let s ≥ 8 and
|
{
"id": "1004.2316",
"title": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable\n Information Criterion in Singular Learning Theory",
"categories": [
"cs.LG"
]
}
|
{
"Header 1": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable Information Criterion in Singular Learning Theory",
"Header 2": null,
"Header 3": "2 Bayes Learning Theory",
"Header 4": "2.3 Singular Learning Theory",
"Header 5": "Thirdly, we summarize singular learning theory. In this paper, we assume the fol- lowing conditions. Assumptions. (1) The set of parameters W is a compact set in R [d] whose open kernel [1] is not the empty set. Its boundary is defined by several analytic functions, In other words, W = {w ∈ R [d] ; π 1 (w) ≥ 0, π 2 (w) ≥ 0, ..., π k (w) ≥ 0}. (2) The prior distribution satisfies ϕ(w) = ϕ 1 (w)ϕ 2 (w), where ϕ 1 (w) ≥ 0 is an analytic function and ϕ 2 (w) > 0 is a C [∞] -class function. (3) Let s ≥ 8 and"
}
|
{
"chunk_type": "body"
}
|
##### L^s(q) = { f(x) ; ∥f∥ ≡ ( ∫ |f(x)|^s q(x) dx )^{1/s} < ∞ } be a Banach space. The map W ∋ w ↦ f(x, w) is an L^s(q)-valued analytic function. (4) A nonnegative function K(w) is defined by K(w) = E_X[f(X, w)].
[1] The open kernel of a set A is the largest open set that is contained in A.
|
{
"id": "1004.2316",
"title": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable\n Information Criterion in Singular Learning Theory",
"categories": [
"cs.LG"
]
}
|
{
"Header 1": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable Information Criterion in Singular Learning Theory",
"Header 2": null,
"Header 3": "2 Bayes Learning Theory",
"Header 4": "2.3 Singular Learning Theory",
"Header 5": "L [s] (q) = {f (x); ∥f ∥≡ |f (x)| [s] q(x)dx < ∞} � [�] � be a Banach space. The map W ∋ w �→ f (x, w) is an L [s] (q) valued analytic function. (4) A nonnegative function K(w) is defined by K(w) = E X [f (X, w)]."
}
|
{
"chunk_type": "body"
}
|
##### The set W_ε is defined by W_ε = {w ∈ W ; K(w) ≤ ε}. It is assumed that there exist constants ε, c > 0 such that (∀w ∈ W_ε)  E_X[f(X, w)] ≥ c E_X[f(X, w)^2].  (15) In order to study cross validation in singular learning machines, we need singular learning theory. In previous papers, we obtained the following lemma. Lemma 1. Assume that the above assumptions (1), (2), (3), and (4) are satisfied. Then the following hold. (1) The three random variables nB_g(n), nB_t(n), and V(n) converge in law when n tends to infinity. Their expectation values also converge. (2) For k = 1, 2, 3, 4, we define M_k(n) ≡ E[ E_w[ E_X[ |f(X, w)|^k sup_{|σ|≤1+β} exp(σf(X, w)) ] ] ].  (16)
|
{
"id": "1004.2316",
"title": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable\n Information Criterion in Singular Learning Theory",
"categories": [
"cs.LG"
]
}
|
{
"Header 1": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable Information Criterion in Singular Learning Theory",
"Header 2": null,
"Header 3": "2 Bayes Learning Theory",
"Header 4": "2.3 Singular Learning Theory",
"Header 5": "The set W ǫ is defined by W ǫ = {w ∈ W ; K(w) ≤ ǫ}. It is assumed that there exist constants ǫ, c > 0 such that (∀w ∈ W ǫ ) E X [f (X, w)] ≥ c E X [f (X, w) [2] ]. (15) In order to study the cross validation in singular learning machines, we need singular learning theory. In the previous papers, we obtained the following lemma. Lemma 1. Assume that the above assumptions (1),(2),(3), and (4) are satisfied. Then the followings hold. (1) Three random variables nB g (n), nB t (n), and V (n) converge in law, when n tends to infinity. Also their expectation values converge. (2) For k = 1, 2, 3, 4, we define M k (n) ≡ E[E w [E X [| f (X, w)| [k] sup exp(σf (X, w))]]]. (16)"
}
|
{
"chunk_type": "body"
}
|
##### Then limsup_{n→∞} n^{k/2} M_k(n) < ∞.  (17) (3) The expectation value of the Bayes generalization loss is asymptotically equal to that of the widely applicable information criterion, E[B_gL(n)] = E[WAIC(n)] + o(1/n).  (18) (Proof) In the case when q(x) is realizable by and singular for p(x|w), this lemma was proved in [Watanabe 10a]. The proof of Lemma 1 (1) is given in Theorem 1 of [Watanabe 10a]. The result in Lemma 1 (2) can be proved in just the same way as Lemma 6 in [Watanabe 10a]. The proof of Lemma 1 (3) is given in Theorem 2 and the discussion of [Watanabe 10a]. In the case when q(x) is regular for and unrealizable by p(x|w), this lemma was proved in [Watanabe 10b]. These results were generalized in [Watanabe 10d]. (Q.E.D.) Remark. In ordinary learning machines, if the true distribution is regular for or realizable by a learning machine, then assumptions (1)-(4) are satisfied, so that the results of this paper hold. If the true distribution is singular for and unrealizable by a learning machine, the assumptions (1)-(4) are satisfied in some cases but not in others. If the true distribution is singular for and unrealizable by a learning machine and assumptions (1)-(4) are not satisfied, then the Bayes generalization and training errors may exhibit other asymptotic behaviors [Watanabe 10d].
-----
|
{
"id": "1004.2316",
"title": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable\n Information Criterion in Singular Learning Theory",
"categories": [
"cs.LG"
]
}
|
{
"Header 1": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable Information Criterion in Singular Learning Theory",
"Header 2": null,
"Header 3": "2 Bayes Learning Theory",
"Header 4": "2.3 Singular Learning Theory",
"Header 5": "Then limsup n→∞ n [k][/][2] M k (n) < ∞. (17) � � (3) The expectation value of the Bayes generalization loss is asymptotically equal to the widely applicable information criterion, E[B g L(n)] = E[WAIC(n)] + o( [1] (18) n [)][.] (Proof) In the case when q(x) is realizable by and singular for p(x|w), this lemma was proved in [Watanabe 10a]. The proof of Lemma 1 (1) is given in Theorem 1 of [Watanabe 10a]. The result in Lemma 1 (2) can be proved by just the same way as Lemma 6 in [Watanabe 10a]. The proof of Lemma 1 (3) is given in Theorem 2 and discussion of [Watanabe 10a]. In the case when q(x) is regular for and unrealizable by p(x|w), this lemma was proved in [Watanabe 10b]. These results were generalized in [Watanabe 10d]. (Q.E.D.) Remark. In ordinary learning machines, if the true distirbution is regular for or realizable by a learning machine, then the assumptions (1)-(4) are satisfied, result- ing that the results of this paper hold. If the true distribution is singular for and unrealizable by a learning machine, in some cases the assumptions (1)-(4) are satis- fied but in other cases not. If the true distribution is singular for and unrealizable by a learning machine and if the assumptions (1)-(4) are not satisfied, then the Bayes generalization and training errors may have the other asymptotic behaviors [Watanabe 10d]. 7"
}
|
{
"chunk_type": "body"
}
|
### 3 Bayes Cross Validation
##### In this section, we introduce cross validation in Bayes learning. For an arbitrary function f(w), the expectation value E_w^{(i)}[f(w)] with respect to the posterior distribution leaving X_i out is defined by
|
{
"id": "1004.2316",
"title": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable\n Information Criterion in Singular Learning Theory",
"categories": [
"cs.LG"
]
}
|
{
"Header 1": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable Information Criterion in Singular Learning Theory",
"Header 2": null,
"Header 3": "3 Bayes Cross Validation",
"Header 4": null,
"Header 5": "In this section, we introduce the cross validation in Bayes learning. For an arbitrary function f (w), the expectation value E w [(][i][)] [[][f] [(][w][)] using the pos-] terior distribution leaving X i out is defined by"
}
|
{
"chunk_type": "body"
}
|
##### E_w^{(i)}[f(w)] = [ ∫ f(w) ∏_{j≠i} p(X_j|w)^β ϕ(w) dw ] / [ ∫ ∏_{j≠i} p(X_j|w)^β ϕ(w) dw ],  (19)
|
{
"id": "1004.2316",
"title": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable\n Information Criterion in Singular Learning Theory",
"categories": [
"cs.LG"
]
}
|
{
"Header 1": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable Information Criterion in Singular Learning Theory",
"Header 2": null,
"Header 3": "3 Bayes Cross Validation",
"Header 4": null,
"Header 5": "E_w^{(i)}[f(w)] = [ ∫ f(w) ∏_{j≠i} p(X_j|w)^β ϕ(w) dw ] / [ ∫ ∏_{j≠i} p(X_j|w)^β ϕ(w) dw ],  (19)"
}
|
{
"chunk_type": "body"
}
|
##### where ∏_{j≠i} shows the product over j = 1, 2, ..., n which does not include j = i. The
|
{
"id": "1004.2316",
"title": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable\n Information Criterion in Singular Learning Theory",
"categories": [
"cs.LG"
]
}
|
{
"Header 1": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable Information Criterion in Singular Learning Theory",
"Header 2": null,
"Header 3": "3 Bayes Cross Validation",
"Header 4": null,
"Header 5": "where � shows the product for j = 1, 2, 3, .., n which does not include j = i. The"
}
|
{
"chunk_type": "body"
}
|
##### predictive distribution leaving X_i out is defined by p^{(i)}(x) = E_w^{(i)}[p(x|w)]. The log loss of p^{(i)}(x) when X_i is used as a testing sample is − log p^{(i)}(X_i) = − log E_w^{(i)}[p(X_i|w)]. Hence the log loss of the leave-one-out Bayes cross validation is defined by the empirical average of these,
|
{
"id": "1004.2316",
"title": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable\n Information Criterion in Singular Learning Theory",
"categories": [
"cs.LG"
]
}
|
{
"Header 1": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable Information Criterion in Singular Learning Theory",
"Header 2": null,
"Header 3": "3 Bayes Cross Validation",
"Header 4": null,
"Header 5": "predictive distribution leaving X i out is defined by p [(][i][)] (x) = E w [(][i][)] [[][p][(][x][|][w][)]][.] The log loss of p [(][i][)] (x) when X i is used as a testing sample is − log p [(][i][)] (X i ) = − log E w [(][i][)] [[][p][(][X] [i] [|][w][)]][.] Hence the log loss of the Bayes cross validation leaving one out is defined by the empirical average of them,"
}
|
{
"chunk_type": "body"
}
|
##### C_vL(n) = −(1/n) ∑_{i=1}^{n} log E_w^{(i)}[p(X_i|w)].  (20)
|
{
"id": "1004.2316",
"title": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable\n Information Criterion in Singular Learning Theory",
"categories": [
"cs.LG"
]
}
|
{
"Header 1": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable Information Criterion in Singular Learning Theory",
"Header 2": null,
"Header 3": "3 Bayes Cross Validation",
"Header 4": null,
"Header 5": "C_vL(n) = −(1/n) ∑_{i=1}^{n} log E_w^{(i)}[p(X_i|w)].  (20)"
}
|
{
"chunk_type": "body"
}
|
##### The random variable $C_vL(n)$ is referred to as the cross validation loss in this paper. Since $X_1, X_2, \ldots, X_n$ are independent training samples, it immediately follows that $E[C_vL(n)] = E[B_gL(n-1)]$. The two random variables $C_vL(n)$ and $B_gL(n-1)$ are different, $C_vL(n) \neq B_gL(n-1)$; however, their expectation values coincide by definition. By using eq.(18), it follows that $E[C_vL(n)] = E[\mathrm{WAIC}(n-1)] + o(\frac{1}{n})$. Therefore the three expectation values $E[C_vL(n)]$, $E[B_gL(n-1)]$, and $E[\mathrm{WAIC}(n-1)]$ are asymptotically equivalent. The main purpose of this paper is to clarify the asymptotic behaviors of the three random variables $C_vL(n)$, $B_gL(n)$, and $\mathrm{WAIC}(n)$ when $n$ is sufficiently large.
-----
|
{
"id": "1004.2316",
"title": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable\n Information Criterion in Singular Learning Theory",
"categories": [
"cs.LG"
]
}
|
{
"Header 1": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable Information Criterion in Singular Learning Theory",
"Header 2": null,
"Header 3": "3 Bayes Cross Validation",
"Header 4": null,
"Header 5": "The random variable C v L(n) is referred to as the cross validation loss in this paper. Since X 1, X 2, ..., X n are independent training samples, it immediately follows that E[C v L(n)] = E[B g L(n − 1)]. Two random variables C v L(n) and B g L(n − 1) are different, C v L(n) ̸= B g L(n − 1), however, their expectation values coincide with each other by the definition. By using eq.(18), it follows that E[C v L(n)] = E[WAIC(n − 1)] + o( [1] n [)][.] Therefore three expectation values E[C v L(n)], E[B g L(n − 1)], and E[WAIC(n − 1)] are asymptotically equivalent. The main purpose of this paper is to clarify the asymptotic behaviors of three random variables, C v L(n), B g L(n), and WAIC(n), when n is sufficiently large. 8"
}
|
{
"chunk_type": "body"
}
|
##### Remark. In practical applications, the Bayes generalization loss $B_gL(n)$ indicates the accuracy of Bayes estimation. However, in order to calculate $B_gL(n)$, we need the expectation value over a testing sample taken from the unknown true distribution, so $B_gL(n)$ cannot be obtained directly in practical applications. On the other hand, both the cross validation loss $C_vL(n)$ and the widely applicable information criterion $\mathrm{WAIC}(n)$ can be calculated using only training samples. Therefore, the cross validation loss and the widely applicable information criterion can be used for model selection and hyperparameter optimization. This is the reason why comparison of these random variables is an important issue in statistical learning theory.
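Since both quantities are computable from training samples alone, they can be estimated from posterior draws. A minimal numpy sketch (assumptions: inverse temperature $\beta = 1$, and a hypothetical matrix `logp` of pointwise log-likelihoods over posterior draws; an illustration, not the paper's code):

```python
import numpy as np

def waic(logp):
    """WAIC(n) = B_t L(n) + (beta / n) V(n) with beta = 1.

    logp : array of shape (S, n), logp[s, i] = log p(X_i | w_s)
    for S posterior draws w_s and n training samples.
    """
    S, n = logp.shape
    # Bayes training loss: B_t L(n) = -(1/n) sum_i log E_w[p(X_i|w)];
    # log-sum-exp over draws gives the log posterior-mean likelihood.
    log_mean_lik = np.logaddexp.reduce(logp, axis=0) - np.log(S)
    btl = -np.mean(log_mean_lik)
    # Functional variance: V(n) = sum_i { E_w[(log p)^2] - E_w[log p]^2 }.
    v = np.sum(np.var(logp, axis=0))
    return btl + v / n
```

When all draws agree on every point, $V(n) = 0$ and the criterion reduces to the Bayes training loss.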
|
{
"id": "1004.2316",
"title": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable\n Information Criterion in Singular Learning Theory",
"categories": [
"cs.LG"
]
}
|
{
"Header 1": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable Information Criterion in Singular Learning Theory",
"Header 2": null,
"Header 3": "3 Bayes Cross Validation",
"Header 4": null,
"Header 5": "Remark. In practical applications, the Bayes generalization loss B g L(n) indicates the accuracy of Bayes estimation. However, in order to calculate B g L(n), we need the expectation value over the testing sample taken from the unknown true distri- bution, resulting that we can not directly obtain B g L(n) in practical applications. On the other hand, both the cross validation loss C v L(n) and the widely applica- ble information criterion WAIC(n) can be calculated using only training samples. Therefore, the cross validation loss and the widely applicable information criterion can be used for model selection and hyperparameter optimization. This is the rea- son why comparison of these random variables is an important issue in statistical learning theory."
}
|
{
"chunk_type": "body"
}
|
### 4 Main Results
##### In this section, the main results of this paper are explained.
|
{
"id": "1004.2316",
"title": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable\n Information Criterion in Singular Learning Theory",
"categories": [
"cs.LG"
]
}
|
{
"Header 1": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable Information Criterion in Singular Learning Theory",
"Header 2": null,
"Header 3": "4 Main Results",
"Header 4": null,
"Header 5": "In this section, main results of this paper are explained."
}
|
{
"chunk_type": "body"
}
|
#### 4.1 Functional Cumulants
##### First, we define the functional cumulants. Definition. The generating function $F(\alpha)$ of functional cumulants is defined by
|
{
"id": "1004.2316",
"title": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable\n Information Criterion in Singular Learning Theory",
"categories": [
"cs.LG"
]
}
|
{
"Header 1": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable Information Criterion in Singular Learning Theory",
"Header 2": null,
"Header 3": "4 Main Results",
"Header 4": "4.1 Functional Cumulants",
"Header 5": "Firstly, we define functional cumulants. Definition. The generating function F (α) of functional cumulants is defined by"
}
|
{
"chunk_type": "body"
}
|
##### $F(\alpha) = \dfrac{1}{n}$
|
{
"id": "1004.2316",
"title": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable\n Information Criterion in Singular Learning Theory",
"categories": [
"cs.LG"
]
}
|
{
"Header 1": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable Information Criterion in Singular Learning Theory",
"Header 2": null,
"Header 3": "4 Main Results",
"Header 4": "4.1 Functional Cumulants",
"Header 5": "F (α) = [1] n"
}
|
{
"chunk_type": "body"
}
|
##### $\sum_{i=1}^{n} \log E_w[p(X_i|w)^{\alpha}]. \qquad (21)$
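Given posterior draws, eq.(21) can be evaluated with a log-sum-exp over the draws. A small numpy sketch (the $(S, n)$ matrix `logp` of pointwise log-likelihoods is an assumed input, not something defined in the paper):

```python
import numpy as np

def F(alpha, logp):
    """Generating function of functional cumulants, eq.(21):
    F(alpha) = (1/n) sum_i log E_w[p(X_i|w)^alpha].

    logp : array of shape (S, n), logp[s, i] = log p(X_i | w_s).
    """
    S, n = logp.shape
    # E_w[p^alpha] per point is the mean over draws of exp(alpha * logp);
    # computed stably as log-sum-exp minus log(S).
    return np.mean(np.logaddexp.reduce(alpha * logp, axis=0) - np.log(S))
```

By construction $F(0) = 0$ and $F(1) = -B_tL(n)$, matching the identities stated later in this section.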
|
{
"id": "1004.2316",
"title": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable\n Information Criterion in Singular Learning Theory",
"categories": [
"cs.LG"
]
}
|
{
"Header 1": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable Information Criterion in Singular Learning Theory",
"Header 2": null,
"Header 3": "4 Main Results",
"Header 4": "4.1 Functional Cumulants",
"Header 5": "� log E w [p(X i |w) [α] ]. (21)"
}
|
{
"chunk_type": "body"
}
|
##### The $k$th order functional cumulant $Y_k(n)$ ($k = 1, 2, 3, 4$) is defined by
|
{
"id": "1004.2316",
"title": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable\n Information Criterion in Singular Learning Theory",
"categories": [
"cs.LG"
]
}
|
{
"Header 1": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable Information Criterion in Singular Learning Theory",
"Header 2": null,
"Header 3": "4 Main Results",
"Header 4": "4.1 Functional Cumulants",
"Header 5": "The kth order functional cumulant Y k (n) (k = 1, 2, 3, 4) is defined by"
}
|
{
"chunk_type": "body"
}
|
##### Then, by definition, the following hold. For simple notation, we use
|
{
"id": "1004.2316",
"title": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable\n Information Criterion in Singular Learning Theory",
"categories": [
"cs.LG"
]
}
|
{
"Header 1": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable Information Criterion in Singular Learning Theory",
"Header 2": null,
"Header 3": "4 Main Results",
"Header 4": "4.1 Functional Cumulants",
"Header 5": "Then, by definition, For simple notation, we use"
}
|
{
"chunk_type": "body"
}
|
##### $Y_k(n) = \dfrac{d^k F}{d\alpha^k}(0). \qquad (22)$ $\quad F(0) = 0, \quad F(1) = -B_tL(n).$
|
{
"id": "1004.2316",
"title": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable\n Information Criterion in Singular Learning Theory",
"categories": [
"cs.LG"
]
}
|
{
"Header 1": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable Information Criterion in Singular Learning Theory",
"Header 2": null,
"Header 3": "4 Main Results",
"Header 4": "4.1 Functional Cumulants",
"Header 5": "Y k (n) = [d] [k] [F] (22) dα [k] [ (0)][.] F (0) = 0, F (1) = −B t L(n)."
}
|
{
"chunk_type": "body"
}
|
##### $\ell_k(X_i) = E_w[(\log p(X_i|w))^k] \quad (k = 1, 2, 3, 4).$
-----
|
{
"id": "1004.2316",
"title": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable\n Information Criterion in Singular Learning Theory",
"categories": [
"cs.LG"
]
}
|
{
"Header 1": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable Information Criterion in Singular Learning Theory",
"Header 2": null,
"Header 3": "4 Main Results",
"Header 4": "4.1 Functional Cumulants",
"Header 5": "ℓ k (X i ) = E w [(log p(X i |w)) [k] ] (k = 1, 2, 3, 4). 9"
}
|
{
"chunk_type": "body"
}
|
##### Lemma 2. The following hold: $Y_1(n) = \dfrac{1}{n} \sum_{i=1}^{n} \ell_1(X_i), \qquad (23)$
|
{
"id": "1004.2316",
"title": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable\n Information Criterion in Singular Learning Theory",
"categories": [
"cs.LG"
]
}
|
{
"Header 1": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable Information Criterion in Singular Learning Theory",
"Header 2": null,
"Header 3": "4 Main Results",
"Header 4": "4.1 Functional Cumulants",
"Header 5": "Lemma 2. The followings hold, 1 n Y 1 (n) = n � ℓ 1 (X i ), (23)"
}
|
{
"chunk_type": "body"
}
|
##### $Y_2(n) = \dfrac{1}{n} \sum_{i=1}^{n} \left( \ell_2(X_i) - \ell_1(X_i)^2 \right), \qquad (24)$
|
{
"id": "1004.2316",
"title": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable\n Information Criterion in Singular Learning Theory",
"categories": [
"cs.LG"
]
}
|
{
"Header 1": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable Information Criterion in Singular Learning Theory",
"Header 2": null,
"Header 3": "4 Main Results",
"Header 4": "4.1 Functional Cumulants",
"Header 5": "1 n Y 2 (n) = n ��ℓ 2 (X i ) − ℓ 1 (X i ) [2] [�], (24)"
}
|
{
"chunk_type": "body"
}
|
##### $Y_3(n) = \dfrac{1}{n} \sum_{i=1}^{n} \left( \ell_3(X_i) - 3\ell_2(X_i)\ell_1(X_i) + 2\ell_1(X_i)^3 \right), \qquad (25)$
|
{
"id": "1004.2316",
"title": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable\n Information Criterion in Singular Learning Theory",
"categories": [
"cs.LG"
]
}
|
{
"Header 1": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable Information Criterion in Singular Learning Theory",
"Header 2": null,
"Header 3": "4 Main Results",
"Header 4": "4.1 Functional Cumulants",
"Header 5": "1 n Y 3 (n) = n ��ℓ 3 (X i ) − 3ℓ 2 (X i )ℓ 1 (X i ) + 2ℓ 1 (X i ) [3] [�], (25)"
}
|
{
"chunk_type": "body"
}
|
##### $Y_4(n) = \dfrac{1}{n} \sum_{i=1}^{n} \Big( \ell_4(X_i) - 4\ell_3(X_i)\ell_1(X_i) - 3\ell_2(X_i)^2 \qquad (26)$
|
{
"id": "1004.2316",
"title": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable\n Information Criterion in Singular Learning Theory",
"categories": [
"cs.LG"
]
}
|
{
"Header 1": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable Information Criterion in Singular Learning Theory",
"Header 2": null,
"Header 3": "4 Main Results",
"Header 4": "4.1 Functional Cumulants",
"Header 5": "1 n Y 4 (n) = n ��ℓ 4 (X i ) − 4ℓ 3 (X i )ℓ 1 (X i ) − 3ℓ 2 (X i ) [2] (26)"
}
|
{
"chunk_type": "body"
}
|
##### $\quad{} + 12\ell_2(X_i)\ell_1(X_i)^2 - 6\ell_1(X_i)^4 \Big). \qquad (27)$
|
{
"id": "1004.2316",
"title": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable\n Information Criterion in Singular Learning Theory",
"categories": [
"cs.LG"
]
}
|
{
"Header 1": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable Information Criterion in Singular Learning Theory",
"Header 2": null,
"Header 3": "4 Main Results",
"Header 4": "4.1 Functional Cumulants",
"Header 5": "+12ℓ 2 (X i )ℓ 1 (X i ) [2] − 6ℓ 1 (X i ) [4] [�] . (27)"
}
|
{
"chunk_type": "body"
}
|
##### Moreover,
|
{
"id": "1004.2316",
"title": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable\n Information Criterion in Singular Learning Theory",
"categories": [
"cs.LG"
]
}
|
{
"Header 1": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable Information Criterion in Singular Learning Theory",
"Header 2": null,
"Header 3": "4 Main Results",
"Header 4": "4.1 Functional Cumulants",
"Header 5": "Moreover, In other words,"
}
|
{
"chunk_type": "body"
}
|
##### $Y_k(n) = O_p\!\left(\dfrac{1}{n^{k/2}}\right) \quad (k = 2, 3, 4)$; in other words, $\limsup_{n \to \infty} E[\, n^{k/2} |Y_k(n)| \,] < \infty \quad (k = 2, 3, 4). \qquad (28)$
|
{
"id": "1004.2316",
"title": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable\n Information Criterion in Singular Learning Theory",
"categories": [
"cs.LG"
]
}
|
{
"Header 1": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable Information Criterion in Singular Learning Theory",
"Header 2": null,
"Header 3": "4 Main Results",
"Header 4": "4.1 Functional Cumulants",
"Header 5": "Y k (n) = O p ( [1] (k = 2, 3, 4). n [k/][2] [)] limsup n→∞ E[n [k][/][2] |Y k (n)|] < ∞ (k = 2, 3, 4). (28)"
}
|
{
"chunk_type": "body"
}
|
##### (Proof) First, we prove eqs.(23)-(27). Let us define $g_i(\alpha) = E_w[p(X_i|w)^{\alpha}]$. Then $g_i(0) = 1$, $g_i^{(k)}(0) = \ell_k(X_i)$ ($k = 1, 2, 3, 4$), and
|
{
"id": "1004.2316",
"title": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable\n Information Criterion in Singular Learning Theory",
"categories": [
"cs.LG"
]
}
|
{
"Header 1": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable Information Criterion in Singular Learning Theory",
"Header 2": null,
"Header 3": "4 Main Results",
"Header 4": "4.1 Functional Cumulants",
"Header 5": "(Proof) Firstly, we prove eqs.(23)-(27). Let us define g i (α) = E w [p(X i |w) [α] ]. Then g i (0) = 1, g i [(][k][)] [(0) =][ ℓ] [k] [(][X] [i] [)] (k = 1, 2, 3, 4), and"
}
|
{
"chunk_type": "body"
}
|
##### $F(\alpha) = \dfrac{1}{n} \sum_{i=1}^{n} \log g_i(\alpha).$
|
{
"id": "1004.2316",
"title": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable\n Information Criterion in Singular Learning Theory",
"categories": [
"cs.LG"
]
}
|
{
"Header 1": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable Information Criterion in Singular Learning Theory",
"Header 2": null,
"Header 3": "4 Main Results",
"Header 4": "4.1 Functional Cumulants",
"Header 5": "F (α) = n [1] � log g i (α)."
}
|
{
"chunk_type": "body"
}
|
##### For an arbitrary natural number $k$,
|
{
"id": "1004.2316",
"title": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable\n Information Criterion in Singular Learning Theory",
"categories": [
"cs.LG"
]
}
|
{
"Header 1": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable Information Criterion in Singular Learning Theory",
"Header 2": null,
"Header 3": "4 Main Results",
"Header 4": "4.1 Functional Cumulants",
"Header 5": "For arbitrary natural number k,"
}
|
{
"chunk_type": "body"
}
|
##### $\dfrac{g_i^{(k)}(\alpha)}{g_i(\alpha)}$
|
{
"id": "1004.2316",
"title": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable\n Information Criterion in Singular Learning Theory",
"categories": [
"cs.LG"
]
}
|
{
"Header 1": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable Information Criterion in Singular Learning Theory",
"Header 2": null,
"Header 3": "4 Main Results",
"Header 4": "4.1 Functional Cumulants",
"Header 5": "g i (α) (k) � g i (α)"
}
|
{
"chunk_type": "body"
}
|
##### $\left( \dfrac{g_i^{(k)}(\alpha)}{g_i(\alpha)} \right)' = \dfrac{g_i^{(k+1)}(\alpha)}{g_i(\alpha)} - \dfrac{g_i^{(k)}(\alpha)}{g_i(\alpha)} \cdot \dfrac{g_i'(\alpha)}{g_i(\alpha)}$
|
{
"id": "1004.2316",
"title": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable\n Information Criterion in Singular Learning Theory",
"categories": [
"cs.LG"
]
}
|
{
"Header 1": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable Information Criterion in Singular Learning Theory",
"Header 2": null,
"Header 3": "4 Main Results",
"Header 4": "4.1 Functional Cumulants",
"Header 5": "′ = [g] [i] [(][α][)] [(][k][+1)] − g i (α) (k) � g i (α) � g i (α)"
}
|
{
"chunk_type": "body"
}
|
##### $\dfrac{g_i'(\alpha)}{g_i(\alpha)}$
|
{
"id": "1004.2316",
"title": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable\n Information Criterion in Singular Learning Theory",
"categories": [
"cs.LG"
]
}
|
{
"Header 1": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable Information Criterion in Singular Learning Theory",
"Header 2": null,
"Header 3": "4 Main Results",
"Header 4": "4.1 Functional Cumulants",
"Header 5": "g i (α) �� g i (α)"
}
|
{
"chunk_type": "body"
}
|
##### .
|
{
"id": "1004.2316",
"title": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable\n Information Criterion in Singular Learning Theory",
"categories": [
"cs.LG"
]
}
|
{
"Header 1": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable Information Criterion in Singular Learning Theory",
"Header 2": null,
"Header 3": "4 Main Results",
"Header 4": "4.1 Functional Cumulants",
"Header 5": ". �"
}
|
{
"chunk_type": "body"
}
|
##### By using this relation recursively, eqs.(23)-(27) are derived. Second, we show eq.(28). The random variables $Y_k(n)$ ($k = 2, 3, 4$) are invariant under the transform $\log p(X_i|w) \mapsto \log p(X_i|w) + c(X_i), \qquad (29)$ for arbitrary $c(X_i)$. In particular, by choosing $c(X_i) = -\log p_0(X_i)$, we can show that $Y_k(n)$ ($k = 2, 3, 4$) are invariant under the replacement $\log p(X_i|w) \mapsto f(X_i, w)$.
-----
|
{
"id": "1004.2316",
"title": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable\n Information Criterion in Singular Learning Theory",
"categories": [
"cs.LG"
]
}
|
{
"Header 1": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable Information Criterion in Singular Learning Theory",
"Header 2": null,
"Header 3": "4 Main Results",
"Header 4": "4.1 Functional Cumulants",
"Header 5": "By using this relation recursively, eqs.(23)-(27) are derived. Secondly, we show eq.(28). The random variables Y k (n) (k = 2, 3, 4) are invariant under the transform, log p(X i |w) �→ log p(X i |w) + c(X i ), (29) for arbitrary c(X i ). In particular, by choosing c(X i ) = − log p 0 (X i ),@we can show that Y k (n) (k = 2, 3, 4) are invariant by replacing log p(X i |w) �→ f (X i, w). 10"
}
|
{
"chunk_type": "body"
}
|
##### Then by using eq.(17), eq.(28) is obtained. (Q.E.D.) Note that $Y_k(n)$ ($k = 1, 2, 3, 4$) can be calculated using only training samples and a statistical model, without any information about the true distribution $q(x)$. Moreover, by definition, $nY_2(n) = V(n)$. By using eq.(29) with $c(X_i) = -E_w[\log p(X_i|w)]$ and the normalized function defined by $\ell_k^*(X_i) = E_w[(\log p(X_i|w) - c(X_i))^k]$, it follows that
|
{
"id": "1004.2316",
"title": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable\n Information Criterion in Singular Learning Theory",
"categories": [
"cs.LG"
]
}
|
{
"Header 1": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable Information Criterion in Singular Learning Theory",
"Header 2": null,
"Header 3": "4 Main Results",
"Header 4": "4.1 Functional Cumulants",
"Header 5": "Then by using eq.(17), eq.(28) is obtained. (Q.E.D.) Note that Y k (n) (k = 1, 2, 3, 4) can be calculated using only training samples and a statistical model, without any information about the true distribution q(x). More- over, by definition, nY 2 (n) = V (n). By using eq.(29) with c(X i ) = −E w [log p(X i |w)] and by using the normalized func- tion defined by ℓ [∗] k [(][X] [i] [) =][ E] [w] [[(log][ p][(][X] [i] [|][w][)][ −] [c][(][X] [i] [))] [k] []][,] It follows that"
}
|
{
"chunk_type": "body"
}
|
##### $Y_2(n) = \dfrac{1}{n} \sum_{i=1}^{n} \ell_2^*(X_i), \quad Y_3(n) = \dfrac{1}{n} \sum_{i=1}^{n} \ell_3^*(X_i), \quad Y_4(n) = \dfrac{1}{n} \sum_{i=1}^{n} \left( \ell_4^*(X_i) - 3\,\ell_2^*(X_i)^2 \right).$
|
{
"id": "1004.2316",
"title": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable\n Information Criterion in Singular Learning Theory",
"categories": [
"cs.LG"
]
}
|
{
"Header 1": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable Information Criterion in Singular Learning Theory",
"Header 2": null,
"Header 3": "4 Main Results",
"Header 4": "4.1 Functional Cumulants",
"Header 5": "1 Y 2 (n) = n 1 Y 3 (n) = n 1 Y 4 (n) = n"
}
|
{
"chunk_type": "body"
}
|
##### $\sum_{i=1}^{n} \ell_2^*(X_i),$
|
{
"id": "1004.2316",
"title": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable\n Information Criterion in Singular Learning Theory",
"categories": [
"cs.LG"
]
}
|
{
"Header 1": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable Information Criterion in Singular Learning Theory",
"Header 2": null,
"Header 3": "4 Main Results",
"Header 4": "4.1 Functional Cumulants",
"Header 5": "� ℓ [∗] 2 [(][X] [i] [)][,]"
}
|
{
"chunk_type": "body"
}
|
##### $\sum_{i=1}^{n} \ell_3^*(X_i),$
|
{
"id": "1004.2316",
"title": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable\n Information Criterion in Singular Learning Theory",
"categories": [
"cs.LG"
]
}
|
{
"Header 1": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable Information Criterion in Singular Learning Theory",
"Header 2": null,
"Header 3": "4 Main Results",
"Header 4": "4.1 Functional Cumulants",
"Header 5": "� ℓ [∗] 3 [(][X] [i] [)][,]"
}
|
{
"chunk_type": "body"
}
|
##### $\sum_{i=1}^{n} \left( \ell_4^*(X_i) - 3\,\ell_2^*(X_i)^2 \right).$
|
{
"id": "1004.2316",
"title": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable\n Information Criterion in Singular Learning Theory",
"categories": [
"cs.LG"
]
}
|
{
"Header 1": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable Information Criterion in Singular Learning Theory",
"Header 2": null,
"Header 3": "4 Main Results",
"Header 4": "4.1 Functional Cumulants",
"Header 5": "��ℓ [∗] 4 [(][X] [i] [)][ −] [3][ℓ] [∗] 2 [(][X] [i] [)] [2] [�] ."
}
|
{
"chunk_type": "body"
}
|
##### These formulas may be useful in practical applications.
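As one concrete instance of such a practical use, the centered-moment formulas above translate directly into code. A hedged numpy sketch (the $(S, n)$ matrix `logp` of pointwise log-likelihoods over posterior draws is a hypothetical input, not from the paper):

```python
import numpy as np

def functional_cumulants(logp):
    """Y_2, Y_3, Y_4 via the normalized moments l*_k(X_i).

    logp : array of shape (S, n), logp[s, i] = log p(X_i | w_s).
    Uses c(X_i) = -E_w[log p(X_i|w)], so l*_1 = 0 and the cumulant
    formulas reduce to the central-moment forms above.
    """
    d = logp - logp.mean(axis=0)      # log p - E_w[log p], per point
    l2 = np.mean(d**2, axis=0)        # l*_2(X_i)
    l3 = np.mean(d**3, axis=0)        # l*_3(X_i)
    l4 = np.mean(d**4, axis=0)        # l*_4(X_i)
    y2 = np.mean(l2)
    y3 = np.mean(l3)
    y4 = np.mean(l4 - 3.0 * l2**2)
    return y2, y3, y4
```

Here $nY_2(n)$ recovers the functional variance $V(n)$ noted above.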
|
{
"id": "1004.2316",
"title": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable\n Information Criterion in Singular Learning Theory",
"categories": [
"cs.LG"
]
}
|
{
"Header 1": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable Information Criterion in Singular Learning Theory",
"Header 2": null,
"Header 3": "4 Main Results",
"Header 4": "4.1 Functional Cumulants",
"Header 5": "These formulas may be useful in practical applications."
}
|
{
"chunk_type": "body"
}
|
#### 4.2 Bayes Cross Validation and Widely Applicable Information Criterion
##### We show the asymptotic equivalence of the cross validation loss $C_vL(n)$ and the widely applicable information criterion $\mathrm{WAIC}(n)$. Theorem 1. For arbitrary $0 < \beta < \infty$, the cross validation loss $C_vL(n)$ and the widely applicable information criterion $\mathrm{WAIC}(n)$ are respectively given by $C_vL(n) = -Y_1(n) + \dfrac{2\beta - 1}{2}\, Y_2(n)$
|
{
"id": "1004.2316",
"title": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable\n Information Criterion in Singular Learning Theory",
"categories": [
"cs.LG"
]
}
|
{
"Header 1": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable Information Criterion in Singular Learning Theory",
"Header 2": null,
"Header 3": "4 Main Results",
"Header 4": "4.2 Bayes Cross Validation and Widely Applicable Informa- tion Criterion",
"Header 5": "We show the asymptotic equivalence of the cross validation loss C v L(n) and the widely applicable information criterion WAIC(n). Theorem 1. For arbitrary 0 < β < ∞, the cross validation loss C v L(n) and the widely applicable information criterion WAIC(n) are respectively given by C v L(n) = −Y 1 (n) + 2β − 1 Y 2 (n) � 2 �"
}
|
{
"chunk_type": "body"
}
|
##### $-\, \dfrac{3\beta^2 - 3\beta + 1}{6}\, Y_3(n) + O_p\!\left(\dfrac{1}{n^2}\right),$ $\qquad \mathrm{WAIC}(n) = -Y_1(n) + \dfrac{2\beta - 1}{2}\, Y_2(n) - \dfrac{1}{6}\, Y_3(n) + O_p\!\left(\dfrac{1}{n^2}\right).$ (Proof) First, we study $C_vL(n)$. From the definitions of $E_w[\,\cdot\,]$ and $E_w^{(i)}[\,\cdot\,]$, for an arbitrary function $f(w)$, $E_w^{(i)}[f(w)] = \dfrac{E_w[f(w)\, p(X_i|w)^{-\beta}]}{E_w[p(X_i|w)^{-\beta}]}. \qquad (30)$
-----
|
{
"id": "1004.2316",
"title": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable\n Information Criterion in Singular Learning Theory",
"categories": [
"cs.LG"
]
}
|
{
"Header 1": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable Information Criterion in Singular Learning Theory",
"Header 2": null,
"Header 3": "4 Main Results",
"Header 4": "4.2 Bayes Cross Validation and Widely Applicable Informa- tion Criterion",
"Header 5": "− 3β − 3β + 1 Y 3 (n) + O p ( [1] � 6 � n [2] [)][,] WAIC(n) = −Y 1 (n) + 2β − 1 Y 2 (n) � 2 � − [1] 6 [Y] [3] [(][n][) +][ O] [p] [( 1] n [2] [)][.] (Proof) Firstly, we study C v L(n). From the definition of E w [ ] and E w [(][i][)] [[] ], for arbitrary function f (w), E [(] w [i][)] [[][f] [(][w][)] =][ E] [w] [[][f] [(][w][)][p][(][X] [i] [|][w][)] [−][β] [ ]] . (30) E w [p(X i |w) [−][β] ] 11"
}
|
{
"chunk_type": "body"
}
|
##### Therefore, by the definition of the cross validation loss, eq.(20),
|
{
"id": "1004.2316",
"title": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable\n Information Criterion in Singular Learning Theory",
"categories": [
"cs.LG"
]
}
|
{
"Header 1": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable Information Criterion in Singular Learning Theory",
"Header 2": null,
"Header 3": "4 Main Results",
"Header 4": "4.2 Bayes Cross Validation and Widely Applicable Informa- tion Criterion",
"Header 5": "Therefore, by the definition of the cross validation loss, eq.(20),"
}
|
{
"chunk_type": "body"
}
|
##### $C_vL(n) = -\dfrac{1}{n}$
|
{
"id": "1004.2316",
"title": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable\n Information Criterion in Singular Learning Theory",
"categories": [
"cs.LG"
]
}
|
{
"Header 1": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable Information Criterion in Singular Learning Theory",
"Header 2": null,
"Header 3": "4 Main Results",
"Header 4": "4.2 Bayes Cross Validation and Widely Applicable Informa- tion Criterion",
"Header 5": "C v L(n) = − [1] n"
}
|
{
"chunk_type": "body"
}
|
##### $\sum_{i=1}^{n} \log \dfrac{E_w[p(X_i|w)^{1-\beta}]}{E_w[p(X_i|w)^{-\beta}]}.$
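With $\beta = 1$ the numerator $E_w[p(X_i|w)^{1-\beta}]$ is identically $1$ and the expression reduces to a harmonic-mean form. A numpy sketch of the displayed formula (the draw matrix `logp` is an assumed input; illustrative only):

```python
import numpy as np

def cv_loss(logp, beta=1.0):
    """C_v L(n) = -(1/n) sum_i log( E_w[p(X_i|w)^(1-beta)]
                                    / E_w[p(X_i|w)^(-beta)] ).

    logp : array of shape (S, n), logp[s, i] = log p(X_i | w_s).
    Both expectations are posterior means over the S draws,
    evaluated stably with log-sum-exp.
    """
    S, n = logp.shape
    log_num = np.logaddexp.reduce((1.0 - beta) * logp, axis=0) - np.log(S)
    log_den = np.logaddexp.reduce(-beta * logp, axis=0) - np.log(S)
    return -np.mean(log_num - log_den)
```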
|
{
"id": "1004.2316",
"title": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable\n Information Criterion in Singular Learning Theory",
"categories": [
"cs.LG"
]
}
|
{
"Header 1": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable Information Criterion in Singular Learning Theory",
"Header 2": null,
"Header 3": "4 Main Results",
"Header 4": "4.2 Bayes Cross Validation and Widely Applicable Informa- tion Criterion",
"Header 5": "� i=1n log [E] E [w] w [[] [ [ p] p [(] ( [X] X [i] i [|] | [w] w [)] ) [1][−][−][β][β] [ ]] ] [.]"
}
|
{
"chunk_type": "body"
}
|
##### By using the generating function of functional cumulants $F(\alpha)$, $C_vL(n) = F(-\beta) - F(1-\beta).$ Then by using Lemma 1 (2), for each $k = 2, 3, 4$ there exists a constant $C_k > 0$ such that $|F^{(k)}(\alpha)| \le C_k M_k(n)$ $(|\alpha| < 1 + \beta)$, where $C_1 = 1$, $C_2 = 2$, $C_3 = 6$, $C_4 = 26$. Therefore, $|F^{(k)}(\alpha)| = O_p\!\left(\dfrac{1}{n^{k/2}}\right). \qquad (31)$ By using the Taylor expansion of $F(\alpha)$ around $\alpha = 0$, there exist $\beta^*, \beta^{**}$ $(|\beta^*|, |\beta^{**}| < 1 + \beta)$ such that $F(-\beta) = F(0) - \beta F'(0) + \dfrac{\beta^2}{2} F''(0) - \dfrac{\beta^3}{6} F^{(3)}(0) + \dfrac{\beta^4}{24} F^{(4)}(\beta^*),$ $\quad F(1-\beta) = F(0) + (1-\beta) F'(0) + \dfrac{(1-\beta)^2}{2} F''(0) + \dfrac{(1-\beta)^3}{6} F^{(3)}(0) + \dfrac{(1-\beta)^4}{24} F^{(4)}(\beta^{**}).$ By using $F(0) = 0$ and eq.(31), it follows that $C_vL(n) = -F'(0) + \dfrac{2\beta - 1}{2} F''(0) - \dfrac{3\beta^2 - 3\beta + 1}{6} F^{(3)}(0) + O_p\!\left(\dfrac{1}{n^2}\right).$ Hence we obtained the first half of the theorem. For the latter half, by the definitions of $\mathrm{WAIC}(n)$, the Bayes training loss, and the functional variance, $\mathrm{WAIC}(n) = B_tL(n) + (\beta/n)V(n), \qquad (32)$ $\quad B_tL(n) = -F(1), \qquad (33)$ $\quad V(n) = nF''(0). \qquad (34)$ Therefore $\mathrm{WAIC}(n) = -F(1) + \beta F''(0).$
-----
|
{
"id": "1004.2316",
"title": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable\n Information Criterion in Singular Learning Theory",
"categories": [
"cs.LG"
]
}
|
{
"Header 1": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable Information Criterion in Singular Learning Theory",
"Header 2": null,
"Header 3": "4 Main Results",
"Header 4": "4.2 Bayes Cross Validation and Widely Applicable Informa- tion Criterion",
"Header 5": "By using the generating function of functional cumulants F (α), C v L(n) = F (−β) − F (1 − β). Then by using Lemma 1 (2) for each k = 2, 3, 4, there exists a constant C k > 0 such that |F [(][k][)] (α)| ≤ C k M k (n) (|α| < 1 + β), where C 1 = 1, C 2 = 2, C 3 = 6, C 4 = 26. Therefore, |F [(][k][)] (α)| = O p ( [1] (31) n [k/][2] [)][.] By using Taylor expansion of F (α) among α = 0, there exist β [∗], β [∗∗] (|β [∗] |, |β [∗∗] | < 1 + β) such that F (−β) = F (0) − βF [′] (0) + [β] [2] 2 [F] [ ′′] [(0)] − [β] [3] 6 [F] [ (3)] [(0) +][ β] 24 [4] [F] [ (4)] [(][β] [∗] [)][,] F (1 − β) = F (0) + (1 − β)F [′] (0) + [(1][ −] [β][)] [2] F [′′] (0) 2 + [(1][ −] [β][)] [3] F [(3)] (0) + [(1][ −] [β][)] [4] F [(4)] (β [∗∗] ). 6 24 By using F (0) = 0 and eq.(31), it follows that C v L(n) = −F [′] (0) + [2][β][ −] [1] F [′′] (0) 2 − [3][β] [2] [ −] [3][β][ + 1] F [(3)] (0) + O p ( [1] 6 n [2] [)][.] Hence we obtained the first half of the theorem. For the latter half, by the definitions of WAIC(n), Bayes training loss, and the functional variance, WAIC(n) = B t L(n) + (β/n)V (n), (32) B t L(n) = −F (1), (33) V (n) = nF [′′] (0). (34) Therefore WAIC(n) = −F (1) + βF [′′] (0). 12"
}
|
{
"chunk_type": "body"
}
|
##### By using the Taylor expansion of $F(1)$, $\mathrm{WAIC}(n) = -F'(0) + \dfrac{2\beta - 1}{2}\, F''(0) - \dfrac{1}{6}\, F^{(3)}(0) + O_p\!\left(\dfrac{1}{n^2}\right),$ which completes the proof. (Q.E.D.) From the above theorem, we obtain the following corollary. Corollary 1. For arbitrary $0 < \beta < \infty$, the cross validation loss $C_vL(n)$ and the widely applicable information criterion $\mathrm{WAIC}(n)$ satisfy $C_vL(n) = \mathrm{WAIC}(n) + O_p\!\left(\dfrac{1}{n^{3/2}}\right).$ In particular, for $\beta = 1$, $C_vL(n) = \mathrm{WAIC}(n) + O_p\!\left(\dfrac{1}{n^2}\right).$ More precisely, the difference between the cross validation loss and the widely applicable information criterion is given by
|
{
"id": "1004.2316",
"title": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable\n Information Criterion in Singular Learning Theory",
"categories": [
"cs.LG"
]
}
|
{
"Header 1": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable Information Criterion in Singular Learning Theory",
"Header 2": null,
"Header 3": "4 Main Results",
"Header 4": "4.2 Bayes Cross Validation and Widely Applicable Informa- tion Criterion",
"Header 5": "By using Taylor expansion of F (1), WAIC(n) = −F [′] (0) + [2][β][ −] [1] F [′′] (0) − [1] 2 6 [F] [ (3)] [(0) +][ O] [p] [( 1] n [2] [)][,] which completes the proof. (Q.E.D.) From the above theorem, we obtain the following corollary. Corollary 1. For arbitrary 0 < β < ∞, the cross validation loss C v L(n) and the widely applicable information criterion WAIC(n) satisfies C v L(n) = WAIC(n) + O p ( [1] n [3][/][2] [)][.] In particular, for β = 1, C v L(n) = WAIC(n) + O p ( [1] n [2] [)][.] More precisely, the difference between the cross validation loss and the widely applicable information criterion is given by"
}
|
{
"chunk_type": "body"
}
|
##### $C_vL(n) - \mathrm{WAIC}(n) \cong \dfrac{\beta - \beta^2}{2}$
{
"id": "1004.2316",
"title": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable\n Information Criterion in Singular Learning Theory",
"categories": [
"cs.LG"
]
}
|
{
"Header 1": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable Information Criterion in Singular Learning Theory",
"Header 2": null,
"Header 3": "4 Main Results",
"Header 4": "4.2 Bayes Cross Validation and Widely Applicable Informa- tion Criterion",
"Header 5": "β − β C v L(n) − WAIC(n) [∼] = � 2"
}
|
{
"chunk_type": "body"
}
|
##### $Y_3(n).$
|
{
"id": "1004.2316",
"title": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable\n Information Criterion in Singular Learning Theory",
"categories": [
"cs.LG"
]
}
|
{
"Header 1": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable Information Criterion in Singular Learning Theory",
"Header 2": null,
"Header 3": "4 Main Results",
"Header 4": "4.2 Bayes Cross Validation and Widely Applicable Informa- tion Criterion",
"Header 5": "Y 3 (n). �"
}
|
{
"chunk_type": "body"
}
|
##### If $\beta = 1$, $C_v L(n) - \mathrm{WAIC}(n) \cong \dfrac{1}{12} Y_4(n)$.
|
{
"id": "1004.2316",
"title": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable\n Information Criterion in Singular Learning Theory",
"categories": [
"cs.LG"
]
}
|
{
"Header 1": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable Information Criterion in Singular Learning Theory",
"Header 2": null,
"Header 3": "4 Main Results",
"Header 4": "4.2 Bayes Cross Validation and Widely Applicable Informa- tion Criterion",
"Header 5": "If β = 1, C v L(n) − WAIC(n) [∼] = [1] 12 [Y] [4] [(][n][)][.]"
}
|
{
"chunk_type": "body"
}
|
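The $O_p(1/n^2)$ behavior of the gap at $\beta = 1$ can also be seen empirically. The sketch below (added for illustration, not from the paper) averages $|C_v L(n) - \mathrm{WAIC}(n)|$ over replicated datasets from a conjugate model, $x_i \sim N(\mu, 1)$ with prior $\mu \sim N(0, 1)$, at two sample sizes; the averaged gap should shrink sharply, roughly like $1/n^2$. All names and the model choice are illustrative assumptions.

```python
import numpy as np

def cv_waic_gap(n, rng):
    """|C_vL(n) - WAIC(n)| for one dataset from the toy conjugate model
    x_i ~ N(0, 1), prior mu ~ N(0, 1), beta = 1 (closed-form posterior)."""
    x = rng.normal(0.0, 1.0, size=n)

    def pred(xs, data):
        k = len(data)
        m = data.sum() / (k + 1.0)
        var = 1.0 + 1.0 / (k + 1.0)
        return -0.5 * np.log(2 * np.pi * var) - (xs - m) ** 2 / (2 * var)

    cv = np.mean([-pred(x[i], np.delete(x, i)) for i in range(n)])
    m, v = x.sum() / (n + 1.0), 1.0 / (n + 1.0)
    waic = -pred(x, x).mean() + (v ** 2 / 2 + v * (x - m) ** 2).sum() / n
    return abs(cv - waic)

rng = np.random.default_rng(1)
gaps = {n: np.mean([cv_waic_gap(n, rng) for _ in range(20)])
        for n in (50, 200)}
print(gaps)  # the averaged gap should be much smaller at n = 200
```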
#### 4.3 Generalization Error and Cross Validation Error
##### In the previous subsection, we have shown that the cross validation loss is asymptotically equivalent to the widely applicable information criterion. In this section, let us compare the Bayes generalization error $B_g(n)$ defined by eq.(10) and the cross validation error $C_v(n)$, which is defined by $C_v(n) = C_v L(n) - L_n$. (35) We need a mathematical concept, the log canonical threshold. Definition. The zeta function $\zeta(z)$ ($\mathrm{Re}(z) > 0$) of statistical learning is defined by $\zeta(z) = \int K(w)^z \varphi(w)\,dw$, where $K(w) = E_X[f(X, w)]$
-----
|
{
"id": "1004.2316",
"title": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable\n Information Criterion in Singular Learning Theory",
"categories": [
"cs.LG"
]
}
|
{
"Header 1": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable Information Criterion in Singular Learning Theory",
"Header 2": null,
"Header 3": "4 Main Results",
"Header 4": "4.3 Generalization Error and Cross Validation Error",
"Header 5": "In the previous subsection, we have shown that the cross validation loss is asymp- totically equivalent to the widely applicable information criterion. In this section, let us compare the Bayes generalization error B g (n) defined by eq.(10) and the cross validation error C v (n), which are defined by C v (n) = C v L(n) − L n . (35) We need a mathematical concept, the log canonical threshold. Definition. The zeta function ζ(z) (Re(z) > 0) of statistical learning is defined by ζ(z) = K(w) [z] ϕ(w)dw, � where K(w) = E X [f (X, w)] 13"
}
|
{
"chunk_type": "body"
}
|
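To make the definition of $\zeta(z)$ concrete, consider a one-parameter toy example (an illustration added here, not from the paper): $K(w) = w^2$ with prior $\varphi$ uniform on $[0, 1]$. Then $\zeta(z) = \int_0^1 w^{2z}\,dw = 1/(2z + 1)$, a meromorphic function whose single (largest) pole is $z = -1/2$, giving $\lambda = 1/2 = d/2$ with $d = 1$, as expected for a regular model. The sketch checks the integral formula numerically at a few points in $\mathrm{Re}(z) > 0$.

```python
import numpy as np

# Toy zeta function of statistical learning (illustrative example):
#   K(w) = w**2, prior uniform on [0, 1]
#   zeta(z) = integral_0^1 (w**2)**z dw = 1/(2z + 1)
# Largest pole: z = -1/2, so the log canonical threshold is lambda = 1/2.

def zeta_numeric(z, N=200_000):
    """Midpoint-rule approximation of integral_0^1 (w**2)**z dw, Re(z) > 0."""
    w = (np.arange(N) + 0.5) / N
    return np.mean((w ** 2) ** z)

def zeta_analytic(z):
    return 1.0 / (2.0 * z + 1.0)

for z in (0.5, 1.0, 2.0):
    print(z, zeta_numeric(z), zeta_analytic(z))

lam = 0.5  # minus the largest pole of zeta(z): the log canonical threshold
```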
##### is a nonnegative function. It is well known that $\zeta(z)$ can be analytically continued to the unique meromorphic function on the entire complex plane $\mathbb{C}$. All poles of $\zeta(z)$ are real, negative, rational numbers. The largest pole is denoted by $(-\lambda) = $ maximum pole of $\zeta(z)$. (36) Then $\lambda$ is called the log canonical threshold. The singular fluctuation is defined by $\nu = \lim_{n\to\infty} \frac{\beta}{2} E[V(n)]$. (37) Note that both the log canonical threshold and the singular fluctuation are birational invariants. In other words, they are determined by the algebraic geometrical structure of the statistical model. The following lemma was proved in [Watanabe 10a, Watanabe 10b, Watanabe 10d]. Lemma 3. The following convergences hold: $\lim_{n\to\infty} n E[B_g(n)] = \frac{\lambda - \nu}{\beta} + \nu$, (38) $\lim_{n\to\infty} n E[B_t(n)] = \frac{\lambda - \nu}{\beta} - \nu$. (39) Moreover, the convergence in probability $n(B_g(n) + B_t(n)) + V(n) \to \frac{2\lambda}{\beta}$ (40) holds. (Proof) In the case when $q(x)$ is realizable by and singular for $p(x|w)$, the two equations (38) and (39) were proved in Corollary 3 of [Watanabe 10a]. Equation (40) was given in Corollary 2 of [Watanabe 10a]. In the case when $q(x)$ is regular for $p(x|w)$, these results were shown in [Watanabe 10b] and generalized by [Watanabe 10d]. (Q.E.D.) Examples. If $q(x)$ is regular for and realizable by $p(x|w)$, then $\lambda = \nu = d/2$, where $d$ is the dimension of the parameter space. If $q(x)$ is regular for and unrealizable by $p(x|w)$, then $\lambda$ and $\nu$ are given by [Watanabe 10b]. If $q(x)$ is singular for and realizable by $p(x|w)$, then $\lambda$ for several models is obtained by resolution of singularities [Aoyagi & Watanabe 05, Rusakov & Geiger 05, Yamazaki & Watanabe 03, Lin 10, Zwiernik 10]. If $q(x)$ is singular for and unrealizable by $p(x|w)$, then they are still unknown constants. We have the following theorem. Theorem 2. The following equation holds: $\lim_{n\to\infty} n E[C_v(n)] = \frac{\lambda - \nu}{\beta} + \nu$,
-----
|
{
"id": "1004.2316",
"title": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable\n Information Criterion in Singular Learning Theory",
"categories": [
"cs.LG"
]
}
|
{
"Header 1": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable Information Criterion in Singular Learning Theory",
"Header 2": null,
"Header 3": "4 Main Results",
"Header 4": "4.3 Generalization Error and Cross Validation Error",
"Header 5": "is a nonnegative function. It is well known that ζ(z) can be analytically continued to the unique meromorphic function on the entire complex plane C. All poles of ζ(z) are real, negative, and rational numbers. The largest pole is denoted by (−λ) = maximum pole of ζ(z). (36) Then λ is called the log canonical threshold. The singular fluctuation is defined by β ν = lim (37) n→∞ 2 [E][[][V][ (][n][)]][.] Note that both the log canonical threshold and the singular fluctuation are bira- tional invariants. In other words, they are determined by the algebraic geometrical structure of the statistical model. The following lemma was proved [Watanabe 10a, Watanabe 10b, Watanabe 10d]. Lemma 3. The following convergences hold, λ − ν lim = + ν, (38) n→∞ [n][E][[][B] [g] [(][n][)]] β λ − ν lim = − ν, (39) n→∞ [n][E][[][B] [t] [(][n][)]] β Moreover, convergence in probability n(B g (n) + B t (n)) + V (n) → [2] β [λ] (40) holds. (Proof) In the case when q(x) is realizable by and singular for p(x|w), two equa- tions (38) and (39) were proved by in Corollary 3 in [Watanabe 10a]. The equation (40) was given in Corollary 2 in [Watanabe 10a]. In the case when q(x) is reg- ular for p(x|w), these results were shown in [Watanabe 10b], and generalized by [Watanabe 10d]. (Q.E.D.) Examples. If q(x) is regular for and realizable by p(x|w), then λ = ν = d/2, where d is the dimension of the parameter space. If q(x) is regular for and unrealizable by p(x|w), then λ and ν are given by [Watanabe 10b]. If q(x) is singular for and realizable by p(x|w), then λ for several models are obtained by resolution of singu- larities, [Aoyagi & Watanabe 05, Rusakov & Geiger 05, Yamazaki & Watanabe 03, Lin 10, Zwiernik 10]. If q(x) is singular for and unrealized by p(x|w), then they are still unknown constants. We have the following theorem. Theorem 2. The following equation holds, lim + ν, n→∞ [n][E][[][C] [v] [(][n][)] =][ λ][ −] [ν] β 14"
}
|
{
"chunk_type": "body"
}
|
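Lemma 3 predicts that for $\beta = 1$ and a regular, realizable model (where $\lambda = \nu = d/2$), $n\,E[B_g(n)] \to \lambda$. The Monte Carlo sketch below (an illustration added here, not from the paper) checks this for a $d = 1$ conjugate model, $x_i \sim N(\mu, 1)$ with prior $\mu \sim N(0, 1)$, where $B_g(n)$ is a closed-form KL divergence between two normal densities. The model and all names are illustrative assumptions.

```python
import numpy as np

# Monte Carlo check of Lemma 3 for a regular, realizable toy model:
#   truth q = N(0, 1), model x ~ N(mu, 1), prior mu ~ N(0, 1), beta = 1.
# Here lambda = nu = d/2 = 1/2, so n * E[B_g(n)] should approach 0.5.
rng = np.random.default_rng(2)
n, reps = 100, 4000
vals = []
for _ in range(reps):
    x = rng.normal(0.0, 1.0, size=n)
    m = x.sum() / (n + 1.0)          # posterior mean of mu
    s2 = 1.0 + 1.0 / (n + 1.0)       # predictive variance
    # B_g(n) = KL( N(0,1) || N(m, s2) ) in closed form
    bg = 0.5 * np.log(s2) + (1.0 + m ** 2) / (2.0 * s2) - 0.5
    vals.append(n * bg)

print(np.mean(vals))  # should be close to lambda = 0.5
```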