\begin{align*}
\mathbb{E}_G\!\left[\sum_{i,j=1}^{d} w_{ij}\left(\mathbb{E}_F[|X_i-X_j|^p] - |Y_i-Y_j|^p\right)^2\right]
&= \sum_{i,j=1}^{d} w_{ij}\,\mathbb{E}_G\!\left[\mathbb{E}_F[|X_i-X_j|^p]^2 - 2\,\mathbb{E}_F[|X_i-X_j|^p]\,|Y_i-Y_j|^p + |Y_i-Y_j|^{2p}\right] \\
&= \sum_{i,j=1}^{d} w_{ij}\left(\mathbb{E}_F[|X_i-X_j|^p]^2 - 2\,\mathbb{E}_F[|X_i-X_j|^p]\,\mathbb{E}_G[|Y_i-Y_j|^p] + \mathbb{E}_G[|Y_i-Y_j|^{2p}]\right).
\end{align*}
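The expansion of the square above can be checked numerically. The sketch below is illustrative only: the Gaussian samples, the uniform weights $w_{ij} = 1$, and $p = 0.5$ are arbitrary choices, not taken from the text. Because both sides are computed from the same samples, they agree exactly up to floating-point error.

```python
import numpy as np

rng = np.random.default_rng(0)
d, p, M, N = 3, 0.5, 4000, 5000
w = np.ones((d, d))                       # hypothetical uniform weights w_ij = 1

X = rng.normal(size=(M, d))               # samples from the forecast F
Y = rng.normal(loc=0.3, size=(N, d))      # samples from the true distribution G

# E_F[|X_i - X_j|^p], shape (d, d), and per-sample |Y_i - Y_j|^p, shape (N, d, d)
vF = np.mean(np.abs(X[:, :, None] - X[:, None, :]) ** p, axis=0)
vY = np.abs(Y[:, :, None] - Y[:, None, :]) ** p

# left-hand side: E_G[ sum_ij w_ij (E_F[...] - |Y_i - Y_j|^p)^2 ]
lhs = np.mean(np.sum(w * (vF[None] - vY) ** 2, axis=(1, 2)))

# right-hand side: expanded form with E_G[|Y_i-Y_j|^p] and E_G[|Y_i-Y_j|^{2p}]
rhs = np.sum(w * (vF**2 - 2 * vF * vY.mean(axis=0) + (vY**2).mean(axis=0)))

print(lhs, rhs)   # identical up to floating-point error
```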
B.5 Logarithmic score
For any F,G ∈ P(Rd) such that F and G have probability density functions f and g belonging to L1(Rd), the expectation of the logarithmic score is analogous to its univariate version:
\[
\mathbb{E}_G[\mathrm{LogS}(F,Y)] = D_{\mathrm{KL}}(G\,\|\,F) + H(G),
\]
where $D_{\mathrm{KL}}(G\,\|\,F)$ is the Kullback–Leibler divergence from F to G and $H(G)$ is the Shannon entropy of G.
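This identity can be verified numerically in one dimension, where both terms have closed forms for Gaussians. The sketch below is not from the paper; the parameters are arbitrary, and the left-hand side is estimated by Monte Carlo.

```python
import numpy as np

rng = np.random.default_rng(1)
mu_f, s_f = 0.0, 1.0        # forecast F = N(0, 1)          (illustrative choice)
mu_g, s_g = 0.5, 1.5        # true distribution G = N(0.5, 1.5^2)

y = rng.normal(mu_g, s_g, size=2_000_000)

# LogS(F, y) = -log f(y); Monte Carlo estimate of E_G[LogS(F, Y)]
logS = 0.5 * np.log(2 * np.pi * s_f**2) + (y - mu_f) ** 2 / (2 * s_f**2)
mc = logS.mean()

# closed forms for Gaussians: D_KL(G||F) and the entropy H(G)
kl = np.log(s_f / s_g) + (s_g**2 + (mu_g - mu_f) ** 2) / (2 * s_f**2) - 0.5
entropy = 0.5 * np.log(2 * np.pi * np.e * s_g**2)

print(mc, kl + entropy)     # agree up to Monte Carlo error
```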
\[
D_{\mathrm{KL}}(G\,\|\,F) = \int_{\mathbb{R}^d} g(y)\log\frac{g(y)}{f(y)}\,dy,
\qquad
H(G) = -\int_{\mathbb{R}^d} g(y)\log g(y)\,dy.
\]
B.6 Hyvärinen score
For F,G ∈ P(Rd) with probability density functions f and g that are twice continuously differentiable and satisfy ∇f(x) → 0 and ∇g(x) → 0 as ∥x∥ → ∞, the expectation of the Hyvärinen score is:
\[
\mathbb{E}_G[\mathrm{HS}(F,Y)] = \int_{\mathbb{R}^d} \left\langle \nabla\log f(y) - 2\,\nabla\log g(y),\, \nabla\log f(y)\right\rangle g(y)\,dy,
\]
where ∇ is the gradient operator and ⟨·,·⟩ is the scalar product. The proof is similar to the proof for the univariate case, using integration by parts and Stokes' theorem (Parry et al., 2012).
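The identity can be checked by Monte Carlo in one dimension. The sketch below assumes the standard form of the Hyvärinen score, $\mathrm{HS}(F,y) = 2\,\Delta\log f(y) + \|\nabla\log f(y)\|^2$, and uses arbitrary Gaussian choices for F and G (not from the text).

```python
import numpy as np

rng = np.random.default_rng(2)
mu_f, s_f = 0.0, 1.0        # forecast F = N(0, 1)          (illustrative)
mu_g, s_g = 0.4, 1.2        # true distribution G = N(0.4, 1.2^2)

y = rng.normal(mu_g, s_g, size=2_000_000)

# Hyvärinen score of a Gaussian forecast: HS(F,y) = 2 (log f)'' + ((log f)')^2
hs = -2 / s_f**2 + (y - mu_f) ** 2 / s_f**4
lhs = hs.mean()                                 # E_G[HS(F, Y)]

# right-hand side: E_G[ <grad log f - 2 grad log g, grad log f> ]
dlf = -(y - mu_f) / s_f**2
dlg = -(y - mu_g) / s_g**2
rhs = np.mean((dlf - 2 * dlg) * dlf)

print(lhs, rhs)             # agree up to Monte Carlo error
```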
B.7 Quadratic score
For any F,G ∈ L2(Rd), the expectation of the quadratic score is analogous to its univariate version:
\[
\mathbb{E}_G[\mathrm{QuadS}(F,Y)] = \|f\|_2^2 - 2\,\langle f, g\rangle,
\]
where $\langle f, g\rangle = \int_{\mathbb{R}^d} f(y)\,g(y)\,dy$.
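A one-dimensional Gaussian sanity check, assuming the standard form $\mathrm{QuadS}(F,y) = \|f\|_2^2 - 2f(y)$ and using the closed forms $\|f\|_2^2 = 1/(2\sigma_f\sqrt{\pi})$ and $\langle f,g\rangle$ equal to a Gaussian density; all parameters below are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(3)
mu_f, s_f = 0.0, 1.0        # forecast F = N(0, 1)          (illustrative)
mu_g, s_g = 0.5, 1.5        # true distribution G = N(0.5, 1.5^2)

def npdf(x, mu, s):
    # univariate Gaussian density
    return np.exp(-(x - mu) ** 2 / (2 * s**2)) / (s * np.sqrt(2 * np.pi))

y = rng.normal(mu_g, s_g, size=2_000_000)

# QuadS(F, y) = ||f||_2^2 - 2 f(y); Monte Carlo estimate of E_G[QuadS(F, Y)]
norm_f2 = 1 / (2 * s_f * np.sqrt(np.pi))        # ||f||_2^2 for a Gaussian
mc = np.mean(norm_f2 - 2 * npdf(y, mu_f, s_f))

# closed form: <f, g> is the N(0, s_f^2 + s_g^2) density at mu_f - mu_g
fg = npdf(mu_f - mu_g, 0.0, np.sqrt(s_f**2 + s_g**2))
print(mc, norm_f2 - 2 * fg)   # agree up to Monte Carlo error
```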
B.8 Pseudospherical score
For any F,G ∈ Lα(Rd), the expectation of the pseudospherical score is analogous to its univariate version:
\[
\mathbb{E}_G[\mathrm{PseudoS}(F,Y)] = -\frac{\langle f^{\alpha-1}, g\rangle}{\|f\|_\alpha^{\alpha-1}},
\]
where $\langle f^{\alpha-1}, g\rangle = \int_{\mathbb{R}^d} f(y)^{\alpha-1} g(y)\,dy$.
C Proofs
C.1 Proposition 1
Proof of Proposition 1. Let F ⊂ P(Rd) be a class of Borel probability measures on Rd, let F ∈ F be a forecast and y ∈ Rd an observation. Let T : Rd → Rk be a transformation and let S be a scoring rule on Rk that is proper relative to T(F) = {L(T(X)), X ∼ F ∈ F}. Then
\begin{align*}
\mathbb{E}_G[S_T(F,Y)] &= \mathbb{E}_G[S(T(F), T(Y))] \\
&= \mathbb{E}_{T(G)}[S(T(F), Y)].
\end{align*}
Given that T(F), T(G) ∈ T(F) and S is proper relative to T(F),
\[
\mathbb{E}_{T(G)}[S(T(G),Y)] \le \mathbb{E}_{T(G)}[S(T(F),Y)]
\;\Leftrightarrow\;
\mathbb{E}_G[S_T(G,Y)] \le \mathbb{E}_G[S_T(F,Y)]. \tag{23}
\]
Proof of the strict propriety case in Proposition 1. The notation is the same as in the proof above, except for the following. Let T : Rd → Rk be an injective transformation and let S be a scoring rule on Rk that is strictly proper relative to T(F) = {L(T(X)), X ∼ F ∈ F}.
Equality in Equation (23) leads to:
\begin{align*}
\mathbb{E}_G[S_T(G,Y)] = \mathbb{E}_G[S_T(F,Y)]
&\Leftrightarrow \mathbb{E}_G[S(T(G),T(Y))] = \mathbb{E}_G[S(T(F),T(Y))] \\
&\Leftrightarrow \mathbb{E}_{T(G)}[S(T(G),Y)] = \mathbb{E}_{T(G)}[S(T(F),Y)].
\end{align*}
Since S is strictly proper relative to T(F), this leads to T(F) = T(G), and finally, since T is injective, we have F = G.
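The propriety inequality in Proposition 1 can be illustrated numerically. In the sketch below (all choices hypothetical, not from the paper) T is the projection onto the first coordinate of a bivariate Gaussian and S is the univariate logarithmic score, so $S_T(F,y) = \mathrm{LogS}(T(F), T(y))$; the check is that $\mathbb{E}_G[S_T(G,Y)] \le \mathbb{E}_G[S_T(F,Y)]$.

```python
import numpy as np

rng = np.random.default_rng(4)

# Bivariate forecast F and true distribution G (identity covariance);
# T projects onto the first coordinate, so T(F) and T(G) are N(mu[0], 1).
mu_f, mu_g = np.array([0.0, 0.0]), np.array([0.7, -0.3])
Y = rng.multivariate_normal(mu_g, np.eye(2), size=1_000_000)

def logscore_1d(mu, s, y):
    # LogS(N(mu, s^2), y) = -log density (negatively oriented)
    return 0.5 * np.log(2 * np.pi * s**2) + (y - mu) ** 2 / (2 * s**2)

t_y = Y[:, 0]                                       # T(Y)
score_g = logscore_1d(mu_g[0], 1.0, t_y).mean()     # E_G[S_T(G, Y)]
score_f = logscore_1d(mu_f[0], 1.0, t_y).mean()     # E_G[S_T(F, Y)]
print(score_g, score_f)     # propriety: score_g <= score_f
```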
C.2 Proposition 3
Proof of Proposition 3. The proof relies on the reproducing kernel Hilbert space (RKHS) representation of the kernel scoring rule Sρ. For background on kernel scoring rules, maximum mean discrepancies and RKHS, we refer to Smola et al. (2007) or Steinwart and Christmann (2008, Section 4).
Let Hρ denote the RKHS associated with ρ. We recall that Hρ contains all the functions ρ(x,·) and that the inner product on Hρ satisfies
\[
\langle \rho(x_1,\cdot),\, \rho(x_2,\cdot)\rangle_{\mathcal{H}_\rho} = \rho(x_1, x_2).
\]
The kernel mean embedding is a linear map Ψρ : Pρ → Hρ sending an admissible distribution F ∈ Pρ to a function Ψρ(F) in the RKHS, such that the image of the point measure δx is ρ(x,·). Equation (16), giving the kernel scoring rule for an ensemble prediction $F = \frac{1}{M}\sum_{m=1}^{M} \delta_{x_m}$, can be written as
\[
S_\rho(F,y) = \tfrac{1}{2}\,\langle \Psi_\rho(F) - \Psi_\rho(\delta_y),\, \Psi_\rho(F) - \Psi_\rho(\delta_y)\rangle_{\mathcal{H}_\rho} = \tfrac{1}{2}\,\|\Psi_\rho(F - \delta_y)\|_{\mathcal{H}_\rho}^2.
\]
The properties of the kernel mean embedding ensure that this relation still holds for all F ∈ Pρ. As a consequence, if (Tl)l≥1 is a Hilbert basis of Hρ, we have
\[
S_\rho(F,y) = \tfrac{1}{2}\,\|\Psi_\rho(F - \delta_y)\|_{\mathcal{H}_\rho}^2 = \tfrac{1}{2} \sum_{l\ge 1} \langle \Psi_\rho(F - \delta_y),\, T_l\rangle_{\mathcal{H}_\rho}^2.
\]
Finally, the properties of the kernel mean embedding ensure that, for all T ∈ Hρ,
\[
\langle \Psi_\rho(F - \delta_y),\, T\rangle_{\mathcal{H}_\rho}
\]