James-Stein Estimator with unequal variances (Ch. 2)
The calculation of the $\hat A_i$ and $\hat B_i$ columns in Table 3 follows the development in Section 8 of the 1973 paper Stein's Estimation Rule and Its Competitors--An Empirical Bayes Approach, which is reference [8] in the 1975 paper. Let $n=36$ be the number of observations. For row $i$ in the table we define the vector $d_1,\ldots,d_n$ as motivated by Lemma 2:
$$
d_j := \begin{cases}3 &\text{if $j=i$}\\1&\text{otherwise}\\\end{cases}.\tag{8.10}
$$
The vector $E_1,\ldots,E_n$ is then calculated as:
$$
E_j := \frac{S_j-d_jD_j}{d_j},\tag{8.6}
$$
with $S_j:=X_j^2$. Then $I_1,\ldots,I_n$ are defined as functions of the real variable $A$ via
$$
I_j(A):=\frac{d_j}{2(A+D_j)^2}.\tag{8.7}
$$
Next, solve the equation
$$
A = \frac{\sum_{j=1}^n E_jI_j(A)}{\sum_{j=1}^nI_j(A)}\tag{8.8}
$$
for $A$ and call this value $\hat A_i$ (remember we are dealing with a fixed $i$). Having determined $\hat A_i$, then compute:
$$
d_i^*:= 2(\hat A_i + D_i)^2\sum_j\frac{d_j}{2(\hat A_i+D_j)^2}\tag{8.9}
$$
and finally:
$$
\hat B_i:=\frac{d_i^*-4}{d_i^*}\frac{D_i}{\hat A_i + D_i}.
\tag{8.11}
$$
It's unclear what column $\hat k_i$ contains, but it's very close to $d_i^*-2$, perhaps representing some kind of equivalent sample size for row $i$ (see the paragraph after (8.11)). This quantity $\hat k_i$ is not as crucial to the computation as $d_i^*$.
Here is R code to carry all of this out:
dat <- read.csv("tox.csv", header = TRUE)
n <- nrow(dat)
X <- dat$X
S <- X^2
D <- dat$sqrtD^2
est_A <- numeric(n)
dstar <- numeric(n)
est_B <- numeric(n)
est_theta <- numeric(n)
guess_k <- numeric(n)
for (i in 1:n) {
  # Define d[1] .. d[n] to be 1, except d[i] = 3, per (8.10)
  d <- rep(1, n)
  d[i] <- 3
  # Define E[1] .. E[n] per (8.6)
  E <- (S - d * D) / d
  # f(a) = 0 is equation (8.8) rearranged, with I[j] from (8.7)
  f <- function(a) {
    I <- d / (2 * (a + D)^2)
    a - sum(E * I) / sum(I)
  }
  # uniroot() is a one-dimensional root finder; (0, 1) brackets the root here
  est_A[i] <- uniroot(f, c(0, 1))$root
  # Plug est_A into (8.9) and (8.11) to get dstar, est_B, est_theta
  I <- d / (2 * (est_A[i] + D)^2)
  dstar[i] <- 2 * (est_A[i] + D[i])^2 * sum(I)
  est_B[i] <- (1 - 4 / dstar[i]) * D[i] / (est_A[i] + D[i])
  est_theta[i] <- (1 - est_B[i]) * X[i]
  guess_k[i] <- dstar[i] - 2
}
This runs against the data file tox.csv:
i,X,sqrtD,delta,A,k,B
1,.293,.304,.035,.0120,1334.1,.882
2,.214,.039,.192,.0108,21.9,.102
3,.185,.047,.159,.0109,24.4,.143
4,.152,.115,.075,.0115,80.2,.509
5,.139,.081,.092,.0112,43.0,.336
6,.128,.061,.100,.0110,30.4,.221
7,.113,.061,.088,.0110,30.4,.221
8,.098,.087,.062,.0113,48.0,.370
9,.093,.049,.079,.0109,25.1,.154
10,.079,.041,.070,.0109,22.5,.112
11,.063,.071,.045,.0111,36.0,.279
12,.052,.048,.044,.0109,24.8,.148
13,.035,.056,.028,.0110,28.0,.192
14,.027,.040,.024,.0108,22.2,.107
15,.024,.049,.020,.0109,25.1,.154
16,.024,.039,.022,.0108,21.9,.102
17,.014,.043,.012,.0109,23.1,.122
18,.004,.085,.003,.0112,46.2,.359
19,-.016,.128,-.007,.0116,101.5,.564
20,-.028,.091,-.017,.0113,51.6,.392
21,-.034,.073,-.024,.0111,37.3,.291
22,-.040,.049,-.034,.0109,25.1,.154
23,-.055,.058,-.044,.0110,28.9,.204
24,-.083,.070,-.060,.0111,35.4,.273
25,-.098,.068,-.072,.0111,34.2,.262
26,-.100,.049,-.085,.0109,25.1,.154
27,-.112,.059,-.089,.0110,29.4,.210
28,-.138,.063,-.106,.0110,31.4,.233
29,-.156,.077,-.107,.0112,40.0,.314
30,-.169,.073,-.120,.0111,37.3,.291
31,-.241,.106,-.128,.0114,68.0,.468
32,-.294,.179,-.083,.0118,242.4,.719
33,-.296,.064,-.225,.0111,31.9,.238
34,-.324,.152,-.114,.0117,154.8,.647
35,-.397,.158,-.133,.0117,171.5,.665
36,-.665,-.216,-.140,.0119,426.8,.789
Is there any point in using MSE loss in modern deep neural networks?
Suppose you want an unbiased prediction and that the conditional distribution of your dependent data is asymmetric. Then you want to minimize the squared error, or $L^2$ loss, since the minimizer of the expected squared error is the conditional mean.
Minimizing the absolute error, or $L^1$ loss, is instead equivalent to predicting the median of the conditional distribution (Hanley et al., 2001, The American Statistician), not the mean. If the distribution is asymmetric, the median and the mean differ, so the output will typically be biased.
This is a purely statistical effect. It is completely independent of your ML algorithm, NN architecture, fitting method, etc.
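This can be seen numerically. Under an assumed right-skewed distribution (a lognormal, chosen purely for illustration), a minimal Python sketch shows the $L^2$ minimizer tracking the mean and the $L^1$ minimizer tracking the median:

```python
import numpy as np

rng = np.random.default_rng(0)
# A right-skewed conditional distribution: lognormal(0, 1)
y = rng.lognormal(mean=0.0, sigma=1.0, size=20_000)

# Evaluate both losses over a grid of constant predictions c
grid = np.linspace(0.01, 5.0, 1000)
l2_loss = np.array([np.mean((y - c) ** 2) for c in grid])
l1_loss = np.array([np.mean(np.abs(y - c)) for c in grid])

c_l2 = grid[np.argmin(l2_loss)]  # the L2 minimizer tracks the sample mean
c_l1 = grid[np.argmin(l1_loss)]  # the L1 minimizer tracks the sample median

print(c_l2, y.mean())        # both near exp(1/2), roughly 1.65
print(c_l1, np.median(y))    # both near 1
```

Since the lognormal mean exceeds its median, the two optimal constant predictions differ, which is exactly the bias described above.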
UMVUE of $e^{-\lambda}$ from poisson distribution
Yes, the Lehmann–Scheffé theorem applies here, since $\bar{X}$ is a complete sufficient statistic. The problem is that $\bar{Y}$ does not actually condition on the sufficient statistic: although the $Y_i$ are functions of the $X_i$, $\bar{Y}$ is not a function of $\bar{X}$. Your expression simply isn't reduced. What, in fact, is the distribution of $\bar{Y}$ given $\bar{X}$? This is discussed on the Wikipedia page for the Rao–Blackwell theorem.
https://en.wikipedia.org/wiki/Rao%E2%80%93Blackwell_theorem#The_theorem
Essentially, let $f_{\lambda}(x)$ be the Poisson density $\exp(-\lambda)\lambda^x/x!$. Then
\begin{eqnarray}
E(Y_1 | \bar{X} = s) &=& P(X_1 = 0| \bar{X}=s) \\
&=& P(X_1 = 0, \sum_{i=2}^n X_i = ns) /P(\bar{X} = s)\\
&=& f_{\lambda}(0) f_{(n-1)\lambda}(ns) / f_{n\lambda}(ns) \\
\end{eqnarray}
Some algebra reduces the last expression to $(1-\frac{1}{n})^{ns}$.
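As a sanity check (not part of the original derivation), the Poisson probability generating function confirms that $(1-\frac{1}{n})^{T}$, with $T = n\bar{X} \sim \text{Poisson}(n\lambda)$, is unbiased for $e^{-\lambda}$. A short Python sketch:

```python
import math

# With T = X_1 + ... + X_n ~ Poisson(n * lam), the estimator is (1 - 1/n)^T.
# Unbiasedness follows from the Poisson pgf: E[c^T] = exp(mu * (c - 1)).
lam, n = 1.3, 7
mu = n * lam
c = 1 - 1 / n

# Compute E[(1 - 1/n)^T] = sum_t Poisson(t; mu) * c^t by direct summation
term = math.exp(-mu)       # t = 0 term
expectation = term
for t in range(1, 100):
    term *= mu * c / t     # builds e^{-mu} * (mu*c)^t / t!
    expectation += term

print(expectation, math.exp(-lam))   # the two agree, so the estimator is unbiased
```

The values `lam = 1.3` and `n = 7` are arbitrary choices for the check; any positive rate and sample size work.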
Difference Between Linear Regression in Machine Learning and Statistical Model
Unfortunately, the dichotomy you describe is invalid. ML models (almost always) define a response distribution. For example, the extremely popular gradient boosting machine library XGBoost defines particular learning objectives (e.g. linear, logistic, Poisson, Cox, etc.).
The implementation of linear regression and GLMs in Spark's MLlib is definitely based on standard statistical theory for linear models. For example, quoting directly from the LinearRegressionWithSGD comments in pyspark/mllib/regression.py: "Train a linear regression model using Stochastic Gradient Descent (SGD). This solves the least squares regression formulation f(weights) = 1/(2n) ||A weights - y||^2", which is the mean squared error. That is, this is the standard linear regression formulation for a Gaussian response. The implementation of the particular algorithm might be optimised so that it works for very large datasets (see for example this excellent thread on "Why use gradient descent for linear regression, when a closed-form math solution is available?"), but the theory behind the algorithm is exactly the same.
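To make the quoted formulation concrete, here is a minimal, hypothetical Python sketch (not MLlib's actual implementation) of SGD on $f(w) = \frac{1}{2n}\|Aw - y\|^2$, compared against the closed-form least-squares solution:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 500, 3
A = rng.normal(size=(n, p))
true_w = np.array([2.0, -1.0, 0.5])
y = A @ true_w + 0.1 * rng.normal(size=n)

# Minimise f(w) = 1/(2n) ||A w - y||^2 by plain SGD, one sample at a time
w = np.zeros(p)
lr = 0.05
for epoch in range(100):
    for i in rng.permutation(n):
        grad = (A[i] @ w - y[i]) * A[i]   # gradient of the single-sample loss
        w -= lr * grad

# The same problem has a closed-form least-squares solution
w_ols, *_ = np.linalg.lstsq(A, y, rcond=None)
print(w, w_ols)   # SGD lands close to the closed-form answer
```

Both routes target the same statistical model; the iterative solver only trades exactness for scalability, which is the point made above.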
learning rate in Adaboost sklearn
The official documentation states that "The learning rate shrinks the contribution of each regressor by learning_rate." So we basically need to understand three concepts:
1. Weak Classifier
A model that performs only slightly better than random guessing; for balanced binary classification, just above 50% accuracy.
2. Boosting
This technique sequentially applies a model $K$ times to modified versions of the data. Suppose at each iteration $i \in \{1,2,\ldots, K\}$ you update the current tree model $T_{i}$ via
\begin{align}
T_{i+1}(x) = T_{i}(x) + \alpha M(x),
\end{align}
where $$M(x) = \sum_{j=1}^{J} t(x, \theta_{j})$$
is a sum of trees with different parameters $\theta_{j}$, and $\alpha$ is the learning rate, between 0 and 1.
3. Learning Rate
This parameter controls how much the new model contributes to the existing one. There is normally a trade-off between the number of iterations $K$ and the value of $\alpha$: when taking smaller values of $\alpha$ ($\alpha \approx 0$), you should use more iterations $K$, so that the ensemble continues to improve. Jerome Friedman suggests setting $\alpha$ to small values ($\alpha < 0.1$).
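The trade-off between $\alpha$ and $K$ can be illustrated with a toy boosting loop (a deliberately crude weak learner, purely for illustration; this is not sklearn's AdaBoost):

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.uniform(-1, 1, size=200)
y = np.sin(3 * x)

def boost_mse(alpha, K):
    """Boost a crude weak learner (one split at x = 0) with shrinkage alpha."""
    pred = np.zeros_like(y)
    left = x < 0
    for _ in range(K):
        resid = y - pred
        # Weak learner: predict the mean residual on each side of the split
        stump = np.where(left, resid[left].mean(), resid[~left].mean())
        pred += alpha * stump   # the learning rate shrinks each contribution
    return np.mean((y - pred) ** 2)

# A small alpha needs more iterations K to reach the same fit
print(boost_mse(1.0, 5), boost_mse(0.1, 5), boost_mse(0.1, 50))
```

With $\alpha = 0.1$ and only 5 rounds the fit lags behind; with 50 rounds it catches up to the $\alpha = 1$ fit, matching the trade-off described above.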
learning rate in Adaboost sklearn
It looks to me like this is mainly a problem with the sklearn docs. In a 2021 GitHub issue about this exact question, not one person references a paper that is actually implemented, and the entire discussion is purely about style.
I think @XavierBourretSicotte's answer above should be the accepted one, because it is consistent with sklearn's implementation. I'll also reopen that issue and suggest adding a link to the paper and/or a precise definition of the learning rate.
Fitting a distribution to match known points on the CDF
I think you are accidentally trying to maximize the squared errors. The default for optim() is to minimize the function, so the negative sign in your beta_func() results in searching for a max.
If you modify your function like so you get values closer to your guess:
beta_func2 <- function(par, x) sum( (pbeta( x, par[1], par[2]) - y)**2 )
out2 <- optim(c(9,.8), beta_func2, lower=c(1,.5), upper=c(200,200), method="L-BFGS-B", x=x)
out2 <- out2$par
print(out2)
[1] 11.04296 0.50000
You can check the new function against your observations (where out, x, and y are defined as in your example):
plot(x,(pbeta(x,out[1],out[2])), ylim=c(-.1,1), col="red", type="l")
points(x, (pbeta(x,9,0.8)), col="black", type="l")
points(x,(pbeta(x,out2[1],out2[2])), col="orange", type="l")
lines(x,y, col='blue')
title(main="Blue observed CDF, black guesstimate, gold optimized")
Is $R^2_{adjusted}$ both unbiased and consistent under the alternative in simple regression?
As a preliminary result, $R^2_{adjusted}$ is indeed unbiased under the null, at least under error normality.
From this question we have that
$$
R^2\sim Beta(1/2,(n-2)/2)
$$
under the null in the present setting of a simple regression ($k=2$). Hence, its mean is
$$
E(R^2)=\frac{1}{n-1}
$$
so that, from
$$
R^2_{adjusted}=1-(1-R^2)\frac{n-1}{n-2},
$$
we find
$$
E(R^2_{adjusted})=1-(1-E(R^2))\frac{n-1}{n-2}=0
$$
In fact, this result does not hinge on the simple regression case, as $R^2\sim Beta((k-1)/2,(n-k)/2)$ in general, so that $E(R^2)=(k-1)/(n-1)$ and
$$
E\left(1-(1-R^2)\frac{n-1}{n-k}\right)=0.
$$
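Under the stated normality assumption, this null-case unbiasedness can also be checked by simulation; here is a quick Python sketch for the simple-regression case ($k=2$):

```python
import numpy as np

rng = np.random.default_rng(0)
reps, n = 20_000, 10
adj_r2 = np.empty(reps)

for r in range(reps):
    x = rng.normal(size=n)
    y = rng.normal(size=n)               # the null: y is unrelated to x
    X = np.column_stack([np.ones(n), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    tss = (y - y.mean()) @ (y - y.mean())
    r2 = 1 - (resid @ resid) / tss
    adj_r2[r] = 1 - (1 - r2) * (n - 1) / (n - 2)

print(adj_r2.mean())   # close to zero, matching the unbiasedness claim
```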
As to consistency, it is given for any vector $\beta$: write
$$
R^2=1-\frac{\hat{u}'\hat{u}/n}{\tilde{y}'\tilde{y}/n}
$$
with $\hat{u}$ the residuals and $\tilde{y}$ the demeaned $y$s. Standard laws of large numbers give us that sample variances consistently estimate population variances: $\hat{u}'\hat{u}/n\to_p\sigma^2_u$ and $\tilde{y}'\tilde{y}/n\to_p\sigma^2_y$.
Hence, by Slutsky's theorem,
$$
R^2\to_p1-\frac{\sigma^2_u}{\sigma^2_y},
$$
i.e., (at least what I consider) the population $R^2$. Since $R^2_{adjusted}-R^2=o_p(1)$, the same holds true for $R^2_{adjusted}$.
As for the mean of $R^2_{adjusted}$ under the alternative, this thread appears helpful. It establishes a noncentral beta distribution for $R^2$ under the alternative. I have not been able to use results like these to say something precise about $E(R^2)$.
In any case, this little simulation suggests that the answer is no:
reps <- 10000
adj.R2 <- rep(NA,reps)
beta <- 1
n <- 10
V.u <- 2
V.x <- 3
for (i in 1:reps){
u <- rnorm(n, sd=sqrt(V.u))
x <- rnorm(n, sd=sqrt(V.x))
y <- beta*x + u
adj.R2[i] <- summary(lm(y~x))$adj.r.squared
}
Result:
> mean(adj.R2)
[1] 0.5444916
> (pop.R2 <- 1-V.u/(beta^2*V.x+V.u))
[1] 0.6
What constitutes a large KL divergence?
As commented by W. Huber, this is a fairly interesting question, even though I doubt there is a clear absolute answer. To quote a few generic references,
"...the K-L divergence represents the number of extra bits necessary to
code a source whose symbols were drawn from the distribution P, given
that the coder was designed for a source whose symbols were drawn from
Q." Quora
and
"...it is the amount of information lost when Q is used to approximate
P." Wikipedia
and
"The Kullback–Leibler divergence can also be interpreted as the
expected discrimination information for $H_1$ over $H_0$: the mean
information per sample for discriminating in favor of a hypothesis
$H_1$ against a hypothesis $H_0$, when hypothesis $H_1$ is true."
Wikipedia
But coding is a fairly specialised notion (in my opinion), while information is pretty vague (one could argue it is actually defined by the Kullback–Leibler distance). And there is no absolute scale, since the distance most often ranges from 0 to $\infty$ (contrary to what the Wikipedia page may suggest in its first paragraph). Thus the scaling or calibration of a Kullback–Leibler distance will depend on the problem at hand and the reason why one measures such a distance.
An illustration of this calibration issue is provided in the following graph, which compares histograms of log-Kullback–Leibler distances between two Gamma distributions when:
- two datasets $x$ and $y$ of size $n$ are generated from a Gamma ${\cal G}(a,1)$;
- both parameters of the Gamma distribution are estimated from each sample by the method of moments;
- the Kullback–Leibler distance between the two estimated Gammas is derived.
Here is the core of the R code (using W. Huber's KL.gamma function) in case this is unclear:
n = 15        # sample size
T = 1e3       # number of replications
diz = rep(0, T)
for (t in 1:T) {
  x = rgamma(n, 17, 1)            # first sample, from G(17, 1)
  a = mean(x); b = var(x)
  a = a^2/b; b = sqrt(a/b)        # method-of-moments shape and rate
  y = rgamma(n, 17, 1)            # second sample, from the same G(17, 1)
  c = mean(y); d = var(y)
  c = c^2/d; d = sqrt(c/d)
  diz[t] = KL.gamma(a, b, c, d)   # KL distance between the two fitted Gammas
}
The interpretation of this small experiment is that, for samples of size $n=15$, a Kullback–Leibler divergence around $1$ is not significant, in the sense that the same "true" parameters produce samples whose estimated distributions sit at a distance of around $1$ from one another. When moving to samples of size $n=150$, the same distance becomes highly significant. (Note that this experiment only makes the point that there is no absolute "large" or "small" Kullback–Leibler divergence; it does not turn this assessment of scale into a test or anything of the sort!)
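For completeness, the Kullback–Leibler divergence between two Gamma distributions has a closed form (this is the standard formula in the shape/rate parameterisation, sketched here in Python; W. Huber's KL.gamma presumably computes the same quantity):

```python
from math import lgamma, log

def digamma(x):
    # Recurrence plus asymptotic series; accurate enough for this sketch
    r = 0.0
    while x < 6:
        r -= 1.0 / x
        x += 1.0
    f = 1.0 / (x * x)
    return r + log(x) - 0.5 / x - f * (1/12 - f * (1/120 - f / 252))

def kl_gamma(a1, b1, a2, b2):
    """KL( Gamma(a1, b1) || Gamma(a2, b2) ), shape/rate parameterisation."""
    return ((a1 - a2) * digamma(a1)
            - lgamma(a1) + lgamma(a2)
            + a2 * (log(b1) - log(b2))
            + a1 * (b2 - b1) / b1)

print(kl_gamma(17, 1, 17, 1))   # identical Gammas: divergence 0
print(kl_gamma(17, 1, 15, 1))   # a small positive divergence
```

The asymmetry of the divergence (swapping the argument pairs changes the value) is one more reason an absolute calibration is elusive.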
How well does $Q(z|X)$ match $N(0,I)$ in variational autoencoders?
It's ok for $Q(z|X)$ to be different from $\mathcal{N}(0, I)$, because when we sample from the VAE, we're not trying to reconstruct $X$ anymore. Instead, we're trying to sample some $X \sim \mathcal{X}$ where $\mathcal{X}$ is the distribution of all images in the dataset.
Imagine the latent space were actually a uniform distribution over the interval $(0,10)$, and we were autoencoding MNIST digits. Suppose that images containing a 1 happened to have $Q(z|X)$ distributed around $(0,1)$, images with a 2 happened to be around $(1,2)$, etc.
Then for any particular $X$, $Q(z|X)$ is not close to matching the uniform distribution. However, as long as the mixture $\frac{1}{n} \sum_i Q(z|X_i)$ reasonably covers and matches the uniform distribution, it's reasonable to sample $z \sim U(0,10)$ and then run the decoder, because the $z$ you got is probably close to $\mu(X)$ for some $X$.
edit: To answer the question of why we might expect the mixture of $Q(z|X)$ to be approximately $\mathcal{N}(0,I)$, note that we can decompose $P(z) = \int P(z|X)\, p(X)\, dX = E_X\left[ P(z|X) \right]$. By definition of the prior, $z \sim \mathcal{N}(0,I)$. However, when we approximate $P(z|X)$ with the encoder $Q(z|X)$, we end up with something slightly different.
Minimizing the VAE loss is equivalent to maximizing $\log P(X) - \mathcal{D}_\text{KL}(Q(z|X) \| P(z|X))$. So we're simultaneously maximizing the log likelihood of the data while also encouraging $Q(z|X)$ to be as close to $P(z|X)$ as possible. As a result, we should end up with a mixture very close to $\mathcal{N}(0,I)$.
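A toy numerical illustration of the "mixture covers the prior" point above (hypothetical numbers, one latent dimension): each per-example posterior is a narrow Gaussian, yet their mixture is close to the prior.

```python
import random
import statistics

random.seed(0)
# One latent dimension: each data point x gets a narrow posterior q(z|x)
# centered at mu(x); the mu(x) themselves are spread out like the prior.
mus = [random.gauss(0.0, 1.0) for _ in range(2000)]  # one mu per data point
sigma = 0.1                                          # per-point posterior width

# One draw from each q(z|x) is a draw from the mixture (1/n) sum_i q(z|x_i).
zs = [random.gauss(mu, sigma) for mu in mus]

# Each individual q(z|x) has sd 0.1 (nothing like the prior), but the mixture
# has mean ~0 and sd ~sqrt(1 + 0.01), i.e. close to N(0, 1).
print(statistics.mean(zs), statistics.pstdev(zs))
```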
|
41,911
|
How well does $Q(z|X)$ match $N(0,I)$ in variational autoencoders?
|
The answer is two-fold:
(1) The encoder network needs to be expressive enough (wide enough and deep enough) to be able to map the nonlinear input space to something close to $\mathcal{N}(0,I)$.
(2) In addition to (1) (I added a 3rd hidden layer to the MNIST example I described in the question), when I increase the number of latent dimensions, I observe that the mapping of the training data into the latent space becomes closer to $\mathcal{N}(0,I)$. In hindsight this is not super surprising, because the system is able to store information across more dimensions, so each individual latent dimension can get closer to $\mathcal{N}(0,I)$.
|
41,912
|
How well does $Q(z|X)$ match $N(0,I)$ in variational autoencoders?
|
Also, we sample $z$ from the prior and decode it because we made the assumption that $p_{\theta}(x) = \int p_{\theta}(x|z)p(z) dz$. Therefore, sampling from $p_{\theta}(x)$ is equivalent to sampling from the joint $p_{\theta}(x,z)$ then discarding $z$.
|
41,913
|
What are LS means useful for?
|
I disagree strongly with the "only situation" in the OP. EMMs (estimated marginal means, more restrictively known as least-squares means) are very useful for heading off a Simpson's paradox situation in evaluating the effects of a factor. In your example, consider a scenario where these three things are true:
When $x_2$ is held at any fixed level, the lowest mean response occurs at $x_1=1$.
For $x_1$ held fixed at either level, the highest mean response occurs when $x_2=3$.
The combination $(x_1=1, x_2=3)$ has a disproportionately large sample size, while $(x_1=1,x_2=1)$ and $(x_1=1,x_2=2)$ have small sample sizes.
Then it is possible that the marginal mean at $x_1=1$ is higher than that at $x_1=2$, even though the mean at $x_1=1$ is less than that at $x_1=2$ for each $x_2$.
If one instead computes EMMs, the observed means at $x_1=1$ and $x_2=1,2,3$ receive equal weight, so that the EMM for $x_1=1$ is less than that for $x_1=2$.
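The scenario above can be made concrete with made-up cell means and sample sizes (Python for illustration; all numbers are hypothetical):

```python
# Hypothetical cell means and sample sizes, rows x1 = 1, 2; columns x2 = 1, 2, 3.
means = [[10.0, 12.0, 20.0],   # x1 = 1: the lower mean at every level of x2
         [11.0, 13.0, 21.0]]   # x1 = 2
sizes = [[2, 2, 50],           # x1 = 1: the (x1=1, x2=3) cell is oversampled
         [20, 20, 20]]         # x1 = 2

# Ordinary (sample-size weighted) marginal means: x1 = 1 comes out HIGHER.
weighted = [sum(m * n for m, n in zip(mr, nr)) / sum(nr)
            for mr, nr in zip(means, sizes)]

# EMMs weight the three cell means equally: the within-level ordering survives.
emm = [sum(mr) / len(mr) for mr in means]

print(weighted)  # about [19.33, 15.0] -- a Simpson's-paradox reversal
print(emm)       # [14.0, 15.0]
```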
EMMs are comparable to what is termed "unweighted means analysis" in old experimental design texts. The idea was useful many decades ago, and it still is.
The "basics" vignette for the R package emmeans has a concrete illustration and some discussion of such issues.
Disclaimer
I have spent the last 5 years or so developing/refining R packages for such purposes, so I'm not exactly an objective observer. I hope to hear other perspectives.
|
41,914
|
Reporting the effect of a predictor in a logistic regression fitted with a restricted cubic spline
|
My course notes describe the components of a restricted cubic spline function and provide ways to interpret the model when general smooth effects are included. You can compute an odds ratio between two selected points for age (the default in the summary.rms function is the quartiles), or better: show the partial effect plots to depict the entire age effect. You can also use plot(nomogram(fit)) to construct a nomogram for the whole model. Don't try to interpret individual terms.
The terms such as age' represent differences in cubes that restrict the function of age to be linear beyond the outer knots.
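To make the "differences in cubes" concrete, here is a sketch of that basis in Python (Harrell's parameterisation up to scaling; the exact normalisation used by rms may differ, and the knot values below are hypothetical):

```python
def rcs_basis(x, knots):
    """Restricted cubic spline terms for a single value x.
    Each term is a difference of truncated cubes chosen so that the fitted
    function is linear beyond the outer knots."""
    t = sorted(knots)
    k = len(t)
    p = lambda u: max(u, 0.0) ** 3  # truncated cube (u)_+^3
    terms = []
    for j in range(k - 2):
        term = (p(x - t[j])
                - p(x - t[k - 2]) * (t[k - 1] - t[j]) / (t[k - 1] - t[k - 2])
                + p(x - t[k - 1]) * (t[k - 2] - t[j]) / (t[k - 1] - t[k - 2]))
        terms.append(term / (t[k - 1] - t[0]) ** 2)
    return [x] + terms  # linear term plus k - 2 nonlinear terms

knots = [30.0, 45.0, 60.0, 75.0]  # hypothetical age knots
# Below the first knot the nonlinear terms vanish; far above the last knot the
# basis grows linearly, so the age effect is linear in the tails.
print(rcs_basis(25.0, knots))
print(rcs_basis(90.0, knots))
```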
|
41,915
|
Cross-validation for elastic net regression: squared error vs. correlation on the test set
|
I think I figured out what was happening here.
Note that the value of correlation does not depend on the length of $\hat\beta$. So if the test correlation keeps increasing while the test R-squared drops, it might indicate that $\lVert\hat\beta\rVert$ is not optimal and scaling $\hat\beta$ up or down by a scalar factor might help.
After realizing this, I remembered that there have been multiple claims in the literature that elastic net, and even lasso on its own, "over-shrinks" the coefficients. For lasso, there is the "relaxed lasso" procedure that aims to amend this bias: see Advantages of doing "double lasso" or performing lasso twice?. For elastic net, the original Zou & Hastie 2005 paper actually advocated up-scaling $\hat\beta$ by a constant factor, see Why does glmnet use "naive" elastic net from Zou & Hastie original paper?. Such scaling would not change the value of correlation but would affect the R-squared.
When I apply the Zou & Hastie heuristic scaling $$\hat\beta^* = \big(1+\lambda(1-\alpha)\big)\hat\beta,$$ I obtain the following result:
Here the solid lines are the same as in the figure in my question whereas the dashed lines on the left subplot use the re-scaled beta. Now both metrics are maximized by around the same values of $\alpha$ and $\lambda$.
Magic!
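The invariance at the heart of this, that rescaling predictions leaves correlation fixed but moves R-squared, is easy to check with toy numbers (Python; the data are made up):

```python
y    = [1.0, 2.0, 3.0, 4.0, 5.0]
yhat = [0.4, 0.9, 1.6, 2.1, 2.4]   # "over-shrunk" predictions
yscl = [2.0 * v for v in yhat]     # rescaled, e.g. by 1 + lambda*(1 - alpha)

def corr(a, b):
    """Pearson correlation."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    ca = [x - ma for x in a]
    cb = [x - mb for x in b]
    num = sum(u * v for u, v in zip(ca, cb))
    den = (sum(u * u for u in ca) * sum(v * v for v in cb)) ** 0.5
    return num / den

def r2(y, f):
    """Coefficient of determination of predictions f against y."""
    ybar = sum(y) / len(y)
    ss_res = sum((yi - fi) ** 2 for yi, fi in zip(y, f))
    ss_tot = sum((yi - ybar) ** 2 for yi in y)
    return 1.0 - ss_res / ss_tot

print(corr(y, yhat), corr(y, yscl))  # identical: correlation ignores the scale
print(r2(y, yhat), r2(y, yscl))      # R-squared improves after rescaling
```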
|
41,916
|
What is the nugget effect?
|
In the context of estimating a variogram, a nugget allows for the variogram to assume a non-zero value for two observations having a distance of zero. The implication also is that the correlation between adjacent observations is reduced slightly.
I might say a nugget is a more general concept than measurement error, at least in geostatistics. Or, they may be unrelated concepts altogether. "Measurement error" is an unclear term in statistics: with many sources of variability, it benefits us to be clear about our presumed sources of "error". Measurement error can be taken to imply that the measurement method is flawed. In the case of blood pressure, we would like to measure the pressure differential between blood leaving the heart and entering the heart. This "gold standard" is far too invasive to be done in practice, so we use an imperfect method that depends on a number of characteristics. The patient's 8 hour diet, their position (seated, laying down), the time of day, the volume of their heart, etc. all predict BP but we ignore these values in practice: I contend these aren't measurement error. However, the reactiveness and training of the administrator and the quality of the cuff affect the quality of the reading while not being a reflection of the actual blood pressure whatsoever: I contend this is measurement error.
In geostatistics, and many other fields, we may conduct the "same measurement" or "nearly the same measurement" twice and expect different results. Presence of a nugget means that any two observations sampled arbitrarily closely will not necessarily have the same value. Not allowing for nugget error may be an undesirable constraint when the design permits collecting data of that nature.
In many studies you simply can't sample the same area with replacement. Take blood pressure as an example, it is impossible to replicate a blood pressure measurement either by time or location on the human body. Even if two arms are measured simultaneously, they will provide a different reading, and if the same arm is measured twice immediately in sequence, the blood pressure fluctuates slightly due to response to environmental temperatures, metabolism, fatigue, duration in resting position, attenuation of the white coat effect, etc. These measures are certainly more serially correlated than measures collected farther out in time, at more distal parts of the body, or even in different people, but they are not perfectly correlated, thus imposing an inappropriate variance structure could be considerably biased.
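A minimal sketch of the discontinuity described above (Python; the parameter values are arbitrary): a variogram model with a nugget is zero at h = 0 but jumps above the nugget for any h > 0.

```python
import math

def exp_variogram(h, nugget=0.2, psill=1.0, rng=10.0):
    """Exponential variogram with a nugget: gamma(0) = 0 exactly, but
    gamma(h) >= nugget for every h > 0 (a discontinuity at the origin)."""
    if h == 0:
        return 0.0
    return nugget + psill * (1.0 - math.exp(-h / rng))

print(exp_variogram(0.0))   # 0.0
print(exp_variogram(1e-9))  # ~0.2: two arbitrarily close points still disagree
```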
|
41,917
|
What is the nugget effect?
|
The nugget effect is like random noise: it is the small-scale variability that you can't capture with your large-scale variability model.
The nugget effect is made up of measurement error and microscale variation, two types of variance.
This might be useful: Statistics or geostatistics? Sampling error or nugget effect?
Thanks to @whuber for his comment
|
41,918
|
What does it mean to sample a data point from or according to a distribution? [duplicate]
|
1) Whether humans can sample without bias is a question entirely different from whether random sampling can be done. Yes, random sampling can be done, and though some will argue that random seeds are partially deterministic, for all intents and purposes a computer generated random sample is random enough.
2) What does 'randomly sample from x distribution' mean? In short, it means collecting a set of N points that were generated by some theoretical distribution. For the normal distribution, assuming a given mu and sigma, you select N points that conform to those parameters. I realize this is an unsatisfactory answer, but consider that many algorithms begin by sampling from the uniform distribution, which is fairly straightforward to comprehend. U(0,5) will produce a number between 0 and 5, each with equal probability. With a few steps, you can draw these random numbers and ensure they conform to the gaussian as detailed in:
sampling from normal
It is very common to sample from particular distributions, for demonstration and simulation purposes. Most popular statistics packages have these functions built-in: in R, you have rnorm, rbinom, rpois, runif, etc. If you were to sample a sizeable dataset using these functions, then try to fit it with any theoretical distribution, you'd find that the best fitting would match the one that generated it.
3) I do not think one can say that a datapoint from an experiment is always discrete. A discrete variable can only take on particular values, and a mean of a particular measure might take any number of significant digits. But, you are right that strictly speaking, the probability of any one particular, exact value in a probability density is vanishingly small; which is why one often uses the CDF rather than the PDF (the probability that a value is less than or equal to some value is easier to calculate).
4) Your last question and challenge is ill-formed. One cannot hope to prove or even guess the generating distribution from a single point. You'd need some non-trivial sample size to do that (in which case, you could very well do this).
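As a concrete instance of point (2), a few lines of Python turn U(0,1) draws into Gaussian draws via the Box-Muller transform, one standard construction:

```python
import math
import random

random.seed(42)

def box_muller():
    """One N(0,1) draw from two independent U(0,1) draws."""
    u1 = 1.0 - random.random()  # in (0, 1], so log(u1) is finite
    u2 = random.random()
    return math.sqrt(-2.0 * math.log(u1)) * math.cos(2.0 * math.pi * u2)

sample = [box_muller() for _ in range(100_000)]
m = sum(sample) / len(sample)
v = sum((x - m) ** 2 for x in sample) / len(sample)
print(round(m, 3), round(v, 3))  # mean ~0, variance ~1
```

Fitting candidate distributions to such a sample would, as the answer says, pick out the Gaussian as the best match.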
|
41,919
|
What does it mean to sample a data point from or according to a distribution? [duplicate]
|
This could really be answered by taking a course in Monte Carlo simulation, which goes into this topic in depth. Here's a good set of slides that cover common approaches to generating samples from specific distributions. There are also more complicated approaches like Markov Chain Monte Carlo and copula methods for very complicated distributions with dependence and other behavior.
This is a well studied area that should not be hard to remedy, especially if you've been able to handle measure theory. This is simple stuff in comparison.
Now, mathematically, why do these methods work? At their heart they depend on the ability to generate a sequence of real-valued numbers $X_i$ in the interval $[0,1]$ such that
$$\lim_{n \to \infty} \frac{|\{i: X_i \in (a,b), i\leq n\}|}{n} = b-a$$
In addition, we demand that $|\operatorname{Cov}(X_i,X_j)| \approx 0,\ \forall i\neq j$.
There are a ton more mathematically stringent tests that such sequences must pass to demonstrate statistical randomness (see here and here).
"Random numbers" are actually pseudo-random numbers -- they are deterministically created but are statistically indistinguishable from iid observations from a uniform distribution (up to some enormous lag).
Using the pseudo-random numbers, we can generate sequences of numbers in a range like $[0,1]$ and then use various transforms and algorithms to turn these into all the other types of random variables and processes we use.
Why $[0,1]$? Well, as someone who knows measure theory, you know that $P(\Omega)=1, P(\emptyset)=0, 1\geq P(X|X\subset \Omega) \geq 0$, so drawing random samples on this range allows you to sample the probability measure's range.
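To illustrate, a minimal linear congruential generator (with the well-known Numerical Recipes constants) is fully deterministic, yet it satisfies the equidistribution property above to good approximation:

```python
def lcg(seed, m=2**32, a=1664525, c=1013904223):
    """Deterministic pseudo-random stream of values in [0, 1)."""
    x = seed
    while True:
        x = (a * x + c) % m
        yield x / m

g = lcg(seed=1)
n = 100_000
# Fraction of draws in (a, b) = (0.25, 0.75) should approach b - a = 0.5.
hits = sum(0.25 < next(g) < 0.75 for _ in range(n))
print(hits / n)  # ~0.5
```

(Real generators such as the Mersenne Twister pass far more stringent batteries of tests than this toy does.)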
|
41,920
|
Is applying the CLT to the sum of random variables a good approximation?
|
Going the other way around, if the Z-score were truly a standard normal distribution, then your subsequent approximations would be exact. The degree of error should roughly scale with some measure of distance between the Z-score distribution and the standard Gaussian.
We can use K-S distance as our metric in the space of CDFs. Let's say that we will collect $N$ samples and our (unknown) true sample CDF of the Z-score of these $N$ samples will have a K-S distance of $\epsilon_N$: $\max_z |F_{Z_n}(z) - F_{\Phi}(z)| = \epsilon_N$.
Now, going from $F_{Z_n}(z)$ to $F_{S_n}(s)$, where $S_n = \sum_1^N X_i$, involves only a shift of scale and location (i.e., a linear transformation $Lz$ of the argument of $F_{Z_n}(z)$). The same transformation takes $F_{\Phi}(z)$ to the CDF of a sum of normal random variables with the same mean and variance as your actual population. In fact, you will be making the exact same transformation to both variables, so we will simply be mapping $F_{Z_n}(z) \mapsto F_{Z_n}(L^{-1}z)$ and similarly for $F_{\Phi}$ -- because we are subjecting each distribution's argument to the same transformation, we will preserve vertical distances.
So, the KS distance for the $F_{S_n}$ will converge to zero at the same rate as for $F_{Z_n}$. However, $F_{S_n}$ doesn't have a limiting distribution (it's basically $F(x)=0.5$, which is not a distribution) whereas $F_{Z_n}$ converges to an actual distribution function.
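The distance-preservation step can be checked numerically. This Python sketch (stdlib only; the sample sizes and uniform summands are illustrative choices, not from the answer) shows that applying the same linear map to both CDFs leaves the K-S distance unchanged:

```python
import math
import random

def norm_cdf(x, mu=0.0, sigma=1.0):
    # Normal CDF via the error function
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def ks_distance(samples, cdf):
    """Max vertical distance between the empirical CDF and a reference CDF."""
    xs = sorted(samples)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs):
        c = cdf(x)
        d = max(d, abs((i + 1) / n - c), abs(i / n - c))
    return d

rng = random.Random(1)
# Z-scores of sums of 30 Uniform(0,1) draws: mean 15, sd sqrt(30/12).
mu, sigma = 15.0, math.sqrt(30.0 / 12.0)
z = [(sum(rng.random() for _ in range(30)) - mu) / sigma for _ in range(2000)]

d_z = ks_distance(z, norm_cdf)
# Apply the same linear map L(x) = sigma*x + mu to both distributions:
s = [sigma * v + mu for v in z]
d_s = ks_distance(s, lambda x: norm_cdf(x, mu, sigma))
print(d_z, d_s)  # identical: linear maps preserve vertical CDF distances
```

Because the empirical CDF and the reference CDF are shifted and scaled by the same map, every vertical distance between them is preserved exactly.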
|
Is applying the CLT to the sum of random variables a good approximation?
|
Going the other way around, if the Z-score were truly a standard normal distribution, then your subsequent approximations would be exact. The degree of error should roughly scale with some measure of
|
Is applying the CLT to the sum of random variables a good approximation?
Going the other way around, if the Z-score were truly a standard normal distribution, then your subsequent approximations would be exact. The degree of error should roughly scale with some measure of distance between the Z-score distribution and the standard Gaussian.
We can use K-S distance as our metric in the space of CDFs. Let's say that we will collect $N$ samples and our (unknown) true sample CDF of the Z-score of these $N$ samples will have a K-S distance of $\epsilon_N$: $\max_z |F_{Z_n}(z) - F_{\Phi}(z)| = \epsilon_N$.
Now, going from the $F_{Z_n}(z)$ to $F_{S_n}(s)$ where $S_n = \sum_1^N X_i$ involves only a shift of scale and location (i.e., a linear transformation $Lz$ of the argument of $F_{Z_n}(z)$). The same applies to get $F_{\Phi}(z)$ to a sum of normal random variables with the same mean and variance as your actual population. In fact, you will be making the exact same transformation to both variables, so we will simply be mapping $F_{Z_n}(z) \mapsto F_{Z_n}(L^{-1}z)$ and similarly for $F_{\Phi}$ -- because we are subjecting each distribution's argument to the same transformation, we will preserve vertical distances.
So, the KS distance for the $F_{S_n}$ will converge to zero at the same rate as for $F_{Z_n}$. However, $F_{S_n}$ doesn't have a limiting distribution (it's basically $F(x)=0.5$, which is not a distribution) whereas $F_{Z_n}$ converges to an actual distribution function.
|
Is applying the CLT to the sum of random variables a good approximation?
Going the other way around, if the Z-score were truly a standard normal distribution, then your subsequent approximations would be exact. The degree of error should roughly scale with some measure of
|
41,921
|
Is kernel ridge regression the same as kernel regression?
|
Yeah, you are right. You practically replace the square matrix $X^TX$ with a Kernel $K$ when you estimate your coefficients.
|
Is kernel ridge regression the same as kernel regression?
|
Yeah, you are right. You practically replace the square matrix $X^TX$ with a Kernel $K$ when you estimate your coefficients.
|
Is kernel ridge regression the same as kernel regression?
Yeah, you are right. You practically replace the square matrix $X^TX$ with a Kernel $K$ when you estimate your coefficients.
|
Is kernel ridge regression the same as kernel regression?
Yeah, you are right. You practically replace the square matrix $X^TX$ with a Kernel $K$ when you estimate your coefficients.
|
41,922
|
Is kernel ridge regression the same as kernel regression?
|
No, they are not the same algorithm, though you might be able to find pairs of kernels where they give the same answer (model) in terms of predictions.
Kernel Regression (simplest form) is a density estimator with mean prediction:
$$
\mu_{\text{kernel-regression}} = \sum_i w_i y_i, \quad w_i = \frac{K(X^*, X_i)}{\sum_j K(X^*, X_j)}
$$
while Kernel Ridge Regression is a regression (least-squares type inversion) with mean prediction:
$$
\mu_{\text{kernel-ridge-regression}} = \sum_i w_i y_i, \quad w_i = \sum_j K(X^*, X_j) \left[(K(X,X) + \lambda I)^{-1}\right]_{ji}
$$
And as noted before, the "kernel" part of kernel ridge regression is that you practically replace the square matrix $X^TX$ with a Kernel $K$ when you estimate your coefficients.
I have no idea why "kernel regression" means "kernel density estimation" ... if someone knows the history, or knows of complete results mapping between the two methods, please update this answer.
This is related:
Is Kernel Regression similar to Gaussian Process Regression?
You can see discussions about this in Kevin Murphy's (newer) book as well as the older Rasmussen book.
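A toy Python sketch of the two predictors side by side (RBF kernel, made-up data points, ridge parameter $\lambda$ chosen arbitrarily), showing they generally give different predictions at the same test point:

```python
import math

def rbf(a, b, ls=1.0):
    # Gaussian (RBF) kernel on scalars
    return math.exp(-((a - b) ** 2) / (2.0 * ls ** 2))

def solve(A, b):
    """Solve A x = b by Gauss-Jordan elimination (small systems only)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [mr - f * mc for mr, mc in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

X = [0.0, 1.0, 2.0, 3.0]
y = [0.0, 1.0, 0.5, 1.5]
x_star = 1.5

# Nadaraya-Watson kernel regression: normalized kernel weights.
k = [rbf(x_star, xi) for xi in X]
nw = sum(ki * yi for ki, yi in zip(k, y)) / sum(k)

# Kernel ridge regression: alpha = (K + lam*I)^{-1} y, prediction = k . alpha.
lam = 0.1
K = [[rbf(xi, xj) + (lam if i == j else 0.0) for j, xj in enumerate(X)]
     for i, xi in enumerate(X)]
alpha = solve(K, y)
krr = sum(ki * ai for ki, ai in zip(k, alpha))
print(nw, krr)  # two different predictions at the same test point
```

Nadaraya-Watson is always a convex combination of the $y_i$, while kernel ridge weights come from a regularized matrix inversion and need not sum to one.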
|
Is kernel ridge regression the same as kernel regression?
|
No, they are not the same algorithm, though you might be able to find pairs of kernels where they give the same answer (model) in terms of predictions.
Kernel Regression (simplest form) is a density
|
Is kernel ridge regression the same as kernel regression?
No, they are not the same algorithm, though you might be able to find pairs of kernels where they give the same answer (model) in terms of predictions.
Kernel Regression (simplest form) is a density estimator with mean prediction:
$$
\mu_{\text{kernel-regression}} = \sum_i w_i y_i, \quad w_i = \frac{K(X^*, X_i)}{\sum_j K(X^*, X_j)}
$$
while Kernel Ridge Regression is a regression (least-squares type inversion) with mean prediction:
$$
\mu_{\text{kernel-ridge-regression}} = \sum_i w_i y_i, \quad w_i = \sum_j K(X^*, X_j) \left[(K(X,X) + \lambda I)^{-1}\right]_{ji}
$$
And as noted before, the "kernel" part of kernel ridge regression is that you practically replace the square matrix $X^TX$ with a Kernel $K$ when you estimate your coefficients.
I have no idea why "kernel regression" means "kernel density estimation" ... if someone knows the history, or knows of complete results mapping between the two methods, please update this answer.
This is related:
Is Kernel Regression similar to Gaussian Process Regression?
You can see discussions about this in Kevin Murphy's (newer) book as well as the older Rasmussen book.
|
Is kernel ridge regression the same as kernel regression?
No, they are not the same algorithm, though you might be able to find pairs of kernels where they give the same answer (model) in terms of predictions.
Kernel Regression (simplest form) is a density
|
41,923
|
Combining False Discovery Rates (FDR)?
|
You are not testing hypotheses, but fishing for interesting findings. There is nothing wrong with that. Do not do things that are often demanded by those who cannot tell the difference between a preliminary scientific investigation and bad statistics. See my commentary on the ASA's statement on P-values for more detail. (It's in the supplementary material and takes a ridiculous amount of clicking to reach, so here is a direct link to a preprint of it I found online.)
'Correction' of P-value for multiplicity burns the power to detect real effects. Never do it unless you have no alternative, where no alternative comes from an inability to do any sort of follow-up and the absence of any corroboration from other data or theory. Do not dichotomise the results into 'significant' and 'not significant', but show all of the observed effect sizes. (I suspect this response will be voted down, but it is not wrong.)
Treat this as a preliminary investigation. Do not adjust the P-values, but rank the statistical 'interestingness' of the features in order of smallness of the P-values. Then follow up with a study designed to investigate just those features that are statistically and/or scientifically interesting. In this case I would say that even if no follow-up study is possible you should publish the raw P-values so that other investigators can use your data as corroboration for their findings.
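A minimal Python sketch of the suggested workflow (the feature names and p-values are made up): rank features by raw p-value rather than dichotomising at a threshold or adjusting for multiplicity:

```python
# Hypothetical raw p-values from a preliminary screening analysis.
results = {"feat_a": 0.003, "feat_b": 0.41, "feat_c": 0.018, "feat_d": 0.07}

# Rank by statistical 'interestingness' (smallest p first); report all of them,
# together with effect sizes, instead of declaring winners and losers.
ranked = sorted(results.items(), key=lambda kv: kv[1])
for name, p in ranked:
    print(f"{name}: p = {p}")
```

The ranked list then guides which features to carry into a designed follow-up study.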
|
Combining False Discovery Rates (FDR)?
|
You are not testing hypotheses, but fishing for interesting findings. There is nothing wrong with that. Do not do things that are often demanded by those who cannot tell the difference between a preli
|
Combining False Discovery Rates (FDR)?
You are not testing hypotheses, but fishing for interesting findings. There is nothing wrong with that. Do not do things that are often demanded by those who cannot tell the difference between a preliminary scientific investigation and bad statistics. See my commentary on the ASA's statement on P-values for more detail. (It's in the supplementary material and takes a ridiculous amount of clicking to reach, so here is a direct link to a preprint of it I found online.)
'Correction' of P-value for multiplicity burns the power to detect real effects. Never do it unless you have no alternative, where no alternative comes from an inability to do any sort of follow-up and the absence of any corroboration from other data or theory. Do not dichotomise the results into 'significant' and 'not significant', but show all of the observed effect sizes. (I suspect this response will be voted down, but it is not wrong.)
Treat this as a preliminary investigation. Do not adjust the P-values, but rank the statistical 'interestingness' of the features in order of smallness of the P-values. Then follow up with a study designed to investigate just those features that are statistically and/or scientifically interesting. In this case I would say that even if no follow-up study is possible you should publish the raw P-values so that other investigators can use your data as corroboration for their findings.
|
Combining False Discovery Rates (FDR)?
You are not testing hypotheses, but fishing for interesting findings. There is nothing wrong with that. Do not do things that are often demanded by those who cannot tell the difference between a preli
|
41,924
|
What are the downsides of bayesian neural networks?
|
There are still many disadvantages of BNN compared with NN as listed below:
The computational cost is heavier. Here I am not just referring to the cost of training, i.e. getting the posterior distribution of all parameters. That part is in fact OK if you use variational inference with a simple distribution family for the BNN parameters. After your model is deployed and you want to make a prediction, you will need to sample N sets of parameters from their posterior distribution in order to get the distribution of the output, and this is N times the computational cost of just using a NN.
The tooling for BNNs is not yet popularized and is not as automated as the tooling for NNs.
You need to make some assumptions about your prior, which is relatively difficult for most users.
This is, I think, the most important reason why BNN has not been adopted universally in place of NN: the uncertainty we get is not as useful as it seems at first glance. Let's take an easy example: say you have two types of customers. Type A will have equal probability of giving you \$40 or \$60, and Type B will have equal probability of giving you \$30 or \$70. They have equal expectations, but larger uncertainty for the Type B customer. Assume your BNN distinguishes one distribution from the other perfectly. However, the uncertainty here does not matter if you have one million customers of each type, because then what matters is not the uncertainty about individual customers but the uncertainty about the average, which goes to zero as the number of customers grows, by the law of large numbers. Therefore, you really do not need uncertainty in your model most of the time.
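The customer example is easy to check by simulation; a Python sketch (the sample size is an arbitrary choice):

```python
import random

rng = random.Random(42)
n = 1_000_000  # one million customers of each type

# Type A pays $40 or $60; Type B pays $30 or $70: equal means, unequal spread.
type_a = [rng.choice((40, 60)) for _ in range(n)]
type_b = [rng.choice((30, 70)) for _ in range(n)]

mean_a = sum(type_a) / n
mean_b = sum(type_b) / n
# Per-customer uncertainty differs, but both averages are pinned near $50
# by the law of large numbers.
print(mean_a, mean_b)
```

The standard error of each average shrinks like $1/\sqrt{n}$, so the per-customer spread becomes irrelevant at this scale.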
|
What are the downsides of bayesian neural networks?
|
There are still many disadvantages of BNN compared with NN as listed below:
The computational cost is heavier. Here I am not just referring to the cost of training, i.e. getting the posterior dist
|
What are the downsides of bayesian neural networks?
There are still many disadvantages of BNN compared with NN as listed below:
The computational cost is heavier. Here I am not just referring to the cost of training, i.e. getting the posterior distribution of all parameters. That part is in fact OK if you use variational inference with a simple distribution family for the BNN parameters. After your model is deployed and you want to make a prediction, you will need to sample N sets of parameters from their posterior distribution in order to get the distribution of the output, and this is N times the computational cost of just using a NN.
The tooling for BNNs is not yet popularized and is not as automated as the tooling for NNs.
You need to make some assumptions about your prior, which is relatively difficult for most users.
This is, I think, the most important reason why BNN has not been adopted universally in place of NN: the uncertainty we get is not as useful as it seems at first glance. Let's take an easy example: say you have two types of customers. Type A will have equal probability of giving you \$40 or \$60, and Type B will have equal probability of giving you \$30 or \$70. They have equal expectations, but larger uncertainty for the Type B customer. Assume your BNN distinguishes one distribution from the other perfectly. However, the uncertainty here does not matter if you have one million customers of each type, because then what matters is not the uncertainty about individual customers but the uncertainty about the average, which goes to zero as the number of customers grows, by the law of large numbers. Therefore, you really do not need uncertainty in your model most of the time.
|
What are the downsides of bayesian neural networks?
There are still many disadvantages of BNN compared with NN as listed below:
The computational cost is heavier. Here I am not just referring to the cost of training, i.e. getting the posterior dist
|
41,925
|
What are the downsides of bayesian neural networks?
|
BNN is not capable of recovering the posterior distribution. It returns an expectation and a variance (a confidence interval under the assumption of a normal distribution). It is overkill to build a stochastic model for modelling an expectation and a variance. You can find backup experiments on ezcodesample.com proving that it is completely fake.
|
What are the downsides of bayesian neural networks?
|
BNN is not capable of recovering the posterior distribution. It returns an expectation and a variance (a confidence interval under the assumption of a normal distribution). It is overkill to build a stochastic model f
|
What are the downsides of bayesian neural networks?
BNN is not capable of recovering the posterior distribution. It returns an expectation and a variance (a confidence interval under the assumption of a normal distribution). It is overkill to build a stochastic model for modelling an expectation and a variance. You can find backup experiments on ezcodesample.com proving that it is completely fake.
|
What are the downsides of bayesian neural networks?
BNN is not capable of recovering the posterior distribution. It returns an expectation and a variance (a confidence interval under the assumption of a normal distribution). It is overkill to build a stochastic model f
|
41,926
|
Negative Binomial distribution
|
Use a Chi-squared test as explained at https://stats.stackexchange.com/a/17148/919. The R code below implements such a test, with defaults appropriate for calving data.
The Chi-squared test is appropriate for such discrete datasets, as explained at that link. To see that it might work to decide whether a particular dataset is consistent with a Negative Binomial distribution, here are the results of simulating a thousand datasets of 180 independent values.
The first two datasets are shown in the scatterplot at left (the pairings are arbitrary). A histogram of the Chi-squared p-values is shown next. Its small deviations from a uniform distribution (shown by the horizontal gray line) are attributable to chance, strongly indicating this test provides the correct p-values when the null hypothesis (of a Negative Binomial distribution) is true.
The power of this test is its ability to discriminate Negative Binomial from other distributions. For typical calving data, the Negative Binomial is close to Normal (allowing for rounding to the nearest day). So are other distributions, such as Poisson distributions with appropriate parameters. Thus, we shouldn't expect much of this test (or any such test). The distributions of p-values from simulated data with Poisson and Normal distributions appear in the right two histograms. Because there is a tendency for p-values to be smaller with these alternatives, the test has some power to detect the difference. But because the p-values aren't very small very often, the power is low: with a dataset of 180, it will be difficult to distinguish Negative Binomial from Poisson from Normal data. This suggests that the question whether the data are consistent with a Negative Binomial distribution might have little inherent meaning or usefulness.
The parameters for this example come from Werth, Azzam and Kinder, Calving intervals in beef cows at 2, 3, and 4 years of age when breeding is not restricted after calving. J. Animal Sci. 1996, 74:593-596. Because this paper does not provide adequate descriptive statistics, I estimated the mean and variance (and set the breaks for the chi-squared test) from this figure:
This is the R code to implement the calculations and plots shown here. It's not bulletproof: before applying any of these functions to other datasets, it would be prudent to test them and perhaps include code to verify the maximum likelihood estimates are correct.
library(MASS) # rnegbin
#
# Specify parameters to generate data.
#
mu <- 360 # Mean days in interval
v <- 30^2    # Variance of days: must exceed mu
n <- 180     # Sample size
n.sim <- 1e3 # Simulation size
#
# Functions to fit a negative binomial to data.
#
pnegbin <- function(k, mu, theta) {
v <- mu + mu^2/theta # Variance
p <- 1 - mu / v # "Success probability"
r <- mu * (1-p) / p # "Number of failures until the experiment is stopped"
pbeta(p, k+1, r, lower.tail=FALSE)
}
# #
# # Test `pnegbin` by comparing it to randomly generated data.
# #
# z <- rnegbin(1e3, mu, theta)
# plot(ecdf(z))
# curve(pnegbin(x, mu, theta), add=TRUE, col="Red", lwd=2)
#
# Maximum likelihood fitting of data based on counts in predefined bins.
# Returns the fit and chi-squared statistics.
#
negbin.fit <- function(x, breaks) {
if (missing(breaks))
breaks <- c(-1, seq(-40, 30, by=10) + 365, Inf)
observed <- table(cut(x, breaks))
n <- length(x)
counts.expected <- function(n, mu, theta)
n * diff(pnegbin(breaks, mu, theta))
log.lik.m <- function(parameters) {
mu <- parameters[1]
theta <- parameters[2]
-sum(observed * log(diff(pnegbin(breaks, mu, theta))))
}
v <- var(x)
m <- mean(x)
if (v > m) theta <- m^2 / (v - m) else theta <- 1e6 * m^2
parameters <- c(m, theta)
fit <- optim(parameters, log.lik.m)
expected <- counts.expected(n, fit$par[1], fit$par[2])
chi.square <- sum(res <- (observed - expected)^2 / expected)
df <- length(observed) - length(parameters) - 1
p.value <- pchisq(chi.square, df, lower.tail=FALSE)
return(list(fit=fit, chi.square=chi.square, df=df, p.value=p.value,
data=x, breaks=breaks, observed=observed, expected=expected,
residuals=res))
}
#
# Test on randomly generated data.
#
theta <- mu^2 / (v - mu) # Size parameter implied by mu and v (define before use)
# set.seed(17)
sim <- replicate(n.sim, negbin.fit(rnegbin(n, mu, theta))$p.value)
#
# Generate data for illustration.
#
theta <- mu^2 / (v - mu)
x <- rnegbin(n, mu, theta)
y <- rnegbin(n, mu-4.3, theta)
#
# Display data and simulation.
#
par(mfrow=c(1,4))
plot(x-365, y-365, pch=15, col="#00000040",
xlab="First calving interval", ylab="Second calving interval",
main="Simulated Data")
abline(h=0)
abline(v=0)
hist(sim, freq=FALSE, xlab="p-values", ylab="Frequency",
main="Histogram of Simulated P-values",
sub="Negative Binomial Data")
abline(h=1, col="Gray", lty=3)
#
# Simulate non-Negative Binomial data for comparison.
#
sim.2 <- replicate(n.sim, negbin.fit(rpois(n, mu))$p.value)
hist(sim.2, freq=FALSE, xlab="p-values", ylab="Frequency",
main="Histogram of Simulated P-values",
sub="Poisson Data")
abline(h=1, col="Gray", lty=3)
sim.3 <- replicate(n.sim, negbin.fit(floor(rnorm(n, mu, sqrt(mu))))$p.value)
hist(sim.3, freq=FALSE, xlab="p-values", ylab="Frequency",
main="Histogram of Simulated P-values",
sub="Normal Data")
abline(h=1, col="Gray", lty=3)
par(mfrow=c(1,1))
|
Negative Binomial distribution
|
Use a Chi-squared test as explained at https://stats.stackexchange.com/a/17148/919. The R code below implements such a test, with defaults appropriate for calving data.
The Chi-squared test is approp
|
Negative Binomial distribution
Use a Chi-squared test as explained at https://stats.stackexchange.com/a/17148/919. The R code below implements such a test, with defaults appropriate for calving data.
The Chi-squared test is appropriate for such discrete datasets, as explained at that link. To see that it might work to decide whether a particular dataset is consistent with a Negative Binomial distribution, here are the results of simulating a thousand datasets of 180 independent values.
The first two datasets are shown in the scatterplot at left (the pairings are arbitrary). A histogram of the Chi-squared p-values is shown next. Its small deviations from a uniform distribution (shown by the horizontal gray line) are attributable to chance, strongly indicating this test provides the correct p-values when the null hypothesis (of a Negative Binomial distribution) is true.
The power of this test is its ability to discriminate Negative Binomial from other distributions. For typical calving data, the Negative Binomial is close to Normal (allowing for rounding to the nearest day). So are other distributions, such as Poisson distributions with appropriate parameters. Thus, we shouldn't expect much of this test (or any such test). The distributions of p-values from simulated data with Poisson and Normal distributions appear in the right two histograms. Because there is a tendency for p-values to be smaller with these alternatives, the test has some power to detect the difference. But because the p-values aren't very small very often, the power is low: with a dataset of 180, it will be difficult to distinguish Negative Binomial from Poisson from Normal data. This suggests that the question whether the data are consistent with a Negative Binomial distribution might have little inherent meaning or usefulness.
The parameters for this example come from Werth, Azzam and Kinder, Calving intervals in beef cows at 2, 3, and 4 years of age when breeding is not restricted after calving. J. Animal Sci. 1996, 74:593-596. Because this paper does not provide adequate descriptive statistics, I estimated the mean and variance (and set the breaks for the chi-squared test) from this figure:
This is the R code to implement the calculations and plots shown here. It's not bulletproof: before applying any of these functions to other datasets, it would be prudent to test them and perhaps include code to verify the maximum likelihood estimates are correct.
library(MASS) # rnegbin
#
# Specify parameters to generate data.
#
mu <- 360 # Mean days in interval
v <- 30^2    # Variance of days: must exceed mu
n <- 180     # Sample size
n.sim <- 1e3 # Simulation size
#
# Functions to fit a negative binomial to data.
#
pnegbin <- function(k, mu, theta) {
v <- mu + mu^2/theta # Variance
p <- 1 - mu / v # "Success probability"
r <- mu * (1-p) / p # "Number of failures until the experiment is stopped"
pbeta(p, k+1, r, lower.tail=FALSE)
}
# #
# # Test `pnegbin` by comparing it to randomly generated data.
# #
# z <- rnegbin(1e3, mu, theta)
# plot(ecdf(z))
# curve(pnegbin(x, mu, theta), add=TRUE, col="Red", lwd=2)
#
# Maximum likelihood fitting of data based on counts in predefined bins.
# Returns the fit and chi-squared statistics.
#
negbin.fit <- function(x, breaks) {
if (missing(breaks))
breaks <- c(-1, seq(-40, 30, by=10) + 365, Inf)
observed <- table(cut(x, breaks))
n <- length(x)
counts.expected <- function(n, mu, theta)
n * diff(pnegbin(breaks, mu, theta))
log.lik.m <- function(parameters) {
mu <- parameters[1]
theta <- parameters[2]
-sum(observed * log(diff(pnegbin(breaks, mu, theta))))
}
v <- var(x)
m <- mean(x)
if (v > m) theta <- m^2 / (v - m) else theta <- 1e6 * m^2
parameters <- c(m, theta)
fit <- optim(parameters, log.lik.m)
expected <- counts.expected(n, fit$par[1], fit$par[2])
chi.square <- sum(res <- (observed - expected)^2 / expected)
df <- length(observed) - length(parameters) - 1
p.value <- pchisq(chi.square, df, lower.tail=FALSE)
return(list(fit=fit, chi.square=chi.square, df=df, p.value=p.value,
data=x, breaks=breaks, observed=observed, expected=expected,
residuals=res))
}
#
# Test on randomly generated data.
#
theta <- mu^2 / (v - mu) # Size parameter implied by mu and v (define before use)
# set.seed(17)
sim <- replicate(n.sim, negbin.fit(rnegbin(n, mu, theta))$p.value)
#
# Generate data for illustration.
#
theta <- mu^2 / (v - mu)
x <- rnegbin(n, mu, theta)
y <- rnegbin(n, mu-4.3, theta)
#
# Display data and simulation.
#
par(mfrow=c(1,4))
plot(x-365, y-365, pch=15, col="#00000040",
xlab="First calving interval", ylab="Second calving interval",
main="Simulated Data")
abline(h=0)
abline(v=0)
hist(sim, freq=FALSE, xlab="p-values", ylab="Frequency",
main="Histogram of Simulated P-values",
sub="Negative Binomial Data")
abline(h=1, col="Gray", lty=3)
#
# Simulate non-Negative Binomial data for comparison.
#
sim.2 <- replicate(n.sim, negbin.fit(rpois(n, mu))$p.value)
hist(sim.2, freq=FALSE, xlab="p-values", ylab="Frequency",
main="Histogram of Simulated P-values",
sub="Poisson Data")
abline(h=1, col="Gray", lty=3)
sim.3 <- replicate(n.sim, negbin.fit(floor(rnorm(n, mu, sqrt(mu))))$p.value)
hist(sim.3, freq=FALSE, xlab="p-values", ylab="Frequency",
main="Histogram of Simulated P-values",
sub="Normal Data")
abline(h=1, col="Gray", lty=3)
par(mfrow=c(1,1))
|
Negative Binomial distribution
Use a Chi-squared test as explained at https://stats.stackexchange.com/a/17148/919. The R code below implements such a test, with defaults appropriate for calving data.
The Chi-squared test is approp
|
41,927
|
Negative Binomial distribution
|
I don't think that the negative binomial is a reasonable first choice of distribution for this variable. Yes, the number of days is a discrete number, but the true interval between the events is continuous: cows do not give birth exactly at a given hour of the day. It just happens that you measure the interval in days. Therefore, there is no reason to start with discrete distributions. The underlying distribution is certainly not discrete. If you were measuring the number of births a cow gives in 5 years, that would be an inherently discrete quantity, and would call for a discrete probability distribution.
In your case, my first guess would be to try something like an exponential distribution.
|
Negative Binomial distribution
|
I don't think that negative binomial is a reasonable first choice of the distribution for this variable. Yes, the number of days is a discrete number, but the true interval between the events is conti
|
Negative Binomial distribution
I don't think that the negative binomial is a reasonable first choice of distribution for this variable. Yes, the number of days is a discrete number, but the true interval between the events is continuous: cows do not give birth exactly at a given hour of the day. It just happens that you measure the interval in days. Therefore, there is no reason to start with discrete distributions. The underlying distribution is certainly not discrete. If you were measuring the number of births a cow gives in 5 years, that would be an inherently discrete quantity, and would call for a discrete probability distribution.
In your case, my first guess would be to try something like an exponential distribution.
|
Negative Binomial distribution
I don't think that negative binomial is a reasonable first choice of the distribution for this variable. Yes, the number of days is a discrete number, but the true interval between the events is conti
|
41,928
|
Negative Binomial distribution
|
If you are set on using a discrete distribution, then think of the days as a count variable and try a Poisson instead of a negative binomial. Then you can just use a GLM in R. (I know people will get angry and say to use a continuous distribution, but maybe by running both you can see why.)
so do something like this
modp<- glm(Y ~ X1 + X2, family = poisson, data)
then if you are really set on the negative binomial you can load the MASS package and use:
modnb <- glm.nb(Y ~ X1 + X2, data)
Some comments:
Some ways to see if the form you chose after the poisson model is correct:
run summary(modp) and look at the residual deviance. If it is much greater than the residual degrees of freedom, then you have a bad fit. You will need to do a few things:
First, check for outliers using halfnorm(residuals(modp)) (halfnorm is in the faraway package). If there aren't any surprises there, then try looking at the variance. You can do something like
plot(log(fitted(modp)), log((data$Y-fitted(modp))^2), xlab=
expression(hat(mu)),ylab=expression((y-hat(mu))^2))
abline(0,1)
Make sure the variance is proportional (moves with the mean, since Poisson only has one parameter). You will also want the points to look randomly thrown in around the abline. So if they are all over or all under, then you have a dispersion problem.
To fix the dispersion problem, you can use family = quasipoisson. so make a new modp1 <- glm(Y ~ X1 + X2, family = quasipoisson, data) and then look at your summary again: summary(modp1)
if your residual deviance is still too high, then you need to transform your variables or, more likely, you have a specification error, ie, wrong distribution.
To check for bad fit here, you can use a chi-square test
pchisq(deviance(modp1), df.residual(modp1), lower.tail=FALSE)
You will want this to be greater than a level of significance, so something like 0.05 or 0.1. If it is smaller than this, then you still have a bad fit and you can try the negative binomial code above.
|
Negative Binomial distribution
|
If you are set on using a discrete distribution then think of the days as a count variable and try a Poisson instead of a negative binomial. Then you can just use a glm in R. (I know people will get a
|
Negative Binomial distribution
If you are set on using a discrete distribution, then think of the days as a count variable and try a Poisson instead of a negative binomial. Then you can just use a GLM in R. (I know people will get angry and say to use a continuous distribution, but maybe by running both you can see why.)
so do something like this
modp<- glm(Y ~ X1 + X2, family = poisson, data)
then if you are really set on the negative binomial you can load the MASS package and use:
modnb <- glm.nb(Y ~ X1 + X2, data)
Some comments:
Some ways to see if the form you chose after the poisson model is correct:
run summary(modp) and look at the residual deviance. If it is much greater than the residual degrees of freedom, then you have a bad fit. You will need to do a few things:
First, check for outliers using halfnorm(residuals(modp)) (halfnorm is in the faraway package). If there aren't any surprises there, then try looking at the variance. You can do something like
plot(log(fitted(modp)), log((data$Y-fitted(modp))^2), xlab=
expression(hat(mu)),ylab=expression((y-hat(mu))^2))
abline(0,1)
Make sure the variance is proportional (moves with the mean, since Poisson only has one parameter). You will also want the points to look randomly thrown in around the abline. So if they are all over or all under, then you have a dispersion problem.
To fix the dispersion problem, you can use family = quasipoisson. so make a new modp1 <- glm(Y ~ X1 + X2, family = quasipoisson, data) and then look at your summary again: summary(modp1)
if your residual deviance is still too high, then you need to transform your variables or, more likely, you have a specification error, ie, wrong distribution.
To check for bad fit here, you can use a chi-square test
pchisq(deviance(modp1), df.residual(modp1), lower.tail=FALSE)
You will want this to be greater than a level of significance, so something like 0.05 or 0.1. If it is smaller than this, then you still have a bad fit and you can try the negative binomial code above.
|
Negative Binomial distribution
If you are set on using a discrete distribution then think of the days as a count variable and try a Poisson instead of a negative binomial. Then you can just use a glm in R. (I know people will get a
|
41,929
|
Why is MCMC not reliable when compared to stochastic gradient descent?
|
In practice, the reliability of both methods will depend on the situation. But, here are a few points to consider:
MCMC must sample the space in a way that's representative of the distribution. This can require a large number of points, particularly in high dimensions. In contrast, SGD doesn't care about the overall structure of the objective function; it only needs to find the minimum, and tries to do this by stepping downhill at each point. In a sense, this is a simpler problem.
However, when the objective function has multiple local minima, SGD can easily get stuck--once we enter the basin surrounding a local minimum, there may be no escape. So, SGD isn't a feasible strategy for problems where there are multiple local minima but we need the global minimum. In the context of deep learning, objective functions are highly nonconvex and contain many local minima. But, the saving grace is that many of these local minima can correspond to networks with good generalization performance. We don't need the global minimum and, in fact, we may not even want it (as it can have worse generalization performance than many of the local minima). SGD can also be trapped by saddle points (or at least take exponential time to escape from them). These are quite prevalent in cost functions for deep nets.
MCMC doesn't get stuck in the same sense because it doesn't seek extrema but, rather, aims to sample from or integrate over the entire distribution. But, MCMC can become trapped if the distribution contains widely separated modes, as the probability of transitioning to the low density region separating the modes will be small. There are various approaches for dealing with this situation.
Compared to SGD, MCMC can require more tuning to work on a particular problem. First, there's a choice of which MCMC algorithm to use, which is problem dependent. Then, there's a choice of algorithm-specific parameters, which can also be highly problem dependent. Finally, there's burn-in, discarding correlated samples, and number of iterations (perhaps not that big a deal in comparison to the previous choices). SGD is a single algorithm that tends to work in many circumstances (even if other optimization methods might be more efficient). There are only a few parameters, which are straightforward to set. Chief among them is learning rate, which can be set using cross validation. Batch size can typically be chosen a priori, and many choices work. Number of iterations can be chosen by looking for convergence or monitoring performance on the validation set (early stopping).
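To make the mode-trapping point concrete, here is a toy Python sketch (illustrative only): random-walk Metropolis on an equal mixture of two well-separated Gaussians. Started in the left mode, the chain essentially never crosses to the right one, because the proposal must pass through a region of vanishing probability:

```python
import numpy as np

rng = np.random.default_rng(0)

def log_p(x):
    # Unnormalized log-density: equal mixture of N(-10, 1) and N(+10, 1)
    return np.logaddexp(-0.5 * (x + 10) ** 2, -0.5 * (x - 10) ** 2)

x, chain = -10.0, []
for _ in range(20_000):
    prop = x + rng.normal(scale=1.0)                  # small random-walk proposal
    if np.log(rng.uniform()) < log_p(prop) - log_p(x):
        x = prop                                      # Metropolis accept
    chain.append(x)

chain = np.asarray(chain)
print(chain.max())   # stays far below 0: the +10 mode is never visited
```

Tempering, mode-jumping proposals, and similar schemes are the "various approaches" for this situation mentioned above.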
|
41,930
|
How to make a LSTM to predict next word
|
The most widely-used method for predicting the probability of a categorical output is to use softmax activation in the final layer and cross-entropy loss for training. This is because you're modeling the probability that each word in your dictionary comes next. Full softmax can be expensive, though, so sampled softmax and its variants can speed things up.
Another option is to observe that it's generally expensive to work with words, because there are so many of them: you'll have a large embedding layer, and a large projection layer -- that's a lot of data to store. Instead, working with characters can work very well -- there are relatively fewer English characters, so full softmax works fine, and LSTMs can essentially "learn" English from nothing.
To generate the next word, perhaps because you want your network to write a new sonnet, take the output of the network as a probability vector and sample from the corresponding multinomial distribution. The sampled element is used as the next input to the network, and so on. This works because the output of the softmax layer is a probability vector, so you can write your sonnet word-by-word or character-by-character.
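The generation loop above can be sketched in a few lines; here is an illustrative Python version (the vocabulary and logits are made-up stand-ins for a real network's output):

```python
import numpy as np

rng = np.random.default_rng(1)

def softmax(z):
    z = z - z.max()              # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

vocab = ["the", "cat", "sat", "mat"]         # hypothetical dictionary
logits = np.array([2.0, 0.5, 0.1, -1.0])     # pretend final-layer output

probs = softmax(logits)                      # a proper probability vector
next_idx = rng.choice(len(vocab), p=probs)   # multinomial sample
print(vocab[next_idx])                       # fed back in as the next input
```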
Using ReLU as the output layer doesn't make much sense. Your outcome is categorical, so you need a probability distribution over the categorical output. ReLUs output unbounded values, so their output is not a probability distribution.
This article is a good place to start: Andrej Karpathy, "The Unreasonable Effectiveness of Recurrent Neural Networks"
|
41,931
|
Probability of k zeros given the sum of n Poisson random variables is t?
|
Let $Y:=X_1+\ldots+X_n$. Observe that the distribution of $(X_1,\ldots,X_n)$ conditional on $Y = t$ is multinomial (exercise). This gives a conceptually easier way to think about the problem- you have $n$ boxes and throw $t$ balls in them at random. What is the probability that $k$ are empty?
Well, first of all, there are $n^t$ ways of throwing the $t$ balls in the $n$ boxes with no restrictions.
Now it gets a little more complicated, even though we are just counting stuff. There are $\binom{n}{k}$ ways of choosing the $k$ boxes that stay empty. We are then left with $t$ balls to throw into the $n-k$ remaining boxes, such that each box is not empty. You can do this by inclusion/exclusion, just like in the proof of the Stirling number https://math.stackexchange.com/questions/550256/stirling-numbers-of-second-type .
Combining these ingredients gives the desired probability,
$$
\frac{1}{n^t}\binom{n}{k}\sum_{j=0}^{n-k} (-1)^{n-k-j} \binom{n-k}{j} j^t,
$$
$t \ge n-k$, $n \ge k$.
Note that $\lambda$ does not feature in the answer.
Out of interest and as a quick exercise I coded this (borrowing a Stirling number function I found with Google) to see what the answer looks like:
##-- Stirling numbers of the 2nd kind
##-- (Abramowitz/Stegun: 24,1,4 (p. 824-5 ; Table 24.4, p.835)
##> S^{(m)}_n = number of ways of partitioning a set of $n$ elements into $m$
##> non-empty subsets
Stirling2 <- function(n, m)
{
  ## Purpose: Stirling Numbers of the 2-nd kind
  ##   S^{(m)}_n = number of ways of partitioning a set of
  ##   $n$ elements into $m$ non-empty subsets
  ## Author: Martin Maechler, Date: May 28 1992, 23:42
  ## ----------------------------------------------------------------
  ## Abramowitz/Stegun: 24,1,4 (p. 824-5 ; Table 24.4, p.835)
  ## Closed Form : p.824 "C."
  ## ----------------------------------------------------------------
  if (0 > m || m > n) stop("'m' must be in 0..n !")
  k <- 0:m
  sig <- rep(c(1, -1) * (-1)^m, length = m + 1)  # 1 for m=0; -1 1 (m=1)
  ## The following gives rounding errors for (25,5):
  ##   r <- sum(sig * k^n / (gamma(k+1) * gamma(m+1-k)))
  ga <- gamma(k + 1)
  round(sum(sig * k^n / (ga * rev(ga))))
}
pmf <- function(n, t, k) {
  if (t >= (n - k) && n >= k) {
    (choose(n, k) * factorial(n - k) * Stirling2(t, n - k)) / n^t
  } else {
    0
  }
}
lambda <- 1
n <- 10
reps <- 500000
set.seed(2017)
X <- matrix(ncol=n,nrow=reps,data=rpois(n*reps,lambda))
K <- apply(X, 1,function(x){sum(x == 0)})
hist(K)
# restrict only to those that sum to t
Y<-rowSums(X)
t<-8
G<- (Y == t)
sum(G)
k <- 5
#head(X[which(K==k),])
#head(Y[which(K==k)])
#head(X[G,])
#head(Y[G])
posskvalues <- (n-t):n
nk <- length(posskvalues)
empP <- numeric(nk)
thP <- numeric(nk)
for (i in 1:nk) {
  k <- posskvalues[i]
  empP[i] <- sum(K[G] == k) / sum(G)
  thP[i] <- pmf(n, t, k)
}
plot(posskvalues,empP,main=paste("n=",n,", t=",t))
points(posskvalues,thP,pch="x")
|
41,932
|
Probability of k zeros given the sum of n Poisson random variables is t?
|
$Y = X_1+X_2+\cdots + X_n$ is a Poisson random variable with parameter $n\lambda$. So you can write down an expression for $P(Y=t)$.
There are $\binom{n}{k}$ choices for a set of $k$ variables that are zero. Pick a specific set. Then, the sum of the complementary set of variables is a Poisson random variable $Z$ with parameter $(n-k)\lambda$, and $Z$ is independent of the chosen $k$ variables. So, you can use independence to write down expressions for
$P(Z=t)$,
$P($chosen $k$ variables are $0)$,
$P($chosen $k$ are zero AND $Z=t) = P($chosen $k$ are zero AND $Y=t)$.
Can you take it from here? There shouldn't be any binomial distributions involved...
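As a quick numerical sanity check (not a full solution; the values of n, k, t, and λ below are arbitrary): the probability that a fixed set of k variables is zero, given Y = t, reduces to ((n−k)/n)^t, with λ cancelling exactly as the independence argument suggests:

```python
import math

def p_fixed_k_zero_given_sum(n, k, t, lam):
    # P(a fixed set of k variables is 0 AND Y = t) / P(Y = t)
    p_zero = math.exp(-k * lam)                  # the chosen k are all zero
    p_z = math.exp(-(n - k) * lam) * ((n - k) * lam) ** t / math.factorial(t)
    p_y = math.exp(-n * lam) * (n * lam) ** t / math.factorial(t)
    return p_zero * p_z / p_y

# Same value for any lambda, and it equals ((n - k) / n) ** t
print(p_fixed_k_zero_given_sum(10, 3, 8, 0.5))
print(p_fixed_k_zero_given_sum(10, 3, 8, 2.0))
print((7 / 10) ** 8)
```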
|
41,933
|
Confused about the realizability assumption and equations of upper bound
|
Let's formulate the problem in a clearer way.
ASSUMPTIONS:
$h_s = \underset{h\in\mathcal{H}}{\arg\min} \, L_{S}(h) \qquad$ (definition of ERM)
$\exists h^{*} \in \mathcal{H}: L_{(\mathcal{D}, f)}(h^{*}) = 0 \qquad$ (realizability assumption)
$\mathcal{H}_B = \{ h \in \mathcal{H}: L_{(\mathcal{D}, f)}(h) > \epsilon \}$
$M = \{ S\vert_x: \exists h\in \mathcal{H}_{B}, \, L_{S}(h) = 0 \}$
PROVE: $\{S\vert_x: L_{(\mathcal{D}, f)}\left(h_{S} \right) > \epsilon \} \subseteq M$
PROOF:
To prove a set A is a subset of set B, i.e. $A \subseteq B$, we need to prove that every element of A is in B. Here, given the definition of $\mathcal{H}_{B}$, we can rewrite set M as
$M = \{ S\vert_{x}: \exists h \in \mathcal{H}, \, L_{(\mathcal{D}, f)}(h) > \epsilon, \, L_{S}(h) = 0 \}$.
Thus, for a sample to be an element of $M$, there must exist an $h$ satisfying the 3 conditions specified in set M.
For every element of the left-hand set, the ERM hypothesis $h_{S}$ already satisfies two of the conditions:
$h_{S} \in \mathcal{H}$
$L_{(\mathcal{D}, f)}\left(h_{S} \right) > \epsilon$.
Hence, proving $L_{S}(h_{S}) = 0$ would complete the proof. This can be done by employing the definition of ERM and the realizability assumption.
From the realizability assumption combined with the fact that $S$ is a sample from $\mathcal{D}$:
$L_{(\mathcal{D}, f)}(h^{*}) = 0 \implies L_{S}(h^{*}) = 0$ (with probability 1 over the draw of $S$).
From the definition of ERM:
$L_{S}(h_S) \le L_{S}(h^{*}) = 0 \quad \implies L_{S}(h_S) = 0$.
|
41,934
|
Confused about the realizability assumption and equations of upper bound
|
I actually got stuck on this for a bit too ...
Remember how $h_S$ is defined:
$$h_S \in \underset{h\in H}{\mathrm{argmin}} (L_S(h))$$
The realizability assumption will tell you there's a perfect $h^*$. (Very strong assumption). This $h^*$ will have $L_S(h^*)=0$ with probability 1 over the draw of $S$.
So when $h_S$ is defined as the argmin, its empirical loss $L_S(h_S)$ is necessarily 0 with probability 1. (This is not the same as the loss function achieving 0 on every sample).
So we are looking for a class of predictors $h_S$
The realizability assumption tells us that $L_S(h^*)=0$
if $\exists h \in H_B$ s.t. $L_S(h)=0$, this implies that $h$ attains the minimum, i.e. $h \in \mathrm{argmin}$
The rest follows: the set we want to bound is contained in $M$. Note this isn't a constructive proof - it doesn't tell you how to find an $h$; it ultimately shows that there's an upper bound on the measure of the samples that would create the conditions for a bad predictor. They don't really talk about measurability but there was a footnote suppressing those details and $H$ was finite, so I'm assuming everything works!
|
41,935
|
Is it possible for a gradient boosting regression to predict values outside of the range seen in its training data?
|
In the comment you ask for an example. You can find it here (links to most informative comment, but please read entire thread for clarity).
In the above example, the most intriguing part for me is the value of -666. It is the score on the 2nd tree (the one with variable V2). Note that the score falls outside the assumed range of $Y$, i.e. $[2000, 20000]$.
I understand this could be because -666 in the example above does not come from averaging, as in a simple regression tree / random forest, but from the fact that the entire prediction comes from aggregation (chain-like summation) of results from different sub-trees. The summation involves weights $w$ assigned to each leaf, and the weights themselves come from:
$w_j^\ast = -\frac{G_j}{H_j+\lambda}$
where $G_j$ and $H_j$ are within-leaf sums of the first and second order derivatives of the loss function; therefore they do not depend on the lower or upper boundaries of $Y$.
Please note that the linked example does not prove this is mathematically or empirically possible, because values in the example are arbitrarily selected and do not come from an actual model.
Formulas above come from xgboost website / paper
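As a toy illustration (all numbers made up, not from an actual model) of why a later tree's score can be negative, like the -666 above, even though every target lies in [2000, 20000]: trees after the first one fit residuals, not $Y$:

```python
# Targets, all inside [2000, 20000]
y = [2000.0, 12000.0, 20000.0]

# Suppose the first stage predicts a constant near the mean of y
first_stage = [11333.0, 11333.0, 11333.0]

# The next tree is fit to the residuals, which can be strongly negative,
# so its leaf scores fall outside the range of y
residuals = [t - f for t, f in zip(y, first_stage)]
print(residuals)   # [-9333.0, 667.0, 8667.0]
```

The final prediction is the initial value plus the sum of these per-tree scores, and that sum is not constrained to the training range of $Y$.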
|
41,936
|
Normally sampling from the standard simplex
|
These papers describe how to sample from a multivariate normal truncated on a (p - 1) simplex ([http://ieeexplore.ieee.org/abstract/document/6884588/], [dobigeon.perso.enseeiht.fr/papers/Dobigeon_TechReport_2007b.pdf]). Sampling is done via Gibbs sampling or HMC. In short, it uses ideas from the (conditional) multivariate normal distribution. Assume that you want to sample a vector $\alpha_{(p\times 1)}$ which is constrained to a multivariate normal truncated on a $p-1$ simplex, i.e., $\alpha\sim N(\mu,\Sigma)I(\alpha\in\mathbb{S}^{p-1})$. You can sample the $j^{th}$ component ($\alpha_j$) conditional on $\alpha_{-j}$ (i.e., the remaining components of $\alpha$) from a truncated normal with conditional mean $\mu_{j|-j}$ and conditional variance $\Sigma_{j|-j}$, and set the last ($p^{th}$) component to $1 - \sum_{j=1}^{p-1}\alpha_j$. The papers I mentioned describe how to calculate these conditional moments. Note that there are only $p - 1$ free pieces of information.
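The conditional moments used in each Gibbs step are just the standard multivariate normal conditioning formulas. A small illustrative Python sketch (the $\mu$, $\Sigma$, and conditioning values below are arbitrary):

```python
import numpy as np

def conditional_moments(mu, Sigma, j, x_rest):
    """Mean and variance of component j of N(mu, Sigma) given the rest."""
    idx = [i for i in range(len(mu)) if i != j]
    S_jr = Sigma[j, idx]                           # cross-covariance row
    S_rr_inv = np.linalg.inv(Sigma[np.ix_(idx, idx)])
    m = mu[j] + S_jr @ S_rr_inv @ (x_rest - mu[idx])
    v = Sigma[j, j] - S_jr @ S_rr_inv @ S_jr
    return m, v

mu = np.array([0.3, 0.4, 0.3])
Sigma = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.05, 0.01],
                  [0.00, 0.01, 0.03]])
m, v = conditional_moments(mu, Sigma, 0, np.array([0.5, 0.2]))
print(m, v)   # conditioning shifts the mean and shrinks the variance
```

A full sampler would draw $\alpha_j$ from a normal with these moments, truncated to keep the partial sums feasible, as the papers describe.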
|
41,937
|
Normally sampling from the standard simplex
|
It sounds like you want the logit-normal distribution. This distribution shows up a lot in Compositional Data Analysis (CDA). CDA is often used in geology to measure the composition of minerals within soil or rock samples. The logit-normal takes a logit transform of your random variable, and this logit-transformed random variable is normally distributed. Formally,
$$Y=\log\left(\frac{X}{1-X}\right)$$
where $X$ is logit-normal and $Y$ is normal. Multivariate extensions exist and are the more commonly used forms of the density.
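For the multivariate case, a minimal Python sketch of the additive logistic transform (the $\mu$ and $\Sigma$ below are arbitrary): draw a $(p-1)$-dimensional normal and map it onto the simplex:

```python
import numpy as np

rng = np.random.default_rng(7)

def r_logit_normal(mu, Sigma, size):
    """Logit-normal draws on the standard simplex in R^p (p = len(mu) + 1)."""
    y = rng.multivariate_normal(mu, Sigma, size)   # (size, p-1) normal draws
    e = np.exp(y)
    denom = 1.0 + e.sum(axis=1, keepdims=True)
    return np.hstack([e / denom, 1.0 / denom])     # positive rows summing to 1

x = r_logit_normal([0.0, 0.5], [[0.20, 0.05], [0.05, 0.10]], size=1000)
print(x.shape, x.min() > 0.0, np.allclose(x.sum(axis=1), 1.0))
```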
If this is not what you want and you truly want a normal random variable that is restricted by a collection of constraints to always sum to 1 and have all entries be non-negative, you'll need to resort to other simulation techniques to get draws from the distribution. It is fairly complicated to perform these draws. John Geweke wrote a paper about doing this and Christian Robert also wrote a paper on sampling from this type of distribution.
|
41,938
|
Independent Bernoulli trials vs Markov chain
|
As a general rule, if you want to test a hypothesis $H_0$ against $H_A$, you must specify a model that encapsulates both possibilities and then use that model to try to infer which of the hypotheses is correct. So if you think your observed time-series data might have some form of auto-correlation, and you want to test this, it is no good modelling them as IID. You need to use a model that allows for auto-correlation, but also allows independence as a special case, so that these competing hypotheses can be tested from within the model.
You are correct that the standard AR(1) auto-correlation structure for a continuous variable is no good in this case. You will instead need to formulate some kind of model that is suitable for a sequence of Bernoulli trials. For a sequence of Bernoulli trials, a stationary Markov chain for this process would have two parameters $0< \theta_0 <1$ and $0<\theta_1<1$ and use the recursive equations:
$$\begin{matrix}
X_1 \sim \text{Bern} \left( \frac{\theta_0}{1+\theta_0-\theta_1} \right) & & & X_{t+1} | X_t \sim \text{Bern}(\theta_{X_t}).
\end{matrix}$$
This gives a general stationary Markov chain, where the probability in the Bernoulli trial depends on the previous outcome. It can be shown that $\operatorname{Corr}(X_{t+1}, X_t) = \theta_1 - \theta_0$ within this model, so if you want to test for independence, you would be testing the hypotheses:
$$\begin{matrix}
H_0: \theta_1 = \theta_0 & & H_A: \theta_1 \neq \theta_0.
\end{matrix}$$
It should be possible to fit this model to your data, estimate the parameters of the model, and test the hypothesis of independence. This could be done using classical or Bayesian methods.
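The stated correlation can be checked exactly for this two-state chain. The sketch below (with arbitrary parameter values, not from the original answer) computes the lag-1 autocorrelation directly from the stationary distribution and transition probabilities:

```python
# Exact lag-1 autocorrelation of the stationary Bernoulli Markov chain
# described above, checked against the stated result Corr = theta_1 - theta_0.
# Parameter values are arbitrary, chosen only for illustration.

def stationary_prob(theta0, theta1):
    # P(X_t = 1) under stationarity, as in the recursive equations above
    return theta0 / (1 + theta0 - theta1)

def lag1_corr(theta0, theta1):
    pi = stationary_prob(theta0, theta1)
    mean = pi                # E[X_t]
    var = pi * (1 - pi)      # Var(X_t) for a Bernoulli(pi) variable
    # E[X_{t+1} X_t] = P(X_t = 1) * P(X_{t+1} = 1 | X_t = 1)
    exy = pi * theta1
    return (exy - mean * mean) / var

theta0, theta1 = 0.3, 0.7
print(lag1_corr(theta0, theta1))   # equals theta1 - theta0 = 0.4
```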
|
41,939
|
OLS asymptotic results
|
This comes up a lot in various contexts and therefore deserves a general answer. The question concerns pairs of random variables, so let's simply call them $X$ and $Y$. Think of them as the components of a bivariate random variable $(X,Y)$ with some distribution $F$. The properties of $F$ won't matter: the following exposition is completely general. It assumes only that any expectation that is taken actually exists and is finite.
A (raw) moment is given by powers $r$ and $s$, usually taken to be small nonnegative integers $0,1,2,\ldots$. By definition it is
$$\mu_F(r,s) = E_F(X^rY^s).\tag{1}$$
You see, it's merely the expectation of a monomial function of $(X,Y)$. When $s=0$, $Y^s=1$ disappears and we are left with the usual (univariate) moments,
$$\mu_F(r,0) = \mu_F(r) = E_F(X^r).$$
Let's take a little tangent to see that central moments don't introduce any real complication.
The central moments are linear combinations of the raw moments with universal coefficients (they don't depend on $F$, only on $r$ and $s$). By definition they are
$$\mu^\prime_F(r,s) = E_F((X-\bar X)^r(Y-\bar Y)^s).\tag{2}$$
(As is usual, I have written $\mu_F(1,0) = \bar X$ for the expectation of $X$ and $\mu_F(0,1)=\bar Y$ for the expectation of $Y$.) Ordinary algebra (the Binomial Theorem) lets us expand the right hand side of $(2)$:
$$(X-\bar X)^r(Y-\bar Y)^s = \sum_{i=0}^r\sum_{j=0}^s \binom{r}{i}\binom{s}{j}X^i Y^j \bar X^{r-i} \bar Y^{s-j}.$$
Note the Binomial coefficients $\binom{r}{i}$ and $\binom{s}{j}$, which do not depend on $F$. Linearity of expectation says if we take the expectation of the left hand side, it will be the combination of expectations on the right. Bear in mind that since $\bar X$ and $\bar Y$ are numbers, they introduce no complications:
$$\eqalign{\mu^\prime_F(r,s)&=E((X-\bar X)^r(Y-\bar Y)^s) \\&= \sum_{i=0,j=0}^{r,s} \binom{r}{i}\binom{s}{j}E(X^i Y^j)\bar X^{r-i} \bar Y^{s-j} \\&= \sum_{i=0,j=0}^{r,s} \binom{r}{i}\binom{s}{j}\mu_F(i,j)\mu_F(1,0)^{r-i}\mu_F(0,1)^{s-j}.} $$
This exhibits the central moments $\mu^\prime$ as linear combinations of the raw moments $\mu$.
Returning to the question, consider how you would go about computing, say, the variance of any monomial $X^rY^s$. By one definition of variance (as the expected square minus the square of the expectation) and the definition of moments $(1)$, this is
$$\operatorname{Var}(X^rY^s) = E((X^rY^s)^2) - E(X^rY^s)^2 = \mu_F(2r,2s) - \mu_F(r,s)^2.$$
Nothing new appears: moments of powers of the variables are linear combinations of moments of the original variables.
It's that simple, and the generalizations to higher moments (central or not) of monomials, and to more than two variables, ought now to be obvious.
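As a quick sanity check (not from the original answer), here is a small Python enumeration over an arbitrary discrete joint distribution, verifying that $\operatorname{Var}(X^rY^s) = \mu_F(2r,2s) - \mu_F(r,s)^2$ with moments computed straight from definition $(1)$:

```python
# Exact check of Var(X^r Y^s) = mu(2r, 2s) - mu(r, s)^2 on a small discrete
# joint distribution (support points and probabilities chosen arbitrarily).
from itertools import product

# joint distribution F of (X, Y): support points mapped to probabilities
support = {(1, 2): 0.2, (3, 1): 0.5, (2, 4): 0.3}

def mu(r, s):
    # raw moment E_F[X^r Y^s], equation (1)
    return sum(p * x**r * y**s for (x, y), p in support.items())

def var_monomial(r, s):
    # Var(X^r Y^s) computed directly from the definition of variance
    e = mu(r, s)
    e2 = sum(p * (x**r * y**s) ** 2 for (x, y), p in support.items())
    return e2 - e * e

for r, s in product(range(3), repeat=2):
    assert abs(var_monomial(r, s) - (mu(2 * r, 2 * s) - mu(r, s) ** 2)) < 1e-9
print("identity holds on this distribution")
```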
|
41,940
|
What's the skewed-t distribution?
|
I have found the definition of the skewed-t distribution. Note that here the "skew parameter" is not the skewness itself.
|
41,941
|
What's the skewed-t distribution?
|
It seems like skew-t is used as a name for different distributions. The most commonly used one (I believe) is in https://en.wikipedia.org/wiki/Skewed_generalized_t_distribution, while the answer by @Monier describes another one.
A. Azzalini introduced a general way of "skewing" a distribution: start with some density function $f$ (such as the standard normal or $t_\nu$) symmetric about zero and transform it to $$2 f(x) F(\alpha x)$$ (where $F$ is the cdf corresponding to $f$), and $\alpha$ is a new parameter modeling the skewness. Observe that when $\alpha=0$ we get back $f$, since by symmetry $F(0)=1/2$. The skew-normal case is https://en.wikipedia.org/wiki/Skew_normal_distribution. Many other symmetric distribution families can be skewed the same way, and location and scale parameters can be added.
For the skew-normal case there are many posts on this site, see https://stats.stackexchange.com/search?q=+skew+distribution+normal+answers%3A1+
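To see that Azzalini's construction really yields a density, here is a pure-stdlib numerical check (the choice $\alpha=3$ and the integration range are arbitrary) that $2 f(x) F(\alpha x)$ integrates to 1 when $f$ is the standard normal pdf:

```python
# Numerical check that Azzalini's construction 2 f(x) F(alpha x) is a proper
# density when f is the standard normal pdf and F its cdf.
import math

def phi(x):   # standard normal pdf
    return math.exp(-0.5 * x * x) / math.sqrt(2 * math.pi)

def Phi(x):   # standard normal cdf via the error function
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def skew_pdf(x, alpha):
    return 2 * phi(x) * Phi(alpha * x)

# trapezoidal integration over a range capturing essentially all the mass
alpha, h = 3.0, 0.001
xs = [-12 + i * h for i in range(int(24 / h) + 1)]
total = h * (sum(skew_pdf(x, alpha) for x in xs)
             - 0.5 * (skew_pdf(xs[0], alpha) + skew_pdf(xs[-1], alpha)))
print(round(total, 6))  # ~1.0: the skewed function is still a density
```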
|
41,942
|
If a random sample all came out positive, what can be inferred about the population? [duplicate]
|
We know 20 of the 1000 are good. For the remaining 980, the number of good ones can be anywhere from 0 to 980.
Calculate the probabilities that a random sample of 20 products from a manufacturing batch of 1000 all test good, when the number of good ones among the 1000 is 20, 21, ..., 1000 (981 probabilities in total). Their sum is the denominator, and the last one (1000 good among 1000) is the numerator. This ratio is the probability you are after.
Let $Y$ be the number of good ones among the 1000 products. Because we have no prior information, we assume $\Pr(Y=k) = 1/1001$ for $k = 0, 1, \ldots, 1000$. This uniform distribution is often used as a noninformative prior.
Let $B$ be the event that all of 20 products are good in the random sample of 20 from 1000 products.
So the asked question is
$\Pr(Y=1000|B)$
So $\Pr(Y=1000|B) = \frac{Pr(B|Y=1000)\Pr(Y=1000)}{\sum_{y=0}^{1000}\Pr(B|Y=y)\Pr(Y=y)}$
$=\frac{1/1001}{1/1001}\frac{Pr(B|Y=1000)}{\sum_{y=0}^{1000}\Pr(B|Y=y)}$
$=\frac{Pr(B|Y=1000)}{\sum_{y=0}^{1000}\Pr(B|Y=y)}$
$=1/47.666667 = 0.02097902$, approximately 2.1%.
Under the assumption of a simple random sample, the number of good ones among the 20 sampled units follows a hypergeometric distribution. So
$\Pr(B|Y=y)={\frac {{\binom {y}{20}}{\binom {980}{0}}}{\binom {1000}{20}}} = \frac {{\binom {y}{20}}}{\binom {1000}{20}}$. Note that $\binom {y}{20}= 0 $ for $y<20$.
In fact, calculation can be performed by any software with probability density function of hyper-geometric distribution.
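The calculation above takes only a few lines; a sketch of it with Python's standard library:

```python
# Direct computation of Pr(Y = 1000 | B) as described above, using the
# hypergeometric probabilities Pr(B | Y = y) = C(y, 20) / C(1000, 20).
from math import comb

n, k = 1000, 20
# Pr(B | Y = y) up to the common factor 1 / C(1000, 20), which cancels
weights = [comb(y, k) for y in range(n + 1)]   # comb(y, 20) = 0 for y < 20
posterior_all_good = weights[n] / sum(weights)
print(posterior_all_good)  # 0.02097902..., i.e. about 2.1%
```

Incidentally, the hockey-stick identity $\sum_{y=20}^{1000}\binom{y}{20}=\binom{1001}{21}$ shows the posterior is exactly $21/1001 \approx 0.02098$, matching the figure above.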
|
41,943
|
If a random sample all came out positive, what can be inferred about the population? [duplicate]
|
You can construct a confidence interval for the proportion $p$ of non-defectives. The interval will be one-sided, $[p_c, 1]$, at say the 95% level; the width of the interval ($1-p_c$) gives you an idea of how confident you can be that the proportion is close to 100%. This would be based on the hypergeometric distribution.
Following Bill Huber's comment about the Bayesian approach: this shows that a lot can be said without prior knowledge, since the computation here is based strictly on frequentist methods.
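One way to sketch the computation (my own illustration, not part of the original answer): find the largest number of defectives $D$ in the batch for which a clean sample of 20 would still occur with probability at least 5%; then $p_c = 1 - D/1000$.

```python
# Sketch of the one-sided 95% interval [p_c, 1] described above. The 5%
# level and the batch/sample sizes are from the question; the search loop
# is just a simple illustration.
from math import comb

N, n = 1000, 20

def p_clean(D):
    # hypergeometric probability that a sample of n has zero defectives
    # when the batch of N contains D defectives
    return comb(N - D, n) / comb(N, n)

D_max = max(D for D in range(N - n + 1) if p_clean(D) >= 0.05)
p_c = 1 - D_max / N
print(D_max, p_c)  # the data are consistent with roughly 14% defectives
```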
|
41,944
|
What is the maximum likelihood/GLM version of least absolute deviations for robust linear regression?
|
There's no GLM (no natural exponential family model) that corresponds to L1 (Least absolute value) regression.*
Note that if you're doing MLE then a density of form $\frac{c}{\phi}\exp(-g(\frac{y-\mathbf{x}\beta}{\phi}))$ will have log-likelihood $-n\log(\phi)-\sum_i g(\frac{y_i-\mathbf{x}_i\beta}{\phi})$.
Now maximizing likelihood with respect to the parameters in $\beta$ would correspond to minimizing $\sum_i g(\frac{y_i-\mathbf{x}_i\beta}{\phi})$.
So if you're trying to minimize $\frac{1}{\phi}\sum_i |y_i-\mathbf{x}_i\beta|=\sum_i |\frac{y_i-\mathbf{x}_i\beta}{\phi}| $... the form of $g$ and hence of the density of the errors that this will be ML for should be immediately obvious -- it's the Laplace.
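To make the Laplace connection concrete, here is a minimal pure-Python sketch of the location-only case: up to constants, the Laplace negative log-likelihood is $\sum_i |y_i-\mu|$, and its minimizer is the median (the data values and grid are arbitrary, for demonstration only).

```python
# Minimizing sum |y_i - mu| (the Laplace negative log-likelihood up to
# constants) recovers the median; a crude grid search illustrates this.
import statistics

y = [1, 2, 3, 7, 20]
grid = [i / 100 for i in range(0, 2101)]          # mu in [0, 21]
best = min(grid, key=lambda mu: sum(abs(v - mu) for v in y))
print(best, statistics.median(y))  # both equal 3
```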
"It is useful and robust in the sense that it minimises the effect of outliers in the response variable on the fitted line"
It provides no protection at all against influential observations, and so it's not at all robust to influential outliers, as illustrated here.
I also don't think it's quite correct to say it minimizes the effect; (ignoring the above point about influential observations -- e.g. if we're just looking at location rather than regression) it bounds the effect very nicely, but if you learn about influence functions and M-estimators you'll see that there are estimators with influence functions that redescend (which L1 estimators' don't), and so there are estimators of location where outliers have even less effect than they do on the median.
* Leaving aside the scale$^\dagger$ for simplicity, you just can't write $\sum_i |y_i-\mathbf{x}_i\beta|$ in the form $\sum_i -\eta(\beta)\cdot T(y_i) +A(\beta)-B(y_i)$ - the absolute value function doesn't break up like that.
$\dagger$ Actually if we only had the scale to estimate, that would be exponential family.
|
41,945
|
Why is it a big problem to have sparsity issues in Natural Language processing (NLP)?
|
There are two kinds of sparsity: data sparsity and model sparsity. Model sparsity can be good because it means that there is a concise explanation for the effect that we are modeling. Data sparsity is usually bad because it means that we are missing information that might be important. That slide is talking about data sparsity.
Using the SVD to compress the matrix gives us dense low-dimensional vectors for each word. This is a way of sharing information between similar words to help deal with the data sparsity. Another thing that people sometimes do to deal with sparsity is to use sub-word units instead of words or to use stemming or lexicalization to reduce the vocabulary size.
Data sparsity is more of an issue in NLP than in other machine learning fields because we typically deal with large vocabularies where it is impossible to have enough data to actually observe examples of all the things that people can say. There will be many real phrases that we will just never see in the training data.
|
41,946
|
Why is it a big problem to have sparsity issues in Natural Language processing (NLP)?
|
A one-hot word vector over a large dictionary, or a co-occurrence matrix over a large corpus, is usually a very sparse matrix. This is a problem because a one-hot vector, e.g. [0 0 1 0 .. 0], has zero dot product with every other one-hot vector, and so cannot express any relation to it. This would cause two words such as hotel and motel to have no mathematical relation to one another if all of the word vectors were one-hot vectors.
A co-occurrence matrix solves this problem of relations, because hotel and motel will have similar co-occurrence counts in a large corpus. But this matrix is too large, and is computationally inefficient to store in memory. For a corpus with a dictionary of 100,000 unique tokens, this is a (100,000 x 100,000) matrix, which is 10 billion numbers.
Using something like word2vec or GloVe solves both of these problems. Word2vec uses a feed-forward neural network with a single hidden layer, and we end up using the weights of the hidden layer as the word vector representations. The length of the vectors is a hyperparameter (100, 300, etc.). This way, words such as hotel and motel can use vector operations to measure similarity or closeness to another vector. It also reduces the size of each word vector from 100,000 (in the example above) to 300 or 100, depending on the dimension of the word vector set.
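A toy illustration of the two points above: distinct one-hot vectors always have dot product 0, while dense vectors can express similarity. The 3-dimensional "embedding" values here are invented purely for illustration.

```python
# One-hot vectors vs dense vectors: only the latter can express similarity.
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def cosine(u, v):
    return dot(u, v) / (math.sqrt(dot(u, u)) * math.sqrt(dot(v, v)))

# one-hot vectors in a 5-word vocabulary
hotel_1hot = [0, 0, 1, 0, 0]
motel_1hot = [0, 0, 0, 1, 0]
print(dot(hotel_1hot, motel_1hot))      # 0: no notion of similarity

# dense vectors (word2vec-style); values are hypothetical
hotel = [0.9, 0.1, 0.4]
motel = [0.8, 0.2, 0.5]
cat   = [-0.7, 0.9, -0.2]
print(cosine(hotel, motel) > cosine(hotel, cat))  # True: hotel ~ motel
```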
|
41,947
|
Why is it a big problem to have sparsity issues in Natural Language processing (NLP)?
|
In my opinion, the slide here is just stating that the co-occurrence matrix will have a lot of zeros in it. This is a bad thing because it is very CPU- and memory-inefficient. Imagine having to compute 0 times something 90% of the time and always get 0 as the answer. Also imagine having to store a lot of 0s in order to encode this matrix in memory.
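A minimal sketch of the storage point (the tiny vocabulary and counts are invented): storing only the nonzero entries of a sparse co-occurrence matrix in a dict avoids keeping all the zeros in memory.

```python
# Dense vs dict-of-keys sparse storage of a co-occurrence matrix.
vocab = 5
dense = [[0] * vocab for _ in range(vocab)]
dense[0][1] = 3   # e.g. word 0 seen next to word 1 three times
dense[1][0] = 3

# sparse representation: only the nonzero entries are stored
sparse = {(i, j): v for i, row in enumerate(dense)
          for j, v in enumerate(row) if v != 0}

stored_dense = vocab * vocab
stored_sparse = len(sparse)
print(stored_dense, stored_sparse)  # 25 vs 2 entries stored
```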
|
41,948
|
Covariance of Kronecker product?
|
Example 1:
Assuming without loss of generality your vectors are centered, or that $E[\mathbf{a}] = E[\mathbf{b}] = \mathbf{0}$, the covariance matrix is
\begin{align*}
E[\mathbf{c}\mathbf{c}^T] &= E[(\mathbf{a}\otimes \mathbf{b})(\mathbf{a} \otimes \mathbf{b})^T] \\
&= E\left\{
\left[ \begin{array}{c}
a_1\mathbf{b} \\
a_2\mathbf{b} \\
\vdots \\
a_m\mathbf{b}
\end{array}\right]
\left[a_1\mathbf{b}^T, a_2\mathbf{b}^T, \cdots, a_m\mathbf{b}^T \right]
\right\} \\
&=
\left[\begin{array}{cccc}
E[a_1a_1\mathbf{b}\mathbf{b}^T] & E[a_1a_2 \mathbf{b}\mathbf{b}^T] & \cdots & E[a_1a_m \mathbf{b}\mathbf{b}^T]\\
\vdots & \vdots & \ddots & \vdots \\
E[a_ma_1 \mathbf{b}\mathbf{b}^T] & E[a_m a_2 \mathbf{b}\mathbf{b}^T] & \cdots & E[a_ma_m \mathbf{b}\mathbf{b}^T]
\end{array} \right]\\
&=
\left[\begin{array}{cccc}
E[a_1a_1]E[\mathbf{b}\mathbf{b}^T] & E[a_1a_2]E[ \mathbf{b}\mathbf{b}^T] & \cdots & E[a_1a_m]E[ \mathbf{b}\mathbf{b}^T]\\
\vdots &\vdots & \ddots & \ddots \\
E[a_ma_1]E[ \mathbf{b}\mathbf{b}^T] & E[a_m a_2]E[ \mathbf{b}\mathbf{b}^T] & \cdots & E[a_ma_m]E[ \mathbf{b}\mathbf{b}^T]
\end{array} \right]\\
&=E[\mathbf{a}\mathbf{a}^T] \otimes E[ \mathbf{b}\mathbf{b}^T].
\end{align*}
Notice the assumption of independence is used in the second-to-last equality. So we see that the covariance of the Kronecker product is the Kronecker product of the covariances.
Example 2:
Your example in the (now-deleted) comments was an example where the two vectors were not independent. In that case, the above quantity would simplify to
\begin{align*}
\left[\begin{array}{cccc}
E[a_1a_1\mathbf{a}\mathbf{a}^T] & E[a_1a_2 \mathbf{a}\mathbf{a}^T] & \cdots & E[a_1a_m \mathbf{a}\mathbf{a}^T]\\
\vdots &\vdots & \ddots & \vdots \\
E[a_ma_1 \mathbf{a}\mathbf{a}^T] & E[a_m a_2 \mathbf{a}\mathbf{a}^T] & \cdots & E[a_ma_m \mathbf{a}\mathbf{a}^T]
\end{array} \right].
\end{align*}
The only way for this to be white would be if $E[a_i^4]=1$ for all $i=1,\ldots,m$ and $E[a_i a_j a_k a_l]=0$ whenever some pair of the indices is distinct. Is this possible? Well, if exactly one index is distinct, the centering makes that term $0$, but...
The problem is the terms of the form $E[a_i^2 a_j^2]$. Even if you assume your vectors are centered and white,
$$
E[a_i^2 a_j^2] = \text{Var}(a_i)\text{Var}(a_j) \ge 0.
$$
The only way for these terms to be zero is if one of them is a degenerate random variable.
|
41,949
|
Algorithm for generating a multi-level fractional factorial design
|
Since the purpose of the design is to "cover" the input space of the program to search for bugs, not to optimize or fit some model, it is not clear that statistical design of experiments is useful. See https://cran.r-project.org/web/views/ExperimentalDesign.html for an overview of what is available in R, especially the section "Experimental designs for computer experiments ". Latin hypercube sampling from the package lhs is sometimes used for computer experiments, but it is designed for continuous variables, not for factors. Since you have factors with two or three levels, you could maybe use n=6:
library(lhs)
maximinLHS(n=6,k=10)
[,1] [,2] [,3] [,4] [,5] [,6]
[1,] 0.1581259 0.34922240 0.77650438 0.57445546 0.39810199 0.53289942
[2,] 0.3942601 0.71557361 0.48436302 0.01056952 0.52782167 0.75466255
[3,] 0.5420127 0.08628741 0.59935565 0.45518451 0.31659314 0.41713492
[4,] 0.9704732 0.22435911 0.09793048 0.26863924 0.69497397 0.23899666
[5,] 0.7315111 0.97137094 0.85733108 0.72832007 0.06766873 0.04235271
[6,] 0.2159874 0.50054039 0.19493044 0.96520339 0.89314903 0.90171824
[,7] [,8] [,9] [,10]
[1,] 0.94280352 0.64377198 0.5827483 0.92633749
[2,] 0.43391370 0.09490557 0.6728905 0.06741415
[3,] 0.21854447 0.78748899 0.1438589 0.30366601
[4,] 0.70834668 0.29569709 0.3194497 0.68048008
[5,] 0.52475737 0.46038173 0.8618724 0.48145297
[6,] 0.08212028 0.95824574 0.4378693 0.52659065
and then divide the unit interval into two halves for the two-level factors, and into three parts for the three-level factors. Maybe. There are other ideas in that section of the task view.
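That discretization step is straightforward to script. A Python/NumPy sketch (the Latin-hypercube-style matrix is built by hand rather than with lhs, and the split into five 2-level and five 3-level factors is just an assumption for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
n_runs, n_factors = 6, 10
levels = [2] * 5 + [3] * 5             # assumed: five 2-level and five 3-level factors

# Latin-hypercube-style sample: one point per stratum in each column
strata = rng.permuted(np.tile(np.arange(n_runs), (n_factors, 1)), axis=1).T
u = (strata + rng.random((n_runs, n_factors))) / n_runs   # values in [0, 1)

# map [0, 1) to factor levels by cutting the interval into k equal parts
design = np.column_stack([np.minimum((u[:, j] * k).astype(int), k - 1)
                          for j, k in enumerate(levels)])
print(design)
```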
Returning to the question of fractional factorial designs, multiple R packages can be used. But most will only generate symmetric designs where all factors have the same number of levels. One exception is the package planor, though its use seems complicated. Below I give an example using AlgDesign, which is simpler:
library(AlgDesign)
cand <- gen.factorial(levels=c(rep(2,3),rep(3,3)),  # three 2-level and three 3-level factors
                      nVars=6,
                      factors="all", varNames=LETTERS[1:6])
des <- optFederov( ~ ., data=cand, nTrials=36)      # D-optimal 36-run fraction
ev <- eval.design( ~ ., des$design, confounding=TRUE, X=cand)  # inspect confounding
|
41,950
|
Is inference based on full (global) regression model appropriate?
|
All depends on your study aims:
A) Exploratory study: Your aim is to screen a number of potentially interesting predictors for relationships. You want to build a testable model based on these exploratory results. No inferences (in a null-hypothesis-significance-testing sense) or other important decisions are drawn from the study. The study is a pilot and will be followed by another confirmatory/prespecified study.
In this case model selection procedures (using AIC, BIC, or cross-validation techniques) are your methods of choice.
The reference you cited is correct: the p-values obtained for the predictors in the final model will be overly optimistic. By essentially trying out many different models during model selection you created a multiple-comparisons problem — "the garden of forking paths". Conventional statistical tests yield p-values for the current model only and do not control for these multiple comparisons.
B) Confirmatory/"pre-specified" study:
In this case you should ideally test a single model — the one pre-specified before the study was performed. If you had good reason to believe before the study started that all of your predictors are having an effect then the full model is a natural choice. If you included some predictors on mere suspicion you likely performed an exploratory study.
"Non-important" variables, i.e. variables that do not explain much variance in the outcome variable, will only exert undue influence on your data if you have too many predictors relative to your sample size (overfitting) or if there are predictors that are highly correlated (collinear). Ideally you avoid these situations by performing an exploratory study.
One way to check for overfitting/unstable-model problems is to explore a "reduced model" that includes only the "significant" terms from the main model. Importantly, this reduced-model analysis should be referred to as a post-hoc control analysis aiding interpretation. Conclusions should be based solely on the pre-specified model.
|
41,951
|
Squared Loss for Multilabel Classification
|
This scheme is called the Brier loss. It is a proper scoring rule, and hence it is minimized in expectation only by the true class probabilities. It corresponds, of course, to the squared $L_2$ distance between the predictive label distribution and the true label distribution (which is a point mass).
Deep learning types these days strongly prefer the cross-entropy loss, which corresponds to the KL divergence $KL( y \| \hat y)$. This will penalize giving very low probabilities to the correct class very harshly, perhaps encouraging a flattening out of predicted probabilities relative to the Brier loss.
Consider a $K$-way classification problem, where your estimate of the probability of the $i$th class is $\hat p_i$.
Let $y$ be the correct label for a given instance $x$, and $B = (\hat p_y(x) - 1)^2$ the Brier loss. Then
$$\nabla B = 2 (\hat p_y(x) - 1) \nabla \hat p_y(x),$$
whereas if $C(x, y, w) = - \log \hat p_y(x)$ is the cross-entropy loss, then
$$\nabla C = - \frac{1}{\hat p_y(x)} \nabla \hat p_y(x).$$
Plotting these:
We can thus see that the cross-entropy really emphasizes wrong values, whereas Brier loss scales just linearly with the probability estimate.
Another interesting property: suppose that there are three categories, with the first one being correct. Cross-entropy would value the predictions $(.8, .2, 0)$ and $(.8, .1, .1)$ equally, whereas Brier loss would prefer the second one. I don't know if that's of huge practical importance, but only caring about the true category seems like a reasonable criterion to me, and cross-entropy is the only proper scoring rule with that locality property.
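The three-category example can be checked directly; a small Python sketch (`brier` and `xent` are ad hoc helper names):

```python
import numpy as np

def brier(p, y):
    """Multiclass Brier loss: squared distance to the one-hot target."""
    return float(np.sum((np.asarray(p) - np.eye(len(p))[y]) ** 2))

def xent(p, y):
    """Cross-entropy: depends only on the probability of the true class."""
    return float(-np.log(p[y]))

p1, p2 = [0.8, 0.2, 0.0], [0.8, 0.1, 0.1]
print(xent(p1, 0) == xent(p2, 0))    # True: cross-entropy cannot tell them apart
print(brier(p1, 0), brier(p2, 0))    # ~0.08 vs ~0.06: Brier prefers p2
```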
|
41,952
|
Squared Loss for Multilabel Classification
|
Thanks @djs for the great answer. I agree with the majority of it, but maybe not the last part. (Had to post another answer due to lack of reputation to comment directly.)
Another interesting property: suppose that there are three categories, with the first one being correct. Cross-entropy would value the predictions $(.8, .2, 0)$ and $(.8, .1, .1)$ equally, whereas Brier loss would prefer the second one. I don't know if that's of huge practical importance, but only caring about the true category seems like a reasonable criterion to me, and that leads to cross-entropy being the only proper scoring rule.
IMO, caring about false categories is actually a valuable feature. In knowledge distillation, utilizing predictions of false categories (so-called "dark knowledge") is one of the underlying principles.
For three categories (dog, cat, car), suppose the true label is "dog". A prediction of $(.8, .2, 0)$ is obviously better than $(.8, .1, .1)$, because "car" is nowhere near "dog" and a $0.1$ prediction for "car" is hardly reasonable.
Nonetheless, this doesn't make MSE a better loss function for classification. Cross entropy is still preferred.
|
41,953
|
Scores are still correlated after PCA
|
Remember how PCA works: It starts by finding the single dimension (direction) of greatest variation in your dataset. That becomes PC1. Then it finds the direction of greatest variation that is at right angles to that. It becomes PC2. Etc.
The important thing to recognize is that if your data aren't centered, the direction of greatest variation might well be from the origin of the space (e.g., $(0, 0)$ in a Cartesian plane) to the centroid (mean vector) of your data. Every subsequent principal component is constrained by that first one: They all have to be at right angles to it. That means that the resulting PCs may not uncorrelate your data.
Here is a quick illustration (written in R):
library(MASS) # we'll use this package
set.seed(7668) # this makes the example exactly reproducible
X = mvrnorm(100, mu=c(-5, 0), Sigma=rbind(c(1, 1.6), # here I generate data
c(1.6, 4) ))
dev.new(height=4, width=7)  # open a 4 x 7 inch plotting device
layout(matrix(1:2, nrow=1))
plot(X[,1], X[,2], xlim=c(-8, 1))
abline(h=0, col="gray"); abline(v=0, col="gray")
points(mean(X[,1]), mean(X[,2]), pch="*", cex=2, col="red")  # mark the data centroid
biplot(prcomp(X, center=FALSE))                              # PCA without centering
round(cor(prcomp(X, center=FALSE)$x), 3)                     # PC scores remain correlated
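The same phenomenon can be checked numerically. A Python/NumPy sketch (an SVD-based PCA, paralleling rather than reproducing the R code above):

```python
import numpy as np

rng = np.random.default_rng(7668)
X = rng.multivariate_normal([-5, 0], [[1, 1.6], [1.6, 4]], size=100)

def pc_scores(M):
    # principal-component scores via SVD, with no centering step of its own
    _, _, Vt = np.linalg.svd(M, full_matrices=False)
    return M @ Vt.T

r_raw = np.corrcoef(pc_scores(X), rowvar=False)[0, 1]                  # uncentered PCA
r_centered = np.corrcoef(pc_scores(X - X.mean(0)), rowvar=False)[0, 1]
print(r_raw, r_centered)   # scores stay correlated unless the data are centered
```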
|
41,954
|
Statistical approach to compare the calibration between models
|
As you know the Brier score measures calibration and is the mean square error, $\bar B = n^{-1} \sum (\hat y_i - y_i)^2$, between the predictions, $\hat y,$ and the responses, $y$. Since the Brier score is a mean, comparing two Brier scores is basically a comparison of means and you can go as fancy with it as you like. I'll suggest two things and point to a third:
One option: do a t-test
My immediate response when I hear comparisons of means is to do a t-test. Squared errors probably aren't normally distributed in general so it's possible that this isn't the most powerful test. It seems fine in your extreme example. Below I test the alternative hypothesis that p1 has greater MSE than p2:
y <- rbinom(100,1,1:100/100)
p1 <- 1:100/10001
p2 <- 1:100/101
squares_1 <- (p1 - y)^2
squares_2 <- (p2 - y)^2
t.test(squares_1, squares_2, paired=T, alternative="greater")
#>
#> Paired t-test
#>
#> data: squares_1 and squares_2
#> t = 4.8826, df = 99, p-value = 2.01e-06
#> alternative hypothesis: true difference in means is greater than 0
#> 95 percent confidence interval:
#> 0.1769769 Inf
#> sample estimates:
#> mean of the differences
#> 0.2681719
We get a super-low p-value. I did a paired t-test as, observation for observation, the two sets of predictions compare against the same outcome.
Another option: permutation testing
If the distribution of the squared errors worries you, perhaps you don't want to make assumptions of a t-test. You could for instance test the same hypothesis with a permutation test:
library(plyr)
observed <- mean(squares_1) - mean(squares_2)
permutations <- raply(500000, {
swap <- sample(c(T, F), 100, replace=T)
one <- squares_1
one[swap] <- squares_2[swap]
two <- squares_2
two[swap] <- squares_1[swap]
mean(one) - mean(two)
})
hist(permutations, prob=T, nclass=60, xlim=c(-.4, .4))
abline(v=observed, col="red")
# p-value. I add 1 so that the p-value doesn't come out 0
(sum(permutations > observed) + 1)/(length(permutations) + 1)
#> [1] 1.999996e-06
The two tests seem to agree closely.
Some other answers
A quick search of this site on comparison of MSEs points to the Diebold-Mariano test (see the answer here, and a comment here). It looks like it's simply a Wald test, and I guess it will perform similarly to the t-test above.
|
41,955
|
Statistical approach to compare the calibration between models
|
For future reference, IMO the first answer does not address the calibration issue.
Consider predictions $ \hat{y}_1,\hat{y}_2 ..., \hat{y}_n $ made by a reasonable, well calibrated, model for input values $x_1, x_2,..., x_n$. Now consider a second set of predictions $\tilde{y}_1, \tilde{y}_2, ..., \tilde{y}_n$ that are made by a model that simply scrambles the predictions of the first model within each of the two classes and outputs them in random order.
The second model is likely to be poorly calibrated compared to the first, well-calibrated model, but the Brier scores of the two models will be the same.
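This scrambling invariance is easy to verify by simulation; a Python sketch with made-up calibrated predictions:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
p = rng.random(n)                          # well-calibrated predictions
y = (rng.random(n) < p).astype(int)        # outcomes generated at those rates

q = p.copy()
for c in (0, 1):                           # scramble predictions within each class
    idx = np.flatnonzero(y == c)
    q[idx] = rng.permutation(p[idx])

print(np.mean((p - y) ** 2), np.mean((q - y) ** 2))   # identical Brier scores
```

Within each class the squared errors are merely permuted, so their mean is unchanged, even though the scrambled model's calibration is ruined.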
As stated in the original question, I suggest looking at the Hosmer–Lemeshow test, and comparing the HL test-statistics computed for the predictions of each of the two models (A larger HL statistic suggests poorer calibration).
|
41,956
|
How do you optimize a classification model, when you only care about the top 5% of the ROC Curve?
|
The general topic of binary classification with strongly unbalanced classes has been covered to a certain extent in the thread with the same name. In very short: caret does allow for more imbalance-appropriate metrics like Cohen's kappa or the precision-recall AUC; the PR AUC is relatively new, and you can find it using the prSummary metric. You can also try resampling approaches where you rebalance the sample during estimation so class features become more prominent.
Having said the above, you seem to have a particular constraint on the total number of positives $N$ you can predict. I can think of two immediate work-arounds. Both of them rely on the idea that you are using a probabilistic classifier. Simply put, a probabilistic classifier is a classification routine that can output a measure of belief about its prediction in the form of a number in $[0,1]$ that we can interpret as a probability. Elastic nets, random forests and various ensemble classifiers usually offer this out of the box. SVMs usually do not provide out-of-the-box probabilities, but you can get them if you are willing to accept some approximations. Anyway, back to the work-arounds:
Use a custom metric. Instead of evaluating the area below the whole PR curve, we focus on the part that guarantees a minimum number of points. These are generally known as partial AUC metrics. They require us to define a custom performance metric; check caret's trainControl summaryFunction argument for more on this. Let me stress that you do not necessarily have to use an AUC. Given that we can estimate probabilities in each step of our model-training procedure, we can do a thresholding step within the estimation procedure right before evaluating our performance metric. Notice that in the case where we "fix $N$", using the recall (sensitivity) value as a metric would be fine because it would immediately control for the fact that we want $N$ points. (Actually, in that case the recall and precision would be equal, as the number of false negatives would equal the number of false positives.)
Threshold the final output. Given one can estimate the probabilities of an item belonging to a particular class, we can pick the items with the $N$ highest probabilities related to the class of interest. This is very easy to implement as essentially we apply a threshold right before reporting our findings. We can estimate models and evaluate them using our favourite performance metrics without any really changes in our work-flow. This is a simplistic approach but it is the easiest way to satisfy the constraints given. If we use this approach it will be probably more relevant to use an AUC-based performance metric originally. That is because using something like Accuracy, Recall, etc. would suggest using a particular threshold $p$ (usually $0.5$) to calculate the metrics needed for model training - we do not want to do that as we will not calibrate that $p$ using this approach).
A very important caveat: we need to have a well-calibrated probabilistic classifier model to use this approach; ie. we need to have good consistency between the predicted class probabilities and the observed class rates (check caret's function calibration on this). Otherwise our insights will be completely off when it comes to discriminating between items. As a final suggestion I would recommend that you look at lift-curves; they will allow you to see how fast you can find a given number of positive examples. Given the restriction imposed probably lift charts will be very informative and probably you want to present them when reporting your findings.
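The second work-around can be sketched in a couple of lines (a minimal illustration with numpy; the `predict_top_n` helper and the probability values are mine, not part of caret):

```python
import numpy as np

def predict_top_n(probs, n):
    """Return the indices of the n items with the highest predicted
    positive-class probability (hypothetical helper, not a caret function)."""
    return np.argsort(probs)[::-1][:n]

# hypothetical predicted probabilities for five items
probs = np.array([0.10, 0.90, 0.40, 0.70, 0.20])
print(predict_top_n(probs, 2))  # the two items we are allowed to flag: [1 3]
```

The model training itself is untouched; the $N$-item constraint is applied only at reporting time.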
|
How do you optimize a classification model, when you only care about the top 5% of the ROC Curve?
|
41,957
|
MCMC Metropolis Hastings - Normalised distribution
|
$Z$ would be the integral of $\pi^\prime$ (i.e. the integral of the unnormalized density). The integral of $\pi$ would be 1.
Evaluating the integral accurately would often be difficult (or where it's not difficult, at least time consuming).
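As a small illustration of why the algorithm never needs $Z$: a random-walk Metropolis sampler only evaluates ratios (here, differences of log densities) of the unnormalized $\pi^\prime$, so $Z$ cancels out of the acceptance probability. A sketch in Python (standard library only; the target is an unnormalized standard normal and all names are mine):

```python
import math
import random

def metropolis(unnorm_logpdf, x0, steps, scale=1.0):
    """Random-walk Metropolis: the acceptance ratio uses only the
    unnormalized log density, so the constant Z cancels and is never computed."""
    x, samples = x0, []
    for _ in range(steps):
        proposal = x + random.gauss(0.0, scale)
        # log pi'(proposal) - log pi'(x): any log Z term would cancel here
        if math.log(random.random()) < unnorm_logpdf(proposal) - unnorm_logpdf(x):
            x = proposal
        samples.append(x)
    return samples

random.seed(0)
# unnormalized standard normal: exp(-x^2/2), missing its 1/sqrt(2*pi) = 1/Z factor
samples = metropolis(lambda x: -x * x / 2.0, 0.0, steps=20000)
```

The sample mean and variance should be close to 0 and 1 even though the density was never normalized.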
|
41,958
|
Mean centering or not in the context of Partial Least Squares
|
There are mainly two algorithms for PLSR, namely NIPALS and SIMPLS.
The SIMPLS algorithm is generally faster yet numerically less stable (in most cases the difference is very small). The original SIMPLS article provides the steps of the algorithm, which start with mean centering both X and Y; the maintainer of the package probably relies on these steps.
For the NIPALS algorithm, in this very essential article the authors mention that mean centering is done by default to make the calculations easier, and they provide no other specific justification.
Lastly, there is this article which directly questions the reasoning behind mean centering and provides some case studies. The authors state exactly what you have observed: in some cases mean centering can even decrease the predictive ability of the model. While it allows easy calculation of the intercept term, I believe it is safe to omit centering.
Since all of these algorithms basically carry out an eigenvalue decomposition of covariance matrices, which involves the distances of the variables from their means, the method is still called PLS even without mean centering. However, skipping it may require altering your code.
The options in the software you have mentioned may allow you to skip centering, but those options may be intended for data that is already centered. In other words, they might still be using
X' * Y
instead of
(X - mean(X))' * (Y - mean(Y))
for the calculation of covariance matrix, for instance.
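To make the difference concrete, here is a small numpy sketch (synthetic data, variable names mine) showing that the uncentered and centered cross-products differ exactly by $n\,\bar{X}\,\bar{Y}'$:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(loc=2.0, size=(50, 3))   # synthetic predictors with nonzero mean
Y = rng.normal(loc=1.0, size=(50, 1))   # synthetic response with nonzero mean

uncentered = X.T @ Y
centered = (X - X.mean(axis=0)).T @ (Y - Y.mean(axis=0))

# the two cross-products differ exactly by n * outer(mean(X), mean(Y))
n = X.shape[0]
print(np.allclose(uncentered - centered,
                  n * np.outer(X.mean(axis=0), Y.mean(axis=0))))  # True
```

So an algorithm that expects pre-centered input will silently compute a different "covariance" if you feed it raw data.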
The articles:
SIMPLS: De Jong, Sijmen. "SIMPLS: an alternative approach to partial least squares regression." Chemometrics and intelligent laboratory systems 18, no. 3 (1993): 251-263.
PLS tutorial with NIPALS: Geladi, Paul, and Bruce R. Kowalski. "Partial least-squares regression: a tutorial." Analytica chimica acta 185 (1986): 1-17.
Mean-centering in PLS: Seasholtz, Mary Beth, and Bruce R. Kowalski. "The effect of mean centering on prediction in multivariate calibration." Journal of Chemometrics 6, no. 2 (1992): 103-111.
|
41,959
|
Why is F-test not possible for comparing non-nested models
|
With nested models, it's possible to conceive of a saturated model. This provides a theoretical upper bound on the likelihood and parameter space. When you have this, you know for a fact that the long-range behavior of the log likelihood ratio is a $\chi^2_{p-q}$ random variable. This is necessary for conducting formal inference. The $F$ statistic gives the exact distribution of the test for normally distributed variables, and the $F$ distribution converges to a $\chi^2$ distribution as the denominator degrees of freedom go to infinity.
With non-nested models, it's possible to have arbitrarily high and low likelihoods. Then there are no guarantees about the long-range behavior of the statistic. This means some scenarios give you very high false-negative probabilities, very high false-positive probabilities, or both, with no way of calibrating the test.
You can qualitatively compare non-nested models using the AIC or BIC. But you cannot make formal inference on their relative impact, you just have to say, "Model Y had a higher/lower IC than Model X".
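The informal AIC comparison mentioned above amounts to this (the fitted log-likelihoods and parameter counts below are hypothetical, just to show the arithmetic):

```python
def aic(log_likelihood, n_params):
    """Akaike information criterion: 2k - 2*logL; lower is better."""
    return 2 * n_params - 2 * log_likelihood

# hypothetical fits for two non-nested models
aic_x = aic(log_likelihood=-120.5, n_params=4)   # 249.0
aic_y = aic(log_likelihood=-118.2, n_params=6)   # 248.4
# we may report "Model Y had a lower AIC than Model X", but nothing more formal
print(aic_y < aic_x)  # True
```

Note there is no p-value here: the comparison is purely qualitative.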
|
41,960
|
Properties of the white noise process
|
Consider the i.i.d. process $\{x_t\}$ where $P(x_t=0)=P(x_t=2)=1/2$ for each integer $t$. This sequence satisfies $E(x_t)=1$ for each integer $t$. Now, construct the process $\{u_t\}=\{x_t^2(1-x_{t-1})\}$. For each integer $t$, then, we have \begin{align}E(u_t)&=E(x^2_t(1-x_{t-1}))\\ &=E(x_t^2)(1-E(x_{t-1}))\\ &=E(x_t^2)(1-1)\\ &=0\end{align} where the second equality follows from independence and the linearity of expectation.
Furthermore, \begin{align}Eu_t^2&=Ex_t^4E(1-x_{t-1})^2\\
&=2^4/2\cdot [1^2/2+(-1)^2/2]\\ &=8,\end{align} which is finite and independent of $t$. For $k\neq - 1$ we also have \begin{align}Eu_tu_{t+k}&=E(x_t^2(1-x_{t-1})x_{t+k}^2(1-x_{t+k-1}))\\ &=E(1-x_{t-1})E(x_t^2x_{t+k}^2(1-x_{t+k-1}))\\ &= 0\cdot E(x_t^2x_{t+k}^2(1-x_{t+k-1}))\\ &=0, \end{align} and if $k=-1$ we may do as follows: \begin{align}Eu_tu_{t-1}&=E(x_t^2(1-x_{t-1})x_{t-1}^2(1-x_{t-2}))\\ &=E(1-x_{t-2})E(x_t^2(1-x_{t-1})x_{t-1}^2)\\ &= 0\cdot E(x_t^2(1-x_{t-1})x_{t-1}^2)\\ &=0. \end{align}
In other words, $\{u_t\}$ is a white noise process.
To show that $E_tu_{t+1}\neq 0$, consider the information set $I_t$ up until time period $t$ which says that $u_t=0$. Then $x_t^2(1-x_{t-1})=0$. By construction this can only be the case if $x_t=0$. Hence, $u_{t+1}=x_{t+1}^2(1-0)=x_{t+1}^2$ if $u_t=0$ is given. (Note that this suggests that $u_{t+1}$ depends on information in time periods $k\leq t$.) Hence, as the information set $I_t$ says nothing about the value of $x_{t+1}$, the distribution of $u_{t+1}|I_t$ is equivalent to the distribution of $x_{t+1}^2$. Thus, \begin{align}E_tu_{t+1} &=Ex_{t+1}^2\\ &=0^2/2+2^2/2\\ &=2\\ &\neq 0.\end{align}
Thus, we have found a white noise process not satisfying $E_tu_{t+1}= 0$!
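A quick simulation (standard library only; variable names mine) supports the construction: the unconditional mean and lag-1 autocovariance of $u_t$ come out near zero, while the mean of $u_{t+1}$ conditional on $u_t=0$ comes out near 2:

```python
import random

random.seed(1)
T = 200_000
x = [random.choice([0, 2]) for _ in range(T)]           # the i.i.d. process
u = [x[t] ** 2 * (1 - x[t - 1]) for t in range(1, T)]   # u_t = x_t^2 (1 - x_{t-1})

mean_u = sum(u) / len(u)                                 # should be near 0
acov1 = sum(u[t] * u[t - 1] for t in range(1, len(u))) / (len(u) - 1)  # near 0
# conditional mean of u_{t+1} given u_t = 0: should be near 2, not 0
cond = [u[t + 1] for t in range(len(u) - 1) if u[t] == 0]
cond_mean = sum(cond) / len(cond)
```

So the series passes the white-noise checks while still being predictable from its own past.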
|
41,961
|
Tic Tac Toe AI with Machine Learning
|
Presumably you don't want an AI which looks ahead a few moves and brute-forces the best move. I guess you want an AI which will evaluate the strength of each possible move and choose the best.
One way you can approach this is to train an AI to take an input of the board and an input of where to play next and output a probability that this move will lead to a win.
You can create your own data by playing this AI against itself or against a player which plays randomly. This is more involved than using a dataset with the best moves listed for many positions, but it's an option if you can't find such a dataset or if you want a challenge.
One possible way to create your own data and use it to iteratively improve the AI is the following:
Let the AI play a few moves and then pause the game
Select a random move to play (random allows the AI to learn from moves it wouldn't normally make)
Record the state of the game and the new move
Let the AI finish the game and record the result
This approach will create game data with many positions and many actions taken with the expected win/loss/draw result. You can use this data to train an AI to predict the result of the game if a given move is played. Repeat this training cycle to iteratively improve the AI.
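The loop above might look like this in Python (a rough sketch in which random players stand in for the AI; all names and the board encoding are mine):

```python
import random

LINES = [(0,1,2), (3,4,5), (6,7,8), (0,3,6), (1,4,7), (2,5,8), (0,4,8), (2,4,6)]

def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    for a, b, c in LINES:
        if board[a] is not None and board[a] == board[b] == board[c]:
            return board[a]
    return None

def random_move(board):
    return random.choice([i for i, v in enumerate(board) if v is None])

def play_and_record():
    """Play one game between two random players; at one randomly chosen
    turn, record the (state, player, move) triple, then return it with
    the final result from X's point of view (+1 win, 0 draw, -1 loss)."""
    board = [None] * 9
    player, record, turn = "X", None, 0
    pause_turn = random.randint(0, 4)  # a win is impossible before turn 5
    while winner(board) is None and None in board:
        move = random_move(board)
        if turn == pause_turn:
            record = (tuple(board), player, move)
        board[move] = player
        player = "O" if player == "X" else "X"
        turn += 1
    w = winner(board)
    return record, (0 if w is None else (1 if w == "X" else -1))
```

Each call yields one labeled training example; repeating it many times builds the dataset described above.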
|
41,962
|
Tic Tac Toe AI with Machine Learning
|
I am working on a very similar project to you, and I'm also doing it as an introduction to machine learning.
One of the best methods I've seen for getting a tic-tac-toe AI is a reinforcement learning method described in this paper and formally discussed in this book.
Basically, you have a program run through possible moves and then update the probability that each move is correct, based on whether or not it ends up winning when playing against a random player.
If the player is not random, or you would like a more formal analysis, you can use minimax or alpha-beta pruning as detailed here. Then, I'd recommend training a neural network on the data you've received for an additional challenge.
You can also use more advanced methods such as the Monte Carlo Tree Search (similar to the reinforcement learning method above).
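As a toy illustration of the minimax idea (a generic sketch over an explicit game tree; the interface is mine, not taken from the linked material):

```python
def minimax(state, maximizing, children, evaluate):
    """Score a game-tree node: leaves are scored by `evaluate`; inner nodes
    take the max (our turn) or min (opponent's turn) over their children."""
    succ = children(state)
    if not succ:
        return evaluate(state)
    scores = [minimax(s, not maximizing, children, evaluate) for s in succ]
    return max(scores) if maximizing else min(scores)

# tiny hand-built tree: tuples are inner nodes, ints are leaf payoffs
tree = ((3, 5), (2, 9))
children = lambda s: list(s) if isinstance(s, tuple) else []
value = minimax(tree, True, children, evaluate=lambda s: s)
print(value)  # 3: the maximizer's best guaranteed payoff
```

For tic-tac-toe the `children` function would generate successor boards and `evaluate` would score terminal positions; alpha-beta pruning adds cutoffs to the same recursion.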
|
41,963
|
Tic Tac Toe AI with Machine Learning
|
I have just finished making a tictactoe bot.
The way I have set it up is that it learns from the moves that I make.
So, if I'm playing a Player-vs-AI or Player-vs-Player game, every move that the winner makes is recorded and saved in a file. So this data is essentially 'moves that lead up to a win'. The format of this data is 'state of the board + position played'.
So when the AI is playing, it looks at this data, sees if the current state of the board matches any in the saved-moves file, then plays that move. If it can't find a match, it just looks for a win, or a block, or plays randomly.
As TicTacToe has an optimal strategy, technically you could input all the optimal moves manually, but I found it more interesting 'teaching' the AI.
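The lookup described above can be sketched in a few lines (board states encoded as 9-character strings with '.' for empty; the format and names are my own, not the poster's):

```python
import random

winning_moves = {}  # state -> move that a past winner played from that state

def record_win(history):
    """Save every (state, move) pair the winner played in one game."""
    for state, move in history:
        winning_moves[state] = move

def choose_move(state):
    """Replay a remembered winning move if the state matches; otherwise
    fall back to a random empty square (win/block checks omitted here)."""
    if state in winning_moves:
        return winning_moves[state]
    return random.choice([i for i, c in enumerate(state) if c == "."])

record_win([(".........", 4), ("X...O....", 0)])
print(choose_move("........."))  # 4: replayed from the recorded game
```

The win/block fallbacks would slot in before the random choice; the dictionary is what gets serialized to the moves file.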
|
41,964
|
word2vec neural network - bias units
|
It seems there are indeed no bias units at either layer. In his thesis on neural network based language models, Mikolov states that:
[...] biases are not used in the neural network, as no significant
improvement of performance was observed - following the Occam's razor,
the solution is as simple as it needs to be.
(Mikolov, T.: Statistical Language Models Based on Neural Networks, p. 29)
While this is a quote concerning recurrent neural networks specifically, I am going to assume the same is valid for the Skip-gram model.
|
41,965
|
word2vec neural network - bias units
|
Bias is hidden in the average vector (take the mean of all vectors; the projection of a given vector onto this mean effectively carries a bias).
|
41,966
|
Can I treat a counting response variable as a continuous variable and run OLS?
|
That a random variable is a count variable does not only mean that it takes natural-number values. So, the number of sunny days in a year is not a count random variable, because it is not the result of a counting process. Probably, a day is declared sunny if some bureaucratic criteria are fulfilled, like at least 5 hours of clear sun, or whatever. It is not a count of independent events. Examples of count data are: the number of auto accidents in New York per day, or the number of stillbirths in Guatemala per day. These count independent events, which could as a first approximation be modelled via a Poisson distribution or a Poisson point process. I can see no such Poisson model lurking behind the number of sunny days! For instance, have a look at my answer here: Goodness of fit and which model to choose linear regression or Poisson. The arguments used there are irrelevant in your case.
Back to your question of whether "the count is big": it is not the size in itself that matters; big counts could still be Poisson (though big counts would in practice often be clustered, and some model more complicated than Poisson would be needed). For the number of sunny days in a year, you could certainly try ordinary linear regression as a starting point.
To elaborate on why "number of sunny days" is not a count variable: first, the number of hours of (sufficiently strong) sunshine is measured at meteorological stations with a Campbell–Stokes recorder, see https://en.wikipedia.org/wiki/Campbell%E2%80%93Stokes_recorder
It works by focusing the sun onto a paper card and burning a path there when the sun is strong enough. One then has to measure the length of the burnt path. That gives a measured variable, not a count variable! The underlying process is measurement, not counting. This is then converted into a binary sunny/not-sunny indicator by some arbitrary ("bureaucratic") cutoff. I hope this is a better explanation of my answer!
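As a small aside on why OLS is often a reasonable starting point for large counts: Poisson counts with a large mean are approximately normal. A standard-library sketch (using Knuth's sampling algorithm; all names are mine):

```python
import math
import random

def poisson(lam):
    """Draw one Poisson(lam) variate via Knuth's multiplicative algorithm
    (fine for moderate lam; runtime grows linearly with lam)."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while p > threshold:
        k += 1
        p *= random.random()
    return k - 1

random.seed(0)
draws = [poisson(200) for _ in range(5000)]
mean = sum(draws) / len(draws)
var = sum((d - mean) ** 2 for d in draws) / len(draws)
# for Poisson, mean ~= variance ~= lam; with lam this large the
# distribution is close to Normal(lam, lam), so OLS is not unreasonable
```

The sample mean and variance should both come out near 200, matching the Normal(λ, λ) approximation.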
|
41,967
|
Can I treat a counting response variable as a continuous variable and run OLS?
|
At the fundamental level of chemistry and atomic theory, one could argue that the world is discrete rather than continuous. One might argue that continuous variables themselves are just very useful approximations to an underlying discrete reality. So clearly it is OK to treat counts as continuous variables. We do it all the time in practice.
This is different from the issue of whether Poisson approximations are appropriate for any particular application. The answer from @kjetil covers that well.
|
41,968
|
Is the decision boundary of a logistic classifier linear?
|
There are various different things that can be meant by "non-linear" (cf., this great answer: How to tell the difference between linear and non-linear regression models?) Part of the confusion behind questions like yours often resides in ambiguity about the term non-linear. It will help to get clearer on that (see the linked answer).
That said, the decision boundary for the model you display is a 'straight' line (or perhaps a flat hyperplane) in the appropriate, high-dimensional, space. It is hard to see that, because it is a four-dimensional space. However, perhaps seeing an analog of this issue in a different setting, which can be completely represented in a three-dimensional space, might break through the conceptual logjam. You can see such an example in my answer here: Why is polynomial regression considered a special case of multiple linear regression?
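To make the 'straight line / flat hyperplane' point concrete, here is a minimal sketch (hypothetical coefficients, not from any fitted model): the predicted probability of a logistic model is exactly 0.5 wherever the linear predictor is zero, i.e. on the hyperplane b + w·x = 0, which is flat no matter how many features there are.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical fitted coefficients for a 3-feature logistic model
w = [0.8, -1.2, 0.5]
b = 0.3

def predict_proba(x):
    z = b + sum(wi * xi for wi, xi in zip(w, x))
    return sigmoid(z)

# Any point satisfying b + w.x = 0 lies exactly on the decision boundary:
# pick x1, x2 freely and solve for x3.
x1, x2 = 1.0, 2.0
x3 = -(b + w[0] * x1 + w[1] * x2) / w[2]
p = predict_proba([x1, x2, x3])
print(round(p, 6))  # prints 0.5: exactly on the boundary
```

The set of such points is a plane in feature space; curvature only appears if you plot the boundary against transformed (e.g. polynomial) features.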
|
41,969
|
Choosing Between Additive and Multiplicative Model?
|
I would go for additive too. As your apparent signal seems to be of low frequency, you can go a little beyond that, at least empirically. You can check, for instance, the homoscedasticity of finite differences of the data (first or second order). Differencing acts as a very crude high-pass filter, in whose output you can expect the noise to be dominant.
If your signal were much longer, moving windows and Fourier transforms could be of help.
However, as for forecasting, you can run both models in parallel and decide which one to apply based, for instance, on whichever has performed best on past data. This is a heuristic method that I have recently used in the prediction of outcomes for hybrid system co-simulation, where no model is known: perform different extrapolations in parallel, very fast, and decide. It is not very theoretical, but it works well on our data.
If you are interested, I could elaborate. The reference is: CHOPtrey: contextual online polynomial extrapolation for enhanced multi-core co-simulation of complex systems
As the data is quite short, and I am not sure we have a full seasonal period, I tried to perform some Fourier analysis on the data, its gradient and Laplacian. The fluctuation seems to be quite periodic, so on the bottom plot I have attempted to design a "filtering" moving average. The residual does not vary much in amplitude, and it really does not seem to be random.
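The crude finite-difference check mentioned above can be sketched like this (toy numbers standing in for the series; the idea, not the data, is the point): difference the data, then compare the spread of the differences across the two halves. Roughly equal variances are consistent with an additive model; variance growing with the level would point towards a multiplicative one.

```python
import statistics

# Toy series standing in for a short seasonal series (hypothetical numbers)
y = [112, 118, 132, 129, 121, 135, 148, 148, 136, 119, 104, 118,
     115, 126, 141, 135, 125, 149, 170, 170, 158, 133, 114, 140]

# First differences act as a very crude high-pass filter
diffs = [b - a for a, b in zip(y, y[1:])]

# Rough homoscedasticity check: spread of the differences in each half
half = len(diffs) // 2
v1 = statistics.variance(diffs[:half])
v2 = statistics.variance(diffs[half:])
print(v1, v2, max(v1, v2) / min(v1, v2))
```

A variance ratio far from 1 would be a hint against the additive form; on short series like this one, treat it only as a rough indication.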
|
41,970
|
Choosing Between Additive and Multiplicative Model?
|
I took the 55 values and used AUTOBOX to automatically detect a hybrid model possibly including deterministic structure as well as ARIMA structure. The plot of the original data and the ACF plot of the original series is here. AUTOBOX concluded that a single trend and 3 seasonal dummies were more appropriate than SARIMA, while also including AR structure of order 1. Here is the model, and here are the statistical summaries.
The residual plot is here, suggesting sufficiency, with the companion ACF of the residuals here.
The Actual, Fit and Forecast plot is here, and the outlier-adjusted plot clearly suggests the need for the 4 pulses in the model. Finally, the forecast plot is here for the next 8 periods.
Transformations such as logarithms or multiplicative models need to be justified and suggested by the data or by a user with domain knowledge. That was not so in this case. See here for when power transforms are needed: When (and why) should you take the log of a distribution (of numbers)? Note that AUTOBOX essentially converged on the Holt-Winters additive seasonal model with trend, 4 anomalies and a highly significant AR(1) coefficient.
COMMENTS FOR LAURENT:
Three of the four deterministic components were required (trend, seasonal (quarterly) dummies and pulses), while the AR(1) structure was also needed to deal with short-term memory.
|
41,971
|
LSTM: how to feed the Network with a mini batch? When to reset the LSTM state?
|
when should I reset the state of the LSTMs?
Typically, for each new input, i.e. for each sample.
how to feed the Network with a mini batch?
Typically, samples are padded so that all samples in a mini batch have the same length, for programming and performance reasons.
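A minimal sketch of the padding step (pure Python, framework-agnostic; real libraries also return a mask so the padding positions can be ignored by the loss):

```python
def pad_batch(sequences, pad_value=0):
    """Right-pad all sequences in a mini-batch to the length of the longest."""
    max_len = max(len(s) for s in sequences)
    return [s + [pad_value] * (max_len - len(s)) for s in sequences]

batch = [[5, 3, 8], [1, 2], [7, 7, 7, 7]]
padded = pad_batch(batch)
print(padded)  # every row now has length 4
```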
|
41,972
|
Power and sample size in regression context?
|
There are two main approaches to power analysis:
When your design conforms to "classical standards" with regard to estimators used and distributional assumptions, then the formulae from Cohen (most of which are much older than that reference) are mathematically correct, provably so.
When your design starts to depart from these standards, either because you are using nonstandard estimators (for whatever reason) or because there are other wrinkles with your data generation or selection process, theory generally breaks down quickly. Whilst there do exist formulae for a few cases which are very close to the classical paradigm, the normal approach is simulation. If you believe your effect is of a certain magnitude then simulate, say, 10000 datasets of a given sample size, with this magnitude of effect. Apply your chosen estimator to each of these datasets, and see how many return a significant result. Then, adjust the sample size to suit your needs (if not enough of the replicates are significant, you should increase the sample size. If more were significant than were required, you can get away with reducing it.)
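The simulation recipe above can be sketched as follows, for the simplest case of a slope test in simple linear regression. This is a sketch, not a replication of any particular package: it uses the large-sample normal critical value rather than the exact t quantile, and assumes standard-normal predictors.

```python
import math
import random
import statistics

def simulate_power(n, slope, sigma=1.0, reps=2000, seed=0):
    """Monte-Carlo power of the two-sided slope test in simple linear
    regression (normal approximation to the t test)."""
    rng = random.Random(seed)
    z_crit = 1.959964  # two-sided 5% normal critical value
    hits = 0
    for _ in range(reps):
        x = [rng.gauss(0.0, 1.0) for _ in range(n)]
        y = [slope * xi + rng.gauss(0.0, sigma) for xi in x]
        mx, my = statistics.mean(x), statistics.mean(y)
        sxx = sum((xi - mx) ** 2 for xi in x)
        b1 = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sxx
        resid = [yi - my - b1 * (xi - mx) for xi, yi in zip(x, y)]
        s2 = sum(r * r for r in resid) / (n - 2)
        se = math.sqrt(s2 / sxx)
        if abs(b1 / se) > z_crit:
            hits += 1
    return hits / reps

# Increase n until the estimated power reaches the target (e.g. 0.8)
print(simulate_power(n=50, slope=0.3))
```

The same skeleton carries over to nonstandard estimators: replace the least-squares fit inside the loop with whatever procedure you actually plan to use.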
|
41,973
|
What is the difference between denoising autoencoder and contractive autoencoder?
|
No. The CAE tries to make the encoder (i.e. mapping from input to hidden layer) have the property of locality, i.e. small changes in input lead to small changes at hidden layer. This is a nice property because it means the mapping is not too sensitive, which should help it generalise beyond the training data.
(There is an extra complication: the CAE tries in particular to enforce locality along the direction of the low-dimensional manifold which all autoencoders assume is present in the input data.
My understanding of this is really based on the original paper (Rifai et al.). This video explains the directionality a bit differently.
I find this part a bit harder to explain. I suggest to concentrate on the locality in the first place.)
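To make the locality property concrete: the CAE's contractive penalty is the squared Frobenius norm of the encoder's Jacobian, which one can estimate by finite differences for a tiny hypothetical encoder (this is purely illustrative, not the training code from the Rifai et al. paper):

```python
import math

def encode(x, W):
    """Tiny hypothetical encoder: h_j = sigmoid(sum_i W[j][i] * x[i])."""
    return [1.0 / (1.0 + math.exp(-sum(w * xi for w, xi in zip(row, x))))
            for row in W]

def contraction_penalty(x, W, eps=1e-5):
    """Squared Frobenius norm of the encoder Jacobian dh/dx at x,
    estimated by forward finite differences."""
    h0 = encode(x, W)
    total = 0.0
    for i in range(len(x)):
        xp = list(x)
        xp[i] += eps
        hp = encode(xp, W)
        for j in range(len(h0)):
            total += ((hp[j] - h0[j]) / eps) ** 2
    return total

W = [[0.5, -0.3], [0.2, 0.8]]
print(contraction_penalty([0.1, -0.4], W))
```

A small penalty means small input perturbations barely move the hidden representation, which is exactly the insensitivity the CAE's regulariser rewards.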
|
41,974
|
How to properly do stacking/meta ensembling with cross validation
|
Only method 2 is valid i.e. without any leakage.
When doing stacking/ensembling, the golden rule is that the meta model must be trained on a dataset separate from the dataset(s) used to train the base models.
This is really important; otherwise you risk getting poor generalization (test accuracy) because of target leakage.
Target leakage is the issue of having some features (here the predictions of the base models) which are abnormally correlated with the target.
This happens if you train meta model and base models on the same dataset because predictions of the base models on this dataset are unrealistically very good (they leak the target).
If you do use a separate dataset to train the meta model, it correctly replicates what will happen at test time i.e. predictions of base models will be made on new data that the base models have never seen before (which is the case in method 2).
I quote this article :
"It is important that the meta-learner is trained on a separate dataset to the examples used to train the level 0 models to avoid overfitting."
Here is another simple cross validation method without any target leakage :
splits: A B C
1st iteration of cross validation with :
X, Y, Z = A, B, C
First Layer Models
fit base models on X
Meta Ensemble
fit meta model on Y (with predictions of base models)
evaluate it on Z (with predictions of base models)
Then repeat same procedure for all permutations :
2nd iteration with X, Y, Z = A, C, B
3rd iteration with X, Y, Z = B, A, C
4th iteration with X, Y, Z = B, C, A
5th iteration with X, Y, Z = C, A, B
6th iteration with X, Y, Z = C, B, A
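The six iterations above are simply the permutations of the three splits, so the schedule can be enumerated directly (a schematic sketch, with the actual model fitting left as described in the text):

```python
from itertools import permutations

folds = ["A", "B", "C"]

# One iteration per permutation: base models fit on X, the meta model fit
# on Y (using the base models' predictions), evaluation done on Z.
schedule = list(permutations(folds))
for i, (X, Y, Z) in enumerate(schedule, 1):
    print(f"iteration {i}: fit base models on {X}, "
          f"fit meta model on {Y}, evaluate on {Z}")
```

Because X, Y and Z are always disjoint, the meta model never sees base-model predictions on their own training data, and the evaluation fold is unseen by both layers.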
|
41,975
|
How to properly do stacking/meta ensembling with cross validation
|
I think the only way to really determine this is to experiment. I made a small one here. I split the dataset in two; in the first case I trained the base models and the stacking model on the same training data, and in the second I trained the stacking model on the other half of the data. The accuracy of the second was slightly higher. However, this could be explained away by the additional data that model gets to see. At the end of the day I think either method will work as long as the underlying models generalize well. It will also depend on how many observations there are to play with, training time, etc.
library(caret)
data("segmentationData")
segmentationData <- segmentationData[,c(-1,-2)]
inTrain = createDataPartition(segmentationData$Class, list = FALSE, p = 0.5)
x.train <- segmentationData[inTrain,]
x.lg <- segmentationData[-inTrain,]
fit.knn <- train(Class ~ ., x.train, method = "knn")
fit.svm <- train(Class ~ ., x.train, method = "svmRadial")
## Train Logistic Regression with same training data
e.train <- data.frame(knn = predict(fit.knn, x.train), svm = predict(fit.svm, x.train), Class = x.train$Class)
fit.lgA <- train(Class ~ ., e.train, method = "glm")
## Train Logistic Regression with different training data
e.train <- data.frame(knn = predict(fit.knn, x.lg), svm = predict(fit.svm, x.lg), Class = x.lg$Class)
fit.lgB <- train(Class ~ ., e.train, method = "glm")
resamps <- resamples(list(diff = fit.lgB, same = fit.lgA))
library(lattice)
bwplot(resamps)
> summary(resamps)
Call:
summary.resamples(object = resamps)
Models: diff, same
Number of resamples: 25
Accuracy
Min. 1st Qu. Median Mean 3rd Qu. Max. NA's
diff 0.7762 0.8142 0.8262 0.8249 0.8356 0.8753 0
same 0.7865 0.8037 0.8128 0.8148 0.8255 0.8538 0
Kappa
Min. 1st Qu. Median Mean 3rd Qu. Max. NA's
diff 0.5019 0.601 0.6273 0.6214 0.6466 0.7257 0
same 0.5380 0.575 0.5917 0.5955 0.6223 0.6776 0
Perhaps use this as a template for your own experiments :)
|
41,976
|
Determine the PMF of the multiplication of two discrete random variables
|
If $X,Y$ are independent discrete random variables with supports $\cal X, \cal Y$ (i.e. the sets of values where their PMFs are non-zero), then $$P(XY=c) = \sum_{\substack{x \in \cal X,\ y \in \cal Y \\ xy=c}} P(X=x,Y=y) = \sum_{\substack{x \in \cal X,\ y \in \cal Y \\ xy=c}} P(X=x)P(Y=y).$$
The first equality is true for any discrete random variables. The second is true for independent ones.
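A small sketch of the formula for the independent case, with the PMFs given as value-to-probability dictionaries (hypothetical fair three-sided dice as the example):

```python
from collections import defaultdict
from fractions import Fraction

def product_pmf(pmf_x, pmf_y):
    """PMF of XY for independent discrete X, Y given as {value: prob} dicts:
    sum P(X=x)P(Y=y) over all pairs with xy = c."""
    out = defaultdict(Fraction)
    for x, px in pmf_x.items():
        for y, py in pmf_y.items():
            out[x * y] += px * py
    return dict(out)

# Two independent fair three-sided dice (illustrative)
p = {1: Fraction(1, 3), 2: Fraction(1, 3), 3: Fraction(1, 3)}
pmf = product_pmf(p, p)
print(pmf[6])  # P(XY = 6) = P(2,3) + P(3,2) = 2/9
```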
|
41,977
|
The importance of the Gaussian copula
|
Now my question is, what exactly makes the Gaussian copula so important among all the possible choices of copulas?
Is it? What makes you say it's especially important?
suppose that their joint distribution is not a multivariate normal. Then my understanding is that since marginals are decoupled from the copula, that their joint distribution (being non-normal) cannot have a Gaussian copula.
Correct; with normal margins if it did have a Gaussian copula it would be multivariate normal. Without a Gaussian copula it would not be multivariate normal.
But will they still be in some sense 'well approximated' by a Gaussian copula?
It depends on the copula they do have, but in general, no.
Furthermore, what if our marginals are not normal, now it seems even less justifiable to use a Gaussian copula,
It depends on why it's being used. It may be a reasonable approximation or it may not.
rather than just choosing the copula that fits the data the best in [0,1]n space.
If you have no particular reason to choose a Gaussian copula it may be a convenient - but perhaps not an ideal - choice.
Could you also speak to the family of meta-Gaussian distributions?
There's no distinction between "distributions with a Gaussian copula" and "meta Gaussian distributions". So we've been discussing the dependence structure of the family of meta-Gaussian distributions all along.
In some areas people are tempted to use the Gaussian copula in multivariate situations because more generally copulas are more work once you move beyond the bivariate case and in some ways the Gaussian case is easy to work with (if you transform the margins to normal you can just fit a multivariate Gaussian). However, there are vine copulas, for example.
The Gaussian copula is frequently inadequate -- it can't model tail dependence, for example, making it unsuitable for the many situations where tail dependence exists. This stuff is pretty well documented in basic books and papers on copulas though. Indeed, misuse of the Gaussian copula to model dependence among debt defaults was credited with making the global financial crisis worse - precisely because as you condition on being in the tail, the Gaussian copula does essentially the exact opposite of what's needed for describing the dependence (becoming less and less dependent when the process being modelled becomes more dependent, at least in the tail that mattered in the crisis).
The Gaussian copula is most popular when dealing with elliptical distributions, for which there's at least some argument for considering it, since the correlation coefficients still have a relatively direct interpretation. Otherwise, they're just parameters of the dependence structure, and it would usually be better to consider the actual characteristics of the dependence you have (or at least the most essential characteristics of it).
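To see the vanishing tail dependence numerically, one can sample from a bivariate Gaussian copula directly (a sketch: correlated normals via the usual rho*z1 + sqrt(1 - rho^2)*z2 construction, mapped through the normal CDF so the margins are uniform):

```python
import math
import random

def phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def gaussian_copula_sample(rho, n, seed=0):
    """Draw n pairs (u, v) on [0,1]^2 from a bivariate Gaussian copula
    with correlation rho."""
    rng = random.Random(seed)
    pairs = []
    for _ in range(n):
        z1 = rng.gauss(0.0, 1.0)
        z2 = rho * z1 + math.sqrt(1.0 - rho * rho) * rng.gauss(0.0, 1.0)
        pairs.append((phi(z1), phi(z2)))  # margins uniform by construction
    return pairs

# Empirical hint of the missing tail dependence: P(V > t | U > t) stays
# well below 1 and shrinks towards 0 as t -> 1, for any |rho| < 1.
pairs = gaussian_copula_sample(0.7, 50000)
t = 0.99
tail = [v for u, v in pairs if u > t]
share = sum(v > t for v in tail) / len(tail) if tail else 0.0
print(share)
```

A copula with upper tail dependence (e.g. a Gumbel copula) would instead keep this conditional probability bounded away from zero as t approaches 1.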
|
41,978
|
Optimizing a "black box" function: Linear Regression or Bayesian Optimization... what's the difference?
|
Your understanding is correct.
BO inherently measures the uncertainty of regions of your search space. And the acquisition function governs the tradeoff between exploring a point in a region with high uncertainty versus exploring further in a region with lower uncertainty, but a higher value.
By contrast, vanilla regression models assume equal variance - while you can locate the maximum of a polynomial model within some box, the search will be excessively local and not have a great exploration-exploitation tradeoff.
But this just repeats what you already know.
Typical mean functions in BO (and GPs generally) are either 0 or another constant, with all of the heavy lifting done by the kernel function. This is mostly a computational trick, because in this case predictions are easily made via linear algebra; otherwise, you have to resort to simulation.
The Jones 1998 paper compares GP and polynomial regression on page 464. This is not strictly the same model that you propose (choosing polynomial terms by CV), but it's consistent with your aims.
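To make the uncertainty point concrete, here is a minimal sketch (plain numpy, not any particular BO library's API; the RBF length scale, the toy objective, and the observation points are all made up) of a zero-mean GP posterior plus an upper-confidence-bound acquisition:

```python
import numpy as np

def rbf(a, b, length=0.15):
    """Squared-exponential kernel matrix between two sets of 1-D points."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length) ** 2)

def gp_posterior(x_train, y_train, x_query, noise=1e-8):
    """Zero-mean GP posterior mean and variance via plain linear algebra."""
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf(x_query, x_train)
    Kinv = np.linalg.inv(K)
    mean = Ks @ Kinv @ y_train
    var = 1.0 - np.sum((Ks @ Kinv) * Ks, axis=1)  # prior variance k(x, x) = 1
    return mean, var

# Toy black-box observations
x_train = np.array([0.1, 0.4, 0.9])
y_train = np.sin(3.0 * x_train)

x_query = np.linspace(0.0, 1.0, 101)
mean, var = gp_posterior(x_train, y_train, x_query)

# Upper-confidence-bound acquisition: trades off a high predicted mean
# (exploitation) against high predictive uncertainty (exploration).
ucb = mean + 2.0 * np.sqrt(np.clip(var, 0.0, None))
x_next = x_query[np.argmax(ucb)]
```

The predictive variance is essentially zero at the three observed inputs and grows in the gaps between them, which is exactly the information a polynomial fit with a single error variance does not give you.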
|
41,979
|
How does numer.ai make predictions about the future?
|
If you sort the t_id column in the test dataset you will see that it goes from 1-36000. I would assume that it refers to "trade id". The way financial time series forecasting works is that you usually take lagged values of features at time t-1 and use them to predict the target value at time t, so I would assume that all the features from 1-21 are lagged values from the previous week and the target variable must be the price increase/decrease for a particular trade id.
If you look at the probabilities that your algo outputs you can see that they are usually in a range of 0.45-0.55 or something like this, so it's not very precise, but you still get a slight edge over random results. This explains the large testing set of 36000: in order to squeeze out that small edge you need to make a lot of trades. Here Renaissance Technologies (arguably the best quant fund) mentions that they have a very slight edge in forecasting prices, but they exploit it via a large number of trades, plus they use a lot of leverage to enhance these returns. I would assume Numerai is doing something similar.
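As a toy illustration of the lagged-feature construction described above (made-up numbers, not Numerai's actual features or target):

```python
import numpy as np

# Hypothetical weekly closing prices for one instrument
prices = np.array([100.0, 101.5, 101.0, 102.3, 103.1, 102.8])
returns = np.diff(prices) / prices[:-1]   # week-over-week returns

# Lag-1 design: last week's return is the feature, this week's
# direction (up = 1, down = 0) is the binary target.
X = returns[:-1].reshape(-1, 1)           # value at t-1
y = (returns[1:] > 0).astype(int)         # outcome at t
```

A real feature matrix would stack many such lagged (and anonymised) columns, but the t-1 → t alignment is the key point.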
|
41,980
|
How does numer.ai make predictions about the future?
|
You give them your predictions and not your classifier.
They're lowering the barrier to entry in one area, and not all of them. Sure, you could start up a fund of your own or sell your classifier to some other firm, but (or so I think goes their thought process and business model) the competition that they're providing via incentivisation will lead to them coming out ahead (or at least tied with the best), as everyone will be incentivized to share their future predictions with them.
|
41,981
|
How to Find Confidence Level of Results
|
Since you are uncertain about what may be the true conversion rate of the ad, you should consider using a Bayesian approach:
Let $X$ be an r.v. such that $X=1$ if the ad is successful. So $X \sim Be(p)$, where $p$ is the conversion rate of the ad, and let's assume further that $p \sim U(0,1)$. So basically you have no prior belief about how efficient the ad might be (this of course can be changed by changing the prior distribution).
Further for $n$ independent $X_i$, $Y \equiv {\sum_n X_i} \sim Bin(n,p)$
From Bayes' theorem:
$$f(p| Y=y) = \frac{Pr(Y=y|p)f(p)}{Pr(Y=y)}$$
From distribution of $Y$, we know that $Pr(Y=y|p) = {n\choose y} p^y(1-p)^{n-y}$ and $f(p)=1$
Unconditional probability of $Y=y$:
\begin{align}
Pr(Y=y)&={n\choose y}\int_0^1 p^y(1-p)^{n-y} f(p)dp \\
&= {n\choose y}\int_0^1 p^y(1-p)^{n-y}dp \\
&= {n\choose y}\frac{y!(n-y)!}{(n+1)!} \\
&= \frac{1}{n+1}
\end{align}
See this for the proof of above integral.
So we have:
$$f(p| Y=y) = {n\choose y}(n+1)p^y(1-p)^{n-y}$$
So you now have a full posterior distribution of conversion rate. You can simulate this distribution and/or sample from it to construct your credible interval, preferably a highest posterior density interval, to get an estimate of upper and lower bounds of your estimate for a given level of confidence.
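That numerical route can be sketched without any distributional shortcuts: evaluate the unnormalised posterior $p^y(1-p)^{n-y}$ derived above on a fine grid and invert its CDF (Python here alongside the R below; a deterministic grid rather than sampling, using the question's $n=1000$, $y=10$):

```python
import numpy as np

n, y = 1000, 10  # impressions and conversions from the question

# Unnormalised log-posterior on a fine grid (uniform prior):
p = np.linspace(1e-6, 1 - 1e-6, 1_000_000)
log_post = y * np.log(p) + (n - y) * np.log(1 - p)
post = np.exp(log_post - log_post.max())  # rescale to avoid underflow

# Turn it into a CDF and invert for an equal-tailed 95% credible interval
cdf = np.cumsum(post)
cdf /= cdf[-1]
lo = p[np.searchsorted(cdf, 0.025)]
hi = p[np.searchsorted(cdf, 0.975)]
# lo ≈ 0.0055, hi ≈ 0.0183
```

This gives an equal-tailed interval; a highest posterior density interval would instead scan for the shortest interval containing 95% of the mass.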
EDIT: Embarrassingly, didn't notice that the posterior is a beta distribution.
So, $$p \mid Y=y \sim Beta(\alpha, \beta)$$
where $\alpha = y+1$ and $\beta=n-y+1$.
Therefore the 95% credible interval (i.e. at the 5% significance level) can be calculated using R:
> n=1000
> y=10
> a=y+1; b=n-y+1
> qbeta(0.025,shape1 = a,shape2 = b) # p_lower
[1] 0.005498084
> qbeta((1-0.025),shape1 = a, shape2 = b) # p_upper
[1] 0.01829503
So for your sample:
$$Pr(0.0055 \leq p \leq 0.0183 | Y=10)=0.95$$
|
41,982
|
How to Find Confidence Level of Results
|
The classical parametric approach in frequentist statistics would be to model the situation as follows. Suppose all the people who see a certain advertisement are independent of each other and have the same probability $p$ of clicking on the advertisement. We say these people form a Bernoulli population, i.e. a member $X$ of this population either reacts ($X=1$) to the advertisement or does not ($X=0$) with probabilities $p$ and $1-p$ respectively. Suppose you now take a sample of size $n$ ($n=1000$ in your case) from the population, so you observe $X_1, \ldots, X_n$ and see how many of them have clicked. In other words, we observe the sample average
$$
\bar{X} = \frac{1}{n} \sum_{i=1}^n X_i
$$
In your case, you observe $\bar{X}=0.01$ and you are using this sample average as an estimate of the unknown population parameter $p$, i.e. $\hat{p}=\bar{X}$.
The statistical exercise we have to make now is deciding how 'reliable' or how 'precise' this estimate is. The central question here is: assuming the population fraction is indeed $p$, what values of our estimator $\bar{X}$ are we likely to observe in a sample of size $n$? This brings probabilities into the story. Specifically, we know (possibly to be explained further) that the sample sum $n\bar{X}$ is binomially distributed:
$$
\text{Prob}[n\bar{X} = i] = \binom{n}{i} p^i(1-p)^{n-i}\,,\qquad i=0,\ldots,n
$$
This is the point now where most people will suggest the approximation $\bar{X}\sim \text{N}(p,\frac{p(1-p)}{n})$, which works fine when $np$ and $n(1-p)$ are both sufficiently large. So instead of the discrete binomial distribution, we now use the continuous normal distribution which is somewhat easier to work with. For example, it follows (could also be explained further) that
$$
\frac{\bar{X} - p}{\sqrt{\frac{p(1-p)}{n}}} \sim \text{N}(0,1)
$$
has the standard normal distribution. Note that this distribution now no longer depends on $p$ or any other population parameter and for that reason, we call this quantity a pivot. Pivots are extremely useful to construct hypothesis tests and/or confidence intervals from. In our case, let $z_{\alpha/2}$ and $z_{1-\alpha/2}$ be the left and right quantiles of the standard normal distribution (in case $\alpha=0.05$ then these are the familiar values -1.96 and 1.96 respectively), then using the known distribution of the pivot:
$$
1-\alpha = \text{Prob}[ z_{\alpha/2} < \frac{\bar{X} - p}{\sqrt{\frac{p(1-p)}{n}}} < z_{1-\alpha/2} ]
$$
which after some manipulations becomes
$$
1-\alpha = \text{Prob}[ \bar{X} - z_{1-\alpha/2} \sqrt{\frac{p(1-p)}{n}} < p < \bar{X} - z_{\alpha/2} \sqrt{\frac{p(1-p)}{n}} ]
$$
These bounds form an (unobserved) confidence interval. Note that the left and right bound are not proper statistics. That means that strictly speaking, these left and right bounds can not be computed from a sample because we don't know what $p$ is! Yet another pragmatic approximation is to use the estimate $\hat{p}=\bar{X}$ instead of $p$ in these bounds. That brings us finally to:
$$
\text{a } (1-\alpha)\text{-CI for }p\text{ is }[\bar{X} - z_{1-\alpha/2} \sqrt{\frac{\bar{X}(1-\bar{X})}{n}} , \bar{X} - z_{\alpha/2} \sqrt{\frac{\bar{X}(1-\bar{X})}{n}}]
$$
This often used way of constructing a CI is the culmination of two approximations: first we use the CLT to approximate the binomial distribution by a normal distribution, and then we estimate the standard error by using $\hat{p}=\bar{X}$. In most cases the resulting CI is useful. Sometimes, however, the lower bound can be negative or the upper bound larger than 1, which makes no practical sense for a parameter like $p$.
For $\alpha=0.05$, $n=1000$ and $\bar{X}=0.01$ we have the CI $[0.0038,0.0162]$.
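The final computation is a one-liner in any language; here is a minimal check of those numbers (a stats library's normal quantile function would replace the hard-coded 1.96):

```python
import math

n, p_hat, z = 1000, 0.01, 1.96  # sample size, observed fraction, z_{0.975}

# Wald interval: estimate +/- z * estimated standard error
se = math.sqrt(p_hat * (1 - p_hat) / n)
lo, hi = p_hat - z * se, p_hat + z * se
# lo ≈ 0.0038, hi ≈ 0.0162, matching the interval stated above
```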
|
41,983
|
How to Find Confidence Level of Results
|
Assuming that all people make independent choices, we can model the experiment as draws $X_i$ from a Bernoulli($p$) distribution with $E(X_i) = p$ and $\sigma^2(X_i) = p(1-p)$. For large $n$ ($n = 1000$ in your case) the independent draws from the Bernoulli distribution can be modelled as a Normal distribution according to the Central Limit Theorem. We then have for our Maximum Likelihood Estimate $\hat{p}$ ($\hat{p} = 1\%$ in your first example and $\hat{p} = 3\%$ in your second one)
$$ \frac{\hat{p} - p}{\sigma_p/\sqrt{n}} \sim N(0,1)$$
From this, we can construct the 'Confidence Interval'
$$ \hat{p} \pm \frac{z_{\alpha/2}}{\sqrt{n}}\sigma_p $$
where $z_{\alpha/2}$ is the required z-score for the interval to contain the true value with probability $1-\alpha$. In order to make this a true Confidence Interval we need an estimate of $\sigma_p$ and could for example use $\sigma_p \approx \sigma_\hat{p} = \sqrt{\hat{p}(1-\hat{p})}$.
In your first example we then obtain the 95% Confidence Interval [0.38, 1.62] (in percent) and in your second example [1.94, 4.06].
Note that the second Confidence Interval is larger, despite the same number of people that participated in the experiment. This is due to the fact that $\sigma_\hat{p} = \sqrt{\hat{p}(1-\hat{p})}$ has its maximum for $\hat{p} = 0.5$. This fact is often used in opinion polls (for example for elections) that can be modelled as draws from a Bernoulli distribution as well. You will notice that the number of people that are being asked is often at around 1000, which leads to a margin of error $\pm 3\%$ at the 95% confidence level using the same calculation as above (under the assumption that $p=0.5$).
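A small sketch of that margin-of-error behaviour (plain arithmetic, no particular library assumed):

```python
import math

def margin(p_hat, n, z=1.96):
    """Approximate half-width of the 95% confidence interval for a proportion."""
    return z * math.sqrt(p_hat * (1 - p_hat) / n)

n = 1000
m1 = margin(0.01, n)      # first example: about 0.0062 (0.62 points)
m2 = margin(0.03, n)      # second example: about 0.0106 -- a wider interval
m_poll = margin(0.50, n)  # the polling worst case: about 0.031, i.e. ~3%
```

The widening from `m1` to `m2`, and the familiar ±3% polling figure, both drop straight out of $\sqrt{\hat{p}(1-\hat{p})}$ peaking at $\hat{p}=0.5$.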
|
41,984
|
How to combine/pool binomial confidence intervals after multiple imputation?
|
This is indeed an interesting problem. The issue is that the standard errors that are based on the central limit theorem for proportions are often undesirable because proportions are a computed quantity and for that reason exhibit skewed uncertainty over sampling. The Wilson score, as you mentioned, gets around the skewness by estimating a different quantity than the standard proportion $k/n$. What you need to use Rubin's rules is an estimate of the within-imputation variance of this transformed proportion, which is just the variance/standard error estimated on a single dataset, along with the transformed proportion itself for each dataset.
So for the Wilson score interval, you first need to calculate the transformed estimate
$
\hat{p} + \frac{1}{2n}z^2
$
and then separately the variance, which from your formula is
$
(z\sqrt{\frac{1}{n} \hat{p}(1-\hat{p}) + \frac{1}{4n^2}z^2})^2
$
That will give you estimates of the transformed parameter and the transformed parameter's variance for each of $m$ datasets.
You can then combine these estimates using some of the available R tools, such as mi.meld from Amelia, mice as you mentioned, or the R package mitools. Then once you have the transformed parameters, you can compute the confidence interval based on the newly derived variance/parameter estimate.
This would be easier if these R packages supplied the transformed estimates instead of just the confidence intervals, but you can probably dig them out of the associated R code.
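For reference, the pooling step itself is short. Here is a hand-rolled sketch of Rubin's rules (Python rather than the R packages above; the estimates and variances are illustrative numbers only, and a normal quantile is used where the full rules would use a $t$ reference with Barnard-Rubin degrees of freedom):

```python
import math

def pool_rubin(estimates, variances):
    """Combine per-imputation estimates and variances via Rubin's rules."""
    m = len(estimates)
    q_bar = sum(estimates) / m                # pooled point estimate
    w = sum(variances) / m                    # within-imputation variance
    b = sum((q - q_bar) ** 2 for q in estimates) / (m - 1)  # between-imputation
    t = w + (1 + 1 / m) * b                   # total variance
    return q_bar, t

# Hypothetical transformed proportions and their variances from m = 3
# imputed datasets
est = [0.112, 0.108, 0.115]
var = [0.0004, 0.0005, 0.0004]
q_bar, t = pool_rubin(est, var)
ci = (q_bar - 1.96 * math.sqrt(t), q_bar + 1.96 * math.sqrt(t))
```

The interval is then back-transformed to the proportion scale if needed.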
|
41,985
|
Is the Brier Score appropriate for ordered categorical data?
|
In your case, the Brier score still "works" in that it incentivizes you to state the true forecast distribution (it is a proper scoring rule, see Gneiting and Raftery, 2007). For example, if the true probabilities were 0, 1/4, 1/4, 1/2, you would minimize your expected Brier score by actually stating these probabilities.
Regarding your points 2 and 3: The Brier score is proper for both nominal and ordered data. However, as your example shows, it does not care about the forecast probabilities in the bins next to the one that realized. Whether this is problematic or not is largely a philosophical question. If you dislike this feature of the Brier score, you might use the Ranked Probability Score, defined as
$$RPS = \sum_{i=1}^r BS(i),$$ where $BS(i)$ is the Brier score for the event that the outcome lies within the first $i$ categories (loosely quoting from the post linked above). In your example, Person 1 gets an RPS of $$RPS_1 = 0.25^2 + 0.5^2 + 0.75^2 + 0^2 = 0.875,$$ whereas Person 2 attains an RPS of $$RPS_2 = 0^2 + 0^2 + 0.7^2 + 0^2 = 0.49,$$ thus outperforming Person 1. Note that like the Brier score, the RPS is also a proper scoring rule (and thus justified from a stats theory perspective).
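The two scores are easy to compare numerically. In this sketch (plain numpy, not any particular scoring library) the forecasts are the ones implied by the worked example: Person 1 is uniform over the four bins, Person 2 concentrates mass next to the realised bin:

```python
import numpy as np

def brier(forecast, outcome):
    """Multicategory Brier score: sum of squared probability errors."""
    return float(np.sum((np.asarray(forecast) - np.asarray(outcome)) ** 2))

def rps(forecast, outcome):
    """Ranked probability score: the Brier score applied to cumulative
    probabilities, so near-misses in adjacent bins are penalised less."""
    return float(np.sum((np.cumsum(forecast) - np.cumsum(outcome)) ** 2))

outcome = [0, 0, 0, 1]               # the last category realised
person1 = [0.25, 0.25, 0.25, 0.25]
person2 = [0.0, 0.0, 0.7, 0.3]

rps(person1, outcome)    # 0.875, as computed above
rps(person2, outcome)    # 0.49
brier(person1, outcome)  # 0.75
brier(person2, outcome)  # 0.98
```

Note how the plain Brier score actually ranks Person 1 ahead, because it ignores the ordering of the bins, while the RPS rewards Person 2's probability mass in the bin adjacent to the realised one.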
|
Is the Brier Score appropriate for ordered categorical data?
|
In your case, the Brier score still "works" in that it incentivizes you to state the true forecast distribution (it is a proper scoring rule, see Gneiting and Raftery, 2007). For example, if the true
|
Is the Brier Score appropriate for ordered categorical data?
In your case, the Brier score still "works" in that it incentivizes you to state the true forecast distribution (it is a proper scoring rule, see Gneiting and Raftery, 2007). For example, if the true probabilities were 0, 1/4, 1/4, 1/2, you would minimize your expected Brier score by actually stating these probabilities.
and 3. The Brier score is proper for both nominal and ordered data. However, as your example shows, it does not care about the forecast probabilities in the bins next to the one that realized. Whether this is problematic or not is largely a philosophical question. If you dislike this feature of the Brier score, you might use the Ranked Probability Score, defined as
$$RPS = \sum_{i=1}^r BS(i),$$ where $BS(i)$ is the Brier score for the event that the outcome lies within the first $i$ categories (loosely quoting from the post linked above). In your example, Person 1 gets an RPS of $$RPS_1 = 0.25^2 + 0.5^2 + 0.75^2 + 0^2 = 0.875,$$ whereas Person 2 attains an RPS of $$RPS_2 = 0^2 + 0^2 + 0.7^2 + 0^2 = 0.49,$$ thus outperforming Person 1. Note that like the Brier score, the RPS is also a proper scoring rule (and thus justified from a stats theory perspective).
41,986
Intuition for near-decorrelation through centering
Here is a geometric argument that I think provides the required intuition.
Let us consider an $n\times p$ data matrix $\mathbf U$, where each row is one sample of your random variable $\mathbf u$. Usual "centering" refers to centering the columns of $\mathbf U$; as your $\mathbf u$ has zero mean, the data matrix $\mathbf U$ will be approximately centered in this sense.
Your operation makes row means (as opposed to column means) equal to zero; so I will call it "row-centering".
Now, consider the columns of $\mathbf U$ as vectors in $\mathbb R^n$. Each of these vectors corresponds to one of the $p$ variables (components of $\mathbf u$). Assuming that $\mathbf U$ is [column-]centered, the squared length of each vector is proportional to the variance of that variable, and the cosine of the angle between any two vectors is equal to the correlation between them. This $n$-dimensional geometric view is standard in linear regression, PCA/FA, etc.
That was the setup; now comes the argument.
What you have before row-centering is $p$ vectors of equal length, with the same angle $\arccos(\rho)$ between any two of them (where $\rho$ is their common correlation). The end-points of these vectors form a cloud of points; this cloud lies entirely in one "hyper-quadrant" of $\mathbb R^n$ (because the angles are all below $90^\circ$).
When you do row-centering, you are centering this cloud of points in the usual sense. So you take this cloud of points and shift it to zero. Now all the vectors, instead of pointing in roughly the same direction, are suddenly pointing in various directions away from zero. In other words, the correlations become close to $0$.
To get a better idea of it, consider what happens when $p=2$. There are only two points, so after centering the angle between them is $180^\circ$, corresponding to correlation $-1$, as your formula says. For $p=3$ there are three points forming a perfect triangle; after centering the angles are $120^\circ$, corresponding to correlation $-1/2$. Etc. For larger $p$ you will quickly get to near-zero correlations.
(There really should have been a figure here, but I don't have the time for that now.)
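A quick numerical check of this argument, sketched in Python under the simplifying assumption that the components of $\mathbf u$ are i.i.d. standard normal: after row-centering, each pairwise correlation should sit near $-1/(p-1)$.

```python
import random

def pearson(x, y):
    # Sample Pearson correlation of two equal-length sequences
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

random.seed(1)
n, p = 20000, 4
U = [[random.gauss(0, 1) for _ in range(p)] for _ in range(n)]

# Row-centering: subtract each row's own mean
Uc = [[v - sum(row) / p for v in row] for row in U]

cols = list(zip(*Uc))
r01 = pearson(cols[0], cols[1])  # close to -1/(p-1) = -1/3
```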
41,987
Check uniformly distributed continuous random variable
Although it is meaningless to find a probability (unless you first specify a prior distribution of the endpoints), you can find the relative likelihood. A good basis for comparison would be the alternative hypothesis that the numbers are drawn from a uniform distribution between a lower bound $L$ and upper bound $U$.
Sufficient statistics are the minimum $X$ and maximum $Y$ of all the data (assuming each number is obtained independently). It doesn't matter whether you draw the data in batches or not. When drawn from the interval $[0,1]$, the joint distribution of $(X, Y)$ is continuous and has density
$$\eqalign{f(x,y) &= \binom{n}{1,n-2,1}(y-x)^{n-2}\mathcal{I}(0\le x\le y\le 1) \\ &= n(n-1)(y-x)^{n-2}\mathcal{I}(0\le x\le y\le 1).}$$
When scaled by $U-L$ and shifted by $L$, this density becomes
$$f_{(L,U)}(x,y) = (U-L)^{-n} n(n-1)(y-x)^{n-2}\mathcal{I}(L\le x\le y\le U).$$
Obviously this is greatest when $L = x$ and $U=y$.
The relative likelihood is their ratio, best expressed as a logarithm:
$$\Lambda(X,Y) = \log\left(\frac{f_{(X,Y)}(X,Y)}{f(X,Y)}\right) = -n\log(Y-X).$$
A small value of this is evidence for the hypothesis $(L,U)=(0,1)$; larger values are evidence against it. Of course if $X \lt 0$ or $Y \gt 1$ the hypothesis is controverted. But when the hypothesis is true, for large $n$ (greater than $20$ or so), $2\Lambda(X,Y)$ will have approximately a $\chi^2(4)$ distribution. Assuming $X \ge 0$ and $Y \le 1$, this enables you to reject the hypothesis when the chance of a $\chi^2(4)$ variable exceeding $2\Lambda(X,Y)$ becomes so small you can no longer suppose the large value can be attributed to chance alone.
I will not attempt to prove that the $\chi^2(4)$ distribution is the one to use; I will merely show that it works by simulating a large number of independent values of $2\Lambda(X,Y)$ when the hypothesis is true. Since you have the ability to generate large values of $n$, let's take $n=500$ as an example.
$100,000$ results are shown for $n=500$. The red curve graphs the density of a $\chi^2(4)$ variable. It closely agrees with the histogram.
As a worked example consider the situation posed in the question where $n=100$, $X= 0.51$, and $Y=0.69$. Now
$$2\Lambda(0.51, 0.69) = -2(100\log(0.69 - 0.51)) \approx 343.$$
The corresponding $\chi^2(4)$ probability is less than $10^{-72}$: although we would never trust the accuracy of the $\chi^2$ approximation this far out into the tail (even with $n=100$ observations), this value is so small that certainly these data were not obtained from $100$ independent uniform$(0,1)$ variables!
In the second situation where $X=0.01$ and $Y=0.99$,
$$2\Lambda(0.01, 0.99) = -2(100\log(0.99 - 0.01)) \approx 4.04.$$
Now the $\chi^2(4)$ probability is $0.40 = 40\%$, quite consistent with the hypothesis that $(L,U)=(0,1)$.
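The two worked examples are easy to verify (a Python sketch rather than R; for four degrees of freedom the upper-tail probability of a $\chi^2$ variable has the closed form $e^{-x/2}(1 + x/2)$):

```python
import math

def two_lambda(n, x_min, x_max):
    # The statistic 2*Lambda = -2 n log(Y - X)
    return -2 * n * math.log(x_max - x_min)

def chi2_4_sf(x):
    # P(chi-squared with 4 df > x) = exp(-x/2) * (1 + x/2)
    return math.exp(-x / 2) * (1 + x / 2)

s1 = two_lambda(100, 0.51, 0.69)  # about 343: overwhelming evidence against U(0,1)
s2 = two_lambda(100, 0.01, 0.99)  # about 4.04
p2 = chi2_4_sf(s2)                # about 0.40: consistent with U(0,1)
```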
BTW, here's R code to perform simulations. I have reset it to just $10,000$ iterations so that it will take less than one second to complete.
n <- 500 # Sample size
N <- 1e4 # Number of simulation trials
lambda <- apply(matrix(runif(n*N), nrow=n), 2, function(x) -2 * n * log(diff(range(x))))
#
# Plot the results.
#
hist(lambda, freq=FALSE, breaks=seq(0, ceiling(max(lambda)), 1/4), border="#00000040",
main="Histogram", xlab="2*Lambda")
curve(dchisq(x, 4), add=TRUE, col="Red", lwd=2)
41,988
Check uniformly distributed continuous random variable
It's not clear exactly what you need, but let's first take a look at how to estimate the lower and upper boundaries. It is claimed on Wikipedia that the uniformly minimum variance unbiased estimators for the continuous uniform distribution on $(a,b)$ are the maximum spacing estimators $$ \hat{a}={{nx_{(1)}-x_{(n)}} \over {n-1}},\ \ \hat{b}={{nx_{(n)}-x_{(1)}} \over {n-1}}, $$ where $x_{(1)}$ is the minimum observed value, $x_{(n)}$ is the maximum observed value, and $n$ is the sample size. Note that $\hat{a}$ can be negative and $\hat{b}$ can be greater than one when the population is $U(0,1)$.
I don't know if this is enough for you or not. It seems you really want to test whether your output is truly $U(0,1).$ If that is the case, why not just conduct a K-S test since your desired distribution is fully specified? In this case, if the null hypothesis is true, your p-values should also have a $U(0,1)$ distribution.
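As a sketch of these estimators in use (Python; the sample below is simulated, so both estimated endpoints should land near the true values 0 and 1):

```python
import random

def uniform_mvue(xs):
    # UMVU estimators of the endpoints of a continuous uniform(a, b)
    n = len(xs)
    x1, xn = min(xs), max(xs)
    a_hat = (n * x1 - xn) / (n - 1)
    b_hat = (n * xn - x1) / (n - 1)
    return a_hat, b_hat

random.seed(0)
sample = [random.uniform(0, 1) for _ in range(1000)]
a_hat, b_hat = uniform_mvue(sample)  # near 0 and 1; a_hat may be slightly negative
```

Note that the estimates always lie strictly outside the observed range, which is why $\hat{a}$ can fall below 0 and $\hat{b}$ above 1.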
41,989
Lay person's explanation of Sheather-Jones method for bandwidth selection?
Assuming it is indeed the preferred bandwidth estimator in Sheather and Jones' (1991) JRSS-B paper [1] that you mean (specifically, $\hat{h}_{2S}$), here's a brief discussion (as requested), but a brief discussion of a highly technical topic is necessarily a little vague and cryptic.
The basic issue of finding efficient$^\dagger$ bandwidth estimators boils down to finding a good estimate of $R(f'')$ (where $R(g) = \int g^2(x) dx$), the integrated squared second derivative of the density to be estimated -- i.e. the asymptotically optimal bandwidth depends on the second derivative of the very thing we wish to estimate!
$^\dagger$ here, specifically in the sense of minimum asymptotic mean integrated squared error (AMISE) ... about which, see here
Why does the integrated squared second derivative matter? In effect it measures how "wiggly" the curve is over the range you're looking at. If you have a very wiggly curve you won't get a good estimate of it with a wide bandwidth because you'll average over a bunch of wiggles instead of following them. If you have a curve that's pretty straight it makes sense to have a much wider bandwidth (since you can reduce the noise in your estimate by including more data).
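To make the "wiggliness" interpretation concrete, here is a rough numerical illustration (a Python sketch, not part of the Sheather and Jones procedure itself): $R(f'')$ approximated by finite differences for a smooth unimodal density versus a two-bump mixture.

```python
import math

def norm_pdf(x, mu=0.0, s=1.0):
    return math.exp(-0.5 * ((x - mu) / s) ** 2) / (s * math.sqrt(2 * math.pi))

def roughness(f, lo, hi, m=20000):
    # R(f'') = integral of f''(x)^2 dx, via central differences + trapezoid rule
    h = (hi - lo) / m
    def f2(x):
        return (f(x - h) - 2 * f(x) + f(x + h)) / h ** 2
    ys = [f2(lo + i * h) ** 2 for i in range(m + 1)]
    return h * (sum(ys) - 0.5 * (ys[0] + ys[-1]))

smooth = lambda x: norm_pdf(x)  # one broad bump
wiggly = lambda x: 0.5 * norm_pdf(x, -1, 0.3) + 0.5 * norm_pdf(x, 1, 0.3)  # two narrow bumps

r_smooth = roughness(smooth, -8.0, 8.0)  # about 0.21, i.e. 3/(8*sqrt(pi))
r_wiggly = roughness(wiggly, -8.0, 8.0)  # orders of magnitude larger
```

The wiggly density calls for a much narrower bandwidth, exactly as described above.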
A number of bandwidth estimators use (in turn) a kernel based estimate of $R(f'')$.
Sheather and Jones include a bias term in their estimate of $R(f'')$ that had previously been neglected. This results in estimating $R(f''')$ (a lot of detail is being glossed over here).
How to summarize all that? It's an improved version of a kernel-based estimate of the optimal bandwidth, not that this is likely to help much.
As to why it's popular (I won't engage in idle discussion of whether it's the most popular, since it seems impossible to reliably assess), the abstract gives a highly plausible reason:
reliably good performance for smooth densities in simulations [...] second to none in the existing literature
i.e. it (demonstrably) works well in practice for a reasonably broad set of cases.
[There have been - unsurprisingly - further suggested improvements in the last quarter century, but this bandwidth estimator remains popular.]
[1] S. J. Sheather and M. C. Jones (1991), "A Reliable Data-Based Bandwidth Selection Method for Kernel Density Estimation," Journal of the Royal Statistical Society, Series B, 53(3), pp. 683-690.
41,990
rigorous statistical-inference book recommendation
One book not mentioned above which I quite like is Theoretical Statistics, Topics for a Core Course by Keener. It is relatively rigorous but quite readable at the same time. Personally though, I think a subset of Theory of Point Estimation by Lehmann, Mathematical Statistics by Shao, and Keener should cover almost all the topics at the level you want.
41,991
rigorous statistical-inference book recommendation
Take a look at Testing Statistical Hypotheses by Erich Lehmann and Joseph Romano. I also like Statistical Inference by Casella and Berger.
41,992
rigorous statistical-inference book recommendation
I recommend the book Mathematical Statistics by Keith Knight, a very friendly introduction that covers all the basics you need to know to study statistics at a high level.
41,993
Book recommendations for Design of Experiments? [duplicate]
There is a book by John Lawson, Design and Analysis of Experiments with R, and he has developed a companion R package called daewr. It's easy to follow. Good luck!
41,994
Chi-squared test for continuous variables (averages)
The question you are asking could be answered by performing what is very often called a 2-way ANOVA (also called two factor analysis of variance in Zar's Biostatistical Analysis).
Side comment: What I find very interesting in the way you ask your question is that indeed there are striking similarities between a 2-way ANOVA and a $\chi^2$ test.
In order to use the method, we need more information than is available in the "mean" table: to calculate the test statistic, we also need to be able to compute the variability within each cell and the total variability.
41,995
Chi-squared test for continuous variables (averages)
On the first part at least, I think it's wrong to say you can use the Chi2 test "to determine if the NUMBER of whales/sharks are independent to ocean location". One can use the Chi2 test to determine whether an observed large animal being a whale or a shark is independent of ocean location. And for this hypothesis, you use the counts of whale/shark observations to construct a contingency table.
41,996
Making simple simulation to confirm power of statistical test?
The test you used in your simulation isn't a t-test because you used 1.96 instead of a t-value and you used the true standard deviation instead of the sample standard deviation. Your simulation was approximating the power of the corresponding "z test":
> pwr.norm.test(d = 0.5, n = 25, power = NULL)
Mean power calculation for normal distribution with known variance
d = 0.5
n = 25
sig.level = 0.05
power = 0.705418
alternative = two.sided
The following code shows how you can verify the power value given by pwr.t.test.
n <- 25 # sample size
mu <- 107.5 # true mean
sigma <- 15 # true SD
mu0 <- 100 # mean under the null hypothesis
reps <- 100000 # number of simulations
## p-value approach:
pvalues <- numeric(reps)
set.seed(1)
for (i in 1:reps) {
x <- rnorm(n, mu, sigma)
t.stat <- (mean(x) - mu0)/(sd(x)/sqrt(n))
pvalues[i] <- 2*(1 - pt(abs(t.stat), n-1))
# alternatively: pvalues[i] <- t.test(x, mu = mu0)$p.value
}
> mean(pvalues < 0.05)
[1] 0.66907
## Confidence interval approach:
outsideCI <- numeric(reps) # 1 if mu0 not in 95% CI, otherwise 0
set.seed(2)
for (i in 1:reps) {
x <- rnorm(n, mu, sigma)
CI.lower <- mean(x) - qt(0.975, n-1)*sd(x)/sqrt(n)
CI.upper <- mean(x) + qt(0.975, n-1)*sd(x)/sqrt(n)
outsideCI[i] <- ifelse(mu0 < CI.lower | mu0 > CI.upper, 1, 0)
}
> mean(outsideCI)
[1] 0.66893
41,997
How to test whether a time series of measurements have converged to an equilibrium
Calculate the root mean square deviation (RMSD) in a sliding window over the course of the time series. Set a threshold to define "convergence", and stop when the RMSD falls below that threshold, i.e.
$$WindowRMSD_j=\sqrt{\frac{\sum_{i=1}^n(x_i-\bar{x})^2 }{n-1}}$$
where $n$ is the window size, $j$ is the index of the current window, $x_i$ is the $i$th value in the window, and $\bar{x}$ is the mean of the values in the window. The first window begins at the start of the series and is then slid forward one point at a time.
As the time series settles toward its equilibrium, the WindowRMSD will approach zero.
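A minimal Python sketch of this stopping rule (the damped oscillation and the threshold below are made-up example values):

```python
import math

def first_converged(series, window, threshold):
    # Index of the first sliding window whose RMSD falls below `threshold`,
    # or None if the series never settles.
    n = window
    for j in range(len(series) - n + 1):
        w = series[j:j + n]
        mean = sum(w) / n
        rmsd = math.sqrt(sum((x - mean) ** 2 for x in w) / (n - 1))
        if rmsd < threshold:
            return j
    return None

# A damped oscillation settling toward the equilibrium value 1.0
xs = [1.0 + math.exp(-0.05 * t) * math.cos(t) for t in range(200)]
start = first_converged(xs, window=20, threshold=0.01)  # detected late in the series
```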
41,998
practical implementation detail of Bayesian Optimization
The norm is to use any global optimizer you like. The problem is that the EI surface is highly multi-modal and disconnected; optimizing this acquisition function is a nontrivial problem in itself.
A common choice that I have seen in various papers is the DIRECT algorithm; sometimes I've seen CMA-ES, which is a state-of-the-art method in nonlinear optimization. In my experience with other forms of optimization, MCS (Multi-Level Coordinate Search) tends to work relatively well. You can find a review of derivative-free global optimizers here:
Rios and Sahinidis, "Derivative-free optimization: a review of algorithms and comparison of software implementations", Journal of Global Optimization (2013).
By the way, the EI is analytical so if you want you can also compute its gradient to guide the optimization, but this is not necessary. An effective technique is to run a global optimizer first to find promising solutions and then run a local optimizer to refine it (e.g., a quasi-Newton method such as BFGS, that is fminunc in MATLAB; or fmincon if you have constraints).
Finally, if the speed of optimizing the acquisition function is a factor (which is not the "traditional" BO scenario), I have found decent results by starting with a Latin hypercube design or a quasi-random Sobol sequence design and then refining the best point(s) with a few steps of a local optimizer; see also @user777's comment. Since this is not the standard BO scenario, I don't have a specific reference that actually uses this method.
Examples of papers that refer to DIRECT or CMA-ES:
Calandra, R., Seyfarth, A., Peters, J., & Deisenroth, M. P. (2015). Bayesian optimization for learning gaits under uncertainty. Annals of Mathematics and Artificial Intelligence, 1-19 (link).
Mahendran, N., Wang, Z., Hamze, F., & Freitas, N. D. (2012). Adaptive MCMC with Bayesian optimization. In International Conference on Artificial Intelligence and Statistics (pp. 751-760) (link).
Gunter, T., Osborne, M. A., Garnett, R., Hennig, P., & Roberts, S. J. (2014). Sampling for inference in probabilistic models with fast Bayesian quadrature. In Advances in neural information processing systems (pp. 2789-2797) (link).
You can just google "Bayesian optimization" + the desired global optimization algorithm, and you'll find a bunch of papers. Plus, in pretty much every other paper about BO you would find a sentence such as:
[...] BO usually requires an auxiliary global
optimizer in each iteration to optimize the acquisition function. It
is customary in the BO literature to use DIvided RECTangles (DIRECT)
to accomplish such a task. Other global optimization algorithms like
CMA-ES could also be applied.
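The remark that EI is analytical can be made concrete. Below is a minimal Python sketch (my own illustration, not taken from the cited papers): the closed-form EI under a Gaussian posterior, maximized by a crude grid scan standing in for the global optimizer (DIRECT, CMA-ES), followed by a local refinement step. Here `mu`, `sigma`, and `best` are toy stand-ins for the GP posterior mean, posterior standard deviation, and incumbent value.

```python
import math

def expected_improvement(mu, sigma, best, minimize=True):
    """Closed-form EI of a Gaussian posterior N(mu, sigma^2) over the
    incumbent value `best` (minimization by default)."""
    diff = (best - mu) if minimize else (mu - best)
    if sigma == 0.0:
        return max(diff, 0.0)
    z = diff / sigma
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)  # standard normal pdf
    cdf = 0.5 * (1 + math.erf(z / math.sqrt(2)))           # standard normal cdf
    return sigma * (z * cdf + pdf)

# Toy stand-ins for a GP posterior on [0, 1] (purely illustrative).
mu = lambda x: (x - 0.3) ** 2          # posterior mean
sigma = lambda x: 0.1 + 0.1 * x        # posterior standard deviation
best = 0.05                            # best observed value so far

# "Global" step: a dense grid scan plays the role of DIRECT/CMA-ES here.
xs = [i / 1000 for i in range(1001)]
x0 = max(xs, key=lambda x: expected_improvement(mu(x), sigma(x), best))

# Local refinement around the best grid point (a quasi-Newton step in practice).
step = 1e-3
for _ in range(100):
    cands = [max(0.0, x0 - step), x0, min(1.0, x0 + step)]
    x0 = max(cands, key=lambda x: expected_improvement(mu(x), sigma(x), best))
```

The maximizer trades off low posterior mean against high posterior uncertainty, which is exactly why the EI surface is multi-modal for realistic posteriors.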
|
41,999
|
Why use quantile regression instead of splitting the data in quantiles and calculating multiple linear regressions?
|
You need to look at the difference between conditional and unconditional quantiles.
Your approach analyzes unconditional quantiles of $y$, and how they depend on $x$. That may be a worthwhile question to ask, but it is not the question that quantile regression discusses.
Quantile regression analyzes quantiles of $y$ conditional on $x$. That is: given a value of $x$, what is the likely quantile of the conditional distribution of $y$ for exactly this $x$?
Let's simulate a little data.
Quantile regression will fit a line (in the simplest case, a linear relationship with $x$, i.e., a straight line) such that at each value of $x$, we expect a certain percentage of the data to lie above this line. Here, I am working with an 80% quantile:
The approach you propose amounts to cutting off the top 20% of the $y$ without regard to $x$. Graphically, that amounts to putting a horizontal line through the point cloud and then looking at the points above this line:
An analysis of these points may be useful. But it will simply be a different analysis than quantile regression. You may be able to say something about the distribution of $x$ among your top 20% of $y$. But you will not be able to say anything about the conditional quantile of $y$ for any given $x$.
R code for the plots:
n_points <- 2000
set.seed(1)
xx <- rnorm(n_points)
yy <- xx+rnorm(n_points)
qq <- 0.8
width <- 400
height <- 400
png("qr_1.png",width=width,height=height)
par(mai=c(.8,.8,.1,.1),las=1)
plot(xx,yy,pch=19,cex=0.6)
dev.off()
library(quantreg)
model <- rq(yy~xx,tau=qq)
png("qr_2.png",width=width,height=height)
par(mai=c(.8,.8,.1,.1),las=1)
plot(xx,yy,pch=19,cex=0.6,col="lightgray")
abline(model,lwd=1.5,col="red")
index <- yy>=predict(model)
points(xx[index],yy[index],pch=19,cex=0.6)
dev.off()
png("qr_3.png",width=width,height=height)
par(mai=c(.8,.8,.1,.1),las=1)
plot(xx,yy,pch=19,cex=0.6,col="lightgray")
index <- yy>=quantile(yy,qq)
points(xx[index],yy[index],pch=19,cex=0.6)
dev.off()
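For readers without R, the same 0.8 conditional quantile line can be fitted by minimizing the pinball (check) loss directly; this is the loss function quantile regression minimizes, and `rq` above solves it far more efficiently. The following self-contained Python sketch (my own illustration, not part of the answer) uses plain subgradient descent:

```python
import math
import random

random.seed(1)
n, tau = 400, 0.8
x = [random.gauss(0, 1) for _ in range(n)]
y = [xi + random.gauss(0, 1) for xi in x]

# Fit y ~ a + b*x at the tau-th conditional quantile by subgradient
# descent on the pinball loss rho_tau(r) = r*(tau - 1{r < 0}).
a = b = 0.0
for t in range(5000):
    lr = 1.0 / math.sqrt(t + 1)
    ga = gb = 0.0
    for xi, yi in zip(x, y):
        r = yi - (a + b * xi)
        g = -tau if r > 0 else (1 - tau)  # subgradient of rho_tau at r
        ga += g
        gb += g * xi
    a -= lr * ga / n
    b -= lr * gb / n

# For this simulation the true 0.8 conditional quantile line is
# y = qnorm(0.8) + x, i.e. intercept about 0.84 and slope 1; roughly
# 80% of the points should fall below the fitted line at every x.
```

The key contrast with the unconditional cutoff is visible in the fitted slope: the quantile regression line tracks $x$, while `quantile(yy, qq)` is a horizontal line.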
|
42,000
|
What is the motivation for the entropy term in the proof of EM algorithm?
|
I would not say that the entropy is the most important piece of the proof. I will try to explain where the entropy term comes from in the following.
I will follow the notation used in the reference you provided. The goal of the EM algorithm is to maximize the log-likelihood function $l(D ; \theta)$ on a set of $n$ training examples $D = \{x_1, \dots, x_n\}$. Putting equations $(5)$ and $(6)$ from the reference together, we can write the log-likelihood function as follows:
$$
l(D; \theta) = \sum_{i=1}^n \log \left( \sum_{j=1}^m P(j, x_i | \theta_j) \right)
$$
Let $q_i(j)$ denote a distribution, specific to training example $x_i$, that has non-zero probability for all values of $j$; so we have that $\forall i, j, q_i(j) > 0$ and $\forall i, \sum_{j=1}^m q_i(j) = 1$. We can divide and multiply each term in the $\log$ by $q_i(j)$ in the previous equation to get the following ($q_i(j) > 0$ is required to avoid division by zero):
$$
l(D; \theta) = \sum_{i=1}^n \log \left( \sum_{j=1}^m \frac{q_i(j) P(j, x_i | \theta_j)}{q_i(j)} \right) \tag{1}
$$
$\log$ is a concave function, so for any set of positive values $v_1, \dots, v_m$ and any discrete distribution $p(1), \dots, p(m)$ on $m$ values we have:
$$
\log \left( \sum_{j=1}^m p(j) v_j \right) \geq
\sum_{j=1}^m p(j) \log v_j
$$
This follows from Jensen's inequality.
We can use the concavity of the $\log$ function in $(1)$ and write the following inequality for the log-likelihood function:
$$
\begin{align}
l(D; \theta) \geq &
\sum_{i=1}^n \sum_{j=1}^m q_i(j) \log \left( \frac{P(j, x_i | \theta_j)}{q_i(j)} \right) \\
= &
\sum_{i=1}^n \left[ \sum_{j=1}^m q_i(j) \log \left( P(j, x_i | \theta_j) \right) + H(q_i) \right] \tag{2}
\end{align}
$$
where $H(q_i) = -\sum_{j=1}^m q_i(j) \log q_i(j)$ is the entropy of the distribution $q_i$: this is where the entropy term enters.
As I mentioned before, the distributions $q_i$ can be chosen arbitrarily. However, when $q_i(j) = P(j | x_i; \theta)$, the inequality in $(2)$ becomes an equality; this posterior choice is exactly what the E-step computes.
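A quick numerical check of $(2)$ may help (a toy Python sketch with made-up joint probabilities, not taken from the reference): any valid choice of $q_i$ yields a lower bound on the log-likelihood, and the posterior choice $q_i(j) \propto P(j, x_i | \theta_j)$ attains it with equality.

```python
import math

# Made-up joint probabilities P(j, x_i | theta_j) for m = 2 components
# and three data points; any positive values illustrate the bound.
P = [[0.10, 0.30],   # i = 1
     [0.25, 0.05],   # i = 2
     [0.15, 0.15]]   # i = 3

def loglik():
    """l(D; theta) = sum_i log sum_j P(j, x_i | theta_j)."""
    return sum(math.log(sum(row)) for row in P)

def lower_bound(qs):
    """Right-hand side of (2): sum_i sum_j q_i(j) log(P(j, x_i)/q_i(j))."""
    return sum(qj * (math.log(pj) - math.log(qj))
               for row, q in zip(P, qs)
               for pj, qj in zip(row, q))

uniform = [[0.5, 0.5] for _ in P]
posterior = [[pj / sum(row) for pj in row] for row in P]

print(lower_bound(uniform) <= loglik())        # True: Jensen's inequality
print(abs(lower_bound(posterior) - loglik()))  # ~0: equality at the posterior
```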
|