12,301
What is the difference between PCA and asymptotic PCA?
It's not only about the math. To understand why it's different, you need to know the finance-theory background of factor models. Consider a factor model $R = BF + \epsilon$. It is like a multivariate regression: $R$ has dimension equal to the number of stocks, $F$ the number of factors, $B$ is a matrix of loadings, and $\epsilon$ has a diagonal covariance matrix.
In PCA you first get the loadings as the eigenvectors of the covariance matrix. Then you decide how many to keep, say $K$. In a second step you get the $K$ factor scores (estimates of the $K$ factor values at every time $t$) by a cross-sectional regression of the stock returns on the loadings at every time (a.k.a. the Bartlett method). Since the $B$s are noisy estimates from a noisy covariance matrix, you have measurement error in your right-hand-side data (the $B$s) in the second step.
Now, why this matters: the typical equity-finance application has very many stocks and not so many time periods => a noisy covariance matrix => noisy loadings $B$ (garbage in...) => a massive bias toward zero in the estimates of the factor scores. In the extreme case, you have more stocks than periods and your covariance matrix is not even full rank.
In asymptotic PCA (see Connor and Korajczyk), the cross-product matrix is $T \times T$, where $T$ is the number of observations. It is always positive semidefinite, and as the number of stocks goes to infinity for a given sample size it recovers the factor scores with arbitrary precision. Then, in a second step, you get the loadings by a time-series regression of each return on the factors.
And here is the (nice) catch: in the typical stock situation you have $N \gg T$, so you could not even estimate the covariance matrix properly, yet you have arbitrarily precise estimates of the factor scores. In the second step, where you get the loadings from the scores, you have no errors-in-variables problem.
If you take the pains to read their papers, you will see simulations showing how effective this is for typical cross-section sizes versus sample sizes.
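To make the two-step structure concrete, here is a small numerical sketch in Python (an illustration of the asymptotic-PCA idea on simulated data, not Connor and Korajczyk's actual procedure; the dimensions and noise level are made up):

```python
import numpy as np

rng = np.random.default_rng(0)
T, N, K = 60, 500, 3                     # few periods, many stocks: N >> T

# Simulated factor structure: R (T x N) = F B' + noise
F = rng.normal(size=(T, K))              # true factor scores
B = rng.normal(size=(N, K))              # true loadings
R = F @ B.T + rng.normal(scale=0.5, size=(T, N))

# Step 1: eigenvectors of the T x T cross-product matrix estimate the
# factor scores (up to rotation), even though N >> T.
Omega = R @ R.T / N
w, V = np.linalg.eigh(Omega)             # eigenvalues in ascending order
F_hat = V[:, -K:]                        # top-K eigenvectors

# Step 2: loadings via a time-series regression of each return on the factors
B_hat, *_ = np.linalg.lstsq(F_hat, R, rcond=None)   # K x N
```

Note that step 2 involves no cross-sectional regression on estimated loadings, which is exactly where ordinary PCA picks up its errors-in-variables problem.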
12,302
Does LASSO suffer from the same problems stepwise regression does?
The probability interpretation of frequentist expressions of likelihood, p-values, etc. is not correct for a LASSO model or for stepwise regression.
Those expressions overestimate the probability. E.g., a 95% confidence interval for some parameter is supposed to say that the method will, with 95% probability, produce an interval containing the true parameter value.
However, the fitted models do not result from a single, pre-specified hypothesis; instead, we are cherry-picking (selecting out of many possible alternative models) when we do stepwise or LASSO regression.
It makes little sense to evaluate the correctness of the model parameters (especially when it is likely that the model is not correct).
In the example below (explained later), the model is fitted with many regressors and 'suffers' from multicollinearity. This makes it likely that a neighboring regressor (one that correlates strongly) is selected in the model instead of the one that is truly in the model. The strong correlation causes the coefficients to have a large error/variance (related to the matrix $(X^TX)^{-1}$).
However, this high variance due to multicollinearity is not 'seen' in diagnostics like p-values or standard errors of coefficients, because these are based on a smaller design matrix $X$ with fewer regressors (and there is no straightforward method to compute those types of statistics for LASSO).
Example: the graph below displays the results of a toy model for a signal that is a linear sum of 10 Gaussian curves (this may, for instance, resemble an analysis in chemistry where a spectrum is considered a linear sum of several components). The signal of the 10 curves is fitted with a model of 100 components (Gaussian curves with different means) using LASSO. The signal is well estimated (compare the red and black curves, which are reasonably close). But the actual underlying coefficients are not well estimated and may be completely wrong (compare the red and black bars with dots, which are not the same). See also the last few coefficients:
             90  91  92  93  94  95     96     97  98  99  100
true model    0   0   0   0   0   0      0  142.8   0   0    0
fitted        0   0   0   0   0   0  129.7    6.9   0   0    0
The LASSO model does select coefficients that are approximately right, but from the perspective of the coefficients themselves it means a large error when a coefficient that should be non-zero is estimated to be zero and a neighboring coefficient that should be zero is estimated to be non-zero. Any confidence intervals for the coefficients would make very little sense.
LASSO fitting
Stepwise fitting
As a comparison, the same curve can be fitted with a stepwise algorithm, leading to the image below (with the similar problem that the coefficients are close but do not match).
Even if you consider the accuracy of the fitted curve (rather than of the parameters, which, as argued above, makes little sense to assess), you have to deal with overfitting. When you do a fitting procedure with LASSO, you use training data (to fit the models with different penalty parameters) and test/validation data (to tune/find the best parameter), but you should also use a third, separate set of test data to assess the performance of the final model.
A p-value or something similar is not going to work, because you are working on a tuned model, which is cherry-picked and different (with much larger effective degrees of freedom) from the regular linear fitting method.
As for whether LASSO "suffers from the same problems stepwise regression does": you seem to refer to problems like bias in values such as $R^2$, p-values, F-scores, or standard errors. I believe that LASSO is not used in order to solve those problems.
I thought that the main reason to use LASSO in place of stepwise regression is that LASSO allows a less greedy parameter selection that is less influenced by multicollinearity. (More on the differences between LASSO and stepwise: Superiority of LASSO over forward selection/backward elimination in terms of the cross validation prediction error of the model.)
Code for the example image
# settings
library(glmnet)
n <- 10^2 # number of regressors/vectors
m <- 2 # multiplier for number of datapoints
nel <- 10 # number of elements in the model
set.seed(1)
sig <- 4
t <- seq(0,n,length.out=m*n)
# vectors
X <- sapply(1:n, FUN <- function(x) dnorm(t,x,sig))
# some random function with nel elements, with Poisson noise added
par <- sample(1:n,nel)
coef <- rep(0,n)
coef[par] <- rnorm(nel,10,5)^2
Y <- rpois(n*m,X %*% coef)
# LASSO cross validation
fit <- cv.glmnet(X,Y, lower.limits=0, intercept=FALSE,
alpha=1, nfolds=5, lambda=exp(seq(-4,4,0.1)))
plot(fit$lambda, fit$cvm,log="xy")
plot(fit)
Yfit <- (X %*% coef(fit)[-1])
# non negative least squares
# (uses a stepwise algorithm or should be equivalent to stepwise)
fit2<-nnls(X,Y)
# plotting
par(mgp=c(0.3,0.0,0), mar=c(2,4.1,0.2,2.1))
layout(matrix(1:2,2),heights=c(1,0.55))
plot(t,Y,pch=21,col=rgb(0,0,0,0.3),bg=rgb(0,0,0,0.3),cex=0.7,
xaxt = "n", yaxt = "n",
ylab="", xlab = "",bty="n")
#lines(t,Yfit,col=2,lwd=2) # fitted mean
lines(t,X %*% coef,lwd=2) # true mean
lines(t,X %*% coef(fit2), col=3,lwd=2) # 2nd fit
# add coefficients in the plot
for (i in 1:n) {
if (coef[i] > 0) {
lines(c(i,i),c(0,coef[i])*dnorm(0,0,sig))
points(i,coef[i]*dnorm(0,0,sig), pch=21, col=1,bg="white",cex=1)
}
if (coef(fit)[i+1] > 0) {
# lines(c(i,i),c(0,coef(fit)[i+1])*dnorm(0,0,sig),col=2)
# points(i,coef(fit)[i+1]*dnorm(0,0,sig), pch=21, col=2,bg="white",cex=1)
}
if (coef(fit2)[i+1] > 0) {
lines(c(i,i),c(0,coef(fit2)[i+1])*dnorm(0,0,sig),col=3)
points(i,coef(fit2)[i+1]*dnorm(0,0,sig), pch=21, col=3,bg="white",cex=1)
}
}
#Arrows(85,23,85-6,23+10,-0.2,col=1,cex=0.5,arr.length=0.1)
#Arrows(86.5,33,86.5-6,33+10,-0.2,col=2,cex=0.5,arr.length=0.1)
#text(85-6,23+10,"true coefficient", pos=2, cex=0.7,col=1)
#text(86.5-6,33+10, "fitted coefficient", pos=2, cex=0.7,col=2)
text(0,50, "signal versus position\n true mean (black), fitted with nnls (green)", cex=1,col=1,pos=4, font=2)
plot(-100,-100,pch=21,col=1,bg="white",cex=0.7,type="l",lwd=2,
xaxt = "n", yaxt = "n",
ylab="", xlab = "",
ylim=c(0,max(coef(fit)))*dnorm(0,0,sig),xlim=c(0,n),bty="n")
#lines(t,X %*% coef,lwd=2,col=2)
for (i in 1:n) {
if (coef[i] > 0) {
lines(t,X[,i]*coef[i],lty=1)
}
if (coef(fit)[i+1] > 0) {
# lines(t,X[,i]*coef(fit)[i+1],col=2,lty=1)
}
if (coef(fit2)[i+1] > 0) {
lines(t,X[,i]*coef(fit2)[i+1],col=3,lty=1)
}
}
text(0,33, "illustration of seperate components/curves", cex=1,col=1,pos=4, font=2)
12,303
Does LASSO suffer from the same problems stepwise regression does?
I have a new talk that addresses this. Bottom line: lasso has a low probability of selecting the "correct" variables. The slides are at http://fharrell.com/talk/stratos19
– Frank Harrell
Related to "Bottom line: lasso has a low probability of selecting the 'correct' variables": there's a section on the same topic in Statistical Learning with Sparsity (https://web.stanford.edu/~hastie/StatLearnSparsity_files/SLS_corrected_1.4.16.pdf), 11.4.1 Variable-Selection Consistency for the Lasso
– Adrian
Also related to "Bottom line: lasso has a low probability of selecting the 'correct' variables": see https://statweb.stanford.edu/~candes/teaching/stats300c/Lectures/Lecture24.pdf, case studies 1 and 2
– Adrian
12,304
Inverting the Fourier Transform for a Fisher distribution
There is no closed-form density for a convolution of F-statistics, so trying to invert the characteristic function analytically is not likely to lead to anything useful.
In mathematical statistics, the tilted Edgeworth expansion (also known as the saddlepoint approximation) is a famous and widely used technique for approximating a density function given the characteristic function. The saddlepoint approximation is often remarkably accurate. Ole Barndorff-Nielsen and David Cox wrote a textbook explaining this mathematical technique.
There are other ways to approach the problem without using the characteristic function. One would expect the convolution distribution to be something like an F-distribution in shape. One might try an approximation of the form $aF(n,k)$ for the $n$-fold convolution, and then choose $a$ and $k$ to make the first two moments of the distribution correct. This is easy given the known mean and variance of the F-distribution.
If $\alpha$ is large, then the convolution converges to a chi-square distribution on $n$ degrees of freedom. This is equivalent to choosing $a=n$ and $k=\infty$ in the above approximation, showing that the simple approximation is accurate for large $\alpha$.
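To make the moment-matching step concrete, here is a small sketch (a hypothetical setup in which each term of the convolution is an iid $F(d_1, d_2)$ variable; the degrees of freedom are made-up illustrative numbers, not taken from the original question). Matching the mean and variance of $aF(nd_1, k)$ to those of the $n$-fold sum gives closed-form $a$ and $k$:

```python
from scipy import stats

# Hypothetical setup: S = X_1 + ... + X_n with X_i iid F(d1, d2);
# approximate S ~ a * F(n*d1, k).
n, d1, d2 = 5, 1, 20
nu = n * d1

S_mean = n * stats.f.mean(d1, d2)   # n * d2/(d2 - 2)
S_var = n * stats.f.var(d1, d2)

# Matching the first two moments:
#   mean: a * k/(k-2)                            = S_mean
#   var : a^2 * 2k^2(nu+k-2)/(nu (k-2)^2 (k-4))  = S_var
# Eliminating a leaves a linear equation for k.
c = S_var * nu / (2 * S_mean**2)
k = (nu - 2 + 4 * c) / (c - 1)
a = S_mean * (k - 2) / k

approx = stats.f(nu, k, scale=a)    # the moment-matched scaled F
```

A quick Monte Carlo comparison of the quantiles of `approx` against simulated sums of F variables is a reasonable way to check how far the shape approximation can be trusted.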
12,305
Prediction interval based on cross-validation (CV)
After reading over this question again, I can give you the following bound.
Assume the samples are drawn i.i.d., the distribution is fixed, and the loss is bounded by $B$. Then, with probability at least $1 - \delta$,
$$
\mathbb{E}[\mathcal{E}(h)] \leq \hat{\mathcal{E}}(h) + B\sqrt{\frac{\log \frac{1}{\delta}}{2m}}
$$
where $m$ is the sample size, $1-\delta$ is the confidence level, $\mathbb{E}[\mathcal{E}(h)]$ is the generalization error, and $\hat{\mathcal{E}}(h)$ is the test error of the hypothesis $h$. The bound follows directly from McDiarmid's inequality.
Please don't report only the cross-validation error or the test error; those are meaningless in general, since they are just point estimates.
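As a sketch, the bound above is a one-liner to evaluate (the numbers below are made-up inputs, just to show the mechanics):

```python
import math

def generalization_bound(test_error, B, m, delta):
    """Upper bound on the expected error E[E(h)] from the McDiarmid-type
    bound quoted above; holds with probability at least 1 - delta."""
    return test_error + B * math.sqrt(math.log(1.0 / delta) / (2.0 * m))

# e.g. 0-1 loss (B = 1), test error 0.12 on m = 10,000 points, 95% confidence
bound = generalization_bound(0.12, B=1.0, m=10_000, delta=0.05)
```

Note how the slack shrinks like $1/\sqrt{m}$: quadrupling the test set halves the width of the interval.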
Old post for record:
I'm not sure that I completely understood your question, but I will take a stab at it.
First, I am not sure how you would define a prediction interval for model selection, since, as I understand it, prediction intervals make distributional assumptions. Instead, you could derive concentration inequalities, which essentially bound a random variable's deviation with some probability. Concentration inequalities are used throughout machine learning, including in the advanced theory of boosting. In this case you want to bound the generalization error (your error in general, on points you haven't seen) by your empirical error (your error on the test set) plus a complexity term and a term related to the variance.
Now I need to dispel a misunderstanding about cross-validation that is extremely common. Cross-validation will only give you an unbiased estimate of the expected error of a model FOR A FIXED SAMPLE SIZE. The proof of this only works for the leave-one-out protocol. This is actually fairly weak, since it gives you no information regarding the variance. On the other hand, cross-validation will return a model that is close to the structural risk minimization solution, which is the theoretically best solution. You can find the proof in the appendix here: http://www.cns.nyu.edu/~rabadi/resources/scat-150519.pdf
So how to derive a generalization bound? (Remember, a generalization bound is basically a prediction interval for the generalization error of a specific model.) These bounds are algorithm-specific. Unfortunately, there is only one textbook that collects bounds for all of the commonly used algorithms in machine learning (including boosting): Foundations of Machine Learning (2012) by Mohri, Rostamizadeh, and Talwalkar. Lecture slides covering the material are available on Mohri's web page: http://www.cs.nyu.edu/~mohri/ml14/
While The Elements of Statistical Learning is an important and somewhat helpful book, it is not very rigorous, omits many important technical details about the algorithms, and omits generalization bounds entirely. Foundations of Machine Learning is the most comprehensive book for machine learning (which makes sense, seeing as it was written by some of the best in the field). The textbook is advanced, though, so beware of the technical details.
The generalization bound for boosting can be found (with proof) here: http://www.cs.nyu.edu/~mohri/mls/lecture_6.pdf
I hope those are enough pointers to answer your question. I'm hesitant to give a complete answer, because it would take about 50 pages to go over all of the necessary details, let alone the preliminary discussion...
Good luck!
12,306
How to test equality of variances with circular data
1) The Watson-Williams test is appropriate here.
2) It is parametric and assumes a von Mises distribution. The second assumption is that each group has a common concentration parameter. I do not recall how robust the test is to violations of that assumption.
3) I have been using an implementation of the Watson-Williams test from a circular-statistics toolbox written for MATLAB and available on the File Exchange (link below). I have not tried it, but I believe the Watson-Williams test (circ_wwtest.m) is set up for multiple groups.
https://www.mathworks.com/matlabcentral/fileexchange/10676-circular-statistics-toolbox--directional-statistics-
12,307
|
How to test equality of variances with circular data
|
Regarding your third question, I wrote a function in MATLAB for the algorithm based on Watson (1962) to compute the test statistic and p-value:
https://github.com/aatobaa/hatlab/blob/master/watson1962.m
|
12,308
|
Shrunken $r$ vs unbiased $r$: estimators of $\rho$
|
Regarding the bias in the correlation: when sample sizes are small enough for bias to have any practical significance (e.g., the $n < 30$ you suggested), bias is likely to be the least of your worries, because the sheer sampling inaccuracy of $r$ is terrible.
Regarding the bias of $R^2$ in multiple regression, there are many different adjustments, which target either unbiased estimation of the population $R^2$ or unbiased estimation of the $R^2$ expected in an independent sample of equal size. See Yin, P. & Fan, X. (2001). Estimating $R^2$ shrinkage in multiple regression: A comparison of analytical methods. The Journal of Experimental Education, 69, 203-224.
Modern regression methods also address the shrinkage of the regression coefficients, and of $R^2$ as a consequence -- e.g., the elastic net with k-fold cross-validation; see http://web.stanford.edu/~hastie/Papers/elasticnet.pdf.
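As a concrete illustration, the simplest of the analytical adjustments compared in that literature is the familiar Ezekiel/Wherry formula $R^2_{adj} = 1 - (1-R^2)\frac{n-1}{n-p-1}$; a minimal sketch (the function name is mine):

```python
def adjusted_r2(r2, n, p):
    """Ezekiel/Wherry adjustment of the observed R^2.

    n is the sample size, p the number of predictors; the adjustment
    shrinks the observed R^2 more as p grows relative to n, and it can
    go negative when the observed R^2 is small.
    """
    if n - p - 1 <= 0:
        raise ValueError("need n > p + 1")
    return 1 - (1 - r2) * (n - 1) / (n - p - 1)
```

For example, `adjusted_r2(0.50, n=30, p=3)` shrinks the observed 0.50 to about 0.44, while with n in the thousands the adjustment becomes negligible.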
|
12,309
|
Shrunken $r$ vs unbiased $r$: estimators of $\rho$
|
I think the answer depends on the context: simple versus multiple regression. In simple regression with one IV and one DV, $R^2$ is not positively biased, and in fact may be negatively biased, given that $r$ is negatively biased. But in multiple regression with several IVs, which may themselves be correlated, $R^2$ may be positively biased because of any "suppression" that may be happening. Thus, my take is that the observed $R^2$ overestimates the corresponding population $R^2$, but only in multiple regression.
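A small simulation (illustrative only; names mine) shows how the positive bias of $R^2$ grows with the number of predictors: regressing pure noise on $p$ pure-noise predictors gives an expected sample $R^2$ of about $p/(n-1)$, even though the population value is zero:

```python
import numpy as np

def null_r2(n, p, sims=200, seed=0):
    """Average sample R^2 when y is regressed on p pure-noise predictors."""
    rng = np.random.default_rng(seed)
    r2s = []
    for _ in range(sims):
        # Intercept plus p independent noise predictors; y is independent noise.
        X = np.column_stack([np.ones(n), rng.standard_normal((n, p))])
        y = rng.standard_normal(n)
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        tss = ((y - y.mean()) ** 2).sum()
        r2s.append(1 - resid @ resid / tss)
    return float(np.mean(r2s))
```

With n = 50 and p = 10 the average comes out near 10/49 (about 0.20) from nothing but noise, while with a single predictor it stays near 1/49 (about 0.02).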
|
12,310
|
Why is the mean of the natural log of a uniform distribution (between 0 and 1) different from the natural log of 0.5?
|
Consider two values symmetrically placed around $0.5$ - like $0.4$ and $0.6$ or $0.25$ and $0.75$. Their logs are not symmetric around $\log(0.5)$. $\log(0.5-\epsilon)$ is further from $\log(0.5)$ than $\log(0.5+\epsilon)$ is. So when you average them you get something less than $\log(0.5)$.
Similarly, if you take a teeny interval around a collection of such pairs of symmetrically placed values, you still get the average of the logs of each pair being below $\log(0.5)$... and it's a simple matter to move from that observation to the definition of the expectation of the log.
Indeed, usually, $E(t(X))\neq t(E(X))$ unless $t$ is linear.
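Both observations are easy to check numerically; a quick sketch (not part of the original answer), with the Monte Carlo size chosen arbitrarily:

```python
import math
import random

def mean_log_uniform(n=10**5, seed=0):
    """Monte Carlo estimate of E[log U] for U ~ Uniform(0, 1)."""
    rng = random.Random(seed)
    return sum(math.log(rng.random()) for _ in range(n)) / n

# Each symmetric pair around 0.5 already averages below log(0.5):
for eps in (0.1, 0.25, 0.4):
    assert (math.log(0.5 - eps) + math.log(0.5 + eps)) / 2 < math.log(0.5)
```

The Monte Carlo estimate comes out near -1 (the exact value, as the following answers show), well below log(0.5), which is about -0.693.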
|
12,311
|
Why is the mean of the natural log of a uniform distribution (between 0 and 1) different from the natural log of 0.5?
|
This is another illustration of Jensen's inequality
$$\mathbb E[\log X] < \log \mathbb E[X]$$
(since the function $x\mapsto \log(x)$ is strictly concave) and of the more general (anti-)property that the expectation of a transform is not the transform of the expectation when the transform is not linear (plus a few exotic cases). (Most of my undergraduate students are, however, firm believers in the magical identity $\mathbb E[h(X)] = h(\mathbb E[X])$, if I judge only from the frequency with which this equality appears in their final exam papers.)
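A quick numerical sanity check of this (a sketch only; sample size and seed are arbitrary), comparing $\mathbb E[h(X)]$ with $h(\mathbb E[X])$ for $X\sim\operatorname{Uniform}(0,1)$:

```python
import math
import random

def compare(h, n=10**5, seed=1):
    """Return Monte Carlo estimates of (E[h(X)], h(E[X])) for X ~ Uniform(0, 1)."""
    rng = random.Random(seed)
    xs = [rng.random() for _ in range(n)]
    mean_x = sum(xs) / n
    mean_h = sum(h(x) for x in xs) / n
    return mean_h, h(mean_x)
```

For the concave $\log$, $\mathbb E[h(X)] < h(\mathbb E[X])$; for the convex $x\mapsto x^2$ the inequality flips; only a linear $h$ gives equality.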
|
12,312
|
Why is the mean of the natural log of a uniform distribution (between 0 and 1) different from the natural log of 0.5?
|
It is worthwhile to note that if $X \sim \operatorname{Uniform}(0,1)$, then $-\log X \sim \operatorname{Exponential}(\lambda = 1)$, so that $\operatorname{E}[\log X] = -1$. Explicitly, $$f_X(x) = \mathbb 1(0 < x < 1) = \begin{cases} 1, & 0 < x < 1 \\ 0, & \text{otherwise} \end{cases}$$ implies $$Y = g(X) = -\log X$$ has density $$\begin{align*}
f_Y(y) &= f_X(g^{-1}(y)) \left|\frac{dg^{-1}}{dy}\right| \\
&= \mathbb 1 \left( 0 < e^{-y} < 1 \right) \left| - e^{-y} \right| \\
&= e^{-y} \mathbb 1 (0 < y < \infty) \\
&= \begin{cases} e^{-y}, & y > 0 \\ 0, & \text{otherwise}. \end{cases}
\end{align*}$$ Thus $Y \sim \operatorname{Exponential}(\lambda = 1)$ and its mean is $1$. This furnishes a very convenient method to generate exponentially distributed random variables via log-transformation of a uniform random variable on $(0,1)$.
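That last remark is inverse-transform sampling; a minimal sketch (the function name is mine), using $1-U$ so that the argument of the logarithm stays in $(0,1]$:

```python
import math
import random

def exponential_sample(lam=1.0, n=10**5, seed=0):
    """Draw n Exponential(lam) variates as -log(1 - U)/lam, U ~ Uniform[0, 1).

    Using 1 - U avoids log(0), since 1 - U lies in (0, 1].
    """
    rng = random.Random(seed)
    return [-math.log(1.0 - rng.random()) / lam for _ in range(n)]
```

The sample mean comes out near $1/\lambda$, matching the $\operatorname{Exponential}(\lambda)$ mean; in particular, for $\lambda = 1$ it is near $1$, consistent with $\operatorname{E}[\log X] = -1$ above.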
|
12,313
|
Why is the mean of the natural log of a uniform distribution (between 0 and 1) different from the natural log of 0.5?
|
Note that the mean of a transformed uniform variable is just the mean value of the transforming function over the domain (since every value of the uniform is equally likely). This is simply,
$$
\frac{1}{b-a}\int_a^b{t(x)}dx = \int_0^1{t(x)}dx
$$
For example (in R):
$$
\int_0^1{\log(x)}\,dx = (1\cdot \log(1)-1) - 0 = 0-1 =-1
$$
mean(log(runif(1e6)))
[1] -1.000016
integrate(function(x) log(x), 0, 1)
-1 with absolute error < 1.1e-15
$$
\int_0^1{x^2}dx = \frac{1}{3}(1^3-0^3) = \frac{1}{3}
$$
mean(runif(1e6)^2)
[1] 0.3334427
integrate(function(x) (x)^2, 0, 1)
0.3333333 with absolute error < 3.7e-15
$$
\int_0^1{e^x}dx = e^1-e^0 = e-1
$$
mean(exp(runif(1e6)))
[1] 1.718425
integrate(function(x) exp(x), 0, 1)
1.718282 with absolute error < 1.9e-14
exp(1)-1
[1] 1.718282
|
12,314
|
When A and B are positively related variables, can they have opposite effect on their outcome variable C?
|
The other answers are truly marvelous - they give real life examples.
I want to explain why this can happen despite our intuition to the contrary.
See this geometrically!
Correlation is the cosine of the angle between the (centered) vectors.
Essentially, you are asking whether it is possible that
$A$ makes an acute angle with $B$ (positive correlation)
$B$ makes an acute angle with $C$ (positive correlation)
$A$ makes an obtuse angle with $C$ (negative correlation)
Yes, of course:
In this example ($\rho$ denotes correlation):
$A=(0.6,0.8)$
$B=(1,0)$
$C=(0.6,-0.8)$
$\rho(A,B)=0.6>0$
$\rho(B,C)=0.6>0$
$\rho(A,C)=-0.28<0$
Your Intuition is Right!
However, your surprise is not misplaced.
The angle between vectors is a distance metric on the unit sphere, so it satisfies the triangle inequality:
$$\measuredangle AB \le \measuredangle AC + \measuredangle BC$$
thus, since $\cos \measuredangle AB = \rho(A,B)$,
$$\arccos\rho(A,B) \le \arccos\rho(A,C) + \arccos\rho(B,C) $$
therefore (since $\cos$ is decreasing on $[0,\pi]$)
$$\rho(A,B)\ge\rho(A,C)\times\rho(B,C) - \sqrt{(1-\rho^2(A,C))\times(1-\rho^2(B,C))} $$
So,
if $\rho(A,C)=\rho(B,C)=0.9$, then $\rho(A,B)\ge 0.62$
if $\rho(A,C)=\rho(B,C)=0.95$, then $\rho(A,B)\ge 0.805$
if $\rho(A,C)=\rho(B,C)=0.99$, then $\rho(A,B)\ge 0.9602$
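The example and the bound are easy to verify numerically (a quick sketch; as in the answer, the cosine of the angle between vectors stands in for the correlation of centered variables):

```python
import math

def cosine(u, v):
    """Cosine of the angle between two vectors (= correlation for centered data)."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.hypot(*u) * math.hypot(*v))

# The three vectors from the example above.
A, B, C = (0.6, 0.8), (1.0, 0.0), (0.6, -0.8)
rho_ab, rho_bc, rho_ac = cosine(A, B), cosine(B, C), cosine(A, C)

def lower_bound(r_ac, r_bc):
    """Smallest possible rho(A,B) given rho(A,C) and rho(B,C)."""
    return r_ac * r_bc - math.sqrt((1 - r_ac ** 2) * (1 - r_bc ** 2))
```

The three cosines come out 0.6, 0.6 and -0.28 as claimed, and lower_bound reproduces the values 0.62, 0.805 and 0.9602 listed above.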
|
12,315
|
When A and B are positively related variables, can they have opposite effect on their outcome variable C?
|
Yes, two co-occurring conditions can have opposite effects.
For example:
Making outrageous statements (A) is positively related to being entertaining (B).
Making outrageous statements (A) has a negative effect on winning elections (C).
Being entertaining (B) has a positive effect on winning elections (C).
|
12,316
|
When A and B are positively related variables, can they have opposite effect on their outcome variable C?
|
I've heard this car analogy which applies well to the question:
Driving uphill (A) is positively related to the driver stepping on the gas (B)
Driving uphill (A) has a negative effect on vehicle speed (C)
Stepping on the gas (B) has a positive effect on vehicle speed (C)
The key here is the driver's intention to maintain a constant speed (C); the positive correlation between A and B follows naturally from that intention. You can thus construct endless examples of A, B, and C with this relationship.
The analogy comes from an interpretation of Milton Friedman's Thermostat and comes from an interesting analysis of monetary policy and econometrics, but that's irrelevant to the question.
|
12,317
|
When A and B are positively related variables, can they have opposite effect on their outcome variable C?
|
Yes, this is trivial to demonstrate with a simulation:
Simulate 2 variables, A and B that are positively correlated:
> require(MASS)
> set.seed(1)
> Sigma <- matrix(c(10,3,3,2),2,2)
> dt <- data.frame(mvrnorm(n = 1000, rep(0, 2), Sigma))
> names(dt) <- c("A","B")
> cor(dt)
A B
A 1.0000000 0.6707593
B 0.6707593 1.0000000
Create variable C:
> dt$C <- dt$A - dt$B + rnorm(1000,0,5)
Behold:
> (lm(C~A+B,data=dt))
Coefficients:
(Intercept) A B
0.03248 0.98587 -1.05113
Edit: Alternatively (as suggested by Kodiologist), just simulating from a multivariate normal such that $\operatorname{cor}(A,B) > 0$, $\operatorname{cor}(A,C) > 0$ and $\operatorname{cor}(B,C) < 0$
> set.seed(1)
> Sigma <- matrix(c(1,0.5,0.5,0.5,1,-0.5,0.5,-0.5,1),3,3)
> dt <- data.frame(mvrnorm(n = 1000, rep(0,3), Sigma, empirical=TRUE))
> names(dt) <- c("A","B","C")
> cor(dt)
A B C
A 1.0 0.5 0.5
B 0.5 1.0 -0.5
C 0.5 -0.5 1.0
|
12,318
|
When A and B are positively related variables, can they have opposite effect on their outcome variable C?
|
$$
C = mB + n\,(A-\operatorname{proj}_B(A))
$$
Taking $B$ to be a unit vector, so that $\operatorname{proj}_B(A)=\left<A,B\right>B$, this gives
$$
\left<C,B\right> = m, \qquad \left<C,A\right> = m\left<B,A\right> + n\left(\left<A,A\right> - \left<B,A\right>^2\right)
$$
By Cauchy-Schwarz, $\left<A,A\right> - \left<B,A\right>^2 = \lVert A-\operatorname{proj}_B(A)\rVert^2 \ge 0$, so (assuming $\left<A,B\right> > 0$ and $A$ not a multiple of $B$) the covariance between $C$ and $A$ is negative exactly when
$$
n < \frac{-m\left<B,A\right>}{\left<A,A\right>-\left<B,A\right>^2}
$$
In particular, any $m>0$ keeps $\left<C,B\right>>0$, while a sufficiently negative $n$ makes $\left<C,A\right><0$.
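A numeric check of this construction (illustrative; treating the inner product of centered variables as their covariance, and taking $B$ to be a unit vector): a positive $m$ keeps $\left<C,B\right>$ positive, while a sufficiently negative $n$ makes $\left<C,A\right>$ negative:

```python
def dot(u, v):
    """Plain inner product; for centered variables it plays the role of covariance."""
    return sum(a * b for a, b in zip(u, v))

# A and B positively related (dot(A, B) > 0); B is a unit vector.
A = (0.6, 0.8)
B = (1.0, 0.0)

m, n = 1.0, -1.0  # m > 0 keeps C aligned with B; n is negative "enough"

proj = tuple(dot(A, B) * b for b in B)               # projection of A onto B
resid = tuple(a - p for a, p in zip(A, proj))        # part of A orthogonal to B
C = tuple(m * b + n * r for b, r in zip(B, resid))   # C = mB + n(A - proj_B(A))
```

Here $\left<A,B\right> = 0.6 > 0$, $\left<C,B\right> = 1 > 0$ and $\left<C,A\right> = -0.04 < 0$, exactly the sign pattern asked about.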
|
12,319
|
Does machine learning really need data-efficient algorithms?
|
You are not entirely wrong, often it will be a lot easier to collect more/better data to improve an algorithm than to squeeze minor improvements out of the algorithm.
However, in practice there are many settings where it is difficult to get a really large dataset.
Sure, it's easy to get really large datasets when you use (self-/un-)supervised approaches or if your labels are automatically created (e.g. if you are Google whether a user clicks on a link or not). However, many practical problems rely on human experts (whose time may be expensive) to label the examples. When any human can do the job (e.g. labeling dog or cat or something else for ImageNet), this can be scaled to millions of images, but when you pay physicians to classify medical images, tens of thousands (or perhaps 100,000ish) labelled images is a pretty large dataset. Or, if you need to run a chemical experiment for each label.
Additionally, there may be cases where the number of possible real-world examples is naturally limited (e.g. training data for forecasting winners of US presidential elections, or predicting eruptions of volcanoes from seismic data; these are things for which we can, so far, only have so much data).
|
12,320
|
Does machine learning really need data-efficient algorithms?
|
I work in retail forecasting. When you need to forecast tomorrow's demand for product X at store Y, you only have a limited amount of data available: possibly only the last two years' worth of sales of this particular product at this particular store, or potentially sales of all products at all stores, if you use a cross-learning model. But in any case, you cannot simply create new data. (And creating new data consists in actually running your supermarket and recording sales and inventories, so this is not a trivial matter.)
Also, if a worldwide unprecedented pandemic hits you, the value of your data from before that time suddenly becomes dubious indeed, so for practical uses, your amount of data just decreased dramatically.
Of course, you are right that certain use cases have practically unlimited data, or can create data on the fly. One example is training networks to play games like chess or go: you can simply let multiple instances of your models play against each other (reinforcement learning).
|
12,321
|
Does machine learning really need data-efficient algorithms?
|
While it is true that nowadays it is fairly easy to gather large piles of data, this doesn't mean that it is good data. Large datasets are usually gathered by scraping resources freely available on the Internet: for textual data those may be Reddit posts, news articles, or Wikipedia entries; for images, all kinds of pictures posted by people; for videos, things posted on YouTube. Notice that there are many potential problems with such data.
First, it is unlabelled. Someone needs to label it. Most commonly, this is done by Amazon Mechanical Turk workers who are paid very little for the task, so they aren't really motivated to do it correctly, nor do they have any intrinsic motivation for tagging random images. You also have no guarantee that the labelers have the proper knowledge for tagging (e.g. they may be asked to label wild animals, or car brands, that they are not familiar with). You can do it yourself, but you would need a lot of time, and even this doesn't guarantee that there won't be human errors. You could do the labeling automatically, but then your "clever" machine learning algorithm would learn from labels provided by a "dumb" heuristic; if the heuristic worked, would you need the more complicated algorithm to learn to imitate it?
Second, this data is biased. Most textual datasets are limited to the English language. Most image datasets with photos of humans depict white-skinned individuals. Most datasets with pictures of architecture show cities in the US or Europe. Those aren't really representative, unless you are building a machine learning model that will be used only by white, English-speaking men living in the US.
There was recently a nice preprint on this topic Pervasive Label Errors in Test Sets Destabilize Machine Learning Benchmarks by Northcutt et al.
|
12,322
|
Does machine learning really need data-efficient algorithms?
|
I was once asked to build a model that puts archeological artifacts into classes according to their manufacturing process. A big problem: for some classes, there were only four samples. And many artifacts are broken, so even for the samples we had, not all measurements were known (like their total length).
Yes, "small data" is indeed a problem. Getting more data in this particular case would have meant sending the archeologists back to dig in the Central Asian mountains and to measure all the features of the artifacts that I find meaningful. And they had better find artifacts in one piece, not broken ones! ;-)
|
12,323
|
Does machine learning really need data-efficient algorithms?
|
Here are a couple thoughts to add to what has been posted so far.
You might be interested in taking a look at the famous machine learning paper, Domingos, P. (2012). "A Few Useful Things to Know about Machine Learning". Communications of the ACM (pdf). It should contain some food for thought. Specifically, here are three relevant subsections:
DATA ALONE IS NOT ENOUGH
Generalization being the goal has another major consequence:
data alone is not enough, no matter how much of it you have.
Consider learning a Boolean function of (say) 100 variables
from a million examples. There are $2^{100}$ − $10^6$ examples
whose classes you don't know. How do you figure out what
those classes are? In the absence of further information,
there is just no way to do this that beats flipping a coin. ...
FEATURE ENGINEERING IS THE KEY
At the end of the day, some machine learning projects succeed and some fail. What makes the difference? Easily
the most important factor is the features used. If you have
many independent features that each correlate well with the
class, learning is easy. On the other hand, if the class is
a very complex function of the features, you may not be
able to learn it. Often, the raw data is not in a form that is
amenable to learning, but you can construct features from it
that are. ...
MORE DATA BEATS A CLEVERER ALGORITHM
Suppose you’ve constructed the best set of features you
can, but the classifiers you’re getting are still not accurate
enough. What can you do now? There are two main choices:
design a better learning algorithm, or gather more data
(more examples, and possibly more raw features, subject to
the curse of dimensionality). Machine learning researchers
are mainly concerned with the former, but pragmatically
the quickest path to success is often to just get more data.
As a rule of thumb, a dumb algorithm with lots and lots of
data beats a clever one with modest amounts of it. ...
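The arithmetic behind the "data alone is not enough" excerpt is easy to check directly, using the numbers from the quote (100 Boolean variables, a million labeled examples):

```python
# Fraction of the input space left uncovered by a million examples
# of a Boolean function of 100 variables.
total = 2 ** 100          # all possible inputs
known = 10 ** 6           # labeled examples
unknown = total - known

print(unknown / total)    # ~1.0: virtually the entire space is unseen
```

Even a "large" dataset of a million examples covers less than one part in 10^24 of the input space, which is why generalization must come from assumptions beyond the data itself.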
The other thing I would say is that the idea that "a human needs 1-2 to reach comparable classification accuracy" is because the human is not a blank slate. A person has a wealth of experience (i.e., many prior data) and rich conceptual knowledge that can be brought to bear on learning a classification. (Sections 4 and 8 from Domingos are related to this idea of background knowledge and knowing what to attend to.) To connect these facts to training a (deep learning or other) model, consider that pre-training a model sometimes helps quite a bit (although this is done less nowadays) and likewise that Bayesian models with sufficiently good priors should also perform better. Having said that, section 9 from Domingos implies we may be able to be sufficiently successful without those, due to the increasing volumes of data that you describe.
|
12,324
|
Does machine learning really need data-efficient algorithms?
|
There's some ambiguity in saying a data set is large. To improve predictive performance of an algorithm, you need more observations. You need to increase your sample size ($n$) and not the number of things you measured/observed within an experimental unit.
More observations can be hard to come by depending on the field of research: in clinical science there are privacy, security and, most importantly, ethical concerns with gathering more observations.
There are even cases where it is simply impossible to get more observations of independent experimental units. If you want to use prediction for some rare disease, or a nearly extinct species, there are hard constraints on how 'large' your data set can be.
|
12,325
|
Does machine learning really need data-efficient algorithms?
|
Would an ML algorithm that is, say, 100x more data-efficient, while being 1000x slower, be useful?
You have almost answered your own question.
There are multiple factors at play here:
The cost of gathering a data point
The cost of training a model with an additional data point
The cost of making the model learn more from a data point
The benefit gained from training the model with an additional data point
You are seeking to maximize the expression (benefits - costs). If you measure or estimate these factors accurately enough, and convert to comparable units (such as monetary equivalents perhaps), you'll find it easy to determine what to improve most easily.
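As a toy sketch of that decision rule (all figures hypothetical, converted to one monetary unit), the choice reduces to a sign check on the net value of one more data point:

```python
# Hypothetical per-data-point figures, all in the same monetary unit.
cost_gather = 2.0   # cost of collecting one more data point
cost_train = 0.5    # marginal cost of training on it
benefit = 3.0       # expected benefit from the resulting model improvement

net = benefit - (cost_gather + cost_train)
print(net > 0)      # True: in this scenario, gathering more data pays off
```

With different factors (e.g. medical data where cost_gather is enormous), the same check flips, and investing in data efficiency becomes the better option.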
As others have said, there are various applications with completely different such factors.
|
12,326
|
Does machine learning really need data-efficient algorithms?
|
People who work on data-efficient algorithms often bring up robotics for "motivation". But even for robotics, large datasets can be collected, as is done in this data-collection factory at Google:
What if I want to (for example) use reinforcement learning on a task involving underwater robotics to classify arctic ocean fronts? Or train a vision module to classify extremely rare objects in space through a fly-by probe? I may have very limited data and the cost of gathering new data may be extremely expensive. Often in robotics, simulators are not accurate enough (especially when natural phenomena are involved) to really generate accurate training data (this is called the sim2real problem).
Additionally, gathering real-life data for every possible task you would like your robot to accomplish can be prohibitive, especially if you want a wide variety of tasks accomplished by something like an in-home robot.
|
12,327
|
Does machine learning really need data-efficient algorithms?
|
To all the other answers I'd add that in Deep Learning, Neural Architecture Search benefits immensely from data efficiency. Think about it: each data point is a trained network.
If your NAS setup requires $N$ data points (networks), and each network requires $D$ samples to be trained, that's $ND$ forward- and backpropagations overall: if you reduce $D$ by a ratio of $k$, that's $k$-many more architectures that you can explore with the same resources.
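The budget bookkeeping can be made concrete with illustrative numbers (all hypothetical):

```python
# NAS compute budget at two data-efficiency levels.
N = 1_000     # architectures evaluated
D = 50_000    # training samples (forward/backward passes) per architecture
k = 10        # data-efficiency factor of a hypothetical improved algorithm

budget = N * D                      # total passes at the original efficiency
N_efficient = budget // (D // k)    # architectures affordable at D/k samples each
print(N_efficient // N)             # 10: k-times more architectures explored
```

The same fixed compute budget buys k-times more architecture evaluations once each network needs only D/k samples.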
Of course, it isn't always that straightforward: This CVPR2021 paper by Mundt et al shows that the architectures themselves act as Deep Priors which don't need to be fully trained, if initialized correctly (as a nice counterpoint to the vast few-shot learning literature):
Neural Architecture Search of Deep Priors: Towards Continual Learning without Catastrophic Interference
|
12,328
|
Does machine learning really need data-efficient algorithms?
|
To generalize a bit what @FransrRodenburg says about sample size:
many data sets have structure from various influencing factors. In particular, there are certain situations that lead to what are called nested factors in statistics (clustered data sets, hierarchical data sets, 1 : n relationships between influencing factors), and in these situations the overall sample size is often very limited, while data is abundant at lower levels of the data hierarchy.
I'm a spectroscopist and chemometrician; here are some examples:
I did my PhD on classification of brain tissues for tumor (boundary) recognition.
I got hundreds of spectra from each piece of tissue,
but the number of patients was only on the order of 100.
For one of the tumor types I was looking at, primary lymphomas of the central nervous system, surgery is only done if the bulk of the tumor causes trouble. After 7 years, our cooperation partners had collected 8 pieces from 5 patients. (For comparison: during the same time, samples of about 2000 glioblastomas were collected for us.) The application scenario, by the way, was guiding a biopsy needle, which needs to be done far more often - but one wouldn't remove additional brain tissue samples for research purposes there.
BTW: we had a huge problem finding good controls (normal brain tissue). Guess why.
I have a data set of about 8 TB hyperspectral imaging data containing a few thousand cacao beans collected at various stages of fermentation (they are classified according to color), from a few varieties, from 4 regions. And exactly n = 1 harvest year.
Yes, there are companies who collect large data sets, databases covering thousands of farms on several continents. Still, AFAIK the largest have "only" a few decades of harvesting periods...
As for "rare ones, that may not be worth automating (leave something for humans!)": the difficulty of getting samples (I'm talking of the physical pieces of material) does not necessarily say much about the need or the economic potential of the automated application. See e.g. the tumor example.
During that tumor work I met a bunch of pathologists with whom I talked about their needs.
Some of them expressed needs roughly along your lines: "If I had an instrument that would automatically deal with the 95 % easy samples, I could concentrate on the rare/special/complicated/interesting cases."
Others were thinking pretty much the opposite: "I can get through routine cases within seconds - it's the rare/special/complicated/interesting cases where I'd really appreciate additional information."
(side note: one of the most important mistakes I see regularly is that the application scenario isn't specified in sufficient detail, resulting in something that mixes e.g. the two scenarios above and in the end doesn't meet the requirements of anyone.)
|
12,329
|
Does machine learning really need data-efficient algorithms?
|
Random thoughts, although they do not fully answer the question.
There seems to be a wastage of information in training new models on new data, even when data is plentiful. Using an analogy with another general-purpose technology, fitting new models is not totally dissimilar to reinventing the wheel. Bayesian and transfer learning seem to offer solutions for adding to cumulative data and knowledge, and hence can help mitigate this to an extent. Would the problem of the replication crisis be as deep if Bayesian techniques had been used more, so that data containing surprising results had to overcome the inertia of previous studies?
As @svavil highlights, it is the training accumulated in the past that allows (seemingly?) impressive results in the present. Training efficiency can be obtained by incorporating new data on top of previous data via transfer learning, and neural networks, because of their usage in many different domains, seem to be amenable to transfer learning (is this right?), and transfer learning is sometimes (usually?) going to mean potential efficiencies. Further, aggregating different neural networks also seems to offer potential efficiency, say by training a neural network on a lidar data set and then concatenating (word used in a non-technical way!) this network with a radar-trained neural network.
As per the answer by @Tim, biased data can be a problem, and reusing a known good data set, say as a starting point for transfer learning, may be helpful in mitigating a bias problem. As an aside, biased data presumably increases the problem if you accidentally skew the bias-variance tradeoff too much toward the former (low bias, high variance), and thus generalization will be even worse. Note there are two types of bias used in the above sentence, and feel free to comment on this, although it is just an aside here.
So the question may be able to be reversed, techniques that add to accumulated knowledge are helpful, and they also tend to be efficient when new data is added to them.
|
Does machine learning really need data-efficient algorithms?
|
Random thoughts, although does not fully answer the question.
There seems to be a wastage of information in training new models on new data even if it is plentiful. Using an analogy with another gener
|
Does machine learning really need data-efficient algorithms?
Random thoughts, although they do not fully answer the question.
There seems to be a wastage of information in training new models on new data even when it is plentiful. Using an analogy with another general-purpose technology, fitting new models is not totally dissimilar to reinventing the wheel. Bayesian and transfer learning seem to offer solutions by adding to cumulative data and knowledge, and hence can help mitigate this to an extent. Would the replication crisis be as deep if Bayesian techniques had been used more, so that data containing surprising results had to overcome the inertia of previous studies?
As @svavil highlights, it is the accumulation of prior training that allows (seemingly?) impressive results in the present. Training efficiency can be obtained by incorporating new data on top of previous data by transfer learning, and neural networks, because of their usage in many different domains, seem to be amenable to transfer learning (is this right?), which is sometimes (usually?) going to mean some efficiencies. Further aggregation of different neural networks also seems to offer potential efficiency, say training a neural network on a lidar data set and then concatenating (word used in a non-technical way!) it with a radar-trained neural network.
As per the answer by @Tim, biased data can be a problem, and reusing a known good data set, say as a starting point for transfer learning, may help mitigate a bias problem. As an aside, biased data presumably worsens the problem if you accidentally skew the bias-variance tradeoff too far toward low bias and high variance, so that generalization becomes even worse. Note that two different senses of "bias" are used in the above sentence; feel free to comment on this, although it is just an aside here.
So the question may be reversed: techniques that add to accumulated knowledge are helpful, and they also tend to be efficient when new data is added to them.
How to write a linear model formula with 100 variables in R
Try this:
df <- data.frame(y = rnorm(10), x1 = rnorm(10), x2 = rnorm(10))
lm(y ~ ., df)
How to write a linear model formula with 100 variables in R
Great answers!
I would add that by default, calling formula on a data.frame creates an additive formula to regress the first column onto the others.
So in the case of the answer of @danas.zuokas you can even do
lm(df)
which is interpreted correctly.
How to write a linear model formula with 100 variables in R
If each row is an observation and each column is a predictor so that $Y$ is an $n$-length vector and $X$ is an $n \times p$ matrix ($p=100$ in this case), then you can do this with
Z = as.data.frame(cbind(Y,X))
lm(Y ~ .,data=Z)
If there are other columns you do not want to include as predictors, you would have to remove them from X before using this trick, or use - in the model formula to exclude them. For example, if you wanted to exclude the 67th predictor (which has the corresponding name x67), you could write
lm(Y ~ .-x67,data=Z)
Also, if you want to include interactions, etc., you will need to add them manually as (for example)
lm(Y ~ .+X[,1]*X[,2],data=Z)
or make sure they are entered as columns of X.
How to write a linear model formula with 100 variables in R
You can also use a combination of the formula and paste functions.
Setup data: Let's imagine we have a data.frame that contains the predictor variables x1 to x100 and our dependent variable y, but that there is also a nuisance variable asdfasdf. Also the predictor variables are arranged in an order such that they are not all contiguous in the data.frame.
Data <- data.frame(matrix(rnorm(102 * 200), ncol=102))
names(Data) <- c(paste("x", 1:50, sep=""),
"asdfasdf", "y", paste("x", 51:100, sep=""))
Imagine also that you have a string containing the names of the predictor variables. In this case, this can easily be created using the paste function, but in other situations, grep or some other approach might be used to get this string.
PredictorVariables <- paste("x", 1:100, sep="")
Apply approach: We can then construct a formula as follows:
Formula <- formula(paste("y ~ ",
paste(PredictorVariables, collapse=" + ")))
lm(Formula, Data)
the collapse argument inserts + between the predictor variables
formula converts the string into an object of class formula suitable for the lm function.
More generally, I use the following function quite regularly when I want to supply the predictor variables as a vector of variable names.
regression <- function(dv, ivs, data) {
# run a linear model with text arguments for dv and ivs
iv_string <- paste(ivs, collapse=" + ")
regression_formula <- as.formula(paste(dv, iv_string, sep=" ~ "))
lm(regression_formula, data)
}
E.g.,
regression("y", PredictorVariables, Data)
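The same build-a-formula-string idea carries over to Python formula interfaces (e.g. patsy/statsmodels). Below is a minimal standard-library sketch; the function name and variable names are illustrative, not from any particular package:

```python
# Mirror R's paste(PredictorVariables, collapse=" + ") idiom:
# join the predictor names with " + " and prepend "dv ~ ".
def build_formula(dv, ivs):
    """Return an additive model formula string such as 'y ~ x1 + x2'."""
    return dv + " ~ " + " + ".join(ivs)

predictors = ["x%d" % i for i in range(1, 101)]  # "x1" ... "x100"
formula = build_formula("y", predictors)
# The resulting string can then be handed to a formula-based fitting
# function (e.g. statsmodels' ols), analogous to lm(Formula, Data).
```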
What is the name of this plot that has rows with two connected dots?
Some call it a (horizontal) lollipop plot with two groups.
Here is how to make this plot in Python using matplotlib and seaborn (only used for the style), adapted from https://python-graph-gallery.com/184-lollipop-plot-with-2-groups/ and as requested by the OP in the comments.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import io
sns.set(style="whitegrid") # set style
data = io.StringIO(""""Country" 1990 2015
"Russia" 71.5 101.4
"Canada" 74.4 102.9
"Other non-OECD Europe/Eurasia" 60.9 135.2
"South Korea" 127.0 136.2
"China" 58.5 137.1
"Middle East" 170.9 158.8
"United States" 106.8 169.0
"Australia/New Zealand" 123.6 170.9
"Brazil" 208.5 199.8
"Japan" 181.0 216.7
"Africa" 185.4 222.0
"Other non-OECD Asia" 202.7 236.0
"OECD Europe" 173.8 239.9
"Other non-OECD Americas" 193.1 242.3
"India" 173.8 260.6
"Mexico/Chile" 221.1 269.8""")
df = pd.read_csv(data, sep=r"\s+", quotechar='"')
df = df.set_index("Country").sort_values("2015")
df["change"] = df["2015"] / df["1990"] - 1
plt.figure(figsize=(12,6))
y_range = np.arange(1, len(df.index) + 1)
colors = np.where(df['2015'] > df['1990'], '#d9d9d9', '#d57883')
plt.hlines(y=y_range, xmin=df['1990'], xmax=df['2015'],
color=colors, lw=10)
plt.scatter(df['1990'], y_range, color='#0096d7', s=200, label='1990', zorder=3)
plt.scatter(df['2015'], y_range, color='#003953', s=200 , label='2015', zorder=3)
for (_, row), y in zip(df.iterrows(), y_range):
plt.annotate(f"{row['change']:+.0%}", (max(row["1990"], row["2015"]) + 4, y - 0.25))
plt.legend(ncol=2, bbox_to_anchor=(1., 1.01), loc="lower right", frameon=False)
plt.yticks(y_range, df.index)
plt.title("Energy productivity in selected countries and regions, 1990 and 2015\nBillion dollars GDP per quadrillion BTU", loc='left')
plt.xlim(50, 300)
plt.gcf().subplots_adjust(left=0.35)
plt.tight_layout()
plt.show()
What is the name of this plot that has rows with two connected dots?
That's a dot plot. It is sometimes called a "Cleveland dot plot" because there is a variant of a histogram made with dots that people sometimes call a dot plot as well. This particular version plots two dots per country (for the two years) and draws a thicker line between them. The countries are sorted by the latter value. The primary reference would be Cleveland's book Visualizing Data. Googling leads me to this Excel tutorial.
I scraped the data, in case anyone wants to play with them.
Country 1990 2015
Russia 71.5 101.4
Canada 74.4 102.9
Other non-OECD Europe/Eurasia 60.9 135.2
South Korea 127.0 136.2
China 58.5 137.1
Middle East 170.9 158.8
United States 106.8 169.0
Australia/New Zealand 123.6 170.9
Brazil 208.5 199.8
Japan 181.0 216.7
Africa 185.4 222.0
Other non-OECD Asia 202.7 236.0
OECD Europe 173.8 239.9
Other non-OECD Americas 193.1 242.3
India 173.8 260.6
Mexico/Chile 221.1 269.8
What is the name of this plot that has rows with two connected dots?
The answer by @gung is correct in identifying the chart type and providing a link to how to implement in Excel, as requested by the OP. But for others wanting to know how to do this in R/tidyverse/ggplot, below is complete code:
library(dplyr) # for data manipulation
library(tidyr) # for reshaping the data frame
library(stringr) # string manipulation
library(ggplot2) # graphing
# create the data frame
# (in wide format, as needed for the line segments):
dat_wide = tibble::tribble(
~Country, ~Y1990, ~Y2015,
'Russia', 71.5, 101.4,
'Canada', 74.4, 102.9,
'Other non-OECD Europe/Eurasia', 60.9, 135.2,
'South Korea', 127, 136.2,
'China', 58.5, 137.1,
'Middle East', 170.9, 158.8,
'United States', 106.8, 169,
'Australia/New Zealand', 123.6, 170.9,
'Brazil', 208.5, 199.8,
'Japan', 181, 216.7,
'Africa', 185.4, 222,
'Other non-OECD Asia', 202.7, 236,
'OECD Europe', 173.8, 239.9,
'Other non-OECD Americas', 193.1, 242.3,
'India', 173.8, 260.6,
'Mexico/Chile', 221.1, 269.8
)
# a version reshaped to long format (for the points):
dat_long = dat_wide %>%
gather(key = 'Year', value = 'Energy_productivity', Y1990:Y2015) %>%
mutate(Year = str_replace(Year, 'Y', ''))
# create the graph:
ggplot() +
geom_segment(data = dat_wide,
aes(x = Y1990,
xend = Y2015,
y = reorder(Country, Y2015),
yend = reorder(Country, Y2015)),
size = 3, colour = '#D0D0D0') +
geom_point(data = dat_long,
aes(x = Energy_productivity,
y = Country,
colour = Year),
size = 4) +
labs(title = 'Energy productivity in selected countries \nand regions',
subtitle = 'Billion dollars GDP per quadrillion BTU',
caption = 'Source: EIA, 2016',
x = NULL, y = NULL) +
scale_colour_manual(values = c('#1082CD', '#042B41')) +
theme_bw() +
theme(legend.position = c(0.92, 0.20),
legend.title = element_blank(),
legend.box.background = element_rect(colour = 'black'),
panel.border = element_blank(),
axis.ticks = element_line(colour = '#E6E6E6'))
ggsave('energy.png', width = 20, height = 10, units = 'cm')
This could be extended to add value labels and to highlight the colour of the one case where the values swap order, as in the original.
How to generate random integers between 1 and 4 that have a specific mean?
I agree with Xi'an that the problem is under-specified. However, there is an elegant, scalable, efficient, effective, and versatile solution worth considering.
Because the product of the sample mean and sample size equals the sample sum, the problem concerns generating a random sample of $n$ values in the set $\{1,2,\ldots, k\}$ that sum to $s$ (assuming $n \le s \le kn,$ of course).
To explain the proposed solution and, I hope, justify the claim of elegance, I offer a graphical interpretation of this sampling scheme. Lay out a grid of $k$ rows and $n$ columns. Select every cell in the first row. Randomly (and uniformly) select $s-n$ of the remaining cells in rows $2$ through $k.$ The value of observation $i$ in the sample is the number of cells selected in column $i:$
This $4\times 100$ grid is represented by black dots at the unselected cells and colored patches at the selected cells. It was generated to produce a mean value of $2,$ so $s=200.$ Thus, $200-100=100$ cells were randomly selected among the top $k-1=3$ rows. The colors represent the numbers of selected cells in each column. There are $28$ ones, $47$ twos, $22$ threes, and $3$ fours. The ordered sample corresponds to the sequence of colors from column $1$ through column $n=100.$
To demonstrate scalability and efficiency, here is an R command to generate a sample according to this scheme. The question concerns the case $k=4, n=100$ and $s$ is $n$ times the desired average of the sample:
tabulate(sample.int((k-1)*n, s-n) %% n + 1, n) + 1
Because sample.int requires $O(s-n)$ time and $O((k-1)n)$ space, and tabulate requires $O(n)$ time and space, this algorithm requires $O(\max(s-n,n))$ time and $O(kn)$ space: that's scalable. With $k=4$ and $n=100$ my workstation takes only 12 microseconds to perform this calculation: that's efficient.
(Here's a brief explanation of the code. Note that integers $x$ in $\{1,2,\ldots, (k-1)n\}$ can be expressed uniquely as $x = nj + i$ where $j \in \{0,1,\ldots, k-2\}$ and $i\in\{1,2,\ldots, n\}.$ The code takes a sample of such $x,$ converts them to their $(i,j)$ grid coordinates, counts how many times each $i$ appears (which will range from $0$ through $k-1$) and adds $1$ to each count.)
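For readers who prefer Python, here is a standard-library sketch of the same grid scheme (the function name and defaults are mine): select $s-n$ of the $(k-1)n$ upper cells uniformly without replacement, count the selections per column, and add $1$:

```python
import random
from collections import Counter

def grid_sample(n=100, k=4, mean=2.0):
    """Draw n integers in {1,...,k} whose sum is exactly round(n*mean),
    via the grid scheme described above: column i receives 1 plus the
    number of selected cells in column i among the k-1 upper rows."""
    s = round(n * mean)
    assert n <= s <= k * n
    # Cells are numbered 0 .. (k-1)*n - 1; cell c lies in column c % n.
    chosen = random.sample(range((k - 1) * n), s - n)
    per_column = Counter(c % n for c in chosen)
    return [per_column[i] + 1 for i in range(n)]

x = grid_sample()  # sum(x) == 200 by construction, so the mean is 2.0
```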
Why can this be considered effective? One reason is that the distributional properties of this sampling scheme are straightforward to work out:
It is exchangeable: all permutations of any sample are equally likely.
The chance that the value $x \in\{1,2,\ldots, k\}$ appears at position $i,$ which I will write as $\pi_i(x),$ is obtained through a basic hypergeometric counting argument as $$\pi_i(x) = \frac{\binom{k-1}{x-1}\binom{(n-1)(k-1)}{s-n-x+1}}{\binom{n(k-1)}{ s-n}}.$$ For example, with $k=4,$ $n=100,$ and a mean of $2.0$ (so that $s=200$) the chances are $\pi = (0.2948, 0.4467, 0.2222, 0.03630),$ closely agreeing with the frequencies in the foregoing sample. Here are graphs of $\pi_1(1), \pi_1(2), \pi_1(3),$ and $\pi_1(4)$ as a function of the sum:
The chance that the value $x$ appears at position $i$ while the value $y$ appears at position $j$ is similarly found as $$\pi_{ij}(x,y) = \frac{\binom{k-1}{x-1}\binom{k-1}{y-1}\binom{(n-1)(k-1)}{s-n-x-y+2}}{\binom{n(k-1)}{ s-n}}.$$
These probabilities $\pi_i$ and $\pi_{ij}$ enable one to apply the Horvitz-Thompson estimator to this probability sampling design as well as to compute the first two moments of the distributions of various statistics.
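The single-position probabilities $\pi_i(x)$ are straightforward to check numerically with math.comb; the sketch below reproduces the quoted values for $k=4$, $n=100$, $s=200$ and confirms they sum to $1$:

```python
from math import comb

def pi1(x, k=4, n=100, s=200):
    """Marginal probability that value x appears at a fixed position,
    per the hypergeometric counting argument above."""
    return (comb(k - 1, x - 1)
            * comb((n - 1) * (k - 1), s - n - x + 1)
            / comb(n * (k - 1), s - n))

probs = [pi1(x) for x in range(1, 5)]
# probs is approximately [0.2948, 0.4467, 0.2222, 0.0363], summing to 1.
```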
Finally, this solution is versatile insofar as it permits simple, readily-analyzable variations to control the sampling distribution. For instance, you could select cells on the grid with specified but unequal probabilities in each row, or with an urn-like model to modify the probabilities as sampling proceeds, thereby controlling the frequencies of the column counts.
How to generate random integers between 1 and 4 that have a specific mean?
The question is under-specified in that the constraints on the frequencies
\begin{align}n_1+2n_2+3n_3+4n_4&=100M\\n_1+n_2+n_3+n_4&=100\end{align}
do not determine a distribution: "random" is not associated with a particular distribution, unless the OP means "uniform". For instance, if there exists one solution $(n_1^0,n_2^0,n_3^0,n_4^0)$ to the above system, then the distribution degenerated at this solution is producing a random draw that is always $(n_1^0,n_2^0,n_3^0,n_4^0)$.
In case the question is about simulating a Uniform distribution over the grid\begin{align}n_1+2n_2+3n_3+4n_4&=100M\\n_1+n_2+n_3+n_4&=100\end{align}one can always use a Metropolis-Hastings algorithm. Starting from $(n_1^0,n_2^0,n_3^0,n_4^0)$, create a Markov chain by proposing symmetric random perturbations of the vector $(n_1^t,n_2^t,n_3^t,n_4^t)$ and accept the move only if the resulting count vector has non-negative entries and satisfies the constraints.
For instance, here is a crude R rendering:
cenM=293
#starting point (n¹,n³,n⁴)
n<-sample(1:100,3,rep=TRUE)
while((sum(n)>100)|(n[2]-n[1]+2*n[3]!=cenM-200))
n<-sample(1:100,3,rep=TRUE)
#Markov chain
for (t in 1:1e6){
prop<-n+sample(-10:10,3,rep=TRUE)
if ((sum(prop)<101)&
(prop[2]-prop[1]+2*prop[3]==cenM-200)&
(min(prop)>0))
n=prop}
c(n[1],100-sum(n),n[-1])
with the distribution of $(n_1,n_3,n_4)$ over the 10⁶ iterations:
In case you want draws of the integers themselves,
sample(c(rep(1,n[1]),rep(2,100-sum(n)),rep(3,n[2]),rep(4,n[3])))
is a quick & dirty way to produce a sample.
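A Python transcription of the same accept/reject random walk, using only the standard library (this is a sketch, not a tuned sampler; the target sum 293 matches cenM in the R code above, and the function name is mine):

```python
import random

def mh_counts(target_sum=293, n_total=100, iters=10_000):
    """Random walk over the counts (n1, n3, n4) of the values 1, 3, 4;
    n2 = n_total - n1 - n3 - n4 is implicit.  A state is feasible when
    the counts are positive, leave n2 >= 0, and satisfy
    n3 - n1 + 2*n4 == target_sum - 2*n_total, which pins the sum."""
    delta = target_sum - 2 * n_total

    def feasible(v):
        return (min(v) > 0 and sum(v) <= n_total
                and v[1] - v[0] + 2 * v[2] == delta)

    # Find a feasible starting point by rejection sampling.
    n = [random.randint(1, n_total) for _ in range(3)]
    while not feasible(n):
        n = [random.randint(1, n_total) for _ in range(3)]
    # Propose symmetric perturbations; accept only feasible moves.
    for _ in range(iters):
        prop = [v + random.randint(-10, 10) for v in n]
        if feasible(prop):
            n = prop
    n1, n3, n4 = n
    return n1, n_total - sum(n), n3, n4   # (n1, n2, n3, n4)
```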
|
How to generate random integers between 1 and 4 that have a specific mean?
|
The question is under-specified in that the constraints on the frequencies
\begin{align}n_1+2n_2+3n_3+4n_4&=100M\\n_1+n_2+n_3+n_4&=100\end{align}
do not determine a distribution: "random" is not assoc
|
How to generate random integers between 1 and 4 that have a specific mean?
The question is under-specified in that the constraints on the frequencies
\begin{align}n_1+2n_2+3n_3+4n_4&=100M\\n_1+n_2+n_3+n_4&=100\end{align}
do not determine a distribution: "random" is not associated with a particular distribution, unless the OP means "uniform". For instance, if there exists one solution $(n_1^0,n_2^0,n_3^0,n_4^0)$ to the above system, then the distribution degenerated at this solution is producing a random draw that is always $(n_1^0,n_2^0,n_3^0,n_4^0)$.
In the case the question is about simulating a Uniform distribution over the grid\begin{align}n_1+2n_2+3n_3+4n_4&=100M\\n_1+n_2+n_3+n_4&=100\end{align}one can always use a Metropolis-Hastings algorithm. Starting from $(n_1^0,n_2^0,n_3^0,n_4^0)$, create a Markov chain by proposing symmetric random perturbations of the vector $(n_1^t,n_2^t,n_3^t,n_4^t)$ and accept if the result is within $\{1,2,3,4\}^4$ and satisfies the constraints.
For instance, here is a crude R rendering:
cenM=293
#starting point (n¹,n³,n⁴)
n<-sample(1:100,3,rep=TRUE)
while((sum(n)>100)|(n[2]-n[1]+2*n[3]!=cenM-200))
n<-sample(1:100,3,rep=TRUE)
#Markov chain
for (t in 1:1e6){
prop<-n+sample(-10:10,3,rep=TRUE)
if ((sum(prop)<101)&
(prop[2]-prop[1]+2*prop[3]==cenM-200)&
(min(prop)>0))
n=prop}
c(n[1],100-sum(n),n[-1])
with the distribution of $(n_1,n_3,n_4)$ over the 10⁶ iterations:
In case you want draws of the integers themselves,
sample(c(rep(1,n[1]),rep(2,100-sum(n)),rep(3,n[2]),rep(4,n[3])))
is a quick & dirty way to produce a sample.
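For readers more comfortable in Python, here is a hedged sketch of the same Metropolis walk over $(n_1,n_3,n_4)$ (function and variable names are my own; `cenM = 100M = 293` as in the R code, with $n_2=100-n_1-n_3-n_4$ implicit):

```python
import random

def valid(n, cenM=293):
    # The two constraints, reduced to (n1, n3, n4): positivity, the count
    # total, and the mean constraint n3 - n1 + 2*n4 == cenM - 200.
    n1, n3, n4 = n
    return min(n) >= 1 and sum(n) <= 100 and n3 - n1 + 2 * n4 == cenM - 200

def mh_sample(steps=10_000, cenM=293):
    # Find a starting point by rejection, as in the R code.
    n = tuple(random.randint(1, 100) for _ in range(3))
    while not valid(n, cenM):
        n = tuple(random.randint(1, 100) for _ in range(3))
    # Symmetric random-walk proposals, accepted only if still feasible.
    for _ in range(steps):
        prop = tuple(x + random.randint(-10, 10) for x in n)
        if valid(prop, cenM):
            n = prop
    # Expand the counts into the 100 integers themselves.
    n1, n3, n4 = n
    return [1] * n1 + [2] * (100 - n1 - n3 - n4) + [3] * n3 + [4] * n4
```

Every call returns 100 values in $\{1,2,3,4\}$ whose sum is exactly `cenM`, i.e. mean $2.93$.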
|
How to generate random integers between 1 and 4 that have a specific mean?
The question is under-specified in that the constraints on the frequencies
\begin{align}n_1+2n_2+3n_3+4n_4&=100M\\n_1+n_2+n_3+n_4&=100\end{align}
do not determine a distribution: "random" is not assoc
|
12,339
|
How to generate random integers between 1 and 4 that have a specific mean?
|
I want to ... uh ... "attenuate" @whuber's amazing answer, which @TomZinger says is too difficult to follow. By that I mean I want to re-describe it in terms that I think Tom Zinger will understand, because it's clearly the best answer here. And as Tom gradually uses the method and finds that he needs, say, to know the distribution of the samples rather than just their mean, whuber's answer will be just what he's looking for.
In short: there are no original ideas here, only a simpler explanation.
You'd like to create $n$ integers from $1$ to $4$ with mean $r$. I'm going to suggest computing $n$ integers from $0$ to $3$ with mean $r-1$, and then adding one to each of them. If you can do that latter thing, you can solve the first problem. For instance, if we want 10 integers between $1$ and $4$ with mean $2.6$,
we can write down these $10$ integers between $0$ and $3$...
0,3,2,1,3,1,2,1,3,0
whose mean is $1.6$; if we increase each by $1$, we get
1,4,3,2,4,2,3,2,4,1
whose mean is $2.6$. It's that simple.
Now let's think about the numbers $0$ through $3$. I'm going to think of those as "how many items do I have in a 'small' set?" I might have no items, one item, two items, or three items. So the list
0,3,2,1,3,1,2,1,3,0
represents ten different small sets. The first is empty; the second has three items, and so on. The total number of items in all the sets is the sum of the ten numbers, i.e., $16$. And the average number of items in each set is this total, divided by $10$, hence $1.6$.
whuber's idea is this: suppose you make yourself ten small sets, with the total number of items being $10t$ for some number $t$. Then the average size of the sets will be exactly $t$. In the same way, if you make yourself $n$ sets with a total number of items being $nt$, the average number of items in a set will be $t$. You say you're interested in the case $n = 100$.
Let's make this concrete for your example: you want 100 items between 1 and 4 whose average is $1.9$. Using the idea of my first paragraph, I'm going to change this to "make $100$ ints between $0$ and $3$ whose average is $0.9$". When I'm done, I'll add $1$ to each of my ints to get a solution to your problem. So my target average is $t = 0.9$.
I want to make $100$ sets, each with between $0$ and $3$ items in it, with an average set-size of $0.9$.
As I've observed above, this means that there have to be a total of $100 \cdot 0.9 = 90$ items in the sets. From the numbers $1, 2, \ldots, 300$, I'm going to select exactly $90$. I can indicate the selected ones by making a list of 300 dots and Xs:
..X....X...XX...
where the list above indicates that I selected the numbers 3, 9, 13, 14, and then many others that I haven't shown because I got sick of typing. :)
I can take this sequence of 300 dots and Xs and break it into three groups of 100 dots each, which I arrange one atop the other, getting something that looks like this:
...X....X..X.....X...
.X...X.....X...X.....
..X...X.X..X......X..
but goes on for a full 100 items in each row. The number of Xs in each row might differ -- there might be 35 in the first row, 24 in the second, and 31 in the third, for instance, and that's OK. [Thanks to whuber for pointing out that I had this wrong in a first draft!]
Now look at each column: each column can be considered as a set, and that set has between 0 and 3 "X"s in it. I can write the tallies below the rows to get something like this:
...X....X..X.....X...
.X...X.....X...X.....
..X...X.X..X......X..
011101102003000101100
That is to say, I've produced 100 numbers, each between $0$ and $3$. And the sum of those 100 numbers must be the number of Xs, total, in all three rows, which was 90. So the average must be $90/100 = 0.9$, as desired.
So here are the steps to getting 100 integers between 1 and 4 whose average is exactly $s$.
Let $t = s - 1$.
Compute $k = 100 t$; that's how many Xs we'll place in the rows, total.
Make a list of 300 dots-or-Xs, $k$ of which are Xs.
Split this into three rows of 100 dots-or-Xs, each containing about a third of the Xs, more or less.
Arrange these in an array, and compute column sums, getting 100 integers between $0$ and $3$. Their average will be $t$.
Add one to each column sum to get 100 integers between $1$ and $4$ whose average is $s$.
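The six steps above can be sketched directly in Python (a hedged illustration only; the function name and defaults are my own, not whuber's):

```python
import random

def column_counts(n=100, k=4, target_mean=1.9):
    """Place X's in a (k-1)-row by n-column grid, count per column, add one."""
    t = target_mean - 1                               # step 1
    total = round(n * t)                              # step 2: how many X's
    cells = random.sample(range((k - 1) * n), total)  # steps 3-4: X positions
    counts = [0] * n
    for c in cells:                                   # step 5: column sums
        counts[c % n] += 1
    return [c + 1 for c in counts]                    # step 6
```

With the defaults this returns 100 integers in $\{1,\dots,4\}$ whose sum is exactly $190$, hence mean $1.9$.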
Now the tricky part of this is really in step 4: how do you pick $300$ items, $k$ of which are "X" and the other $300-k$ of which are "."? Well, it turns out that R has a function that does exactly that.
And then whuber tells you how to use it: you write
tabulate(sample.int((k-1)*n, s-n) %% n + 1, n)
For your particular case, $n = 100$, and $s$, the total number of items in all the small sets, is $100r$, and you want numbers between $1$ and $4$, so $k = 4$, so $k-1$ (the largest size for a 'small set') is 3, so this becomes
tabulate(sample.int(3*100, 100*r-100) %% 100 + 1, 100)
or
tabulate(sample.int(3*100, 100*(r-1)) %% 100 + 1, 100)
or, using my name $t$ for $r - 1$, it becomes
tabulate(sample.int(3*100, 100*t) %% 100 + 1, 100)
The "+1" at the end of his original formula is exactly the step needed to convert from "numbers between $0$ and $3$" to "numbers between $1$ and $4$".
Let's work from the inside out, and let's simplify to $n = 10$ so that I can show sample outputs:
tabulate(sample.int(3*10, 10*t) %% 10 + 1, 10)
And let's aim for $t = 1.9$, so this becomes
tabulate(sample.int(3*10, 10*1.9) %% 10 + 1, 10)
Starting with sample.int(3*10, 10*1.9): this produces a list of $19$ integers between $1$ and $30$ (i.e., it solves the problem of picking the X-locations out of your total -- $300$ positions in your real problem, $30$ in my smaller example).
As you'll recall, we want to produce three rows of ten dots-and-Xs each, something like
X.X.XX.XX.
XXXX.XXX..
XX.X.XXX..
We can read this left-to-right-top-to-bottom (i.e., normal reading order) to produce a list of locations for Xs: the first item's a dot; the second and third are Xs, and so on, so our list of locations starts out $1, 3, 5, 6, \ldots$. When we get to the end of a row, we just keep counting up, so for the picture above, the X-locations would be $1, 3, 5, 6, 8, 9, 11, 12, 13, 14, 16, 17, 18, 21, 22, 24, 26, 27, 28$. Is that clear?
Well, whuber's code produces exactly that list of locations with its innermost section.
The next item is %% 10; that takes a number and produces its remainder on division by ten. So our list becomes $1, 3, 5, 6, 8, 9, 1, 2, 3, 4, 6, 7, 8, 1, 2, 4, 6, 7, 8$. If we break that into three groups --- those that came from numbers between $1$ and $10$, those that came from numbers from $11$ to $20$, and those that came from numbers $21$ to $30$, we get $1, 3, 5, 6, 8, 9$, then $1, 2, 3, 4, 6, 7, 8,$, and finally $1, 2, 4, 6, 7, 8$. Those tell you where the Xs in each of the three rows are. There's a subtle problem here: if there had been an X in position 10 in the first row, the first of our three lists would have been $1, 3, 5, 6, 8, 9, 0$, and the tabulate function doesn't like "0". So whuber adds 1 to each item in the list to get $2, 4, 6, 7, 9, 10, 1$. Let's move on to the overall computation:
tabulate(sample.int(3*10, 10*1.9) %% 10 + 1, 10)
This asks "for those $19$ numbers, each indicating that there's an X in some column, tell me how many times each column (from $1$ to $10$ --- that's what the final "10" tells you) appears," i.e., tell me how many Xs are in each column. The result is
0 3 2 2 2 1 3 2 3 1
which (because of the shift-by-one thing) you have to read as "there are no Xs in the 10th column; there are 3 Xs in the first column; there are 2 Xs in the second column," and so on up to "there is one X in the 9th column".
That gives you ten integers between $0$ and $3$ whose sum is $19$, hence whose average is $1.9$. If you increase each by 1, you get ten integers between $1$ and $4$ whose sum is $29$, hence an average value of $2.9$.
You can generalize to $n = 100$, I hope.
|
How to generate random integers between 1 and 4 that have a specific mean?
|
I want to ... uh ... "attenuate" @whuber's amazing answer, which @TomZinger says is too difficult to follow. By that I mean I want to re-describe it in terms that I think Tom Zinger will understand, b
|
How to generate random integers between 1 and 4 that have a specific mean?
I want to ... uh ... "attenuate" @whuber's amazing answer, which @TomZinger says is too difficult to follow. By that I mean I want to re-describe it in terms that I think Tom Zinger will understand, because it's clearly the best answer here. And as Tom gradually uses the method and finds that he needs, say, to know the distribution of the samples rather than just their mean, whuber's answer will be just what he's looking for.
In short: there are no original ideas here, only a simpler explanation.
You'd like to create $n$ integers from $1$ to $4$ with mean $r$. I'm going to suggest computing $n$ integers from $0$ to $3$ with mean $r-1$, and then adding one to each of them. If you can do that latter thing, you can solve the first problem. For instance, if we want 10 integers between $1$ and $4$ with mean $2.6$,
we can write down these $10$ integers between $0$ and $3$...
0,3,2,1,3,1,2,1,3,0
whose mean is $1.6$; if we increase each by $1$, we get
1,4,3,2,4,2,3,2,4,1
whose mean is $2.6$. It's that simple.
Now let's think about the numbers $0$ through $3$. I'm going to think of those as "how many items do I have in a 'small' set?" I might have no items, one item, two items, or three items. So the list
0,3,2,1,3,1,2,1,3,0
represents ten different small sets. The first is empty; the second has three items, and so on. The total number of items in all the sets is the sum of the ten numbers, i.e., $16$. And the average number of items in each set is this total, divided by $10$, hence $1.6$.
whuber's idea is this: suppose you make yourself ten small sets, with the total number of items being $10t$ for some number $t$. Then the average size of the sets will be exactly $t$. In the same way, if you make yourself $n$ sets with a total number of items being $nt$, the average number of items in a set will be $t$. You say you're interested in the case $n = 100$.
Let's make this concrete for your example: you want 100 items between 1 and 4 whose average is $1.9$. Using the idea of my first paragraph, I'm going to change this to "make $100$ ints between $0$ and $3$ whose average is $0.9$". When I'm done, I'll add $1$ to each of my ints to get a solution to your problem. So my target average is $t = 0.9$.
I want to make $100$ sets, each with between $0$ and $3$ items in it, with an average set-size of $0.9$.
As I've observed above, this means that there have to be a total of $100 \cdot 0.9 = 90$ items in the sets. From the numbers $1, 2, \ldots, 300$, I'm going to select exactly $90$. I can indicate the selected ones by making a list of 300 dots and Xs:
..X....X...XX...
where the list above indicates that I selected the numbers 3, 9, 13, 14, and then many others that I haven't shown because I got sick of typing. :)
I can take this sequence of 300 dots and Xs and break it into three groups of 100 dots each, which I arrange one atop the other, getting something that looks like this:
...X....X..X.....X...
.X...X.....X...X.....
..X...X.X..X......X..
but goes on for a full 100 items in each row. The number of Xs in each row might differ -- there might be 35 in the first row, 24 in the second, and 31 in the third, for instance, and that's OK. [Thanks to whuber for pointing out that I had this wrong in a first draft!]
Now look at each column: each column can be considered as a set, and that set has between 0 and 3 "X"s in it. I can write the tallies below the rows to get something like this:
...X....X..X.....X...
.X...X.....X...X.....
..X...X.X..X......X..
011101102003000101100
That is to say, I've produced 100 numbers, each between 1 and 3. And the sum of those 100 numbers must be the number of Xs, total, in all three rows, which was 90. So the average must be $90/100 = 0.9$, as desired.
So here are the steps to getting 100 integers between 1 and 4 whose average is exactly $s$.
Let $t = s - 1$.
Compute $k = 100 t$; that's how many Xs we'll place in the rows, total.
Make a list of 300 dots-or-Xs, $k$ of which are Xs.
Split this into three rows of 100 dots-or-Xs, each containing about a third of the Xs, more or less.
Arrange these in an array, and compute column sums, getting 100 integers between $0$ and $3$. Their average will be $t$.
Add one to each column sum to get 100 integers between $1$ and $4$ whose average is $s$.
Now the tricky part of this is really in step 4: how do you pick $300$ items, $k$ of which are "X" and the other $300-k$ of which are "."? Well, it turns out that R has a function that does exactly that.
And then whuber tells you how to use it: you write
tabulate(sample.int((k-1)*n, s-n) %% n + 1, n)
For your particular case, $n = 100$, and $s$, the total number of items in all the small sets, is $100r$, and you want numbers between $1$ and $4$, so $k = 4$, so $k-1$ (the largest size for a 'small set') is 3, so this becomes
tabulate(sample.int(3*100, 100*r-100) %% 100 + 1, 100)
or
tabulate(sample.int(3*100, 100*(r-1)) %% 100 + 1, 100)
or, using my name $t$ for $r - 1$, it becomes
tabulate(sample.int(3*100, 100*t) %% 100 + 1, 100)
The "+1" at the end of his original formula is exactly the step needed to convert from "numbers between $0$ and $3$" to "numbers between $1$ and $4$".
Let's work from the inside out, and let's simplify to $n = 10$ so that I can show sample outputs:
tabulate(sample.int(3*10, 10*t) %% 10 + 1, 10)
And let's aim for $t = 1.9$, so this becomes
tabulate(sample.int(3*10, 10*1.9) %% 10 + 1, 10)
Starting with sample.int(3*10, 10*1.9): this produces a list of $19$ integers between $1$ and $30$ (i.e., it solves the problem of picking the X-locations out of your total -- $300$ positions in your real problem, $30$ in my smaller example).
As you'll recall, we want to produce three rows of ten dots-and-Xs each, something like
X.X.XX.XX.
XXXX.XXX..
XX.X.XXX..
We can read this left-to-right-top-to-bottom (i.e., normal reading order) to produce a list of locations for Xs: the first item's a dot; the second and third are Xs, and so on, so our list of locations starts out $1, 3, 5, 6, \ldots$. When we get to the end of a row, we just keep counting up, so for the picture above, the X-locations would be $1, 3, 5, 6, 8, 9, 11, 12, 13, 14, 16, 17, 18, 21, 22, 24, 26, 27, 28$. Is that clear?
Well, whuber's code produces exactly that list of locations with its innermost section.
The next item is %% 10; that takes a number and produces its remainder on division by ten. So our list becomes $1, 3, 5, 6, 8, 9, 1, 2, 3, 4, 6, 7, 8, 1, 2, 4, 6, 7, 8$. If we break that into three groups --- those that came from numbers between $1$ and $10$, those that came from numbers from $11$ to $20$, and those that came from numbers $21$ to $30$, we get $1, 3, 5, 6, 8, 9$, then $1, 2, 3, 4, 6, 7, 8,$, and finally $1, 2, 4, 6, 7, 8$. Those tell you where the Xs in each of the three rows are. There's a subtle problem here: if there had been an X in position 10 in the first row, the first of our three lists would have been $1, 3, 5, 6, 8, 9, 0$, and the tabulate function doesn't like "0". So whuber adds 1 to each item in the list to get $2, 4, 6, 7, 9, 10, 1$. Let's move on to the overall computation:
tabulate(sample.int(3*10, 10*1.9) %% 10 + 1, 10)
This asks "for those $19$ numbers, each indicating that there's an X in some column, tell me how many times each column (from $1$ to $10$ --- that's what the final "10" tells you) appears," i.e., tell me how many Xs are in each column. The result is
0 3 2 2 2 1 3 2 3 1
which (because of the shift-by-one thing) you have to read as "there are no Xs in the 10th column; there are 3 Xs in the first column; there are 2 Xs in the second column," and so on up to "there is one X in the 9th column".
That gives you ten integers between $0$ and $3$ whose sum is $19$, hence whose average is $1.9$. If you increase each by 1, you get ten integers between $1$ and $4$ whose sum is $29$, hence an average value of $2.9$.
You can generalize to $n = 100$, I hope.
|
How to generate random integers between 1 and 4 that have a specific mean?
I want to ... uh ... "attenuate" @whuber's amazing answer, which @TomZinger says is too difficult to follow. By that I mean I want to re-describe it in terms that I think Tom Zinger will understand, b
|
12,340
|
How to generate random integers between 1 and 4 that have a specific mean?
|
You can use sample() and select specific probabilities for each integer. If you sum the product of the probabilities and the integers, you get the expected value of the distribution. So, if you have a mean value in mind, say $k$, you can solve the following equation:
$$k = 1\times P(1) + 2\times P(2) + 3\times P(3) + 4\times P(4)$$
You can arbitrarily choose two of the probabilities and solve for the third, which determines the fourth (since $P(1)=1-(P(2)+P(3)+P(4))$, because the probabilities must sum to $1$). For example, let $k=2.3$, $P(4)=.1$, and $P(3)=.2$. Then we have that
$$k = 1 \times [1-(P(2)+P(3)+P(4))] + 2\times P(2) + 3\times P(3) + 4\times P(4)$$
$$2.3 = [1 - (P(2)+.1+.2)] + 2\times P(2) + 3\times .2 + 4\times .1$$
$$2.3 = .7 + P(2) + .6 + .4$$
$$P(2)=.6$$
$$P(1)=1-(P(2)+P(3)+P(4))=1 - (.6+.2+.1)=.1$$
So you can run x <- sample(c(1, 2, 3, 4), 1e6, replace = TRUE, prob = c(.1, .6, .2, .1)) and mean(x) is approximately $2.3$
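The same bookkeeping can be automated (a hedged Python sketch; `solve_probs` is a made-up helper name, not part of any library):

```python
def solve_probs(k, p3, p4):
    """Solve for P(2) and P(1) given the target mean k and chosen P(3), P(4).
    Substituting P(1) = 1 - (P(2)+P(3)+P(4)) into the mean equation gives
    k = 1 + P(2) + 2*P(3) + 3*P(4)."""
    p2 = k - 1 - 2 * p3 - 3 * p4
    p1 = 1 - (p2 + p3 + p4)
    return p1, p2, p3, p4
```

For $k=2.3$, $P(3)=.2$, $P(4)=.1$ this recovers $P(2)=.6$ and $P(1)=.1$ as in the worked example.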
|
How to generate random integers between 1 and 4 that have a specific mean?
|
You can use sample() and select specific probabilities for each integer. If you sum the product of the probabilities and the integers, you get the expected value of the distribution. So, if you have a
|
How to generate random integers between 1 and 4 that have a specific mean?
You can use sample() and select specific probabilities for each integer. If you sum the product of the probabilities and the integers, you get the expected value of the distribution. So, if you have a mean value in mind, say $k$, you can solve the following equation:
$$k = 1\times P(1) + 2\times P(2) + 3\times P(3) + 4\times P(4)$$
You can arbitrarily choose two of the probabilities and solve for the third, which determines the fourth (since $P(1)=1-(P(2)+P(3)+P(4))$, because the probabilities must sum to $1$). For example, let $k=2.3$, $P(4)=.1$, and $P(3)=.2$. Then we have that
$$k = 1 \times [1-(P(2)+P(3)+P(4))] + 2\times P(2) + 3\times P(3) + 4\times P(4)$$
$$2.3 = [1 - (P(2)+.1+.2)] + 2\times P(2) + 3\times .2 + 4\times .1$$
$$2.3 = .7 + P(2) + .6 + .4$$
$$P(2)=.6$$
$$P(1)=1-(P(2)+P(3)+P(4))=1 - (.6+.2+.1)=.1$$
So you can run x <- sample(c(1, 2, 3, 4), 1e6, replace = TRUE, prob = c(.1, .6, .2, .1)) and mean(x) is approximately $2.3$
|
How to generate random integers between 1 and 4 that have a specific mean?
You can use sample() and select specific probabilities for each integer. If you sum the product of the probabilities and the integers, you get the expected value of the distribution. So, if you have a
|
12,341
|
How to generate random integers between 1 and 4 that have a specific mean?
|
Here is a simple algorithm: Create $n-1$ random integers in the range $[1,4]$ and calculate the $n^{th}$ integer for the mean to be equal to the specified value. If that number is smaller than $1$ or larger than $4$, distribute the surplus/deficit one unit at a time onto the other integers, e.g. if the computed integer is $5$, we have a surplus of $1$, which we may add to the next integer if it's not already $4$, else to the one after it, etc. Then, shuffle the entire array.
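A hedged Python sketch of this algorithm (function name and defaults are my own; it assumes $1 \le \text{mean} \le 4$ and that $n\cdot\text{mean}$ is an integer, so an exact solution exists):

```python
import random

def sample_with_mean(n=100, mean=2.0, lo=1, hi=4):
    """Draw n-1 values, force the n-th, then redistribute any overflow."""
    vals = [random.randint(lo, hi) for _ in range(n - 1)]
    last = round(n * mean) - sum(vals)  # value forced on the n-th integer
    i = 0
    while last > hi:                    # distribute the surplus
        if vals[i] < hi:
            vals[i] += 1
            last -= 1
        else:
            i += 1
    i = 0
    while last < lo:                    # distribute the deficit
        if vals[i] > lo:
            vals[i] -= 1
            last += 1
        else:
            i += 1
    vals.append(last)
    random.shuffle(vals)
    return vals
```

The redistribution always terminates because the other $n-1$ entries have enough combined headroom whenever the target mean lies in $[lo, hi]$.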
|
How to generate random integers between 1 and 4 that have a specific mean?
|
Here is a simple algorithm: Create $n-1$ random integers in the range $[1,4]$ and calculate the $n^{th}$ integer for the mean to be equal to the specified value. If that number is smaller than $1$ or
|
How to generate random integers between 1 and 4 that have a specific mean?
Here is a simple algorithm: Create $n-1$ random integers in the range $[1,4]$ and calculate the $n^{th}$ integer for the mean to be equal to the specified value. If that number is smaller than $1$ or larger than $4$, distribute the surplus/deficit one unit at a time onto the other integers, e.g. if the computed integer is $5$, we have a surplus of $1$, which we may add to the next integer if it's not already $4$, else to the one after it, etc. Then, shuffle the entire array.
|
How to generate random integers between 1 and 4 that have a specific mean?
Here is a simple algorithm: Create $n-1$ random integers in the range $[1,4]$ and calculate the $n^{th}$ integer for the mean to be equal to the specified value. If that number is smaller than $1$ or
|
12,342
|
How to generate random integers between 1 and 4 that have a specific mean?
|
As a supplement to whuber's answer, I've written a script in Python which goes through each step of the sampling scheme. Note that this is meant for illustrative purposes and is not necessarily performant.
Example output:
n=10, s=20, k=4
Starting grid
. . . . . . . . . .
. . . . . . . . . .
. . . . . . . . . .
X X X X X X X X X X
Filled in grid
X X . . X . X . . X
. . X X X . . . . .
. . . . X X . . . .
X X X X X X X X X X
Final grid
X X . . X . X . . X
. . X X X . . . . .
. . . . X X . . . .
X X X X X X X X X X
2 2 2 2 4 2 2 1 1 2
The script:
import numpy as np
# Define the starting parameters
integers = [1, 2, 3, 4]
n = 10
s = 20
k = len(integers)
print(f'n={n}, s={s}, k={k}')
def print_grid(grid, title):
print(f'\n{title}')
for row in grid:
print(' '.join([str(element) for element in row]))
# Create the starting grid
grid = []
for i in range(1, k + 1):
if i < k:
grid.append(['.' for j in range(n)])
else:
grid.append(['X' for j in range(n)])
# Print the starting grid
print_grid(grid, 'Starting grid')
# Randomly and uniformly fill in the remaining rows
indexes = np.random.choice(range((k - 1) * n), s - n, replace=False)
for i in indexes:
row = i // n
col = i % n
grid[row][col] = 'X'
# Print the filled in grid
print_grid(grid, 'Filled in grid')
# Compute how many cells were selected in each column
column_counts = []
for col in range(n):
count = sum(1 for i in range(k) if grid[i][col] == 'X')
column_counts.append(count)
grid.append(column_counts)
# Print the final grid and check that the column counts sum to s
print_grid(grid, 'Final grid')
print()
print(f'Do the column counts sum to {s}? {sum(column_counts) == s}.')
|
How to generate random integers between 1 and 4 that have a specific mean?
|
As a supplement to whuber's answer, I've written a script in Python which goes through each step of the sampling scheme. Note that this is meant for illustrative purposes and is not necessarily perfor
|
How to generate random integers between 1 and 4 that have a specific mean?
As a supplement to whuber's answer, I've written a script in Python which goes through each step of the sampling scheme. Note that this is meant for illustrative purposes and is not necessarily performant.
Example output:
n=10, s=20, k=4
Starting grid
. . . . . . . . . .
. . . . . . . . . .
. . . . . . . . . .
X X X X X X X X X X
Filled in grid
X X . . X . X . . X
. . X X X . . . . .
. . . . X X . . . .
X X X X X X X X X X
Final grid
X X . . X . X . . X
. . X X X . . . . .
. . . . X X . . . .
X X X X X X X X X X
2 2 2 2 4 2 2 1 1 2
The script:
import numpy as np
# Define the starting parameters
integers = [1, 2, 3, 4]
n = 10
s = 20
k = len(integers)
print(f'n={n}, s={s}, k={k}')
def print_grid(grid, title):
print(f'\n{title}')
for row in grid:
print(' '.join([str(element) for element in row]))
# Create the starting grid
grid = []
for i in range(1, k + 1):
if i < k:
grid.append(['.' for j in range(n)])
else:
grid.append(['X' for j in range(n)])
# Print the starting grid
print_grid(grid, 'Starting grid')
# Randomly and uniformly fill in the remaining rows
indexes = np.random.choice(range((k - 1) * n), s - n, replace=False)
for i in indexes:
row = i // n
col = i % n
grid[row][col] = 'X'
# Print the filled in grid
print_grid(grid, 'Filled in grid')
# Compute how many cells were selected in each column
column_counts = []
for col in range(n):
count = sum(1 for i in range(k) if grid[i][col] == 'X')
column_counts.append(count)
grid.append(column_counts)
# Print the final grid and check that the column counts sum to s
print_grid(grid, 'Final grid')
print()
print(f'Do the column counts sum to {s}? {sum(column_counts) == s}.')
|
How to generate random integers between 1 and 4 that have a specific mean?
As a supplement to whuber's answer, I've written a script in Python which goes through each step of the sampling scheme. Note that this is meant for illustrative purposes and is not necessarily perfor
|
12,343
|
How to generate random integers between 1 and 4 that have a specific mean?
|
I've turned whuber's answer into an r function. I hope it helps someone.
n is how many integers you want;
t is the mean you want; and
kMax is the upper limit you want for your returned values
whubernator<-function(n=NULL, t=NULL, kMax=5){
  z = tabulate(sample.int(kMax*n, n*t, replace = FALSE) %% n + 1, n)
return(z)
}
It seems to work as expected:
> w = whubernator(n=10,t=4.2)
> mean(w)
[1] 4.2
> length(w)
[1] 10
> w
[1] 3 5 3 5 5 3 4 5 5 4
It can return 0s, which matches my needs.
> whubernator(n=2,t=0.5)
[1] 1 0
|
How to generate random integers between 1 and 4 that have a specific mean?
|
I've turned whuber's answer into an r function. I hope it helps someone.
n is how many integers you want;
t is the mean you want; and
k is the upper limit you want for your returned values
whubern
|
How to generate random integers between 1 and 4 that have a specific mean?
I've turned whuber's answer into an r function. I hope it helps someone.
n is how many integers you want;
t is the mean you want; and
kMax is the upper limit you want for your returned values
whubernator<-function(n=NULL, t=NULL, kMax=5){
  z = tabulate(sample.int(kMax*n, n*t, replace = FALSE) %% n + 1, n)
return(z)
}
It seems to work as expected:
> w = whubernator(n=10,t=4.2)
> mean(w)
[1] 4.2
> length(w)
[1] 10
> w
[1] 3 5 3 5 5 3 4 5 5 4
It can return 0s, which matches my needs.
> whubernator(n=2,t=0.5)
[1] 1 0
|
How to generate random integers between 1 and 4 that have a specific mean?
I've turned whuber's answer into an r function. I hope it helps someone.
n is how many integers you want;
t is the mean you want; and
k is the upper limit you want for your returned values
whubern
|
12,344
|
Does the normal distribution converge to a uniform distribution when the standard deviation grows to infinity?
|
The other answers already here do a great job of explaining why Gaussian RVs don't converge to anything as the variance increases without bound, but I want to point out a seemingly-uniform property that such a collection of Gaussians does satisfy that I think might be enough for someone to guess that they are becoming uniform, but that turns out to not be strong enough to conclude that.
$\newcommand{\len}{\text{len}}$
Consider a collection of random variables $\{X_1,X_2,\dots\}$ where $X_n \sim \mathcal N(0, n^2)$. Let $A = [a_1,a_2]$ be a fixed interval of finite length, and for some $c \in \mathbb R$ define $B = A +c$, i.e. $B$ is $A$ but just shifted over by $c$. For an interval $I = [i_1,i_2]$ define $\len (I) = i_2-i_1$ to be the length of $I$, and note that $\len(A) = \len(B)$.
I'll now prove the following result:
Result: $\vert P(X_n \in A) - P(X_n\in B)\vert \to 0$ as $n \to \infty$.
I call this uniform-like because it says that the distribution of $X_n$ increasingly has two fixed intervals of equal length having equal probability, no matter how far apart they may be. That's definitely a very uniform feature, but as we'll see this doesn't say anything about the actual distribution of the $X_n$ converging to a uniform one.
Pf: note that $X_n = n X_1$ where $X_1 \sim \mathcal N(0, 1)$ so
$$
P(X_n \in A) = P(a_1 \leq n X_1 \leq a_2) = P\left(\frac{a_1}{n} \leq X_1 \leq \frac{a_2}n\right)
$$
$$
= \frac{1}{\sqrt{2\pi}}\int_{a_1/n}^{a_2/n} e^{-x^2/2}\,\text dx.
$$
I can use the (very rough) bound that $e^{-x^2/2} \leq 1$ to get
$$
\frac{1}{\sqrt{2\pi}}\int_{a_1/n}^{a_2/n} e^{-x^2/2}\,\text dx \leq \frac{1}{\sqrt{2\pi}}\int_{a_1/n}^{a_2/n} 1\,\text dx
$$
$$
= \frac{\text{len}(A)}{n\sqrt{2\pi}}.
$$
I can do the same thing for $B$ to get
$$
P(X_n \in B) \leq \frac{\text{len}(B)}{n\sqrt{2\pi}}.
$$
Putting these together I have
$$
\left\vert P(X_n \in A) - P(X_n \in B)\right\vert \leq \frac{\sqrt 2 \text{len}(A) }{n\sqrt{\pi}} \to 0
$$
as $n\to\infty$ (I'm using the triangle inequality here).
$\square$
How is this different from $X_n$ converging on a uniform distribution? I just proved that the probabilities given to any two fixed intervals of the same finite length get closer and closer, and intuitively that makes sense, as the densities are "flattening out" from $A$'s and $B$'s perspectives.
But in order for $X_n$ to converge on a uniform distribution, I'd need $P(X_n \in I)$ to head towards being proportional to $\text{len}(I)$ for any interval $I$, and that is a very different thing because this needs to apply to any $I$, not just one fixed in advance (and as mentioned elsewhere, this is also not even possible for a distribution with unbounded support).
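The Result can be checked numerically (a hedged Python sketch; the `prob` helper is my own, built from the standard normal CDF via the error function):

```python
from math import erf, sqrt, pi

def prob(a, b, sigma):
    """P(a <= X <= b) for X ~ N(0, sigma^2), via the standard normal CDF."""
    Phi = lambda z: 0.5 * (1 + erf(z / sqrt(2)))
    return Phi(b / sigma) - Phi(a / sigma)

# A = [0, 1] and B = [100, 101] have the same length; the gap between their
# probabilities under N(0, n^2) shrinks, staying within sqrt(2)*len(A)/(n*sqrt(pi)).
gaps = [abs(prob(0, 1, n) - prob(100, 101, n)) for n in (10, 100, 1000)]
```

The gaps decrease with $n$ and each one sits below the bound from the proof.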
|
Does the normal distribution converge to a uniform distribution when the standard deviation grows to
|
The other answers already here do a great job of explaining why Gaussian RVs don't converge to anything as the variance increases without bound, but I want to point out a seemingly-uniform property th
|
Does the normal distribution converge to a uniform distribution when the standard deviation grows to infinity?
The other answers already here do a great job of explaining why Gaussian RVs don't converge to anything as the variance increases without bound, but I want to point out a seemingly-uniform property that such a collection of Gaussians does satisfy that I think might be enough for someone to guess that they are becoming uniform, but that turns out to not be strong enough to conclude that.
$\newcommand{\len}{\text{len}}$
Consider a collection of random variables $\{X_1,X_2,\dots\}$ where $X_n \sim \mathcal N(0, n^2)$. Let $A = [a_1,a_2]$ be a fixed interval of finite length, and for some $c \in \mathbb R$ define $B = A +c$, i.e. $B$ is $A$ but just shifted over by $c$. For an interval $I = [i_1,i_2]$ define $\len (I) = i_2-i_1$ to be the length of $I$, and note that $\len(A) = \len(B)$.
I'll now prove the following result:
Result: $\vert P(X_n \in A) - P(X_n\in B)\vert \to 0$ as $n \to \infty$.
I call this uniform-like because it says that the distribution of $X_n$ increasingly has two fixed intervals of equal length having equal probability, no matter how far apart they may be. That's definitely a very uniform feature, but as we'll see this doesn't say anything about the actual distribution of the $X_n$ converging to a uniform one.
Pf: note that $X_n = n X_1$ where $X_1 \sim \mathcal N(0, 1)$ so
$$
P(X_n \in A) = P(a_1 \leq n X_1 \leq a_2) = P\left(\frac{a_1}{n} \leq X_1 \leq \frac{a_2}n\right)
$$
$$
= \frac{1}{\sqrt{2\pi}}\int_{a_1/n}^{a_2/n} e^{-x^2/2}\,\text dx.
$$
I can use the (very rough) bound that $e^{-x^2/2} \leq 1$ to get
$$
\frac{1}{\sqrt{2\pi}}\int_{a_1/n}^{a_2/n} e^{-x^2/2}\,\text dx \leq \frac{1}{\sqrt{2\pi}}\int_{a_1/n}^{a_2/n} 1\,\text dx
$$
$$
= \frac{\text{len}(A)}{n\sqrt{2\pi}}.
$$
I can do the same thing for $B$ to get
$$
P(X_n \in B) \leq \frac{\text{len}(B)}{n\sqrt{2\pi}}.
$$
Putting these together I have
$$
\left\vert P(X_n \in A) - P(X_n \in B)\right\vert \leq \frac{\sqrt 2 \text{len}(A) }{n\sqrt{\pi}} \to 0
$$
as $n\to\infty$ (I'm using the triangle inequality here).
$\square$
How is this different from $X_n$ converging to a uniform distribution? I just proved that the probabilities given to any two fixed intervals of the same finite length get closer and closer, and intuitively that makes sense since the densities are "flattening out" from $A$ and $B$'s perspectives.
But in order for $X_n$ to converge on a uniform distribution, I'd need $P(X_n \in I)$ to head towards being proportional to $\text{len}(I)$ for any interval $I$, and that is a very different thing because this needs to apply to any $I$, not just one fixed in advance (and as mentioned elsewhere, this is also not even possible for a distribution with unbounded support).
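The bound above is easy to check numerically. Here is a short sketch (the intervals $A=[0,1]$ and $B=[100,101]$ are arbitrary illustrative choices) that computes $\vert P(X_n \in A) - P(X_n \in B)\vert$ exactly from the normal CDF and compares it with the $\sqrt{2}\,\text{len}(A)/(n\sqrt{\pi})$ bound:

```python
import math

def normal_cdf(x, sigma):
    """CDF of N(0, sigma^2) evaluated at x."""
    return 0.5 * (1.0 + math.erf(x / (sigma * math.sqrt(2.0))))

def prob_in_interval(a1, a2, sigma):
    """P(X in [a1, a2]) for X ~ N(0, sigma^2)."""
    return normal_cdf(a2, sigma) - normal_cdf(a1, sigma)

# A = [0, 1] and B = A + 100: two fixed intervals of the same length.
for n in (1, 10, 100, 1000):
    gap = abs(prob_in_interval(0, 1, n) - prob_in_interval(100, 101, n))
    bound = math.sqrt(2.0) * 1.0 / (n * math.sqrt(math.pi))
    print(n, gap, bound)
```

The gap shrinks roughly like $1/n$ and always stays below the bound, even though the two intervals sit 100 units apart.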
|
12,345
|
Does the normal distribution converge to a uniform distribution when the standard deviation grows to infinity?
|
A common mistake in probability is to think that a distribution is uniform because it looks visually flat when all its values are near zero. This is because we tend to see that $f(x)=0.001 \approx 0.000001=f(y)$ and yet $f(x)/f(y)=0.001/0.000001=1000$, i.e. a small interval around $x$ is 1000 times more likely than a small interval around $y$.
It's definitely not uniform on the entire real line in the limit, as there is no uniform distribution on $(-\infty,\infty)$. It's also not even approximately uniform on $[-2\sigma,2\sigma]$.
You can see the latter from the 68-95-99.7 rule you seem to be familiar with. If it were approximately uniform on $[-2\sigma,2\sigma]$, then the probability of being in $[0,\sigma]$ and $[\sigma,2\sigma]$ should be the same, as the two intervals are the same length. But this is not the case: $P([0,\sigma])\approx 0.68/2= 0.34$, yet $P([\sigma,2\sigma])\approx (0.95-0.68)/2 = 0.135$.
When viewed over the entire real line, this sequence of normal distributions doesn't converge to any probability distribution. There are a few ways to see this. As an example, the cdf of a normal with standard deviation $\sigma$ is $F_\sigma(x) = (1/2)\left(1+\mbox{erf}\left(x/(\sqrt{2}\sigma)\right)\right)$, and $\lim_{\sigma\rightarrow\infty} F_\sigma(x) = 1/2$ for all $x$, which is not the cdf of any random variable. In fact, it's not a cdf at all.
The reason for this non-convergence boils down to "mass loss" in the limit. The limiting function of the normal distribution has actually "lost" probability (i.e. it has escaped to infinity). This is related to the concept of tightness of measures, which gives necessary conditions for a sequence of random variables to converge to another random variable.
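A quick numeric sketch of this pointwise collapse of the CDF, using only the standard library's `math.erf`:

```python
import math

def F(x, sigma):
    """CDF of N(0, sigma^2): F(x) = (1/2)(1 + erf(x / (sqrt(2) * sigma)))."""
    return 0.5 * (1.0 + math.erf(x / (math.sqrt(2.0) * sigma)))

# For any fixed x, F(x, sigma) -> 1/2 as sigma grows, so the pointwise limit
# is the constant 1/2 -- which is not a CDF (it never reaches 0 or 1).
for sigma in (1, 10, 1000, 10**6):
    print(sigma, round(F(-5, sigma), 6), round(F(5, sigma), 6))
```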
|
12,346
|
Does the normal distribution converge to a uniform distribution when the standard deviation grows to infinity?
|
Your statement the pdf starts looking like a uniform distribution with bounds given by $[−2σ,2σ]$ is not correct if you adjust $\sigma$ to match the wider standard deviation.
Consider this chart of two normal densities centred on zero. The red curve corresponds to a standard deviation of $1$ and the blue curve to a standard deviation of $10$, and it is indeed the case that the blue curve is almost flat on $[-2,2]$
but for the blue curve with $\sigma=10$, we should actually be looking at its shape on $[-20,20]$. Rescaling both the $x$-axis and $y$-axis by factors of $10$ gives this next plot, and you get exactly the same shape for the blue density in this later plot as the red density in the earlier plot
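This rescaling argument amounts to the identity $\sigma f_\sigma(\sigma x) = f_1(x)$, where $f_\sigma$ is the $\text{N}(0,\sigma^2)$ density; a minimal check:

```python
import math

def normal_pdf(x, sigma):
    """Density of N(0, sigma^2) at x."""
    return math.exp(-x * x / (2.0 * sigma * sigma)) / (sigma * math.sqrt(2.0 * math.pi))

# Stretching the x-axis by sigma and the y-axis by 1/sigma maps the
# N(0, sigma^2) density exactly onto the standard normal density.
sigma = 10.0
for x in (-2.0, -0.5, 0.0, 1.0, 2.0):
    assert abs(sigma * normal_pdf(sigma * x, sigma) - normal_pdf(x, 1.0)) < 1e-12
print("shapes match after rescaling")
```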
|
12,347
|
Does the normal distribution converge to a uniform distribution when the standard deviation grows to infinity?
|
The limit of normal distributions leads to another nice property that reflects a uniform distribution, which is that conditional probabilities for any two bounded sets converge in the limit to the conditional probability that applies for the uniform distribution. I will show this below.
To facilitate our analysis, we will let $Z \sim \text{N}(0,1)$ be a standard normal random variable and we will examine the behaviour of the random variable $X=\sigma Z$ in the limit where $\sigma \rightarrow \infty$. Let $\mathcal{B}$ be a (measurable) bounded set and denote the density bounds over the set as:
$$L_\mathcal{B}(\sigma) \equiv \min_{x \in \mathcal{B}} \ \text{N}(x|0,\sigma^2)
\quad \quad \quad
U_\mathcal{B}(\sigma) \equiv \max_{x \in \mathcal{B}} \ \text{N}(x|0,\sigma^2).$$
Since $\mathcal{B}$ is bounded, it can be shown that:
$$\lim_{\sigma \rightarrow \infty} \frac{U_\mathcal{B}(\sigma)}{L_\mathcal{B}(\sigma)} = 1.$$
Now, let $\mathcal{A}$ and $\mathcal{B}$ be arbitrary (measurable) bounded sets. We can write the conditional probability of the first given the second as:
$$\begin{align}
\mathbb{P}(X \in \mathcal{A} | X \in \mathcal {B})
&= \frac{\mathbb{P}(X \in \mathcal {A} \cap \mathcal{B})}{\mathbb{P}(X \in \mathcal {B})} \\[6pt]
&= \frac{\int_\mathcal{A \cap \mathcal{B}} \text{N}(x|0,\sigma^2) \ dx}{\int_\mathcal{B} \text{N}(x|0,\sigma^2) \ dx} \\[6pt]
&= \frac{\int_\mathcal{A \cap \mathcal{B}} \text{N}(x|0,\sigma^2) \ dx}{\int_\mathcal{A \cap \mathcal{B}} \text{N}(x|0,\sigma^2) \ dx + \int_\mathcal{B-A} \text{N}(x|0,\sigma^2) \ dx} \\[6pt]
\end{align}$$
Let $\lambda_{\ \cdot}$ denote the Lebesgue measure (of a set $\cdot$ shown as a subscript). By applying the bounds to the densities in the above integrals we obtain the conditional probability bounds:
$$\frac{\lambda_\mathcal{A \cap \mathcal{B}} \cdot L_\mathcal{B}(\sigma)}{\lambda_\mathcal{A \cap \mathcal{B}} \cdot L_\mathcal{B}(\sigma) + \lambda_\mathcal{B-A} \cdot U_\mathcal{B}(\sigma)}
\leqslant
\mathbb{P}(X \in \mathcal{A} | X \in \mathcal {B})
\leqslant
\frac{\lambda_\mathcal{A \cap \mathcal{B}} \cdot U_\mathcal{B}(\sigma)}{\lambda_\mathcal{A \cap \mathcal{B}} \cdot U_\mathcal{B}(\sigma) + \lambda_\mathcal{B-A} \cdot L_\mathcal{B}(\sigma)}.$$
Taking the limit $\sigma \rightarrow \infty$ and applying the squeeze theorem then gives:
$$\lim_{\sigma \rightarrow \infty}
\mathbb{P}(X \in \mathcal{A} | X \in \mathcal {B})
= \frac{\lambda_\mathcal{A \cap \mathcal{B}}}{\lambda_\mathcal{B}},$$
which is the standard conditional probability result for a uniform distribution. This confirms that when we take the limit of a sequence of normal distributions with variance approaching infinity, the conditional probability of any bounded set given any other bounded set is just as for a uniform random variable taken over any set encompassing the union of those sets.
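This limit can be checked numerically for intervals, where the probabilities are available in closed form from the normal CDF (the sets $\mathcal{A}=[0,1]$ and $\mathcal{B}=[-2,2]$ below are arbitrary illustrative choices, with $\lambda_{\mathcal{A}\cap\mathcal{B}}/\lambda_{\mathcal{B}} = 1/4$):

```python
import math

def normal_cdf(x, sigma):
    """CDF of N(0, sigma^2) at x."""
    return 0.5 * (1.0 + math.erf(x / (sigma * math.sqrt(2.0))))

def cond_prob(a, b, sigma):
    """P(X in a | X in b) for X ~ N(0, sigma^2), where a is a sub-interval of b."""
    num = normal_cdf(a[1], sigma) - normal_cdf(a[0], sigma)
    den = normal_cdf(b[1], sigma) - normal_cdf(b[0], sigma)
    return num / den

A, B = (0.0, 1.0), (-2.0, 2.0)  # Lebesgue ratio = 1/4
for sigma in (1, 10, 100, 10000):
    print(sigma, cond_prob(A, B, sigma))
```

The conditional probability drifts from its $\sigma=1$ value down towards $1/4$, the uniform answer.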
|
12,348
|
Does the normal distribution converge to a uniform distribution when the standard deviation grows to infinity?
|
Your question is fundamentally flawed. The standard normal distribution is scaled so that $\sigma = 1$. So for some other Gaussian distribution ($\mu = 0, \sigma = \sigma^*$) then the curve between bounds $[-2\sigma^*, 2\sigma^*]$ has the same shape as the standard normal distribution. The only difference is the scaling factor. So if you rescale the Gaussian by dividing by $\sigma^*$, then you end up with the standard normal distribution.
Now if you have a Gaussian distribution ($\mu = 0, \sigma = \sigma^*$) then yes, as $\sigma^* \rightarrow \infty$, the region between $[-2, 2]$ becomes increasingly flat.
|
12,349
|
Does the normal distribution converge to a uniform distribution when the standard deviation grows to infinity?
|
Here is an alternative view of the problem that shows that if $X_n\sim N(\mu;n\sigma)$, where $\mu$ and $\sigma>0$ are fixed and $\{x\}=x-\lfloor x\rfloor$ is the fractional part function, then $\{X_n\}$ converges weakly to a random variable $U$ uniformly distributed over $[0,1]$; in other words, $$X_n\mod 1\stackrel{n\rightarrow\infty}{\Longrightarrow}U$$
The function $\{x\}$ is measurable of period $1$ and so, for any $f\in\mathcal{C}_b(\mathbb{R})$, $f_\sigma(x):=f(\{\sigma x\})$ is measurable bounded and $\frac{1}{\sigma}$-periodic. Let $\phi(x)$ be the density function of the standard normal distribution. Then, by Fejér's formula
$$\begin{align}
E[f(\{\sigma n X+\mu\})]&=\int f(\{n\sigma x+\mu\})\phi(x)\,dx=\int f_\sigma\big(nx+\tfrac{\mu}{\sigma}\big)\phi(x)\,dx\\
&\xrightarrow{n\rightarrow\infty}\Big(\frac{1}{1/\sigma}\int^{1/\sigma}_0f_\sigma(x)\,dx\Big)\int\phi(x)\,dx=\sigma\int^{1/\sigma}_0f(\{\sigma x\})\,dx\\
&=\int^1_0f(x)\,dx=E[f(U)]
\end{align}$$
Edit: After a second inspection and a comment of @whuber, it seems that the view I suggested in my answer actually holds for any random variable $X$ whose law admits a density with respect to Lebesgue measure; that is, if the law of $X$ is $P_X(dx)= \phi(x)\,dx$ where $\phi\in L_1(\mathbb{R})$, then for any $\mu\in\mathbb{R}$
$$\{\sigma n X+\mu\}\stackrel{n\rightarrow\infty}{\Longrightarrow}U(0,1)$$
The same argument used above for $X_n\sim N(\mu;\sigma n)$ works for $\sigma n X+\mu$. So indeed, in this instance, other than scale and location invariance, there is nothing special about normality in the context of $\mod 1$.
One last observation. When variance is increased by the transformation $\sigma X$, where $X$ is a random variable that has density $\phi$ (with respect to Lebesgue's measure $\lambda$) such that $\phi$ is continuous at $0$, and $\phi(0)>0$, then the density of $\sigma X$ flattens out locally as $\sigma\rightarrow\infty$ giving the appearance of convergence to uniform distribution. To be more precise, suppose $A$ is a Borel set with $0<\lambda(A)<\infty$ and $P(X\in A)>0$. Consider the conditional distribution
$$ P^A_\sigma(dx):=P[\sigma X\in dx|\sigma X\in A]$$
Then, for any $f\in\mathcal{C}_b(\mathbb{R})$
$$\begin{align}
E[f(\sigma X)|\sigma X\in A]&=\frac{\int \mathbb{1}_{A}(\sigma x) f(\sigma x)\phi(x)\,dx}{\int\mathbb{1}_A(\sigma x)\phi(x)\,dx}\\
&=\frac{\int \mathbb{1}_{A}(x) f(x)\phi\big(\tfrac{x}{\sigma}\big)\,dx}{\int\mathbb{1}_A(x)\phi\big(\tfrac{x}{\sigma}\big)\,dx}\xrightarrow{\sigma\rightarrow\infty}\frac{1}{\lambda(A)}\int_A\,f(x)\,dx
\end{align}$$
by dominated convergence. Therefore, $P^A_\sigma\stackrel{\sigma\rightarrow\infty}{\Longrightarrow}\frac{1}{\lambda(A)}\mathbb{1}_A(x)\,dx$, that is, $P^A_\sigma$ converges weakly to the uniform distribution over the set $A$. This in particular, holds for $X\sim N(0;1)$.
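A Monte Carlo sketch of the mod-1 result (the sample size and $\sigma = 50$ are arbitrary choices): for large $\sigma$ the fractional parts of $\sigma Z$ should spread evenly over $[0, 1)$.

```python
import random

random.seed(0)

def frac_parts(sigma, n_samples):
    """Fractional parts {sigma * Z} for Z ~ N(0, 1)."""
    return [(sigma * random.gauss(0.0, 1.0)) % 1.0 for _ in range(n_samples)]

xs = frac_parts(50.0, 100_000)
mean = sum(xs) / len(xs)

# Bin into ten equal cells; a uniform law puts ~10% of the mass in each.
counts = [0] * 10
for x in xs:
    counts[min(int(10.0 * x), 9)] += 1

print(round(mean, 3), [round(c / len(xs), 3) for c in counts])
```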
|
12,350
|
How do we know that the probability of rolling 1 and 2 is 1/18?
|
Imagine that you threw your fair six-sided die and you got ⚀. The
result was so fascinating that you called your friend Dave and told
him about it. Since he was curious what he'd get when
throwing his fair six-sided die, he threw it and got ⚁.
A standard die has six sides. If you are not cheating then it lands on each side with equal probability, i.e. $1$ in $6$ times. The probability that you throw ⚀, the same as with the other sides, is $\tfrac{1}{6}$. The probability that you throw ⚀, and your friend throws ⚁, is $\tfrac{1}{6} \times \tfrac{1}{6} = \tfrac{1}{36}$ since the two events are independent and we multiply independent probabilities. Saying it differently, there are $36$ arrangements of such pairs that can be easily listed (as you already did). The probability of the opposite event (you throw ⚁ and your friend throws ⚀) is also $\tfrac{1}{36}$. The probabilities that you throw ⚀, and your friend throws ⚁, or that you throw ⚁, and your friend throws ⚀, are exclusive, so we add them $\tfrac{1}{36} + \tfrac{1}{36} = \tfrac{2}{36}$. Among all the possible arrangements, there are two meeting this condition.
How do we know all of this? Well, on the grounds of probability, combinatorics and logic, but those three need some factual knowledge to rely on. We know on the basis of the experience of thousands of gamblers and some physics, that there is no reason to believe that a fair six-sided die has other than an equiprobable chance of landing on each side. Similarly, we have no reason to suspect that two independent throws are somehow related and influence each other.
You can imagine a box with tickets labeled using all the $2$-combinations (with repetition) of numbers from $1$ to $6$. That would limit the number of possible outcomes to $21$ and change the probabilities. However if you think of such a definition in terms of dice, then you would have to imagine two dice that are somehow glued together. This is something very different than two dice that can function independently and can be thrown alone, landing on each side with equal probability without affecting each other.
All that said, one needs to comment that such models are possible, but not for things like dice. For example, in particle physics based on empirical observations it appeared that the Bose-Einstein statistics of non-distinguishable particles (see also the stars-and-bars problem) is more appropriate than the distinguishable-particles model. You can find some remarks about those models in Probability or Probability via Expectation by Peter Whittle, or in volume one of An Introduction to Probability Theory and Its Applications by William Feller.
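The $\tfrac{1}{18}$ figure is also easy to confirm by simulating two independent fair dice (the sample size below is an arbitrary choice):

```python
import random

random.seed(0)

n_trials = 200_000
hits = sum(
    1
    for _ in range(n_trials)
    if sorted((random.randint(1, 6), random.randint(1, 6))) == [1, 2]
)
print(hits / n_trials)  # should hover near 2/36 = 1/18 ~ 0.0556
```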
|
12,351
|
How do we know that the probability of rolling 1 and 2 is 1/18?
|
I think you are overlooking the fact that it does not matter whether "we" can distinguish the dice or not, but rather it matters that the dice are unique and distinct, and act on their own accord.
So if in the closed box scenario, you open the box and see a 1 and a 2, you don't know whether it is $(1,2)$ or $(2,1)$, because you cannot distinguish the dice. However, both $(1,2)$ and $(2,1)$ would lead to the same visual you see, that is, a 1 and a 2. So there are two outcomes favoring that visual. Similarly for every non-same pair, there are two outcomes favoring each visual, and thus there are 36 possible outcomes.
Mathematically, the formula for the probability of an event is
$$\dfrac{\text{Number of outcomes for the event}}{\text{Number of total possible outcomes}}. $$
However, this formula only holds for when each outcome is equally likely. In the first table, each of those pairs is equally likely, so the formula holds. In your second table, each outcome is not equally likely, so the formula does not work. The way you find the answer using your table is
Probability of 1 and 2 = Probability of $(1,2)$ + Probability of $(2,1)$ = $\dfrac{1}{36} + \dfrac{1}{36} = \dfrac{1}{18}$.
Another way to think about this is that this experiment is the exact same as rolling each die separately, where you can spot Die 1 and Die 2. Thus the outcomes and their probabilities will match with the closed box experiment.
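The weighting argument can be made concrete by enumerating the 36 equally likely ordered outcomes and collapsing them into the 21 "visuals" (a sketch; `frozenset` is just one convenient way to represent an unordered pair):

```python
from fractions import Fraction
from itertools import product

# All 36 equally likely ordered outcomes (Die 1, Die 2).
ordered = list(product(range(1, 7), repeat=2))

# Probability of "a 1 and a 2" by counting ordered outcomes.
p = Fraction(sum(1 for a, b in ordered if {a, b} == {1, 2}), len(ordered))
print(p)  # 1/18

# Collapse to unordered "visuals": mixed pairs inherit weight 2/36,
# doubles inherit weight 1/36, so the 21 visuals are NOT equally likely.
weights = {}
for a, b in ordered:
    key = frozenset((a, b))
    weights[key] = weights.get(key, Fraction(0)) + Fraction(1, 36)
print(len(weights), weights[frozenset((1, 2))], weights[frozenset((3, 3))])
```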
|
12,352
|
How do we know that the probability of rolling 1 and 2 is 1/18?
|
Let's imagine that the first scenario involves rolling one red die and one blue die, while the second involves rolling a pair of identical white dice.
In the first case, we can write down every possible outcome as (red die, blue die), which gives you this table (reproduced from your question):
\begin{array} {|c|c|c|c|c|c|c|}
\hline
\frac{\textrm{Blue}}{\textrm{Red}}&1 & 2 & 3 & 4 & 5 & 6 \\
\hline
1 & (1,1) & \mathbf{(1,2)} & (1,3) & (1,4) & (1,5) & (1,6) \\
\hline
2 & \mathbf{(2,1)} & (2,2) & (2,3) & (2,4) & (2,5) & (2,6) \\
\hline
3 & (3,1) & (3,2) & (3,3) & (3,4) & (3,5) & (3,6) \\
\hline
4 & (4,1) & (4,2) & (4,3) & (4,4) & (4,5) & (4,6) \\
\hline
5 & (5,1) & (5,2) & (5,3) & (5,4) & (5,5) & (5,6) \\
\hline
6 & (6,1) & (6,2) & (6,3) & (6,4) & (6,5) & (6,6) \\
\hline
\end{array}
Our idealized dice are fair (each outcome is equally likely) and you've listed every outcome. Based on this, you correctly conclude that a one and a two occurs with probability $\frac{2}{36}$, or $\frac{1}{18}.$ So far, so good.
Next, suppose you roll two identical dice instead. You've correctly listed all the possible outcomes, but you incorrectly assumed all of these outcomes are equally likely. In particular, the $(n,n)$ outcomes are half as likely as the other outcomes. Because of this, you cannot just calculate the probability by dividing the number of desired outcomes by the total number of outcomes. Instead, you need to weight each outcome by the probability of it occurring. If you run through the math, you'll find that it comes out the same: one doubly likely event in the numerator, out of 15 doubly likely events and 6 singleton events.
The next question is "how could I know that the events aren't all equally likely?" One way to think about this is to imagine what would happen if you could distinguish the two dice. Perhaps you put a tiny mark on each die. This can't change the outcome, but it reduces the problem to the previous one. Alternately, suppose you write the chart out so that instead of Blue/Red, it reads Left Die/Right Die.
As a further exercise, think about the difference between seeing an ordered outcome (red=1, blue=2) vs. an unordered one (one die showing 1, one die showing 2).
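The weighted counting described above can be sketched in a few lines of Python (my own illustration, assuming fair dice; exact fractions avoid rounding):

```python
from fractions import Fraction
from itertools import combinations_with_replacement

# The 21 unordered outcomes of two indistinguishable dice, each weighted by
# how many ordered (red, blue) outcomes produce it: mixed pairs {a, b} with
# a != b get weight 2/36, doubles (n, n) get weight 1/36.
outcomes = {}
for a, b in combinations_with_replacement(range(1, 7), 2):
    outcomes[(a, b)] = Fraction(1, 36) if a == b else Fraction(2, 36)

assert sum(outcomes.values()) == 1   # the weights form a valid distribution
p_one_and_two = outcomes[(1, 2)]     # Fraction(1, 18)
```

Note that 15 of the 21 unordered outcomes carry weight 2/36 and 6 carry weight 1/36, exactly the "doubly likely" versus "singleton" split described above.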
|
12,353
|
How do we know that the probability of rolling 1 and 2 is 1/18?
|
The key idea is that if you list the 36 possible outcomes of two distinguishable dice, you are listing equally probable outcomes. This is not obvious, or axiomatic; it's true only if your dice are fair and not somehow connected. If you list the outcomes of indistinguishable dice, they are not equally probable, because why should they be, any more than the outcomes "win the lottery" and "don't win the lottery" are equally probable.
To get to the conclusion, you need:
We are working with fair dice, for which all six numbers are equally probable.
The two dice are independent, so that the probability of die number two obtaining a particular number is always independent of what number die number one gave. (Imagine instead rolling the same die twice on a sticky surface of some kind that made the second roll come out different.)
Given those two facts about the situation, the rules of probability tell you that the probability of achieving any pair $(a,b)$ is the probability of achieving $a$ on the first die times that of achieving $b$ on the second. If you start lumping $(a,b)$ and $(b,a)$ together, then you don't have the simple independence of events to help you any more, so you can't just multiply probabilities. Instead, you have made a collection of mutually exclusive events (if $a \neq b$), so you can safely add the probabilities of getting $(a,b)$ and $(b,a)$ if they are different.
The idea that you can get probabilities by just counting possibilities relies on assumptions of equal probability and independence. These assumptions are rarely verified in reality, but are almost always satisfied in classroom problems.
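The multiply-then-add reasoning above can be checked exactly (a sketch of my own, assuming fair, independent dice):

```python
from fractions import Fraction
from itertools import product

# Ordered sample space of two fair, independent dice: each of the 36
# ordered pairs (a, b) has probability (1/6) * (1/6) = 1/36.
p_pair = {pair: Fraction(1, 6) * Fraction(1, 6)
          for pair in product(range(1, 7), repeat=2)}

# (1, 2) and (2, 1) are mutually exclusive events, so their
# probabilities add rather than multiply.
p_one_and_two = p_pair[(1, 2)] + p_pair[(2, 1)]   # Fraction(1, 18)
```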
|
12,354
|
How do we know that the probability of rolling 1 and 2 is 1/18?
|
If you translate this into terms of coins - say, flipping two indistinguishable pennies - it becomes a question of only three outcomes: 2 heads, 2 tails, 1 of each, and the problem is easier to spot. The same logic applies, and we see that it's more likely to get 1 of each than to get 2 heads or 2 tails.
That's the slipperiness of your second table - it represents all possible outcomes, but unlike the first table, those outcomes are not all equally likely. It would be ill-defined to try to spell out what each row and column in the second table means - they're only meaningful in the combined table where each outcome has one box, regardless of likelihood, whereas the first table displays "all the equally likely outcomes of die 1, each having its own row," and similarly for columns and die 2.
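The coin version is small enough to enumerate completely (my own illustration, assuming two fair, independent coins):

```python
from itertools import product

# Ordered sample space of two fair coins: HH, HT, TH, TT -- four equally
# likely outcomes, even if the two pennies look identical.
ordered = list(product("HT", repeat=2))

p_two_heads = sum(1 for o in ordered if o == ("H", "H")) / len(ordered)
p_one_each = sum(1 for o in ordered if set(o) == {"H", "T"}) / len(ordered)
# p_two_heads == 1/4, while p_one_each == 1/2: "one of each" is twice
# as likely because it is backed by two ordered outcomes, HT and TH.
```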
|
12,355
|
How do we know that the probability of rolling 1 and 2 is 1/18?
|
Let's start by stating the assumption: indistinguishable dice only roll 21 possible outcomes, while distinguishable dice roll 36 possible outcomes.
To test the difference, get a pair of identical white dice. Coat one in a UV-absorbent material like sunscreen, which is invisible to the naked eye. The dice still appear indistinguishable until you look at them under a black light, when the coated die appears black while the clean die glows.
Conceal the pair of dice in a box and shake it. What are the odds you'll get a 2 and a 1 when you open the box? Intuitively you might think "rolling a 1 and a 2" is just 1 of 21 possible outcomes because you can't tell the dice apart. But if you open the box under a black light, you can tell them apart. When you can tell the dice apart, "rolling a 1 and a 2" is 2 of 36 possible combinations.
Does that mean a black light has the power to change the probability of obtaining a certain outcome, even if the dice are only exposed to the light and observed after they've been rolled? Of course not. Nothing changes the dice after you stop shaking the box. The probability of a given outcome can't change.
Since the original assumption depends on a change that doesn't exist, it's reasonable to conclude that the original assumption was incorrect. But which part of the original assumption is incorrect - that indistinguishable dice only roll 21 possible outcomes, or that distinguishable dice roll 36 possible outcomes?
Clearly the black light experiment demonstrated that observation has no impact on probability (at least on this scale - quantum probability is a different matter) or on the distinctness of objects. The term "indistinguishable" merely describes something which observation cannot differentiate from something else. In other words, the fact that the dice appear the same under some circumstances (i.e. when they aren't under a black light) and not others has no bearing on the fact that they are truly two distinct objects. This would be true even if the circumstances under which you're able to distinguish between them were never discovered.
In short: your ability to distinguish between the dice being rolled is irrelevant when analyzing the probability of a particular outcome. Each die is inherently distinct. All outcomes are based on this fact, not on an observer's point of view.
|
12,356
|
How do we know that the probability of rolling 1 and 2 is 1/18?
|
We can deduce that your second table does not represent the scenario accurately.
You have eliminated all the cells below and left of the diagonal, on the supposed basis that (1, 2) and (2, 1) are congruent and therefore redundant outcomes.
Instead suppose that you roll one die twice in a row. Is it valid to count 1-then-2 and 2-then-1 as the same outcome? Clearly not. Even though the second roll's outcome does not depend on the first, they are still distinct outcomes. You cannot eliminate rearrangements as duplicates. Now, rolling two dice at once is, for this purpose, the same as rolling one die twice in a row. You therefore cannot eliminate rearrangements.
(Still not convinced? Here is an analogy of sorts. You walk from your house to the top of the mountain. Tomorrow you walk back. Was there any point in time on both days when you were at the same place? Maybe? Now imagine you walk from your house to the top of the mountain, and on the same day another person walks from the top of the mountain to your house. Is there any time that day when you meet? Obviously yes. They are the same question. Transposition in time of untangled events does not change deductions that can be made from those events.)
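The one-die-rolled-twice framing can be enumerated directly (a sketch of my own, assuming a fair die):

```python
from itertools import product

# Roll one fair die twice and record the ordered result: 1-then-2 and
# 2-then-1 are distinct outcomes, so rearrangements are not duplicates.
ordered_rolls = list(product(range(1, 7), repeat=2))   # 36 ordered outcomes

favourable = [r for r in ordered_rolls if sorted(r) == [1, 2]]
assert favourable == [(1, 2), (2, 1)]       # two distinct ways, not one
p = len(favourable) / len(ordered_rolls)    # 2/36 = 1/18
```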
|
12,357
|
How do we know that the probability of rolling 1 and 2 is 1/18?
|
If we just observe "Somebody gives me a box. I open the box. There is a $1$ and a $2$", without further information, we don't know anything about the probability.
If we know that the two dice are fair and that they have been rolled, then the probability is 1/18, as all the other answers have explained. The fact that we don't know whether the die showing 1 or the die showing 2 was rolled first doesn't matter, because we must account for both orders - and therefore the probability is 1/18 instead of 1/36.
But if we don't know which process led to the 1-2 combination, we can't know anything about the probability. Maybe the person who handed us the box purposely chose this combination and stuck the dice to the box (probability = 1), or maybe he shook the box, rolling the dice (probability = 1/18), or he might have chosen at random one combination from the 21 combinations in the table you gave in the question (probability = 1/21).
In summary, we know the probability because we know what process led to the final situation, and we can compute the probability for each stage (the probability for each die). The process matters, even if we haven't seen it taking place.
To end the answer, I'll give a couple of examples where the process matters a lot:
We flip ten coins. What's the probability of getting heads all ten times? You can see that this probability (1/1024) is a lot smaller than the probability of getting a 10 if we just choose a random integer between 0 and 10 (1/11).
If you have enjoyed this problem, you can try with the Monty Hall problem. It's a similar problem where the process matters much more than what our intuition would expect.
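The coin example can be worked out with exact fractions (my own illustration, assuming fair, independent flips and a uniform choice over 0..10):

```python
from fractions import Fraction

# Process 1: flip ten fair coins; independence multiplies 1/2 per flip.
p_flips = Fraction(1, 2) ** 10          # 1/1024

# Process 2: choose an integer uniformly from 0..10 and hope for a 10.
p_uniform = Fraction(1, 11)

# Same observable event ("we got 10"), very different probabilities --
# the generating process matters.
assert p_uniform > p_flips
```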
|
12,358
|
How do we know that the probability of rolling 1 and 2 is 1/18?
|
The probability of two independent events A and B both occurring is calculated by multiplying their probabilities.
The probability of rolling a 1 when there are six possible options is 1/6. The probability of rolling a 2 when there are six possible options is 1/6.
1/6 * 1/6 = 1/36.
However, the event is not contingent on time (in other words, it is not required that we roll a 1 before a 2; only that we roll both a 1 and 2 in two rolls).
Thus, I could roll a 1 and then 2 and satisfy the condition of rolling both 1 and 2, or I could roll a 2 and then 1 and satisfy the condition of rolling both 1 and 2.
The probability of rolling 2 and then 1 has the same calculation:
1/6 * 1/6 = 1/36.
The probability of either A or B is the sum of the probabilities. So let's say event A is rolling 1 then 2, and event B is rolling 2 then 1.
Probability of Event A: 1/36
Probability of Event B: 1/36
1/36 + 1/36 = 2/36 which reduces to 1/18.
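The arithmetic above, written out with exact fractions (a sketch of my own, assuming a fair die):

```python
from fractions import Fraction

p_roll = Fraction(1, 6)          # probability of any given face on a fair die

p_1_then_2 = p_roll * p_roll     # event A: roll 1, then 2  -> 1/36
p_2_then_1 = p_roll * p_roll     # event B: roll 2, then 1  -> 1/36

# A and B are mutually exclusive, so "A or B" adds the probabilities.
p_one_and_two = p_1_then_2 + p_2_then_1   # 2/36, which reduces to 1/18
```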
|
12,359
|
How do we know that the probability of rolling 1 and 2 is 1/18?
|
The naive definition of probability is the ratio of favourable outcomes to total outcomes, as you worked it out for your example to be 2/36 = 1/18. The naive definition is applicable when the following two conditions are met:
All outcomes are equally likely
Sample space is finite.
We meet the first requirement (by symmetry of the experiment) if we have the dice labelled. This way, we can distinguish a (5, 6) from a (6, 5). The second requirement is met since the total number of possible pairs is 36, which is finite. Hence, we know that our calculation is correct.
When we do not make that distinction between the dice (no labels on dice), we cannot distinguish a (5, 6) from a (6, 5). We just call it, say, (5, 6). Now, the event (5, 6) becomes more probable than (6, 6) because (5, 6) is supported underneath by both (5, 6) and (6, 5), but (6, 6) was always a single outcome in both labelled and unlabelled sample spaces. This is when the outcomes stop being equally likely (read about "Leibniz's mistake"). Hence, we cannot use this kind of counting to calculate probabilities.
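The labelled and unlabelled sample spaces can be built side by side (my own illustration, assuming fair dice):

```python
from collections import Counter
from itertools import product

# Labelled sample space: 36 equally likely ordered pairs.
labelled = list(product(range(1, 7), repeat=2))
assert len(labelled) == 36

# Unlabelled sample space: collapse (a, b) and (b, a) into one outcome.
unlabelled = Counter(tuple(sorted(pair)) for pair in labelled)
assert len(unlabelled) == 21

# The collapsed outcomes are no longer equally likely: (5, 6) is
# supported by two labelled outcomes, (6, 6) by only one.
assert unlabelled[(5, 6)] == 2 * unlabelled[(6, 6)]
```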
|
12,360
|
Example of a non-negative discrete distribution where the mean (or another moment) does not exist?
|
Let the CDF $F$ equal $1-1/n$ at the integers $n=1,2,\ldots,$ piecewise constant everywhere else, and subject to all criteria to be a CDF. The expectation is
$$\int_{0}^\infty (1-F(x))\mathrm{d}x = 1/2 + 1/3 + 1/4 + \cdots$$
which diverges. In this sense the first moment (and therefore all higher moments) is infinite. (See remarks at the end for further elaboration.)
If you're uncomfortable with this notation, note that for $n=1,2,3,\ldots,$
$${\Pr}_{F}(n) = \frac{1}{n} - \frac{1}{n+1}.$$
This defines a probability distribution since each term is positive and $$\sum_{n=1}^\infty {\Pr}_{F}(n) = \sum_{n=1}^\infty \left(\frac{1}{n} - \frac{1}{n+1}\right) = \lim_{n\to \infty} 1 - \frac{1}{n+1} = 1.$$
The expectation is
$$\sum_{n=1}^\infty n\,{\Pr}_{F}(n) = \sum_{n=1}^\infty n\left(\frac{1}{n} - \frac{1}{n+1}\right) =\sum_{n=1}^\infty \frac{1}{n+1} = 1/2 + 1/3 + 1/4 + \cdots$$
which diverges.
This way of expressing the answer makes it clear that all solutions are obtained via such divergent series. Indeed, if you would like the distribution to be supported on some subset of the positive values $x_1, x_2, \ldots, x_n, \ldots,$ with probabilities $p_1, p_2, \ldots$ summing to unity, then for the expectation to diverge the series which expresses it, namely
$$(a_n) = (x_n p_n),$$
must have divergent partial sums.
Conversely, every divergent series $(a_n)$ of non-negative numbers is associated with many discrete positive distributions having divergent expectation. For instance, given $(a_n)$ you could apply the following algorithm to determine sequences $(x_n)$ and $(p_n)$. Begin by setting $q_n = 2^{-n}$ and $y_n = 2^n a_n$ for $n=1, 2, \ldots.$ Define $\Omega$ to be the set of all $y_n$ that arise in this way, index its elements as $\Omega=\{\omega_1, \omega_2, \ldots, \omega_i, \ldots\},$ and define a probability distribution on $\Omega$ by
$$\Pr(\omega_i) = \sum_{n \mid y_n = \omega_i}q_n.$$
This works because the sum of the $p_n$ equals the sum of the $q_n,$ which is $1,$ and $\Omega$ has at most a countable number of positive elements.
As an example, the series $(a_n) = (1, 1/2, 1, 1/2, \ldots)$ obviously diverges. The algorithm gives
$$y_1 = 2a_1 = 2;\ y_2 = 2^2 a_2 = 2;\ y_3 = 2^3 a_3 = 8; \ldots$$
Thus $$\Omega = \{2, 8, 32, 128, \ldots, 2^{2n+1},\ldots\}$$
is the set of odd positive powers of $2$ and $$p_1 = q_1 + q_2 = 3/4;\ p_2 = q_3 + q_4 = 3/16;\ p_3 = q_5 + q_6 = 3/64; \ldots$$
About infinite and non-existent moments
When all the values are positive, there is no such thing as an "undefined" moment: moments all exist, but they can be infinite in the sense of a divergent sum (or integral), as shown at the outset of this answer.
Generally, all moments are defined for positive random variables, because the sum or integral that expresses them either converges absolutely or it diverges (is "infinite.") In contrast to that, moments can become undefined for variables that take on positive and negative values, because--by definition of the Lebesgue integral--the moment is the difference between a moment of the positive part and a moment of the absolute value of the negative part. If both those are infinite, convergence is not absolute and you face the problem of subtracting an infinity from an infinity: that does not exist.
|
12,361
|
Example of a non-negative discrete distribution where the mean (or another moment) does not exist?
|
Here's a famous example: Let $X$ take value $2^k$ with probability $2^{-k}$, for each integer $k\ge1$. Then $X$ takes values in (a subset of) the positive integers; the total mass is $\sum_{k=1}^\infty 2^{-k}=1$, but its expectation is
$$E(X) = \sum_{k=1}^\infty 2^k P(X=2^k) = \sum_{k=1}^\infty 1 = \infty.
$$
This random variable $X$ arises in the St. Petersburg paradox.
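A quick numerical illustration (a sketch, not part of the original answer): each term $2^k \Pr(X=2^k) = 2^k \cdot 2^{-k}$ contributes exactly $1$, so the truncated expectation equals the number of terms kept and grows without bound, while the total probability mass stays finite.

```python
def truncated_mean(k_max):
    """Partial sums of E(X) = sum_k 2^k * 2^-k; each term is exactly 1."""
    return sum(2 ** k * 2 ** -k for k in range(1, k_max + 1))

# The probability mass sums to 1 even though the mean does not exist:
total_mass = sum(2 ** -k for k in range(1, 51))
```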
|
12,362
|
Example of a non-negative discrete distribution where the mean (or another moment) does not exist?
|
The zeta distribution is a fairly well-known discrete distribution on the positive integers that doesn't have finite mean (for $1<\theta\leq 2$) .
$P(X=x|\theta)={{\frac {1}{\zeta (\theta)}}x^{-\theta}}\,,\: x=1,2,...,\:\theta>1$
where the normalizing constant involves $\zeta(\cdot)$, the Riemann zeta function
(edit: The case $\theta=2$ is very similar to whuber's answer)
Another distribution with similar tail behaviour is the Yule-Simon distribution.
Another example would be the beta-negative binomial distribution with $0<\alpha\leq 1$:
$P(X=x|\alpha ,\beta ,r)={\frac {\Gamma (r+x)}{x!\;\Gamma (r)}}{\frac {\mathrm{B} (\alpha +r,\beta +x)}{\mathrm{B} (\alpha ,\beta )}}\,,\:x=0,1,2...\:\alpha,\beta,r > 0$
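For the borderline case $\theta = 2$, where $\zeta(2) = \pi^2/6$, the terms of the mean are $x \cdot P(X=x) = \frac{6}{\pi^2 x}$, a harmonic series. A small sketch (not from the original answer) showing the mass converging while the mean's partial sums keep growing:

```python
import math

C = 6 / math.pi ** 2  # 1 / zeta(2)

def partial_mass(n):
    """Partial sums of P(X = x) = C / x^2; converges to 1."""
    return sum(C / x ** 2 for x in range(1, n + 1))

def partial_mean(n):
    """Partial sums of x * P(X = x) = C / x; a harmonic series, diverges."""
    return sum(C / x for x in range(1, n + 1))
```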
|
12,363
|
Example of a non-negative discrete distribution where the mean (or another moment) does not exist?
|
some discretized version of the Cauchy distribution
Yes, if you take $p(n)$ as being the average value of the Cauchy distribution in the interval around $n$, then clearly its zeroth moment is the same as that of the Cauchy distribution, and its first moment asymptotically approaches the first moment of the Cauchy distribution. As far as "the interval around $n$", it doesn't really matter how you define that; take $(n-1,n]$, $[n,n+1)$, $[n-.5,n+.5)$, vel cetera, and it will work. For positive integers, you can also take $p(n) =\frac6{(n\pi)^2}$. The zeroth moment sums to one, and the first moment is the sum of $\frac6{n\pi^2}$, which diverges.
And in fact for any polynomial $p(n)$, there is some $c$ such that $\frac c {p(n)}$ sums to 1. If
we then take the $k$th moment, where $k$ is the order of $p(n)$, that will diverge.
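One concrete discretization (a sketch; the binning convention is my choice): assign each nonnegative integer $n$ the half-Cauchy mass of the interval $(n, n+1]$. The total mass is $1$, while $n\,p(n) \sim 2/(\pi n)$, so the mean's partial sums diverge logarithmically.

```python
import math

def half_cauchy_pmf(n):
    """Mass of the standard half-Cauchy density on the interval (n, n+1]."""
    return (2 / math.pi) * (math.atan(n + 1) - math.atan(n))

mass = sum(half_cauchy_pmf(n) for n in range(100_000))
mean_partial = sum(n * half_cauchy_pmf(n) for n in range(100_000))
```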
|
12,364
|
How I can convert distance (Euclidean) to similarity score
|
If $d(p_1,p_2)$ represents the euclidean distance from point $p_1$ to point $p_2$,
$$\frac{1}{1 + d(p_1, p_2)}$$
is commonly used.
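A minimal sketch in Python (`similarity` is just an illustrative name): the score is $1$ at zero distance and decays toward $0$ as the points move apart.

```python
import math

def similarity(p1, p2):
    """Map Euclidean distance to a similarity score in (0, 1]."""
    return 1 / (1 + math.dist(p1, p2))
```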
|
12,365
|
How I can convert distance (Euclidean) to similarity score
|
You could also use $e^{-\mathrm{dist}}$ (that is, $\frac{1}{e^{\mathrm{dist}}}$), where $\mathrm{dist}$ is your desired distance function.
|
12,366
|
How I can convert distance (Euclidean) to similarity score
|
It sounds like you want something akin to cosine similarity, which is itself a similarity score in the unit interval. In fact, a direct relationship between Euclidean distance and cosine similarity exists!
Observe that
$$
||x-x^\prime||^2=(x-x^\prime)^T(x-x^\prime)=||x||^2+||x^\prime||^2-2x^T x^\prime.
$$
While cosine similarity is
$$
f(x,x^\prime)=\frac{x^T x^\prime}{||x||||x^\prime||}=\cos(\theta)
$$ where $\theta$ is the angle between $x$ and $x^\prime$.
When $||x||=||x^\prime||=1,$ we have
$$
||x-x^\prime||^2=2(1-f(x,x^\prime))
$$
and
$$
f(x,x^\prime)=x^T x^\prime,
$$
so
$$
1-\frac{||x-x^\prime||^2}{2}=f(x,x^\prime)=\cos(\theta)
$$ in this special case.
From a computational perspective, it may be more efficient to just compute the cosine, rather than Euclidean distance and then perform the transformation.
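The identity for unit-norm vectors is easy to verify numerically; a sketch (helper names are mine):

```python
import math

def cosine(x, y):
    """Cosine similarity of two vectors."""
    dot = sum(a * b for a, b in zip(x, y))
    nx = math.sqrt(sum(a * a for a in x))
    ny = math.sqrt(sum(b * b for b in y))
    return dot / (nx * ny)

def unit(v):
    """Rescale v to unit norm."""
    n = math.sqrt(sum(a * a for a in v))
    return [a / n for a in v]

x, y = unit([1.0, 2.0, 3.0]), unit([3.0, 1.0, -1.0])
d2 = sum((a - b) ** 2 for a, b in zip(x, y))  # squared Euclidean distance
# For unit-norm vectors, 1 - ||x - x'||^2 / 2 equals cos(theta)
```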
|
12,367
|
How I can convert distance (Euclidean) to similarity score
|
How about a Gaussian kernel ?
$K(x, x') = \exp\left( -\frac{\| x - x' \|^2}{2\sigma^2} \right)$
The distance $\|x - x'\|$ is used in the exponent. The kernel value is in the range $[0, 1]$. There is one tuning parameter $\sigma$. Basically if $\sigma$ is high, $K(x, x')$ will be close to 1 for any $x, x'$. If $\sigma$ is low, a slight distance from $x$ to $x'$ will lead to $K(x,x')$ being close to 0.
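A minimal sketch of this kernel, checking the behaviour described above for large and small $\sigma$:

```python
import math

def gaussian_kernel(x, y, sigma):
    """Gaussian (RBF) kernel: exp(-||x - y||^2 / (2 * sigma^2)), in [0, 1]."""
    d2 = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-d2 / (2 * sigma ** 2))
```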
|
12,368
|
How I can convert distance (Euclidean) to similarity score
|
If you are using a distance metric that is naturally between 0 and 1, like the Hellinger distance, then you can use 1 - distance to obtain similarity.
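For instance (a sketch; the standard discrete Hellinger formula is assumed):

```python
import math

def hellinger(p, q):
    """Hellinger distance between two discrete distributions; lies in [0, 1]."""
    return math.sqrt(0.5 * sum((math.sqrt(a) - math.sqrt(b)) ** 2
                               for a, b in zip(p, q)))

def similarity(p, q):
    """1 - distance: 1 for identical distributions, 0 for disjoint supports."""
    return 1 - hellinger(p, q)
```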
|
12,369
|
ARIMA model interpretation
|
I think that you need to remember that ARIMA models are atheoretic models, so the usual approach to interpreting estimated regression coefficients does not really carry over to ARIMA modelling.
In order to interpret (or understand) estimated ARIMA models, one would do well to be cognizant of the different features displayed by a number of common ARIMA models.
We can explore some of these features by investigating the types of forecasts produced by different ARIMA models. This is the main approach that I've taken below, but a good alternative would be to look at the impulse response functions or dynamic time paths associated with different ARIMA models (or stochastic difference equations). I'll talk about these at the end.
AR(1) Models
Let's consider an AR(1) model for a moment. In this model, we can say that the lower the value of $\alpha_{1}$ then the quicker is the rate of convergence (to the mean). We can try to understand this aspect of AR(1) models by investigating the nature of the forecasts for a small set of simulated AR(1) models with different values for $\alpha_{1}$.
The set of four AR(1) models that we'll discuss can be written in algebraic notation as:
\begin{equation}
Y_{t} = C + 0.95 Y_{t-1} + \nu_{t} ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ (1)\\
Y_{t} = C + 0.8 Y_{t-1} + \nu_{t} ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ (2)\\
Y_{t} = C + 0.5 Y_{t-1} + \nu_{t} ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ (3)\\
Y_{t} = C + 0.4 Y_{t-1} + \nu_{t} ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ (4)
\end{equation}
where $C$ is a constant and the rest of the notation follows from the OP. As can be seen, each model differs only with respect to the value of $\alpha_{1}$.
In the graph below, I have plotted out-of-sample forecasts for these four AR(1) models. It can be seen that the forecasts for the AR(1) model with $\alpha_{1} = 0.95$ converge more slowly than those of the other models, while the forecasts for the AR(1) model with $\alpha_{1} = 0.4$ converge more quickly than the others.
Note: when the red line is horizontal, it has reached the mean of the simulated series.
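This convergence behaviour can be sketched directly: the $h$-step-ahead point forecast of a stationary AR(1) is $\mu + \alpha_1^h (Y_T - \mu)$ with $\mu = C/(1-\alpha_1)$, so a smaller $\alpha_1$ gives geometrically faster convergence to the mean. A minimal sketch (not the R code used for the plots):

```python
def ar1_forecasts(c, alpha, y_last, horizon):
    """h-step-ahead point forecasts of Y_t = C + alpha * Y_{t-1} + nu_t."""
    mean = c / (1 - alpha)
    return [mean + alpha ** h * (y_last - mean) for h in range(1, horizon + 1)]

slow = ar1_forecasts(0.0, 0.95, 1.0, 20)  # model (1): slow convergence
fast = ar1_forecasts(0.0, 0.40, 1.0, 20)  # model (4): fast convergence
```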
MA(1) Models
Now let's consider four MA(1) models with different values for $\theta_{1}$. The four models we'll discuss can be written as:
\begin{equation}
Y_{t} = C + 0.95 \nu_{t-1} + \nu_{t} ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ (5)\\
Y_{t} = C + 0.8 \nu_{t-1} + \nu_{t} ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ (6)\\
Y_{t} = C + 0.5 \nu_{t-1} + \nu_{t} ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ (7)\\
Y_{t} = C + 0.4 \nu_{t-1} + \nu_{t} ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ (8)
\end{equation}
In the graph below, I have plotted out-of-sample forecasts for these four different MA(1) models. As the graph shows, the behaviour of the forecasts in all four cases are markedly similar; quick (linear) convergence to the mean. Notice that there is less variety in the dynamics of these forecasts compared to those of the AR(1) models.
Note: when the red line is horizontal, it has reached the mean of the simulated series.
AR(2) Models
Things get a lot more interesting when we start to consider more complex ARIMA models. Take for example AR(2) models. These are just a small step up from the AR(1) model, right? Well, one might like to think that, but the dynamics of AR(2) models are quite rich in variety as we'll see in a moment.
Let's explore four different AR(2) models:
\begin{equation}
Y_{t} = C + 1.7 Y_{t-1} -0.8 Y_{t-2} + \nu_{t} ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ (9)\\
Y_{t} = C + 0.9 Y_{t-1} -0.2 Y_{t-2} + \nu_{t} ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ (10)\\
Y_{t} = C + 0.5 Y_{t-1} -0.2 Y_{t-2} + \nu_{t} ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ (11)\\
Y_{t} = C + 0.1 Y_{t-1} -0.7 Y_{t-2} + \nu_{t} ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ (12)
\end{equation}
The out-of-sample forecasts associated with each of these models are shown in the graph below. It is quite clear that they each differ significantly, and they are also quite a varied bunch in comparison to the forecasts that we've seen above - except for model 2's forecasts (top right plot), which behave similarly to those for an AR(1) model.
Note: when the red line is horizontal, it has reached the mean of the simulated series.
The key point here is that not all AR(2) models have the same dynamics! For example, if the condition,
\begin{equation}
\alpha_{1}^{2}+4\alpha_{2} < 0,
\end{equation}
is satisfied then the AR(2) model displays pseudo periodic behaviour and as a result its forecasts will appear as stochastic cycles. On the other hand, if this condition is not satisfied, stochastic cycles will not be present in the forecasts; instead, the forecasts will be more similar to those for an AR(1) model.
It's worth noting that the above condition comes from the general solution to the homogeneous form of the linear, autonomous, second-order difference equation (with complex roots). If this is foreign to you, I recommend both Chapter 1 of Hamilton (1994) and Chapter 20 of Hoy et al. (2001).
Testing the above condition for the four AR(2) models results in the following:
\begin{equation}
(1.7)^{2} + 4 (-0.8) = -0.31 < 0 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ (13)\\
(0.9)^{2} + 4 (-0.2) = 0.01 > 0 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ (14)\\
(0.5)^{2} + 4 (-0.2) = -0.55 < 0 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ (15)\\
(0.1)^{2} + 4 (-0.7) = -2.79 < 0 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ (16)
\end{equation}
As expected from the appearance of the plotted forecasts, the condition is satisfied for each of the four models except for model 2. Recall from the graph that model 2's forecasts behave ("normally") like an AR(1) model's forecasts. The forecasts associated with the other models contain cycles.
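These checks are trivial to script; a sketch (the model labels are mine):

```python
# AR(2) coefficients (alpha_1, alpha_2) for models (9)-(12)
models = {
    "model 1": (1.7, -0.8),
    "model 2": (0.9, -0.2),
    "model 3": (0.5, -0.2),
    "model 4": (0.1, -0.7),
}
# Forecasts contain stochastic cycles iff alpha_1^2 + 4 * alpha_2 < 0
discriminant = {k: a1 ** 2 + 4 * a2 for k, (a1, a2) in models.items()}
cyclical = {k: d < 0 for k, d in discriminant.items()}
```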
Application - Modelling Inflation
Now that we have some background under our feet, let's try to interpret an AR(2) model in an application. Consider the following model for the inflation rate ($\pi_{t}$):
\begin{equation}
\pi_{t} = C + \alpha_{1} \pi_{t-1} + \alpha_{2} \pi_{t-2} + \nu_{t}.
\end{equation}
A natural expression to associate with such a model would be something like: "inflation today depends on the level of inflation yesterday and on the level of inflation on the day before yesterday". Now, I wouldn't argue against such an interpretation, but I'd suggest that some caution be drawn and that we ought to dig a bit deeper to devise a proper interpretation. In this case we could ask, in which way is inflation related to previous levels of inflation? Are there cycles? If so, how many cycles are there? Can we say something about the peak and trough? How quickly do the forecasts converge to the mean? And so on.
These are the sorts of questions we can ask when trying to interpret an AR(2) model and as you can see, it's not as straightforward as taking an estimated coefficient and saying "a 1 unit increase in this variable is associated with a so-many unit increase in the dependent variable" - making sure to attach the ceteris paribus condition to that statement, of course.
Bear in mind that in our discussion so far, we have only explored a selection of AR(1), MA(1), and AR(2) models. We haven't even looked at the dynamics of mixed ARMA models and ARIMA models involving higher lags.
To show how difficult it would be to interpret models that fall into that category, imagine another inflation model - an ARMA(3,1) with $\alpha_{2}$ constrained to zero:
\begin{equation}
\pi_{t} = C + \alpha_{1} \pi_{t-1} + \alpha_{3} \pi_{t-3} + \theta_{1}\nu_{t-1} + \nu_{t}.
\end{equation}
Say what you'd like, but here it's better to try to understand the dynamics of the system itself. As before, we can look and see what sort of forecasts the model produces, but the alternative approach that I mentioned at the beginning of this answer was to look at the impulse response function or time path associated with the system.
This brings me to next part of my answer where we'll discuss impulse response functions.
Impulse Response Functions
Those who are familiar with vector autoregressions (VARs) will be aware that one usually tries to understand the estimated VAR model by interpreting the impulse response functions; rather than trying to interpret the estimated coefficients which are often too difficult to interpret anyway.
The same approach can be taken when trying to understand ARIMA models. That is, rather than try to make sense of (complicated) statements like "today's inflation depends on yesterday's inflation and on inflation from two months ago, but not on last week's inflation!", we instead plot the impulse response function and try to make sense of that.
Application - Four Macro Variables
For this example (based on Leamer(2010)), let's consider four ARIMA models based on four macroeconomic variables; GDP growth, inflation, the unemployment rate, and the short-term interest rate. The four models have been estimated and can be written as:
\begin{eqnarray}
Y_{t} &=& 3.20 + 0.22 Y_{t-1} + 0.15 Y_{t-2} + \nu_{t}\\
\pi_{t} &=& 4.10 + 0.46 \pi_{t-1} + 0.31\pi_{t-2} + 0.16\pi_{t-3} + 0.01\pi_{t-4} + \nu_{t}\\
u_{t} &=& 6.2+ 1.58 u_{t-1} - 0.64 u_{t-2} + \nu_{t}\\
r_{t} &=& 6.0 + 1.18 r_{t-1} - 0.23 r_{t-2} + \nu_{t}
\end{eqnarray}
where $Y_{t}$ denotes GDP growth at time $t$, $\pi$ denotes inflation, $u$ denotes the unemployment rate, and $r$ denotes the short-term interest rate (3-month treasury).
The equations show that GDP growth, the unemployment rate, and the short-term interest rate are modeled as AR(2) processes while inflation is modeled as an AR(4) process.
Rather than try to interpret the coefficients in each equation, let's plot the impulse response functions (IRFs) and interpret them instead. The graph below shows the impulse response functions associated with each of these models.
Don't take this as a masterclass in interpreting IRFs - think of it more like a basic introduction - but anyway, to help us interpret the IRFs we'll need to accustom ourselves with two concepts; momentum and persistence.
These two concepts are defined in Leamer (2010) as follows:
Momentum: Momentum is the tendency to continue moving in the same
direction. The momentum effect can offset the force of regression
(convergence) toward the mean and can allow a variable to move away
from its historical mean, for some time, but not indefinitely.
Persistence: A persistence variable will hang around where it is and
converge slowly only to the historical mean.
Equipped with this knowledge, we now ask the question: suppose a variable is at its historical mean and it receives a temporary one unit shock in a single period, how will the variable respond in future periods? This is akin to asking those questions we asked before, such as, do the forecasts contains cycles?, how quickly do the forecasts converge to the mean?, etc.
At last, we can now attempt to interpret the IRFs.
Following a one unit shock, the unemployment rate and short-term interest rate (3-month treasury) are carried further from their historical mean. This is the momentum effect. The IRFs also show that the unemployment rate overshoots to a greater extent than does the short-term interest rate.
We also see that all of the variables return to their historical means (none of them "blow up"), although they each do this at different rates. For example, GDP growth returns to its historical mean after about 6 periods following a shock, the unemployment rate returns to its historical mean after about 18 periods, but inflation and short-term interest take longer than 20 periods to return to their historical means. In this sense, GDP growth is the least persistent of the four variables while inflation can be said to be highly persistent.
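The IRFs themselves come from a simple recursion: for an AR(p) with coefficients $\alpha_1, \ldots, \alpha_p$, the responses satisfy $\psi_0 = 1$ and $\psi_h = \sum_{j} \alpha_j \psi_{h-j}$. A sketch using two of the estimates above (a stand-in for the R code behind the plots):

```python
def ar_irf(coeffs, horizon):
    """Impulse responses psi_0..psi_horizon of an AR(p) process."""
    psi = [1.0]
    for h in range(1, horizon + 1):
        psi.append(sum(a * psi[h - j]
                       for j, a in enumerate(coeffs, start=1) if h - j >= 0))
    return psi

gdp = ar_irf([0.22, 0.15], 20)            # GDP growth: no momentum, fast decay
unemployment = ar_irf([1.58, -0.64], 20)  # unemployment: momentum, overshoots
```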
I think it's a fair conclusion to say that we've managed (at least partially) to make sense of what the four ARIMA models are telling us about each of the four macro variables.
Conclusion
Rather than try to interpret the estimated coefficients in ARIMA models (difficult for many models), try instead to understand the dynamics of the system. We can attempt this by exploring the forecasts produced by our model and by plotting the impulse response function.
[I'm happy enough to share my R code if anyone wants it.]
References
Hamilton, J. D. (1994). Time series analysis (Vol. 2). Princeton: Princeton university press.
Leamer, E. (2010). Macroeconomic Patterns and Stories - A Guide for MBAs, Springer.
Stengos, T., M. Hoy, J. Livernois, C. McKenna and R. Rees (2001). Mathematics for Economics, 2nd edition, MIT Press: Cambridge, MA.
|
ARIMA model interpretation
|
I think that you need to remember that ARIMA models are atheoretic models, so the usual approach to interpreting estimated regression coefficients does not really carry over to ARIMA modelling.
In ord
|
ARIMA model interpretation
I think that you need to remember that ARIMA models are atheoretic models, so the usual approach to interpreting estimated regression coefficients does not really carry over to ARIMA modelling.
In order to interpret (or understand) estimated ARIMA models, one would do well to be cognizant of the different features displayed by a number of common ARIMA models.
We can explore some of these features by investigating the types of forecasts produced by different ARIMA models. This is the main approach that I've taken below, but a good alternative would be to look at the impulse response functions or dynamic time paths associated with different ARIMA models (or stochastic difference equations). I'll talk about these at the end.
AR(1) Models
Let's consider an AR(1) model for a moment. In this model, we can say that the lower the value of $\alpha_{1}$ then the quicker is the rate of convergence (to the mean). We can try to understand this aspect of AR(1) models by investigating the nature of the forecasts for a small set of simulated AR(1) models with different values for $\alpha_{1}$.
The set of four AR(1) models that we'll discuss can be written in algebraic notation as:
\begin{equation}
Y_{t} = C + 0.95 Y_{t-1} + \nu_{t} ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ (1)\\
Y_{t} = C + 0.8 Y_{t-1} + \nu_{t} ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ (2)\\
Y_{t} = C + 0.5 Y_{t-1} + \nu_{t} ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ (3)\\
Y_{t} = C + 0.4 Y_{t-1} + \nu_{t} ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ (4)
\end{equation}
where $C$ is a constant and the rest of the notation follows from the OP. As can be seen, each model differs only with respect to the value of $\alpha_{1}$.
In the graph below, I have plotted out-of-sample forecasts for these four AR(1) models. It can be seen that the forecasts for the AR(1) model with $\alpha_{1} = 0.95$ converges at a slower rate with respect to the other models. The forecasts for the AR(1) model with $\alpha_{1} = 0.4$ converges at a quicker rate than the others.
Note: when the red line is horizontal, it has reached the mean of the simulated series.
MA(1) Models
Now let's consider four MA(1) models with different values for $\theta_{1}$. The four models we'll discuss can be written as:
\begin{equation}
Y_{t} = C + 0.95 \nu_{t-1} + \nu_{t} ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ (5)\\
Y_{t} = C + 0.8 \nu_{t-1} + \nu_{t} ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ (6)\\
Y_{t} = C + 0.5 \nu_{t-1} + \nu_{t} ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ (7)\\
Y_{t} = C + 0.4 \nu_{t-1} + \nu_{t} ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ (8)
\end{equation}
In the graph below, I have plotted out-of-sample forecasts for these four different MA(1) models. As the graph shows, the behaviour of the forecasts in all four cases is markedly similar: quick (linear) convergence to the mean. Notice that there is less variety in the dynamics of these forecasts compared to those of the AR(1) models.
Note: when the red line is horizontal, it has reached the mean of the simulated series.
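The similarity is no accident: an MA(1) forecast only uses the last shock, so it reaches the mean after a single step regardless of $\theta_{1}$. A minimal sketch (the constant $C$ and last shock below are illustrative values, not from the text):

```python
# Sketch: h-step-ahead forecasts for an MA(1) model
# Y_t = C + theta*nu_{t-1} + nu_t. Only the one-step forecast uses the
# last observed shock; from h = 2 onward the forecast is simply the
# mean C, which is why all four MA(1) forecast paths look so alike.

def ma1_forecasts(c, theta, last_shock, horizon):
    return [c + theta * last_shock if h == 1 else c
            for h in range(1, horizon + 1)]

print(ma1_forecasts(5.0, 0.8, 1.5, 4))  # one step off the mean, then flat
```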
AR(2) Models
Things get a lot more interesting when we start to consider more complex ARIMA models. Take for example AR(2) models. These are just a small step up from the AR(1) model, right? Well, one might like to think that, but the dynamics of AR(2) models are quite rich in variety as we'll see in a moment.
Let's explore four different AR(2) models:
\begin{equation}
Y_{t} = C + 1.7 Y_{t-1} -0.8 Y_{t-2} + \nu_{t} ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ (9)\\
Y_{t} = C + 0.9 Y_{t-1} -0.2 Y_{t-2} + \nu_{t} ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ (10)\\
Y_{t} = C + 0.5 Y_{t-1} -0.2 Y_{t-2} + \nu_{t} ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ (11)\\
Y_{t} = C + 0.1 Y_{t-1} -0.7 Y_{t-2} + \nu_{t} ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ (12)
\end{equation}
The out-of-sample forecasts associated with each of these models are shown in the graph below. It is quite clear that they each differ significantly and they are also quite a varied bunch in comparison to the forecasts that we've seen above - except for model 2's forecasts (top right plot) which behave similarly to those for an AR(1) model.
Note: when the red line is horizontal, it has reached the mean of the simulated series.
The key point here is that not all AR(2) models have the same dynamics! For example, if the condition,
\begin{equation}
\alpha_{1}^{2}+4\alpha_{2} < 0,
\end{equation}
is satisfied then the AR(2) model displays pseudo periodic behaviour and as a result its forecasts will appear as stochastic cycles. On the other hand, if this condition is not satisfied, stochastic cycles will not be present in the forecasts; instead, the forecasts will be more similar to those for an AR(1) model.
It's worth noting that the above condition comes from the general solution to the homogeneous form of the linear, autonomous, second-order difference equation (with complex roots). If this is foreign to you, I recommend both Chapter 1 of Hamilton (1994) and Chapter 20 of Hoy et al. (2001).
Testing the above condition for the four AR(2) models results in the following:
\begin{equation}
(1.7)^{2} + 4 (-0.8) = -0.31 < 0 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ (13)\\
(0.9)^{2} + 4 (-0.2) = 0.01 > 0 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ (14)\\
(0.5)^{2} + 4 (-0.2) = -0.55 < 0 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ (15)\\
(0.1)^{2} + 4 (-0.7) = -2.79 < 0 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ (16)
\end{equation}
As expected by the appearance of the plotted forecasts, the condition is satisfied for each of the four models except for model 2. Recall from the graph, model 2's forecasts behave ("normally") similarly to an AR(1) model's forecasts. The forecasts associated with the other models contain cycles.
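Checking the condition is mechanical, so it is easy to script. A minimal sketch for the four AR(2) models:

```python
# Sketch: checking the complex-roots condition alpha1**2 + 4*alpha2 < 0
# for the four AR(2) models in the text; when it holds, the forecasts
# display pseudo periodic (cycling) behaviour.

models = {1: (1.7, -0.8), 2: (0.9, -0.2), 3: (0.5, -0.2), 4: (0.1, -0.7)}

for k, (a1, a2) in models.items():
    disc = a1**2 + 4 * a2
    print(k, round(disc, 2), "cycles" if disc < 0 else "no cycles")
```

Only model 2 has a positive value (0.01), so only its forecasts behave like an AR(1) model's.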
Application - Modelling Inflation
Now that we have some background under our feet, let's try to interpret an AR(2) model in an application. Consider the following model for the inflation rate ($\pi_{t}$):
\begin{equation}
\pi_{t} = C + \alpha_{1} \pi_{t-1} + \alpha_{2} \pi_{t-2} + \nu_{t}.
\end{equation}
A natural expression to associate with such a model would be something like: "inflation today depends on the level of inflation yesterday and on the level of inflation on the day before yesterday". Now, I wouldn't argue against such an interpretation, but I'd suggest that some caution be exercised and that we ought to dig a bit deeper to devise a proper interpretation. In this case we could ask, in which way is inflation related to previous levels of inflation? Are there cycles? If so, how many cycles are there? Can we say something about the peak and trough? How quickly do the forecasts converge to the mean? And so on.
These are the sorts of questions we can ask when trying to interpret an AR(2) model and as you can see, it's not as straightforward as taking an estimated coefficient and saying "a 1 unit increase in this variable is associated with a so-many unit increase in the dependent variable" - making sure to attach the ceteris paribus condition to that statement, of course.
Bear in mind that in our discussion so far, we have only explored a selection of AR(1), MA(1), and AR(2) models. We haven't even looked at the dynamics of mixed ARMA models and ARIMA models involving higher lags.
To show how difficult it would be to interpret models that fall into that category, imagine another inflation model - an ARMA(3,1) with $\alpha_{2}$ constrained to zero:
\begin{equation}
\pi_{t} = C + \alpha_{1} \pi_{t-1} + \alpha_{3} \pi_{t-3} + \theta_{1}\nu_{t-1} + \nu_{t}.
\end{equation}
Say what you'd like, but here it's better to try to understand the dynamics of the system itself. As before, we can look and see what sort of forecasts the model produces, but the alternative approach that I mentioned at the beginning of this answer was to look at the impulse response function or time path associated with the system.
This brings me to next part of my answer where we'll discuss impulse response functions.
Impulse Response Functions
Those who are familiar with vector autoregressions (VARs) will be aware that one usually tries to understand the estimated VAR model by interpreting the impulse response functions; rather than trying to interpret the estimated coefficients which are often too difficult to interpret anyway.
The same approach can be taken when trying to understand ARIMA models. That is, rather than try to make sense of (complicated) statements like "today's inflation depends on yesterday's inflation and on inflation from two months ago, but not on last week's inflation!", we instead plot the impulse response function and try to make sense of that.
Application - Four Macro Variables
For this example (based on Leamer (2010)), let's consider four ARIMA models based on four macroeconomic variables; GDP growth, inflation, the unemployment rate, and the short-term interest rate. The four models have been estimated and can be written as:
\begin{eqnarray}
Y_{t} &=& 3.20 + 0.22 Y_{t-1} + 0.15 Y_{t-2} + \nu_{t}\\
\pi_{t} &=& 4.10 + 0.46 \pi_{t-1} + 0.31\pi_{t-2} + 0.16\pi_{t-3} + 0.01\pi_{t-4} + \nu_{t}\\
u_{t} &=& 6.2+ 1.58 u_{t-1} - 0.64 u_{t-2} + \nu_{t}\\
r_{t} &=& 6.0 + 1.18 r_{t-1} - 0.23 r_{t-2} + \nu_{t}
\end{eqnarray}
where $Y_{t}$ denotes GDP growth at time $t$, $\pi$ denotes inflation, $u$ denotes the unemployment rate, and $r$ denotes the short-term interest rate (3-month treasury).
The equations show that GDP growth, the unemployment rate, and the short-term interest rate are modeled as AR(2) processes while inflation is modeled as an AR(4) process.
Rather than try to interpret the coefficients in each equation, let's plot the impulse response functions (IRFs) and interpret them instead. The graph below shows the impulse response functions associated with each of these models.
Don't take this as a masterclass in interpreting IRFs - think of it more like a basic introduction - but anyway, to help us interpret the IRFs we'll need to accustom ourselves with two concepts; momentum and persistence.
These two concepts are defined in Leamer (2010) as follows:
Momentum: Momentum is the tendency to continue moving in the same
direction. The momentum effect can offset the force of regression
(convergence) toward the mean and can allow a variable to move away
from its historical mean, for some time, but not indefinitely.
Persistence: A persistent variable will hang around where it is and
converge only slowly to the historical mean.
Equipped with this knowledge, we now ask the question: suppose a variable is at its historical mean and it receives a temporary one unit shock in a single period, how will the variable respond in future periods? This is akin to asking those questions we asked before, such as, do the forecasts contain cycles?, how quickly do the forecasts converge to the mean?, etc.
At last, we can now attempt to interpret the IRFs.
Following a one unit shock, the unemployment rate and short-term interest rate (3-month treasury) are carried further from their historical mean. This is the momentum effect. The IRFs also show that the unemployment rate overshoots to a greater extent than does the short-term interest rate.
We also see that all of the variables return to their historical means (none of them "blow up"), although they each do this at different rates. For example, GDP growth returns to its historical mean after about 6 periods following a shock, the unemployment rate returns to its historical mean after about 18 periods, but inflation and the short-term interest rate take longer than 20 periods to return to their historical means. In this sense, GDP growth is the least persistent of the four variables while inflation can be said to be highly persistent.
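For an AR(2) model, the impulse responses can be computed by iterating the homogeneous difference equation forward from a unit shock (the constant drops out because we measure deviations from the mean). A sketch using the unemployment-rate coefficients from the equations above:

```python
# Sketch: impulse responses for an AR(2) model, obtained by iterating
#   psi_0 = 1, psi_1 = a1, psi_h = a1*psi_{h-1} + a2*psi_{h-2}.
# Coefficients below are the unemployment-rate model from the text
# (a1 = 1.58, a2 = -0.64).

def ar2_irf(a1, a2, horizon):
    psi = [1.0, a1]
    for _ in range(horizon - 1):
        psi.append(a1 * psi[-1] + a2 * psi[-2])
    return psi[:horizon + 1]

irf_u = ar2_irf(1.58, -0.64, 20)
print(max(irf_u))      # momentum: the response overshoots above 1 at first
print(abs(irf_u[-1]))  # before eventually decaying back toward 0
```

The overshoot above 1 in the first few periods is the momentum effect described above, and the slow decay back to zero is the return to the historical mean.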
I think it's a fair conclusion to say that we've managed (at least partially) to make sense of what the four ARIMA models are telling us about each of the four macro variables.
Conclusion
Rather than try to interpret the estimated coefficients in ARIMA models (difficult for many models), try instead to understand the dynamics of the system. We can attempt this by exploring the forecasts produced by our model and by plotting the impulse response function.
[I'm happy enough to share my R code if anyone wants it.]
References
Hamilton, J. D. (1994). Time series analysis (Vol. 2). Princeton: Princeton university press.
Leamer, E. (2010). Macroeconomic Patterns and Stories - A Guide for MBAs, Springer.
Stengos, T., M. Hoy, J. Livernois, C. McKenna and R. Rees (2001). Mathematics for Economics, 2nd edition, MIT Press: Cambridge, MA.
|
12,370
|
ARIMA model interpretation
|
Note that due to Wold's decomposition theorem you can rewrite any stationary ARMA model as a $MA(\infty)$ model, i.e. :
$$\Delta Y_t=\sum_{j=0}^{\infty} \psi_j\nu_{t-j}$$
In this form there are no lagged variables, so any interpretation involving notion of a lagged variable is not very convincing. However looking at the $MA(1)$ and the $AR(1)$ models separately:
$$Y_t=\nu_t+\theta_{1}\nu_{t-1}$$
$$Y_t=\rho Y_{t-1}+\nu_{t}=\nu_t+\rho \nu_{t-1}+ \rho^2 \nu_{t-2}+...$$
you can say that error terms in ARMA models explain "short-term" influence of the past, and lagged terms explain "long-term" influence. Having said that I do not think that this helps a lot and usually nobody bothers with the precise interpretation of ARMA coefficients. The goal usually is to get an adequate model and use it for forecasting.
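The $AR(1)$ expansion above can be verified numerically: feeding the same shocks through the recursion $Y_t=\rho Y_{t-1}+\nu_t$ reproduces the $\psi$-weighted sum of shocks exactly. A small sketch (the shock values are made up for illustration):

```python
# Sketch: the psi weights of an AR(1) model are simply rho**j, which we
# check by comparing a direct recursion against the MA-form shock sum.

def ar1_psi_weights(rho, n):
    return [rho**j for j in range(n)]

shocks = [0.5, -1.0, 0.3, 0.8]   # nu_1, ..., nu_4 (illustrative values)
rho = 0.6

y = 0.0
for nu in shocks:                 # Y_t = rho*Y_{t-1} + nu_t, starting at 0
    y = rho * y + nu

psi = ar1_psi_weights(rho, len(shocks))
y_ma = sum(p * nu for p, nu in zip(psi, reversed(shocks)))
print(abs(y - y_ma) < 1e-12)      # the two forms agree
```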
|
12,371
|
ARIMA model interpretation
|
I totally agree with the sentiment of the previous commentators. I would like to add that any ARIMA model can also be represented as a pure AR model. The weights in this pure AR form are referred to as the Pi weights, as compared to the Psi weights of the pure MA form. In this way you can view (interpret) an ARIMA model as an optimized weighted average of the past values. In other words, rather than assume a pre-specified length and values for a weighted average, an ARIMA model delivers both the length ($n$) of the weights and the actual weights ($c_1,c_2,...,c_n$).
$$Y(t) =c_1 Y(t−1) + c_2 Y(t-2) + c_3 Y(t-3)+ ... + c_n Y(t-n) + a(t)$$
In this way an ARIMA model can be explained as the answer to the question
How many historical values should I use to compute a weighted sum of the past?
Precisely what are those values?
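For a concrete case, the Pi weights of an MA(1) model can be obtained by repeated substitution: from $Y_t = a_t + \theta a_{t-1}$ one gets $Y_t = \theta Y_{t-1} - \theta^2 Y_{t-2} + \theta^3 Y_{t-3} - \dots + a_t$. A minimal sketch:

```python
# Sketch: the pure-AR (Pi-weight) form of an MA(1) model
# Y_t = a_t + theta*a_{t-1} has weights c_j = -(-theta)**j, which
# alternate in sign and decay geometrically -- the model's answer to
# "how many past values, and with what weights".

def ma1_pi_weights(theta, n):
    return [-(-theta) ** j for j in range(1, n + 1)]

print(ma1_pi_weights(0.5, 4))  # [0.5, -0.25, 0.125, -0.0625]
```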
|
12,372
|
Random forest is overfitting?
|
This is a common rookie error when using RF models (I'll put my hand up as a previous perpetrator). The forest that you build using the training set will in many cases fit the training data almost perfectly (as you are finding) when considered in totality. However, as the algorithm builds the forest it remembers the out-of-bag (OOB) prediction error, which is its best guess of the generalization error.
If you send the training data back into the predict method (as you are doing) you get this almost perfect prediction (which is wildly optimistic) instead of the correct OOB error. Don't do this. Instead, the trained Forest object should have remembered within it the OOB error. I am unfamiliar with the scikit-learn implementation, but looking at the documentation here it looks like you need to specify oob_score=True when constructing the classifier, and then the generalization score will be stored as oob_score_ in the fitted object. In the R package "randomForest", calling the predict method with no arguments on the returned object will return the OOB prediction on the training set. That lets you define the error using some other measure. Sending the training set back into the predict method will give you a different result, as that will use all the trees. I don't know if the scikit-learn implementation will do this or not.
It is a mistake to send the training data back into the predict method in order to test the accuracy. It's a very common mistake though, so don't worry.
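To make the OOB idea concrete, here is a stdlib-only sketch of the bookkeeping (not scikit-learn's or randomForest's implementation): each bootstrap sample leaves roughly a third of the rows out, and only those held-out rows vote on each "tree's" predictions. The "model" here is a trivial majority-class predictor, since only the OOB mechanics matter:

```python
# Sketch of out-of-bag (OOB) error estimation with a toy predictor.
# Each "tree" is fit on a bootstrap sample; rows not drawn into the
# sample (the OOB rows) receive that tree's prediction as a vote, and
# aggregating the votes gives an honest error estimate.

import random

random.seed(0)
y = [random.randint(0, 1) for _ in range(200)]   # toy 0/1 labels
n, n_trees = len(y), 50
votes = [[] for _ in range(n)]                   # OOB votes per row

for _ in range(n_trees):
    in_bag = {random.randrange(n) for _ in range(n)}   # bootstrap draw
    # toy "tree": predict the majority class of its in-bag rows
    majority = round(sum(y[i] for i in in_bag) / len(in_bag))
    for i in range(n):
        if i not in in_bag:                      # only OOB rows vote
            votes[i].append(majority)

oob_pred = [round(sum(v) / len(v)) for v in votes if v]
oob_true = [y[i] for i in range(n) if votes[i]]
oob_err = sum(p != t for p, t in zip(oob_pred, oob_true)) / len(oob_true)
print(0.0 <= oob_err <= 1.0)  # True
```

The point of the sketch: the OOB error is computed only from rows each tree never saw, which is why it does not suffer from the wildly optimistic bias of re-predicting the training set.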
|
12,373
|
Random forest is overfitting?
|
I think the answer is the max_features parameter (int, string or None, optional, default="auto"). Basically, for this problem you should set it to None so that each tree is built with all the inputs, since clearly you can't build a proper classifier using only a fraction of the cards (the default "auto" selects sqrt(n_features) inputs for each tree).
|
12,374
|
What is the name of this chart showing false and true positive rates and how is it generated?
|
The plot is a ROC curve and the (False Positive Rate, True Positive Rate) points are calculated for different thresholds. Assuming you have a uniform utility function, the optimal threshold value is the one for the point closest to (0, 1).
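A minimal sketch of that threshold rule, using made-up (threshold, FPR, TPR) triples for illustration:

```python
# Sketch: pick the threshold whose (FPR, TPR) point lies closest to the
# ideal corner (0, 1), as suggested above. The roc_points values below
# are illustrative, not from any real classifier.

def closest_to_corner(points):
    # points: list of (threshold, fpr, tpr) triples
    return min(points, key=lambda p: p[1] ** 2 + (p[2] - 1.0) ** 2)

roc_points = [(0.2, 0.60, 0.95), (0.5, 0.20, 0.85), (0.8, 0.05, 0.50)]
print(closest_to_corner(roc_points))  # (0.5, 0.2, 0.85)
```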
|
12,375
|
What is the name of this chart showing false and true positive rates and how is it generated?
|
To generate ROC curves (= Receiver Operating Characteristic curves):
Assume we have a probabilistic, binary classifier such as logistic regression. Before presenting the ROC curve, the concept of confusion matrix must be understood. When we make a binary prediction, there can be 4 types of errors:
We predict 0 while the class is actually 0: this is called a True Negative, i.e. we correctly predict that the class is negative (0). For example, an antivirus did not detect a harmless file as a virus.
We predict 0 while the class is actually 1: this is called a False Negative, i.e. we incorrectly predict that the class is negative (0). For example, an antivirus failed to detect a virus.
We predict 1 while the class is actually 0: this is called a False Positive, i.e. we incorrectly predict that the class is positive (1). For example, an antivirus considered a harmless file to be a virus.
We predict 1 while the class is actually 1: this is called a True Positive, i.e. we correctly predict that the class is positive (1). For example, an antivirus rightfully detected a virus.
To get the confusion matrix, we go over all the predictions made by the model, and count how many times each of those 4 types of errors occur:
In this example of a confusion matrix, among the 50 data points that are classified, 45 are correctly classified and the 5 are misclassified.
Since to compare two different models it is often more convenient to have a single metric rather than several ones, we compute two metrics from the confusion matrix, which we will later combine into one:
True positive rate (TPR), aka. sensitivity, hit rate, and recall, which is defined as $ \frac{TP}{TP+FN}$. Intuitively this metric corresponds to the proportion of positive data points that are correctly considered as positive, with respect to all positive data points. In other words, the higher the TPR, the fewer positive data points we will miss.
False positive rate (FPR), aka. fall-out, which is defined as $ \frac{FP}{FP+TN}$. Intuitively this metric corresponds to the proportion of negative data points that are mistakenly considered as positive, with respect to all negative data points. In other words, the higher the FPR, the more negative data points will be misclassified.
To combine the FPR and the TPR into one single metric, we first compute these two metrics with many different thresholds (for example $0.00, 0.01, 0.02, \dots, 1.00$) for the logistic regression, then plot them on a single graph, with the FPR values on the abscissa and the TPR values on the ordinate. The resulting curve is called the ROC curve:
In this figure, the blue area corresponds to the Area Under the curve of the Receiver Operating Characteristic (AUROC). The dashed line on the diagonal represents the ROC curve of a random predictor: it has an AUROC of 0.5. The random predictor is commonly used as a baseline to see whether the model is useful.
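The threshold sweep described above can be sketched in a few lines of plain Python (scikit-learn's roc_curve does this far more efficiently); the tiny label and score lists are illustrative:

```python
# Sketch: compute (FPR, TPR) pairs by sweeping a threshold over the
# classifier's predicted probabilities, as described above.

def roc_points(y_true, scores, thresholds):
    pts = []
    for thr in thresholds:
        preds = [1 if s >= thr else 0 for s in scores]
        tp = sum(p == 1 and t == 1 for p, t in zip(preds, y_true))
        fn = sum(p == 0 and t == 1 for p, t in zip(preds, y_true))
        fp = sum(p == 1 and t == 0 for p, t in zip(preds, y_true))
        tn = sum(p == 0 and t == 0 for p, t in zip(preds, y_true))
        pts.append((fp / (fp + tn), tp / (tp + fn)))
    return pts

y = [0, 0, 1, 1]                    # illustrative true labels
scores = [0.1, 0.4, 0.35, 0.8]      # illustrative predicted probabilities
print(roc_points(y, scores, [0.0, 0.35, 0.4, 0.8, 1.01]))
```

As the threshold rises, the (FPR, TPR) point walks from (1, 1) down to (0, 0); plotting those points traces out the ROC curve.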
If you want to get some first-hand experience:
Python: http://scikit-learn.org/stable/auto_examples/model_selection/plot_roc.html
MATLAB: http://www.mathworks.com/help/stats/perfcurve.html
|
12,376
|
What is the name of this chart showing false and true positive rates and how is it generated?
|
Morten's answer correctly addresses the question in the title -- the figure is, indeed, a ROC curve. It's produced by plotting a sequence of false positive rates (FPR) against their corresponding true positive rates.
However, I'd like to reply to the question that you ask in the body of your post.
If a method is applied to a dataset, it has a certain FP rate and a certain FN rate. Doesn't that mean that each method should have a single point rather than a curve? Of course there's multiple ways to configure a method, producing multiple different points, but it's not clear to me how there is this continuum of rates or how it's generated.
Many machine learning methods have adjustable parameters. For example, the output of a logistic regression is a predicted probability of class membership. A decision rule to classify all points with predicted probabilities above some threshold to one class, and the rest to another, can create a flexible range of classifiers, each with different TPR and FPR statistics. The same can be done in the case of random forest, where one is considering the trees' votes, or SVM, where you are considering the signed distance from the hyperplane.
In the case where you are doing cross-validation to estimate out-of-sample performance, typical practice is to use the predicted values (votes, probabilities, signed distances) to generate a sequence of TPR and FPR values. This usually looks like a step function, because typically just one point moves from TP to FN, or from FP to TN, at each predicted value (i.e. all the out-of-sample predicted values are unique). In this case, while there is a continuum of options for computing TPR and FPR, the TPR and FPR functions will not be continuous because there are only finitely many out-of-sample points, so the resulting curves will have a step-like appearance.
|
12,377
|
What is the name of this chart showing false and true positive rates and how is it generated?
|
From Wikipedia:
The ROC curve was first developed by electrical engineers and radar engineers during World War II for detecting enemy objects in battlefields and was soon introduced to psychology to account for perceptual detection of stimuli. ROC analysis since then has been used in medicine, radiology, biometrics, and other areas for many decades and is increasingly used in machine learning and data mining research.
The ROC is also known as a relative operating characteristic curve, because it is a comparison of two operating characteristics (TPR and FPR) as the criterion changes.
You can think of the two axes as costs that must be incurred in order for the binary classifier to operate. Ideally you want to incur as low a false positive rate as possible for as high a true positive rate as possible. That is you want the binary classifier to call as few false positives for as many true positives as possible.
To make it concrete imagine a classifier that can detect whether a certain disease is present by measuring the amount of some biomarker. Imagine that the biomarker had a value in the range 0 (absent) to 1 (saturated). What level maximises detection of the disease? It might be the case that above some level the biomarker will classify some people as having the disease yet they don't have the disease. These are false positives. Then of course there are those who will be classified as having the disease when they do indeed have the disease. These are the true positives.
The ROC assesses the proportion of true positives out of all actual positives against the proportion of false positives out of all actual negatives, taking into account all possible threshold values.
|
12,378
|
Continuous dependent variable with ordinal independent variable
|
@Scortchi's got you covered with this answer on Coding for an ordered covariate. I've repeated the recommendation on my answer to Effect of two demographic IVs on survey answers (Likert scale). Specifically, the recommendation is to use Gertheiss' (2013) ordPens package, and to refer to Gertheiss and Tutz (2009a) for theoretical background and a simulation study.
The specific function you probably want is ordSmooth*. This essentially smooths dummy coefficients across levels of ordinal variables to be less different from those for adjacent ranks, which reduces overfitting and improves predictions. It generally performs as well as or (sometimes much) better than maximum likelihood (i.e., ordinary least squares in this case) estimation of a regression model for continuous (or in their terms, metric) data when the data are actually ordinal. It appears compatible with all sorts of generalized linear models, and allows you to enter nominal and continuous predictors as separate matrices.
Several additional references from Gertheiss, Tutz, and colleagues are available and listed below. Some of these may contain alternatives – even Gertheiss and Tutz (2009a) discuss ridge reroughing as another alternative. I haven't dug through it all yet myself, but suffice it to say this solves @Erik's problem of too little literature on ordinal predictors!
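As a rough, hand-rolled illustration of the smoothing idea (this is not ordSmooth's actual algorithm, just a sketch of a squared-difference penalty on adjacent dummy coefficients, fit by plain gradient descent; all names and numbers are made up):

```python
import random

def fit_smoothed_dummies(levels, y, lam, steps=20000, lr=1e-3):
    """Minimize sum_i (y_i - b[level_i])^2 + lam * sum_j (b[j+1] - b[j])^2
    by plain gradient descent; 'levels' are 0-based ordinal categories."""
    k = max(levels) + 1
    b = [0.0] * k
    for _ in range(steps):
        grad = [0.0] * k
        for lev, yi in zip(levels, y):
            grad[lev] -= 2.0 * (yi - b[lev])
        for j in range(k - 1):
            d = b[j + 1] - b[j]
            grad[j + 1] += 2.0 * lam * d
            grad[j] -= 2.0 * lam * d
        b = [bj - lr * g for bj, g in zip(b, grad)]
    return b

random.seed(1)
levels = [i % 4 for i in range(80)]                       # 4 ordinal levels
y = [0.5 * lev + random.gauss(0, 0.3) for lev in levels]  # monotone true effect
b_ml = fit_smoothed_dummies(levels, y, lam=0.0)   # unpenalized: per-level means
b_sm = fit_smoothed_dummies(levels, y, lam=50.0)  # coefficients pulled together
print([round(v, 2) for v in b_ml])
print([round(v, 2) for v in b_sm])
```

The penalty pulls coefficients of adjacent ranks toward each other, which is the mechanism that reduces overfitting when the ordinal levels carry similar effects.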
References
- Gertheiss, J. (2013, June 14). ordPens: Selection and/or smoothing of ordinal predictors, version 0.2-1. Retrieved from http://cran.r-project.org/web/packages/ordPens/ordPens.pdf.
- Gertheiss, J., Hogger, S., Oberhauser, C., & Tutz, G. (2011). Selection of ordinally scaled independent variables with applications to international classification of functioning core sets. Journal of the Royal Statistical Society: Series C (Applied Statistics), 60(3), 377–395.
- Gertheiss, J., & Tutz, G. (2009a). Penalized regression with ordinal predictors. International Statistical Review, 77(3), 345–365. Retrieved from http://epub.ub.uni-muenchen.de/2100/1/tr015.pdf.
- Gertheiss, J., & Tutz, G. (2009b). Supervised feature selection in mass spectrometry-based proteomic profiling by blockwise boosting. Bioinformatics, 25(8), 1076–1077.
- Gertheiss, J., & Tutz, G. (2009c). Variable scaling and nearest neighbor methods. Journal of Chemometrics, 23(3), 149–151.
- Gertheiss, J. & Tutz, G. (2010). Sparse modeling of categorial explanatory variables. The Annals of Applied Statistics, 4, 2150–2180.
- Hofner, B., Hothorn, T., Kneib, T., & Schmid, M. (2011). A framework for unbiased model selection based on boosting. Journal of Computational and Graphical Statistics, 20(4), 956–971. Retrieved from http://epub.ub.uni-muenchen.de/11243/1/TR072.pdf.
- Oelker, M.-R., Gertheiss, J., & Tutz, G. (2012). Regularization and model selection with categorial predictors and effect modifiers in generalized linear models. Department of Statistics: Technical Reports, No. 122. Retrieved from http://epub.ub.uni-muenchen.de/13082/1/tr.gvcm.cat.pdf.
- Oelker, M.-R., & Tutz, G. (2013). A general family of penalties for combining differing types of penalties in generalized structured models. Department of Statistics: Technical Reports, No. 139. Retrieved from http://epub.ub.uni-muenchen.de/17664/1/tr.pirls.pdf.
- Petry, S., Flexeder, C., & Tutz, G. (2011). Pairwise fused lasso. Department of Statistics: Technical Reports, No. 102. Retrieved from http://epub.ub.uni-muenchen.de/12164/1/petry_etal_TR102_2011.pdf.
- Rufibach, K. (2010). An active set algorithm to estimate parameters in generalized linear models with ordered predictors. Computational Statistics & Data Analysis, 54(6), 1442–1456. Retrieved from http://arxiv.org/pdf/0902.0240.pdf?origin=publication_detail.
- Tutz, G. (2011, October). Regularization methods for categorical data. Munich: Ludwig-Maximilians-Universität. Retrieved from http://m.wu.ac.at/it/departments/statmath/resseminar/talktutz.pdf.
- Tutz, G., & Gertheiss, J. (2013). Rating scales as predictors—The old question of scale level and some answers. Psychometrika, 1-20.
|
12,379
|
Continuous dependent variable with ordinal independent variable
|
When there are multiple predictors, and the predictor of interest is ordinal, it is often difficult to decide how to code the variable. Coding it as categorical loses the order information, while coding it as numerical imposes linearity on the effects of the ordered categories that may be far from their true effects. For the former, isotonic regression has been proposed as a way to address non-monotonicity, but it is a data-driven model selection procedure, which like many other data-driven procedures, requires a careful evaluation of the final fitted model and the significance of its parameters. For the latter, splines may partially mitigate the rigid linearity assumption, but numbers still must be assigned to ordered categories, and results are sensitive to these choices. In our paper (Li and Shepherd, 2010, Introduction, paragraphs 3-5), we gave a more detailed explanation of these issues, which are applicable to all regression models with an ordinal predictor of interest.
Let $Y$ be an outcome variable, $X$ be the ordinal predictor of interest, and $\bf Z$ be the other covariates. We have proposed to fit two regression models, one for $Y$ on $\bf Z$ and the other $X$ on $\bf Z$, calculate the residuals for the two models, and evaluate the correlation between the residuals. In Li and Shepherd (2010), we studied this approach when $Y$ is ordinal and showed that it can be a very good robust approach as long as the effect of the $X$ categories is monotonic. We are currently evaluating the performance of this approach on other outcome types.
This approach requires an appropriate residual for the regression of ordinal $X$ on $\bf Z$. We proposed a new residual for ordinal outcomes in Li and Shepherd (2010) and used it to construct a test statistic. We further studied the properties and other uses of this residual in a separate paper (Li and Shepherd, 2012).
We have developed an R package, PResiduals, which is available from CRAN. The package contains functions for performing our approach for linear and ordinal outcome types. We are working to add other outcome types (e.g., count) and features (e.g., allowing interactions). The package also contains functions for calculating our residual, which is a probability-scale residual, for various regression models.
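A minimal sketch of the residual-correlation idea, using plain OLS residuals and simulated continuous data rather than the PResiduals package or the paper's ordinal residuals (all names and numbers here are illustrative):

```python
import random

def ols_resid(v, z):
    """Residuals from a simple linear regression of v on z (with intercept)."""
    n = len(v)
    mz, mv = sum(z) / n, sum(v) / n
    beta = sum((zi - mz) * (vi - mv) for zi, vi in zip(z, v)) / \
           sum((zi - mz) ** 2 for zi in z)
    return [vi - (mv + beta * (zi - mz)) for zi, vi in zip(z, v)]

def pearson(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b))
    den = (sum((ai - ma) ** 2 for ai in a)
           * sum((bi - mb) ** 2 for bi in b)) ** 0.5
    return num / den

random.seed(0)
n = 500
z = [random.gauss(0, 1) for _ in range(n)]                   # covariate
x = [zi + random.gauss(0, 1) for zi in z]                    # predictor, depends on z
y = [zi + xi + random.gauss(0, 1) for zi, xi in zip(z, x)]   # outcome
# Correlation of the two residual vectors: association of x and y beyond z.
r = pearson(ols_resid(y, z), ols_resid(x, z))
print(round(r, 3))
```

In this simulated setup the residual correlation is positive and well away from zero, reflecting the dependence of $Y$ on $X$ that remains after adjusting both for $\bf Z$.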
References
Li, C. & Shepherd, B. E. (2010). Test of association between two ordinal variables while adjusting for covariates. JASA, 105, 612–620.
Li, C. & Shepherd, B. E. (2012). A new residual for ordinal outcomes. Biometrika 99, 473–480.
|
12,380
|
Continuous dependent variable with ordinal independent variable
|
Generally there is a lot of literature on ordinal variables as the dependent variable and little on using them as predictors. In statistical practice they are usually either treated as continuous or as categorical. You can check whether a linear model with the predictor as a continuous variable looks like a good fit by checking the residuals.
They are sometimes also coded cumulatively. An example would be for an ordinal variable x1 with the levels 1, 2 and 3 to have a dummy binary variable d1 for x1>1 and a dummy binary variable d2 for x1>2. Then the coefficient for d1 is the effect of moving from level 1 to level 2, and the coefficient for d2 is the effect of moving from level 2 to level 3.
This often makes interpretation easier, but is equivalent to using the variable as categorical for practical purposes.
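A tiny sketch of this cumulative coding (the helper name `cumulative_dummies` is made up for illustration):

```python
def cumulative_dummies(x, thresholds=(1, 2)):
    """Cumulative coding of an ordinal x in {1, 2, 3}: d_k = 1 if x > k."""
    return [1 if x > t else 0 for t in thresholds]

for x in (1, 2, 3):
    print(x, cumulative_dummies(x))
```

With this coding the fitted value at level 1 is the intercept, at level 2 the intercept plus the d1 coefficient, and at level 3 the intercept plus both coefficients, so each coefficient is the step between adjacent levels.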
Gelman even suggests that one might use the ordinal predictor both as a categorical factor (for the main effects) and as a continuous variable (for interactions) to increase the flexibility of the models.
My personal strategy is usually to look at whether treating them as continuous makes sense and results in a reasonable model, and only use them as categorical if necessary.
|
12,381
|
What is the intuitive meaning of having a linear relationship between the logs of two variables?
|
You just need to take the exponential of both sides of the equation and you will get a power-law relation, which may make sense for some data.
$$\log(Y) = a\log(X) + b$$
$$\exp(\log(Y)) = \exp(a \log(X) + b)$$
$$Y = e^b\cdot X^a$$
And since $e^b$ is just a parameter that can take any positive value, this model is equivalent to:
$$Y=c \cdot X^a$$
It should be noted that the model expression should include the error term, and this change of variables has interesting effects on it:
$$\log(Y) = a \log(X) + b + \epsilon$$
$$Y = e^b\cdot X^a\cdot \exp(\epsilon)$$
That is, your model with additive errors satisfying the conditions for OLS (normally distributed errors with constant variance) is equivalent to a power-law model with multiplicative errors whose logarithm follows a normal distribution with constant variance.
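A quick simulated check of this equivalence (assuming lognormal multiplicative noise; all numbers are illustrative): OLS on the logs recovers the power-law parameters.

```python
import math
import random

random.seed(42)
a_true, c_true = 2.0, 3.0
x = [random.uniform(1, 10) for _ in range(400)]
# Multiplicative lognormal noise: log(Y) = log(c) + a*log(X) + eps, eps ~ N(0, 0.1)
y = [c_true * xi ** a_true * math.exp(random.gauss(0, 0.1)) for xi in x]

lx = [math.log(v) for v in x]
ly = [math.log(v) for v in y]
n = len(x)
mx, my = sum(lx) / n, sum(ly) / n
# Simple-regression slope and intercept on the log-log scale.
a_hat = (sum((u - mx) * (v - my) for u, v in zip(lx, ly))
         / sum((u - mx) ** 2 for u in lx))
c_hat = math.exp(my - a_hat * mx)
print(round(a_hat, 2), round(c_hat, 2))   # close to 2.0 and 3.0
```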
|
12,382
|
What is the intuitive meaning of having a linear relationship between the logs of two variables?
|
You can take your model $\log(Y)=a\log(X)+b$ and calculate the total differential, you will end up with something like :
$$\frac{1}YdY=a\frac{1}XdX$$
which yields
$$\frac{dY}{dX}\frac{X}{Y}=a$$
Hence one simple interpretation of the coefficient $a$ is the percent change in $Y$ for a one-percent change in $X$; that is, $a$ is the elasticity of $Y$ with respect to $X$.
This implies furthermore that $Y$ grows at a constant multiple ($a$) of the growth rate of $X$.
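A quick numerical check of this interpretation, using a made-up power law $Y = cX^a$ and a central finite difference:

```python
# Check numerically that (dY/dX) * (X/Y) = a for Y = c * X**a (made-up values).
a, c, x = 1.7, 2.5, 3.0
f = lambda t: c * t ** a
h = 1e-6
dydx = (f(x + h) - f(x - h)) / (2 * h)   # central finite difference
elasticity = dydx * x / f(x)
print(round(elasticity, 6))   # 1.7
```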
|
12,383
|
What is the intuitive meaning of having a linear relationship between the logs of two variables?
|
Intuitively, $\log$ gives us the order of magnitude of a variable, so we can view the relationship as saying that the orders of magnitude of the two variables are linearly related. For example, increasing the predictor by one order of magnitude may be associated with an increase of three orders of magnitude in the response.
When plotting using a log-log plot we hope to see a linear relationship.
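A one-line check of the order-of-magnitude reading (illustrative values of $a$ and $b$): multiplying $X$ by 10 multiplies $Y$ by $10^a$.

```python
import math

# If log(Y) = a*log(X) + b, scaling X by 10 scales Y by 10**a.
a, b = 3.0, 0.5
def y(x):
    return math.exp(a * math.log(x) + b)

ratio = y(10 * 7.0) / y(7.0)
print(ratio)   # 10**3 = 1000, up to floating point error
```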
Using an example from this question, we can check the linear model assumptions:
|
12,384
|
What is the intuitive meaning of having a linear relationship between the logs of two variables?
|
Reconciling the answer by @Rscrill with actual discrete data, consider
$$\log(Y_t) = a\log(X_t) + b,\;\;\; \log(Y_{t-1}) = a\log(X_{t-1}) + b$$
$$\implies \log(Y_t) - \log(Y_{t-1}) = a\left[\log(X_t)-\log(X_{t-1})\right]$$
But
$$\log(Y_t) - \log(Y_{t-1}) = \log\left(\frac{Y_t}{Y_{t-1}}\right) \equiv \log\left(\frac{Y_{t-1}+\Delta Y_t}{Y_{t-1}}\right) = \log\left(1+\frac{\Delta Y_t}{Y_{t-1}}\right)$$
$\frac{\Delta Y_t}{Y_{t-1}}$ is the percentage change of $Y$ between periods $t-1$ and $t$, or the growth rate of $Y_t$, say $g_{Y_{t}}$. When it is smaller than $0.1$, we have that an acceptable approximation is
$$\log\left(1+\frac{\Delta Y_t}{Y_{t-1}}\right) \approx \frac{\Delta Y_t}{Y_{t-1}}=g_{Y_{t}}$$
Therefore we get
$$g_{Y_{t}}\approx ag_{X_{t}}$$
which validates in empirical studies the theoretical treatment of @Rscrill.
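A quick numeric check of this approximation with made-up values (a 4% growth in $X$):

```python
import math

# Check g_Y ≈ a * g_X for a small growth rate, with log(Y) = a*log(X) + b.
a, b = 0.8, 1.0
x_prev, g_x = 50.0, 0.04                 # 4% growth in X
x_now = x_prev * (1 + g_x)
def y(x):
    return math.exp(a * math.log(x) + b)

g_y = y(x_now) / y(x_prev) - 1
print(round(g_y, 4), round(a * g_x, 4))   # 0.0319 vs 0.032
```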
|
12,385
|
What is the intuitive meaning of having a linear relationship between the logs of two variables?
|
A linear relationship between the logs is equivalent to a power law dependence:
$$Y \sim X^\alpha$$
In physics such behavior means that the system is scale free or scale invariant. As an example, if $X$ is distance or time, this means that the dependence on $X$ cannot be characterized by a characteristic length or time scale (as opposed to exponential decays). As a result, such a system exhibits long-range dependence of $Y$ on $X$.
|
12,386
|
Simulating draws from a Uniform Distribution using draws from a Normal Distribution
|
In the spirit of using simple algebraic calculations which are unrelated to computation of the Normal distribution, I would lean towards the following. They are ordered as I thought of them (and therefore needed to get more and more creative), but I have saved the best--and most surprising--to last.
Reverse the Box-Mueller technique: from each pair of normals $(X,Y)$, two independent uniforms can be constructed as $\text{atan2}(Y,X)$ (on the interval $[-\pi, \pi]$) and $\exp(-(X^2+Y^2)/2)$ (on the interval $[0,1]$).
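A quick Python sketch of this reversal (seed and sample size are arbitrary; the R equivalent appears in the code at the end of this answer):

```python
import math
import random

random.seed(1)

def two_uniforms(x, y):
    """Reverse Box-Mueller: one pair of standard normals -> two uniforms."""
    u1 = math.atan2(y, x) / (2 * math.pi) + 0.5  # angle, rescaled from [-pi, pi] to [0, 1]
    u2 = math.exp(-(x * x + y * y) / 2)          # radial part, in (0, 1]
    return u1, u2

draws = []
for _ in range(5000):
    draws.extend(two_uniforms(random.gauss(0, 1), random.gauss(0, 1)))

assert all(0.0 <= u <= 1.0 for u in draws)
print(round(sum(draws) / len(draws), 2))  # should hover near 0.5
```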
Take the normals in groups of two and sum their squares to obtain a sequence of $\chi^2_2$ variates $Y_1, Y_2, \ldots, Y_i, \ldots$. The expressions obtained from the pairs
$$X_i = \frac{Y_{2i}}{Y_{2i-1}+Y_{2i}}$$
will have a $\text{Beta}(1,1)$ distribution, which is uniform.
That this requires only basic, simple arithmetic should be clear.
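A Python sketch of this construction (seed and sample size arbitrary):

```python
import random

random.seed(2)

def uniform_from_four_normals():
    """Two chi-square(2) sums Y1, Y2; Y2 / (Y1 + Y2) is Beta(1,1), i.e. uniform."""
    y1 = random.gauss(0, 1) ** 2 + random.gauss(0, 1) ** 2
    y2 = random.gauss(0, 1) ** 2 + random.gauss(0, 1) ** 2
    return y2 / (y1 + y2)

draws = [uniform_from_four_normals() for _ in range(10000)]
assert all(0.0 < u < 1.0 for u in draws)
print(round(sum(draws) / len(draws), 2))  # should hover near 0.5
```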
Because the exact distribution of the Pearson correlation coefficient of a four-pair sample from a standard bivariate Normal distribution is uniformly distributed on $[-1,1]$, we may simply take the normals in groups of four pairs (that is, eight values in each set) and return the correlation coefficient of these pairs. (This involves simple arithmetic plus two square root operations.)
It has been known since ancient times that a cylindrical projection of the sphere (a surface in three-space) is equal-area. This implies that in the projection of a uniform distribution on the sphere, both the horizontal coordinate (corresponding to longitude) and the vertical coordinate (corresponding to latitude) will have uniform distributions. Because the trivariate standard Normal distribution is spherically symmetric, its projection onto the sphere is uniform. Obtaining the longitude is essentially the same calculation as the angle in the Box-Mueller method (q.v.), but the projected latitude is new. The projection onto the sphere merely normalizes a triple of coordinates $(x,y,z)$ and at that point $z$ is the projected latitude. Thus, take the Normal variates in groups of three, $X_{3i-2}, X_{3i-1}, X_{3i}$, and compute
$$\frac{X_{3i}}{\sqrt{X_{3i-2}^2 + X_{3i-1}^2 + X_{3i}^2}}$$
for $i=1, 2, 3, \ldots$.
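Sketched in Python (seed arbitrary), with the projected latitude checked to lie in $[-1,1]$:

```python
import math
import random

random.seed(3)

def projected_latitude():
    """Normalize a standard normal triple; its z-coordinate is Uniform(-1, 1)."""
    x, y, z = random.gauss(0, 1), random.gauss(0, 1), random.gauss(0, 1)
    return z / math.sqrt(x * x + y * y + z * z)

draws = [projected_latitude() for _ in range(10000)]
assert all(-1.0 <= u <= 1.0 for u in draws)
print(round(sum(draws) / len(draws), 2))  # should hover near 0
```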
Because most computing systems represent numbers in binary, uniform number generation usually begins by producing uniformly distributed integers between $0$ and $2^{32}-1$ (or some high power of $2$ related to computer word length) and rescaling them as needed. Such integers are represented internally as strings of $32$ binary digits. We can obtain independent random bits by comparing a Normal variable to its median. Thus, it suffices to break the Normal variables into groups of size equal to the desired number of bits, compare each one to its mean, and assemble the resulting sequences of true/false results into a binary number. Writing $k$ for the number of bits and $H$ for the sign (that is, $H(x)=1$ when $x\gt 0$ and $H(x)=0$ otherwise) we can express the resulting normalized uniform value in $[0, 1)$ with the formula
$$\sum_{j=0}^{k-1} H(X_{ki - j})2^{-j-1}.$$
The variates $X_n$ can be drawn from any continuous distribution whose median is $0$ (such as a standard Normal); they are processed in groups of $k$ with each group producing one such pseudo-uniform value.
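A Python sketch of the bit-assembly (seed and bit count arbitrary):

```python
import random

random.seed(4)
K = 32  # bits per pseudo-uniform value

def uniform_from_bits():
    """Compare K normals to their median (zero); assemble the bits into a binary fraction."""
    u = 0.0
    for j in range(K):
        if random.gauss(0, 1) > 0:
            u += 2.0 ** (-(j + 1))
    return u

draws = [uniform_from_bits() for _ in range(2000)]
assert all(0.0 <= u < 1.0 for u in draws)
print(round(sum(draws) / len(draws), 2))  # should hover near 0.5
```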
Rejection sampling is a standard, flexible, powerful way to draw random variates from arbitrary distributions. Suppose the target distribution has PDF $f$. A value $Y$ is drawn according to another distribution with PDF $g$. In the rejection step, a uniform value $U$ lying between $0$ and $g(Y)$ is drawn independently of $Y$ and compared to $f(Y)$: if it is smaller, $Y$ is retained but otherwise the process is repeated. This approach seems circular, though: how do we generate a uniform variate with a process that needs a uniform variate to begin with?
The answer is that we don't actually need a uniform variate in order to carry out the rejection step. Instead (assuming $g(Y)\ne 0$) we can flip a fair coin to obtain a $0$ or $1$ randomly. This will be interpreted as the first bit in the binary representation of a uniform variate $U$ in the interval $[0,1)$. When the outcome is $0$, that means $0 \le U \lt 1/2$; otherwise, $1/2\le U \lt 1$. Half of the time, this is enough to decide the rejection step: if $f(Y)/g(Y) \ge 1/2$ but the coin is $0$, $Y$ should be accepted; if $f(Y)/g(Y) \lt 1/2$ but the coin is $1$, $Y$ should be rejected; otherwise, we need to flip the coin again in order to obtain the next bit of $U$. Because--no matter what value $f(Y)/g(Y)$ has--there is a $1/2$ chance of stopping after each flip, the expected number of flips is only $1/2(1)+1/4(2)+1/8(3)+\cdots+2^{-n}(n)+\cdots=2$.
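The lazy bit-by-bit comparison can be sketched in Python as follows (seed arbitrary; this mirrors the `accept` helper in the R code at the end of this answer). It decides "$U < p$" by revealing one binary digit of $U$ per fair coin flip, where the coin is the sign of a normal variate:

```python
import random

random.seed(5)

def accept_with_chance(p):
    """Accept with probability p in (0, 1), revealing bits of a uniform U lazily."""
    while True:
        coin = 1 if random.gauss(0, 1) >= 0 else 0  # fair bit: sign of a normal
        p_bit = 1 if p >= 0.5 else 0
        if coin != p_bit:
            # coin=0, p_bit=1 means U < p (accept); coin=1, p_bit=0 means U >= p.
            return coin == 0
        p = (2 * p) % 1  # leading bits agree: move on to the next bit

hits = sum(accept_with_chance(0.7) for _ in range(10000))
print(round(hits / 10000, 2))  # should hover near 0.7
```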
Rejection sampling can be worthwhile (and efficient) provided the expected number of rejections is small. We can accomplish this by fitting the largest possible rectangle (representing a uniform distribution) beneath a Normal PDF.
Using Calculus to optimize the rectangle's area, you will find that its endpoints should lie at $\pm 1$, where its height equals $\exp(-1/2)/\sqrt{2\pi}\approx 0.241971$, making its area a little greater than $0.48$. By using this standard Normal density as $g$ and rejecting all values outside the interval $[-1,1]$ automatically, and otherwise applying the rejection procedure, we will obtain uniform variates in $[-1,1]$ efficiently:
In a fraction $2\Phi(-1) \approx 0.317$ of the time, the Normal variate lies beyond $[-1,1]$ and is immediately rejected. ($\Phi$ is the standard Normal CDF.)
In the remaining fraction of the time, the binary rejection procedure has to be followed, requiring two more Normal variates on average.
The overall procedure requires an average of $1/(2\exp(-1/2)/\sqrt{2\pi}) \approx 2.07$ steps.
The expected number of Normal variates needed to produce each uniform result works out to
$$\sqrt{2 e \pi}\left(1-2\Phi(-1)\right) \approx 2.82137.$$
Although that is pretty efficient, note that (1) computation of the Normal PDF requires computing an exponential and (2) the value $\Phi(-1)$ must be precomputed once and for all. It's still a little less calculation than the Box-Mueller method (q.v.).
The order statistics of a uniform distribution have exponential gaps. Since the sum of squares of two Normals (of zero mean) is exponential, we may generate a realization of $n$ independent uniforms by summing the squares of pairs of such Normals, computing the cumulative sum of these, rescaling the results to fall in the interval $[0,1]$, and dropping the last one (which will always equal $1$). This is a pleasing approach because it requires only squaring, summing, and (at the end) a single division.
The $n$ values will automatically be in ascending order. If such a sorting is desired, this method is computationally superior to all the others insofar as it avoids the $O(n\log(n))$ cost of a sort. If a sequence of independent uniforms is needed, however, then sorting these $n$ values randomly will do the trick. Since (as seen in the Box-Mueller method, q.v.) the ratios of each pair of Normals are independent of the sum of squares of each pair, we already have the means to obtain that random permutation: order the cumulative sums by the corresponding ratios. (If $n$ is very large, this process could be carried out in smaller groups of $k$ with little loss of efficiency, since each group needs only $2(k+1)$ Normals to create $k$ uniform values. For fixed $k$, the asymptotic computational cost is therefore $O(n\log(k))$ = $O(n)$, needing $2n(1+1/k)$ Normal variates to generate $n$ uniform values.)
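A Python sketch of the construction, before any shuffling (seed and $n$ arbitrary):

```python
import random

random.seed(6)
n = 9  # produce n ordered uniforms from 2 * (n + 1) normals

# Each sum of two squared standard normals is exponential; the normalized
# cumulative sums are the order statistics of n independent uniforms.
gaps = [random.gauss(0, 1) ** 2 + random.gauss(0, 1) ** 2 for _ in range(n + 1)]
cum = []
total = 0.0
for g in gaps:
    total += g
    cum.append(total)
ordered = [c / total for c in cum[:-1]]  # drop the final value, which is always 1

assert ordered == sorted(ordered)
assert all(0.0 < u < 1.0 for u in ordered)
```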
To a superb approximation, any Normal variate with a large standard deviation looks uniform over ranges of much smaller values. Upon rolling this distribution into the range $[0,1]$ (by taking only the fractional parts of the values), we thereby obtain a distribution that is uniform for all practical purposes. This is extremely efficient, requiring one of the simplest arithmetic operations of all: simply round each Normal variate down to the nearest integer and retain the excess. The simplicity of this approach becomes compelling when we examine a practical R implementation:
rnorm(n, sd=10) %% 1
reliably produces n uniform values in the range $[0,1]$ at the cost of just n Normal variates and almost no computation.
(Even when the standard deviation is $1$, the PDF of this approximation varies from a uniform PDF, as shown in the following figure, by less than one part in $10^8$! To detect it reliably would require a sample of $10^{16}$ values--that's already beyond the capability of any standard test of randomness. With a larger standard deviation the non-uniformity is so small it cannot even be calculated. For instance, with an SD of $10$ as shown in the code, the maximum deviation from a uniform PDF is only $10^{-857}$.)
In every case Normal variables "with known parameters" can easily be recentered and rescaled into the Standard Normals assumed above. Afterwards, the resulting uniformly distributed values can be recentered and rescaled to cover any desired interval. These require only basic arithmetic operations.
The ease of these constructions is evidenced by the following R code, which uses only one or two lines for most of them. Their correctness is witnessed by the resulting near-uniform histograms based on $100,000$ independent values in each case (requiring around 12 seconds for all seven simulations). For reference--in case you are worried about the amount of variation appearing in any of these plots--a histogram of uniform values simulated with R's uniform random number generator is included at the end.
All these simulations were tested for uniformity using a $\chi^2$ test based on $1000$ bins; none could be considered significantly non-uniform (the lowest p-value was $3\%$--for the results generated by R's actual uniform number generator!).
set.seed(17)
n <- 1e5
y <- matrix(rnorm(floor(n/2)*2), nrow=2)
x <- c(atan2(y[2,], y[1,])/(2*pi) + 1/2, exp(-(y[1,]^2+y[2,]^2)/2))
hist(x, main="Box-Mueller")
y <- apply(array(rnorm(4*n), c(2,2,n)), c(3,2), function(z) sum(z^2))
x <- y[,2] / (y[,1]+y[,2])
hist(x, main="Beta")
x <- apply(array(rnorm(8*n), c(4,2,n)), 3, function(y) cor(y[,1], y[,2]))
hist(x, main="Correlation")
n.bits <- 32; x <- (2^-(1:n.bits)) %*% matrix(rnorm(n*n.bits) > 0, n.bits)
hist(x, main="Binary")
y <- matrix(rnorm(n*3), 3)
x <- y[1, ] / sqrt(apply(y, 2, function(x) sum(x^2)))
hist(x, main="Equal area")
accept <- function(p) { # Using random normals, return TRUE with chance `p`
p.bit <- x <- 0
while(p.bit == x) {
p.bit <- p >= 1/2
x <- rnorm(1) >= 0
p <- (2*p) %% 1
}
return(x == 0)
}
y <- rnorm(ceiling(n * sqrt(exp(1)*pi/2))) # This aims to produce `n` uniforms
y <- y[abs(y) < 1]
x <- y[sapply(y, function(x) accept(exp((x^2-1)/2)))]
hist(x, main="Rejection")
y <- matrix(rnorm(2*(n+1))^2, 2)
x <- cumsum(y)[seq(2, 2*(n+1), 2)]
x <- x[-(n+1)] / x[n+1]
x <- x[order(y[2,-(n+1)]/y[1,-(n+1)])]
hist(x, main="Ordered")
x <- rnorm(n) %% 1 # (Use SD of 5 or greater in practice)
hist(x, main="Modular")
x <- runif(n) # Reference distribution
hist(x, main="Uniform")
|
Simulating draws from a Uniform Distribution using draws from a Normal Distribution
|
In the spirit of using simple algebraic calculations which are unrelated to computation of the Normal distribution, I would lean towards the following. They are ordered as I thought of them (and ther
|
Simulating draws from a Uniform Distribution using draws from a Normal Distribution
In the spirit of using simple algebraic calculations which are unrelated to computation of the Normal distribution, I would lean towards the following. They are ordered as I thought of them (and therefore needed to get more and more creative), but I have saved the best--and most surprising--to last.
Reverse the Box-Mueller technique: from each pair of normals $(X,Y)$, two independent uniforms can be constructed as $\text{atan2}(Y,X)$ (on the interval $[-\pi, \pi]$) and $\exp(-(X^2+Y^2)/2)$ (on the interval $[0,1]$).
Take the normals in groups of two and sum their squares to obtain a sequence of $\chi^2_2$ variates $Y_1, Y_2, \ldots, Y_i, \ldots$. The expressions obtained from the pairs
$$X_i = \frac{Y_{2i}}{Y_{2i-1}+Y_{2i}}$$
will have a $\text{Beta}(1,1)$ distribution, which is uniform.
That this requires only basic, simple arithmetic should be clear.
Because the exact distribution of the Pearson correlation coefficient of a four-pair sample from a standard bivariate Normal distribution is uniformly distributed on $[-1,1]$, we may simply take the normals in groups of four pairs (that is, eight values in each set) and return the correlation coefficient of these pairs. (This involves simple arithmetic plus two square root operations.)
It has been known since ancient times that a cylindrical projection of the sphere (a surface in three-space) is equal-area. This implies that in the projection of a uniform distribution on the sphere, both the horizontal coordinate (corresponding to longitude) and the vertical coordinate (corresponding to latitude) will have uniform distributions. Because the trivariate standard Normal distribution is spherically symmetric, its projection onto the sphere is uniform. Obtaining the longitude is essentially the same calculation as the angle in the Box-Mueller method (q.v.), but the projected latitude is new. The projection onto the sphere merely normalizes a triple of coordinates $(x,y,z)$ and at that point $z$ is the projected latitude. Thus, take the Normal variates in groups of three, $X_{3i-2}, X_{3i-1}, X_{3i}$, and compute
$$\frac{X_{3i}}{\sqrt{X_{3i-2}^2 + X_{3i-1}^2 + X_{3i}^2}}$$
for $i=1, 2, 3, \ldots$.
Because most computing systems represent numbers in binary, uniform number generation usually begins by producing uniformly distributed integers between $0$ and $2^{32}-1$ (or some high power of $2$ related to computer word length) and rescaling them as needed. Such integers are represented internally as strings of $32$ binary digits. We can obtain independent random bits by comparing a Normal variable to its median. Thus, it suffices to break the Normal variables into groups of size equal to the desired number of bits, compare each one to its mean, and assemble the resulting sequences of true/false results into a binary number. Writing $k$ for the number of bits and $H$ for the sign (that is, $H(x)=1$ when $x\gt 0$ and $H(x)=0$ otherwise) we can express the resulting normalized uniform value in $[0, 1)$ with the formula
$$\sum_{j=0}^{k-1} H(X_{ki - j})2^{-j-1}.$$
The variates $X_n$ can be drawn from any continuous distribution whose median is $0$ (such as a standard Normal); they are processed in groups of $k$ with each group producing one such pseudo-uniform value.
Rejection sampling is a standard, flexible, powerful way to draw random variates from arbitrary distributions. Suppose the target distribution has PDF $f$. A value $Y$ is drawn according to another distribution with PDF $g$. In the rejection step, a uniform value $U$ lying between $0$ and $g(Y)$ is drawn independently of $Y$ and compared to $f(Y)$: if it is smaller, $Y$ is retained but otherwise the process is repeated. This approach seems circular, though: how do we generate a uniform variate with a process that needs a uniform variate to begin with?
The answer is that we don't actually need a uniform variate in order to carry out the rejection step. Instead (assuming $g(Y)\ne 0$) we can flip a fair coin to obtain a $0$ or $1$ randomly. This will be interpreted as the first bit in the binary representation of a uniform variate $U$ in the interval $[0,1)$. When the outcome is $0$, that means $0 \le U \lt 1/2$; otherwise, $1/2\le U \lt 1$. Half of the time, this is enough to decide the rejection step: if $f(Y)/g(Y) \ge 1/2$ but the coin is $0$, $Y$ should be accepted; if $f(Y)/g(Y) \lt 1/2$ but the coin is $1$, $Y$ should be rejected; otherwise, we need to flip the coin again in order to obtain the next bit of $U$. Because--no matter what value $f(Y)/g(Y)$ has--there is a $1/2$ chance of stopping after each flip, the expected number of flips is only $1/2(1)+1/4(2)+1/8(3)+\cdots+2^{-n}(n)+\cdots=2$.
Rejection sampling can be worthwhile (and efficient) provided the expected number of rejections is small. We can accomplish this by fitting the largest possible rectangle (representing a uniform distribution) beneath a Normal PDF.
Using Calculus to optimize the rectangle's area, you will find that its endpoints should lie at $\pm 1$, where its height equals $\exp(-1/2)/\sqrt{2\pi}\approx 0.241971$, making its area a little greater than $0.48$. By using this standard Normal density as $g$ and rejecting all values outside the interval $[-1,1]$ automatically, and otherwise applying the rejection procedure, we will obtain uniform variates in $[-1,1]$ efficiently:
In a fraction $2\Phi(-1) \approx 0.317$ of the time, the Normal variate lies beyond $[-1,1]$ and is immediately rejected. ($\Phi$ is the standard Normal CDF.)
In the remaining fraction of the time, the binary rejection procedure has to be followed, requiring two more Normal variates on average.
The overall procedure requires an average of $1/(2\exp(-1/2)/\sqrt{2\pi}) \approx 2.07$ steps.
The expected number of Normal variates needed to produce each uniform result works out to
$$\sqrt{2 e \pi}\left(1-2\Phi(-1)\right) \approx 2.82137.$$
Although that is pretty efficient, note that (1) computation of the Normal PDF requires computing an exponential and (2) the value $\Phi(-1)$ must be precomputed once and for all. It's still a little less calculation than the Box-Mueller method (q.v.).
The order statistics of a uniform distribution have exponential gaps. Since the sum of squares of two Normals (of zero mean) is exponential, we may generate a realization of $n$ independent uniforms by summing the squares of pairs of such Normals, computing the cumulative sum of these, rescaling the results to fall in the interval $[0,1]$, and dropping the last one (which will always equal $1$). This is a pleasing approach because it requires only squaring, summing, and (at the end) a single division.
The $n$ values will automatically be in ascending order. If such a sorting is desired, this method is computationally superior to all the others insofar as it avoids the $O(n\log(n))$ cost of a sort. If a sequence of independent uniforms is needed, however, then sorting these $n$ values randomly will do the trick. Since (as seen in the Box-Mueller method, q.v.) the ratios of each pair of Normals are independent of the sum of squares of each pair, we already have the means to obtain that random permutation: order the cumulative sums by the corresponding ratios. (If $n$ is very large, this process could be carried out in smaller groups of $k$ with little loss of efficiency, since each group needs only $2(k+1)$ Normals to create $k$ uniform values. For fixed $k$, the asymptotic computational cost is therefore $O(n\log(k))$ = $O(n)$, needing $2n(1+1/k)$ Normal variates to generate $n$ uniform values.)
To a superb approximation, any Normal variate with a large standard deviation looks uniform over ranges of much smaller values. Upon rolling this distribution into the range $[0,1]$ (by taking only the fractional parts of the values), we thereby obtain a distribution that is uniform for all practical purposes. This is extremely efficient, requiring one of the simplest arithmetic operations of all: simply round each Normal variate down to the nearest integer and retain the excess. The simplicity of this approach becomes compelling when we examine a practical R implementation:
rnorm(n, sd=10) %% 1
reliably produces n uniform values in the range $[0,1]$ at the cost of just n Normal variates and almost no computation.
(Even when the standard deviation is $1$, the PDF of this approximation varies from a uniform PDF, as shown in the following figure, by less than one part in $10^8$! To detect it reliably would require a sample of $10^{16}$ values--that's already beyond the capability of any standard test of randomness. With a larger standard deviation the non-uniformity is so small it cannot even be calculated. For instance, with an SD of $10$ as shown in the code, the maximum deviation from a uniform PDF is only $10^{-857}$.)
In every case Normal variables "with known parameters" can easily be recentered and rescaled into the Standard Normals assumed above. Afterwards, the resulting uniformly distributed values can be recentered and rescaled to cover any desired interval. These require only basic arithmetic operations.
The ease of these constructions is evidenced by the following R code, which uses only one or two lines for most of them. Their correctness is witnessed by the resulting near-uniform histograms based on $100,000$ independent values in each case (requiring around 12 seconds for all seven simulations). For reference--in case you are worried about the amount of variation appearing in any of these plots--a histogram of uniform values simulated with R's uniform random number generator is included at the end.
All these simulations were tested for uniformity using a $\chi^2$ test based on $1000$ bins; none could be considered significantly non-uniform (the lowest p-value was $3\%$--for the results generated by R's actual uniform number generator!).
set.seed(17)
n <- 1e5
y <- matrix(rnorm(floor(n/2)*2), nrow=2)
x <- c(atan2(y[2,], y[1,])/(2*pi) + 1/2, exp(-(y[1,]^2+y[2,]^2)/2))
hist(x, main="Box-Mueller")
y <- apply(array(rnorm(4*n), c(2,2,n)), c(3,2), function(z) sum(z^2))
x <- y[,2] / (y[,1]+y[,2])
hist(x, main="Beta")
x <- apply(array(rnorm(8*n), c(4,2,n)), 3, function(y) cor(y[,1], y[,2]))
hist(x, main="Correlation")
n.bits <- 32; x <- (2^-(1:n.bits)) %*% matrix(rnorm(n*n.bits) > 0, n.bits)
hist(x, main="Binary")
y <- matrix(rnorm(n*3), 3)
x <- y[1, ] / sqrt(apply(y, 2, function(x) sum(x^2)))
hist(x, main="Equal area")
accept <- function(p) { # Using random normals, return TRUE with chance `p`
p.bit <- x <- 0
while(p.bit == x) {
p.bit <- p >= 1/2
x <- rnorm(1) >= 0
p <- (2*p) %% 1
}
return(x == 0)
}
y <- rnorm(ceiling(n * sqrt(exp(1)*pi/2))) # This aims to produce `n` uniforms
y <- y[abs(y) < 1]
x <- y[sapply(y, function(x) accept(exp((x^2-1)/2)))]
hist(x, main="Rejection")
y <- matrix(rnorm(2*(n+1))^2, 2)
x <- cumsum(y)[seq(2, 2*(n+1), 2)]
x <- x[-(n+1)] / x[n+1]
x <- x[order(y[2,-(n+1)]/y[1,-(n+1)])]
hist(x, main="Ordered")
x <- rnorm(n) %% 1 # (Use SD of 5 or greater in practice)
hist(x, main="Modular")
x <- runif(n) # Reference distribution
hist(x, main="Uniform")
|
Simulating draws from a Uniform Distribution using draws from a Normal Distribution
In the spirit of using simple algebraic calculations which are unrelated to computation of the Normal distribution, I would lean towards the following. They are ordered as I thought of them (and ther
|
12,387
|
Simulating draws from a Uniform Distribution using draws from a Normal Distribution
|
You can use a trick very similar to what you mention. Let's say that $X \sim N(\mu, \sigma^2)$ is a normal random variable with known parameters. Then we know its distribution function, $\Phi_{\mu,\sigma^2}$, and $\Phi_{\mu,\sigma^2}(X)$ will be uniformly distributed on $(0,1)$. To prove this, note that for $d \in (0,1)$ we see that
$P(\Phi_{\mu,\sigma^2}(X) \leq d) = P(X \leq \Phi_{\mu,\sigma^2}^{-1}(d)) = d$.
The above probability is clearly zero for non-positive $d$ and $1$ for $d \geq 1$. This is enough to show that $\Phi_{\mu,\sigma^2}(X)$ has a uniform distribution on $(0,1)$, as we have shown that the corresponding measures are equal for a generator of the Borel $\sigma$-algebra on $\mathbb{R}$. Thus, you can just transform the normally distributed data by the distribution function and you'll get uniformly distributed data.
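As a minimal Python sketch of this probability integral transform (parameters made up; $\Phi_{\mu,\sigma^2}$ is written with the standard library's error function to avoid external dependencies):

```python
import math
import random

random.seed(7)
mu, sigma = 3.0, 2.0  # assumed known parameters of X

def normal_cdf(x, mu, sigma):
    """Phi_{mu, sigma^2}(x), expressed via the error function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

# Transforming X ~ N(mu, sigma^2) by its own CDF yields Uniform(0, 1) draws.
draws = [normal_cdf(random.gauss(mu, sigma), mu, sigma) for _ in range(10000)]
assert all(0.0 <= u <= 1.0 for u in draws)
print(round(sum(draws) / len(draws), 2))  # should hover near 0.5
```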
|
Simulating draws from a Uniform Distribution using draws from a Normal Distribution
|
You can use a trick very similar to what you mention. Let's say that $X \sim N(\mu, \sigma^2)$ is a normal random variable with known parameters. Then we know its distribution function, $\Phi_{\mu,\si
|
Simulating draws from a Uniform Distribution using draws from a Normal Distribution
You can use a trick very similar to what you mention. Let's say that $X \sim N(\mu, \sigma^2)$ is a normal random variable with known parameters. Then we know its distribution function, $\Phi_{\mu,\sigma^2}$, and $\Phi_{\mu,\sigma^2}(X)$ will be uniformly distributed on $(0,1)$. To prove this, note that for $d \in (0,1)$ we see that
$P(\Phi_{\mu,\sigma^2}(X) \leq d) = P(X \leq \Phi_{\mu,\sigma^2}^{-1}(d)) = d$.
The above probability is clearly zero for non-positive $d$ and $1$ for $d \geq 1$. This is enough to show that $\Phi_{\mu,\sigma^2}(X)$ has a uniform distribution on $(0,1)$, as we have shown that the corresponding measures are equal for a generator of the Borel $\sigma$-algebra on $\mathbb{R}$. Thus, you can just transform the normally distributed data by the distribution function and you'll get uniformly distributed data.
|
Simulating draws from a Uniform Distribution using draws from a Normal Distribution
You can use a trick very similar to what you mention. Let's say that $X \sim N(\mu, \sigma^2)$ is a normal random variable with known parameters. Then we know its distribution function, $\Phi_{\mu,\si
|
12,388
|
Simulating draws from a Uniform Distribution using draws from a Normal Distribution
|
Adding on to 5:
The trick of transforming random variables to bits works for any pair of absolutely continuous random variables X and Y, even if X and Y are dependent on each other or the two variables are not identically distributed, as long as the two variables are statistically indifferent (Montes Gutiérrez 2014, De Schuymer et al. 2003); equivalently, their probabilistic index (Acion et al. 2006) is 1/2 or their net benefit is 0. This means in our case that P(X < Y) = P(X > Y). In particular, two independent normal random variables X and Y are statistically indifferent if and only if they have the same mean (they remain so even if their standard deviations differ) (Montes Gutiérrez 2014).
In this case, to generate unbiased random bits this way, sample an independent pair of statistically indifferent, absolutely continuous random variates, X and Y, and compare them. If X is less than Y, output 1; otherwise, output 0 (Morina et al. 2019). Because they are statistically indifferent, this method will output 1 or 0 with equal probability (see the appendix in my Note on Randomness Extraction).
(Note that statistical indifference also makes sense for discrete and singular random variables, but in this case an additional rejection step will be necessary if X and Y turn out to be equal [Morina et al. 2019].)
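A Python sketch of the comparison trick (means and SDs made up; note the two normals share a mean but not an SD, so they are statistically indifferent and the resulting bits are unbiased):

```python
import random

random.seed(8)

# X ~ N(5, 1) and Y ~ N(5, 9): same mean, different SDs, so P(X < Y) = 1/2.
bits = [1 if random.gauss(5, 1) < random.gauss(5, 3) else 0 for _ in range(10000)]
print(round(sum(bits) / len(bits), 2))  # should hover near 0.5
```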
REFERENCES:
Montes Gutiérrez, I., "Comparison of alternatives under uncertainty and imprecision", doctoral thesis, Universidad de Oviedo, 2014.
De Schuymer, Bart, Hans De Meyer, and Bernard De Baets. "A fuzzy approach to stochastic dominance of random variables", in International Fuzzy Systems Association World Congress 2003.
Morina, G., Łatuszyński, K., et al., "From the Bernoulli Factory to a Dice Enterprise via Perfect Sampling of Markov Chains", arXiv:1912.09229 [math.PR], 2019.
Acion, Laura, John J. Peterson, Scott Temple, and Stephan Arndt. "Probabilistic index: an intuitive non‐parametric approach to measuring the size of treatment effects." Statistics in medicine 25, no. 4 (2006): 591-602.
|
Simulating draws from a Uniform Distribution using draws from a Normal Distribution
|
Adding on to 5:
The trick of transforming random variables to bits works for any independent pair of absolutely continuous random variables X and Y, even if X and Y are dependent on each other or the
|
Simulating draws from a Uniform Distribution using draws from a Normal Distribution
Adding on to 5:
The trick of transforming random variables to bits works for any pair of absolutely continuous random variables X and Y, even if X and Y are dependent on each other or the two variables are not identically distributed, as long as the two variables are statistically indifferent (Montes Gutiérrez 2014, De Schuymer et al. 2003); equivalently, their probabilistic index (Acion et al. 2006) is 1/2 or their net benefit is 0. This means in our case that P(X < Y) = P(X > Y). In particular, two independent normal random variables X and Y are statistically indifferent if and only if they have the same mean (they remain so even if their standard deviations differ) (Montes Gutiérrez 2014).
In this case, to generate unbiased random bits this way, sample an independent pair of statistically indifferent, absolutely continuous random variates, X and Y, and compare them. If X is less than Y, output 1; otherwise, output 0 (Morina et al. 2019). Because they are statistically indifferent, this method will output 1 or 0 with equal probability (see the appendix in my Note on Randomness Extraction).
(Note that statistical indifference also makes sense for discrete and singular random variables, but in this case an additional rejection step will be necessary if X and Y turn out to be equal [Morina et al. 2019].)
REFERENCES:
Montes Gutiérrez, I., "Comparison of alternatives under uncertainty and imprecision", doctoral thesis, Universidad de Oviedo, 2014.
De Schuymer, Bart, Hans De Meyer, and Bernard De Baets. "A fuzzy approach to stochastic dominance of random variables", in International Fuzzy Systems Association World Congress 2003.
Morina, G., Łatuszyński, K., et al., "From the Bernoulli Factory to a Dice Enterprise via Perfect Sampling of Markov Chains", arXiv:1912.09229 [math.PR], 2019.
Acion, Laura, John J. Peterson, Scott Temple, and Stephan Arndt. "Probabilistic index: an intuitive non‐parametric approach to measuring the size of treatment effects." Statistics in medicine 25, no. 4 (2006): 591-602.
|
Simulating draws from a Uniform Distribution using draws from a Normal Distribution
Adding on to 5:
The trick of transforming random variables to bits works for any independent pair of absolutely continuous random variables X and Y, even if X and Y are dependent on each other or the
|
12,389
|
What is effect size... and why is it even useful?
|
That is one measure of effect size, but there are many others. It is certainly not the $t$ test statistic. Your measure of effect size is often called Cohen's $d$ (strictly speaking that is correct only if the SD is estimated via MLE—i.e., without Bessel's correction); more generically, it is called the 'standardized mean difference'. Perhaps this will make it clearer that $t\ne d$:
\begin{align}
d &= \frac{\bar x_2 - \bar x_1}{SD} \\[10pt]
&\ne \\[10pt]
t &= \frac{\bar x_2 - \bar x_1}{SE} \\[10pt]
t &= \frac{\bar x_2 - \bar x_1}{\frac{SD}{\sqrt N}} \\
\end{align}
That is, the "$/\sqrt N$" is missing from the formula for the standardized mean difference.
More generally, taking the sample size out of the value provides real information. Assuming the true effect is not exactly $0$ to infinite decimal places, you can achieve any level of significance you might like with sufficient $N$. The $p$-value provides information about how confident we can be in rejecting the null hypothesis, but does so by conflating how big the effect is with how much data you have. It is certainly nice to know if we should reject the null hypothesis, but it would also be nice to know if the effect of your educational intervention produces large gains for schoolchildren or is trivial and was only significant due to large $N$.
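A toy numerical illustration (all numbers made up, on an IQ-like scale) of how $d$ ignores the sample size while $t$ grows with $\sqrt N$:

```python
import math

# Hypothetical numbers: group means, pooled SD, and sample size.
xbar1, xbar2, sd, n = 100.0, 103.0, 15.0, 400

d = (xbar2 - xbar1) / sd                    # standardized mean difference
t = (xbar2 - xbar1) / (sd / math.sqrt(n))   # t statistic: d scaled up by sqrt(n)
print(d, t)  # 0.2 4.0 -- a "small" effect, yet highly significant

# Quadruple the sample: the effect size is unchanged, but t doubles.
t_big = (xbar2 - xbar1) / (sd / math.sqrt(4 * n))
print(t_big)  # 8.0
```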
|
12,390
|
What is effect size... and why is it even useful?
|
I expect someone with a background in a more relevant area (psychology or education, say) will chime in with a better answer, but I'll give it a shot.
"Effect size" is a term with more than one meaning -- which, many years ago, led to some confused conversations until I eventually came to that realization. Here we're clearly dealing with the scaled-for-standard-deviation version ("how many standard deviations did that change by?")
Part of the reason for looking at that sort of "effect size" in the subject areas they're common in is that they frequently have variables whose particular values are not inherently meaningful but are constructed to attempt to measure some underlying thing that's hard to get at.
For example, imagine you are trying to measure job satisfaction (perhaps for a model which relates it to some set of independent variables, perhaps including some treatment of interest, for example). You don't have any way to get at it directly, but you could (for example) try to construct some questionnaire to get at different aspects of it, perhaps using something like a Likert scale.
A different researcher may have a different approach to measuring job satisfaction, and so your two sets of "Satisfaction" measurements are not directly comparable -- but if they have the various forms of validity and so on that these things are checked for (so that they may reasonably be measuring satisfaction), then they may be hoped to have very similar effect sizes; at the least, the effect sizes are going to be more nearly comparable.
|
12,391
|
What is effect size... and why is it even useful?
|
The formula above is how you calculate Cohen's d for related samples (which is probably what you have?); if the samples are unrelated, you can use the pooled variance instead. There are different statistics that will tell you about effect size, but Cohen's d is a standardised measure, which in practice usually falls somewhere between 0 and 3. If you have lots of different variables, it can be nice to have a standardised measure when you're thinking about them all together. On the other hand, many people prefer understanding the effect size in terms of the units being measured.

Why calculate d when you already have p-values? Here's an example from a dataset I'm currently working with. I am looking at a behavioural intervention conducted in schools, measured using validated psychological questionnaires (producing Likert data). Almost all of my variables show statistically significant change, perhaps unsurprising as I have a large sample (n ≈ 250). However, for some of the variables the Cohen's d is quite minuscule, say 0.12, which indicates that although there is certainly change, it might not be a clinically important change; that matters for the discussion and interpretation of what's going on in the data.

This concept is widely used in psychology and the health sciences, where practitioners (or schools, in your case) need to consider the actual clinical utility of treatments (or whatever they're experimenting with). Cohen's d helps us answer questions about whether it's really worth doing an intervention (regardless of p-values). In the medical sciences they also like to consider the NNT (number needed to treat), and evaluate it in terms of the severity of the condition in question. Have a look at this great resource from @krstoffr: http://rpsychologist.com/d3/cohend/
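For the unrelated-samples case mentioned above, here is a minimal sketch of Cohen's d with a pooled SD (the toy data are made up for illustration):

```python
import numpy as np

def cohens_d_pooled(x, y):
    """Cohen's d for two independent samples, using the pooled SD."""
    nx, ny = len(x), len(y)
    pooled_var = (((nx - 1) * np.var(x, ddof=1) + (ny - 1) * np.var(y, ddof=1))
                  / (nx + ny - 2))
    return (np.mean(y) - np.mean(x)) / np.sqrt(pooled_var)

# Toy example: two groups whose means differ by 1 raw unit
d = cohens_d_pooled([0, 0, 2, 2], [1, 1, 3, 3])
```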
|
12,392
|
What is effect size... and why is it even useful?
|
In fact, p-values are now finally 'out of fashion' as well: http://www.nature.com/news/psychology-journal-bans-p-values-1.17001. Null hypothesis significance testing (NHST) produces little more than a description of your sample size.(*) Any experimental intervention will have some effect, which is to say that the simple null hypothesis of 'no effect' is always false in a strict sense. Therefore, a 'non-significant' test simply means that your sample size wasn't big enough; a 'significant' test means you collected enough data to 'find' something.
The 'effect size' represents an attempt to remedy this, by introducing a measure on the natural scale of the problem. In medicine, where treatments always have some effect (even if it's a placebo effect), the notion of a 'clinically meaningful effect' is introduced to guard against the 50% prior probability that a 'treatment' will be found to have 'a (statistically) significant positive effect' (however minuscule) in an arbitrarily large study.
If I understand the nature of your work, Clarinetist, then at the end of the day, its legitimate aim is to inform actions/interventions that improve education in the schools under your purview. Thus, your setting is a decision-theoretic one, and Bayesian methods are the most appropriate (and uniquely coherent[1]) approach.
Indeed, the best way to understand frequentist methods is as approximations to Bayesian methods. The estimated effect size can be understood as aiming at a measure of centrality for the Bayesian posterior distribution, while the p-value can be understood as aiming to measure one tail of that posterior. Thus, together these two quantities contain some rough gist of the Bayesian posterior that constitutes the natural input to a decision-theoretic outlook on your problem. (Alternatively, a frequentist confidence interval on the effect size can be understood likewise as a wannabe credible interval.)
In the fields of psychology and education, Bayesian methods are actually quite popular. One reason for this is that it is easy to install 'constructs' into Bayesian models, as latent variables. You might like to check out 'the puppy book' by John K. Kruschke, a psychologist. In education (where you have students nested in classrooms, nested in schools, nested in districts, ...), hierarchical modeling is unavoidable. And Bayesian models are great for hierarchical modeling, too. On this account, you might like to check out Gelman & Hill [2].
[1]: Robert, Christian P. The Bayesian Choice: From Decision-Theoretic Foundations to Computational Implementation. 2nd ed. Springer Texts in Statistics. New York: Springer, 2007.
[2]: Gelman, Andrew, and Jennifer Hill. Data Analysis Using Regression and Multilevel/hierarchical Models. Analytical Methods for Social Research. Cambridge ; New York: Cambridge University Press, 2007.
For more on 'coherence' from a not-necessarily-beating-you-on-the-head-with-a-Bayesian-brick perspective, see [3].
[3]: Robins, James, and Larry Wasserman. “Conditioning, Likelihood, and Coherence: A Review of Some Foundational Concepts.” Journal of the American Statistical Association 95, no. 452 (December 1, 2000): 1340–46. doi:10.1080/01621459.2000.10474344.
(*) In [4], Meehl scourges NHST far more elegantly, but no less abrasively, than I do:
Since the null hypothesis is quasi-always false, tables summarizing research in terms of patterns of “significant differences” are little more than complex, causally uninterpretable outcomes of statistical power functions.
[4]: Meehl, Paul E. “Theoretical Risks and Tabular Asterisks: Sir Karl, Sir Ronald, and the Slow Progress of Soft Psychology.” Journal of Consulting and Clinical Psychology 46 (1978): 806–34. http://www3.nd.edu/~ghaeffel/Meehl(1978).pdf
And here's a related quote from Tukey: https://stats.stackexchange.com/a/728/41404
|
12,393
|
What is effect size... and why is it even useful?
|
What you wrote is not a test statistic; it's a measure used to describe how different the two means are. Generally, effect sizes are used to quantify how far from the null hypothesis something is. For example, if you are doing power analysis for the two-sample $t$-test, you might quantify the power as a function of the effect size (for a fixed $n$) you just wrote (which, I think, is called Cohen's d). In other contexts, the effect size might be something else.
It is also not uncommon to report effect sizes using sample quantities, which may coincide with some familiar statistics, such as the Pearson correlation - the true effect size is the underlying correlation coefficient that generated the data, but the sample correlation is also useful information to have sometimes. The purpose is to quantify how far from the null hypothesis the observed data are, in one way or another, rather than just reporting a $p$-value and calling it a day.
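As an illustration of "power as a function of effect size," here is a sketch for a two-sided one-sample z-test, a standard approximation to the t-test; only the Python standard library is used, and the particular numbers are illustrative:

```python
import math
from statistics import NormalDist

def ztest_power(d, n, alpha=0.05):
    """Approximate power of a two-sided one-sample z-test
    at standardized effect size d and sample size n."""
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)
    shift = d * math.sqrt(n)          # noncentrality of the test statistic
    nd = NormalDist()
    return nd.cdf(shift - z_crit) + nd.cdf(-shift - z_crit)

# With d = 0 the "power" is just the false-positive rate alpha;
# with d = 0.5 and n = 32 the power is roughly 0.8.
```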
|
12,394
|
Does a univariate random variable's mean always equal the integral of its quantile function?
|
Let $F$ be the CDF of the random variable $X$, so the inverse CDF can be written $F^{-1}$. In your integral make the substitution $p = F(x)$, $dp = F'(x)dx = f(x)dx$ to obtain
$$\int_0^1F^{-1}(p)dp = \int_{-\infty}^{\infty}x f(x) dx = \mathbb{E}_F[X].$$
This is valid for continuous distributions. Care must be taken for other distributions because an inverse CDF hasn't a unique definition.
Edit
When the variable is not continuous, it does not have a distribution that is absolutely continuous with respect to Lebesgue measure, requiring care in the definition of the inverse CDF and care in computing integrals. Consider, for instance, the case of a discrete distribution. By definition, this is one whose CDF $F$ is a step function with steps of size $\Pr_F(x)$ at each possible value $x$.
This figure shows the CDF of a Bernoulli$(2/3)$ distribution scaled by $2$. That is, the random variable has a probability $1/3$ of equalling $0$ and a probability of $2/3$ of equalling $2$. The heights of the jumps at $0$ and $2$ give their probabilities. The expectation of this variable evidently equals $0\times(1/3)+2\times(2/3)=4/3$.
We could define an "inverse CDF" $F^{-1}$ by requiring
$$F^{-1}(p) = x \text{ if } F(x) \ge p \text{ and } F(x^{-}) \lt p.$$
This means that $F^{-1}$ is also a step function. For any possible value $x$ of the random variable, $F^{-1}$ will attain the value $x$ over an interval of length $\Pr_F(x)$. Therefore its integral is obtained by summing the values $x\Pr_F(x)$, which is just the expectation.
This is the graph of the inverse CDF of the preceding example. The jumps of $1/3$ and $2/3$ in the CDF become horizontal lines of these lengths at heights equal to $0$ and $2$, the values to whose probabilities they correspond. (The Inverse CDF is not defined beyond the interval $[0,1]$.) Its integral is the sum of two rectangles, one of height $0$ and base $1/3$, the other of height $2$ and base $2/3$, totaling $4/3$, as before.
In general, for a mixture of a continuous and a discrete distribution, we need to define the inverse CDF to parallel this construction: at each discrete jump of height $p$ we must form a horizontal line of length $p$ as given by the preceding formula.
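A quick numerical check of the discrete example above ($X = 0$ with probability $1/3$, $X = 2$ with probability $2/3$), integrating the step inverse CDF by the midpoint rule:

```python
import numpy as np

# X = 0 with prob 1/3, X = 2 with prob 2/3, so E[X] = 4/3.
p = (np.arange(1_000_000) + 0.5) / 1_000_000   # midpoints of (0, 1)
inv_cdf = np.where(p <= 1/3, 0.0, 2.0)         # step "inverse CDF"
integral = inv_cdf.mean()                      # ≈ 4/3
```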
|
12,395
|
Does a univariate random variable's mean always equal the integral of its quantile function?
|
An equivalent result is well known in survival analysis: the expected lifetime is $$\int_{t=0}^\infty S(t) \; dt$$ where the survival function is $S(t) = \Pr(T \gt t)$ measured from birth at $t=0$. (It can easily be extended to cover negative values of $t$.)
So we can rewrite this as $$\int_{t=0}^\infty (1-F(t)) \; dt,$$ which equals $$\int_{q=0}^1 F^{-1}(q) \; dq,$$
since both integrals measure the same region between the graph of $F$ and the axes, viewed from the $t$-axis and from the $q$-axis respectively.
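A numerical sketch of the survival identity for $T \sim$ Exponential$(1)$, whose mean is $1$ (the truncation point and grid are arbitrary choices):

```python
import numpy as np

# E[T] = integral of S(t) dt with S(t) = exp(-t); the true mean is 1.
t = np.linspace(0.0, 50.0, 500_001)   # truncate the negligible tail at t = 50
s = np.exp(-t)
# Trapezoidal rule for the integral of the survival function
mean_est = np.sum((s[:-1] + s[1:]) * np.diff(t)) / 2.0
```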
|
12,396
|
Does a univariate random variable's mean always equal the integral of its quantile function?
|
For any real-valued random variable $X$ with cdf $F$ it is well known that $F^{-1}(U)$ has the same law as $X$ when $U$ is uniform on $(0,1)$. Therefore the expectation of $X$, whenever it exists, is the same as the expectation of $F^{-1}(U)$: $$E(X)=E(F^{-1}(U))=\int_0^1 F^{-1}(u)\mathrm{d}u.$$
The representation $X \sim F^{-1}(U)$ holds for a general cdf $F$, taking $F^{-1}$ to be the left-continuous inverse of $F$ when $F$ is not invertible.
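This representation is easy to check by Monte Carlo; a sketch for the Exponential$(2)$ distribution, where $F^{-1}(u) = -\log(1-u)/2$ and $E[X] = 1/2$ (the sample size is chosen for illustration):

```python
import numpy as np

rng = np.random.default_rng(42)
rate = 2.0                          # X ~ Exponential(rate), E[X] = 1/rate

u = rng.uniform(size=1_000_000)
x = -np.log1p(-u) / rate            # F^{-1}(U) has the same law as X

mc_mean = x.mean()                  # Monte Carlo estimate of E[F^{-1}(U)]
```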
|
12,397
|
Does a univariate random variable's mean always equal the integral of its quantile function?
|
We are evaluating:
$$\int_0^1 F^{-1}(p)\,\mathrm{d}p.$$
Let's try with a simple change of variable, $p = F(x)$, $\mathrm{d}p = F'(x)\,\mathrm{d}x$:
$$\int_0^1 F^{-1}(p)\,\mathrm{d}p = \int_{-\infty}^{+\infty} x\,F'(x)\,\mathrm{d}x.$$
And we notice that, by definition of PDF and CDF,
$$F'(x) = f(x)$$
almost everywhere. Thus we have, by definition of expected value:
$$\int_{-\infty}^{+\infty} x\,f(x)\,\mathrm{d}x = \mathbb{E}[X].$$
|
12,398
|
Does a univariate random variable's mean always equal the integral of its quantile function?
|
Note that $F(x)$ is defined as $P(X\le x)$ and is a right-continuous function. $F^{-1}$ is defined as
\begin{equation}
F^{-1}(p)=\min(x|F(x)\ge p).
\end{equation}
The $\min$ makes sense because of the right continuity. Let $U$ be a uniform distribution on $[0, 1]$. You can easily verify that $F^{-1}(U)$ has the same CDF as $X$, which is $F$. This doesn't require $X$ to be continuous. Hence, $E(X)=E(F^{-1}(U))=\int_0^1F^{-1}(p)\mathop{dp}$. The integral is the Riemann–Stieltjes integral. The only assumption we need is the mean of $X$ exists ($E|X|<\infty$).
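As a discrete sanity check of this, take $X = 0$ with probability $1/3$ and $X = 2$ with probability $2/3$: sampling $F^{-1}(U)$ reproduces those frequencies (a sketch; the sample size is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(7)

# F^{-1}(p) = min{x : F(x) >= p} for X with P(X=0) = 1/3, P(X=2) = 2/3
def quantile(p):
    return np.where(p <= 1/3, 0.0, 2.0)

x = quantile(rng.uniform(size=1_000_000))
freq_two = np.mean(x == 2.0)        # should be close to 2/3
```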
|
12,399
|
The proof of equivalent formulas of ridge regression
|
The classic Ridge Regression (Tikhonov Regularization) is given by:
$$ \arg \min_{x} \frac{1}{2} {\left\| x - y \right\|}_{2}^{2} + \lambda {\left\| x \right\|}_{2}^{2} $$
The claim above is that the following problem is equivalent:
$$\begin{align*}
\arg \min_{x} \quad & \frac{1}{2} {\left\| x - y \right\|}_{2}^{2} \\
\text{subject to} \quad & {\left\| x \right\|}_{2}^{2} \leq t
\end{align*}$$
Let's define $ \hat{x} $ as the optimal solution of the first problem and $ \tilde{x} $ as the optimal solution of the second problem.
The claim of equivalence means that $ \forall t \geq 0, \: \exists \lambda \geq 0 : \hat{x} = \tilde{x} $.
Namely, you can always find a pair of $ t $ and $ \lambda \geq 0 $ such that the solutions of the two problems are the same.
How could we find a pair?
Well, by solving the problems and looking at the properties of the solution.
Both problems are convex and smooth, which makes things simpler.
The solution for the first problem is given at the point the gradient vanishes which means:
$$ \hat{x} - y + 2 \lambda \hat{x} = 0 $$
The KKT conditions of the second problem state:
$$ \tilde{x} - y + 2 \mu \tilde{x} = 0 $$
and
$$ \mu \left( {\left\| \tilde{x} \right\|}_{2}^{2} - t \right) = 0 $$
The last equation suggests that either $ \mu = 0 $ or $ {\left\| \tilde{x} \right\|}_{2}^{2} = t $.
Notice that the two stationarity equations have exactly the same form.
Namely, if $ \hat{x} = \tilde{x} $ and $ \mu = \lambda $, both equations hold.
So in the case $ {\left\| y \right\|}_{2}^{2} \leq t $ one must set $ \mu = 0 $; that is, for $ t $ large enough, the two problems are equivalent only with $ \lambda = 0 $.
In the other case one should find $ \mu $ such that
$$ {y}^{T} \left( I + 2 \mu I \right)^{-1} \left( I + 2 \mu I \right)^{-1} y = \frac{ {\left\| y \right\|}_{2}^{2} }{ \left( 1 + 2 \mu \right)^{2} } = t. $$
This is just the active-constraint condition $ {\left\| \tilde{x} \right\|}_{2}^{2} = t $.
Once you find that $ \mu $, the two solutions coincide.
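To make this concrete, here is a small numerical sketch (NumPy, with an arbitrary made-up $ y $ and a $ t $ chosen so that the constraint is active): the constrained solution is the projection of $ y $ onto the ball of radius $ \sqrt{t} $, and the $ \mu $ solving $ {\left\| y \right\|}_{2}^{2} / (1 + 2\mu)^2 = t $ makes the penalized solution coincide with it.

```python
import numpy as np

y = np.array([3.0, -1.0, 2.0])  # arbitrary data vector (hypothetical example)
t = 1.0                         # constraint level; ||y||^2 = 14 > t, so active

# Constrained problem: minimize (1/2)||x - y||^2 s.t. ||x||^2 <= t.
# With the constraint active, the solution is the projection onto the ball.
x_constrained = y * np.sqrt(t) / np.linalg.norm(y)

# Solve ||y||^2 / (1 + 2*mu)^2 = t for the multiplier mu.
mu = (np.linalg.norm(y) / np.sqrt(t) - 1.0) / 2.0

# Penalized problem with lambda = mu: stationarity x - y + 2*lambda*x = 0,
# i.e. x = y / (1 + 2*lambda).
x_penalized = y / (1.0 + 2.0 * mu)

print(np.allclose(x_constrained, x_penalized))  # True: the solutions coincide
```

With this pairing of $ t $ and $ \lambda = \mu $ the two minimizers agree to machine precision, which is exactly the claimed equivalence.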
Regarding the $ {L}_{1} $ (LASSO) case, the same idea works.
The only difference is that there is no closed-form solution, hence deriving the connection is trickier.
Have a look at my answer at StackExchange Cross Validated Q291962 and StackExchange Signal Processing Q21730 - Significance of $ \lambda $ in Basis Pursuit.
Remark
What's actually happening?
In both problems, $ x $ tries to be as close as possible to $ y $.
In the first case, $ x = y $ makes the first term (the $ {L}_{2} $ distance) vanish, and in the second case it makes the objective function vanish.
The difference is that in the first case one must also balance the $ {L}_{2} $ norm of $ x $: as $ \lambda $ gets larger, the balance forces $ x $ to be smaller.
In the second case there is a wall: you bring $ x $ closer and closer to $ y $ until you hit the wall, which is the constraint on its norm (set by $ t $).
If the wall is far enough away (a high value of $ t $, where "far enough" depends on the norm of $ y $), it has no effect, just as $ \lambda $ matters only once it is large enough relative to the norm of $ y $.
The exact connection is by the Lagrangian stated above.
Resources
I found this paper today (03/04/2019):
Approximation Hardness for A Class of Sparse Optimization Problems.
|
12,400
|
The proof of equivalent formulas of ridge regression
|
A less mathematically rigorous, but possibly more intuitive, approach to understanding what is going on is to start with the constraint version (equation 3.42 in the question) and solve it using the method of Lagrange multipliers (https://en.wikipedia.org/wiki/Lagrange_multiplier or your favorite multivariable calculus text). Just remember that in calculus $x$ is the vector of variables, but in our case $x$ is constant and $\beta$ is the variable vector. Once you apply the Lagrange multiplier technique, you end up with the first equation (3.41) (after throwing away the extra $-\lambda t$, which is constant with respect to the minimization and can be ignored).
This also shows that this works for lasso and other constraints.
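Written out explicitly (a sketch in generic ridge notation with a design matrix $X$; the book's equations 3.41/3.42 have this structure), the Lagrangian of the constrained problem is
$$ L(\beta, \lambda) = \frac{1}{2} {\left\| y - X \beta \right\|}_{2}^{2} + \lambda \left( {\left\| \beta \right\|}_{2}^{2} - t \right), $$
and for fixed $\lambda \geq 0$, minimizing over $\beta$ gives the same minimizer as the penalized form, since the $-\lambda t$ term does not depend on $\beta$.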